1. Introduction
Internet of Things (IoT) and Big Data technologies are becoming increasingly significant in national defense and the military, as well as in civilian use [1,2,3,4]. Widely deployed wireless sensor networks (WSNs) have collected enormous amounts of data from industry, urban areas, and the natural environment over recent decades. With increasingly advanced IoT and Big Data technologies, WSNs are becoming larger and more complex. In recent years, the use of collaborative UAV–wireless sensor network (UAV-WSN) systems has become increasingly prevalent [5]. As shown in Figure 1, these systems integrate data from the ground (WSN), air (UAV), space (satellite), and the internet. They have been shown to be effective in large-scale surveillance, offering high mobility, high accessibility, and short emergency response times [6]. Well-deployed WSNs are the basis of efficient data collection in the region of interest and of subsequent tasks.
In common application scenarios, sensor nodes are often placed at random in the working space. Such a strategy leads to low coverage and lower monitoring quality [7,8]. An appropriate node deployment strategy can improve WSNs’ coverage and maximize their energy efficiency [9,10]. In the field of WSN deployment, the particle swarm optimization (PSO) algorithm is frequently used and has proved to be a valid method. In [11], Zhang et al. proposed an artificial immune particle swarm optimization algorithm that combines the strengths of PSO and the artificial immune mechanism to solve the k-coverage problem with good convergence. Abba et al. formulated the mobile sensing problem as a non-linear optimization problem and proposed a bacterial foraging optimization that allows mobile sinks to move in a self-organized and self-adaptive way to improve network coverage [12]. In [13], the fruit fly optimization algorithm was introduced to eliminate overlapping nodes while saving the nodes’ energy. Qasim et al. used a modified ant colony optimization algorithm to achieve good coverage at low computational cost [14]. In [15], a distributed hybrid artificial fish swarm algorithm that takes water flow into consideration was proposed to improve network coverage efficacy. However, for large-scale WSN deployment, these algorithms struggle to escape local minima as the optimization problem becomes more complicated.
Another approach for deploying WSNs is the virtual force algorithm (VFA). VFA is widely adaptable, especially in scenarios with sparse node density in the monitoring area. In view of the general virtual force algorithm’s shortcomings, such as uneven node distribution and excessive overlap of coverage areas, Liu et al. [16] proposed a virtual molecular force algorithm to maximize network coverage. Qi et al. proposed a novel mobile-coverage scheme to cover the target region with a minimal number of mobile sensor nodes [17]. Inspired by the dusty plasma crystallization phenomenon, Tang et al. proposed a 3D deployment algorithm and investigated the parametric effects of the Debye length on network coverage [18]. In [19], Liu et al. proposed a distributed node deployment algorithm based on virtual forces to improve network coverage, reduce node energy consumption, and balance node residual energy. In [20], Deng et al. proposed an improved strategy that incorporates external central forces to aid hexagonal topology development. When the number of nodes is small, the generic virtual force algorithm can cover the monitoring region uniformly. When the number of nodes is large, however, the peripheral nodes squeeze the central nodes, resulting in an uneven distribution of nodes across the monitoring area.
Graphs are frequently utilized in computer science for social network analysis, computer networks, transportation networks, and many other applications. Layout algorithms are used to help readers comprehend the information in graphs [21]. To form an aesthetically pleasing graph, layout algorithms arrange the vertices so that all edges are of roughly equal length and there are as few crossing edges as possible. The deployment of a WSN resembles forming a graph with a minimum number of sensors to cover the maximum area. The Fruchterman–Reingold algorithm is one of the most well-known graph layout algorithms. It specifies the optimal distance between vertices according to their edge connections. Taking each sensor as a vertex of the graph, the Fruchterman–Reingold algorithm can be used as a deployment algorithm for WSNs. The large-scale WSN topology can be controlled through edge connections, and in this way the uneven distribution of central and peripheral nodes can be eliminated.
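For reference, the classic Fruchterman–Reingold model (the standard textbook formulation, not the paper's modified version) derives an ideal edge length k from the layout area and the vertex count, applies an attractive force between connected vertices, and a repulsive force between all pairs. A minimal sketch (class and constant names are our own):

```java
// Classic Fruchterman-Reingold force functions (standard formulation).
// k is the ideal inter-vertex distance derived from area and vertex count.
public class FRForces {
    // C is a tuning constant; 1.0 is a common default.
    static double optimalDistance(double area, int vertexCount, double C) {
        return C * Math.sqrt(area / vertexCount);
    }

    // Attractive force magnitude between two connected vertices at distance d.
    static double attraction(double d, double k) {
        return d * d / k;
    }

    // Repulsive force magnitude between any two vertices at distance d.
    static double repulsion(double d, double k) {
        return k * k / d;
    }
}
```

Attraction grows and repulsion shrinks with distance, so connected vertices equilibrate near d = k, which is what makes the model attractive for spacing sensor nodes.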
In this work, we modify the Fruchterman–Reingold algorithm to suit the deployment of WSNs. To make the network form a regular hexagonal topology, we propose an edge allocation strategy that takes energy consumption into account. On this basis, we develop the Fruchterman–Reingold hexagon (FR-HEX) algorithm. We introduce three evaluation metrics: average moving distance, pair-correlation diversion (PCD), and system coverage rate, and adopt them to comprehensively evaluate 50 simulations. We further explore the performance of the FR-HEX algorithm in situations such as obstacles appearing in the target area and node failures.
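The edge allocation strategy itself (Algorithm 1) is not reproduced in this excerpt. Purely as an illustrative assumption, a hexagonal allocation might connect each node to at most six nearest neighbors, ignoring the energy term the actual algorithm also considers:

```java
import java.util.*;

// Hypothetical sketch of hexagonal edge allocation: connect each node to at
// most its six nearest neighbors. This is an illustrative assumption only;
// the paper's Algorithm 1 additionally accounts for energy consumption.
public class EdgeAllocation {
    static List<int[]> allocate(double[][] pos) {
        List<int[]> edges = new ArrayList<>();
        int n = pos.length;
        for (int i = 0; i < n; i++) {
            final int self = i;
            Integer[] order = new Integer[n];
            for (int j = 0; j < n; j++) order[j] = j;
            // Sort candidate neighbors by distance to node i.
            Arrays.sort(order, Comparator.comparingDouble(j -> dist(pos[self], pos[j])));
            int added = 0;
            for (int j : order) {
                if (j == i) continue;
                if (added == 6) break;
                if (i < j) edges.add(new int[]{i, j}); // dedupe undirected edges
                added++;
            }
        }
        return edges;
    }

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }
}
```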
4. Simulation Analysis of FR-HEX Algorithm
We mainly focus on the performance of the algorithm in two-dimensional large-scale node deployment. We implemented the FR-HEX algorithm’s main iteration process (Equations (4)–(10)) and Algorithm 1 in Java. The Java Development Kit version is openjdk-16, and the CPU is an Intel(R) Core(TM) i5-9500. In the following simulations, we randomly generate N = 2000 coordinates (denoting the sensor nodes’ positions) in a circular area of radius a, so each node initially lies within distance a of the center. We input these random node coordinates into the Java program to determine the edges between nodes by Algorithm 1, after which we input the node coordinates and the number of iterations into the iterative program (Equations (4)–(10) implemented in Java). In each iteration, the attractive, repulsive, and gravitational forces on each node are computed to obtain its coordinates at the next iteration. In this way, the FR-HEX algorithm is applied to adjust the sensor nodes’ coordinates. The coordinates produced by the iteration are saved every 10 steps for further numerical analysis.
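The structure of one iteration can be sketched as follows. This assumes the standard Fruchterman–Reingold update plus a gravitational pull toward the center; the paper's actual Equations (4)–(10) are not reproduced in this excerpt and may differ in detail:

```java
// Minimal sketch of one FR-style iteration: accumulate repulsion from all
// nodes, attraction along allocated edges, and a gravitational pull toward
// the origin, then apply a capped displacement. Illustrative only; the
// paper's actual update rule (Equations (4)-(10)) is not shown here.
public class IterationStep {
    static void step(double[][] pos, int[][] edges, double k,
                     double gravity, double maxStep) {
        int n = pos.length;
        double[][] disp = new double[n][2];
        // Repulsion between every pair of nodes.
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (i == j) continue;
                double dx = pos[i][0] - pos[j][0], dy = pos[i][1] - pos[j][1];
                double d = Math.max(Math.hypot(dx, dy), 1e-9);
                double f = k * k / d;            // repulsive magnitude
                disp[i][0] += dx / d * f;
                disp[i][1] += dy / d * f;
            }
        }
        // Attraction along allocated edges.
        for (int[] e : edges) {
            double dx = pos[e[0]][0] - pos[e[1]][0], dy = pos[e[0]][1] - pos[e[1]][1];
            double d = Math.max(Math.hypot(dx, dy), 1e-9);
            double f = d * d / k;                // attractive magnitude
            disp[e[0]][0] -= dx / d * f;  disp[e[0]][1] -= dy / d * f;
            disp[e[1]][0] += dx / d * f;  disp[e[1]][1] += dy / d * f;
        }
        // Gravity toward the center, then a displacement capped at maxStep.
        for (int i = 0; i < n; i++) {
            disp[i][0] -= gravity * pos[i][0];
            disp[i][1] -= gravity * pos[i][1];
            double len = Math.max(Math.hypot(disp[i][0], disp[i][1]), 1e-9);
            double s = Math.min(len, maxStep) / len;
            pos[i][0] += disp[i][0] * s;
            pos[i][1] += disp[i][1] * s;
        }
    }
}
```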
Figure 5 shows how the network topology changes over the iterations of the FR-HEX algorithm. A Voronoi diagram is used to visualize the topology of the network, where each point represents one node. We can observe that when the FR-HEX algorithm deploys nodes at large scale, the positions of the nodes in the central part of the network are evened out first, and then the distribution of the peripheral nodes gradually becomes uniform. At roughly the 3000th step, the network’s topology becomes rather neat, with a structure similar to that of a regular hexagon, and there is no noticeable uneven distribution between central and peripheral nodes.
As shown in Figure 6, compared with methods designed for large-scale WSNs, such as the virtual force algorithm inspired by the Lennard-Jones potential (VFA_LJ) and the virtual force algorithm inspired by dusty plasma crystallization (VFA_DP) [25], the topology produced by the FR-HEX algorithm is noticeably more regular at both scales (N = 1000 and N = 2000). VFA_LJ always leaves some regions with unevenly placed nodes. VFA_DP results in an uneven distribution between the central and peripheral parts, that is, nodes are distributed tightly in the central part and loosely in the periphery. From Table 1, we can see that at the 3000th step FR-HEX has much smaller PCD values of 0.045 and 0.023. This shows numerically that the network deployed by the FR-HEX algorithm has a topology closer to a regular hexagon. We can infer that the uneven topology produced by VFA_LJ and VFA_DP may worsen as the deployment scale grows, since their PCD values rise when N changes from 1000 to 2000; this also limits their application scenarios. The FR-HEX algorithm, however, is almost unaffected by the scale change, with only a slight change in its PCD value.
To further explore the performance of the FR-HEX algorithm in deploying large-scale nodes, we performed 50 simulations. In each simulation, we again randomly generated 2000 coordinates in a circular area of radius a. For the 50 simulation experiments (6000 iteration steps each), the average run time was 113.357 s with a standard deviation of 2.081 s. We employed the average moving distance, the pair-correlation diversion (PCD), and the system coverage rate (C) to comprehensively analyze the results and evaluate the FR-HEX algorithm.
For large-scale WSN deployment, the complexity and execution time of the algorithm may cause problems in real-world applications. The FR-HEX algorithm is computationally efficient: for each node, it only needs to calculate the repulsive forces from the other nodes and the attractive forces from the nodes connected to it, and finally add the gravitational term. We completed 50 simulations for each node-count scale. As Table 2 shows, the execution time grows roughly exponentially as the number of nodes goes from 500 to 2000. However, for the case of 2000 nodes, a relatively large-scale WSN, the average execution time for 6000 iterations is only 113.357 s with a standard deviation of 2.081 s. Even when the number of nodes rises to 3000, the average execution time per step is only 0.066 s, which is acceptable. This shows that, in practice, the FR-HEX algorithm can deploy wireless sensor networks dynamically in a stable and efficient manner.
Figure 7 shows the average moving distance, averaged over the 50 cases, for every 10 steps of the 6000-step runs. Since the nodes are randomly distributed within the deployment range at the start of the process, the positions of edge-connected nodes first need to be adjusted by the attractive and repulsive forces, and the gravitational function also drives the nodes to converge toward the center. As a result, the movement of nodes is intense during the first 200 steps, with the average moving distance rising to 0.01. The per-interval moving distance is bounded, since positions are saved every 10 steps. When the edge-connected nodes have reached relatively stable locations, with attraction, repulsion, and gravitation roughly balanced, the nodes’ movement slows down gradually, and the average moving distance drops to 0.002 at the 1000th step. After the nodes reach a rather steady spacing under the simultaneous action of the three forces, at roughly the 3000th step, they no longer move significantly and the average moving distance is close to 0. From the trend of the curves, it can be inferred that all cases show a reasonably consistent change in average moving distance, with no dramatic fluctuations.
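The exact formula for the average moving distance is not reproduced in this excerpt; a natural reading is the mean per-node Euclidean displacement between two consecutive saved snapshots (positions recorded every 10 steps). A minimal sketch under that assumption:

```java
// Average moving distance between two saved snapshots: the mean Euclidean
// displacement per node. Assumes snapshots are taken every 10 steps, as in
// the paper's setup; the paper's exact definition may differ.
public class MovingDistance {
    static double average(double[][] prev, double[][] curr) {
        double sum = 0.0;
        for (int i = 0; i < prev.length; i++) {
            sum += Math.hypot(curr[i][0] - prev[i][0], curr[i][1] - prev[i][1]);
        }
        return sum / prev.length;
    }
}
```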
In Figure 8, a heat map shows the distribution of PCD values over time for the 50 cases; the closer a cell’s color is to dark red, the more cases have PCD values in that interval. A trend similar to that in Figure 7 can be observed. In the first 200 steps, when nodes adjust their distances based on edge connections, the PCD values change dramatically. At the 300th step, the PCD values of all cases are larger than 0.9; then, by the 600th step, they rapidly drop to 0.4. Between steps 1000 and 2500, about 50% of the cases reach a PCD value below 0.05. It can be inferred from Figure 7 that the nodes are still moving at this stage and the network topology is still adjusting. A few cases show fluctuations in PCD value due to the movement of a few nodes while the network structure is adjusting, but such fluctuations are corrected quickly. At step 4500, all cases have PCD values within 0.2, and by this point the average moving distance has also dropped close to 0. Thereafter, the node positions barely fluctuate. The PCD values of all 50 cases are below 0.05 at step 5200; the algorithm has started to converge, and the topology has stabilized.
The PCD value represents, on the one hand, the resemblance between the WSN’s topology and a regular hexagonal topology and, on the other, the algorithm’s convergence. In practical applications, convergence can be judged from the change in PCD value, and the algorithm can be terminated after it converges to reduce the computational cost.
If the average distance between well-deployed nodes is known, an appropriate sensing radius can be set based on it, allowing the fullest use of the sensors’ hardware. All force functions in FR-HEX, including the attractive, repulsive, and gravitational functions, are designed relative to k. In the original Fruchterman–Reingold algorithm, the attractive and repulsive forces keep the distance between nodes at k. However, once the gravitational function is added and the nodes are allocated hexagonal edges, the forces between nodes become more complicated, and the inter-node distance no longer settles at k.
To determine the relationship between the average inter-node distance and k, we construct networks with 100, 500, 1000, 1500, and 2000 nodes for two deployment ranges in our simulation experiments. Table 3 shows the relationship between the average distance and the value of k.
Changing the deployment range a does not affect the ratio of the average distance to k, while increasing the number of nodes increases the average inter-node distance. In the FR-HEX algorithm, the attractive and repulsive functions play the leading role in controlling inter-node distance. Changing the deployment range a is equivalent to scaling the attractive and repulsive terms by the same multiple simultaneously, so it does not significantly affect the ratio of the average distance to k. When the number of nodes increases, the number of repulsion terms each node accumulates increases, while the number of accumulated attraction terms does not rise appreciably. The net repulsive force therefore grows, and the ratio of the average distance to k becomes larger.
As shown in Figure 9a, to achieve the densest coverage, the distance d between neighboring nodes and the sensing radius r of the sensor should satisfy d = √3·r [25]. According to the statistics in the table, for large-scale deployment the average inter-node distance settles at a fixed multiple of k. To achieve dense coverage, the sensing radius can therefore be set to r = d/√3.
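The triangular-lattice relation above can be written as a tiny helper (class and method names are our own, used only for illustration):

```java
// For the densest hexagonal (triangular-lattice) coverage, neighboring nodes
// sit at distance sqrt(3) * r, so the sensing radius follows from the
// measured average neighbor distance d as r = d / sqrt(3).
public class SensingRadius {
    static double fromNeighborDistance(double d) {
        return d / Math.sqrt(3.0);
    }
}
```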
According to the conclusion drawn above, we set the sensing radius accordingly. Similar to the presentation in Figure 7, Figure 9b shows the mean coverage rate C of the 50 cases, sampled every 10 steps. The nodes move violently in the first 1000 steps, and the C values drop noticeably from 0.6 to 0.5 in the first 200 steps: the distances between nodes become inconsistent as positions are modified according to the edge connections, and inconsistent distances reduce coverage. After the inter-node distances are adjusted, at about the 2000th step, the network reaches a high coverage rate. The C values of the 50 cases are steady at around 4000 steps and begin to converge at the 5000th step, with C reaching about 94%, following the same trend as the average moving distance (Figure 7) and the PCD (Figure 8).
Figure 10 shows the system coverage rate of the 50 cases at the 6000th step. The C values of all 50 cases reach about 94%, a relatively high coverage rate. When the algorithm converges, the system coverage rate varies little across the 50 simulations, with all C values within 0.940–0.945, demonstrating the stability of the FR-HEX algorithm to some degree.
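The paper's exact definition of the system coverage rate C is not reproduced in this excerpt. A common way to estimate it is to sample the target region on a grid and count the fraction of sample points within sensing range of at least one node; a hypothetical sketch over a square region:

```java
// Grid-sampled estimate of the coverage rate: the fraction of sample points
// in the square [-a, a]^2 lying within sensing radius r of some node.
// Illustrative only; the paper's exact definition of C may differ (e.g., it
// may be computed over a circular target region).
public class CoverageRate {
    static double estimate(double[][] nodes, double r, double a, int grid) {
        int covered = 0, total = 0;
        for (int ix = 0; ix < grid; ix++) {
            for (int iy = 0; iy < grid; iy++) {
                // Sample the center of each grid cell.
                double x = -a + 2 * a * (ix + 0.5) / grid;
                double y = -a + 2 * a * (iy + 0.5) / grid;
                total++;
                for (double[] p : nodes) {
                    if (Math.hypot(x - p[0], y - p[1]) <= r) { covered++; break; }
                }
            }
        }
        return (double) covered / total;
    }
}
```

A finer grid trades execution time for a more accurate estimate; for monitoring convergence, a coarse grid evaluated every few hundred steps is usually sufficient.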
The FR-HEX algorithm can effectively cover the target region by generating a hexagonal topology. The nodes connected by edges are relatively close together, which is ideal for covering the whole system, achieving high coverage of about 94%, and making efficient use of hardware resources. In practical applications, the convergence of the algorithm can be judged from the change in PCD value, and the algorithm can be applied flexibly according to the trend of the system coverage rate. For cases where the monitoring accuracy requirement is relatively low, the algorithm can be terminated at an early stage, such as 1000 steps, which still achieves a C value of about 75%. For higher accuracy requirements, terminating the algorithm at around 5000 steps ensures a C value of about 94%.