
Mobile Robot SLAM Methods Improved for Adapting to Search and Rescue Environments

Hongling Wang (School of Control Science and Engineering, Shandong University, Shandong, China; [email protected])
Chengjin Zhang* and Yong Song (School of Mechanical, Electrical and Information Engineering, Shandong University, Shandong, China; [email protected], [email protected])
Bao Pang (School of Control Science and Engineering, Shandong University, Shandong, China; [email protected])

Abstract—Robotic simultaneous localization and mapping (SLAM) confronts extreme challenges in the collapsed, cluttered, GPS-unreliable environments of search and rescue (SAR). Our improved SLAM methods address the requirements of a mobile robot performing SAR, which comprise significant-object identification, loop-closure perception, exploration area coverage, and related performances. We developed efficient SLAM methods adapted to cluttered SAR environments, as follows: 3-D mapping reconstruction extracts the features of significant objects; a hierarchical perception-driven approach dramatically improves loop-closure performance; a local wireless Ad hoc network deployment addresses the signal absence and unreliability of the harsh environment; and a robot arm perceives and identifies the objects of interest. The fusion of these strategies forms the SAR SLAM methods that adapt to cluttered and harsh conditions. In this paper, we evaluate the improved SAR SLAM qualitatively on available data measurements. The simulation and experimental results demonstrate that the improved SLAM methods adapt well to SAR environments.

Index Terms—Robot improved SLAM, search and rescue environment, feature extraction, hierarchical analysis, fusion of algorithms and information

I. INTRODUCTION

Current research demonstrates that robotic SLAM confronts the challenge of exploring and mapping a whole area without recourse to GPS, while the robot simultaneously performs autonomous localization and mapping of the object structures. A sensor-based autonomous sub-track controller controls the swingable sub-tracks of mobile robots in SAR environments, which are used to negotiate steps and uneven terrain. Laser sensors on board a mobile robot provide terrain information to the controller for position control of the sub-tracks [1]. ICP-based algorithms create a corresponding 3-D map of an environment by associating 3-D range information from different robot locations. Occupancy-based grid mapping creates a volumetric representation, and an occupancy grid map is achieved after executing local ICP scan registration [2]. However, the map is inconsistent at the global level due to inconsistencies that arise from the accumulation of small errors in the ICP scan matching procedure.

In order to acquire globally consistent maps, we need to know the points in time at which the robot traversed the same sub-local segments. Graph-based approaches (geometric or polygonal mapping) utilize geometric primitives such as points, lines and arcs to represent obstacles in data association. The traditional SLAM problem considers a mobile robot incrementally building a consistent map while exploring an environment, and simultaneously determining its location within this map [3-4]. Mobile robotic SLAM confronts extreme challenges in the aspects of SAR object identification and marking, loop closure, and area coverage due to the collapsed, cluttered, GPS-unreliable nature of SAR post-disaster environments. Adapting to cluttered SAR environments, we developed improved SLAM methods for mobile robot exploration of SAR areas. Fig. 1 (a) and (b) display a collapsed SAR post-disaster site scene, in which a crawler robot is performing SLAM tasks.

Fig. 1. A LUKER joint type crawler mobile robot explores a collapsed scenario. (a) The Shandong Pingyi gypsum mine collapse scene of Dec. 25; the search and rescue team is at the scene of the disaster. (b) The crawler mobile robot is performing a search and rescue task.

In this paper, we analyze the performances of improved SLAM methods that adapt to cluttered SAR post-disaster scenarios. On the basis of conventional SLAM, we propose improved approaches for SAR unstructured environments. After introducing current research on SLAM adapted to unstructured scenarios, we describe the SLAM approaches aimed at SAR environments in Section II; the corresponding algorithms are derived in Section III. We analyze the performances by quantitative and qualitative criteria of the SLAM process in Section IV. Simulations are presented in Section V and experiments are conducted in Section VI, followed by the results and future work in the last section.



II. SLAM METHODS IMPROVED AIMING AT SAR SCENARIOS

A. 3-D mapping reconstruction approaches

A 2-D map is simpler and lower-cost, suited to harsh SAR environments; to identify objects of interest, the improved SLAM matches 2-D laser scans with RGB-D visual information of the SAR objects. This method can improve identification mapping while simultaneously keeping the 2-D global topological map consistent. A feature-less monocular visual SLAM algorithm tends to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3-D environment is reconstructed in real time as a pose graph of keyframes with associated semi-dense depth maps [5].

In our approach, the LRF multi-sensor visual channels locate point features, constrain their correspondences, and incorporate them into scan matching to build dense 3-D maps. According to S. Li et al. [6], we isolate the mobile robot's disturbing motion to improve the quality of object images and the sample efficiency; one way is to adopt a gyro-stabilized visual system to separate the imaging load from the disturbing motions. The depth information derives from converting the image's RGB color to Hue values, which can be used to estimate the relative distance between a mobile robot and the surrounding targets in the environment. According to the J-G (Joblove and Greenberg) equation, the transposed Hue value is expressed as eqs. (1)-(3):

$$
H' =
\begin{cases}
0^\circ, & M = m \;(R = G = B,\ C = 0);\\
\dfrac{G-B}{C} \bmod 6, & M = R,\ C = R - m;\\
\dfrac{B-R}{C} + 2, & M = G,\ C = G - m;\\
\dfrac{R-G}{C} + 4, & M = B,\ C = B - m;
\end{cases}
\tag{1}
$$

where M = max(R, G, B), m = min(R, G, B), and C = M − m.

$$
H = 60^\circ \times H' \tag{2}
$$

where H is the Hue value of an RGB image pixel. When R = G = B, that is, C = M − m = 0, the pixel is a neutral color and the transposed Hue value is assigned 0°.

$$
d = 0.0032H^2 - 1.9492H + 359.26 \tag{3}
$$

where d is the depth distance estimated from the image Hue values, and H is the Hue value of the object image.
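As a minimal illustration of eqs. (1)-(3), the following Python sketch converts an RGB pixel to a Hue value and applies the quadratic depth model; the function names are ours, and the coefficients are taken directly from eq. (3).

```python
def hue_from_rgb(r: float, g: float, b: float) -> float:
    """Transposed Hue H' per eq. (1), scaled to degrees per eq. (2)."""
    M, m = max(r, g, b), min(r, g, b)
    C = M - m
    if C == 0:                     # neutral color: R = G = B
        return 0.0
    if M == r:
        h_prime = ((g - b) / C) % 6
    elif M == g:
        h_prime = (b - r) / C + 2
    else:                          # M == b
        h_prime = (r - g) / C + 4
    return 60.0 * h_prime          # eq. (2)

def depth_from_hue(h: float) -> float:
    """Estimated depth d in cm per the quadratic model of eq. (3)."""
    return 0.0032 * h**2 - 1.9492 * h + 359.26

# Cf. sample 2 of Table III: RGB (0, 60, 255) -> H = 225.88, d = 82.24 cm
h = hue_from_rgb(0, 60, 255)
print(f"Hue = {h:.2f} deg, estimated depth = {depth_from_hue(h):.2f} cm")
```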
Using OpenCV to manipulate images and features, a sharpening spatial filter reconstructs the environmental scenario to highlight characteristic features of the searched object of interest. The sharpening filter matrix is established as formula (4):

$$
M_{sh} =
\begin{bmatrix}
0 & -1 & 0\\
-1 & 5 & -1\\
0 & -1 & 0
\end{bmatrix}
\tag{4}
$$

The sharpening process depends on a kernel matrix, which defines an image filter over a 3 × 3 pixel neighbourhood. The output of such a neighbourhood sharpening filter is composed as follows: 1) the two horizontal neighbour pixels are each multiplied by −1; 2) the two vertical neighbour pixels are each multiplied by −1; 3) the current pixel of interest is multiplied by 5. This sharpening filtering process is shown in Table I.

TABLE I
THE IMAGE PROCESSING OF A NEIGHBOURHOOD SHARPENING FILTER

| 0 | pixel value × (−1) | 0 |
| pixel value × (−1) | pixel value × 5 | pixel value × (−1) |
| 0 | pixel value × (−1) | 0 |
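Since the filter is applied with OpenCV, a short sketch of the kernel of eq. (4) applied via cv2.filter2D may be helpful; the image file names are placeholders.

```python
import cv2
import numpy as np

# Sharpening kernel M_sh from eq. (4) / Table I
m_sh = np.array([[ 0, -1,  0],
                 [-1,  5, -1],
                 [ 0, -1,  0]], dtype=np.float32)

img = cv2.imread("sar_object.png")        # placeholder input image
# filter2D convolves each channel with the 3x3 neighbourhood kernel;
# ddepth = -1 keeps the output depth equal to the input depth.
sharpened = cv2.filter2D(img, -1, m_sh)
cv2.imwrite("sar_object_sharpened.png", sharpened)
```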

B. Hierarchical perception-driven exploration

Hierarchical perception-driven SLAM aims to reconstruct a globally consistent map integrated over local grid maps, improving loop closure and exploration area coverage. Conservative global convergence can be achieved using covariance intersection algorithms to estimate connections; these algorithms result in a network of optimal SLAM sub-maps connected by conservative links. The data from the sensory devices are processed wirelessly to map the surroundings of the robot on a computer [4]. The mobile robot builds a topological global map of the environment by integrating over a graph of metric sub-maps connected by relative transformations. The model criterion layer is generated according to the evaluation criteria for mapping, and is composed of applicability C1, accuracy C2, computational cost C3, and interactivity C4. Map model representations at the scheme layer include a topological map S1, a metric map S2, and a hierarchical map S3. A representation of the scheme criteria is shown in Table II.

TABLE II
NOTATION OF SCHEMES AND CRITERIA

| criteria / scheme | applicability C1 | accuracy C2 | cost C3 | interactivity C4 |
| S1 | $\omega_{C1}^{S1}$ | $\omega_{C2}^{S1}$ | $\omega_{C3}^{S1}$ | $\omega_{C4}^{S1}$ |
| S2 | $\omega_{C1}^{S2}$ | $\omega_{C2}^{S2}$ | $\omega_{C3}^{S2}$ | $\omega_{C4}^{S2}$ |
| S3 | $\omega_{C1}^{S3}$ | $\omega_{C2}^{S3}$ | $\omega_{C3}^{S3}$ | $\omega_{C4}^{S3}$ |

The weight factors ω_s between the schemes can be defined as eq. (5):

$$
\omega_s =
\begin{bmatrix} \omega_{s1}\\ \omega_{s2}\\ \omega_{s3} \end{bmatrix}
=
\begin{bmatrix}
\omega_{c1}^{s1} & \omega_{c2}^{s1} & \omega_{c3}^{s1} & \omega_{c4}^{s1}\\
\omega_{c1}^{s2} & \omega_{c2}^{s2} & \omega_{c3}^{s2} & \omega_{c4}^{s2}\\
\omega_{c1}^{s3} & \omega_{c2}^{s3} & \omega_{c3}^{s3} & \omega_{c4}^{s3}
\end{bmatrix}
\begin{bmatrix} \omega_{c1}\\ \omega_{c2}\\ \omega_{c3}\\ \omega_{c4} \end{bmatrix}
\tag{5}
$$

The scheme layer is represented by $\omega_S = [\,\omega_{s1}\ \omega_{s2}\ \omega_{s3}\,]^T$, and the environmental partitioning criterion is $\omega_c = [\,\omega_{c1}\ \omega_{c2}\ \omega_{c3}\ \omega_{c4}\,]^T$. The collapsed deposits are represented by topological nodes to construct the semantic space, and formula (6) shows this partition method:

$$
N = \{R_i = W_j \cup D_k \mid i \geq 1,\ j > 0,\ k > 0,\ j \neq k\} \tag{6}
$$

where N indicates the topological nodes, R_i refers to destroyed or collapsed regions, W_j are damaged columns, and D_k represents the unstructured deposits.
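To make the criterion weighting of eq. (5) concrete, a small numpy sketch is given below; the numerical weights are illustrative placeholders only, not values from the paper.

```python
import numpy as np

# Rows: schemes S1 (topological), S2 (metric), S3 (hierarchical).
# Columns: criteria C1 applicability, C2 accuracy, C3 cost, C4 interactivity.
# All values below are made-up placeholders.
W = np.array([[0.4, 0.2, 0.2, 0.2],
              [0.1, 0.5, 0.3, 0.1],
              [0.3, 0.3, 0.2, 0.2]])

omega_c = np.array([0.25, 0.40, 0.20, 0.15])  # criterion vector (placeholder)

omega_s = W @ omega_c                          # eq. (5): matrix-vector product
print(f"omega_s = {omega_s}, preferred scheme: S{int(np.argmax(omega_s)) + 1}")
```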
C. Wireless Ad hoc network deployment

A wireless Ad hoc local network is deployed in the SAR environment, and the mobile robot SLAM scene can be monitored and managed by a remote PC control center via this network. The deployment tries to maximize coverage while keeping the number of sensors at a minimum. During deployment, robots can measure the average of the received signal strength indicator (RSSI) values, and then stop and place wireless sensor nodes wherever the average of the measured RSSI values falls below a predetermined threshold.

According to C. Y. Shih et al. [7], the sensor nodes are deployed in the following ways:

1) Mobile robots carry sensors and deploy them randomly within the appointed SAR area.
2) A mobile robot integrated with several kinds of sensors can act as a mobile node and perform self-deployment by moving to assigned locations.

The autonomous deployment generally equalizes the assignment. Assuming a SAR area A is to be covered, the algorithm outputs a series of coordinates as the location assignment:

$$
L_A = \{A(x_i, y_i, \theta_i) \mid i \leq N\} \tag{7}
$$

where L_A represents the assigned locations; (x_i, y_i, θ_i) indicates the coordinates and orientation of the ith deployed position; and N is the number of wireless sensor nodes. These coordinates are assigned to mobile robots as the waypoints they need to visit for coverage; the multi-robot team then performs the deployment tasks within the SAR environment.
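Below is a minimal sketch of the RSSI-triggered placement rule described above; the robot-side sampling and node-dropping calls are hypothetical placeholder APIs, and the threshold is an assumed value.

```python
from collections import deque

RSSI_THRESHOLD_DBM = -75.0   # assumed predetermined threshold
WINDOW = 10                  # number of recent RSSI samples to average

def deploy_nodes_along_path(robot, max_nodes: int) -> int:
    """Drop a relay node whenever the windowed mean RSSI (to the last
    placed node or the base station) falls below the threshold."""
    recent = deque(maxlen=WINDOW)
    placed = 0
    while placed < max_nodes and robot.is_exploring():   # hypothetical API
        recent.append(robot.measure_rssi())              # hypothetical API
        if len(recent) == WINDOW and sum(recent) / WINDOW < RSSI_THRESHOLD_DBM:
            robot.stop()
            robot.place_sensor_node()                    # hypothetical API
            placed += 1
            recent.clear()          # restart the window at the new node
        robot.step()                # continue along the assigned waypoints
    return placed
```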
D. Robot arm perceiving the significant objects

Adopting the calibration procedure and state estimation of [8], we evaluated the sensors with respect to the robot arm and employed application programming interfaces (APIs), which formed an integrated strategy to accomplish the exploration task. In the significant-object SLAM process, the robot uses the arm's perception for exploration in SAR post-disaster environments. Eq. (8) is our developed motion model, according to the Robai Corporation (http://www.rabai.com, Version 4.0.0):

$$
\dot{q} = \beta(q, V)
\begin{bmatrix} J \\ N_J^T W \end{bmatrix}^{-1}
\begin{bmatrix} V \\ -\alpha N_J^T C \end{bmatrix}
\tag{8}
$$

where V is an m-length vector representation of the motion of the end-effectors; q is the n-length vector of joint positions; q̇ is the time derivative of q, i.e., the velocity; J is the m × n manipulator Jacobian; and N_J is an n × (n − m) set of vectors that spans the null space, defined by:

$$
J N_J = 0 \tag{9}
$$

where N_J has rank (n − m). W is a function of q and α is a scalar; mathematically, the model achieves the desired value of V while minimizing $\frac{1}{2}\dot{q}^T W \dot{q} + \alpha C^T \dot{q}$. The parameters α, W and C can be varied to give many different types of velocity control. The vector C is simply a weighted sum of all criteria, as in eq. (10):

$$
C = \sum_{i=1}^{n} w_i C_i \tag{10}
$$

where w_i is the ith weight value of the multiple criteria, and β(q, V) is a robust extension factor defined to help provide robustness in the presence of kinematic singularities.
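A small numpy sketch of the resolved-rate control law of eqs. (8)-(10) follows; it is our illustration rather than the Robai implementation, and it simplifies the singularity-robust factor β(q, V) to a constant.

```python
import numpy as np
from scipy.linalg import null_space

def arm_joint_velocity(J, V, W, C, alpha=0.1, beta=1.0):
    """Eq. (8): qdot = beta * [[J], [N_J^T W]]^(-1) [[V], [-alpha N_J^T C]].
    J: (m, n) Jacobian; V: (m,) end-effector velocity; W: (n, n) weighting;
    C: (n,) criterion vector from eq. (10); beta stands in for beta(q, V)."""
    N_J = null_space(J)                   # (n, n-m); columns satisfy J N_J = 0, eq. (9)
    A = np.vstack([J, N_J.T @ W])         # (n, n) stacked system matrix
    b = np.concatenate([V, -alpha * (N_J.T @ C)])
    return beta * np.linalg.solve(A, b)   # joint velocities qdot, shape (n,)

# Example: planar 3-joint arm (m = 2 task dimensions, n = 3 joints)
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])
qdot = arm_joint_velocity(J, V=np.array([0.1, 0.0]), W=np.eye(3), C=np.ones(3))
print(qdot)
```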
localization and mapping is expressed as in eq. (15):
E. Information fusion of multi-sensors and multiple robots

According to N. Atanasov et al. [9], the active information acquisition system is a stochastic optimal control process; centralized fusion offers accurate localization at the cost of increased computation, whereas decentralized fusion decreases accuracy while improving the computation speed. Autonomous negotiation and the sharing of data among multiple robots and multi-sensors are valuable resources [10].

Adopting the methods in [11], the LRF multi-sensor information and the visual system data are processed to determine the distance and velocity of the objects located relative to the mobile robot in the SAR environment. The observations are employed to estimate the relation between true robot locations and landmark locations. The vector sets are defined as follows:

1) the previous locations of the mobile robot:

$$
X_{0:k} = \{x_0, x_1, \cdots, x_k\} = \{X_{0:k-1}, x_k\} \tag{11}
$$

2) the previous control inputs:

$$
U_{0:k} = \{u_0, u_1, \cdots, u_k\} = \{U_{0:k-1}, u_k\} \tag{12}
$$

3) the set of all landmarks:

$$
m = \{m_0, m_1, \cdots, m_n\} \tag{13}
$$

4) the set of all landmark observations:

$$
Z_{0:k} = \{z_0, z_1, \cdots, z_k\} = \{Z_{0:k-1}, z_k\} \tag{14}
$$

where x_k is the location and orientation state vector; u_k is the control vector, which drives the mobile robot state from x_{k−1} at time k − 1 to x_k at time k; m_i is the vector of the ith landmark location, which is time invariant; z_{ik} is the ith landmark observation, which corresponds to the mobile robot location; and z_k denotes multiple landmark observations.

In probabilistic SLAM, a degree of convergence between the Kalman-filter-based SLAM methods and probabilistic localization and mapping is expressed as in eq. (15):

$$
P(X_{0:k}, m \mid Z_{0:k}, U_{0:k}, x_0) = P(m \mid X_{0:k}, Z_{0:k})\, P(X_{0:k} \mid Z_{0:k}, U_{0:k}, x_0) \tag{15}
$$

where the robot locations' probability distribution is $P(x_k \mid Z_{0:k}, U_{0:k}, m)$, and the joint distribution is $P(x_k, m \mid Z_{0:k}, U_{0:k})$.

The above eqs. (11)-(15) establish the pre-defined models of SAR mobile robot SLAM. The fusion of the sensors' measurements makes the nodes or particles maximally consistent with the poses of the mobile robot, and trajectory fusion integrates all trajectories obtained from different sensors into one estimated trajectory.

III. ALGORITHMS DERIVED FOR SAR SLAM METHODS

A. Iterative closest point

According to [12], the current attitude of the mobile robot depends not only on the current measurements of the inertial measurement units (IMUs), but also on the previous pose and orientation of the robot; the automated and continuous determination of the object position allows these measurements and previous states to be used repeatedly. Time-of-flight (ToF) is used to measure the depth as a point-to-point squared distance for matching 2-D laser scans with 3-D point clouds. T(p) is the previous rigid transformation of a point p; if T comprises a rotation matrix R and a translation component t, the model is expressed as:

$$
T(p) = Rp + t \tag{16}
$$

Sparse visual features of the object are extracted from two neighbouring image frames and then matched with their corresponding depth values; this process transforms the feature points of a SAR object into the 3-D representation of a sub-map. The random sample consensus (RANSAC) alignment function optimizes the output between the two feature sets P_s and P_t to form the better rigid transformation T*, which is defined as:

$$
T^{*} = \arg\min_{T} \frac{1}{|A_f|} \sum_{i \in A_f} w_i \left| T(p_s^i) - T(p_t^i) \right|^2 \tag{17}
$$

where A_f represents the summation measurements of the squared distance between feature points in the two sets; $p_s^i$ is a feature point of the source frame set P_s, and $p_t^i$ is a feature point of the target frame set P_t; T is the previous rigid transformation aligning the feature points of the two frames; arg min(·) stands for the target function (·) achieving its minimum value; and w_i indicates the factor weights used to control the contribution of each item.

A sub-grid map incorporates the input data in an efficient representation once the mobile robots are determined. The source point cloud P_s is matched with its nearest neighbouring target point cloud P_t, and a rigid transformation is modeled to minimize the sum of squared spatial point data association errors.
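For reference, here is a compact sketch of the weighted rigid alignment at the core of eq. (17), using the SVD-based (Kabsch) closed-form solution for a fixed set of correspondences; in the full method this step would be wrapped in RANSAC hypothesis sampling, which is omitted here.

```python
import numpy as np

def weighted_rigid_transform(ps, pt, w):
    """Closed-form weighted least squares for the rigid (R, t) minimizing
    sum_i w_i |R ps_i + t - pt_i|^2, the inner problem of eq. (17).
    ps, pt: (N, 3) corresponding source/target points; w: (N,) weights."""
    w = w / w.sum()
    mu_s = (w[:, None] * ps).sum(axis=0)          # weighted centroids
    mu_t = (w[:, None] * pt).sum(axis=0)
    H = (ps - mu_s).T @ np.diag(w) @ (pt - mu_t)  # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy check: recover a known rotation about z and a translation
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
ps = np.random.rand(20, 3)
pt = ps @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = weighted_rigid_transform(ps, pt, np.ones(20))
print(np.allclose(R, R_true), t)
```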
for range-orientation measurements when the robot’s pose is
two neighbor image frames, and then match them with their known. The basis for the EKF-SLAM method is employed to
corresponding depth values; this process will transform feature describe the mobile robot motion in the probabilistic form as
points of SAR object into 3-D representation of sub-map. eq. (18):
The random sample consensus (RANSAC) alignment function
optimized the output between the two features sets of Ps and P(xk |xk−1 , uk ) ⇔ xk = f (xk−1 , uk ) + wk (18)
Pt to form the better rigid transformation T ∗ , which is defined
where xk is the robot’s pose state vector, uk is the control
as: , - vector, wk represents Gaussian white noise.
1

T = arg min
T
∑ wi |T (ps ) − T (pt )|
|A f | i∈A
i i 2
(17) The R-B particle filter in robotic SLAM is also called
f improved FastSLAM algorithm, which fuses EKF-SLAM to-
gether to establish the observation model and motion model,
where A f represents the summation measurements of squared
and the prior and posterior distributions are modeled as the
distance between feature points in the two sets. pis is the
follows:
feature point in Ps source frame set, and pti is the feature point
P(x1 , x2 ) = P(x2 |x1 )P(x2 ) (19)
in Pt target frame set. T is the previous rigid transformation
aligned two frames to feature point. arg min(·) stands for the l
xk+1 = F l xkl + Bl nlk (20)
target function (·) achieving minimum value. wi indicates the
factor weights used to control the contribution of each item. qk+1 = f (ϖk − nwk )qk (21)
A sub-grid map is incorporated input data in an efficient
representation once mobile robots are determined. Points cloud yqk = qk + eqk (22)
source Ps are matched with their nearest neighboring points p
yk = [ yk yνk yak ] = h(qk ) +C(qk )xkl + [ ekp eνk eak ]T
cloud target Pt , and a rigid transformation is modeled to (23)
minimize the sum of squared spatial points data associated and the margined joint distribution:
error.
The estimation depth distance corresponding to the hue 1 N (i)
value is built according to quadratic eqs. (1)-(3), and the
P(x2 ) = ∑ (x2 |x1 )
N i=1
(24)

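A minimal sketch of an EKF prediction step for the motion model of eq. (18) is shown below, assuming a unicycle-type pose x_k = (x, y, θ) and control u_k = (v, ω) over a time step dt; it illustrates the probabilistic form, not the authors' full filter.

```python
import numpy as np

def ekf_predict(x, P, u, Q, dt=0.1):
    """EKF prediction for x_k = f(x_{k-1}, u_k) + w_k (eq. (18)).
    x: (3,) pose [x, y, theta]; P: (3, 3) covariance;
    u: (2,) control [v, omega]; Q: (3, 3) covariance of the noise w_k."""
    v, omega = u
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),     # nonlinear motion model f
                       py + v * dt * np.sin(th),
                       th + omega * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],  # Jacobian df/dx
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q                   # propagate mean and covariance

x, P = np.zeros(3), np.eye(3) * 0.01
x, P = ekf_predict(x, P, u=np.array([1.0, 0.2]), Q=np.eye(3) * 1e-4)
print(x, np.diag(P))
```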
The R-B particle filter in robotic SLAM, also called the improved FastSLAM algorithm, fuses EKF-SLAM to establish the observation model and the motion model, and the prior and posterior distributions are modeled as follows:

$$
P(x_1, x_2) = P(x_2 \mid x_1) P(x_2) \tag{19}
$$

$$
x_{k+1}^{l} = F^{l} x_k^{l} + B^{l} n_k^{l} \tag{20}
$$

$$
q_{k+1} = f(\varpi_k - n_k^{w}) q_k \tag{21}
$$

$$
y_k^{q} = q_k + e_k^{q} \tag{22}
$$

$$
y_k = \begin{bmatrix} y_k^{p} & y_k^{\nu} & y_k^{a} \end{bmatrix} = h(q_k) + C(q_k)\, x_k^{l} + \begin{bmatrix} e_k^{p} & e_k^{\nu} & e_k^{a} \end{bmatrix}^{T} \tag{23}
$$

and the marginalized joint distribution:

$$
P(x_2) = \frac{1}{N} \sum_{i=1}^{N} P(x_2 \mid x_1^{(i)}) \tag{24}
$$

where $[\,e_k^{p}\ e_k^{\nu}\ e_k^{a}\,]^{T} \sim N(0, R^{l})$ with $R^{l} = \mathrm{diag}[\,R^{p}\ R^{\nu}\ R^{a}\,]$, and $x_1^{(i)} \sim P(x_1)$, where P(x_1) is the probability sampled from the state vector set x_k. Here $x_k^l$ is the linear part, $q_k$ is the nonlinear part, N is the number of weighted particles, and h evaluates the particle weights. A dataset of pose particles was partly recorded, and the mobile robot trajectories are displayed in Fig. 3.

Fig. 3. The robot trajectory with various poses and particles; the blue line represents the x-coordinate trajectory, the green line shows the y-coordinate trajectory, and the red line indicates the orientations (angles).
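To make the marginalized structure of eqs. (19)-(24) concrete, here is a schematic Rao-Blackwellized particle filter skeleton: each particle carries a sampled nonlinear state q plus a Kalman filter over the linear state x^l. The model callables (f_sample, h, Cq) and matrices are user-supplied assumptions, and the sketch simplifies the paper's formulation.

```python
import numpy as np

class RBParticle:
    """One particle: nonlinear state q plus a Gaussian (mean, cov)
    over the linear state x^l, per eqs. (20)-(23)."""
    def __init__(self, q, mean, cov, weight):
        self.q, self.mean, self.cov, self.weight = q, mean, cov, weight

def rbpf_step(particles, y, f_sample, Fl, Bl, Ql, h, Cq, Rl):
    for p in particles:
        p.q = f_sample(p.q)                    # propagate nonlinear part (eq. (21))
        p.mean = Fl @ p.mean                   # KF predict of linear part (eq. (20))
        p.cov = Fl @ p.cov @ Fl.T + Bl @ Ql @ Bl.T
        # Observation y = h(q) + C(q) x^l + e (eq. (23)): KF update and weighting
        C = Cq(p.q)
        S = C @ p.cov @ C.T + Rl               # innovation covariance
        K = p.cov @ C.T @ np.linalg.inv(S)     # Kalman gain
        innov = y - h(p.q) - C @ p.mean
        p.mean = p.mean + K @ innov
        p.cov = (np.eye(len(p.mean)) - K @ C) @ p.cov
        # Reweight by the Gaussian likelihood of the innovation
        p.weight *= np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) \
                    / np.sqrt(np.linalg.det(2.0 * np.pi * S))
    total = sum(p.weight for p in particles)   # normalize, cf. eq. (24)
    for p in particles:
        p.weight /= total
    return particles
```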

IV. SAR SLAM PERFORMANCES: QUALITATIVE AND QUANTITATIVE ANALYSIS

C. Y. Shih et al. [7] evaluate global localization robustness and map completion; according to their research, sensor noise and map imperfection are managed in a consecutive way, and the area coverage rate remains to be validated. Aiming at SAR post-disaster environments, we propose the metric criteria of object identification, loop closure, and area coverage rate, which assess SLAM performance in both quality and quantity.

1) Object identification

According to F. Aghili et al. [14], the observation error is related to the pose refinement step of ICP, which depends not only on the quality of the 3-D data obtained from the multi-sensors, but also on the ability of ICP to converge to the global level. The 3-D representation primarily handles dynamic, complex environments and the inherently noisy stereo data, while the 2-D representation efficiently computes local statistics. For perceiving exploration, the robot arm touches or grasps the targets, and our extraction algorithms are applied to capture the features of objects of interest. As shown in Fig. 4, the matrix norm of F is defined as the grasp signal, which is related to the strength of the force with which the arm grasps the object.

Fig. 4. Grasp signal as the robot arm performs a task; the dotted line represents the strength of the force with which the arm grasps the object.

2) Loop closure detection

Our method detects loop closures by matching data frames against a subset of previously collected frames. RGB-D mapping builds a global model to estimate the robot location and pose orientation. Sparse feature points are used to match the current frame with previous measurements of the 3-D spatial objects, and loop closures are detected in a parallel thread. Adopting the approaches in [15], if a loop closure is detected, a constraint is added to the pose graph and a global alignment process is triggered. Fig. 5 shows the mobile robot trajectory of a loop-closure exploration in an SAR environment.

Fig. 5. Loop closure exploration trajectories of a mobile robot.
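As an illustrative sketch of this frame-matching style of loop-closure detection (not the authors' exact pipeline), the snippet below scores the current frame against a stored keyframe using ORB features and a ratio test; both thresholds are assumed values.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

MIN_GOOD_MATCHES = 40   # assumed acceptance threshold
RATIO = 0.75            # Lowe ratio-test threshold

def is_loop_closure(current_gray, keyframe_gray) -> bool:
    """True if the current frame matches a stored keyframe strongly
    enough to propose a loop-closure constraint for the pose graph."""
    _, des_cur = orb.detectAndCompute(current_gray, None)
    _, des_key = orb.detectAndCompute(keyframe_gray, None)
    if des_cur is None or des_key is None:
        return False
    matches = bf.knnMatch(des_cur, des_key, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < RATIO * pair[1].distance]
    return len(good) >= MIN_GOOD_MATCHES

# Usage: scan a subset of previously collected keyframes
# closures = [i for i, kf in enumerate(keyframes) if is_loop_closure(frame, kf)]
```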

3) Coverage exploration

The mobile robot drives autonomously in SAR environments, achieving real-time trajectories within the exploration area. Our autonomous robotic system is designed according to [16], enabling robots to explore and map post-disaster areas and to obtain maps of the current environment. Adopting the set of criteria analyzed in [17], the quality of the topological map is assessed based on spectral clustering of discretized metric sub-maps; our algorithms extract sparse spatial information from the sub-maps and build an environment model that aims to simplify the exploration task for mobile robots. The performance of the area coverage is shown in Fig. 6: the red bands represent the mobile robot's exploration trajectories, and the blue band and the gray radiating lines indicate the LRF scans and sonar signals, respectively. Using our strategies, the mobile robot's exploration covers almost the whole area and also forms loop-closure exploration.

Fig. 6. The mobile robot's exploration trajectories cover the whole cluttered, unstructured area; the gray radiation and the blue band indicate sonar and laser scanning respectively; the exploration trajectory is represented by the red band.
V. S IMULATIONS OF THE IMPROVED SLAM METHODS
V. SIMULATIONS OF THE IMPROVED SLAM METHODS

We use a virtual mobile robot, represented as the red image in Fig. 7 (a) and (b), to explore a cluttered, unstructured
SAR area. The simulation indicates that the trajectories almost completely cover the whole exploration area; the significant targets and their special features are marked in the newly updated map, i.e., the pink rectangular regions and colored blocks shown in Fig. 7 (b).

Fig. 7. Identifying environmental significant targets and simultaneously marking them in the map being built. (a) A virtual mobile robot (the red image) building a map and locating itself simultaneously. (b) Processing the acquisition data to form a new map marked with the identified targets.
VI. MOBILE ROBOT SLAM EXPERIMENTS

Fig. 8 (a) and (b) display perception-driven exploration SLAM in an outdoor collapsed SAR environment, where the significant targets, such as the color blocks, are identified and marked. Compared with our improved SLAM, the conventional map does not carry significant marks: the localizations of significant objects (i.e., the marked color blocks) are shown in Fig. 9 (a), whereas the traditional map does not have any significant marks to be localized, as shown in Fig. 9 (b).

Fig. 8. A crawler robot explores a practical cluttered SAR environment. (a) The crawler mobile robot, equipped with a Kinect RGB-D camera and multi-sensors, explores a collapsed area site scene. (b) A map of the SAR environment with significant targets, i.e., the color blocks marked 1, 2, 3, and 4.
Fig. 9. The comparison of the improved SAR SLAM with conventional SLAM with respect to significant-object mapping and localization. (a) An SAR environmental map marked with significant targets. (b) The conventional map without robot arm perceiving exploration.
VII. RESULTS AND FUTURE WORK

The simulations and experiments demonstrate that the initial metric map is the basis of SAR SLAM, in which the robot detects the whole area of the map, including obstacles and objects; as a result, the map is completely produced upon loop-closure detection, as shown in Fig. 7 and Fig. 8. A visual positioning service (VPS) applied to vSLAM would localize the mobile robot relative to the SAR objects of interest, visualized landmark-like via the remote PC control center. We are trying to apply VPS strategies to mobile robot SAR SLAM in future work.

ACKNOWLEDGMENT

This research was supported by the NSFC (National Natural Science Foundation of China) under grants no. 61573213, 61473174, and 61473179.

REFERENCES

[1] Y. Liu and G. Nejat, "Robotic urban search and rescue: A survey from the control perspective," Journal of Intelligent & Robotic Systems, vol. 72, no. 2, pp. 147-165, 2013.
[2] W. Rone and P. Ben-Tzvi, "Mapping, localization and motion planning in mobile multi-robotic systems," Robotica, vol. 31, no. 1, pp. 381-408, 2013.
[3] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: Part I," IEEE Robotics & Automation Magazine, vol. 13, no. 2, pp. 99-107, 2006.
[4] M. M. Saleem, "An economic simultaneous localization and mapping system for remote mobile robot using SONAR and an innovative AI algorithm," International Journal of Future Computer and Communication, vol. 2, no. 2, April 2013.
[5] J. Engel, T. Schöps and D. Cremers, "LSD-SLAM: Large-scale direct monocular SLAM," Technical University Munich, pp. 1-16, 2014.
[6] S. Li and M. Zhong, "High-precision disturbance compensation for a three-axis gyro-stabilized camera mount," IEEE/ASME Transactions on Mechatronics, vol. 20, no. 6, pp. 3135-3147, Dec. 2015.
[7] C. Y. Shih, J. Capitán and P. J. Marrón, "On the cooperation between mobile robots and wireless sensor networks," in A. Koubâa and A. Khelil (eds.), Cooperative Robots and Sensor Networks, Studies in Computational Intelligence, Springer-Verlag Berlin Heidelberg, pp. 67-86, 2014.
[8] B. Olofsson, J. Antonsson, H. Kortier, et al., "Sensor fusion for robotic workspace state estimation," IEEE/ASME Transactions on Mechatronics, vol. 21, no. 5, pp. 2236-2248, 2016.
[9] N. Atanasov, J. L. Ny and K. Daniilidis, "Decentralized active information acquisition: Theory and application to multi-robot SLAM," in IEEE International Conference on Robotics and Automation, 2015, pp. 4775-4782.
[10] L. Wang, M. Liu and Q. H. Meng, "Real-time multisensor data retrieval for cloud robotic systems," IEEE Transactions on Automation Science & Engineering, vol. 12, no. 2, pp. 507-518, 2015.
[11] N. Milisavljevic, "Sensor and data fusion," Journal of Intelligent & Robotic Systems, vol. 1, no. 2, pp. 103-116, 2009.
[12] S. Zimmermann, T. Tiemerding and S. Fatikow, "Automated robotic manipulation of individual colloidal particles using vision-based control," IEEE/ASME Transactions on Mechatronics, vol. 20, no. 5, pp. 2031-2038, Oct. 2015.
[13] G. Tuna, V. C. Gungor and K. Gulez, "An autonomous wireless sensor network deployment system using mobile robots for human existence detection in case of disasters," Ad Hoc Networks, vol. 13, no. 1, pp. 54-68, 2014.
[14] F. Aghili and C. Y. Su, "Robust relative navigation by integration of ICP and adaptive Kalman filter using laser scanner and IMU," IEEE/ASME Transactions on Mechatronics, vol. 21, no. 4, pp. 2015-2026, 2016.
[15] P. Henry, M. Krainin, E. Herbst, et al., "RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments," The International Journal of Robotics Research, pp. 647-663, 2012.
[16] X. Li, Z. Sun, D. Cao, et al., "Real-time trajectory planning for autonomous urban driving: Framework, algorithms and verifications," IEEE/ASME Transactions on Mechatronics, vol. 21, no. 2, pp. 740-753, April 2016.
[17] M. Liu, F. Colas, L. Oth, et al., "Incremental topological segmentation for semi-structured environments using discretized GVG," Autonomous Robots, vol. 38, no. 2, pp. 143-160, 2015.