PLDS-SLAM: Point and Line Features SLAM in Dynamic Environment
Abstract
1. Introduction
- (1) A novel stereo dynamic SLAM system based on point-line feature fusion is proposed. A priori dynamic region features are obtained by detecting and segmenting the dynamic regions, and geometric constraints are then applied to recover richer static features from the a priori dynamic objects.
- (2) A line-segment-based geometric constraint algorithm is proposed. It identifies potentially dynamic and mismatched line features through geometric constraints on line segments, improving the accuracy and robustness of line feature extraction and data association.
- (3) A set of a priori dynamic object recognition algorithms based on semantic segmentation is designed. Together with the geometric constraint algorithm, it addresses the feature deviation and insufficiency in map matching and motion tracking that arise when algorithms do not distinguish dynamic from static objects, and which lead to tracking failure and trajectory deviation.
- (4) A Bayesian outlier elimination algorithm constrained by point and line features is proposed. It removes dynamic point and line feature noise in complex environments and improves the accuracy of dynamic noise removal by tracking that noise over consecutive frames, thereby improving the accuracy and stability of the overall SLAM system.
2. Related Work
3. Method
3.1. Overall Framework
3.2. Geometric Constraint and Representation of Line Segment
3.3. A Priori Dynamic Object Static Determination
Algorithm 1: A priori dynamic object static determination

Input: a priori dynamic object set $D$; camera 1 point and line features $P_1$, $L_1$ with descriptors; camera 2 point and line features $P_2$, $L_2$ with descriptors
Output: static object set $S$

1: Obtain matched keypoints and matched keylines in the non-a-priori-dynamic regions;
2: Calculate the fundamental matrix $F$ from these matches;
3: Extract matched keypoints and keylines for the a priori dynamic object regions;
4: for each dynamic object $O_i$ do
5:   for each keypoint matching pair $(p_1, p_2)$ do
6:     Calculate the keypoint matching pair error $E_p = \frac{|\tilde{p}_2^{\top} F \tilde{p}_1|}{\sqrt{X^2 + Y^2}}$, where $X$, $Y$ denote the first two coordinates of the epipolar line vector $F \tilde{p}_1$;
7:   end for
8:   for each keyline matching pair do
9:     Calculate the keyline matching pair error $E_l = \frac{|l^{\top}\tilde{s}| + |l^{\top}\tilde{e}|}{2\sqrt{l_1^2 + l_2^2}}$, where $s$ and $e$ are the start and end points of the line segment, and $l_1$, $l_2$ are the first two coefficients of the matched line $l$;
10:  end for
11:  if the mean errors satisfy $\bar{E}_p < \epsilon_p$ and $\bar{E}_l < \epsilon_l$ then
12:    Append $O_i$ to $S$;
13:  end if
14: end for
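To make the epipolar checks concrete, here is a minimal Python/OpenCV sketch of Algorithm 1. The specifics are illustrative rather than taken from the paper: matches arrive as NumPy arrays, each object dictionary carries its own matched points (`pts1`, `pts2`) and segments (`segs1`, `segs2` as x1, y1, x2, y2 rows), the thresholds `eps_p` and `eps_l` are placeholders, and the keyline error here tests segment endpoints against their epipolar lines, which is one reasonable reading of step 9.

```python
# Illustrative sketch of Algorithm 1; names and thresholds are assumptions.
import numpy as np
import cv2

def epipolar_point_errors(F, pts1, pts2):
    """Distance of each point in pts2 to the epipolar line F @ pts1 (step 6)."""
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coordinates
    pts2_h = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = (F @ pts1_h.T).T                             # epipolar lines (X, Y, Z)
    num = np.abs(np.sum(pts2_h * lines, axis=1))         # |p2^T F p1|
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)   # sqrt(X^2 + Y^2)
    return num / den

def epipolar_line_errors(F, segs1, segs2):
    """Mean epipolar error of a segment's start and end points (step 9)."""
    s_err = epipolar_point_errors(F, segs1[:, :2], segs2[:, :2])
    e_err = epipolar_point_errors(F, segs1[:, 2:], segs2[:, 2:])
    return 0.5 * (s_err + e_err)

def static_objects(objects, static_pts1, static_pts2, eps_p=1.0, eps_l=1.5):
    """Return the a priori dynamic objects whose features obey the epipolar constraint."""
    # Step 2: fundamental matrix from matches OUTSIDE the a priori dynamic regions.
    F, _ = cv2.findFundamentalMat(static_pts1, static_pts2, cv2.FM_RANSAC, 1.0, 0.999)
    S = []
    for obj in objects:  # each object carries its own matched points/segments
        e_p = epipolar_point_errors(F, obj["pts1"], obj["pts2"]).mean()
        e_l = epipolar_line_errors(F, obj["segs1"], obj["segs2"]).mean()
        if e_p < eps_p and e_l < eps_l:  # step 11: both constraints satisfied
            S.append(obj)
    return S
```

Estimating $F$ only from matches outside the a priori dynamic regions (steps 1–2) keeps moving objects from contaminating the epipolar geometry; an object whose own matches nevertheless satisfy that geometry is moving with the static background and can be returned to the static set $S$.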
3.4. Dynamic Noise Tracking
Algorithm 2: Outlier elimination algorithm for dynamic noise tracking

Input: a priori dynamic object masks $M$; map features $f$
Output: inlier and dynamic probability set $F$

1: Obtain the a priori dynamic object contours $b$ from the masks $M$;
2: for each feature $f_i$ do
3:   Calculate the distance from $f_i$ to the segmentation boundary $b$: $d_i = \min_{x \in b} \lVert f_i - x \rVert$;
4:   Estimate the semantic dynamic segmentation probability $P_s(f_i)$ of feature $f_i$ as a decreasing function of $d_i$;
5:   Calculate the posterior probability $P(m_t \mid z_{1:t}) \propto P(z_t \mid m_t)\, P(m_{t-1} \mid z_{1:t-1})$;
6:   Using the observation probability $P(z_t \mid m_t)$ and the prior probability $P(m_{t-1} \mid z_{1:t-1})$, update the current movement probability of $f_i$;
7:   if the movement probability satisfies $P(m_t \mid z_{1:t}) < \tau$ then
8:     Append $f_i$, with its movement probability, to $F$;
9:   end if
10: end for
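The following sketch illustrates one Bayesian update step of Algorithm 2, assuming a binary mask image, a logistic observation model over the boundary distance, and standard binary Bayes fusion; the decay constant `k`, the inside-mask probability 0.9, and the threshold `tau` are illustrative assumptions, not the paper's exact formulas.

```python
# Illustrative sketch of Algorithm 2; observation model and constants are assumptions.
import numpy as np
import cv2

def mask_distances(mask, feats):
    """Distance from each feature to the nearest a priori dynamic mask pixel.
    mask: uint8 binary image (non-zero inside dynamic objects); feats: Nx2 (x, y)."""
    # distanceTransform gives each pixel's distance to the nearest zero pixel,
    # so on the inverted mask it yields distance-to-mask (0 inside the mask).
    dist = cv2.distanceTransform((mask == 0).astype(np.uint8), cv2.DIST_L2, 3)
    xs, ys = feats[:, 0].astype(int), feats[:, 1].astype(int)
    return dist[ys, xs], mask[ys, xs] > 0

def update_movement_probability(prior, mask, feats, k=0.1, tau=0.5):
    """One Bayesian update of per-feature movement probabilities."""
    d, inside = mask_distances(mask, feats)
    # Observation model: features inside a dynamic mask are very likely dynamic;
    # outside, the probability decays with distance to the segmentation boundary.
    obs = np.where(inside, 0.9, 1.0 / (1.0 + np.exp(k * d)))
    # Binary Bayes fusion of the observation with the tracked prior (steps 5-6).
    post = obs * prior / (obs * prior + (1.0 - obs) * (1.0 - prior) + 1e-9)
    inliers = post < tau  # step 7: keep features unlikely to be moving
    return post, inliers
```

Feeding `post` back in as `prior` on the next frame implements the continuous-frame tracking of dynamic noise described in contribution (4): a feature that keeps appearing in or near dynamic masks accumulates a high movement probability and is excluded from pose optimization, while a feature only briefly clipped by a mask is not discarded outright.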
3.5. Construction of the Optimization Error Function
4. Experimental Results
4.1. Geometric Constraint Improvements for Line Feature Matching
4.2. Dynamic Noise Removal Experiment
4.3. Dynamic and Static Object Feature Separation
4.4. Dynamic Environment Trajectory Accuracy Verification Experiment
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Davison, A.J. Real-time simultaneous localisation and mapping with a single camera. In Proceedings of the IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; Volume 3, p. 1403.
- Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; IEEE: New York, NY, USA, 2007; pp. 225–234.
- Bartoli, A.; Sturm, P. Structure-from-motion using lines: Representation, triangulation, and bundle adjustment. Comput. Vis. Image Underst. 2005, 100, 416–441.
- Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163.
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
- Gálvez-López, D.; Tardós, J.D. Bags of binary words for fast place recognition in image sequences. IEEE Trans. Robot. 2012, 28, 1188–1197.
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890.
- Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-scale direct monocular SLAM. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 834–849.
- Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast semi-direct monocular visual odometry. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; IEEE: New York, NY, USA, 2014; pp. 15–22.
- Engel, J.; Koltun, V.; Cremers, D. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 611–625.
- Rong, H.; Gao, Y.; Guan, L.; Ramirez-Serrano, A.; Xu, X.; Zhu, Y. Point-line visual stereo SLAM using EDlines and PL-BoW. Remote Sens. 2021, 13, 3591.
- Akinlar, C.; Topal, C. EDLines: A real-time line segment detector with a false detection control. Pattern Recognit. Lett. 2011, 32, 1633–1642.
- Gomez-Ojeda, R.; Moreno, F.; Scaramuzza, D.; Gonzalez-Jimenez, J. PL-SLAM: A stereo SLAM system through the combination of points and line segments. IEEE Trans. Robot. 2019, 35, 734–746.
- Zuo, X.; Xie, X.; Liu, Y.; Huang, G. Robust visual SLAM with point and line features. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; IEEE: New York, NY, USA, 2017.
- Kim, D.H.; Kim, J.H. Effective background model-based RGB-D dense visual odometry in a dynamic environment. IEEE Trans. Robot. 2016, 32, 1565–1573.
- Li, S.; Lee, D. RGB-D SLAM in dynamic environments using static point weighting. IEEE Robot. Autom. Lett. 2017, 2, 2263–2270.
- Sun, Y.; Liu, M.; Meng, M.Q.H. Motion removal for reliable RGB-D SLAM in dynamic environments. Robot. Auton. Syst. 2018, 108, 115–128.
- Scona, R.; Jaimez, M.; Petillot, Y.R.; Fallon, M.; Cremers, D. StaticFusion: Background reconstruction for dense RGB-D SLAM in dynamic environments. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; IEEE: New York, NY, USA, 2018; pp. 3849–3856.
- Dai, W.; Zhang, Y.; Li, P.; Fang, Z.; Scherer, S. RGB-D SLAM in dynamic environments using point correlations. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 373–389.
- Zhang, Z.; Doi, K.; Iwasaki, A.; Xu, G. Unsupervised domain adaptation of high-resolution aerial images via correlation alignment and self training. IEEE Geosci. Remote Sens. Lett. 2020, 18, 746–750.
- Zhang, Z.; Ji, A.; Wang, K.; Zhang, L. UnrollingNet: An attention-based deep learning approach for the segmentation of large-scale point clouds of tunnels. Autom. Constr. 2022, 142, 104456.
- Yu, C.; Liu, Z.; Liu, X.J.; Xie, F.; Yang, Y.; Wei, Q.; Fei, Q. DS-SLAM: A semantic visual SLAM towards dynamic environments. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: New York, NY, USA, 2018; pp. 1168–1174.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Bescos, B.; Fácil, J.M.; Civera, J.; Neira, J. DynaSLAM: Tracking, mapping, and inpainting in dynamic scenes. IEEE Robot. Autom. Lett. 2018, 3, 4076–4083.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
- Zhang, J.; Henein, M.; Mahony, R.; Ila, V. VDO-SLAM: A visual dynamic object-aware SLAM system. arXiv 2020, arXiv:2005.11052.
- Li, A.; Wang, J.; Xu, M.; Chen, Z. DP-SLAM: A visual SLAM with moving probability towards dynamic environments. Inf. Sci. 2021, 556, 128–142.
- Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A line segment detector. Image Process. Line 2012, 2, 35–55.
- Zhang, L.; Koch, R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency. J. Vis. Commun. Image Represent. 2013, 24, 794–805.
- Saputra, M.R.U.; De Gusmao, P.P.; Wang, S.; Markham, A.; Trigoni, N. Learning monocular visual odometry through geometry-aware curriculum learning. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; IEEE: New York, NY, USA, 2019; pp. 3549–3555.
- Balntas, V.; Lenc, K.; Vedaldi, A.; Tuytelaars, T.; Matas, J.; Mikolajczyk, K. HPatches: A benchmark and evaluation of handcrafted and learned local descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 3354–3361.
- Grupp, M. evo: Python Package for the Evaluation of Odometry and SLAM. 2017. Available online: https://github.com/MichaelGrupp/evo.
Frame | 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 |
---|---|---|---|---|---|---|---|---|---|
Total Lines | 50 | 547 | 363 | 419 | 358 | 490 | 807 | 405 | 682 |
Projection Errors of K-Nearest Neighbors | 77.57 | 16.78 | 63.34 | 75.39 | 57.23 | 62.62 | 50.45 | 59.50 | 50.56 |
Projection Errors of Ours | 1.16 | 1.06 | 1.25 | 0.95 | 1.46 | 1.06 | 1.16 | 0.87 | 1.97 |
Sequence | Ours | ORB-SLAM2 | ORB-SLAM3 | PL-SLAM |
---|---|---|---|---|
Sequence00 | 3.578 | 1.697 | 1.994 | 2.551 |
Sequence01 | 2.310 | 3.268 | 8.847 | 2.423 |
Sequence02 | 3.799 | 3.679 | 3.601 | 6.635 |
Sequence03 | 2.740 | 2.900 | 3.323 | 4.410 |
Sequence04 | 1.180 | 1.260 | 1.692 | 2.010 |
Sequence05 | 2.888 | 1.732 | 1.961 | 2.572 |
Sequence06 | 2.226 | 1.959 | 2.165 | 6.491 |
Sequence07 | 1.579 | 0.907 | 1.101 | 2.211 |
Sequence08 | 3.948 | 3.350 | 3.075 | 3.317 |
Sequence09 | 3.087 | 4.219 | 3.411 | 4.023 |
Sequence10 | 2.240 | 2.290 | 2.242 | 3.190 |
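For context, trajectory errors like those above are typically produced with the evo toolkit cited in the references. The sketch below shows the standard calculation behind such numbers, absolute trajectory error (ATE) RMSE after rigid alignment; reading the table's values as ATE RMSE and using a no-scale Umeyama (Kabsch) alignment are assumptions here, not a statement of the authors' exact evaluation settings.

```python
# Minimal sketch of ATE RMSE, the usual metric behind trajectory tables like
# the one above. The exact settings used in the paper are an assumption.
import numpy as np

def align_umeyama(gt, est):
    """Least-squares SE(3) alignment of est onto gt (no scale). Nx3 arrays."""
    mu_g, mu_e = gt.mean(0), est.mean(0)
    H = (est - mu_e).T @ (gt - mu_g)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))]) # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    return (R @ est.T).T + t

def ate_rmse(gt, est):
    """Root-mean-square translation error after rigid alignment."""
    aligned = align_umeyama(gt, est)
    return np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1)))

# Example usage: gt and est are Nx3 arrays of per-frame camera positions.
# print(f"ATE RMSE: {ate_rmse(gt, est):.3f}")
```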