
SEMANTIC-RTAB-MAP (SRM): A semantic SLAM system with CNNs on depth images

  • A SLAM (simultaneous localization and mapping) system can be implemented with monocular, RGB-D, or stereo cameras. RTAB-MAP is a SLAM system that can build dense 3D maps. In this paper, we present a novel method named SEMANTIC-RTAB-MAP (SRM), a semantic SLAM system based on RTAB-MAP and deep learning. We use the YOLOv2 network to detect target objects in 2D images, then use depth information to localize the targets precisely, and finally add semantic information to the 3D point clouds. We applied SRM in different scenes, and the results show its high running speed and accuracy.

    Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.

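The pipeline described in the abstract (detect an object in the 2D image with YOLOv2, then use the depth image to localize it and color the matching 3D points) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the camera intrinsics are placeholder values and the `label_points` helper is hypothetical.

```python
import numpy as np

# Hypothetical Kinect2-like pinhole intrinsics (placeholder values,
# not taken from the paper).
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def label_points(depth, box, color):
    """Back-project the depth pixels inside a detection box to 3D and
    attach a per-class color, sketching the SRM labeling step.
    depth: (H, W) array of depths in metres (0 = no reading).
    box:   (u0, v0, u1, v1) bounding box from the detector.
    color: (r, g, b) tuple assigned to this object class.
    Returns an (N, 6) array of [x, y, z, r, g, b] rows."""
    u0, v0, u1, v1 = box
    vs, us = np.mgrid[v0:v1, u0:u1]
    z = depth[v0:v1, u0:u1]
    valid = z > 0                      # skip pixels with no depth reading
    z, us, vs = z[valid], us[valid], vs[valid]
    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    x = (us - CX) * z / FX
    y = (vs - CY) * z / FY
    rgb = np.tile(color, (z.size, 1))
    return np.hstack([np.stack([x, y, z], axis=1), rgb])

# Toy example: a 4x4 depth patch at 2 m with one missing reading.
depth = np.full((4, 4), 2.0)
depth[1, 1] = 0.0
pts = label_points(depth, (0, 0, 4, 4), (0, 255, 0))  # green, as for refrigerators in Figure 10
print(pts.shape)  # (15, 6): 15 valid points, each [x, y, z, r, g, b]
```

In a real system the box would come from the detector and the labeled points would be merged into the RTAB-MAP point cloud; here the result is just returned as an array.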
  • Figure 1.  Performance of SRM. (First from Left) Original RGB-D image including a handbag label. (Second from Left) Result of edge detection on the depth image. (Second from Right) Result of region growing. (First from Right) The corresponding 3D point cloud

    Figure 2.  Results by Li et al. [8]

    Figure 3.  The Structure of Memory Management. [7]

    Figure 4.  YOLOv1 structure. [13]

    Figure 5.  Results by YOLOv2

    Figure 6.  Overview of SRM method

    Figure 7.  Results by YOLOv2

    Figure 8.  The flow chart of our precise localization

    Figure 9.  (Top) Original RGB-D image. (Middle Left) Result of edge detection by Canny operator. (Middle Right) Result of region growing. (Bottom) Semantic RGB-D image

    Figure 10.  The left is the original local point cloud of RTAB-MAP. The right is the corresponding point cloud of SRM. Two refrigerators are recognized by YOLOv2 and painted green (we assume that green represents refrigerators)

    Figure 11.  (First Column) Original RGB image. (Second Column) Edges of targets extracted from the depth image by the Canny operator. (Third Column) Corresponding local point clouds. The blue sticks in the point clouds in the first three rows are axes. Because we detected the targets separately, some distinct targets are painted the same color; when they are detected at the same time, we simply assign a different color to each class of objects

    Figure 12.  The left is the original RGB image. The right is the corresponding point cloud. We paint the bottle red, the laptop blue and the handbag green. We do not show the edges of the targets extracted from the depth image here because SRM processes objects one by one, so no single image contains all of their edges. The handbag is not shown completely in the point cloud because the Kinect2 did not capture depth data for that area

    Figure 13.  The performance of semi-dense 3D semantic mapping. [8] (Top) Original image. (Bottom) Corresponding point cloud. Red represents buildings. Purple represents cars. The green bounding boxes were added by us and are not part of the original image

    Figure 14.  The left is the original RGB image. The right is the point cloud

    Figure 15.  The left is the original RGB image. The right is the point cloud
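Figures 1, 9, and 11 describe the precise-localization step: edges are detected in the depth image with the Canny operator, then the object region is recovered by region growing inside the detection box. The sketch below is a simplified stand-in for that step, using a depth-discontinuity flood fill in place of the Canny edge map; the `grow_region` helper and its threshold are hypothetical, not from the paper.

```python
from collections import deque
import numpy as np

def grow_region(depth, seed, depth_step=0.05):
    """Flood-fill from a seed pixel, stopping wherever the depth jumps
    by more than depth_step between 4-neighbours. Depth discontinuities
    thus play the role of the Canny edges that bound the region growing
    in SRM's localization step."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        v, u = q.popleft()
        for dv, du in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nv, nu = v + dv, u + du
            if (0 <= nv < h and 0 <= nu < w and not mask[nv, nu]
                    and abs(depth[nv, nu] - depth[v, u]) <= depth_step):
                mask[nv, nu] = True
                q.append((nv, nu))
    return mask

# Toy scene: a 3x3 object at 1.0 m in front of a wall at 3.0 m.
depth = np.full((6, 6), 3.0)
depth[2:5, 2:5] = 1.0
mask = grow_region(depth, seed=(3, 3))
print(mask.sum())  # 9: only the object pixels, the wall is excluded
```

In practice the seed would be taken from inside the YOLOv2 bounding box, so the grown mask covers the detected object but not the background pixels that the rectangular box also contains.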

  • [1] R. Q. Charles, H. Su, K. Mo and L. J. Guibas, Pointnet: Deep learning on point sets for 3d classification and segmentation, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2017). doi: 10.1109/CVPR.2017.16.
    [2] R. Girshick, J. Donahue, T. Darrell and J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, (2013), 580-587.
    [3] K. He, X. Zhang, S. Ren and J. Sun, Deep residual learning for image recognition, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), 770-778. doi: 10.1109/CVPR.2016.90.
    [4] M. Labbé and F. Michaud, Long-term online multi-session graph-based splam with memory management, Autonomous Robots, 3 (2017), 1-18. 
    [5] M. Labbe and F. Michaud, Online global loop closure detection for large-scale multi-session graph-based SLAM, IEEE/RSJ International Conference on Intelligent Robots and Systems, (2014), 2661-2666. doi: 10.1109/IROS.2014.6942926.
    [6] M. Labbé and F. Michaud, Appearance-based loop closure detection for online large-scale and long-term operation, IEEE Transactions on Robotics, 29 (2013), 734-745. 
    [7] M. Labbe and F. Michaud, Memory management for real-time appearance-based loop closure detection, IEEE/RSJ International Conference on Intelligent Robots and Systems, (2011), 1271-1276. doi: 10.1109/IROS.2011.6094602.
    [8] X. Li and R. Belaroussi, Semi-dense 3d semantic mapping from monocular slam, 2016.
    [9] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, et al., SSD: Single shot multibox detector, European Conference on Computer Vision, Springer International Publishing, (2016), 21-37.
    [10] J. Mccormac, A. Handa, A. Davison and S. Leutenegger, Semanticfusion: dense 3d semantic mapping with convolutional neural networks, 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017. doi: 10.1109/ICRA.2017.7989538.
    [11] R. Mur-Artal and J. D. Tardós, Probabilistic semi-dense mapping from highly accurate feature-based monocular SLAM, Robotics: Science and Systems, (2015), 1-9. doi: 10.15607/RSS.2015.XI.041.
    [12] N. Otsu, A threshold selection method from gray-level histograms, IEEE Transactions on Systems, Man, and Cybernetics, 9 (1979), 62-66.  doi: 10.1109/TSMC.1979.4310076.
    [13] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, You only look once: Unified, real-time object detection, Computer Vision and Pattern Recognition, (2016), 779-788. doi: 10.1109/CVPR.2016.91.
    [14] J. Redmon and A. Farhadi, YOLO9000: Better, faster, stronger, IEEE Conference on Computer Vision and Pattern Recognition, (2017), 6517-6525. doi: 10.1109/CVPR.2017.690.
    [15] J. Redmon and A. Farhadi, Yolov3: an incremental improvement, 2018.
    [16] S. Ren, K. He, R. Girshick and J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, 39 (2017), 1137-1149.  doi: 10.1109/TPAMI.2016.2577031.
    [17] N. Sünderhauf, T. T. Pham, Y. Latif, M. Milford and I. Reid, Meaningful maps with object-oriented semantic mapping., Ieee/rsj International Conference on Intelligent Robots and Systems, IEEE, (2017), 5079-5085.
    [18] T. Whelan, S. Leutenegger, R. S. Moreno, B. Glocker and A. Davison, ElasticFusion: Dense SLAM Without A Pose Graph. Robotics: Science and Systems, 2015. doi: 10.15607/RSS.2015.XI.001.