GCMTN: Low-Overlap Point Cloud Registration Network Combining Dense Graph Convolution and Multilevel Interactive Transformer
Abstract
1. Introduction
- A Dense Graph Feature Aggregation (DGFA) module based on densely connected graph convolution is proposed to expand the receptive field of point clouds. In each layer of this module, the k-NN algorithm updates graph features over a neighborhood of a different scale, so that graph features are aggregated at multiple scales.
- A Multilevel Interaction Transformer (MIT) module is introduced, comprising Multihead Offset Attention (MOA) and Multihead Cross Attention (MCA). MOA refines features within each point cloud to extract relations between point pairs, while MCA exchanges feature information between the source and target point clouds. The multilevel structure refines features in the overlap region and generates pointwise features that carry structural information.
- An overlap prediction module is proposed, comprising an overlap factor and a matching factor. The overlap factor estimates the probability that superpoints of the source and target point clouds lie in the overlap region, and the matching factor determines whether a point is favorable for matching. Together they predict the distribution of the overlap region while avoiding the negative impact of unfavorable points on registration accuracy.
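The densely connected, multi-scale k-NN aggregation behind DGFA can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the EdgeConv-style edge feature (concatenate center with neighbor offsets, then max-pool) and the neighborhood sizes `ks` are assumptions, and the learned per-layer MLPs are omitted.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (self excluded)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def graph_feature(feats, idx):
    """EdgeConv-style edge features: concat(center, neighbor - center), max-pooled."""
    center = feats[:, None, :]                       # (N, 1, C)
    neighbors = feats[idx]                           # (N, k, C)
    edges = np.concatenate([np.broadcast_to(center, neighbors.shape),
                            neighbors - center], axis=-1)  # (N, k, 2C)
    return edges.max(axis=1)                         # (N, 2C)

def dense_graph_aggregation(points, feats, ks=(8, 16, 24)):
    """Densely connected multi-scale aggregation: each layer uses a different
    neighborhood size k and receives the concatenation of all earlier outputs,
    so the receptive field grows while early features are preserved."""
    outputs = [feats]
    for k in ks:
        current = np.concatenate(outputs, axis=-1)
        outputs.append(graph_feature(current, knn_indices(points, k)))
    return np.concatenate(outputs, axis=-1)
```

Because every layer's input concatenates all previous outputs (the dense connection pattern of DenseNet), the final descriptor mixes neighborhood information at all three scales.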
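A single-head sketch of the two attention types inside MIT is given below, assuming PCT-style normalization for the offset attention; the Linear-BatchNorm-ReLU block and the multihead splitting of the full model are omitted, so this shows only the structure of MOA (self, offset) versus MCA (cross) attention.

```python
import numpy as np

def offset_attention(x, Wq, Wk, Wv):
    """Single-head offset attention: attend within one point cloud, then feed the
    offset (input minus attended features) back through a residual connection.
    The learned LBR block of the full model is omitted here."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    energy = q @ k.T                                   # (N, N) pairwise scores
    # PCT-style normalization: softmax over the query axis, then L1 over keys
    attn = np.exp(energy - energy.max(axis=0, keepdims=True))
    attn = attn / attn.sum(axis=0, keepdims=True)
    attn = attn / (attn.sum(axis=1, keepdims=True) + 1e-9)
    offset = x - attn @ v                              # offset between input and attended features
    return offset + x                                  # residual connection

def cross_attention(x_src, x_tgt, Wq, Wk, Wv):
    """Standard scaled-dot-product cross attention: source queries attend over
    target keys/values, letting the two point clouds exchange feature information."""
    q, k, v = x_src @ Wq, x_tgt @ Wk, x_tgt @ Wv
    energy = q @ k.T / np.sqrt(q.shape[-1])
    attn = np.exp(energy - energy.max(axis=-1, keepdims=True))
    attn = attn / attn.sum(axis=-1, keepdims=True)     # softmax over target points
    return x_src + attn @ v                            # residual update of source features
```

Stacking these two steps several times is what the "multilevel" in MIT refers to: each level alternates intra-cloud refinement with inter-cloud interaction.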
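The overlap-prediction idea, two per-point scores multiplied into a single sampling weight, can be illustrated as below. The linear heads `w_overlap` and `w_match` are hypothetical stand-ins for the module's learned predictors, not the paper's architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def overlap_sampling_weights(feats, w_overlap, w_match):
    """Combine an overlap factor (probability that a superpoint lies in the
    overlap region) with a matching factor (how favorable the point is for
    matching) into one per-point weight in (0, 1)."""
    overlap = sigmoid(feats @ w_overlap)   # (N,) overlap probability
    match = sigmoid(feats @ w_match)       # (N,) matchability score
    return overlap * match                 # unfavorable points get small weight

def top_overlap_points(points, weights, m):
    """Keep the m points most likely to yield reliable correspondences."""
    keep = np.argsort(-weights)[:m]
    return points[keep], weights[keep]
```

Down-weighting points that are either outside the predicted overlap or geometrically ambiguous is what shields the final correspondence set from unfavorable points.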
2. Related Work
2.1. Traditional Registration Methods
2.2. Learning-Based Registration Methods
2.3. Transformer-Based Registration Methods
3. Method
3.1. Problem Statement
3.2. Network Architecture
3.2.1. Encoder to Sample Superpoint
3.2.2. Dense Graph Feature Aggregation Module
3.2.3. Multilevel Interaction Transformer Module
3.2.4. Overlap Prediction Module
3.2.5. Decoder to Generate Descriptors
3.3. Loss Function
3.3.1. Feature Loss
3.3.2. Overlap Loss
3.3.3. Matching Loss
4. Experiments and Results
4.1. Datasets and Platforms
4.1.1. Datasets
4.1.2. Experimental Parameters
4.1.3. Implementation Details
4.2. Experiments on ModelNet40
4.2.1. Dataset and Evaluation Metrics
4.2.2. Registration Results
4.3. Experiments on 3DMatch
4.3.1. Dataset and Evaluation Metrics
4.3.2. Registration Results
4.4. Experiments on OdometryKITTI
4.4.1. Dataset
4.4.2. Registration Results
4.5. Ablation Study
5. Discussion
5.1. Validity Analysis of Submodules
5.2. Performance and Practicality Analysis of GCMTN
5.3. Limitations of GCMTN and Future Improvement Directions
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Liang, Z.; Guo, Y.; Feng, Y.; Chen, W.; Qiao, L.; Zhou, L.; Zhang, J.; Liu, H. Stereo Matching Using Multi-Level Cost Volume and Multi-Scale Feature Constancy. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 300–315. [Google Scholar] [CrossRef] [PubMed]
- Dubé, R.; Gollub, M.G.; Sommer, H.; Gilitschenski, I.; Siegwart, R.; Cadena, C.; Nieto, J.I. Incremental-Segment-Based Localization in 3-D Point Clouds. IEEE Robot. Autom. Lett. 2018, 3, 1832–1839. [Google Scholar] [CrossRef]
- Chua, C.-S.; Jarvis, R. Point Signatures: A New Representation for 3D Object Recognition. Int. J. Comput. Vis. 1997, 25, 63–85. [Google Scholar] [CrossRef]
- Deschaud, J.-E. IMLS-SLAM: Scan-to-Model Matching Based on 3D Data. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, 21–25 May 2018; IEEE: New York, NY, USA, 2018; pp. 2480–2485. [Google Scholar]
- Han, L.; Xu, L.; Bobkov, D.; Steinbach, E.G.; Fang, L. Real-Time Global Registration for Globally Consistent RGB-D SLAM. IEEE Trans. Robot. 2019, 35, 498–508. [Google Scholar] [CrossRef]
- Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef] [Green Version]
- Chetverikov, D.; Stepanov, D.; Krsek, P. Robust Euclidean alignment of 3D point sets: The trimmed iterative closest point algorithm. Image Vis. Comput. 2005, 23, 299–309. [Google Scholar] [CrossRef]
- Parkison, S.A.; Gan, L.; Jadidi, M.G.; Eustice, R.M. Semantic Iterative Closest Point through Expectation-Maximization. In Proceedings of the British Machine Vision Conference 2018, BMVC 2018, Newcastle, UK, 3–6 September 2018; BMVA Press: Durham, UK, 2018; p. 280. [Google Scholar]
- Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27 October–1 November 2003; IEEE: New York, NY, USA, 2003; pp. 2743–2748. [Google Scholar]
- Aiger, D.; Mitra, N.J.; Cohen-Or, D. 4-points congruent sets for robust pairwise surface registration. ACM Trans. Graph. 2008, 27, 85. [Google Scholar] [CrossRef] [Green Version]
- Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, ICRA 2009, Kobe, Japan, 12–17 May 2009; IEEE: New York, NY, USA, 2009; pp. 3212–3217. [Google Scholar]
- Rusu, R.B.; Blodow, N.; Marton, Z.C.; Beetz, M. Aligning point cloud views using persistent feature histograms. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Acropolis Convention Center, Nice, France, 22–26 September 2008; IEEE: New York, NY, USA, 2008; pp. 3384–3391. [Google Scholar]
- Salti, S.; Tombari, F.; Di Stefano, L. SHOT: Unique signatures of histograms for surface and texture description. Comput. Vis. Image Underst. 2014, 125, 251–264. [Google Scholar] [CrossRef]
- Aoki, Y.; Goforth, H.; Srivatsan, R.A.; Lucey, S. PointNetLK: Robust & Efficient Point Cloud Registration Using PointNet. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, 16–20 June 2019; IEEE: New York, NY, USA, 2019; pp. 7163–7172. [Google Scholar]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Washington, DC, USA, 2017; pp. 77–85. [Google Scholar]
- Lucas, B.D.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, IJCAI’81, Vancouver, BC, Canada, 24–28 August 1981; pp. 674–679. [Google Scholar]
- Huang, S.; Gojcic, Z.; Usvyatsov, M.; Wieser, A.; Schindler, K. Predator: Registration of 3D Point Clouds with Low Overlap. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, Virtual, 19–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 4267–4276. [Google Scholar]
- Yew, Z.J.; Lee, G.H. REGTR: End-to-end Point Cloud Correspondences with Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, 18–24 June 2022; IEEE: New York, NY, USA, 2022; pp. 6667–6676. [Google Scholar]
- Qin, Z.; Yu, H.; Wang, C.; Guo, Y.; Peng, Y.; Xu, K. Geometric Transformer for Fast and Robust Point Cloud Registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, 18–24 June 2022; IEEE: New York, NY, USA, 2022; pp. 11133–11142. [Google Scholar]
- Zhao, H.; Zhuang, H.; Wang, C.; Yang, M. G3DOA: Generalizable 3D Descriptor with Overlap Attention for Point Cloud Registration. IEEE Robot. Autom. Lett. 2022, 7, 2541–2548. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008. [Google Scholar]
- Jian, B.; Vemuri, B.C. Robust Point Set Registration Using Gaussian Mixture Models. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1633–1645. [Google Scholar] [CrossRef] [PubMed]
- Eckart, B.; Kim, K.; Kautz, J. HGMR: Hierarchical Gaussian Mixtures for Adaptive 3D Registration. In Proceedings of the Computer Vision—ECCV 2018—15th European Conference, Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Proceedings, Part XV; Springer: Berlin/Heidelberg, Germany, 2018; Volume 11219, pp. 730–746. [Google Scholar]
- Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2241–2254. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Campbell, D.; Petersson, L. GOGMA: Globally-Optimal Gaussian Mixture Alignment. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016; IEEE Computer Society: Washington, DC, USA, 2016; pp. 5685–5694. [Google Scholar]
- Campbell, D.; Petersson, L.; Kneip, L.; Li, H.; Gould, S. The Alignment of the Spheres: Globally-Optimal Spherical Mixture Alignment for Camera Pose Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, 16–20 June 2019; IEEE: New York, NY, USA, 2019; pp. 11796–11806. [Google Scholar]
- Liu, Y.; Wang, C.; Song, Z.; Wang, M. Efficient Global Point Cloud Registration by Matching Rotation Invariant Features Through Translation Search. In Proceedings of the Computer Vision—ECCV 2018—15th European Conference, Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Proceedings, Part XII. Springer: Berlin/Heidelberg, Germany, 2018; Volume 11216, pp. 460–474. [Google Scholar]
- Zhou, Q.-Y.; Park, J.; Koltun, V. Fast Global Registration. In Proceedings of the Computer Vision—ECCV 2016—14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2016; Volume 9906, pp. 766–782. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017; pp. 5099–5108. [Google Scholar]
- Li, X.; Pontes, J.K.; Lucey, S. PointNetLK Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, Virtual, 19–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 12763–12772. [Google Scholar]
- Yuan, W.; Eckart, B.; Kim, K.; Jampani, V.; Fox, D.; Kautz, J. DeepGMR: Learning Latent Gaussian Mixture Models for Registration. In Proceedings of the Computer Vision—ECCV 2020—16th European Conference, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Proceedings, Part V; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12350, pp. 733–750. [Google Scholar]
- Li, J.; Zhang, C.; Xu, Z.; Zhou, H.; Zhang, C. Iterative Distance-Aware Similarity Matrix Convolution with Mutual-Supervised Point Elimination for Efficient Point Cloud Registration. In Proceedings of the Computer Vision—ECCV 2020—16th European Conference, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Proceedings, Part XXIV; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12369, pp. 378–394. [Google Scholar]
- Kim, Y.; Choi, S.; Lee, H.; Kim, T.; Kim, C. RPM-Net: Robust Pixel-Level Matching Networks for Self-Supervised Video Object Segmentation. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, WACV 2020, Snowmass Village, CO, USA, 1–5 March 2020; IEEE: New York, NY, USA, 2020; pp. 2046–2054. [Google Scholar]
- Wang, Y.; Solomon, J. Deep Closest Point: Learning Representations for Point Cloud Registration. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Republic of Korea, 27 October–2 November 2019; IEEE: New York, NY, USA, 2019; pp. 3522–3531. [Google Scholar]
- Fu, K.; Liu, S.; Luo, X.; Wang, M. Robust Point Cloud Registration Framework Based on Deep Graph Matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, Virtual, 19–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 8893–8902. [Google Scholar]
- Chen, G.; Wang, M.; Yue, Y.; Zhang, Q.; Yuan, L. Full Transformer Framework for Robust Point Cloud Registration with Deep Information Interaction. arXiv 2021, arXiv:2112.09385. [Google Scholar] [CrossRef] [PubMed]
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic Graph CNN for Learning on Point Clouds. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef] [Green Version]
- Li, S.; Ye, Y.; Liu, J.; Guo, L. VPRNet: Virtual Points Registration Network for Partial-to-Partial Point Cloud Registration. Remote Sens. 2022, 14, 2559. [Google Scholar] [CrossRef]
- Fan, Z.; Ma, J.; Nie, T.; Wang, H.; Zhao, Y.; Sun, M.; Wen, J. COPRNet: Correspondence confidence and overlap score guided network for indoor partial point cloud registration. Int. J. Remote Sens. 2023, 44, 981–1000. [Google Scholar] [CrossRef]
- Yu, H.; Qin, Z.; Hou, J.; Saleh, M.; Li, D.; Busam, B.; Ilic, S. Rotation-invariant transformer for point cloud matching. In Proceedings of the 2023 IEEE Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; IEEE Computer Society: Washington, DC, USA, 2023; pp. 5384–5393. [Google Scholar]
- Bai, X.; Luo, Z.; Zhou, L.; Fu, H.; Quan, L.; Tai, C.-L. D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, 13–19 June 2020; IEEE: New York, NY, USA, 2020; pp. 6358–6366. [Google Scholar]
- Thomas, H.; Qi, C.R.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.; Guibas, L.J. KPConv: Flexible and Deformable Convolution for Point Clouds. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Republic of Korea, 27 October–2 November 2019; IEEE: New York, NY, USA, 2019; pp. 6410–6419. [Google Scholar]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Washington, DC, USA, 2017; pp. 2261–2269. [Google Scholar]
- Guo, M.-H.; Cai, J.; Liu, Z.-N.; Mu, T.-J.; Martin, R.R.; Hu, S.-M. PCT: Point cloud transformer. Comput. Vis. Media 2021, 7, 187–199. [Google Scholar] [CrossRef]
- Xu, J.; Huang, Y.; Wan, Z.; Wei, J. GLORN: Strong Generalization Fully Convolutional Network for Low-Overlap Point Cloud Registration. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
- Sun, Y.; Cheng, C.; Zhang, Y.; Zhang, C.; Zheng, L.; Wang, Z.; Wei, Y. Circle Loss: A Unified Perspective of Pair Similarity Optimization. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, 13–19 June 2020; IEEE: New York, NY, USA, 2020; pp. 6397–6406. [Google Scholar]
- Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, 7–12 June 2015; IEEE Computer Society: Washington, DC, USA, 2015; pp. 1912–1920. [Google Scholar]
- Zeng, A.; Song, S.; Nießner, M.; Fisher, M.; Xiao, J.; Funkhouser, T.A. 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Washington, DC, USA, 2017; pp. 199–208. [Google Scholar]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; IEEE Computer Society: Washington, DC, USA, 2012; pp. 3354–3361. [Google Scholar]
- Gojcic, Z.; Zhou, C.; Wegner, J.D.; Wieser, A. The Perfect Match: 3D Point Cloud Matching with Smoothed Densities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, 16–20 June 2019; IEEE: New York, NY, USA, 2019; pp. 5545–5554. [Google Scholar]
- Choy, C.B.; Park, J.; Koltun, V. Fully Convolutional Geometric Features. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Republic of Korea, 27 October–2 November 2019; IEEE: New York, NY, USA, 2019; pp. 8957–8965. [Google Scholar]
- Ao, S.; Hu, Q.; Yang, B.; Markham, A.; Guo, Y. SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, Virtual, 19–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 11753–11762. [Google Scholar]
- Yew, Z.J.; Lee, G.H. 3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration. In Proceedings of the Computer Vision—ECCV 2018—15th European Conference, Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Proceedings, Part XV; Springer: Berlin/Heidelberg, Germany, 2018; Volume 11219, pp. 630–646. [Google Scholar]
| Datasets | Training Set | Validation Set | Testing Set |
|---|---|---|---|
| ModelNet and ModelLoNet | 5112 samples | 1202 samples | 1266 samples |
| 3DMatch and 3DLoMatch | 46 scenes | 8 scenes | 8 scenes |
| OdometryKITTI | sequences 00–05 | sequences 06–07 | sequences 08–10 |
| Datasets | DGFA L | MIT N | Loss Function | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| ModelNet | 3 | 3 | 384 | 64 | 0.1 | 1.4 | 0.018 | 0.06 | 0.04 | 0.04 |
| 3DMatch | 3 | 3 | 256 | 24 | 0.1 | 1.4 | 0.036 | 0.13 | 0.036 | 0.04 |
| OdometryKITTI | 3 | 3 | 512 | 48 | 0.1 | 1.4 | 0.21 | 0.72 | 0.43 | 0.28 |
| Methods | RRE (ModelNet) | RTE (ModelNet) | CD (ModelNet) | RRE (ModelLoNet) | RTE (ModelLoNet) | CD (ModelLoNet) |
|---|---|---|---|---|---|---|
| DCP-v2 [34] | 11.975 | 0.171 | 0.0117 | 16.501 | 0.300 | 0.0268 |
| RPM-Net [33] | 1.712 | 0.018 | 0.00085 | 7.342 | 0.124 | 0.0050 |
| Predator [17] | 1.739 | 0.019 | 0.00089 | 5.235 | 0.132 | 0.0083 |
| REGTR [18] | 1.473 | 0.014 | 0.00078 | 3.930 | 0.087 | 0.0037 |
| GCMTN (ours) | 1.532 | 0.013 | 0.00075 | 4.531 | 0.085 | 0.0056 |
Feature Matching Recall (%) ↑

| Methods | 3DMatch 5000 | 2500 | 1000 | 500 | 250 | 3DLoMatch 5000 | 2500 | 1000 | 500 | 250 |
|---|---|---|---|---|---|---|---|---|---|---|
| PerfectMatch [50] | 95.0 | 94.3 | 92.9 | 90.1 | 82.9 | 63.6 | 61.7 | 53.6 | 45.2 | 34.2 |
| FCGF [46] | 97.4 | 97.3 | 97.0 | 96.7 | 96.6 | 76.6 | 75.4 | 74.2 | 71.7 | 67.3 |
| D3Feat [41] | 95.6 | 95.4 | 94.5 | 94.1 | 93.1 | 67.3 | 66.7 | 67.0 | 66.7 | 66.5 |
| SpinNet [52] | 97.6 | 97.2 | 96.8 | 95.5 | 94.3 | 75.3 | 74.9 | 72.5 | 70.0 | 63.6 |
| Predator [17] | 96.6 | 96.6 | 96.5 | 96.3 | 96.5 | 78.6 | 77.4 | 76.3 | 75.7 | 75.3 |
| REGTR [18] | 97.8 | 97.4 | 96.9 | 96.1 | 95.6 | 74.3 | 74.4 | 74.2 | 73.8 | 72.9 |
| GCMTN (ours) | 97.2 | 97.3 | 97.0 | 96.8 | 96.6 | 79.9 | 79.4 | 78.6 | 78.5 | 77.9 |

Inlier Ratio (%) ↑

| Methods | 3DMatch 5000 | 2500 | 1000 | 500 | 250 | 3DLoMatch 5000 | 2500 | 1000 | 500 | 250 |
|---|---|---|---|---|---|---|---|---|---|---|
| PerfectMatch [50] | 36.0 | 32.5 | 26.4 | 21.5 | 16.4 | 11.4 | 10.1 | 8.0 | 6.4 | 4.8 |
| FCGF [46] | 56.8 | 54.1 | 48.7 | 42.5 | 34.1 | 21.4 | 20.0 | 17.2 | 14.8 | 11.6 |
| D3Feat [41] | 39.0 | 38.8 | 40.4 | 41.5 | 41.8 | 13.2 | 13.1 | 14.0 | 14.6 | 15.0 |
| SpinNet [52] | 47.5 | 44.7 | 39.4 | 33.9 | 27.6 | 20.5 | 19.0 | 16.3 | 13.8 | 11.1 |
| Predator [17] | 58.0 | 58.4 | 57.1 | 54.1 | 49.3 | 26.7 | 28.1 | 28.3 | 27.5 | 25.8 |
| REGTR [18] | 57.3 | 55.2 | 53.8 | 52.7 | 51.1 | 27.6 | 27.3 | 27.1 | 26.6 | 25.4 |
| GCMTN (ours) | 63.0 | 64.1 | 61.8 | 60.1 | 58.2 | 36.8 | 37.4 | 36.6 | 36.2 | 35.7 |

Registration Recall (%) ↑

| Methods | 3DMatch 5000 | 2500 | 1000 | 500 | 250 | 3DLoMatch 5000 | 2500 | 1000 | 500 | 250 |
|---|---|---|---|---|---|---|---|---|---|---|
| PerfectMatch [50] | 78.4 | 76.2 | 71.4 | 67.6 | 50.8 | 33.0 | 29.0 | 23.3 | 17.0 | 11.0 |
| FCGF [46] | 85.1 | 84.7 | 83.3 | 81.6 | 71.4 | 40.1 | 41.7 | 38.2 | 35.4 | 26.8 |
| D3Feat [41] | 81.6 | 84.5 | 83.4 | 82.4 | 77.9 | 37.2 | 42.7 | 46.9 | 43.8 | 39.1 |
| SpinNet [52] | 88.6 | 86.6 | 85.5 | 83.5 | 70.2 | 59.8 | 54.9 | 48.3 | 39.8 | 26.8 |
| Predator [17] | 89.0 | 89.9 | 90.6 | 88.5 | 86.6 | 59.8 | 61.2 | 62.4 | 60.8 | 58.1 |
| REGTR [18] | 92.0 | 91.2 | 89.7 | 90.6 | 90.4 | 64.8 | 64.4 | 64.2 | 62.3 | 59.7 |
| GCMTN (ours) | 91.2 | 91.6 | 91.0 | 90.6 | 90.0 | 65.6 | 65.9 | 66.0 | 65.1 | 63.2 |
| Method | RTE [cm] ↓ | RRE [°] ↓ | RR ↑ |
|---|---|---|---|
| 3DFeat-Net [53] | 25.9 | 0.57 | 96.0 |
| FCGF [51] | 9.5 | 0.30 | 96.6 |
| D3Feat [41] | 7.2 | 0.30 | 99.8 |
| Predator [17] | 6.8 | 0.27 | 99.8 |
| GLORN [45] | 6.2 | 0.27 | 99.8 |
| GCMTN (ours) | 6.7 | 0.25 | 99.8 |
| DGFA | MIT | OP | 3DMatch FMR (%) | IR (%) | RR (%) | 3DLoMatch FMR (%) | IR (%) | RR (%) |
|---|---|---|---|---|---|---|---|---|
| × | × | × | 96.7 | 58.0 | 89.0 | 78.6 | 26.7 | 59.8 |
| √ | × | × | 96.6 | 59.9 | 89.5 | 78.7 | 30.8 | 60.9 |
| √ | √ | × | 96.8 | 61.4 | 90.3 | 79.2 | 33.6 | 62.8 |
| √ | × | √ | 97.0 | 62.1 | 90.7 | 79.5 | 34.9 | 64.2 |
| √ | √ | √ | 97.2 | 63.0 | 91.2 | 79.9 | 36.8 | 65.6 |
Wang, X.; Yuan, Y. GCMTN: Low-Overlap Point Cloud Registration Network Combining Dense Graph Convolution and Multilevel Interactive Transformer. Remote Sens. 2023, 15, 3908. https://doi.org/10.3390/rs15153908