Trajectory Recovery and Terrain Reconstruction Based on Descent Images under Dual-Restrained Conditions: Tianwen-1
Abstract
1. Introduction
2. Data and Methodology
2.1. Data
2.2. Methodology
- (I) For feature extraction and matching, SIFT with a low contrast threshold is applied to extract and describe feature points, after which a brute-force (BF) matcher with KNN (k = 2) is used to filter out bad matches.
- (II) The TR-TR algorithm removes mismatched points using two constraints: the scale monotonicity constraint and the prior terrain information constraint. The scale monotonicity constraint exploits the fact that a feature point's scale varies monotonically with the descent image order to remove mismatched points. The prior terrain information constraint first finds the homography matrix (H) corresponding to the points on the scene plane using the a contrario RANSAC (AC-RANSAC) method. Then, the DEM of the landing area generated from the orbiter images is used as prior terrain information to find the upper bound of the plane-induced parallax (introduced in Section 2.2.2) corresponding to H, and points beyond this bound are removed.
- (III) From the two solutions of the homography matrix (H) decomposition, two possible epipole positions are obtained and used as constraints in the epipoles-constrained parallax beam (ECPB) algorithm. If the fundamental matrix can be estimated, the determined motion solution is obtained directly by decomposing the fundamental matrix. If it cannot, the homography decomposition solution that does not conform to the parabolic descent trajectory constraint is eliminated, leaving the determined motion solution.
- (IV) After the motion is refined by bundle adjustment, the accurate descent trajectory is obtained. Many sparse spatial points are also produced when estimating the motion between images in the previous step, and a triangular mesh is built from them. Each triangle in the mesh is projected into every image that observes it. Based on these projections, multi-image normalized cross-correlation (NCC) score curves are drawn and initial seed points are determined accordingly. The seed points are then propagated to obtain depth maps for generating dense point clouds. Finally, the DEM and DOM of the landing area are generated from the dense point clouds.
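The KNN (k = 2) filtering of step (I) is Lowe's ratio test: a match is kept only when its best neighbour is clearly closer than the second best. A minimal NumPy sketch (the function name and the 0.75 ratio are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def knn_ratio_match(desc1, desc2, ratio=0.75):
    """For each descriptor in desc1, find its two nearest neighbours in
    desc2 (k = 2) and keep the match only if the best distance is clearly
    smaller than the second best (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # L2 distance to all candidates
        j1, j2 = np.argsort(dists)[:2]             # indices of the two nearest
        if dists[j1] < ratio * dists[j2]:          # ratio test filters ambiguity
            matches.append((i, int(j1)))
    return matches
```

A descriptor that lies almost equidistant between two candidates fails the test and is discarded, which is what suppresses bad matches on repetitive terrain texture.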
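The scale monotonicity constraint of step (II) can be sketched as follows: as the descending camera approaches the surface, ground features appear larger, so the SIFT scale along a correct feature track should never decrease with image order. The function names and the non-decreasing convention are illustrative assumptions:

```python
def satisfies_scale_monotonicity(track_scales):
    """track_scales: feature scales of one track, ordered by descent image
    order. A correct track's scale should vary monotonically (here assumed
    non-decreasing as the camera approaches the surface); any decrease
    flags a mismatch."""
    return all(s_prev <= s_next
               for s_prev, s_next in zip(track_scales, track_scales[1:]))

def filter_tracks(tracks):
    """Keep only the feature tracks whose scales vary monotonically."""
    return [t for t in tracks if satisfies_scale_monotonicity(t)]
```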
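The disambiguation in step (III) can be illustrated with a toy selector: given the two (R, t) candidates from the homography decomposition, keep the one whose translation direction agrees with the translation predicted by the parabolic descent trajectory fitted to earlier frames. The predicted direction input and the dot-product criterion are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def select_motion_solution(candidates, predicted_direction):
    """candidates: two (R, t) pairs from the homography decomposition.
    predicted_direction: unit vector of the translation expected from the
    parabolic descent trajectory (an assumed input). Returns the candidate
    whose translation direction best agrees with the prediction."""
    def agreement(rt):
        _, t = rt
        t_unit = t / np.linalg.norm(t)
        return float(np.dot(t_unit, predicted_direction))
    return max(candidates, key=agreement)
```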
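The multi-image NCC scoring used for seed-point selection in step (IV) reduces to the standard normalized cross-correlation of projected patches. A minimal sketch (averaging over images is an assumed aggregation; the paper draws score curves):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized image patches;
    values near 1 indicate a strong photometric match."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def multi_image_ncc(ref_patch, patches):
    """Score a candidate point by its mean NCC against the projections of
    the same triangle in all other images that see it; a high score marks
    an initial seed point for depth-map propagation."""
    return sum(ncc(ref_patch, p) for p in patches) / len(patches)
```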
2.2.1. Feature Extraction and Matching
2.2.2. Mismatching Points Elimination
2.2.3. Robust Motion Estimation
2.2.4. Terrain Reconstruction
3. Field Experiment
3.1. Experiment Overview
3.2. Matching Accuracy and Initial Motion
3.2.1. Matching Accuracy
3.2.2. Initial Motion
3.3. Feature Points Extraction and Refined Motion
3.3.1. Feature Points Extraction
3.3.2. Refined Motion
3.4. Terrain Reconstruction
4. In-Orbit Data Processing
4.1. Footprint Map
4.2. Recovery of Trajectory and Velocity
4.3. Terrain Reconstruction
5. Discussion and Conclusions
5.1. Discussion
5.2. Conclusions
5.3. Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Ingersoll, A.P. Three eras of planetary exploration. Nat. Astron. 2017, 1, 10.
- Binzel, R.P. A golden spike for planetary science. Science 2012, 338, 203–204.
- Wang, J.; Zhang, Y.; Di, K.C.; Chen, M.; Duan, J.F.; Kong, J.; Xie, J.F.; Liu, Z.Q.; Wan, W.H.; Rong, Z.F.; et al. Localization of the Chang’e-5 Lander Using Radio-Tracking and Image-Based Methods. Remote Sens. 2021, 13, 590.
- Li, C.; Wang, C.; Wei, Y.; Lin, Y. China’s present and future lunar exploration program. Science 2019, 365, 238–239.
- Yang, P.; Huang, Y.; Li, P.; Liu, S.; Shan, Q.; Zheng, W. Trajectory Determination of Chang’E-5 during Landing and Ascending. Remote Sens. 2021, 13, 4837.
- Jiang, X.; Yang, B.; Li, S. Overview of China’s 2020 Mars mission design and navigation. Astrodynamics 2018, 2, 1–11.
- Wan, W.X.; Wang, C.; Li, C.L.; Wei, Y. China’s first mission to Mars. Nat. Astron. 2020, 4, 721.
- Wan, W.; Yu, T.; Di, K.; Wang, J.; Liu, Z.; Li, L.; Liu, B.; Wang, Y.; Peng, M.; Bo, Z. Visual Localization of the Tianwen-1 Lander Using Orbital, Descent and Rover Images. Remote Sens. 2021, 13, 3439.
- Ground Research and Application System of China’s Lunar and Planetary Exploration Program. Tianwen-1 Middle Resolution Imaging Camera Dataset; China National Space Administration: Beijing, China, 2020.
- Ground Research and Application System of China’s Lunar and Planetary Exploration Program. Tianwen-1 High Resolution Imaging Camera Dataset; China National Space Administration: Beijing, China, 2020.
- Kirk, R.L.; Mayer, D.P.; Fergason, R.L.; Redding, B.L.; Galuszka, D.M.; Hare, T.M.; Gwinner, K. Evaluating Stereo Digital Terrain Model Quality at Mars Rover Landing Sites with HRSC, CTX, and HiRISE Images. Remote Sens. 2021, 13, 3511.
- Huang, X.; Li, M.; Wang, X.; Hu, J.; Zhao, Y.; Guo, M.; Xu, C.; Liu, W.; Wang, Y.; Hao, C.; et al. The Tianwen-1 Guidance, Navigation, and Control for Mars Entry, Descent, and Landing. Space Sci. Technol. 2021, 2021, 9846185.
- Peng, M.; Di, K.; Wang, Y.; Wan, W.; Liu, Z.; Wang, J.; Li, L. A Photogrammetric-Photometric Stereo Method for High-Resolution Lunar Topographic Mapping Using Yutu-2 Rover Images. Remote Sens. 2021, 13, 2975.
- Chen, Z.; Jiang, J. Crater Detection and Recognition Method for Pose Estimation. Remote Sens. 2021, 13, 3467.
- Liu, Z.; Di, K.; Peng, M.; Wan, W.; Liu, B.; Li, L.; Yu, T.; Wang, B.; Zhou, J.; Chen, H. High precision landing site mapping and rover localization for Chang’e-3 mission. Sci. China Phys. Mech. Astron. 2015, 58, 1–11.
- Yan, R.; Cao, Z.; Wang, J.; Zhong, S.; Klein, D.; Cremers, A. Horizontal velocity estimation via downward looking descent images for lunar landing. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 1197–1221.
- Johnson, A.; Willson, R.; Cheng, Y.; Goguen, J.; Leger, C.; Sanmartin, M.; Matthies, L. Design through operation of an image-based velocity estimation system for mars landing. Int. J. Comput. Vis. 2007, 74, 319–341.
- Cheng, Y.; Goguen, J.; Johnson, A.; Leger, C.; Matthies, L.; San Martin, M.; Willson, R. The Mars exploration rovers descent image motion estimation system. IEEE Intell. Syst. 2004, 19, 13–21.
- Xiong, Y.; Olson, C.F.; Matthies, L.H. Computing depth maps from descent images. Mach. Vis. Appl. 2005, 16, 139–147.
- Olson, C.; Matthies, L.; Xiong, Y.; Li, R.; Ma, F. Multi-Resolution Mapping Using Surface, Descent and Orbit Images; Jet Propulsion Laboratory: Pasadena, CA, USA, 2001.
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
- Andrew, A.M. Multiple view geometry in computer vision. Kybernetes 2001, 30, 1333–1341.
- Li, M.; Liu, S.; Ma, Y.; Sun, C.; Jia, Y. Descent image based landing area terrain reconstruction technology for lunar landing mission. Imaging Sci. J. 2015, 63, 440–446.
- Meng, C.; Zhou, N.; Xue, X.L.; Jia, Y. Homography-based depth recovery with descent images. Mach. Vis. Appl. 2013, 24, 1093–1106.
- Meng, C.; Zhou, N.; Jia, Y. Improved best match search method in depth recovery with descent images. Mach. Vis. Appl. 2015, 26, 251–266.
- Xu, X.C.; Zheng, Z.Z.; Xu, A.G.; Liu, S.C. An Optimized Method for Terrain Reconstruction Based on Descent Images. J. Eng. Technol. Sci. 2016, 48, 31–48.
- Liu, J.J.; Ren, X.; Yan, W.; Li, C.L.; Zhang, H.; Jia, Y.; Zeng, X.G.; Chen, W.L.; Gao, X.Y.; Liu, D.W.; et al. Descent trajectory reconstruction and landing site positioning of Chang’E-4 on the lunar farside. Nat. Commun. 2019, 10, 4229.
- Di, K.; Liu, Z.; Liu, B.; Wan, W.; Peng, M.; Wang, Y.; Gou, S.; Yue, Z.; Xin, X.; Jia, M.; et al. Chang’e-4 lander localization based on multi-source data. J. Remote Sens. 2019, 23, 177–184.
- Di, K.; Liu, Z.; Liu, B.; Wan, W.; Peng, M.; Li, J.; Xie, J.; Jia, M.; Niu, S.; Xin, X. Topographic Analysis of Chang’e-4 Landing Site Using Orbital, Descent and Ground Data. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, XLII-2/W13, 1383–1387.
- Liu, Z.Q.; Di, K.C.; Li, J.; Xie, J.F.; Cui, X.F.; Xi, L.H.; Wan, W.H.; Peng, M.; Liu, B.; Wang, Y.X.; et al. Landing site topographic mapping and rover localization for Chang’e-4 mission. Sci. China-Inf. Sci. 2020, 63, 140901.
- Wan, W.; Liu, Z.; Liu, B.; Di, K.; Wang, J.; Liu, C.; Yu, T.; Miao, Y.; Peng, M.; Wang, Y.; et al. Descent trajectory recovery of Chang’e-4 lander based on descent images. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, XLII-2/W13, 1457–1461.
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
- Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary Robust Invariant Scalable Keypoints. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
- Tareen, S.A.K.; Saleem, Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018; pp. 1–10.
- Suju, D.A.; Jose, H. FLANN: Fast Approximate Nearest Neighbour Search Algorithm for elucidating Human-Wildlife conflicts in Forest areas. In Proceedings of the 4th International Conference on Signal Processing, Communication and Networking (ICSCN), Chennai, India, 16–18 March 2017.
- Guo, G.D.; Wang, H.; Bell, D.; Bi, Y.X.; Greer, K. KNN model-based approach in classification. In On the Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE; Meersman, R., Tari, Z., Schmidt, D.C., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2888, pp. 986–996.
- Rebert, M.; Monnin, D.; Bazeille, S.; Cudel, C. Parallax beam: A vision-based motion estimation method robust to nearly planar scenes. J. Electron. Imaging 2019, 28, 023030.
- Frahm, J.; Pollefeys, M. RANSAC for (Quasi-)Degenerate data (QDEGSAC). In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; pp. 453–460.
- Decker, P.; Paulus, D.; Feldmann, T. Dealing with degeneracy in essential matrix estimation. In Proceedings of the 15th IEEE International Conference on Image Processing (ICIP 2008), San Diego, CA, USA, 12–15 October 2008; pp. 1964–1967.
- Chum, O.; Werner, T.; Matas, J. Two-view geometry estimation unaffected by a dominant plane. In Proceedings of the Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 772–779.
- Torr, P.H.S. An assessment of information criteria for motion model selection. In Proceedings of the 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 17–19 June 1997; pp. 47–52.
- Collins, R.T. A space-sweep approach to true multi-image matching. In Proceedings of the 1996 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 18–20 June 1996; pp. 358–363.
- Moulon, P.; Monasse, P.; Marlet, R. Adaptive Structure from Motion with a Contrario Model Estimation. In Proceedings of the 11th Asian Conference on Computer Vision (ACCV 2012), Daejeon, Korea, 5–9 November 2012; pp. 257–270.
- Malis, E.; Vargas, M. Deeper Understanding of the Homography Decomposition for Vision-Based Control; INRIA: Le Chesnay-Rocquencourt, France, 2007; p. 90.
- Chen, L.; Xu, Y.; Li, B. Comparative Study of the Geomorphological Characteristics of Valley Networks between Mars and the Qaidam Basin. Remote Sens. 2021, 13, 4471.
- Jia, Y.; Liu, S.; Li, M.; Li, Q.; Peng, S.; Wen, B.; Ma, Y.; Zhang, S. Chang’E-3 system pinpoint landing localization based on descent image sequence. Chin. Sci. Bull. 2014, 59, 1838–1843.
| Name | Value |
|---|---|
| Diagonal field of view (FOV) | 43° |
| Square FOV | 30° × 30° |
| Focal length | 20 mm |
| Spectral range | 500~800 nm |
| Entrance pupil diameter | ≥4 mm |
| Entrance pupil position | 21.9 mm |
| Integration time | 0.03~64 ms |
| Image area resolution | 2048 × 2048 |
| Pixel size | 5.5 μm |
| Image ID | Acquisition Time (s) | Image ID | Acquisition Time (s) |
|---|---|---|---|
| 23 | T23 + 0.000 | 33 | T23 + 65.536 |
| 24 | T23 + 28.672 | 34 | T23 + 81.273 |
| 25 | T23 + 32.768 | 35 | T23 + 94.209 |
| 26 | T23 + 36.864 | 36 | T23 + 98.305 |
| 27 | T23 + 40.960 | 37 | T23 + 102.401 |
| 28 | T23 + 45.056 | 38 | T23 + 106.497 |
| 29 | T23 + 49.152 | 39 | T23 + 110.593 |
| 30 | T23 + 53.248 | 40 | T23 + 114.689 |
| 31 | T23 + 57.344 | 41 | T23 + 118.785 |
| 32 | T23 + 61.440 | 42 | T23 + 122.881 |
| Length of Feature Tracks | SIFT | Method of This Paper |
|---|---|---|
| 2 | 1105 | 2176 |
| 3 | 772 | 3680 |
| 4 | 383 | 4262 |
| 5 | 34 | 2997 |
| 6 | 7 | 1821 |
| 7 | 0 | 782 |
| 8 | 0 | 215 |
| 9 | 0 | 16 |
| Z (m) | Residual in X-axis (m) | Residual in Y-axis (m) | Z (m) | Residual in X-axis (m) | Residual in Y-axis (m) |
|---|---|---|---|---|---|
| 2508.252 | 8.213 | 3.935 | −1482.290 | 5.689 | 8.766 |
| 828.214 | −8.655 | −0.465 | −2542.272 | 6.976 | 0.182 |
| 591.026 | −8.452 | −0.334 | −3094.474 | 0.944 | −1.198 |
| 301.692 | −1.730 | −3.466 | −3210.715 | 0.092 | −5.491 |
| 130.631 | −3.613 | −8.914 | −3324.941 | −0.003 | −1.711 |
| −161.540 | −0.994 | −6.517 | −3437.747 | −0.582 | −2.156 |
| −404.045 | −2.488 | −2.847 | −3545.003 | −0.247 | −0.095 |
| −657.444 | 4.802 | 3.305 | −3647.823 | −1.170 | 2.323 |
| −896.076 | 3.575 | 5.838 | −3738.504 | −2.928 | 0.689 |
| −1213.155 | 4.299 | 7.291 | −3822.342 | −3.728 | 0.865 |
| Statistic | X-axis | Y-axis |
|---|---|---|
| RMSE (m) | 4.458 | 4.351 |
| Coefficient of determination (R²) | 0.903 | 0.967 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Qi, C.; Liu, S.; Xu, Y.; Xu, A.; Zhang, J.; Ma, Y.; Li, M.; Xu, X.; Yang, H.; Yan, Y. Trajectory Recovery and Terrain Reconstruction Based on Descent Images under Dual-Restrained Conditions: Tianwen-1. Remote Sens. 2022, 14, 709. https://doi.org/10.3390/rs14030709