Article

Vision-Based Decision Support for Rover Path Planning in the Chang’e-4 Mission

Yexin Wang, Wenhui Wan, Sheng Gou, Man Peng, Zhaoqin Liu, Kaichang Di, Lichun Li, Tianyi Yu, Jia Wang and Xiao Cheng
1 State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
2 Beijing Aerospace Control Center, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(4), 624; https://doi.org/10.3390/rs12040624
Submission received: 31 December 2019 / Revised: 6 February 2020 / Accepted: 10 February 2020 / Published: 14 February 2020
(This article belongs to the Section Engineering Remote Sensing)

Abstract

In planetary rover missions, path planning is critical to ensure the safety and efficiency of rover traverses and in situ explorations. For the Chang’e-4 (CE-4) mission, we have proposed and developed vision-based decision support methods comprising obstacle map generation and path evaluation for the rover. At each waypoint along the rover traverse, a digital elevation model (DEM) is automatically generated and then used for obstacle map generation and path searching. For path evaluation, the searched path and the predicted wheel trajectories are projected onto the original images captured by different cameras so as to coincide with the real observation scenario. The proposed methods have been applied in the CE-4 mission to support teleoperation of the rover, and examples of multiple applications used in the mission are presented in this paper. With the support of the proposed techniques, by the end of the 14th lunar day (3 February 2020) the rover had already travelled 367.25 m on the far side of the Moon.


1. Introduction

The Chang’e-4 (CE-4) probe, comprising a lander and a rover, lifted off on 8 December 2018 from the Xichang Satellite Launch Center and soft-landed in the Von Kármán crater inside the South Pole-Aitken basin at 10:26 a.m. on 3 January 2019 [1]. This success made CE-4 the first probe to land on the far side of the Moon [2]. After separating from the lander on the same day, the rover Yutu-2 (Jade Rabbit-2) began surface traversing and in situ exploration under teleoperation from the Earth control center.
To ensure the safety and efficiency of surface exploration, it is critical to plan a secure and accurate path for the rover to travel and to approach science targets. There are three levels of path planning in the CE-4 mission: overall mission planning, period planning, and unit planning [3,4]. Overall mission planning employs orbiter images to plan rough directions and paths for the rover to approach the science targets. Period planning designs a route spanning tens of meters, consisting of several waypoints and the unit planning paths between them. Unit path planning is the fundamental level and provides the final path for the rover to traverse; the length of a unit planning path between adjacent waypoints is usually less than ten meters. In this paper, we mainly discuss vision-based support for unit path planning.
Generally, the unit path-planning process includes several stages: traversability map (or cost map) generation [3,5,6], automatic path searching, path evaluation, and command sequence generation. The rover then executes the path according to the uplinked command sequence. As the rover carries several types of stereo cameras, vision-based support is used to guide the planning process. To support the traversability map establishment and all subsequent stages, stereo camera images are routinely used to generate a digital elevation model (DEM) and a digital orthophoto map (DOM) of the terrain, which are further used to recognize obstacles and hazards and derive an obstacle map [4].
In the Chang’e-3 (CE-3) mission, path evaluation was mainly achieved by expert inspection of the path visualized on the corresponding DOM. That is, both path planning and evaluation were performed in overhead view (DEM and DOM). Owing to the mounting positions of the cameras and the imaging angles, the resolution of the DEM decreases as the optical path length increases, even though the DEM is interpolated with a constant grid spacing. Therefore, areas on the DEM and DOM far from the rover may be more distorted than closer areas because of 3D measurement error, interpolation, and other factors [7]. These distortions make it difficult for experts to interpret the terrain and to inspect and evaluate the searched path.
In the CE-4 mission, we have proposed and developed vision-based decision support methods to generate an obstacle map and to evaluate the searched path in both overhead view (DEM and DOM) and perspective view (DEM and original images). For the former, the path-searching results are projected onto the DOM rendered in 3D perspective view to coincide with the actual observation angle. For the latter, the path-searching results are mapped and back-projected onto the original images: images captured by the navigation camera (Navcam) and their stitched mosaic, and images captured by the hazard avoidance camera (Hazcam). The latter strategy takes full advantage of the original image resolutions and therefore provides more detailed texture than the DEM. Based on the back-projection method, multiple applications can be carried out to support decision-making during mission operations, such as searched-path evaluation and prediction of imaging areas.

2. Data

As in the CE-3 mission, the CE-4 rover carries three pairs of stereo cameras: the panoramic camera (Pancam), Navcam, and Hazcam [8]. The Pancam and Navcam are mounted on the mast of the lunar rover, which can be rotated in both the yaw and pitch directions. The Hazcam is fixed on the front panel of the rover. Among them, the Navcam and Hazcam are engineering payloads and parts of the guidance, navigation, and control (GNC) system, which supports engineering applications such as navigation, positioning, path planning, and obstacle avoidance [9].
The vision-based decision support methods use images captured by the Navcam and Hazcam of the GNC system; the parameters of these two cameras are listed in Table 1. Normally, at each waypoint, the rover takes a sequence of Navcam images at 20-degree intervals in the yaw direction at a fixed pitch angle, so the overlap angle between adjacent images is about 26.6 degrees [8]. The Hazcam uses fisheye lenses to provide a wide field of view (FOV) of the area starting from the front wheels of the rover. Before launch, the intrinsic and structural parameters of both the Navcam and the Hazcam were precisely calibrated [10,11,12].

3. Methodology

The vision-based path-planning decision support consists of two major steps: DEM and obstacle map generation before automatic path searching, and evaluation of the searched path afterwards. Figure 1 shows the overall flowchart of vision-based decision support for rover path planning in the CE-4 mission, with the work introduced in this paper highlighted. In this section, the techniques for DEM and obstacle map generation and for path searching are briefly introduced; evaluation of the searched path using the projection strategy is then elaborated in detail. An error analysis of the proposed strategy is presented at the end of the section.

3.1. DEM and Obstacle Map Generation

The DEM is the fundamental data for obstacle analysis and path planning. A DEM and the corresponding DOM are automatically generated from Navcam stereo images at each waypoint immediately after the images are downlinked. The DEM usually covers a range of 16 m from the rover, with a grid spacing of 0.02 m.
First, feature point matching between neighboring stereo images is performed using epipolar images generated from the intrinsic and extrinsic parameters of the Navcam. Bundle adjustment is then applied to all the images to refine their extrinsic parameters. After a triangular irregular network (TIN) is built from the matched feature points, dense matching is performed for each pixel of the left image of every pair, with control and prediction from the TIN, to obtain the matched point on the right image. To achieve a more precise 3D terrain map, least squares matching is then used to reach sub-pixel accuracy. After that, 3D coordinates are calculated using the refined image extrinsic parameters and the accurate matching results. Finally, the DEM is produced from these dense 3D points by Kriging interpolation. For more detailed information, please refer to our previous work [7].
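To make the triangulation step concrete, the following is a minimal sketch (not the mission pipeline) of recovering 3D coordinates from a single matched point pair on rectified epipolar images; the function name and the assumption that image coordinates are measured relative to the principal point are ours.

```python
def triangulate_rectified(xl, yl, xr, f, baseline):
    """Sketch: depth from a matched point on rectified (epipolar) stereo images.

    (xl, yl) and (xr, yl) are left/right image coordinates measured relative
    to the principal point, in the same unit as the focal length f; baseline
    is the stereo base (m). Returns camera-frame coordinates; the mission
    pipeline further transforms such points with the refined extrinsic
    parameters before Kriging interpolation into the DEM."""
    disparity = xl - xr              # parallax between the two views
    Z = f * baseline / disparity     # depth along the optical axis
    X = xl * Z / f                   # horizontal offset
    Y = yl * Z / f                   # vertical offset
    return X, Y, Z
```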
Based on the generated DEM, terrain features are analyzed by performing window-based calculations of several statistical metrics: slope, roughness, and elevation change. The slope reflects the degree of surface steepness; the roughness reflects the extent of terrain fluctuation within the window; and the elevation change is the difference between the minimum and maximum heights inside the window. By setting reasonable thresholds according to the rover's mechanical capabilities, such as its climbing and obstacle-crossing abilities, the three analyzed features are integrated to generate the obstacle map [13,14]. The obstacle map is then combined with illumination and communication conditions to form a traversability map for automatic path searching.
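As an illustration of the window-based analysis, the sketch below derives a binary obstacle map from a DEM grid by thresholding slope, roughness (residuals of a local plane fit), and elevation change. The window size and thresholds are illustrative placeholders, not the values used in the mission, and the loop is unoptimized.

```python
import numpy as np

def obstacle_map(dem, cell=0.02, win=11,
                 slope_max=20.0, rough_max=0.05, dz_max=0.15):
    """Sketch of window-based obstacle detection on a DEM grid.

    dem       : 2D array of elevations (m) with grid spacing `cell` (m)
    win       : odd window size in cells (illustrative value)
    slope_max : slope threshold in degrees (illustrative)
    rough_max : roughness (plane-fit residual RMS) threshold in m (illustrative)
    dz_max    : elevation-change threshold in m (illustrative)
    Returns a boolean array: True = obstacle, False = safe."""
    rows, cols = dem.shape
    half = win // 2
    obstacles = np.zeros_like(dem, dtype=bool)

    # Design matrix for fitting a local plane z = a*x + b*y + d in each window.
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    A = np.column_stack([xs.ravel() * cell, ys.ravel() * cell, np.ones(win * win)])

    for r in range(half, rows - half):
        for c in range(half, cols - half):
            z = dem[r - half:r + half + 1, c - half:c + half + 1].ravel()
            (a, b, d), *_ = np.linalg.lstsq(A, z, rcond=None)
            slope = np.degrees(np.arctan(np.hypot(a, b)))        # steepness
            rough = np.sqrt(np.mean((z - A @ (a, b, d)) ** 2))   # residual RMS
            dz = z.max() - z.min()                               # elevation change
            obstacles[r, c] = (slope > slope_max or rough > rough_max or dz > dz_max)
    return obstacles
```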

3.2. Path Searching

For unit path planning, the path from the current waypoint to the next is searched on the established traversability map using path-searching methods such as A* [3,15] or D* Lite [16]. Note that the mechanical capabilities of the rover, such as its climbing ability (&lt;30 degrees), obstacle-crossing capability (&lt;200 mm) [17], and maximum speed (200 m/h) [18], are also taken into account during path searching. The automatically searched path is described by the coordinates of the start, turning, and end points of the path, as well as the curvature of each segment between these points; the coordinates of these points represent the rover center positions.
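For illustration, the following is a minimal 4-connected A* search over a binary traversability grid. The mission planner additionally accounts for path curvature, turning points, and the rover constraints listed above, which are omitted in this sketch.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a binary traversability grid.
    grid[r][c] == 0 means traversable, 1 means obstacle.
    start and goal are (row, col) tuples. Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance
    counter = 0                                   # tie-breaker for the heap
    open_set = [(heuristic(start, goal), 0, counter, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                     # already expanded
            continue
        came_from[cell] = parent
        if cell == goal:                          # reconstruct the path
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    counter += 1
                    heapq.heappush(open_set,
                                   (ng + heuristic((nr, nc), goal), ng, counter, (nr, nc), cell))
    return None
```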

3.3. Path Evaluation

3.3.1. Traditional Path Evaluation in Overhead View

Before the command sequence is uplinked to the rover, the automatically searched path is carefully evaluated and checked by experts. In the CE-3 mission, the searched path representing the rover center, together with the wheel traces calculated from the geometric relationship between the rover center and the wheels, was visualized on the DOM of the current waypoint and checked in overhead view. In the CE-4 mission, we retain this visualization as one of the evaluation options. However, for the reasons described in the previous section, the texture of the DOM degrades as the optical path length grows; therefore, a new path evaluation method has been developed and applied in the teleoperation of the Yutu-2 rover.

3.3.2. Path Projection into Perspective Views

In the CE-4 mission, new approaches were applied to support evaluation of the searched path: (A) projecting the searched path onto the DOM in 3D perspective view; (B) projecting the searched path back onto the original Hazcam images, Navcam images, and their mosaic. The details are as follows.
1. Projection onto the DOM in 3D perspective view
The DEM provides the elevations of both the DOM surface and the searched path so that they can be visualized in 3D perspective view. For a better visualization effect, the elevations are exaggerated by a factor of 2. Visualized in this way, the searched path rises and falls vividly along with the terrain relief.
2. Back-projection onto Hazcam and Navcam images
As described in the previous section, the texture of the DOM becomes increasingly deformed as the optical path length grows. In the DEM and DOM generation process, two to three pixels of the Navcam images contribute to one pixel of the DEM and DOM; as the optical path length grows, this ratio increases to five to six pixels per DEM/DOM pixel in areas far from the rover.
Therefore, in addition to projecting the path onto the DOM, where the texture problem remains, we project the searched path and other auxiliary information back onto the original images taken by different cameras. In these perspective views, the full resolution of the original images is exploited, without texture deformation or distortion. Three kinds of images are used for path projection: the original Navcam images, the Navcam image mosaic, and the original Hazcam images.
Figure 2 shows the flowchart of the back-projection processes for the three kinds of images. In general, the searched path is first sampled into 2D points according to the path-searching result, and the corresponding elevation values are obtained by interpolation from the DEM generated from the Navcam images. The mapping relationships between the 3D and 2D information are then established based on the camera models of the Navcam and Hazcam. Finally, the 3D information of the searched path is mapped and back-projected onto the multi-source 2D images to support expert decisions. The back-projection procedures differ among the three image types and are therefore introduced separately in the rest of this subsection.
(1) Original left Navcam image
According to the searched path description, the path is sampled into discrete two-dimensional points from the start point to the end point at proper intervals, and the corresponding elevation values are obtained by interpolation from the DEM.
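A minimal sketch of this elevation interpolation step is given below; the DEM origin and axis conventions are assumptions made for illustration.

```python
import numpy as np

def dem_elevation(dem, x, y, origin=(0.0, 0.0), cell=0.02):
    """Sketch: bilinear interpolation of a path point's elevation from the
    DEM grid (0.02 m spacing). The DEM origin and the row/column-to-x/y
    convention are illustrative assumptions."""
    cx = (x - origin[0]) / cell
    ry = (y - origin[1]) / cell
    # Clamp to the grid so the four neighboring cells always exist.
    c0 = int(np.clip(np.floor(cx), 0, dem.shape[1] - 2))
    r0 = int(np.clip(np.floor(ry), 0, dem.shape[0] - 2))
    fc, fr = cx - c0, ry - r0
    z00, z01 = dem[r0, c0], dem[r0, c0 + 1]
    z10, z11 = dem[r0 + 1, c0], dem[r0 + 1, c0 + 1]
    return (z00 * (1 - fc) * (1 - fr) + z01 * fc * (1 - fr)
            + z10 * (1 - fc) * fr + z11 * fc * fr)
```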
The back-projection model of the Navcam is divided into two parts: a pinhole camera model and a lens distortion model. First, the 3D coordinates of the sampled path points are transformed from the work coordinate system to the image coordinate system based on the pinhole camera model and the calibrated Navcam parameters; the resulting coordinates are distortion free. The final image coordinates of the sampled points are then obtained by adding distortions to the undistorted coordinates according to the distortion model of the Navcam. The distortion model of the Navcam on the Yutu-2 rover considers both radial and tangential distortions; the complete model is as follows:
$$
\begin{aligned}
x &= x_d + (x_d - x_0)(k_1 r + k_2 r^2 + k_3 r^3 + \alpha) + \beta (y_d - y_0) + p_1 \left[ r + 2 (x_d - x_0)^2 \right] + 2 p_2 (x_d - x_0)(y_d - y_0), \\
y &= y_d + (y_d - y_0)(k_1 r + k_2 r^2 + k_3 r^3) + p_2 \left[ r + 2 (y_d - y_0)^2 \right] + 2 p_1 (x_d - x_0)(y_d - y_0),
\end{aligned}
\tag{1}
$$
where $k_1$, $k_2$, $k_3$ are the radial distortion coefficients; $p_1$, $p_2$ are the tangential distortion coefficients; and $\alpha$ and $\beta$ are the orthogonal correction coefficients in the x and y directions, respectively. In Equation (1), $(x, y)$ are the ideal undistorted image coordinates of a point p, $r = \sqrt{(x_d - x_0)^2 + (y_d - y_0)^2}$, and $(x_d, y_d)$ are the image coordinates of the point with distortion, expressed in the coordinate system with origin at the distortion center $(x_0, y_0)$. Because the distortion model maps distorted coordinates to distortion-free ones, the distorted coordinates are solved iteratively in practice.
Through the above procedure, each sampled point is mapped from 3D space to the 2D image, and the projected path is finally obtained by linking all the projected points on the original Navcam image.
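The sketch below summarizes this procedure for a single path point: a pinhole projection from the work frame into ideal image coordinates, followed by a fixed-point iteration that inverts Equation (1) to recover the distorted coordinates. The parameter names, the inclusion of a principal point offset, and the iteration scheme are our assumptions, not the mission implementation.

```python
import numpy as np

def project_point_navcam(P_work, R, T, f, cx, cy,
                         k=(0.0, 0.0, 0.0), p=(0.0, 0.0),
                         alpha=0.0, beta=0.0, x0=0.0, y0=0.0, n_iter=10):
    """Sketch of back-projecting one 3D path point onto a Navcam image.

    P_work  : 3D point in the work (site) frame
    R, T    : rotation and translation taking the work frame to the camera
              frame (placeholders for the calibrated extrinsic parameters)
    f, cx, cy : focal length and principal point in pixels (placeholders)
    k, p, alpha, beta, x0, y0 : distortion coefficients of Equation (1)
    Returns the distorted pixel coordinates (x_d, y_d)."""
    # 1. Pinhole projection gives the ideal (distortion-free) coordinates.
    Xc, Yc, Zc = R @ np.asarray(P_work, dtype=float) + T
    x = f * Xc / Zc + cx
    y = f * Yc / Zc + cy

    # 2. Equation (1) maps distorted -> undistorted coordinates, so the
    #    distorted coordinates are found by fixed-point iteration:
    #    (xd, yd) <- (x, y) - correction(xd, yd).
    xd, yd = x, y                      # initial guess: no distortion
    for _ in range(n_iter):
        dx, dy = xd - x0, yd - y0
        r = np.hypot(dx, dy)
        radial = k[0] * r + k[1] * r**2 + k[2] * r**3
        corr_x = (dx * (radial + alpha) + beta * dy
                  + p[0] * (r + 2 * dx**2) + 2 * p[1] * dx * dy)
        corr_y = dy * radial + p[1] * (r + 2 * dy**2) + 2 * p[0] * dx * dy
        xd, yd = x - corr_x, y - corr_y
    return xd, yd
```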
(2) Mosaic image stitched from a sequence of original left Navcam images
As the FOV of the Navcam is limited, a single Navcam image is usually not enough to show the entire projected path. In practice, the Navcam routinely rotates in the yaw direction to take six to seven or more stereo pairs. Therefore, projecting the searched path onto a panoramic image stitched from the sequence of images taken by one of the Navcams provides a wider FOV and a more intuitive visualization.
Feature points are extracted from all the images captured by the left Navcam and matched using the SIFT algorithm [19], and RANSAC is then applied to eliminate outliers [20]. To obtain a high-precision stitched image, bundle adjustment is applied to refine the orientation of each image using the corresponding points of each pair of adjacent images. The mapping relations between the images, calculated from the refined extrinsic parameters, are used to stitch the images [21]. Finally, the image mosaic is obtained by gain compensation and multi-band blending after stitching [22]. Figure 3 shows a sequence of images captured by the left Navcam at waypoint LE00302 and the resulting image mosaic obtained by this method.
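For reference, one pairwise step of such a pipeline can be sketched with OpenCV as below (SIFT matching with a ratio test, then RANSAC-based homography estimation). The joint bundle adjustment, gain compensation, and multi-band blending used in the mission pipeline are omitted.

```python
import cv2
import numpy as np

def pairwise_homography(img_left, img_right):
    """Sketch of one pairwise stitching step: SIFT feature matching [19]
    with a ratio test, then RANSAC [20] outlier rejection while fitting the
    homography between two adjacent left-Navcam images (grayscale arrays)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # Ratio-test matching of SIFT descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects the remaining outliers during homography estimation.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask
```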
During stitching of the image sequence, the refined orientation information is stored. For each sampled point on the searched path, this orientation information is used to decide onto which image the point should be back-projected, by checking whether the projected coordinates fall within the boundary of each image. Since adjacent images overlap, this criterion typically yields two to three candidate projections on different images. Because lens distortion increases with distance from the image center, the candidate closest to its image center is picked as the final projected point, and its coordinates on the mosaic are calculated from the stored orientation information.
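A simplified version of this image-selection rule might look as follows; `cameras` is assumed to be a list of per-image projection functions built from the stored orientations, and the mapping of the chosen pixel into mosaic coordinates is omitted.

```python
import numpy as np

def choose_projection(point_3d, cameras, width=1024, height=1024):
    """Sketch: pick the Navcam image on which a path point is drawn.

    `cameras` is a list of per-image projection functions (e.g. closures
    around project_point_navcam with each image's refined orientation);
    each returns pixel coordinates (x, y) or None if the point is behind
    the camera. The candidate closest to its image center is kept, because
    lens distortion grows with distance from the center."""
    best = None
    cx, cy = width / 2.0, height / 2.0
    for idx, project in enumerate(cameras):
        xy = project(point_3d)
        if xy is None:
            continue
        x, y = xy
        if 0 <= x < width and 0 <= y < height:        # inside this image
            d = np.hypot(x - cx, y - cy)              # distance to center
            if best is None or d < best[0]:
                best = (d, idx, (x, y))
    return best  # (distance, image index, pixel coords) or None
```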
(3) Original left Hazcam image
As the Navcam normally cannot image the area extending from the front wheels of the rover to about 1.2–1.6 m ahead of the rover, part of the searched path cannot be seen or projected onto the Navcam images or mosaic. The Hazcam, fixed on the front panel of the rover, effectively covers this near area that the Navcam cannot image. Therefore, projecting the planned path onto the Hazcam image reflects the influence of the terrain on the planned path in the areas the Navcam cannot see.
The projection model of the Hazcam likewise includes a pinhole imaging model and a distortion model; the imaging model is similar to that of the Navcam but with different calibrated parameter values. Since the Hazcam uses fisheye lenses, its distortion model differs from the Navcam's and is divided into two parts. The first part is the same as Equation (1); the image coordinates it yields are taken as intermediate results (x′, y′), and the final rectified coordinates are then obtained by:
$$
x = x' \tan(t)/t, \qquad y = y' \tan(t)/t. \tag{2}
$$
In Equation (2),
$$
t = \frac{\sqrt{x'^2 + y'^2}}{f},
$$
where f is the focal length of the Hazcam.
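A direct transcription of this fisheye step (Equation (2)) is sketched below; the only assumption is that the intermediate coordinates are expressed in the same unit as the focal length.

```python
import numpy as np

def fisheye_correction(x_prime, y_prime, f):
    """Sketch of the Hazcam fisheye step: rescale the intermediate
    coordinates (x', y') from Equation (1) by tan(t)/t with
    t = sqrt(x'^2 + y'^2) / f, as in Equation (2)."""
    t = np.hypot(x_prime, y_prime) / f
    if t < 1e-12:                      # at the optical axis the factor -> 1
        return x_prime, y_prime
    scale = np.tan(t) / t
    return x_prime * scale, y_prime * scale
```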
In CE-4 mission operations, the Navcam captures images at every waypoint; therefore, at each waypoint the searched path is back-projected onto the Navcam image mosaic as the main decision support information. On particular occasions, projection onto a single Navcam image is used. When Hazcam images are available, the planned path is also back-projected onto the Hazcam image to provide complementary support information. Moreover, given the proper 3D information, the back-projection approach is not limited to path evaluation; it can be extended to further prediction and evaluation applications, as illustrated in the following sections.

3.3.3. Error Analysis

The back-projection procedure involves two error propagation processes: (1) the error of the DEM generated from the Navcam stereo images, from which the 3D coordinates of the sampled path points are obtained; and (2) the projection of the 3D points onto the different types of images.
(1) The first error propagation process and its accuracy evaluation can be found in our previous work [7,23]. According to the calibration parameters of the Yutu-2 rover and the error propagation principle, the error in the depth direction is less than 0.11 m at a distance of 10 m for the Navcam, and less than 0.2 m at a distance of 5 m for the Hazcam.
(2) As the projection processes from 3D points to 2D Navcam images and to 2D Hazcam images are similar, they share the same error propagation process, though with different parameter values. Denoting the 3D coordinates in the work coordinate system as $(X_c, Y_c, Z_c)$, the image coordinates $(x_p, y_p)$ obtained through the pinhole model are:
$$
x_p = \frac{X_c}{Z_c} f, \qquad y_p = \frac{Y_c}{Z_c} f. \tag{3}
$$
According to the law of error propagation [24], the errors of the corresponding image coordinates are calculated as:
$$
\sigma_{x_p} = \sqrt{\left(\frac{f}{Z_c}\right)^2 \sigma_{X_c}^2 + \left(\frac{x}{Z_c}\right)^2 \sigma_{Z_c}^2}, \qquad
\sigma_{y_p} = \sqrt{\left(\frac{f}{Z_c}\right)^2 \sigma_{Y_c}^2 + \left(\frac{y}{Z_c}\right)^2 \sigma_{Z_c}^2}, \tag{4}
$$
where $\sigma_{X_c}$, $\sigma_{Y_c}$, and $\sigma_{Z_c}$ are the measurement errors obtained in the first error propagation process, $(x, y)$ are the image coordinates from the first process, and $(\sigma_{x_p}, \sigma_{y_p})$ are the resulting projection errors.
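Equations (3) and (4) translate directly into the short sketch below, which propagates the 3D measurement errors of a path point into pixel errors of its projection.

```python
import numpy as np

def projection_error(Xc, Yc, Zc, f, sigma_X, sigma_Y, sigma_Z):
    """Propagate 3D measurement errors (from the Navcam mapping step) into
    errors of the projected image coordinates, following Equations (3)-(4)."""
    x = f * Xc / Zc                     # Equation (3): pinhole projection
    y = f * Yc / Zc
    sigma_x = np.sqrt((f / Zc) ** 2 * sigma_X ** 2 + (x / Zc) ** 2 * sigma_Z ** 2)
    sigma_y = np.sqrt((f / Zc) ** 2 * sigma_Y ** 2 + (y / Zc) ** 2 * sigma_Z ** 2)
    return sigma_x, sigma_y
```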

4. Results

4.1. Obstacle Map Generation and Support of Path Searching

After the rover reaches a new waypoint, the Navcam is powered on to capture a sequence of stereo images, and the DEM and DOM with 0.02 m resolution are automatically generated as the basis for obstacle analysis and path searching. Figure 4 shows an example of the DEM and DOM at waypoint LE00203, automatically generated from six pairs of Navcam images captured at a fixed pitch angle. The mapped area extends from about 1.6 m in front of the rover to about 16 m.
The obstacle map calculated from the generated DEM at this waypoint is shown in Figure 5, where the red and green areas represent obstacles and safe areas, respectively. As can be seen from the figure, a crater with a diameter of about 8.7 m lies in front of the rover. Based on this information and the other conditions described in Section 3.2, the path from waypoint LE00203 to LE00204 was searched; it is shown on the obstacle map in Figure 5.

4.2. Evaluation of the Path

After path searching, vision-based evaluation is applied. This section presents the evaluation of the searched paths from waypoint LE00207 to LE00208 in the CE-4 mission.
In Figure 6, the searched path from the current waypoint LE00207 to the next waypoint LE00208 is overlaid on the DOM. As described before, the traditional visualization of the searched path for decision support draws the center line, wheel traces, and course lines on the DOM, as shown in Figure 6.
Further evaluation was then carried out, and the results are shown in Figure 7. First, the searched path was projected onto the DOM in 3D perspective view, as shown in Figure 7a; the elevations are exaggerated by a factor of 2 so that the terrain relief is easier to identify. As can be seen from the zoomed-in view in Figure 7c, the left and right wheels would pass over two pits (indicated by arrows 2 and 3) just before approaching the destination LE00208. Because of the limited DOM resolution and texture deformation, only large craters can be identified; small rocks and pits cannot be recognized. In Figure 7c, arrow 1 points to something that looks like either a rock or a pit. Because of the sieve-like design of the Yutu-2 rover wheels, no risk of a rock getting stuck in a wheel should be taken. Therefore, the searched path was further checked using the back-projection strategy on the Navcam image mosaic, which has finer texture, as shown in Figure 7b. Because the resolution of the mosaic is at least two to three times higher than that of the DOM, more details are available. The viewing angle in Figure 7b is similar to that of Figure 7a, yet the texture is more detailed. In the corresponding zoomed-in view (Figure 7d) of the blue rectangle annotated in Figure 7b, the ambiguous target indicated by arrow 1 can be clearly identified as a small pit, and a small rock next to the pit is also clearer than in Figure 7c.
From the above evaluation and analysis, the searched path was not optimal and had room for improvement. Therefore, the first searched path, denoted Path 1, was fine-tuned to obtain Path 2, shown in Figure 8a. The back-projections of the two paths on the Navcam image mosaic are compared in Figure 8b. Path 2 shifts to the left of Path 1, so that the wheels avoid the two pits before the end point; moreover, the small pit and the rock near the left wheel trace are kept at a greater distance. Judging from the fluctuation of the projected wheel traces, Path 2 is more appropriate than Path 1.
As noted above, the Navcam cannot image the area immediately in front of the rover, so the projection of that part of the path is missing. To evaluate this part, the two paths were projected onto the left Hazcam image, as shown in Figure 9. As can be seen from the figure, along Path 2 the right wheels would pass the edge of a pit (indicated by the black arrow in Figure 9); judging from the fluctuation of the projected wheel trajectories, the influence on the wheels is negligible. Finally, Path 2 was executed to reach waypoint LE00208.

4.3. Error Analysis Results

According to the error propagation analysis in the previous section, the projection error analysis results are shown in Figure 10. For the Navcam, since the searched path starts from the current waypoint and generally has a length of less than 10 m, the error is analyzed from 0 to 10 m; for the Hazcam, whose effective working distance is shorter, the error is analyzed from 0 to 5 m. The overall trend in both results in Figure 10 is that the error grows as the distance from the rover and the offset from the image center grow. From [7,23], the range measurement error is proportional to the square of the distance; therefore, combining the two error propagation effects in Equation (4), the projection error is approximately linearly proportional to the distance. In addition, in the first error propagation process the image point position affects the error, with points farther from the image center having larger errors; this effect also propagates through the second process, as reflected in Equation (4). Accordingly, as shown in Figure 10, the error increases both with the distance of the point from the image center and with the distance from the rover.
From the above error analyses, it can be seen that, given high calibration accuracy, the error of the planned path projection is within an acceptable range and the projection correctly reflects the searched path. Note that since no additional point position error is introduced when going from the Navcam images to their stitched mosaic, the error of projection on the Navcam image mosaic is the same as that on the Navcam images.

5. Discussion

5.1. Support Other Applications

Inspired by the most frequently used applications described in the previous section, the projection method can, given the proper information, be extended to further applications. Two of them, implemented in the CE-4 mission, are introduced here.
1. Wheel trace prediction of rover spin
From Figure 8a it can be seen that the rover sometimes needs to make turns at the start and/or end points. According to the design of the rover mobility sub-system [17], which gives the locations of the front, middle, and rear wheels, combined with their transformation relationships to the rover center and the cameras, the wheel trajectories during a spin can be determined and projected, together with the searched paths, onto the Navcam and Hazcam images.
As shown in Figure 8a, the rover would make an anti-clockwise turn of 43.42° at the start point and an anti-clockwise turn of 90.13° at the end point. Based on the path projection method, the predicted wheel traces of the rover spin at the end point of Path 2 were calculated and are illustrated in Figure 11. From the figure, it can be seen that the middle wheels are relatively stable, the right front wheel is affected by the terrain in the middle of the turn, and the left rear wheel would pass over a shallow pit. Guided by this information, the depth of the pit and the slopes of the area were carefully checked and found to be safe according to the rover's capabilities.
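A minimal sketch of how such spin traces could be generated is given below; the wheel offsets are placeholders that do not reproduce the actual Yutu-2 mobility geometry, and the sampled planar points would subsequently be given DEM elevations and projected with the camera models described in Section 3.3.2.

```python
import numpy as np

# Placeholder wheel offsets (m) relative to the rover center in the rover
# frame (x forward, y left); NOT the actual Yutu-2 mobility geometry.
WHEEL_OFFSETS = {
    "front_left": ( 0.6,  0.5), "front_right": ( 0.6, -0.5),
    "mid_left":   ( 0.0,  0.5), "mid_right":   ( 0.0, -0.5),
    "rear_left":  (-0.6,  0.5), "rear_right":  (-0.6, -0.5),
}

def spin_wheel_traces(center_xy, heading_deg, turn_deg, steps=30):
    """Sample the planar trace of each wheel while the rover spins in place
    from `heading_deg` through `turn_deg` (positive = anti-clockwise).
    Returns {wheel name: list of (x, y) points in the work frame}."""
    angles = np.radians(np.linspace(heading_deg, heading_deg + turn_deg, steps))
    traces = {name: [] for name in WHEEL_OFFSETS}
    cx, cy = center_xy
    for a in angles:
        c, s = np.cos(a), np.sin(a)
        for name, (ox, oy) in WHEEL_OFFSETS.items():
            # Rotate the wheel offset by the current heading about the center.
            traces[name].append((cx + c * ox - s * oy, cy + s * ox + c * oy))
    return traces
```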
2. Imaging area prediction of other sensors
The projection approach was also extended to predict the imaging areas of other payloads. A typical application is predicting the science-target imaging area of the Visible and Near-Infrared Imaging Spectrometer (VNIS) onboard the Yutu-2 rover. The VNIS, consisting of a visible and near-infrared (VNIR) imaging spectrometer and a shortwave infrared (SWIR) spectrometer, investigates the mineralogical composition of lunar surface materials [25]. The VNIS is fixed on the rover front panel; the FOVs of the VNIR and SWIR are 8.5° and 3.6°, respectively [26,27]. Because of the fixed mounting and small FOVs, approaching and pointing to a science target depend entirely on the movement of the rover [28]. Moreover, an interesting science target (e.g., a rock) may sometimes be very small. Therefore, precise imaging area predictions are crucial for planning and refining the rover path to make sure that the target is within the FOV of the VNIS after the movement.
According to the mounting and calibration parameters of the Navcam, Hazcam, and VNIS, the rigid transformations (rotation and translation) between them can be calculated. Therefore, the FOV areas of both the VNIR and SWIR can be projected onto the Navcam and Hazcam images to guide the rover movement. Figure 12 shows the process of approaching a target at the beginning of the third lunar day of Yutu-2's work. The rover was awakened at waypoint LE00301, where the rock could be observed by the Navcam. After a short movement to reach LE00302, the imaging area of the VNIS was predicted on the Navcam image captured at this waypoint, as shown in Figure 12a, in which the deformed rectangle represents the FOV of the VNIR and the deformed circle represents the FOV of the SWIR. It can be seen that the SWIR imaging area was only partially on the target. Therefore, supported by this information, the rover adjusted its position and attitude and moved to waypoint LE00303. The imaging area of the VNIS was predicted again, this time on a Hazcam image as the target was closer (Figure 12b). Under the guidance of the prediction, the rover completed the target-approaching task. The acquired VNIR image is shown in Figure 12c with the FOV of the SWIR marked as a red circle. The actual captured image indicates that the science-target approach guided by imaging area prediction was implemented successfully. The VNIR and SWIR data of the rock and other science targets have contributed significantly to mineralogical study, surface composition research, and other fields on the lunar far side [29].
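For illustration, a footprint prediction of this kind can be sketched as below: boundary rays of a small circular FOV are intersected with a ground surface and the intersections are projected through the camera model. The transformations, the circular-FOV sampling, and the flat-ground assumption are simplifications; the mission system uses the calibrated mounting parameters and the DEM.

```python
import numpy as np

def predict_fov_footprint(R_inst, T_inst, fov_deg, project_to_image,
                          ground_z=0.0, n_samples=32):
    """Sketch of imaging-area prediction for a boresighted instrument with a
    small circular FOV (a VNIS-like case).

    R_inst, T_inst   : instrument frame -> work frame rotation/translation
                       (placeholders for the calibrated mounting parameters)
    fov_deg          : full cone angle of the instrument FOV
    project_to_image : function mapping a 3D work-frame point to pixel
                       coordinates of the target camera (e.g. the Navcam
                       projection sketch above)
    ground_z         : ground modeled here as a horizontal plane; the
                       mission system intersects rays with the DEM instead.
    Returns a list of pixel coordinates outlining the predicted footprint."""
    half = np.radians(fov_deg) / 2.0
    footprint = []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False):
        # Boundary ray of the FOV cone in the instrument frame (z = boresight).
        d_inst = np.array([np.sin(half) * np.cos(phi),
                           np.sin(half) * np.sin(phi),
                           np.cos(half)])
        d_work = R_inst @ d_inst                   # ray direction in work frame
        o_work = np.asarray(T_inst, dtype=float)   # instrument position
        if abs(d_work[2]) < 1e-9:
            continue                               # ray parallel to the ground
        s = (ground_z - o_work[2]) / d_work[2]     # ray-plane intersection
        if s <= 0:
            continue                               # intersection behind instrument
        ground_pt = o_work + s * d_work
        footprint.append(project_to_image(ground_pt))
    return footprint
```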

5.2. Error Source Analysis

From the path evaluation applications introduced in Section 4.2, it can be seen that the projection errors (i.e., deviations from the true projection points) of the searched path in the areas farthest from the rover are smaller than the values analyzed in Section 4.3. This is because the overall terrain has no severe elevation steps, which leads to high matching accuracy of the corresponding points; moreover, the high calibration accuracy of the camera parameters ensures precise measurement of the DEM.
However, in the extended application that predicts the imaging area of the VNIS, we found that the predicted area differs slightly from the obtained image, as shown in Figure 12b,c. The deviation may be caused by one or more of the following: (1) Calibration errors. Although the parameters of the Navcam and Hazcam are calibrated with high precision, the transformation errors from the VNIS coordinate system to the work coordinate system and to the Navcam coordinate system may combine with slight camera calibration errors and cause the deviation. (2) Rover localization errors. The position and orientation of the rover are calculated by a visual localization method whose accuracy relies on feature matching and sensor parameters.
Based on this preliminary analysis of the causes, error compensation can be pursued in the future from the following aspects. (1) At certain waypoints, the cameras can image the real wheel trajectories left over the last few waypoints; the searched path can be projected onto such images and overlaid with the real trajectories for comparison. (2) More VNIS imaging area predictions and real VNIS images can be collected for comparison. With comparison data from these two aspects, together with more detailed error distribution and propagation simulations, clues for error correction or compensation can be identified.

6. Conclusions

This paper presents vision-based decision support methods for obstacle map generation and path-planning evaluation for the Yutu-2 rover in the CE-4 mission. At each waypoint, a 0.02 m resolution DEM and the corresponding DOM are automatically generated from Navcam stereo images, and an obstacle map is established through terrain analysis to support path searching. Based on the camera models and calibrated parameters, the paths and wheel traces are mapped from the 3D coordinates obtained from the DEM onto the 2D Navcam images, the Navcam image mosaic, and the Hazcam images. The methods provide intuitive visualization of the path projection results, which conforms to the actual observation angle and makes full use of the resolution of the Navcam and Hazcam images.
The vision-based decision support methods and their applications have been used directly in the CE-4 mission to support teleoperation of the rover, and some typical and extended applications have been presented in this paper. While Yutu-2 continues exploring the far side of the Moon, the proposed methods are routinely applied to ensure safe traversing and data acquisition. With the support of the proposed techniques, by the end of the 14th lunar day (3 February 2020) the rover had travelled 367.25 m on the far side of the Moon [30].

Author Contributions

Conceptualization, Y.W., Z.L., W.W., T.Y. and L.L.; methodology, Y.W., W.W., Z.L. and M.P.; software, Y.W., W.W., Z.L. and S.G.; validation, M.P., Y.W. and K.D.; formal analysis, Y.W., K.D. and Z.L.; investigation, T.Y. and L.L.; resources, T.Y., J.W. and X.C.; data curation, L.L., X.C. and J.W.; writing—original draft preparation, Y.W. and Z.L.; writing—review and editing, K.D. and S.G.; visualization, Y.W.; supervision, K.D.; project administration, K.D.; funding acquisition, K.D., Y.W. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China (No. 2018YFB1305004) and the National Natural Science Foundation of China (Nos. 41701489, 41671458 and 41771488).

Acknowledgments

The authors thank the Lunar and Deep Space Exploration Science Applications Center of the National Astronomical Observatories for providing the VNIS data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Di, K.; Liu, Z.; Liu, B.; Wan, W.; Peng, M.; Wang, Y.; Gou, S.; Yue, Z.; Xin, X.; Jia, M.; et al. Chang’e-4 lander localization based on multi-source data. J. Remote Sens. 2019, 23, 177–184.
  2. Ye, P.; Sun, Z.; Zhang, H.; Zhang, L.; Wu, X.; Li, F. Mission design of Chang’e-4 probe system. Sci. Sin. Technol. 2019, 49, 124–137.
  3. Yu, T.; Fei, J.; Li, L.; Cheng, X. Study on path planning method of lunar rover. J. Deep Space Explor. 2019, 6, 384–390.
  4. Liu, Z.; Di, K.; Peng, M.; Wan, W.; Liu, B.; Li, L.; Yu, T.; Wang, B.; Zhou, J.; Chen, H. High precision landing site mapping and rover localization for Chang’e-3 mission. Sci. China-Phys. Mech. Astron. 2015, 58, 1–11.
  5. Helmick, D.M.; Angelova, A.; Livianu, M.; Matthies, L.H. Terrain Adaptive Navigation for Mars Rovers. In Proceedings of the 2007 IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2007; pp. 1–11.
  6. Carsten, J.; Rankin, A.; Ferguson, D.; Stentz, A. Global Path Planning on Board the Mars Exploration Rovers. In Proceedings of the 2007 IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2007; p. 1125.
  7. Peng, M.; Wan, W.; Wu, K.; Liu, Z.; Li, L.; Di, K.; Li, L.; Miao, Y.; Zhan, L. Topographic mapping capability analysis of Chang’e-3 Navcam stereo images and three-dimensional terrain reconstruction for mission operations. J. Remote Sens. 2014, 18, 995–1002.
  8. Liu, Z.; Wan, W.; Peng, M.; Zhao, Q.; Xu, B.; Liu, B.; Liu, Y.; Di, K.; Li, L.; Yu, T.; et al. Remote sensing mapping and localization techniques for teleoperation of Chang’e-3 rover. J. Remote Sens. 2014, 18, 971–980.
  9. Chen, J.; Xing, Y.; Teng, B.; Mao, X.; Liu, X.; Jia, Y.; Zhang, J.; Wang, L. Guidance, Navigation and Control Technologies of Chang’E-3 Lunar Rover. Sci. China Technol. Sci. 2014, 44, 461–469.
  10. Wang, B.; Mao, X.; Tang, G. A Calibration Method for Fish-Eye Camera of Lunar Rover. J. Astronaut. 2011, 32, 933–939.
  11. Zhang, S.; Liu, S.; Ma, Y.; Qi, C.; Yang, H.M. Self calibration of the stereo vision system of the Chang’e-3 lunar rover based on the bundle block adjustment. ISPRS J. Photogramm. Remote Sens. 2017, 128, 287–297.
  12. Zhang, S.; Xu, Y.; Liu, S.; Yan, D. Calibration of Chang’e-3 Lunar Rover Stereo-camera System Based on Control Field. Geomat. Inf. Sci. Wuhan Univ. 2015, 40, 1509–1513.
  13. Zhou, J.; Xie, Y.; Zhang, Q.; Wu, K.; Xia, Y.; Zhang, Z. Research on mission planning in teleoperation of lunar rovers. Sci. China Inf. Sci. 2014, 44, 441–451.
  14. Wang, Y.; Peng, M.; Di, K.; Wan, W.; Liu, Z.; Yue, Z.; Xing, Y.; Mao, X.; Teng, B. Vision Based Obstacle Detection Using Rover Stereo Images. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS Geospatial Week, Enschede, The Netherlands, 10–14 June 2019; pp. 1471–1477.
  15. Wu, W.; Zhou, J.; Wang, B.; Liu, C. Key technologies in the teleoperation of Chang’E-3 Jade Rabbit rover. Sci. China Inf. Sci. 2014, 44, 425–440.
  16. Li, Q.; Jia, Y.; Peng, S.; Han, L. Top Design and Implementation of the Lunar Rover Mission Planning. J. Deep Space Explor. 2017, 4, 58–65.
  17. Zhang, Y.; Xiao, J.; Zhang, X.; Liu, D.; Zou, H. Design and implementation of Chang’E-3 rover location system. Sci. China Technol. Sci. 2014, 44, 483–491.
  18. Shen, Z.; Zhang, W.; Jia, Y.; Sun, Z. System Design and Technical Characteristics Analysis of Chang’e-3 Lunar Rover. Spacecr. Eng. 2015, 24, 8–13.
  19. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  20. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  21. Liang, J.; Liu, Z.; Di, K. Construction of a 3D Measurable Virtual Reality Environment for Planetary Exploration Applications. Spacecr. Eng. 2012, 21, 28–34.
  22. Brown, M.; Lowe, D.G. Automatic Panoramic Image Stitching Using Invariant Features. Int. J. Comput. Vis. 2007, 74, 59–73.
  23. Di, K.; Li, R. Topographic Mapping Capability Analysis of Mars Exploration Rover 2003 Mission Imagery. In Proceedings of the 5th International Symposium on Mobile Mapping Technology (MMT 2007), Padua, Italy, 28–31 May 2007.
  24. Wang, Z. Principles of Photogrammetry (with Remote Sensing); Publishing House of Surveying and Mapping: Beijing, China, 1990; pp. 312–313.
  25. Jia, Y.; Zou, Y.; Ping, J.; Xue, C.; Yan, J.; Ning, Y. The scientific objectives and payloads of Chang’E-4 mission. Planet. Space Sci. 2018, 162, 207–215.
  26. He, Z.P.; Wang, B.Y.; Lv, G.; Li, C.L.; Yuan, L.Y.; Xu, R.; Chen, K.; Wang, J.Y. Visible and near-infrared imaging spectrometer and its preliminary results from the Chang’E 3 project. Rev. Sci. Instrum. 2014, 85, 083104.
  27. Li, C.; Wang, Z.; Xu, R.; Lv, G.; Yuan, L.; He, Z.; Wang, J. The Scientific Information Model of Chang’e-4 Visible and Near-IR Imaging Spectrometer (VNIS) and In-Flight Verification. Sensors 2019, 19, 2806.
  28. Liu, Z.; Di, K.; Li, J.; Xie, J.; Cui, X.; Xi, L.; Wan, W.; Peng, M.; Liu, B.; Wang, Y.; et al. Landing Site Topographic Mapping and Rover Localization for Chang’e-4 Mission. Sci. China-Phys. Mech. Astron., accepted.
  29. Li, C.; Liu, D.; Liu, B.; Ren, X.; Liu, J.; He, Z.; Zuo, W.; Zeng, X.; Xu, R.; Tan, X.; et al. Chang’E-4 initial spectroscopic identification of lunar far-side mantle-derived materials. Nature 2019, 569, 378–382.
  30. China’s Lunar Rover Travels over 367 Meters on Moon’s Far Side. Available online: https://www.chinadaily.com.cn/a/202002/03/WS5e37ce2aa310128217274769.html (accessed on 3 February 2020).
Figure 1. Overall flowchart of vision-based decision support for CE-4 rover path planning.
Figure 2. Flowchart of the back-projection processes on three different kinds of images.
Figure 3. Original Navcam images and mosaic at waypoint LE00302. (a) A sequence of original images captured by the left Navcam; (b) the resultant image mosaic.
Figure 4. Automatically generated DEM (a) and DOM (b) from Navcam stereo images at waypoint LE00203.
Figure 5. Obstacle map at waypoint LE00203 and the path searched with its support.
Figure 6. DOM at waypoint LE00207 and a searched path in overhead view.
Figure 7. Comparisons of searched path projections on the DOM in 3D perspective view and on the Navcam image mosaic at waypoint LE00207: (a) the projected path on the DOM in 3D perspective view; (b) projection results of the searched path on the Navcam image mosaic; (c) a zoomed-in view of the blue rectangle in (a); (d) a zoomed-in view of the blue rectangle in (b).
Figure 8. The 2nd planned path and comparison of the two paths: (a) The 2nd planned path at waypoint LE00207; (b) a zoomed-in view of comparisons of two paths at waypoint LE00207 projected on the Navcam image mosaic.
Figure 9. Comparisons of two planned paths at waypoint LE00207 projected on the left Hazcam image.
Figure 10. Projection error analyses results for both Navcam (a) and Hazcam (b).
Figure 11. Projection application of wheel trace prediction of rover spin on the Navcam image mosaic.
Figure 12. VNIS imaging area prediction: (a) prediction result on a Navcam image at waypoint LE00302; (b) prediction result on a Navcam image at waypoint LE00303; (c) the image obtained by VNIS at waypoint LE00303 (the circle represents the FOV of SWIR).
Table 1. Basic parameters of Navcam and Hazcam [9].
Payload | FOV (°) | Focal Length (mm) | Baseline (mm) | Image Size (pixel × pixel) | Working Mode
Navcam | 46.6 | 17.7 | 270 | 1024 × 1024 | Fixed roll angle and changeable pitch and yaw angles; images along the yaw direction at 20-degree intervals. Powered on at every waypoint.
Hazcam | 120 | 7.3 | 100 | 1024 × 1024 | Fixed on the front panel of the rover. Powered on at certain waypoints.
