Direct monocular odometry using points and lines
2017 IEEE International Conference on Robotics and Automation (ICRA), 2017•ieeexplore.ieee.org
Most visual odometry algorithms for a monocular camera focus on points, either through feature matching or direct alignment of pixel intensities, while ignoring a common but important geometric entity: edges. In this paper, we propose an odometry algorithm that combines points and edges to benefit from the advantages of both direct and feature-based methods. It works better in texture-less environments and is also more robust to lighting changes and fast motion, owing to an enlarged convergence basin. We maintain a depth map for the keyframe; in the tracking part, the camera pose is recovered by minimizing both the photometric error and the geometric error to the matched edge in a probabilistic framework. In the mapping part, edges are used to speed up stereo matching and increase its accuracy. On various public datasets, our algorithm achieves performance better than or comparable to state-of-the-art monocular odometry methods. In some challenging texture-less environments, our algorithm reduces the state estimation error by over 50%.
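The abstract describes a tracking objective that mixes a direct (photometric) term with a point-to-edge (geometric) term under a probabilistic weighting. The sketch below is a hypothetical illustration of that idea, not the authors' code: all names and parameters (sample_intensity, matched_edge_points, the sigma weights, etc.) are assumptions made for the example, and the weighting is simplified to fixed per-term standard deviations.

```python
# Hypothetical sketch (not the paper's implementation): stacked residuals for one
# candidate camera pose, combining a photometric term for selected keyframe points
# with a point-to-edge geometric term, each normalized by an assumed noise level.
import numpy as np

def combined_residuals(points_kf, depths, intensities_kf, pose, K,
                       sample_intensity, matched_edge_points, matched_edge_normals,
                       sigma_photo=10.0, sigma_geom=1.0):
    """points_kf            : (N, 2) pixel coordinates of selected keyframe points
       depths               : (N,)   depths of those points from the keyframe depth map
       intensities_kf       : (N,)   keyframe intensities at those points
       pose                 : (R, t) rigid transform from keyframe to current frame
       K                    : (3, 3) camera intrinsics
       sample_intensity     : callable (u, v) -> intensity in the current frame
       matched_edge_points  : (N, 2) matched edge points in the current frame
       matched_edge_normals : (N, 2) unit normals of those edges"""
    R, t = pose
    K_inv = np.linalg.inv(K)

    # Back-project keyframe pixels with their depths, transform, and reproject.
    homo = np.hstack([points_kf, np.ones((len(points_kf), 1))])   # (N, 3)
    p_cam = (K_inv @ homo.T).T * depths[:, None]                  # (N, 3)
    p_cur = (R @ p_cam.T).T + t                                   # (N, 3)
    proj = (K @ p_cur.T).T
    uv = proj[:, :2] / proj[:, 2:3]                               # (N, 2) reprojected pixels

    # Photometric residual: intensity difference after warping (direct term).
    r_photo = np.array([sample_intensity(u, v) for u, v in uv]) - intensities_kf

    # Geometric residual: signed point-to-line distance to the matched edge.
    r_geom = np.einsum('ij,ij->i', uv - matched_edge_points, matched_edge_normals)

    # Normalize each term by its assumed standard deviation so both contribute
    # on a comparable scale, mimicking a maximum-likelihood weighting.
    return np.concatenate([r_photo / sigma_photo, r_geom / sigma_geom])
```

In practice such residuals would be fed to a robust nonlinear least-squares solver over the pose; the fixed sigmas here stand in for whatever per-measurement uncertainties the probabilistic framework actually estimates.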