Visual Navigation for Recovering an AUV by Another AUV in Shallow Water
Abstract
1. Introduction
- Detection of landmarks in shallow water suffers from ambient-light interference and nonuniform light spreading. To overcome these problems, we propose an LCB method for the detection of underwater landmarks.
- Incomplete observation of the predefined landmarks, caused for example by broken landmarks or biofouling coverage, is a practical problem for pose estimation when recovering AUVs during long-duration missions. A novel pose-estimation method that handles incomplete observations in AUV recovery is proposed in this work; see the sketch after this list. To the best of our knowledge, the proposed method is the first points-based method for pose estimation in VBAR algorithms.
- Field experiments were performed to evaluate the proposed LCB method and the pose-estimation method.
- We provide an underwater active-landmark dataset that was collected in our field experiments. The 2D location of each landmark was manually labeled in every image. The dataset can help researchers develop and validate their underwater vision-based recovery or docking algorithms in the absence of related underwater infrastructure.
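The sketch below illustrates the idea behind point-based pose estimation from an incomplete set of landmark observations. It is not the paper's algorithm: it substitutes a standard PnP solver (OpenCV's solvePnP) for the proposed method, and the landmark layout, camera intrinsics, and detection mask are hypothetical placeholders.

```python
import numpy as np
import cv2

# Hypothetical 3D landmark positions on the docking frame, expressed in the
# dock coordinate frame (mm). The paper defines its own landmark arrangement.
LANDMARKS_3D = np.array([
    [-400.0, -300.0, 0.0],
    [ 400.0, -300.0, 0.0],
    [ 400.0,  300.0, 0.0],
    [-400.0,  300.0, 0.0],
    [   0.0,    0.0, 0.0],
], dtype=np.float64)

def estimate_pose(image_points, detected_mask, camera_matrix, dist_coeffs):
    """Estimate the dock pose from the detected landmarks only.

    image_points  : (N, 2) array of pixel coordinates, one row per predefined landmark
    detected_mask : (N,) boolean array, False for broken or biofouled landmarks
    Returns (rvec, tvec) or None if too few landmarks remain.
    """
    obj = LANDMARKS_3D[detected_mask]
    img = image_points[detected_mask].astype(np.float64)
    if obj.shape[0] < 4:  # a generic PnP solver needs at least 4 points
        return None
    ok, rvec, tvec = cv2.solvePnP(obj, img, camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return (rvec, tvec) if ok else None
```

The key point is that the 3D model points and the 2D detections are filtered with the same mask, so the solver only ever sees correspondences for landmarks that were actually observed.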
2. System Overview
3. Underwater-Docking Recovery Algorithm
3.1. Landmark Detection
3.2. Pose Estimation
3.2.1. Pose Estimation in the Case of Complete Observations
3.2.2. Pose Estimation in the Case of Incomplete Observations
4. Experiment Results
4.1. Experiment Detection Results
4.1.1. Comparison of Detection-Algorithm Performance
4.1.2. Detection-Performance Analysis
4.2. Pose-Estimation Algorithm Analysis
4.3. Field Experiments
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Park, J.Y.; Jun, B.H.; Lee, P.M.; Oh, J. Experiments on vision guided docking of an autonomous underwater vehicle using one camera. Ocean Eng. 2009, 36, 48–61.
- Maki, T.; Shiroku, R.; Sato, Y.; Matsuda, T.; Sakamaki, T.; Ura, T. Docking method for hovering type AUVs by acoustic and visual positioning. In Proceedings of the 2013 IEEE International Underwater Technology Symposium, Tokyo, Japan, 5–8 March 2013; pp. 1–6.
- Li, Y.; Jiang, Y.; Cao, J.; Wang, B.; Li, Y. AUV docking experiments based on vision positioning using two cameras. Ocean Eng. 2015, 110, 163–173.
- Palomeras, N.; Vallicrosa, G.; Mallios, A.; Bosch, J.; Vidal, E.; Hurtos, N.; Carreras, M.; Ridao, P. AUV homing and docking for remote operations. Ocean Eng. 2018, 154, 106–120.
- Sarda, E.I.; Dhanak, M.R. A USV-Based Automated Launch and Recovery System for AUVs. IEEE J. Ocean. Eng. 2017, 42, 37–55.
- Raspante, F. Underwater mobile docking of autonomous underwater vehicles. In Proceedings of the OCEANS 2012 Conference, Hampton Roads, VA, USA, 21–24 May 2012; pp. 1–15.
- Irani, R.; Kehoe, D.; Spencer, W.; Watt, G.; Gillis, C.; Carretero, J.; Dubay, R. Towards a UUV launch and recovery system on a slowly moving submarine. In Proceedings of the Warship 2014: Naval Submarines & UUV’s, Bath, UK, 18–19 June 2014; pp. 93–103.
- Renilson, M. A simplified concept for recovering a UUV to a submarine. Underw. Technol. 2014, 32, 193–197.
- Watt, G.D.; Roy, A.R.; Currie, J.; Gillis, C.B.; Giesbrecht, J.; Heard, G.J.; Birsan, M.; Seto, M.L.; Carretero, J.A.; Dubay, R.; et al. A concept for docking a UUV with a slowly moving submarine under waves. IEEE J. Ocean. Eng. 2016, 41, 471–498.
- Brizzolara, S.; Chryssostomidis, C. Design of an Unconventional ASV for Underwater Vehicles Recovery: Simulation of the motions for operations in rough seas. In Proceedings of the ASNE International Conference on Launch & Recovery, Linthicum, MD, USA, 14–15 November 2012.
- Singh, H.; Bellingham, J.G.; Hover, F.; Lemer, S.; Moran, B.A.; Von der Heydt, K.; Yoerger, D. Docking for an autonomous ocean sampling network. IEEE J. Ocean. Eng. 2001, 26, 498–514.
- Wirtz, M.; Hildebrandt, M.; Gaudig, C. Design and test of a robust docking system for hovering AUVs. In Proceedings of the 2012 Oceans, Arlington, VA, USA, 22–24 October 2012; pp. 1–6.
- Kimball, P.W.; Clark, E.B.; Scully, M.; Richmond, K.; Flesher, C.; Lindzey, L.E.; Harman, J.; Huffstutler, K.; Lawrence, J.; Lelievre, S.; et al. The ARTEMIS under-ice AUV docking system. J. Field Robot. 2018, 35, 299–308.
- Pai, S.; Guerrini, P.; Potter, J.; Maguer, A.; Chitre, M.; Biagini, S. Autonomous initial capture system for AUV recovery. In Proceedings of the 3rd International Conference and Exhibition on Underwater Acoustic Measurements: Technologies and Results, Nafplion, Greece, 21–26 June 2009.
- Li, D.J.; Chen, Y.H.; Shi, J.G.; Yang, C.J. Autonomous underwater vehicle docking system for cabled ocean observatory network. Ocean Eng. 2015, 109, 127–134.
- Liu, S.; Ozay, M.; Okatani, T.; Xu, H.; Sun, K.; Lin, Y. Detection and Pose Estimation for Short-Range Vision-Based Underwater Docking. IEEE Access 2019, 7, 2720–2749.
- Feezor, M.D.; Yates Sorrell, F.; Blankinship, P.R.; Bellingham, J.G. Autonomous underwater vehicle homing/docking via electromagnetic guidance. IEEE J. Ocean. Eng. 2001, 26, 515–521.
- Cowen, S.; Briest, S.; Dombrowski, J. Underwater docking of autonomous undersea vehicles using optical terminal guidance. In Proceedings of the Oceans ’97 MTS/IEEE Conference, Halifax, NS, Canada, 6–9 October 1997; Volume 2, pp. 1143–1147.
- Sato, Y.; Maki, T.; Masuda, K.; Matsuda, T.; Sakamaki, T. Autonomous docking of hovering type AUV to seafloor charging station based on acoustic and visual sensing. In Proceedings of the 2017 IEEE Underwater Technology (UT), Busan, Korea, 21–24 February 2017; pp. 1–6.
- Ghosh, S.; Ray, R.; Vadali, S.R.K.; Shome, S.N.; Nandy, S. Reliable pose estimation of underwater dock using single camera: A scene invariant approach. Mach. Vis. Appl. 2016, 27, 221–236.
- Hong, Y.H.; Kim, J.Y.; Oh, J.H.; Lee, P.M.; Jeon, B.H.; Oh, K.H. Development of the homing and docking algorithm for AUV. In Proceedings of the Thirteenth International Offshore and Polar Engineering Conference, Honolulu, HI, USA, 25–30 May 2003.
- Kong, H.; Akakin, H.C.; Sarma, S.E. A generalized Laplacian of Gaussian filter for blob detection and its applications. IEEE Trans. Cybern. 2013, 43, 1719–1733.
- Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
- Abdel-Aziz, Y. Direct linear transformation from comparator coordinates into object space in close-range photogrammetry. In Proceedings of the ASP Symposium on Close-Range Photogrammetry, Falls Church, VA, USA, 11 December 1971; pp. 1–18.
- Araújo, H.; Carceroni, R.L.; Brown, C.M. A Fully Projective Formulation to Improve the Accuracy of Lowe’s Pose-Estimation Algorithm. Comput. Vis. Image Underst. 1998, 70, 227–238.
- Oberkampf, D.; DeMenthon, D.; Davis, L.S. Iterative Pose Estimation Using Coplanar Feature Points. Comput. Vis. Image Underst. 1996, 63, 495–511.
- Lu, C.; Hager, G.D.; Mjolsness, E. Fast and globally convergent pose estimation from video images. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 610–622.
- Li, S.; Xu, C.; Xie, M. A Robust O(n) Solution to the Perspective-n-Point Problem. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1444–1450.
| Method | Number of Failed Detections | Location Error (pixels per landmark) |
|---|---|---|
| 1. HATS | 1907 | 18.13 |
| 2. HATS (ROI) | 1370 | 1.91 |
| 3. LCB () | 9 | 1.68 |
| Method | Number of Failed Detections | Location Error (pixels per landmark) |
|---|---|---|
| 1. Otsu | 2306 | N/A |
| 2. LoG () + Otsu | 397 | 1.83 |
| 3. LoG () + Block-wise + Otsu | 1715 | 7.39 |
| 4. Coarse detection + LoG () + Otsu | 322 | 1.64 |
| 5. Coarse detection + LoG () + Block-wise + Otsu | 9 | 1.68 |
| 6. Coarse detection + LoG () + Block-wise + Otsu | 26 | 1.67 |
| 7. Coarse detection + LoG () + Block-wise + Otsu | 10 | 1.70 |
| 8. Coarse detection + LoG () + Block-wise + Otsu | 11 | 1.70 |
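The components compared in the table above (coarse detection, LoG filtering, block-wise Otsu thresholding) can be chained roughly as follows. This is a minimal sketch, not the published LCB implementation: the kernel size, block size, minimum blob area, and the way the coarse ROI is obtained are placeholder assumptions.

```python
import numpy as np
import cv2

def log_filter(gray, sigma=2.0):
    """Laplacian-of-Gaussian response; bright blobs give a strong negative Laplacian."""
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
    return -cv2.Laplacian(blurred.astype(np.float32), cv2.CV_32F)

def blockwise_otsu(response, block=64):
    """Otsu thresholding applied block by block to cope with nonuniform illumination."""
    norm = cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    mask = np.zeros_like(norm)
    h, w = norm.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = norm[y:y + block, x:x + block]
            _, m = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            mask[y:y + block, x:x + block] = m
    return mask

def detect_landmarks(gray, roi, min_area=5):
    """Return landmark centroids (full-image pixel coordinates) inside a coarse ROI."""
    x, y, w, h = roi  # the coarse-detection stage is assumed to supply this ROI
    crop = gray[y:y + h, x:x + w]
    mask = blockwise_otsu(log_filter(crop))
    _, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [(cx + x, cy + y)
            for (cx, cy), area in zip(centroids[1:], stats[1:, cv2.CC_STAT_AREA])
            if area >= min_area]
```

Applying Otsu per block rather than globally is what lets the threshold adapt to the nonuniform background brightness noted in the Introduction.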
| No. | Ground Truth Orien. (deg) | Ground Truth Pos. (mm) | Orien. (deg) | Pos. (mm) | Orien. (deg) | Pos. (mm) | Orien. (deg) | Pos. (mm) |
|---|---|---|---|---|---|---|---|---|
| (a) | Yaw: 0.0 | x: 99.2 | 1.231 | 10.366 | 0.960 | 99.916 | 0.293 | 98.862 |
| | Pitch: 4.5 | y: −73.6 | 2.809 | −71.517 | 1.932 | −72.216 | 2.069 | −71.717 |
| | Roll: −0.8 | z: 3686.5 | −1.273 | 3687.955 | −1.258 | 3679.886 | −1.244 | 3666.954 |
| (b) | Yaw: 6.8 | x: 98.1 | 2.115 | 96.559 | 1.196 | 95.318 | 1.127 | 95.288 |
| | Pitch: −9.2 | y: 839.8 | −11.302 | 837.126 | −11.911 | 834.394 | −12.314 | 832.257 |
| | Roll: −1.3 | z: 3598.1 | −2.142 | 3590.579 | −2.135 | 3598.303 | −2.128 | 3572.824 |
| (c) | Yaw: 2.7 | x: 435.8 | −3.714 | 433.207 | −4.509 | 431.473 | −4.443 | 430.364 |
| | Pitch: 12.2 | y: −1036.2 | 16.827 | −1029.845 | 16.277 | −1028.301 | 16.198 | −1024.642 |
| | Roll: −1.2 | z: 3553.8 | −1.915 | 3538.244 | −1.841 | 3531.964 | −1.898 | 3518.536 |
| (d) | Yaw: 18.9 | x: −1212.8 | 22.741 | −1213.576 | 22.503 | −1211.537 | 22.265 | −1206.448 |
| | Pitch: −0.4 | y: −167.8 | 3.573 | −168.660 | 3.067 | −168.734 | 3.050 | −167.966 |
| | Roll: −2.3 | z: 3453.7 | −2.042 | 3460.211 | −2.237 | 3453.575 | −2.183 | 3438.054 |
| (e) | Yaw: −37.0 | x: 515.5 | −35.471 | 513.549 | −33.238 | 518.459 | −29.508 | 522.176 |
| | Pitch: 1.9 | y: −422.3 | 2.576 | −427.395 | 2.563 | −430.459 | 3.374 | −430.230 |
| | Roll: −2.9 | z: 4098.4 | −2.571 | 4088.068 | −2.624 | 4115.437 | −2.817 | 4120.387 |
| (f) | Yaw: −28.3 | x: −393.6 | −27.256 | −394.989 | −26.977 | −395.335 | −25.288 | −396.326 |
| | Pitch: 2.5 | y: −374.2 | 3.573 | −374.406 | 3.472 | −375.174 | 3.615 | −377.715 |
| | Roll: −1.2 | z: 3307.9 | −1.363 | 3304.232 | −1.328 | 3309.443 | −1.402 | 3329.823 |
| (g) | Yaw: −12.0 | x: −1296.5 | −11.813 | −1284.283 | −11.366 | −1285.865 | −10.421 | −1290.430 |
| | Pitch: 2.6 | y: −327.8 | 3.991 | −323.670 | 4.033 | −324.082 | 4.006 | −325.703 |
| | Roll: −1.9 | z: 3102.0 | −2.317 | 3073.298 | −2.326 | 3078.568 | −2.331 | 3090.632 |
| (h) | Yaw: 10.4 | x: −59.8 | 4.375 | −62.959 | 4.274 | −63.164 | 3.078 | −63.927 |
| | Pitch: 22.8 | y: −966.9 | 25.979 | −972.440 | 23.945 | −973.179 | 22.672 | −971.425 |
| | Roll: −3.0 | z: 3728.8 | −3.420 | 3745.472 | −3.508 | 3741.764 | −3.665 | 3730.413 |
| (i) | Yaw: 3.2 | x: 12.3 | −2.501 | 3.982 | −2.122 | 20.593 | 10.690 | 23.763 |
| | Pitch: 10.1 | y: −46.3 | 8.912 | −38.898 | 13.662 | −38.762 | 7.017 | −57.425 |
| | Roll: 5.4 | z: 3889.0 | 4.965 | 3881.185 | 6.749 | 3898.219 | 2.879 | 3899.503 |
| (j) | Yaw: −1.4 | x: 252.5 | −5.338 | 259.961 | −4.975 | 245.176 | 7.377 | 239.694 |
| | Pitch: −17.6 | y: 823.0 | −16.601 | 831.225 | −16.833 | 814.238 | −19.823 | 810.849 |
| | Roll: 1.8 | z: 3954.4 | 1.733 | 3962.113 | 0.919 | 3945.519 | 4.324 | 3964.525 |
| Mean Error | N/A | N/A | 1.823 | 6.306 | 2.168 | 7.039 | 2.770 | 9.818 |
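For reading the table, the orientation and position entries can be obtained from a rotation/translation estimate as sketched below. The Z-Y-X (yaw-pitch-roll) Euler convention and the degree/millimetre units are assumptions made for illustration; the paper's own frame conventions may differ.

```python
import numpy as np
import cv2

def pose_to_angles(rvec, tvec):
    """Convert a rotation vector and translation into yaw/pitch/roll (deg) and x/y/z."""
    R, _ = cv2.Rodrigues(rvec)                       # 3x3 rotation matrix
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))   # rotation about z
    pitch = np.degrees(np.arcsin(-R[2, 0]))          # rotation about y
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))  # rotation about x
    x, y, z = np.asarray(tvec, dtype=float).ravel()  # units follow the 3D landmark model (mm here)
    return yaw, pitch, roll, x, y, z
```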
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).