
Evaluation of four point cloud similarity measures for use in autonomous driving

Felix Berens, Stefan Elser and Markus Reischl

Abstract

Measuring the similarity between point clouds is required in many areas. In autonomous driving, point clouds for 3D perception are estimated from camera images, but these estimations are error-prone. Furthermore, measures for quantifying their quality against ground truth are lacking. In this paper, we derive conditions that a point cloud comparison measure needs to fulfill and, on this basis, evaluate the Chamfer distance, a lower bound of the Gromov-Wasserstein metric, and the ratio measure. We show that the ratio measure is not affected by erroneous points, i.e., it cannot reflect them, and therefore introduce the new measure “average ratio”. All measures are evaluated and compared on exemplary point clouds. We discuss their characteristics, advantages, and drawbacks with respect to interpretability, noise resistance, environmental representation, and computation.
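To make the first of these measures concrete, the following is a minimal illustrative sketch (not taken from the paper) of the Chamfer distance between two point clouds, written in NumPy. It assumes the common symmetric variant that sums squared nearest-neighbor distances in both directions; other variants average over the cloud sizes instead.

    import numpy as np

    def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
        """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3).

        Assumes the sum-of-squared nearest-neighbor distances variant;
        other definitions average instead of summing.
        """
        # Pairwise squared Euclidean distances, shape (N, M).
        d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
        # Nearest neighbor in q for every point of p, and vice versa.
        return float(d2.min(axis=1).sum() + d2.min(axis=0).sum())

In this form, a single erroneous point far from the other cloud contributes a large term, which is exactly the kind of sensitivity to erroneous points on which the measures above are compared. For large clouds, a k-d tree (e.g., scipy.spatial.cKDTree) would be preferable to the quadratic-memory distance matrix used here.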


About the authors

Felix Berens

M. Sc. Felix Berens is a doctoral candidate in the field of sensor fusion for autonomous driving at the Institute for Automation and Applied Computer Science at the Karlsruhe Institute of Technology and at the Institute for Artificial Intelligence at the Ravensburg-Weingarten University of Applied Sciences. Research Interests: Sensor fusion, sensor placement, machine learning, object detection.

Stefan Elser

Prof. Dr. rer. nat. Stefan Elser is a professor for autonomous driving at the Ravensburg-Weingarten University of Applied Sciences. Research Interests: Machine learning, object detection, sensor fusion, and their applications in autonomous driving.

Markus Reischl

apl. Prof. Dr.-Ing. Markus Reischl is head of the research group “Machine Learning for High-Throughput and Mechatronics” at the Institute for Automation and Applied Computer Science at the Karlsruhe Institute of Technology. Research Interests: Man-machine interfaces, image processing, machine learning, data analytics.


Received: 2020-08-28
Accepted: 2021-01-19
Published Online: 2021-05-27
Published in Print: 2021-06-25

© 2021 Walter de Gruyter GmbH, Berlin/Boston
