MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023 · openaccess.thecvf.com
Abstract
This paper introduces the Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training and a carefully designed data-efficient 3D object detection benchmark on the Waymo dataset. Inspired by the scene-voxel-point hierarchy in downstream 3D object detectors, we design masking and reconstruction strategies accounting for voxel distributions in the scene and local point distributions within the voxel. We employ a Reversed-Furthest-Voxel-Sampling strategy to address the uneven distribution of LiDAR points and propose MV-JAR, which combines two techniques for modeling the aforementioned distributions, resulting in superior performance. Our experiments reveal limitations in previous data-efficient experiments, which uniformly sample fine-tuning splits with varying data proportions from each LiDAR sequence, leading to similar data diversity across splits. To address this, we propose a new benchmark that samples scene sequences for diverse fine-tuning splits, ensuring adequate model convergence and providing a more accurate evaluation of pre-training methods. Experiments on our Waymo benchmark and the KITTI dataset demonstrate that MV-JAR consistently and significantly improves 3D detection performance across various data scales, achieving up to a 6.3% increase in mAPH compared to training from scratch. Codes and the benchmark are available at https://github.com/SmartBot-PJLab/MV-JAR.
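As a rough illustration of the sampling idea described in the abstract, the sketch below applies farthest point sampling over voxel centers to pick a well-spread subset of voxels to keep visible, masking the rest so that sparse regions of the LiDAR scene are not wiped out entirely. This is a minimal reading of the abstract only: the function name, the keep ratio, and the keep-vs-mask direction are assumptions, not the authors' released implementation (see the linked repository for that).

```python
import numpy as np

def reversed_furthest_voxel_sampling(voxel_centers, keep_ratio=0.1, seed=0):
    """Hypothetical sketch: farthest point sampling on voxel centers.

    Selects a spatially well-spread subset of non-empty voxels to keep
    visible; every other voxel is masked. voxel_centers is an (N, 3)
    array; returns a boolean array of shape (N,) where True = masked.
    """
    rng = np.random.default_rng(seed)
    n = len(voxel_centers)
    n_keep = max(1, int(n * keep_ratio))
    keep = np.empty(n_keep, dtype=np.int64)
    keep[0] = rng.integers(n)  # random seed voxel
    # Distance of every voxel to its nearest already-kept voxel.
    dist = np.linalg.norm(voxel_centers - voxel_centers[keep[0]], axis=1)
    for i in range(1, n_keep):
        keep[i] = int(dist.argmax())  # farthest voxel from the kept set
        new_d = np.linalg.norm(voxel_centers - voxel_centers[keep[i]], axis=1)
        dist = np.minimum(dist, new_d)
    masked = np.ones(n, dtype=bool)
    masked[keep] = False
    return masked

# Toy usage: mask ~90% of 1000 random voxel centers in a synthetic scene.
centers = np.random.default_rng(1).uniform([-50, -50, -2], [50, 50, 6], size=(1000, 3))
mask = reversed_furthest_voxel_sampling(centers, keep_ratio=0.1)
print(mask.sum(), "of", len(mask), "voxels masked")
```

The design intuition, per the abstract, is that uniform random masking would disproportionately erase sparse far-range regions; keeping a farthest-point-sampled subset preserves scene coverage regardless of point density.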