Search Results (6,151)

Search Parameters:
Keywords = LiDAR

21 pages, 10431 KiB  
Article
SG-LPR: Semantic-Guided LiDAR-Based Place Recognition
by Weizhong Jiang, Hanzhang Xue, Shubin Si, Chen Min, Liang Xiao, Yiming Nie and Bin Dai
Electronics 2024, 13(22), 4532; https://doi.org/10.3390/electronics13224532 - 18 Nov 2024
Abstract
Place recognition plays a crucial role in tasks such as loop closure detection and re-localization in robotic navigation. As a high-level representation within scenes, semantics enables models to effectively distinguish geometrically similar places, thereby enhancing their robustness to environmental changes. Unlike most existing semantic-based LiDAR place recognition (LPR) methods, which adopt a multi-stage and relatively segregated data-processing and storage pipeline, we propose SG-LPR, a novel end-to-end LPR model guided by semantic information. This model introduces a semantic segmentation auxiliary task that guides the model to autonomously capture high-level semantic information from the scene and implicitly integrate these features into the main LPR task, providing a unified “segmentation-while-describing” framework and avoiding additional intermediate data-processing and storage steps. Moreover, the auxiliary segmentation task operates only during model training and therefore adds no time overhead during the testing phase. The model also combines the advantages of the Swin Transformer and U-Net to address the shortcomings of current semantic-based LPR methods in capturing global contextual information and extracting fine-grained features. Extensive experiments on multiple sequences from the KITTI and NCLT datasets validate the effectiveness, robustness, and generalization ability of the proposed method, which achieves notable performance improvements over state-of-the-art methods.
(This article belongs to the Collection Advance Technologies of Navigation for Intelligent Vehicles)
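As a rough illustration of the "segmentation-while-describing" idea in the abstract above, the sketch below shows a multi-task training step in which a shared encoder feeds both a place-descriptor head and an auxiliary segmentation head, and the segmentation branch contributes only to the training loss. The module names, losses, and input format (a BEV-style image) are illustrative assumptions, not the SG-LPR implementation.

```python
# Hypothetical multi-task training step: names and losses are illustrative, not SG-LPR's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SGLPRSketch(nn.Module):
    def __init__(self, in_ch=1, feat_ch=64, num_classes=20, desc_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(                        # shared backbone
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.desc_head = nn.Sequential(                      # main place-recognition head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, desc_dim),
        )
        self.seg_head = nn.Conv2d(feat_ch, num_classes, 1)   # auxiliary head, used only in training

    def forward(self, x, with_seg=False):
        feats = self.encoder(x)
        desc = self.desc_head(feats)
        return (desc, self.seg_head(feats)) if with_seg else desc

model = SGLPRSketch()
bev = torch.randn(2, 1, 64, 64)                      # stand-in BEV projection of a LiDAR scan
seg_labels = torch.randint(0, 20, (2, 64, 64))       # stand-in per-cell semantic labels

desc, seg_logits = model(bev, with_seg=True)         # training: both heads active
main_loss = desc.pow(2).mean()                       # placeholder for the actual LPR loss
aux_loss = F.cross_entropy(seg_logits, seg_labels)   # auxiliary segmentation loss
(main_loss + 0.5 * aux_loss).backward()

with torch.no_grad():
    desc_only = model(bev)                           # inference: descriptor only, no segmentation overhead
```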
16 pages, 1262 KiB  
Article
MS3D: A Multi-Scale Feature Fusion 3D Object Detection Method for Autonomous Driving Applications
by Ying Li, Wupeng Zhuang and Guangsong Yang
Appl. Sci. 2024, 14(22), 10667; https://doi.org/10.3390/app142210667 - 18 Nov 2024
Abstract
With advancements in autonomous driving, LiDAR has become central to 3D object detection due to its precision and interference resistance. However, challenges such as point cloud sparsity and unstructured data persist. This study introduces MS3D (Multi-Scale Feature Fusion 3D Object Detection Method), a novel approach to 3D object detection that leverages the architecture of a 2D Convolutional Neural Network (CNN) as its core framework. It integrates a Second Feature Pyramid Network to enhance multi-scale feature representation and contextual integration. The Adam optimizer is employed for efficient adaptive parameter tuning, significantly improving detection performance. On the KITTI dataset, MS3D achieves average precisions of 93.58%, 90.91%, and 88.46% in easy, moderate, and hard scenarios, respectively, surpassing state-of-the-art models like VoxelNet, SECOND, and PointPillars.
(This article belongs to the Special Issue Advances in Autonomous Driving and Smart Transportation)
15 pages, 4674 KiB  
Article
Research on Automatic Alignment for Corn Harvesting Based on Euclidean Clustering and K-Means Clustering
by Bin Zhang, Hao Xu, Kunpeng Tian, Jicheng Huang, Fanting Kong, Senlin Mu, Teng Wu, Zhongqiu Mu, Xingsong Wang and Deqiang Zhou
Agriculture 2024, 14(11), 2071; https://doi.org/10.3390/agriculture14112071 - 18 Nov 2024
Abstract
To meet the growing need for automated harvesting and to address driver fatigue and inaccurate driving in manually operated corn harvesters, an automatic alignment method based on Euclidean clustering and K-means clustering is proposed. First, the corn field environment is scanned with LiDAR to obtain point cloud data, which are then subjected to pass-through filtering and statistical filtering to remove noise and non-corn contour points. Euclidean clustering and K-means clustering are then applied to the filtered point cloud. To assess the impact of Euclidean clustering on the subsequent clustering, two treatments of the point cloud data were compared during experimental validation: the first applied the K-means algorithm directly, while the second performed Euclidean clustering followed by K-means clustering. The results show that the combined Euclidean and K-means clustering method achieved a success rate of 81.5%, a 26.5% improvement over K-means clustering alone. In addition, the Rand index increased by 0.575, while accuracy improved by 57% and recall by 61%.
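As a concrete starting point, the following is a minimal sketch of the described pipeline using generic Python tools (NumPy, SciPy, scikit-learn); the filtering thresholds, clustering parameters, and the final K-means grouping into two rows are illustrative assumptions rather than the paper's implementation.

```python
# Rough sketch of the pipeline: pass-through filter -> statistical filter ->
# Euclidean clustering -> K-means. All parameters are placeholders.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN, KMeans

pts = np.random.rand(5000, 3) * [10, 4, 2]        # stand-in LiDAR points (x, y, z in metres)

# 1) Pass-through filter: keep points inside a region of interest.
roi = (pts[:, 0] > 0.5) & (pts[:, 0] < 9.5) & (pts[:, 2] < 1.8)
pts = pts[roi]

# 2) Statistical filter: drop points whose mean k-NN distance is unusually large.
k = 10
dists, _ = cKDTree(pts).query(pts, k=k + 1)
mean_d = dists[:, 1:].mean(axis=1)
pts = pts[mean_d < mean_d.mean() + 1.0 * mean_d.std()]

# 3) Euclidean clustering (approximated here by DBSCAN with a Euclidean distance tolerance).
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(pts)
clusters = [pts[labels == i] for i in set(labels) if i != -1]

# 4) K-means on the cluster centroids, e.g. to separate two corn rows for alignment.
centroids = np.array([c.mean(axis=0) for c in clusters])
if len(centroids) >= 2:
    rows = KMeans(n_clusters=2, n_init=10).fit_predict(centroids)
```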
16 pages, 12606 KiB  
Article
Monitoring and Modeling Urban Temperature Patterns in the State of Iowa, USA, Utilizing Mobile Sensors and Geospatial Data
by Clemir Abbeg Coproski, Bingqing Liang, James T. Dietrich and John DeGroote
Appl. Sci. 2024, 14(22), 10576; https://doi.org/10.3390/app142210576 - 16 Nov 2024
Abstract
Thorough investigations into air temperature variation across urban environments are essential to address concerns about city livability. Given the limited research on smaller cities, especially in the American Midwest, the goal of this research was to examine the spatial patterns of air temperature across multiple small to medium-sized cities in Iowa, a relatively rural US state. Extensive fieldwork was conducted using manually built mobile temperature sensors to collect air temperature data at high temporal and spatial resolution in ten Iowa urban areas during the afternoon, evening, and night on days exceeding 32 °C from June to September 2022. Using the random forest machine-learning algorithm and urban morphological variables estimated at varying neighborhood distances from 1 m² aerial imagery and LiDAR-derived products, we created 24 predicted surface temperature models with R² coefficients ranging from 0.879 to 0.997, the majority exceeding 0.95, all with p-values < 0.001. The normalized vegetation index and the 800 m neighborhood distance were the most significant in explaining the collected air temperature values. This study expands upon previous research by examining cities of different sizes to provide a broader understanding of the impact of urban morphology on air temperature distribution, while also demonstrating the utility of the random forest algorithm across cities ranging from approximately 10,000 to 200,000 inhabitants. These findings can inform policies addressing urban heat island effects and climate resilience.
(This article belongs to the Special Issue Geospatial Technology: Modern Applications and Their Impact)
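A minimal sketch of the modelling step described above, using scikit-learn's RandomForestRegressor on synthetic data; the predictor variables, their ranges, and the synthetic temperature relationship are placeholders standing in for the study's morphological variables derived from aerial imagery and LiDAR.

```python
# Illustrative random forest regression of air temperature on urban-morphology predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),      # e.g. vegetation index within an 800 m neighborhood
    rng.uniform(0, 30, n),     # e.g. mean canopy height from LiDAR (m)
    rng.uniform(0, 1, n),      # e.g. impervious-surface fraction
])
y = 30 + 5 * X[:, 2] - 4 * X[:, 0] + rng.normal(0, 0.5, n)   # synthetic air temperature (°C)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R2:", r2_score(y_te, rf.predict(X_te)))
print("importances:", rf.feature_importances_)   # vegetation-type predictors dominate in the study
```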
21 pages, 11350 KiB  
Article
A Fast Obstacle Detection Algorithm Based on 3D LiDAR and Multiple Depth Cameras for Unmanned Ground Vehicles
by Fenglin Pang, Yutian Chen, Yan Luo, Zigui Lv, Xuefei Sun, Xiaobin Xu and Minzhou Luo
Drones 2024, 8(11), 676; https://doi.org/10.3390/drones8110676 - 15 Nov 2024
Abstract
With the advancement of technology, unmanned ground vehicles (UGVs) have shown increasing application value in tasks such as food delivery and cleaning. A key capability of UGVs is obstacle detection, which is essential for avoiding collisions during movement. Current mainstream methods use point cloud information from onboard sensors, such as light detection and ranging (LiDAR) and depth cameras, for obstacle perception. However, the substantial volume of point clouds generated by these sensors, coupled with the presence of noise, poses significant challenges for efficient obstacle detection. This paper therefore presents a fast obstacle detection algorithm designed to ensure the safe operation of UGVs. Building on multi-sensor point cloud fusion, an efficient ground segmentation algorithm based on multi-plane fitting and plane combination is proposed to prevent ground points from being treated as obstacles. In addition, instead of point cloud clustering, a vertical projection method is used to count the distribution of potential obstacle points by converting the point cloud to a 2D polar coordinate system; points in fan-shaped cells whose density falls below a threshold are regarded as noise. To verify the effectiveness of the proposed algorithm, a cleaning UGV equipped with one LiDAR sensor and four depth cameras was used to test obstacle detection performance in various environments. Several experiments demonstrated the effectiveness and real-time capability of the proposed algorithm: it achieves an over 90% detection rate within a 20 m sensing area with an average processing time of just 14.1 ms per frame.
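The vertical-projection step can be sketched with NumPy as follows: non-ground points are binned into a fan-shaped polar grid and low-density cells are discarded as noise. The grid resolution and the density threshold are placeholder values, not those of the paper.

```python
# Sketch of the vertical-projection / polar-density filtering idea; parameters are placeholders.
import numpy as np

pts = np.random.randn(20000, 3) * [5, 5, 0.5]      # non-ground points after segmentation (x, y, z)

r = np.hypot(pts[:, 0], pts[:, 1])                  # project to 2D polar coordinates
theta = np.arctan2(pts[:, 1], pts[:, 0])

n_r, n_t = 40, 72                                   # fan-shaped grid: 0.5 m x 5 deg cells over 20 m
r_bin = np.clip((r / 20.0 * n_r).astype(int), 0, n_r - 1)
t_bin = ((theta + np.pi) / (2 * np.pi) * n_t).astype(int) % n_t

counts = np.zeros((n_r, n_t), dtype=int)
np.add.at(counts, (r_bin, t_bin), 1)                # point count per polar cell

min_pts = 5                                         # cells below this density are treated as noise
keep = counts[r_bin, t_bin] >= min_pts
obstacle_pts = pts[(r <= 20.0) & keep]
```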
25 pages, 2899 KiB  
Article
Learning Omni-Dimensional Spatio-Temporal Dependencies for Millimeter-Wave Radar Perception
by Hang Yan, Yongji Li, Luping Wang and Shichao Chen
Remote Sens. 2024, 16(22), 4256; https://doi.org/10.3390/rs16224256 - 15 Nov 2024
Abstract
Reliable environmental perception capabilities are a prerequisite for achieving autonomous driving. Cameras and LiDAR are sensitive to illumination and weather conditions, while millimeter-wave radar avoids these issues. Existing models rely heavily on image-based approaches, which may not fully characterize radar sensor data or exploit them efficiently for perception tasks. This paper rethinks the approach to modeling radar signals and proposes a novel U-shaped multilayer perceptron network (U-MLPNet) that aims to enhance the learning of omni-dimensional spatio-temporal dependencies. Our method involves innovative signal processing techniques, including a 3D CNN for spatio-temporal feature extraction and an encoder–decoder framework with cross-shaped receptive fields specifically designed to capture the sparse and non-uniform characteristics of radar signals. We conducted extensive experiments on a diverse dataset of urban driving scenarios to characterize the sensor’s performance in multi-view semantic segmentation and object detection tasks. U-MLPNet achieves competitive performance against state-of-the-art (SOTA) methods, improving mAP by 3.0% and mDice by 2.7% in RD segmentation, and AR and AP by 1.77% and 2.03%, respectively, in object detection. These improvements signify an advancement in radar-based perception for autonomous vehicles, potentially enhancing their reliability and safety across diverse driving conditions.
9 pages, 629 KiB  
Communication
Space Debris In-Orbit Detection with Commercial Automotive LiDAR Sensors
by Isabel Lopez-Calle
Sensors 2024, 24(22), 7293; https://doi.org/10.3390/s24227293 - 14 Nov 2024
Abstract
This article presents an alternative approach to detecting and mapping space debris in low Earth orbit by utilizing commercially available automotive LiDAR sensors mounted on CubeSats. The main objective is to leverage the compact size, low weight, and minimal power consumption of these sensors to create a “Large Cosmic LiDAR” (LCL) system. This LCL system would operate similarly to a giant radar circling the Earth, with strategically positioned LiDAR sensors along the target orbit. The article examines the feasibility of this concept by analyzing the relative orbital velocity between the sensor and debris objects, and calculating the time required to scan a complete orbit.
(This article belongs to the Section Environmental Sensing)
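A back-of-the-envelope version of the feasibility arithmetic mentioned in the abstract, using a generic circular-orbit calculation; the altitude and the LiDAR range figure are assumed example values, not the paper's.

```python
# Generic circular-orbit numbers relevant to the feasibility questions above (illustrative values).
import math

mu = 3.986004418e14          # Earth's gravitational parameter (m^3/s^2)
R_earth = 6371e3             # mean Earth radius (m)
h = 550e3                    # example LEO altitude (m)

r = R_earth + h
v = math.sqrt(mu / r)        # circular orbital speed
T = 2 * math.pi * r / v      # orbital period, i.e. time for one sensor to sweep the full orbit

print(f"orbital speed ~ {v / 1e3:.2f} km/s")   # ~7.6 km/s
print(f"orbital period ~ {T / 60:.1f} min")    # ~95.6 min
# A head-on encounter between co-orbital objects can approach ~2*v (~15 km/s), which bounds how
# long a debris object stays inside an automotive LiDAR's assumed 100-200 m detection range.
```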
16 pages, 4667 KiB  
Article
State Estimation for Quadruped Robots on Non-Stationary Terrain via Invariant Extended Kalman Filter and Disturbance Observer
by Mingfei Wan, Daoguang Liu, Jun Wu, Li Li, Zhangjun Peng and Zhigui Liu
Sensors 2024, 24(22), 7290; https://doi.org/10.3390/s24227290 - 14 Nov 2024
Abstract
Quadruped robots possess significant mobility in complex and uneven terrains due to their outstanding stability and flexibility, making them highly suitable for rescue missions, environmental monitoring, and smart agriculture. With the increasing use of quadruped robots in more demanding scenarios, ensuring accurate and stable state estimation in complex environments has become particularly important. Existing state estimation algorithms relying on multi-sensor fusion, such as those using IMU, LiDAR, and visual data, often face challenges on non-stationary terrains due to issues like foot-end slippage or unstable contact, leading to significant state drift. To tackle this problem, this paper introduces a state estimation algorithm that integrates an invariant extended Kalman filter (InEKF) with a disturbance observer to estimate the motion state of quadruped robots on non-stationary terrains. First, foot-end slippage is modeled as a deviation in body velocity and explicitly included in the state equations, allowing a more precise representation of how slippage affects the state. Second, the state update process integrates both foot-end velocity and position observations to improve the overall accuracy and comprehensiveness of the estimation. Finally, a foot-end contact probability model, coupled with an adaptive covariance adjustment strategy, is employed to dynamically modulate the influence of the observations. These enhancements significantly improve the filter’s robustness and the accuracy of state estimation in non-stationary terrain scenarios. Experiments conducted with the Jueying Mini quadruped robot on various non-stationary terrains show that the enhanced InEKF method offers notable advantages over traditional filters in compensating for foot-end slippage and adapting to different terrains.
(This article belongs to the Section Sensors and Robotics)
21 pages, 7459 KiB  
Article
Deep Learning for Urban Tree Canopy Coverage Analysis: A Comparison and Case Study
by Grayson R. Morgan, Danny Zlotnick, Luke North, Cade Smith and Lane Stevenson
Geomatics 2024, 4(4), 412-432; https://doi.org/10.3390/geomatics4040022 - 14 Nov 2024
Abstract
Urban tree canopy (UTC) coverage, or area, is an important metric for monitoring changes in UTC over large areas within a municipality. Several methods have been used to obtain these data, but remote sensing image classification is one of the fastest and most reliable over large areas. However, most studies have tested only one or two classification methods while using costly satellite imagery or LiDAR data. This study compares three urban tree canopy cover classifiers, testing a deep learning U-Net convolutional neural network (CNN), a support vector machine (SVM) classifier, and a random forest (RF) classifier on cost-free 2012 aerial imagery over a small and a midsize, growing southern US city. The results are then used to select the best classifier and apply it to more recent aerial imagery to determine canopy changes over a 10-year period. The changes are subsequently compared, visually and statistically, with recent urban heat maps derived from thermal Landsat 9 satellite data to compare mean temperatures within areas of UTC loss and no change. The U-Net CNN classifier provided the best overall accuracy for both cities (89.8% and 91.4%), while also requiring the most training and classification time. When compared spatially with the city heat maps, city periphery regions were most affected by substantial changes in UTC area as the cities grow and the outer regions become warmer. Furthermore, areas of UTC loss had higher temperatures than areas with no canopy change. The broader impacts of this study reach urban forestry managers at the local, state/provincial, and national levels as they seek to provide data-driven decisions for policy makers.
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
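As an illustration of the final statistical comparison, the snippet below contrasts mean temperatures in UTC-loss versus no-change areas on synthetic values using a Welch t-test; the study's exact test and temperature values are not reproduced here.

```python
# Illustrative comparison of surface temperatures in UTC-loss vs. no-change areas (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
temps_loss = rng.normal(36.5, 1.2, 400)       # land-surface temps (°C) where canopy was lost
temps_nochange = rng.normal(34.8, 1.2, 400)   # temps where canopy did not change

t, p = stats.ttest_ind(temps_loss, temps_nochange, equal_var=False)
print(f"mean loss = {temps_loss.mean():.1f} °C, mean no-change = {temps_nochange.mean():.1f} °C")
print(f"Welch t = {t:.2f}, p = {p:.2g}")
```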
21 pages, 7764 KiB  
Article
Atmospheric Boundary Layer Stability in Urban Beijing: Insights from Meteorological Tower and Doppler Wind Lidar
by Linlin Wang, Bingcheng Wan, Yuanjian Yang, Sihui Fan, Yi Jing, Xueling Cheng, Zhiqiu Gao, Shiguang Miao and Han Zou
Remote Sens. 2024, 16(22), 4246; https://doi.org/10.3390/rs16224246 - 14 Nov 2024
Abstract
The limited understanding of the structure of the urban surface atmospheric boundary layer can be attributed to its inherent complexity and a deficiency in comprehensive measurements. We analyzed one year of meteorological data and Doppler wind lidar measurements in Beijing to explore how atmospheric stability is influenced by wind speed, radiation, turbulence, and pollution levels. The results indicate that the predominant state of the urban boundary layer in Beijing is an active condition (characterized by strong unstable and unstable stability regimes) throughout the day, attributed to the significant heat storage capacity of the urban canopy. Strong stable regimes are observed more frequently during winter and autumn, peaking during transitions from night to day. Furthermore, both strong unstable and strong stable regimes occur under very weak wind conditions (indicating weak dynamic instability), with strong instability associated with high net radiation and strong stability with low net radiation (indicative of robust thermal stability). The unstable regime manifests under strong winds (reflecting strong dynamic instability) alongside moderate net radiation, characterized by elevated turbulence kinetic energy and urban boundary layer height, highlighting the critical role of mechanical turbulence generation during periods of high wind activity. Additionally, six instances of pronounced stable conditions observed during daytime can be partially attributed to low radiation coupled with high pollutant concentrations near the surface, resulting from prolonged temperature inversions due to intense radiative cooling and weak dynamic forcing. These findings are expected to have implications for the urban boundary layer climate and environment of other cities worldwide with high pollution and dense urban infrastructure.
17 pages, 4689 KiB  
Article
Development of a Methodology Based on ALS Data and Diameter Distribution Simulation to Characterize Management Units at Tree Level
by Jean A. Magalhães, Juan Guerra-Hernández, Diogo N. Cosenza, Susete Marques, José G. Borges and Margarida Tomé
Remote Sens. 2024, 16(22), 4238; https://doi.org/10.3390/rs16224238 - 14 Nov 2024
Abstract
Characterizing Management Units (MUs) with tree-level data is instrumental for a comprehensive understanding of forest structure and for providing the information needed to support forest management decision-making. Airborne Laser Scanning (ALS) data may enhance this characterization. While some studies rely on Individual Tree Detection (ITD) methods using ALS data to estimate tree diameters within stands, these methods often face challenges when the goal is to characterize MUs in dense forests. This study proposes a methodology that simulates diameter distributions from LiDAR data using an Area-Based Approach (ABA) to overcome these limitations. Focusing on maritime pine (Pinus pinaster Ait.) MUs within a forest intervention zone in northern Portugal, the research first assesses the suitability of two highly flexible Probability Density Functions (PDFs), Johnson’s SB and Weibull, for simulating the diameter distribution of maritime pine stands in Portugal using the PINASTER database. The selected PDF is then used in conjunction with ABA to derive the variables needed for parameter recovery, enabling the simulation of diameter distributions within each MU. Monte Carlo Simulation (MCS) is applied to generate a sample list of tree diameters from the simulated distributions. The results indicate that this methodology is appropriate for estimating diameter distributions within maritime pine MUs using ABA combined with the Johnson’s SB and Weibull PDFs.
(This article belongs to the Section Forest Remote Sensing)
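A simplified sketch of the last two steps (distribution simulation and Monte Carlo sampling of diameters) is given below; the Weibull parameters and stem count are placeholders, whereas in the study they are recovered per management unit from ALS-derived stand variables.

```python
# Monte Carlo sampling of tree diameters from an assumed (recovered) Weibull distribution.
import numpy as np
from scipy import stats

shape, scale, loc = 2.8, 22.0, 5.0   # assumed Weibull parameters for DBH (cm) in one MU
n_trees = 350                        # assumed stem count for that MU

dbh = stats.weibull_min.rvs(shape, loc=loc, scale=scale, size=n_trees, random_state=42)

# The sampled diameter list can then feed tree-level growth and management models.
print(f"mean DBH = {dbh.mean():.1f} cm, quadratic mean DBH = {np.sqrt((dbh ** 2).mean()):.1f} cm")
```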
15 pages, 3219 KiB  
Article
Polarization Optics to Differentiate Among Bioaerosols for Lidar Applications
by Alain Miffre, Danaël Cholleton, Adrien P. Genoud, Antonio Spanu and Patrick Rairoux
Photonics 2024, 11(11), 1067; https://doi.org/10.3390/photonics11111067 - 14 Nov 2024
Abstract
Polarization optics, which characterize the orientation of the electromagnetic field through the Stokes vector formalism, have been used effectively in lidar remote sensing to detect particles that differ in shape, such as mineral dust or pollen. In this study, for the first time, we explore the capability of polarization optics to distinguish the light-backscattering patterns of pollen and fungal spores, two complex-shaped particles that vary significantly in surface structure. A unique laboratory polarimeter operating at the lidar backscattering angle of 180.0° was used to assess their light depolarization properties in laboratory ambient air. Although the depolarization ratios of pollen and fungal spores were difficult to differentiate at the precise backscattering angle of 180.0°, slight deviations from 180.0° revealed separate scattering matrices for pollen and fungal spores. This demonstrates that polarization optics can unambiguously differentiate these particles based on their light-(back)scattering properties. These findings are consistent at both 532 and 1064 nm. This non-invasive, real-time technique is valuable for environmental monitoring, where rapid identification of airborne allergens is essential, as well as in the agricultural and health sectors. Polarization-based light scattering thus offers a valuable method for characterizing such atmospheric particles, aiding the management of airborne contaminants.
(This article belongs to the Special Issue Polarization Optics)
27 pages, 7874 KiB  
Review
Advances in LiDAR Hardware Technology: Focus on Elastic LiDAR for Solid Target Scanning
by Wentao Li, Tianyun Shi, Rui Wang, Jingjie Yang, Zhen Ma, Wanpeng Zhang, Huijin Fu and Pengyue Guo
Sensors 2024, 24(22), 7268; https://doi.org/10.3390/s24227268 - 14 Nov 2024
Abstract
This paper explores the development of elastic LiDAR technology, focusing specifically on key components relevant to solid target scanning applications. By analyzing its fundamentals and working mechanisms, the advantages of elastic LiDAR for precise measurement and environmental sensing are demonstrated. This paper emphasizes innovative advances in emitters and scanning systems, and examines the impact of optical design on performance and cost. Various ranging methods are discussed. Practical application cases of elastic LiDAR are presented, and future trends and challenges are explored. The purpose of this paper is to provide a comprehensive perspective on the technical details of elastic LiDAR, the current state of application, and future directions. All instances of “LiDAR” in this paper specifically refer to elastic LiDAR.
(This article belongs to the Section Radar Sensors)
20 pages, 6095 KiB  
Article
MSANet: LiDAR-Camera Online Calibration with Multi-Scale Fusion and Attention Mechanisms
by Fengguang Xiong, Zhiqiang Zhang, Yu Kong, Chaofan Shen, Mingyue Hu, Liqun Kuang and Xie Han
Remote Sens. 2024, 16(22), 4233; https://doi.org/10.3390/rs16224233 - 14 Nov 2024
Abstract
Sensor data fusion is increasingly crucial in the field of autonomous driving, and LiDAR-camera fusion has become a prevalent research topic. However, accurate calibration of data from different modalities is essential for effective fusion. Current calibration methods often depend on specific targets or manual intervention, which is time-consuming and limits generalization. To address these issues, we introduce MSANet: LiDAR-Camera Online Calibration with Multi-Scale Fusion and Attention Mechanisms, an end-to-end deep-learning-based online calibration network for inferring 6-degree-of-freedom (DOF) rigid body transformations between 2D images and 3D point clouds. By fusing multi-scale features, we obtain feature representations that contain fine detail and rich semantic information. An attention module correlates features across the two modalities to complete feature matching. Rather than estimating the precise parameters directly, MSANet corrects deviations online, aligning the initial calibration with the ground truth. Extensive experiments on the KITTI datasets demonstrate that our method performs well across various scenarios; in particular, the average translation prediction error improves by 2.03 cm over the best result among the compared methods.
(This article belongs to the Section Remote Sensing Image Processing)
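The geometric relation that such a calibration network refines can be sketched as follows: LiDAR points are projected into the image with camera intrinsics K and a 6-DOF extrinsic transform, and an online method predicts a small corrective transform on top of an initial calibration. All numeric values and the simplified axis convention (LiDAR frame assumed aligned with the camera optical frame) are illustrative.

```python
# Projection of LiDAR points into an image with intrinsics and a refined 6-DOF extrinsic (illustrative).
import numpy as np

K = np.array([[721.5,   0.0, 609.6],
              [  0.0, 721.5, 172.9],
              [  0.0,   0.0,   1.0]])            # pinhole intrinsics (KITTI-like, illustrative)

def extrinsic(rx, ry, rz, t):
    """Build a 4x4 rigid transform from Euler angles (rad) and a translation (m)."""
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T

T_init = extrinsic(0.0, 0.0, 0.0, [0.02, -0.08, -0.27])      # rough initial calibration
dT = extrinsic(0.002, -0.001, 0.003, [0.01, 0.0, -0.02])     # correction a network might predict

pts = np.random.rand(100, 3) * [10, 4, 30] + [-5, -2, 5]     # LiDAR points, depth along z here (m)
pts_h = np.hstack([pts, np.ones((100, 1))])
cam = (dT @ T_init @ pts_h.T)[:3]                            # apply refined extrinsics
uv = (K @ cam) / cam[2]                                      # perspective projection
print(uv[:2, :5].T)                                          # pixel coordinates of the first 5 points
```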
6 pages, 210 KiB  
Proceeding Paper
Localization Challenges in Autonomous Motorsport
by Balázs Kail, Zsombor Papp and Máté Fazekas
Eng. Proc. 2024, 79(1), 88; https://doi.org/10.3390/engproc2024079088 - 12 Nov 2024
Abstract
Precise position tracking is crucial for autonomous vehicle development. Technological innovations offer more accurate sensors, but the ideal combination is still debated. This study, inspired by a motorsport project, analyzes errors in simpler positioning solutions, examining environmental interference and dynamic movement impacts. It discusses real-world testing challenges and explores improving precision through sensor combinations. Using high-accuracy GPS and accelerometers with LIDAR, radar, or cameras enhances navigation, especially in complex environments. The study emphasizes that the future of autonomous vehicle localization depends on technological advancements, sensor integration, and intelligent algorithms.