1. Introduction
Deep learning is a form of machine learning that applies neural networks to mimic the structural and functional dynamics of the human brain [1]. The scope of application of deep learning is extensive. The operational aspects of autonomous cars rely heavily on the analysis of vast environmental data, on the basis of which operational decisions and situational awareness are derived [1,2]. This ability to learn from the environment through the collection and analysis of data in autonomous cars is enabled by deep learning [2]. Initially, autonomous vehicles were a fictional idea. However, due to the availability and accessibility of advanced technologies like deep learning, autonomous vehicles are now a reality [3]. Therefore, understanding how AI-based technologies like deep learning work in AVs is a vital first step towards level five automation.
This research argues that deep learning algorithms have been extensively used to optimize the technical and operational architecture of Autonomous Vehicles (AVs). Specifically, it postulates that deep learning algorithms enable perception, decision-making, localization, and mapping during autonomous navigation. However, the application of deep learning is also hindered by challenges including the complexity of model training, sensor limitations, and the complexity and uncertainty of deep learning systems themselves. The paper concludes that addressing these challenges will optimize the accuracy and robustness of deep learning systems in AVs.
The main objective of the study is to analyze the uses and challenges of deep learning in Autonomous Vehicles (AVs). By analyzing how deep learning is applied in AVs, this research paper will enhance the existing understanding of AI-based technologies used in AVs. The paper will also highlight why the role of technologies like deep learning in AVs is indispensable. By highlighting some of the barriers and challenges involved in applying deep learning models in AVs, this study will also inspire future research directions targeted at scaling up the application of deep learning in AVs.
Ultimately, this study will contribute towards the feasible, large-scale adoption of deep learning in AVs.
2. Research Method
This research paper applies a qualitative research method to achieve the research objective: a systematic literature review incorporating a dynamic and comprehensive scope of literature related to the application of deep learning in autonomous vehicles. A systematic literature review was used because it enabled the researcher to acquire an in-depth, clear, and comprehensive overview of the research variables [4,5]. The review was restricted to publications from 2017 to 2023 to derive an up-to-date overview of the applications and challenges of deep learning in autonomous vehicles.
3. Literature Review
3.1. Autonomous Vehicles
The wave of modernization and technological development is responsible for the paradigm shift being witnessed in the automotive industry. By 2030, level 2 AVs are projected to represent 92% of the market share and level 3 AVs the remaining 8% [6]. Additionally, the AV market is expected to grow by 39.47%, from $54.23 billion in 2019 to $75.6 billion in 2026, and to ultimately reach $87 billion by 2030 [7]. By 2035, self-driving cars are also expected to account for 25% of total car sales. Between 2019 and 2026, Europe is expected to record the highest growth rate in the AV market at 42.6%, with North America also expected to be a leader in the AV industry [7,8]. Nevertheless, whereas AI systems like deep learning have enhanced the technical architecture of AVs, the large-scale adoption of AVs is hindered by social acceptability, adverse road conditions, weather, data privacy, and cybersecurity, among other factors [1].
Despite the current development, Biswas & Wang [9] argue that practical level 5 autonomous vehicles are still under development. The primary factors behind this include unaddressed technological barriers, alongside trust, safety, and ethical issues. Nevertheless, technology giants like Tesla, Google, Audi, BMW, and Mercedes-Benz, through ongoing road testing, have extensively influenced current research designed to address the barriers in AV technological architecture [10]. Through such efforts, companies like Tesla and Google have managed to incorporate self-driving features in current AVs. Furthermore, with the increasing availability of data and advanced technology, the detection accuracy, latency, and response time of AVs are expected to improve further.
The six automation levels used to categorize autonomous vehicles are summarized in Table 1 below.
Table 1. The Automation Levels Used to Categorize Autonomous Vehicles.

| Level | Description |
| --- | --- |
| Level 0 (No automation) | The dynamic driving task (DDT) is fully controlled by human beings [11]. |
| Level 1 (Driver assistance) | The lowest level of automation, incorporating mild driver assistance systems like adaptive cruise control. |
| Level 2 (Partial driving automation) | Incorporates an advanced driver assistance system that controls aspects like speed and steering. Human intervention is still required. |
| Level 3 (Conditional driving automation) | Advanced autonomy with numerous sensors to analyze the environment and make informed decisions. Incorporates autonomous systems like automated emergency braking (AEB), traffic jam assist, and driver monitoring (DM), among other functionalities [11]. |
| Level 4 (High driving automation) | Can operate in self-driving mode but, due to geo-fencing, is limited to certain low-speed urban areas. Incomprehensive legislation and the inadequate infrastructure required for such AVs also limit self-driving [11]. |
| Level 5 (Fully autonomous driving) | The dynamic driving task is eliminated, so such AVs require no human intervention and will not be limited by geo-fencing. Despite extensive ongoing research on actualizing level 5 AVs, their universal adoption is a long-term objective. |
3.2. The Need for Autonomous Vehicles
There are various reasons why autonomous cars are relevant and significant against the backdrop of changing transportation. Besides alleviating the economic and environmental issues related to transportation, autonomous vehicles are promising solutions to congestion, accidents, and emissions [12].
Notably, Fayyad et al. [10] agree that autonomous vehicles will provide a safe, efficient, cost-effective, and accessible means of transport. Autonomous cars are also expected to alleviate the undesirable impacts of carbon emissions on climate change. For example, Ercan et al. [13] illustrate that a 1% increase in the sale of electric vehicles has the potential to reduce carbon emissions in a city by 0.096%, and by 0.087% in a nearby city. Additionally, electric vehicles also reduce carbon emissions indirectly through substitution, consumption, and technological effects. Overall, the results of Ercan et al. [13], which analyzed data from more than 929 metro/metropolitan areas in the US, showed that the adoption of autonomous vehicles could reduce greenhouse gases by 34% by 2050.
However, another study undertaken at the Massachusetts Institute of Technology (MIT) revealed that the powerful onboard computers programmed to run deep learning and neural networks are not environmentally friendly [14]. Widespread global adoption of autonomous vehicles is therefore likely to generate over 0.14 gigatons of greenhouse emissions annually, similar to the annual greenhouse emissions of Argentina [14]. Consequently, enhancing the theoretical, technical, and operational understanding of deep learning in autonomous vehicles, together with its challenges, is likely to alleviate such undesirable environmental impacts.
Autonomous vehicles are also expected to solve other transport-related issues like accidents and congestion. For instance, 93% of accidents, especially crashes, are caused by human error [15]. Autonomous vehicles will reduce such statistics by reducing human involvement in driving, which will subsequently minimize human errors like speeding, distraction, and driving under the influence [15]. This impact on minimizing accidents has already been realized in semi-autonomous vehicles. A survey by the Insurance Institute for Highway Safety showed that partially autonomous features like forward collision avoidance, side-view assistance, and lane departure warning reduced road crashes, accidents, and fatalities by at least 33% [16]. Karnati et al. [1] also agree that the application of AI in AVs will optimize the ability of self-driving vehicles to address some of the problems associated with conventional cars, such as road safety, limited independence for people with disabilities, low efficiency, traffic congestion, and environmental pollution.
3.3. Deep Learning
Deep learning is a specialized form of machine learning based on artificial neural networks (ANNs) whose structure is derived from the human brain. Deep learning algorithms comprise multiple layers of ANNs that are trained to extract and learn relevant features from vast amounts of data [17,18]. This ability to learn and extract relevant features makes deep learning algorithms applicable to different AV capabilities like natural language processing, image and speech recognition, and autonomous navigation [19]. One of the turning points in deep learning that fostered its application in self-driving cars, among other applications, was the achievement of state-of-the-art results in the ImageNet visual recognition challenge by a deep convolutional neural network called AlexNet [20,21].
Ultimately, the widespread application of deep learning has been influenced by various factors, including advancements in powerful computing resources and the availability of large, high-quality, and reliable training datasets [22,23]. Some of the existing deep learning structures are summarized in Table 2 below.
Table 2. Deep Learning Structures.

| Structure | Description |
| --- | --- |
| Autoencoder | Comprises an encoder and a decoder. Designed to learn a compressed version of input data from which the original input can be recreated [19]. Autoencoders are incorporated with end-to-end deep learning strategies to help AVs determine the appropriate steering angle during autonomous navigation [24]. |
| Convolutional neural networks (CNN) | CNNs use convolution operations to extract and learn relevant features from data, helping to identify data patterns that would be challenging to detect using traditional algorithms. They have a hierarchical structure whereby the lower layers learn simple data features while the higher layers extract complex data features [25]. |
| Deep belief networks (DBN) | Comprise multiple layers of Restricted Boltzmann Machines (RBMs). The shallow, two-layered RBMs are stacked on top of each other to form a deep DBN [26]. Besides being trained through unsupervised learning, DBNs can be applied in AV functions like natural language processing, speech recognition, and computer vision, relevant to the detection and classification of images during autonomous navigation [27]. |
| Recurrent neural networks (RNN) | RNNs can analyze sequential data as input. This ability to model temporal dependencies and patterns has enabled RNNs to be used for different AV functions like natural language processing, speech recognition, and time-series prediction [19]. However, RNNs are sensitive to the order of input data. |
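To make the convolution operation described in Table 2 concrete, the following minimal sketch slides a one-dimensional kernel over a signal, which is the core operation CNN layers perform in two dimensions over images. The function name, kernel, and signal are illustrative assumptions, not taken from any cited work:

```python
def conv1d(signal, kernel):
    """Slide the kernel across the signal, summing element-wise products."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A simple edge-detecting kernel responds strongly where neighbouring
# values differ, i.e. at the boundaries of the "object" in the signal.
edge_kernel = [-1.0, 1.0]
signal = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]
print(conv1d(signal, edge_kernel))  # -> [0.0, 1.0, 0.0, 0.0, -1.0]
```

The hierarchical feature learning mentioned in the table comes from stacking many such (learned) kernels, with later layers convolving over the outputs of earlier ones.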
4. Results
4.1. Uses of Deep Learning in Autonomous Vehicles
Deep learning has multiple uses in AVs, as demonstrated by a wide scope of literature related to the research topic. These uses are associated with components of AVs like perception, decision-making, motion planning, and safety validation.
4.2. Perception
Perception refers to the ability of the AV to continuously scan and track the surrounding environment. Perception also involves the semantic segmentation of roads with different drivable surfaces, such as off-road and tarmacked surfaces. For this purpose, the AV uses LiDAR and radar sensors besides cameras to mimic human vision [28]. The existing deep learning algorithms enable both mediated and direct perception.
Mediated perception applies deep learning and convolutional neural networks to detect images of the surrounding environment. A detailed map of the surroundings is developed from the analysis of distances and coordinates relative to other vehicles and physical obstacles like trees and road signs [29]. The study of Tong et al. [30] sought to establish the perception accuracy of deep learning algorithms. It showed that deep learning enabled AVs to detect traffic signs with an accuracy of 99.46%, which exceeded human performance in some tests [30]. Other deep learning models like YOLO Darknet v2 process 40-70 frames per second, achieving an 80% detection accuracy rate in real-time AV driving [30]. Ultimately, high-definition images are expected to enhance the detection accuracy of deep learning algorithms. Additionally, Guan et al. [31] acknowledge that advanced techniques like saliency analysis and edge detection have been developed to derive high-definition images.
On the other hand, direct perception involves decision-making and integrated scene awareness. Hence, direct perception focuses on immediate AV aspects like steering wheel motion and speed while avoiding preliminary localization and mapping [32]. Instead of using a detailed local map, the AV uses deep learning to develop only the sections of the map required to acquire immediate scene awareness, such as the distance from nearby vehicles and lane markings [33].
One of the most recommended deep learning algorithms used for direct perception in AVs is PilotNet. The model is efficient because it comprises a single normalization layer and five convolutional layers, followed by three fully connected layers [34]. Using sensor and camera data as input, the primary output of the model is the steering parameters used to steer the AV.
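As a back-of-the-envelope check of this layer stack, the sketch below computes the spatial size of the feature maps after the five convolutional layers. The filter sizes, strides, and 66×200 input resolution are assumptions taken from NVIDIA's published description of PilotNet (three 5×5 layers with stride 2 followed by two 3×3 layers with stride 1), not from the sources cited here:

```python
def conv_out(size, kernel, stride):
    # Output length of a "valid" (no padding) convolution along one dimension.
    return (size - kernel) // stride + 1

# (kernel, stride) for the five convolutional layers (assumed PilotNet config).
conv_layers = [(5, 2), (5, 2), (5, 2), (3, 1), (3, 1)]
h, w = 66, 200  # input image resolution (assumption)
for kernel, stride in conv_layers:
    h, w = conv_out(h, kernel, stride), conv_out(w, kernel, stride)
print(h, w)  # spatial size of the last feature maps, fed to the FC layers
```

The shrinking feature maps illustrate why such a compact network can run in real time: by the final convolutional layer the spatial extent is tiny, leaving a small flattened vector for the three fully connected layers that regress the steering parameters.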
Odometry is also an important aspect of perception enabled by deep learning algorithms. It involves determining the relative change in position and orientation during autonomous navigation [35,36]. Notably, Li et al. [37] and Mohamed et al. [38] established that visual odometry algorithms like UnDeepVO rely significantly on unlabeled data, unsupervised learning, and deep neural networks to enhance accuracy and robustness. Others, like Probabilistic Visual Odometry (ESP-VO), also use deep learning, recurrent convolutional neural networks (RCNNs), and monocular cameras to estimate pose and generate depth maps [39,40]. These examples demonstrate the extensive application of deep learning algorithms in fostering perception during autonomous navigation in AVs.
4.3. Decision Making
The scope of decision-making in autonomous vehicles comprises path planning, automated parking, traffic and obstacle manoeuvres, and following other vehicles [41]. In level five AVs, these decisions take place without human intervention. For human drivers, optimal decision-making around such aspects is particularly affected by uncontrollable external factors like the potential actions of other vehicles [42]. AVs, by contrast, apply advanced AI technologies like deep learning to address such external factors by precisely predicting the actions of other vehicles through the manipulation of predicted position sets using stochastic models and probability distributions [42]. Deep learning is supplemented by other AI systems like steering control, speech recognition, and gesture control, among others, that optimize the decision-making capabilities of AVs.
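The idea of predicted position sets under a probability distribution can be illustrated with a small Monte-Carlo sketch: sample another vehicle's future position under a constant-velocity model with Gaussian noise and estimate the probability that it enters a safety radius. The motion model, noise level, and function name are illustrative assumptions, not a method from the cited studies:

```python
import random

def collision_probability(pos, vel, horizon, point, radius,
                          noise=0.5, n_samples=10_000, seed=0):
    """Estimate P(vehicle is within `radius` of `point` after `horizon` seconds)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # Constant-velocity prediction plus Gaussian position noise that
        # grows with the prediction horizon.
        x = pos[0] + vel[0] * horizon + rng.gauss(0, noise * horizon)
        y = pos[1] + vel[1] * horizon + rng.gauss(0, noise * horizon)
        if (x - point[0]) ** 2 + (y - point[1]) ** 2 <= radius ** 2:
            hits += 1
    return hits / n_samples

# A vehicle at the origin moving at 1 m/s along x is very likely to be
# near (2, 0) after two seconds, and essentially never near (100, 100).
p_near = collision_probability((0, 0), (1, 0), 2.0, (2, 0), 2.0)
p_far = collision_probability((0, 0), (1, 0), 2.0, (100, 100), 2.0)
```

A planner could then prefer trajectories whose estimated collision probability stays below a safety threshold, which is the role such predicted position sets play in decision-making.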
One of the challenging aspects of decision-making tasks like path planning and obstacle manoeuvres is the dynamic and uncertain nature of real-life driving [43]. To address such challenges, the literature analyzed in this study showed that contemporary path-planning methods have been combined with advanced AI-based path-planning methods like deep learning [44]. For instance, Meng et al. [45] found that contemporary techniques used in optimizing path planning included distance control algorithms, lane curvature, the bumper field approach, and stereo block matching, among others like vector field histograms. However, such contemporary techniques suffer from limitations like being computationally intensive and time-consuming [46]. To manage these limitations, advanced AI models like deep learning have been integrated to optimize path planning. The integration of deep learning has enhanced the capacity to analyze inputs like the steering angle, vehicle speed, and images to control the lateral flow of information, the speed, and the angle of the steering system [47].
4.4. Localization and Mapping
Localization is the ability of the AV to effectively use its sensors to precisely detect and perceive environmental features based on the developed environmental map (Reid, et al., 2019). It involves identifying, classifying, and integrating physical obstacles and features into an actual navigational map using sensor data and deep learning, among other AI-based systems [48]. The navigational capabilities of AVs are extensively dependent on localization. Ultimately, Li et al. [49] agree that localization is a major indicator of an autonomous system's reliability since it is one of the primary sources of autonomous driving challenges.
By applying sensor data, deep learning algorithms, and other AI-based systems, the AV should be able not only to estimate its location but also to detect and assess the proximity of physical obstacles and other vehicles [49]. Deep learning algorithms rely on a diverse scope of sensor data to enable localization in AVs. For example, the point clouds generated by LiDAR are analyzed to develop a map of the environment. Additionally, techniques like particle filters enable deep learning models to enhance the accuracy of sensor data by comparing the observed environmental description with a known map used as part of the algorithm's training dataset [50]. Features that cannot be precisely identified through this comparison are used by the deep learning algorithm to add new features to the existing map.
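This compare-against-a-known-map idea can be sketched with a minimal one-dimensional particle filter. The implementation below is a generic textbook-style sketch, not code from any cited AV system; the noise model, landmark setup, and function name are illustrative assumptions. Each particle is a candidate vehicle position, weighted by how well its predicted range to a known landmark matches the observed range, then resampled:

```python
import math
import random

def particle_filter_step(particles, motion, observed_range, landmark, noise, rng):
    # Predict: apply the motion command to every particle, with motion noise.
    moved = [p + motion + rng.gauss(0, noise) for p in particles]
    # Weight: Gaussian likelihood of the observed range to the known landmark.
    weights = [math.exp(-((landmark - p) - observed_range) ** 2 / (2 * noise ** 2))
               for p in moved]
    # Resample particles in proportion to their weights.
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(1)
particles = [rng.uniform(0, 10) for _ in range(500)]  # initial ignorance
for _ in range(5):
    # True pose is 3.0 and the landmark sits at 10.0, so the observed
    # range to the landmark is 7.0 at every step (stationary vehicle).
    particles = particle_filter_step(particles, 0.0, 7.0, 10.0, 0.5, rng)
estimate = sum(particles) / len(particles)  # particle cloud mean, near 3.0
```

After a few steps the particle cloud collapses around the true pose, mirroring how measurement-to-map comparison sharpens a vehicle's position estimate.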
Sensor data and deep learning algorithms are used to develop absolute and relative maps. Absolute maps describe a geographical location based on its fixed point in a worldwide coordinate frame [51]. They show stationary landmarks defined by two parameters that give their location on a Cartesian plane relative to the worldwide coordinate frame [51]. Relative maps, on the other hand, are used by AVs to derive awareness of the distance between two landmarks [52]. Golroudbari & Sabour [19] also show that deep learning algorithms like convolutional neural networks are effective in object detection through their ability to acquire a comprehensive representation of the object under detection.
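The relationship between absolute and relative representations can be illustrated with a small coordinate transform: given the vehicle's pose in the worldwide (absolute) frame, a landmark's absolute coordinates can be re-expressed relative to the vehicle. The function below is a generic geometry sketch under that assumption, not code from the cited sources:

```python
import math

def to_relative_frame(landmark, vehicle_pos, vehicle_heading):
    """Express an absolute-frame landmark in the vehicle's own frame.

    In the result, +x points straight ahead of the vehicle and +y to its left.
    """
    dx = landmark[0] - vehicle_pos[0]
    dy = landmark[1] - vehicle_pos[1]
    # Rotate the world-frame offset by the negative of the vehicle heading.
    c, s = math.cos(-vehicle_heading), math.sin(-vehicle_heading)
    return (c * dx - s * dy, s * dx + c * dy)

# A vehicle at the origin facing +y (heading pi/2) sees a landmark at
# world coordinates (0, 5) as lying 5 m straight ahead.
ahead = to_relative_frame((0.0, 5.0), (0.0, 0.0), math.pi / 2)
```

Absolute maps store the world-frame coordinates; the vehicle derives relative quantities like "distance ahead" on the fly with exactly this kind of transform.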
Notably, deep-learning-based object detection and localization mechanisms like Faster R-CNN and YOLO have demonstrated high accuracy, robustness, and speed in the real-time detection of obstacles regardless of factors like adverse weather conditions or darkness [53]. This accuracy in object detection during autonomous navigation has also been ascertained by several studies. For example, Afif et al. [54] assessed the effectiveness of the lightweight EfficientDet in autonomous navigation. The study established that this deep learning object detection approach, alongside tools like TensorFlow and OpenCV, optimized obstacle detection by providing a high-resolution binary image of the obstacle [54]. Therefore, despite the challenges of acquiring adequate training data, it is evident that deep-learning-based systems are highly effective in AV localization and mapping.
5. Challenges
5.1. Complexity and Uncertainty
The application of deep learning in autonomous vehicles is also associated with complexity and uncertainty. According to Grigorescu et al. [55], such uncertainties emerge in two major forms. The first arises from datasets, due to the inability of sensors to work properly under different external factors like weather; this leads to multiple errors emerging from inadequate, poor-quality datasets. The second is associated with the type of deep learning model used: these uncertainties are influenced by the misalignment between the functional requirements of the deep learning model and the data sensors used [56]. Unfortunately, the dynamic and unpredictable nature of the driving environment is likely to make data sensors unreliable in meeting the data collection standards of the respective deep learning models [34].
Besides the unpredictable and dynamic nature of the driving environment, the uncertainty of deep learning models in guaranteeing correct outcomes in autonomous vehicles is influenced by multiple other factors. For instance, malicious cybersecurity attacks are a significant factor since they do not require physical access to the vehicle [9]. Additionally, unfamiliar and challenging scenarios like adverse weather conditions, severe collisions, and road blockages can lead to data analysis deviations in the state-of-the-art deep learning algorithms used. For example, lane detection might become a challenge at night if the deep learning algorithm relies on datasets collected during the daytime. In such cases, humans can step in to maintain safety. Nevertheless, the objective should be to develop advanced deep learning models whose detection accuracy is not thwarted by unpredictable and dynamic situations.
This uncertain, unstable, and delicate detection accuracy of deep learning models is likely to affect passenger safety in autonomous vehicles. Notably, Biswas & Wang [9] showed that minor changes in environmental conditions and sensor data, such as cropped graphics, might affect the detection and segmentation capabilities of advanced driver assistance systems. Additionally, the automotive safety standards under ISO 26262 were developed without considering deep learning applications in autonomous vehicles [9]. Therefore, the standardization of safety issues and standards is vital to the integration of artificial intelligence systems like deep learning in autonomous vehicles.
5.2. Sensor Challenges
The detection accuracy and latency of deep learning algorithms significantly depend on the quality of data obtained from the multiple sensors embedded in AVs. Notably, one of the approaches used to foster accuracy in AVs is sensor fusion. It involves integrating data from different sensors to increase the quantity and quality of data available for deep learning algorithms to make better and more accurate decisions [57,58]. For example, the integration of LiDAR and camera data optimizes AV performance at night [58]. However, adverse weather conditions are likely to degrade the performance of data collection sensors, which sensor fusion can only slightly mitigate.
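A minimal way to illustrate sensor fusion is inverse-variance weighting of independent range estimates: the less noisy sensor receives more weight, and the fused variance drops below that of either sensor alone. The readings and variances below are hypothetical, and production AV fusion stacks are far more elaborate:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent (value, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * value for w, (value, _) in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused_value, fused_variance

# Hypothetical range to an obstacle: LiDAR is precise, the camera less so.
lidar = (10.2, 0.04)   # metres, variance in m^2
camera = (9.5, 1.0)
value, variance = fuse([lidar, camera])
```

The fused estimate stays close to the LiDAR reading while its variance falls below the LiDAR's own, which is the statistical payoff of combining independent sensors that the fusion literature describes.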
Biswas & Wang [9] also acknowledge that a challenge for AV manufacturers emerges from the tradeoff between the cost of sensors and their accuracy. The outcome is different manufacturers opting for different sensors. Such sensor inconsistencies, among others, lead to heterogeneous datasets that might have undesirable effects on accuracy. Besides the varying reliability and quality of sensors, Yeong et al. [58] note that the different frequencies and timestamps of sensors also affect synchronization accuracy and, subsequently, the safety of AVs.
Another issue is the lack of universal standards and comprehensive research regarding sensor failure. Sensor failure is an important factor since the safety and reliability of AVs rely significantly on the presence and optimal functionality of fundamental sensors [59]. Therefore, undetected sensor failure might cause severe technical failures like accidents. Besides technical failure, sensor failure due to external factors like dirt, deviation, and blockage might also lead to the communication of false data within the AV's architecture [60].
5.3. The Complexity of Model Training
For optimal implementation of deep learning algorithms, training with representative datasets that simulate different scenarios is required. However, the driving environment of an AV comprises dynamic and diverse situations that might not be adequately covered by the model training dataset [9]. Such gaps might hamper the optimal functionality of AV functions like lane detection, perception, SLAM, and decision-making [9]. Additionally, developing an accurate training dataset involves the cumbersome process of obtaining accurate coordinates of pedestrians, vehicles, lanes, and other physical obstacles. The unpredictability and diversity of real-life driving scenarios therefore amplify the challenges of training deep learning models with time-sensitive training datasets [9]. Nevertheless, various approaches have been proposed to address such model training challenges, including collaborative training, lightweight deep learning algorithms, and model compression technologies.
Another complexity emerges from infeasible training. The training of deep learning algorithms in AVs is undertaken through three major approaches: simulations, experiments with model vehicles, and real-world experiments. Whereas the first two have been extensively applied, real-world experiments have not been widely adopted due to multiple technical and infrastructural setbacks. Ultimately, the absence of real-world, dynamic, and uncertain training scenarios significantly affects the accuracy of the deep learning training dataset. To emphasize the importance of real-life training scenarios, a study by Ni et al. [61] demonstrated that on the order of 10^9 hours of vehicle operation would be required to ascertain the failure rate, and the test would have to be repeated several times to achieve statistical significance. Nevertheless, multiple real-life training programs have been completed by industry giants like Tesla, further highlighting the limitations of the existing AV architecture [62].
6. Conclusion
Ultimately, it is evident that deep-learning-based systems have enhanced the safety and reliability of navigation in AVs. The analysis showed that deep learning algorithms have been applied in major AV components like perception, localization, mapping, path planning, and navigation. Future advancements in deep learning algorithms are expected to enhance the accuracy of AVs in decision-making, perception, localization, and mapping. Therefore, to optimize the use of deep learning in AVs, the current study recommends increased standardization of sensors to enhance synchronization and accuracy. Real-life testing should also be actively incorporated into model training to ensure deep learning algorithms adapt to the dynamic nature of real driving. These recommendations, along with future research, will enhance the safety, reliability, and social acceptability of autonomous vehicle systems.
References
- Karnati, A.; Mehta, D. Artificial Intelligence in Self Driving Cars: Applications, Implications and Challenges. Ushus Journal of Business Management 2022, 21. [Google Scholar]
- Miglani, A.; Kumar, N. Deep learning models for traffic flow prediction in autonomous vehicles: A review, solutions, and challenges. Vehicular Communications 2019, 20, 100184. [Google Scholar] [CrossRef]
- Kisačanin, B. (2017, May). Deep learning for autonomous vehicles. In 2017 IEEE 47th International Symposium on Multiple-Valued Logic (ISMVL) (pp. 142-142).
- Tikito, I.; Souissi, N. Meta-analysis of systematic literature review methods. International Journal of Modern Education and Computer Science 2019, 12, 17. [Google Scholar] [CrossRef]
- Davies, A. Carrying out systematic literature reviews: An introduction. British Journal of Nursing 2019, 28, 1008–1014. [Google Scholar] [CrossRef]
- GreyB, T. (2022, July 23). Top 30 Self Driving Technology and Car Companies. GreyB. Retrieved October 23, 2022, from https://www.greyb.com/autonomous-vehicle-companies/.
- León, L.F.A.; Aoyama, Y. Industry emergence and market capture: The rise of autonomous vehicles. Technological Forecasting and Social Change 2022, 180, 121661. [Google Scholar] [CrossRef]
- Pütz, F.; Murphy, F.; Mullins, M.; O'Malley, L. Connected automated vehicles and insurance: Analysing future market-structure from a business ecosystem perspective. Technology in Society 2019, 59, 101182. [Google Scholar] [CrossRef]
- Biswas, A.; Wang, H.C. Autonomous vehicles enabled by the integration of IoT, edge intelligence, 5G, and blockchain. Sensors 2023, 23, 1963. https://www.researchgate.net/publication/368436881_Autonomous_Vehicles_Enabled_by_the_Integration_of_IoT_Edge_Intelligence_5G_and_Blockchain. [CrossRef] [PubMed]
- Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep learning sensor fusion for autonomous vehicle perception and localization: A review. Sensors 2020, 20, 4220. [Google Scholar] [CrossRef] [PubMed]
- Qian, J.; Zeleznikow, J. Who shares legal liability for road accidents caused by drivers assisted by artificial intelligence software? Canberra Law Review 2021, 18, 18–35. [Google Scholar]
- Benefits of Self-Driving Vehicles. (2018, March 19). Coalition for Future Mobility. Retrieved October 23, 2022, from https://coalitionforfuturemobility.com/benefits-of-self-driving-vehicles/.
- Ercan, T.; Onat, N.C.; Keya, N.; Tatari, O.; Eluru, N.; Kucukvar, M. Autonomous electric vehicles can reduce carbon emissions and air pollution in cities. Transportation Research Part D: Transport and Environment 2022, 112, 103472. [Google Scholar] [CrossRef]
- Fagnant, D.J.; Kockelman, K. Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations. Transportation Research Part A: Policy and Practice 2015, 77, 167–181. [Google Scholar] [CrossRef]
- Fagnant, D.J.; Kockelman, K. Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations. Transportation Research Part A: Policy and Practice 2015, 77, 167–181. [Google Scholar] [CrossRef]
- Anderson, J.M.; Nidhi, K.; Stanley, K.D.; Sorensen, P.; Samaras, C.; Oluwatola, O.A. (2014). Autonomous vehicle technology: A guide for policymakers. Rand Corporation.
- Heaton, J. Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep learning: The MIT Press, 2016, 800 pp, ISBN: 0262035618. Genetic programming and evolvable machines 2018, 19(1-2), 305-307.
- Jebamikyous, H.H.; Kashef, R. Autonomous vehicles perception (avp) using deep learning: Modeling, assessment, and challenges. IEEE Access 2022, 10, 10523–10535. [Google Scholar] [CrossRef]
- Golroudbari, A.A.; Sabour, M.H. Recent Advancements in Deep Learning Applications and Methods for Autonomous Navigation--A Comprehensive Review. arXiv preprint 2023, arXiv:2302.11089. [Google Scholar]
- Rao, Q.; Frtunikj, J. (2018, May). Deep learning for self-driving cars: Chances and challenges. In Proceedings of the 1st international workshop on software engineering for AI in autonomous systems (pp. 35-38).
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Communications of the ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Ren, J.; Gaber, H.; Al Jabar, S.S. Applying deep learning to autonomous vehicles: A survey. In Proceedings of the 2021 4th International Conference on Artificial Intelligence and Big Data (ICAIBD), May 2021; IEEE; pp. 247–252.
- Smaldone, A.M.; Kyro, G.W.; Batista, V.S. Quantum Convolutional Neural Networks for Multi-Channel Supervised Learning. arXiv preprint 2023, arXiv:2305.18961. [Google Scholar] [CrossRef]
- Pak, A.; Manjunatha, H.; Filev, D.; Tsiotras, P. Carnet: A dynamic autoencoder for learning latent dynamics in autonomous driving tasks. arXiv preprint 2022, arXiv:2205.08712. [Google Scholar]
- Chen, L.; Lin, S.; Lu, X.; Cao, D.; Wu, H.; Guo, C.; Wang, F.Y.; et al. Deep neural network based vehicle and pedestrian detection for autonomous driving: A survey. IEEE Transactions on Intelligent Transportation Systems 2021, 22, 3234–3246. [Google Scholar] [CrossRef]
- Huang, Y.; Panahi, A.; Krim, H.; Yu, Y.; Smith, S.L. Deep adversarial belief networks. arXiv preprint 2019, arXiv:1909.06134. [Google Scholar]
- Ren, J.; Green, M.; Huang, X. From traditional to deep learning: Fault diagnosis for autonomous vehicles. In Learning Control; Elsevier, 2021; pp. 205–219.
- Ivanov, S.A.; Rasheed, B. Predicting the Behavior of Road Users in Rural Areas for Self-Driving Cars. Advanced Engineering Research (Rostov-on-Don) 2023, 23, 169–179. [Google Scholar] [CrossRef]
- Kenesei, Z.; Ásványi, K.; Kökény, L.; Jászberényi, M.; Miskolczi, M.; Gyulavári, T.; Syahrivar, J. Trust and perceived risk: How different manifestations affect the adoption of autonomous vehicles. Transportation research part A: Policy and practice 2022, 164, 379–393. [Google Scholar] [CrossRef]
- Tong, Q.; Li, X.; Lin, K.; Li, C.; Si, W.; Yuan, Z. Cascade-LSTM-based visual-inertial navigation for magnetic levitation haptic interaction. IEEE Network 2019, 33, 74–80. [Google Scholar] [CrossRef]
- Guan, W.; Wang, T.; Qi, J.; Zhang, L.; Lu, H. Edge-aware convolution neural network based salient object detection. IEEE Signal Processing Letters 2018, 26, 114–118. [Google Scholar] [CrossRef]
- Lee, D.H.; Chen, K.L.; Liou, K.H.; Liu, C.L.; Liu, J.L. Deep learning and control algorithms of direct perception for autonomous driving. Applied Intelligence 2021, 51, 237–247. [Google Scholar] [CrossRef]
- Bojarski, M.; Yeres, P.; Choromanska, A.; Choromanski, K.; Firner, B.; Jackel, L.; Muller, U. Explaining how a deep neural network trained with end-to-end learning steers a car. arXiv preprint 2017, arXiv:1704.07911. [Google Scholar]
- Pavel, M.I.; Tan, S.Y.; Abdullah, A. Vision-based autonomous vehicle systems based on deep learning: A systematic literature review. Applied Sciences 2022, 12, 6831. [Google Scholar] [CrossRef]
- Aqel, M.O.; Marhaban, M.H.; Saripan, M.I.; Ismail, N.B. Review of visual odometry: Types, approaches, challenges, and applications. SpringerPlus 2016, 5, 1–26. [Google Scholar] [CrossRef]
- Péter, G.; Kiss, B.; Tihanyi, V. Vision and odometry based autonomous vehicle lane changing. ICT Express 2019, 5, 219–226. [Google Scholar] [CrossRef]
- Li, R.; Wang, S.; Long, Z.; Gu, D. UnDeepVO: Monocular visual odometry through unsupervised deep learning. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), May 2018; IEEE; pp. 7286–7291.
- Mohamed, S.A.; Haghbayan, M.H.; Westerlund, T.; Heikkonen, J.; Tenhunen, H.; Plosila, J. A survey on odometry for autonomous navigation systems. IEEE access 2019, 7, 97466–97486. [Google Scholar] [CrossRef]
- Wang, S.; Clark, R.; Wen, H.; Trigoni, N. End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks. The International Journal of Robotics Research 2018, 37, 513–542. [Google Scholar] [CrossRef]
- Xue, F.; Wang, X.; Wang, J.; Zha, H. Deep visual odometry with adaptive memory. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020, 44, 940–954. [Google Scholar] [CrossRef]
- Tang, X.; Yang, K.; Wang, H.; Wu, J.; Qin, Y.; Yu, W.; Cao, D. Prediction-uncertainty-aware decision-making for autonomous vehicles. IEEE Transactions on Intelligent Vehicles 2022, 7, 849–862. [Google Scholar] [CrossRef]
- Gomes, T.; Matias, D.; Campos, A.; Cunha, L.; Roriz, R. A survey on ground segmentation methods for automotive LiDAR sensors. Sensors 2023, 23, 601. [Google Scholar] [CrossRef]
- Hoel, C.J.; Driggs-Campbell, K.; Wolff, K.; Laine, L.; Kochenderfer, M.J. Combining planning and deep reinforcement learning in tactical decision making for autonomous driving. IEEE transactions on intelligent vehicles 2019, 5, 294–305. [Google Scholar] [CrossRef]
- Ayawli, B.B.K.; Appiah, A.Y.; Nti, I.K.; Kyeremeh, F.; Ayawli, E.I. Path planning for mobile robots using Morphological Dilation Voronoi Diagram Roadmap algorithm. Scientific African 2021, 12, e00745. [Google Scholar] [CrossRef]
- Meng, T.; Yang, T.; Huang, J.; Jin, W.; Zhang, W.; Jia, Y.; Zhong, Z.; et al. Improved Hybrid A-Star Algorithm for Path Planning in Autonomous Parking System Based on Multi-Stage Dynamic Optimization. International Journal of Automotive Technology 2023, 24, 459–468. [Google Scholar] [CrossRef]
- Orthey, A.; Chamzas, C.; Kavraki, L.E. Sampling-Based Motion Planning: A Comparative Review. Annual Review of Control, Robotics, and Autonomous Systems 2023, 7.
- Wang, Z.; Sun, K.; Ma, S.; Sun, L.; Gao, W.; Dong, Z. Improved Linear Quadratic Regulator Lateral Path Tracking Approach Based on a Real-Time Updated Algorithm with Fuzzy Control and Cosine Similarity for Autonomous Vehicles. Electronics 2022, 11, 3703. [Google Scholar] [CrossRef]
- Wang, X.; Gilliam, C.; Kealy, A.; Close, J.; Moran, B. Probabilistic map matching for robust inertial navigation aiding. NAVIGATION: Journal of the Institute of Navigation 2023, 70. [Google Scholar] [CrossRef]
- Li, Q.; Queralta, J.P.; Gia, T.N.; Zou, Z.; Westerlund, T. Multi-sensor fusion for navigation and mapping in autonomous vehicles: Accurate localization in urban environments. Unmanned Systems 2020, 8, 229–237. [Google Scholar] [CrossRef]
- Berntorp, K.; Hoang, T.; Di Cairano, S. Motion planning of autonomous road vehicles by particle filtering. IEEE transactions on intelligent vehicles 2019, 4, 197–210. [Google Scholar] [CrossRef]
- Wong, K.; Gu, Y.; Kamijo, S. Mapping for autonomous driving: Opportunities and challenges. IEEE Intelligent Transportation Systems Magazine 2020, 13, 91–106. [Google Scholar] [CrossRef]
- Joubert, N.; Reid, T.G.; Noble, F. Developments in modern GNSS and its impact on autonomous vehicle architectures. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), October 2020; IEEE; pp. 2029–2036.
- Krišto, M.; Ivasic-Kos, M.; Pobar, M. Thermal object detection in difficult weather conditions using YOLO. IEEE access 2020, 8, 125459–125476. [Google Scholar] [CrossRef]
- Afif, M.; Ayachi, R.; Said, Y.; Atri, M. An evaluation of EfficientDet for object detection used for indoor robots assistance navigation. Journal of Real-Time Image Processing 2022, 19, 651–661. [Google Scholar] [CrossRef]
- Grigorescu, S.; Trasnea, B.; Cocias, T.; Macesanu, G. A survey of deep learning techniques for autonomous driving. Journal of Field Robotics 2020, 37, 362–386. [Google Scholar] [CrossRef]
- Li, Y.; Chen, R.; Niu, X.; Zhuang, Y.; Gao, Z.; Hu, X.; El-Sheimy, N. Inertial sensing meets machine learning: Opportunity or challenge? IEEE Transactions on Intelligent Transportation Systems 2021. [Google Scholar] [CrossRef]
- Jo, J.; Tsunoda, Y.; Stantic, B.; Liew, A.W.C. A likelihood-based data fusion model for the integration of multiple sensor data: A case study with vision and lidar sensors. In Robot Intelligence Technology and Applications 4: Results from the 4th International Conference on Robot Intelligence Technology and Applications; Springer International Publishing, 2017; pp. 489–500.
- Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef] [PubMed]
- Sabaliauskaite, G.; Liew, L.S.; Cui, J. Integrating autonomous vehicle safety and security analysis using STPA method and the six-step model. International Journal on Advances in Security 2018, 11, 160–169. [Google Scholar]
- Abdulkhaleq, A.; Lammering, D.; Wagner, S.; Röder, J.; Balbierer, N.; Ramsauer, L.; Boehmert, H. A systematic approach based on STPA for developing a dependable architecture for fully automated driving vehicles. Procedia Engineering 2017, 179, 41–51. [Google Scholar] [CrossRef]
- Ni, J.; Chen, Y.; Chen, Y.; Zhu, J.; Ali, D.; Cao, W. A survey on theories and applications for self-driving cars based on deep learning methods. Applied Sciences 2020, 10, 2749. [Google Scholar] [CrossRef]
- Bachute, M.R.; Subhedar, J.M. Autonomous driving architectures: Insights of machine learning and deep learning algorithms. Machine Learning with Applications 2021, 6, 100164. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).