
Traffic Sign Classifier on Android

Declaration

I hereby declare that this project was completed on my own and is my original work. Furthermore, I declare that this report has not been published or submitted previously for any academic programme.

R.M.C.S. Rathnayake

Acknowledgement

I would like to sincerely express my gratitude to my module lecturer, who carefully and proficiently supported me in completing this project effectively.

Next, it should be mentioned that the successful and timely completion of this analytical research would not have been possible without the encouragement and guidance I received from all the members of the Board of Directors, the institute's Management, and the Academic Faculty of Cardiff Metropolitan University of the United Kingdom and ICBT Campus, Colombo, Sri Lanka.

Finally, I acknowledge the immensely valuable support provided by my family members and colleagues, whose constant support has been instrumental in completing this project successfully.

Table of Contents

Declaration
Acknowledgement
Table of Contents
Table of Figures
Abbreviations
Abstract
1. Introduction
   1.1 Identified Problem
   1.2 Solution Proposed to Overcome the Problem
   1.3 Application Overview
   1.4 Report Structure and Overview
2. Past Studies on the Subject
   2.1 Dataset
      2.1.1 Challenges of the Dataset
   2.2 Mobile Traffic Sign Recognition Systems
3. Artificial Intelligence in Depth for TSC
   3.1 Technologies Used
      3.1.1 Computer Vision and CNN
      3.1.2 R-CNN
      3.1.3 Fast R-CNN
      3.1.4 Faster R-CNN
4. Methodology
   4.1 Setting Up
      4.1.1 Environment Setup
   4.2 Implementation
      4.2.1 Model
      4.2.2 Training Script
   4.3 Mobile Application
5. Conclusion
6. Bibliography

Table of Figures

Figure 1 - Screenshot of the Application
Figure 2 - GTSRB Sample Images
Figure 3 - CNN multiple ROIs
Figure 4 - CNN single ROI
Figure 5 - R-CNN Region with features
Figure 6 - Fast R-CNN features
Figure 7 - Time differences of different algorithms
Figure 8 - Faster R-CNN
Figure 9 - Comparison of test-time speed of object detection algorithms
Figure 10 - The Keras deep learning framework is used to build a Convolutional Neural Network (CNN) for traffic sign classification
Figure 11 - Accuracy/Loss Curves

Abbreviations

TSC Traffic Sign Classification

GTSRB German Traffic Sign Recognition Benchmark

CNN Convolutional Neural Network

R-CNN Region Convolutional Neural Network

ROI Region of Interest

LMRI Low or Mid-Range Income

Abstract

This project report is about a new implementation of Traffic Sign Classification (TSC) on an Android device. The system recognizes German traffic signs because it is trained on the German Traffic Sign Recognition Benchmark (GTSRB). Although it is developed to recognize German traffic signs, it can be trained with any country's dataset and deployed in the mobile app. I find this the least expensive and most convenient way of bringing such a system into vehicles. The application runs on any device with Android 4.4 or above. TSC has two phases: the first is to detect the traffic sign in the input data, and the second is to recognize the detected sign. Convolutional Neural Networks are used to detect and classify the traffic signs. Python and Google's TensorFlow, along with the Keras API, are used to build the classifier and train the model. In this report I discuss in depth the proposed solution, the techniques used, the implementation process, challenges and further improvements. Even though I have used the GTSRB for model training, I will also be discussing low and middle range income (LMRI) countries.

1. Introduction

1.1 Identified problem

Worldwide, road traffic accidents claim as many as 1.38 million lives a year (WHO, 2018); 93% of these occur in low and mid-range income countries, even though those countries have only about 60% of the world's vehicles.

Among traffic accidents, speeding, especially when combined with alcohol, contributes the most to casualties. An automated smart assistant in the car that warns about speed limits, road conditions and what lies ahead on the road could therefore save lives. However, having such advanced technology in a car is not a luxury that everyone can afford.

Smart driving assistant systems are complex and usually consist of several components, as listed below:

1. Navigation system

2. Obstacle detection

3. TSC

4. Night vision

5. Lane assistance

6. Cruise control

7. Driver drowsiness detection

Even in Europe, the average age of passenger vehicles is more than 11 years (ACEA, 2019): passenger cars average 11.1 years, vans 11.0 years and commercial vehicles 12 years, to be exact. In LMRI countries these figures will be much higher. Rather than paying for such an expensive facility in the vehicle, the majority of people would love to have some of those features for free with what they already have.

As of 2019 there are 3.2 billion smartphone users in the world (Statista, 2019). The smartphone is therefore a facility that most people already experience and carry with them. That is where the proposed solution comes in.

1.2 Solution proposed to overcome the problem

Proposed here is an accessible solution to the problem discussed in Section 1.1: a mobile-application-based TSC that can be used in any vehicle by anyone with a smartphone. The application I develop and explain here is an Android app powered by TensorFlow, Keras and TensorFlow Lite. The proposed TSC is an advanced system whose overall process has two main steps:

1. Traffic Sign Detection

The system will detect and localize the region of the traffic sign in the input visual data.

2. Detected Sign Classification

Then the system will classify the localized Region of Interest (ROI).

I have used the TensorFlow Object Detection API to detect the traffic sign and the Faster R-CNN algorithm to classify the detected ROI.

As the project is based on the GTSRB dataset, the models are trained to recognize German traffic signs only. I have used the Faster R-CNN classification algorithm to train the model, which covers 42 categories of signs. Even though the model is trained on a single dataset, the program can be retrained with any dataset that follows the same structure, without major changes to the core application or training functions. This project architecture can also be used in the future for virtual trained-model deployment, which would not require rebuilding the Android application to add another trained model.

I have exported the trained Keras model to TensorFlow Lite so that the .tflite model can be used in the Android application with the TensorFlow Lite Object Detection API for Android.

Table 1 - Differences between inbuilt and mobile-based TSC

Accuracy
• Inbuilt: More accurate, thanks to multiple cameras, distance data and other inbuilt sensor data, which can handle more noise in the visual input.
• Mobile: Only a single camera unit captures the visual input; accuracy will therefore be lower compared to inbuilt models.

Accessibility
• Inbuilt: Very low, because the systems are built in; most of them cannot be bought separately for another vehicle.
• Mobile: Easy, since it is a mobile application that drivers can download and install on their phones.

Cost
• Inbuilt: Very high, because to have these systems the driver must buy the vehicle itself.
• Mobile: Very low, maybe free for limited functions, because it is a simple mobile application that any driver can install on their phone.

Maintenance
• Inbuilt: Not easy, since these inbuilt systems must be handled by a trained technician.
• Mobile: Easy; just uninstall and reinstall the app.

Updates
• Inbuilt: Only software updates will be supported; future hardware features will not be provided for the paid price.
• Mobile: All new feature updates will be available to the users.

1.3 Application Overview

Figure 1 - Screenshot of the Application

There are 42 signs that the current application can recognize. Of those 42 signs, 8 can be categorized as speed signs, 4 as immediate signs, and the rest as regular signs. The speed sign indicator remains visible in the application until another speed sign comes up, since there cannot be two different speed limits at the same time. Notifications for immediate signs stay on screen for 10 seconds after passing the sign, and regular signs are notified for 30 seconds after passing. While only a single speed sign and a single immediate sign can apply at once, multiple regular signs can occur at the same place; the application therefore notifies all recognized regular signs. The application also has a settings menu from which the user can enable and disable sounds and run the application in the background.

1.4 Report Structure and Overview

The rest of the report is structured as follows. In Chapter 2, I discuss past studies on this subject and current applications in depth. In Chapter 3, the techniques used and the methodologies applied in the project are discussed. Chapter 4 covers the implementation and usage.

2. Past Studies on the Subject

TSC is one of the most important and most valuable applications of Computer Vision and Artificial Intelligence. The first study on this subject was carried out in Japan in 1987 (H. Akatsuka, 1987). Efforts on TSC systems have continued since then, and various institutes and companies have carried out a significant number of studies in this area. Traffic sign classification consists of two main stages: detecting the sign, and classifying/recognizing the detected sign. Detecting a sign is the first step of TSC. However, detecting traffic signs is a somewhat complex process, since traffic signs do not share the same colours, shapes or features. Traffic signs can therefore be detected on a colour, shape or feature basis, and these can differ by country or region, or by traffic sign category. Identification of such category-oriented signs can be done via machine learning techniques such as Neural Networks, Support Vector Machines, etc.

As the chapter continues, I will discuss the dataset, TSC using the AI techniques above, and attempts at mobile TSC.

2.1 Dataset

The German Traffic Sign Recognition Benchmark (GTSRB) dataset is used to train the model, since it is a lightweight, simple and complete dataset to work with. The GTSRB dataset consists of 42 traffic sign classes and almost 50,000 images. The traffic signs come pre-cropped in the dataset; Figure 2 shows sample images. The dataset creators manually labeled the signs and cropped the images to the ROI. The training, testing and meta images are labeled in .csv files named after their respective folders, and the traffic sign classes are listed in the 'signnames.csv' file.

Figure 2 - GTSRB Sample images

2.1.1 Challenges of the Dataset

There are several challenges in the GTSRB dataset. The first is that the images are low in resolution and low in contrast. As seen in Figure 2, some images are so challenging that even the human eye and brain can barely identify them. The pre-cropped images bring another problem: detecting signs in a full scene remains a challenging task. The other challenge of the dataset is class skew: the 42 classes have very different numbers of sample images, varying from about 200 to 2,000. To address these issues, I preprocess the images to improve contrast and account for the label skew, as discussed in Chapter 4.

2.2 Mobile Traffic Sign Recognition Systems

Mobile devices are getting smarter and more powerful every day. However, mobile-device-based TSC is not yet a common or fully developed kind of system. TSC in vehicles can be categorized into three types:

• Embedded TSC systems
• External-device TSC
• Mobile-application-based TSC

At the time of writing, many car manufacturers have developed and installed TSC systems in their high-end models; almost all the European brands, and the American and Japanese brands, are at the top of that list. Among these vehicles and TSC systems, Tesla Motors has one of the state-of-the-art AI-based TSC systems, which is capable of self-navigating the car. The Tesla Model S, Model 3 and Model X are the best-known models to implement that system. Apart from those, not all cars have a self-navigation feature. However, the Toyota Yaris 2020 (Vacaville, 2019) is one of the latest and most economical vehicles to have a TSC system installed on board.

The second TSC solution is external TSC devices. These are separate units, like vehicle GPS units, that can be mounted on the windscreen or dashboard. Daimler is one company that developed an external TSC device: its smart driving kit requires an iPhone to connect to, detects traffic signs and shows the notifications on the connected iPhone, with a Bluetooth interface used for the connection between the iPhone and the kit.

Though there are many vehicles with built-in TSC systems, they are a luxury that the majority of people cannot afford. Yet all drivers on the road face exactly the same scenarios and situations: any driver could miss a traffic sign, a traffic light or a road obstacle while driving. Even installing a separate external TSC device is somewhat costly. That is where mobile-application-based TSC becomes important. Lai and Yu (Yu, 2010) developed a mobile TSC system that runs on smartphones. Their app detects a traffic sign, takes a picture of it, and sends it to a remote server computer over a mobile data or WiFi connection; the computer classifies the image and the result is shown in the mobile application. It is based on a client-server architecture. Ozgur, Cornella, Israel and Alba (Izmirli, 2012) (Jordi Cornellà Calsina, n.d.) used the same architectural approach for their mobile application. This client-server model is not practical in real-life scenarios, since it can only detect signs when there is an internet connection, and the speed of the detection process depends on the speed of that connection. Therefore, Yukio Fukuzawa (Fukuzawa, 2013) developed a standalone Android application to detect and recognize traffic signs, using pre-trained models on the mobile device to achieve faster detection results.

3. Artificial Intelligence in Depth for TSC

3.1 Technologies Used

As I have used Python, Google TensorFlow and Keras to build and train the model, here I will describe the mechanism behind them. The algorithm used here is called Faster R-CNN, which is also a Convolutional Neural Network but faster than the earlier R-CNN variants.

3.1.1 Computer Vision and CNN

Computer vision has been a thriving field of study since Convolutional Neural Networks were introduced, and it has gained a huge amount of attention with self-driving cars taking advantage of it. Beyond self-driving cars, object detection is an important use of computer vision, with applications such as surveillance systems, pose detection, vehicle detection, traffic sign detection and more. Image recognition is also a part of computer vision, with use cases such as image classification, image search engines, localization and mapping, etc.

Object detection algorithms used in computer vision draw a bounding box around each region of interest (ROI) within the image. There can be one or many ROIs, and therefore one or many bounding boxes, in a single image.

Figure 3 - CNN multiple ROIs

Figure 4 - CNN single ROI

With a traditional CNN we can process a still image like this, which has known inputs and known bounding boxes, and get a result. But for real-world applications we cannot proceed with this approach. As the examples above show, the output layers of the two scenarios are not the same: the number of ROIs is a variable, not a constant, and it differs in every scenario. There can be any number of ROIs, with all kinds of aspect ratios and sizes. Performing localization across all those variables would blow up the computational cost and take a huge amount of time on a real-world scene. Algorithms like R-CNN and YOLO were developed to address this issue.

3.1.2 R-CNN

As a solution to the CNN's huge region-selection burden, Ross Girshick (Girshick, 2013) introduced a method that selects a fixed number of regions per image; Girshick set this at 2,000 regions per image. These proposed regions are called region proposals. In R-CNN, rather than classifying every possible region in the image, we can work with these 2,000 region proposals, which are generated with the support of the following algorithm.

Selective Search:

1. Generate an initial sub-segmentation that produces many candidate regions.

2. Use a greedy algorithm to recursively combine similar regions into larger ones.

3. Use the generated regions to produce the final candidate region proposals.

Figure 5 - R-CNN Region with features
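Selective search is available in OpenCV's contrib modules; the following is a minimal sketch of generating region proposals with it (the opencv-contrib-python package requirement and the input file name are assumptions, not part of the original report):

    import cv2  # requires the opencv-contrib-python package for ximgproc

    # load an image and initialize OpenCV's selective search implementation
    image = cv2.imread("road_scene.jpg")  # hypothetical input image
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(image)
    ss.switchToSelectiveSearchFast()

    # run selective search; each proposal is an (x, y, w, h) rectangle
    rects = ss.process()
    print("total region proposals: {}".format(len(rects)))

    # R-CNN style: keep roughly the first 2,000 proposals
    proposals = rects[:2000]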

Problems of R-CNN

• It takes a huge amount of time to train a model this way, since the network must classify 2,000 region proposals for each image in the training dataset.
• It is not a solution that can be deployed for real-time classification, since testing takes up to 47 seconds per image.
• Selective search is a fixed, pre-defined algorithm that cannot be trained as it goes, so it cannot be turned into a self-learning mechanism.

3.1.3 Fast R-CNN

Figure 6 - Fast R-CNN features

To solve some of the major drawbacks of R-CNN, Ross Girshick introduced a faster object detection algorithm called Fast R-CNN. It takes technically the same approach, but instead of feeding warped region proposals to the CNN, the image itself is fed to the CNN to create a convolutional feature map. This feature map is then used to identify region proposals, which are warped into feature squares. Because these feature squares must be fed to fully connected layers, they are reshaped to a fixed size. From there, the algorithm uses a softmax layer to predict the classes of the proposed regions together with their bounding box values.

This approach is much faster than R-CNN because the network does not have to process 2,000 region proposals from scratch each time; the convolution is performed only once per image.

Figure 7 - Time differences of different algorithms

3.1.4 Faster R-CNN

R-CNN and Fast R-CNN both use a selective search approach to find the region proposals. It is a slow and inefficient process, which has a huge effect on a real-time network. Addressing the issues of those two algorithms, Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun (Shaoqing Ren, 2016) suggested an object detection algorithm in which the network itself learns the region proposals.

Figure 8 - Faster R-CNN

Figure 9 - Comparison of test-time speed of object detection algorithms

4. Methodology

4.1 Setting Up
4.1.1 Environment Setup

I have used Python, TensorFlow and Keras to build the model. To get all of this in one go I used the Python bundle called the Anaconda Distribution, which has all the necessary packages and frameworks built in, making setup a hassle-free process. As I am using a Windows computer, I downloaded the Anaconda Python 3.7 version, because the Python 2.7 version does not support Google's TensorFlow 2.0.

Link for Anaconda: https://www.anaconda.com/distribution/

After installing Anaconda, I must make sure that all the required Python packages are installed. Therefore, the following packages must be checked:

• OpenCV
• NumPy
• scikit-learn
• scikit-image
• imutils
• matplotlib

These packages may come with the Anaconda distribution, but it is better to double-check. I have used the "pip install" command in the Python environment's command line.

Anaconda comes with Jupyter Notebook, which can be used as a Python execution platform. Rather than using Jupyter, however, I used "Sublime Text" to edit the Python code.

For the mobile application setup, Java must be installed on the system; it was already installed, so I did not have to install it. As the Android development IDE I used Android Studio, the standard platform for Android development.

4.2 Implementation

Figure 10 - The Keras deep learning framework is used to build a Convolutional Neural Network (CNN) for traffic sign
classification.

4.2.1 Model

I have named the classifier TrafficSignModel.

Our tf.keras imports are listed on Lines 1-8. We take advantage of Keras' Sequential API to build our TrafficSignModel CNN (Line 1).

Line 10 defines our TrafficSignModel class, followed by Line 12, which defines our build method. The build method accepts four parameters: the image width and height, the depth, and the number of classes in the dataset.

Lines 15-18 initialize our Sequential model and specify the CNN's inputShape.
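The code listing itself appears only as a figure; the following is a minimal sketch of what these imports and the class skeleton might look like, based on the descriptions above (exact layer choices are assumptions):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import BatchNormalization
    from tensorflow.keras.layers import Conv2D
    from tensorflow.keras.layers import MaxPooling2D
    from tensorflow.keras.layers import Activation
    from tensorflow.keras.layers import Flatten
    from tensorflow.keras.layers import Dropout
    from tensorflow.keras.layers import Dense

    class TrafficSignModel:
        @staticmethod
        def build(width, height, depth, classes):
            # initialize the model along with its "channels last" input shape
            model = Sequential()
            inputShape = (height, width, depth)
            chanDim = -1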

CONV => RELU => BN => POOL layer set:
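A sketch of this layer set, continuing the build method above (the filter count of 8 and the 5x5 kernel are assumptions):

            # CONV => RELU => BN => POOL
            model.add(Conv2D(8, (5, 5), padding="same",
                input_shape=inputShape))
            model.add(Activation("relu"))
            model.add(BatchNormalization(axis=chanDim))
            model.add(MaxPooling2D(pool_size=(2, 2)))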

(CONV => RELU => CONV => RELU) * 2 => POOL layers:
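Continuing the sketch, the two double-convolution blocks might look as follows (filter counts are assumptions):

            # first (CONV => RELU => CONV => RELU) => POOL block
            model.add(Conv2D(16, (3, 3), padding="same"))
            model.add(Activation("relu"))
            model.add(BatchNormalization(axis=chanDim))
            model.add(Conv2D(16, (3, 3), padding="same"))
            model.add(Activation("relu"))
            model.add(BatchNormalization(axis=chanDim))
            model.add(MaxPooling2D(pool_size=(2, 2)))

            # second (CONV => RELU => CONV => RELU) => POOL block
            model.add(Conv2D(32, (3, 3), padding="same"))
            model.add(Activation("relu"))
            model.add(BatchNormalization(axis=chanDim))
            model.add(Conv2D(32, (3, 3), padding="same"))
            model.add(Activation("relu"))
            model.add(BatchNormalization(axis=chanDim))
            model.add(MaxPooling2D(pool_size=(2, 2)))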

Fully connected layers and a softmax classifier:
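The head of the network, sketched under the same assumptions, flattens the feature maps, applies a fully connected layer with dropout, and ends in a softmax over the sign classes:

            # fully connected layer with dropout
            model.add(Flatten())
            model.add(Dense(128))
            model.add(Activation("relu"))
            model.add(BatchNormalization())
            model.add(Dropout(0.5))

            # softmax classifier
            model.add(Dense(classes))
            model.add(Activation("softmax"))

            # return the constructed network architecture
            return model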

4.2.2 Training Script

Now that our TrafficSignModel architecture has been implemented, let’s create our Python
training script that will be responsible for:

• Loading our training and testing split from the GTSRB dataset
• Preprocessing the images
• Training our model
• Evaluating our model’s accuracy
• Serializing the model to disk so we can later use it to make predictions on new traffic
sign data

• matplotlib : The de facto plotting package for Python. We use the "Agg" backend
ensuring that we are able to export our plots as image files to disk (Lines 1 and 2).
• TrafficSignModel : Our traffic sign Convolutional Neural Network that we coded with
Keras in the previous section (Line 4).
• tensorflow.keras : Ensures that we can handle data augmentation, Adam optimization,
and one-hot encoding (Lines 5-7).
• classification_report : A scikit-learn method for printing a convenient evaluation for
training (Line 8).
• skimage : We will use scikit-image for preprocessing our dataset in lieu of OpenCV as
scikit-image provides some additional preprocessing algorithms that OpenCV does not
(Lines 9-11).
• numpy : For array and numerical operations (Line 13).
• argparse : Handles parsing command line arguments (Line 14).
• random : For shuffling our dataset randomly (Line 15).
• os : We’ll use this module for grabbing our operating system’s path separator (Line 16).
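Per the list above, the top of the training script might look like the following sketch (the module name for TrafficSignModel is an assumption):

    import matplotlib
    matplotlib.use("Agg")  # non-interactive backend so plots can be saved to disk

    from trafficsignmodel import TrafficSignModel  # assumed module name
    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    from tensorflow.keras.optimizers import Adam
    from tensorflow.keras.utils import to_categorical
    from sklearn.metrics import classification_report
    from skimage import transform
    from skimage import exposure
    from skimage import io
    import matplotlib.pyplot as plt
    import numpy as np
    import argparse
    import random
    import os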

The GTSRB dataset is pre-split into training/testing splits for us. Line 18 defines load_split, which loads a given split. It accepts a path to the base of the dataset as well as a .csv file path which contains the class label for each image.
Lines 20 and 21 initialize our data and labels lists, which this function will soon populate and return.
Line 26 loads our .csv file, strips whitespace, and grabs each row via the newline delimiter, skipping the first header row. The result is a list of rows, which Line 27 then shuffles randomly.
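A sketch of this function opening (assuming the Kaggle layout of the GTSRB .csv files):

    def load_split(basePath, csvPath):
        # initialize the lists of data and labels
        data = []
        labels = []

        # load the contents of the CSV file, skip the header row,
        # and shuffle the remaining rows
        rows = open(csvPath).read().strip().split("\n")[1:]
        random.shuffle(rows)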

Let’s go ahead and loop over the rows now and extract + preprocess the data that we need:
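The loop might look like this sketch, continuing inside load_split (taking the class ID and image path from the last two CSV columns is an assumption about the file layout):

        for (i, row) in enumerate(rows):
            # show a status update for every 1,000th image processed
            if i > 0 and i % 1000 == 0:
                print("[INFO] processed {} total images".format(i))

            # extract the class ID (label) and the image path from the row
            (label, imagePath) = row.strip().split(",")[-2:]

            # derive the full path to the image file and load it
            imagePath = os.path.sep.join([basePath, imagePath])
            image = io.imread(imagePath)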

Line 30 loops over the rows. Inside the loop, we proceed to:

• Display a status update to the terminal for every 1000th image processed (Lines 32 and
33).
• Extract the ClassID (label) and imagePath from the row (Line 37).
• Derive the full path to the image file + load the image with scikit-image (Lines 40 and
41).

As mentioned in Section 2.1.1 above, one of the biggest issues with the dataset is that many
images have low contrast, making it challenging for the human eye to recognize a given sign
(let alone a computer vision/deep learning model).
We can automatically improve image contrast by applying an algorithm called Contrast
Limited Adaptive Histogram Equalization (CLAHE), the implementation of which can be
found in the scikit-image library.

Using CLAHE we can improve the contrast of our traffic sign images.

Let’s preprocess our images by applying CLAHE now:
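A sketch of the remainder of the loop and the end of load_split, applying CLAHE via scikit-image's exposure module (the clip_limit value is an assumption):

            # resize the image to 32x32 pixels and apply Contrast Limited
            # Adaptive Histogram Equalization (CLAHE)
            image = transform.resize(image, (32, 32))
            image = exposure.equalize_adapthist(image, clip_limit=0.1)

            # update the data and labels lists
            data.append(image)
            labels.append(int(label))

        # convert the data and labels to NumPy arrays and return them
        data = np.array(data)
        labels = np.array(labels)
        return (data, labels)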

To complete our loop over the rows, we:
• Resize the image to 32×32 pixels (Line 46).
• Apply CLAHE image contrast correction (Line 47).
• Update data and labels lists with the image itself and the class label (Lines 50 and 51).
Then, Lines 54-58 convert the data and labels into NumPy arrays and return them to the calling
function.

Our three command line arguments, parsed in the sketch after this list, consist of:


• --dataset : The path to our GTSRB dataset.
• --model : The desired path/filename of our output model.
• --plot : The path to our training history plot.
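A minimal argparse sketch matching these three arguments:

    # construct and parse the command line arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-d", "--dataset", required=True,
        help="path to the input GTSRB dataset")
    ap.add_argument("-m", "--model", required=True,
        help="path to the output model")
    ap.add_argument("-p", "--plot", type=str, default="plot.png",
        help="path to the training history plot")
    args = vars(ap.parse_args())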
Initialize a few hyperparameters and load the class label names:
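A sketch of this step (the epoch count, learning rate and batch size shown here are assumptions):

    # initialize the number of epochs to train for, the base learning rate,
    # and the batch size
    NUM_EPOCHS = 30
    INIT_LR = 1e-3
    BS = 64

    # load the traffic sign class names, skipping the header markup
    labelNames = open("signnames.csv").read().strip().split("\n")[1:]
    labelNames = [l.split(",")[1] for l in labelNames]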

Lines 72-74 initialize the number of epochs to train for, our initial learning rate, and batch size.
Lines 77 and 78 load the class labelNames from a .csv file. Unnecessary markup in the file is
automatically discarded.

Load + preprocess data:
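A sketch of this block (the Train.csv/Test.csv file names follow the Kaggle GTSRB layout; since the scikit-image preprocessing above already yields floats in [0, 1], the scaling step is written here as a dtype cast):

    # derive the paths to the training and testing CSV files
    trainPath = os.path.sep.join([args["dataset"], "Train.csv"])
    testPath = os.path.sep.join([args["dataset"], "Test.csv"])

    # load the training and testing splits
    (trainX, trainY) = load_split(args["dataset"], trainPath)
    (testX, testY) = load_split(args["dataset"], testPath)

    # ensure the pixel data are float32 in the range [0, 1]
    trainX = trainX.astype("float32")
    testX = testX.astype("float32")

    # one-hot encode the training and testing labels
    numLabels = len(np.unique(trainY))
    trainY = to_categorical(trainY, numLabels)
    testY = to_categorical(testY, numLabels)

    # account for skew by assigning a weight to each class
    classTotals = trainY.sum(axis=0)
    classWeight = dict(enumerate(classTotals.max() / classTotals))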

Here I:
• Derive paths to the training and testing splits (Lines 81 and 82).
• Use our load_split function to load each of the training/testing splits, respectively (Lines
86 and 87).
• Preprocess the images by scaling them to the range [0, 1] (Lines 90 and 91).
• One-hot encode the training/testing class labels (Lines 94-96).
• Account for skew in our dataset (i.e. the fact that we have significantly more images
for some classes than others). Lines 99 and 100 assign a weight to each class for use
during training.

Prepare + train the model:
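A sketch of the augmentation, compilation and training step (parameter values are assumptions; newer Keras versions fold fit_generator into fit, which is what is shown here):

    # construct the image generator for data augmentation; note that
    # horizontal/vertical flips are deliberately not applied
    aug = ImageDataGenerator(
        rotation_range=10,
        zoom_range=0.15,
        width_shift_range=0.1,
        height_shift_range=0.1,
        shear_range=0.15,
        horizontal_flip=False,
        vertical_flip=False,
        fill_mode="nearest")

    # initialize the optimizer and compile the model (the report also
    # applies learning rate decay, e.g. via an optimizer schedule)
    opt = Adam(learning_rate=INIT_LR)
    model = TrafficSignModel.build(width=32, height=32, depth=3,
        classes=numLabels)
    model.compile(loss="categorical_crossentropy", optimizer=opt,
        metrics=["accuracy"])

    # train the network, weighting classes to accommodate the skew
    H = model.fit(
        aug.flow(trainX, trainY, batch_size=BS),
        validation_data=(testX, testY),
        steps_per_epoch=trainX.shape[0] // BS,
        epochs=NUM_EPOCHS,
        class_weight=classWeight,
        verbose=1)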

Lines 103-111 initialize our data augmentation object with random rotation, zoom, shift and shear settings. Notice that we do not apply horizontal or vertical flips here, as traffic signs in the wild will not be flipped.
Lines 115-119 compile our TrafficSignModel model with the Adam optimizer and learning rate decay.
Lines 123-129 train the model using Keras' fit_generator method (merged into fit in newer Keras versions). Notice the class_weight parameter is passed to accommodate the skew in our dataset.

Evaluate the model and serialize it to disk:
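A sketch of the evaluation and serialization step:

    # evaluate the network on the testing set and print a classification report
    predictions = model.predict(testX, batch_size=BS)
    print(classification_report(testY.argmax(axis=1),
        predictions.argmax(axis=1), target_names=labelNames))

    # serialize the model to disk in HDF5 (.h5) format so it can later be
    # converted to TensorFlow Lite
    model.save(args["model"])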

Line 133 evaluates the model on the testing set. From there, Lines 134 and 135 print a
classification report in the terminal.
Line 139 serializes the Keras model to disk so that we can later use it for inference in our
prediction script.
Line 142 saves the model in .h5 format, which can later be converted into the TensorFlow Lite .tflite format for use in the Android application.

Finally, the following code block plots the training accuracy/loss curves and exports the plot
to an image file on disk:
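A sketch of the plotting block:

    # plot the training loss and accuracy and export the figure to disk
    N = np.arange(0, NUM_EPOCHS)
    plt.style.use("ggplot")
    plt.figure()
    plt.plot(N, H.history["loss"], label="train_loss")
    plt.plot(N, H.history["val_loss"], label="val_loss")
    plt.plot(N, H.history["accuracy"], label="train_acc")
    plt.plot(N, H.history["val_accuracy"], label="val_acc")
    plt.title("Training Loss and Accuracy")
    plt.xlabel("Epoch #")
    plt.ylabel("Loss/Accuracy")
    plt.legend(loc="lower left")
    plt.savefig(args["plot"])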

Accuracy/loss curves:

Figure 11 - Accuracy/Loss Curves

4.3 Mobile Application

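As described in Sections 1.2 and 4.2.2, the trained Keras model is exported to TensorFlow Lite so the Android application can load the .tflite file with the TensorFlow Lite Object Detection API. A minimal sketch of the conversion and a quick sanity check of the converted model (file names are assumptions) is shown below; the resulting .tflite file is then bundled into the Android Studio project:

    import numpy as np
    import tensorflow as tf

    # convert the saved Keras model to TensorFlow Lite format
    model = tf.keras.models.load_model("output/trafficsignmodel.h5")
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    with open("trafficsignmodel.tflite", "wb") as f:
        f.write(tflite_model)

    # sanity-check the converted model with the TFLite interpreter
    interpreter = tf.lite.Interpreter(model_path="trafficsignmodel.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    dummy = np.random.rand(1, 32, 32, 3).astype("float32")
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]).shape)  # one row of class scores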
5. Conclusion

Having examined AI-based TSC and mobile application development around it, I can conclude that AI is getting closer and closer to humans without us even noticing; more and more smart devices are powered by AI principles. Since technologies like TensorFlow were introduced, building AI applications has become much easier and is no longer a hard-core process to implement.

Technologies like Fast R-CNN and Faster R-CNN make AI applications faster and lighter, so that a mobile application is capable of completing heavy tasks in seconds.

When it comes to traffic sign detection, the conclusion is that it is a big concern of the major automobile makers, but for consumer-level drivers it is not as attainable as other options. The main reason is the price tag of vehicles with built-in TSC, which are not affordable for a person with an average income; people therefore have little interest in such features at a huge price. The majority also do not like installing a separate module for the task, given its cost and the hassle of installing and maintaining a device. I therefore conclude that a mobile application is the best way for average drivers to use TSC.

Even though many studies have been carried out on the subject, there is still no reliable mobile application for TSC available to the public. Since the current generation has so much experience with mobile apps, expectations for such an application are high too. Therefore, with the app we build, we must make sure we deliver something that satisfies the user.

Finally, I propose taking this research and project as a prototype and building a commercial-level application on top of it.

6. Bibliography

ACEA, 2019. Average Vehicle Age. [Online] Available at: https://www.acea.be/statistics/tag/category/average-vehicle-age [Accessed 2020].

Anyoha, R., 2017. The History of Artificial Intelligence.

Ashley, 2017. History of Traffic Signs. [Online] Available at: https://www.degemmill.com/history-traffic-signs/ [Accessed 01 2020].

Christian Szegedy, V. V. S. I. J. S., 2016. Rethinking the Inception Architecture for Computer Vision. IEEE Xplore.

Fleyeh, H., 2008. Traffic and Road Sign Recognition. s.l.: Napier University.

Fukuzawa, Y., 2013. Traffic Sign Recognition System on Android Devices. s.l.: Massey University.

Garnham, A., 2018. Artificial Intelligence: An Introduction. New York: Routledge.

Girshick, R. B., 2013. Rich feature hierarchies for accurate object detection and semantic segmentation. [Online] Available at: http://arxiv.org/abs/1311.2524.pdf [Accessed 2020].

H. Akatsuka, S. I., 1987. Road Sign-posts Recognition System. Society of Automotive Engineers.

Izmirli, B. X. a. O., 2012. A Road Sign Detection and Recognition System for Mobile Devices. s.l.: s.n.

Jordi Cornellà Calsina, I. D. B. G. V., n.d. Service detection, localization and recognition of traffic signals for mobile devices: mobile terminal software. Title in Catalan: Servei de detecció, reconeixement i localització de senyals de trànsit per terminals mòbils: Software de terminal mòbil. s.l.: s.n.

Shaoqing Ren, K. H. R. G. a. J. S., 2016. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. [Online] Available at: https://arxiv.org/pdf/1506.01497.pdf

Statista, 2019. Number of Smartphone Users Worldwide from 2016 to 2021. [Online] Available at: https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/ [Accessed 2020].

Vacaville, T., 2019. How the Toyota Road Sign Assist Feature Works. [Online] Available at: https://www.toyotavacaville.com/blog/how-the-toyota-road-sign-assist-feature-works/ [Accessed 2020].

WHO, 2018. Road Traffic Injuries. [Online] Available at: https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries [Accessed 2020].

Yu, C.-H. L. a. C.-C., 2010. An Efficient Real-Time Traffic Sign Recognition System for Intelligent Vehicles with Smart Phones. Washington, DC, USA: IEEE Computer Society.

