Smart Control of Traffic Using Artificial Intelligence: CE4098D Major Project - Part I


CE4098D Major Project - Part I

Smart Control of Traffic Using Artificial Intelligence

Group Members

Sl No. Roll No. Name


1 B181097CE YARAMASU VAMSI
2 B181083CE UPPARI ANAND
3 B180511CE NARIGE YESHWANTH
4 B170454CE KISHAN KUMAR

Under the Supervision of

Dr. Yogeshwar Vijaykumar Navandar


Assistant Professor
Department of Civil Engineering



National Institute of Technology Calicut
2021-22

Introduction

With the increasing number of vehicles in urban areas, many road networks are facing a drop in road capacity
and in the corresponding Level of Service. Many traffic-related issues arise because traffic control systems at
intersections use fixed signal timers, repeating the same phase sequence and durations without change. Increased
demand for road capacity also increases the need for new traffic control solutions, which can be found in the
field of Intelligent Transport Systems.

Let us take the case study of Mumbai and Bangalore. Traffic flow in Bangalore is the worst in the world,
while Mumbai is close behind in fourth position, according to a report detailing the traffic situation in 416 cities
across 57 countries. In Bangalore, a journey during rush hour takes 71% longer; in Mumbai, it is 65% longer.

There are three standard methods for traffic control that are being used currently:
1) Manual Controlling: As the name suggests, it requires manpower to control the traffic. Traffic police are
allotted to a required area to control the traffic, carrying signboards, sign lights, and whistles.
2) Conventional traffic lights with static timers: These are controlled by fixed timers. A constant numerical
value is loaded into the timer, and the lights switch between red and green automatically based on the timer
value.
3) Electronic Sensors: Another advanced method is placing loop detectors or proximity sensors on the road.
These sensors give data about the traffic on the road, and the traffic signals are controlled according to
the sensor data.

These conventional methods face certain drawbacks. The manual controlling system requires a large amount
of manpower. Since the strength of the traffic police force is limited, they cannot control traffic manually in
all areas of a city or town, so a better system to control the traffic is needed. Static traffic control uses a traffic
light with a fixed timer for every phase, which does not adapt to the real-time traffic on that road. With electronic
sensors, i.e., proximity sensors or loop detectors, accuracy and coverage are often in conflict: collecting
high-quality information usually requires sophisticated and expensive technology, so a limited budget reduces the
number of facilities. Moreover, due to the limited effective range of most sensors, covering an entire network of
facilities usually requires a large number of sensors.

In recent years, video monitoring and surveillance systems have been extensively used in traffic management for
security, ramp metering, and providing information and updates to travelers in real-time. The traffic density
estimation and vehicle classification can also be achieved using video monitoring systems, which can then be
used to control the timers of the traffic signals so as to optimize traffic flow and minimize congestion. Our
proposed system aims to design a traffic light controller based on Computer Vision that can adapt to the current
traffic situation. It uses live images from the CCTV cameras at traffic junctions for real-time traffic density
calculation by detecting the number of vehicles at the signal and setting the green signal time accordingly.

The vehicles are classified as a car, bike, bus/truck, or rickshaw to obtain an accurate estimate of the green signal
time. It uses YOLO in order to detect the number of vehicles and then set the timer of the traffic signal according
to vehicle density in the corresponding direction. This helps to optimize the green signal times, and traffic is
cleared at a much faster rate than a static system, thus reducing the unwanted delays, congestion, and waiting
time, which in turn will reduce the fuel consumption and pollution.
Literature Review/Background

Khushi 2017 [2] proposes a solution using video processing. The video from the live feed is processed before
being sent to the servers, where a C++-based algorithm generates the results. Hard-coded and dynamically
coded methodologies are compared, in which the dynamic algorithm showed an improvement of 35%.

A. Vogel 2017 [3] proposes an Arduino UNO-based system that aims to reduce traffic congestion and
waiting time. The system acquires images through a camera and processes them in MATLAB, where each
image is converted to a threshold image by removing saturation and hues, and the traffic density is
calculated. The Arduino and MATLAB are connected via USB and preinstalled simulation packages.
Depending on the traffic count and density, the Arduino sets the duration of the green light for each lane. However,
this method has several flaws: cars often overlap, making it difficult to get a proper count of how many
vehicles are on the road, and other objects interfered with the detection because they too were converted to
black and white, with no way of distinguishing regular objects like billboards, poles, and trees from vehicles.

A. A. Zaid 2017 [4] proposes a fuzzy-logic-controlled traffic light that can adapt to the current traffic
situation. This system makes use of two fuzzy controllers with three inputs and one output for the primary and
secondary driveways. A simulation was done using VISSIM and MATLAB, and for low traffic density it
improved traffic conditions.

Renjith Soman 2018 [5] proposes a smart traffic light system using an ANN and a fuzzy controller. This system
makes use of images captured from cameras installed at the traffic site. Each image is first converted to
grayscale before further normalization. Then, segmentation is performed using a sliding-window technique to
count the cars irrespective of size, and an ANN is run over the segmented image, whose output is used by the
fuzzy controller to set the timers for the red and green lights using crisp output. Results had an average error
of 2% with an execution time of 1.5 seconds.

A. Kanungo 2016 [6] makes use of a support vector machine (SVM) algorithm along with image processing
techniques. From live video, images in small frames are captured and the algorithm is applied. Image processing
is done using OpenCV, and the images are converted to grayscale before the SVM is applied. This system
not only estimates traffic density but also detects red-light violations.

Siddharth Srivastava 2019 [7] proposes adaptive traffic light timer control using image processing
techniques and traffic density. The system consists of a microcontroller-controlled traffic light timer,
image-sensing devices, MATLAB, and transmission using UART principles. However, this system fails to
prioritize authorized emergency vehicles and cannot detect accidents at the intersection.

Ms. Saili Shinde 2016 [8] reviews various techniques used for traffic light management systems. This paper
observes that each technique shares a common architecture: choose the input data, acquire traffic parameters
from it, process them, determine the density, and update the signal parameters.
 In the first method, VANETs are used to obtain the information and location of every vehicle, which in turn is
passed on to the nearest Intelligent Traffic Light (ITL) with the help of installed GPS. Further, these ITLs
update the statistics and send them to nearby vehicles. In case of an accident, the information is sent to
drivers so they can choose an alternate route and avoid congestion. However, this technique is not feasible,
as its deployment is quite expensive.
 In the second method, infrared-sensor-based microcontrollers are used, which capture the unique ID of
every car using a transmitter and receiver. In an emergency situation, vehicles' radio-frequency tags
can be used to identify them and let other vehicles move. This method detects red-light violations.
However, this technique is not flexible, because the infrared sensors need a line of sight.
 In the third method, a fuzzy logic technique is used in which two fuzzy logic controllers are employed: one
optimizes the signal, and the other extends the green phase of a road at an intersection.
The sensors used to collect input data are video cameras placed at the incoming and outgoing lanes.
The controller then uses the information collected through these sensors to make optimal decisions
and minimize the goal function.
 In the fourth method, fuzzy logic is used, and the system takes in the number of vehicles and the average
speed of traffic flow in each direction as the input parameters. The number of vehicles and the average
speed of traffic flow can be determined using sensors placed on the road.
 In the fifth method, photoelectric sensors set some distance apart capture data and send it to the
traffic cabinet, which calculates the weight of each road and then sets the traffic lights accordingly.
However, the maintenance cost is quite high.
 In the sixth method, video imaging is used to capture the data. Dynamic background subtraction and
various morphological operations are performed to capture a clear image of the vehicle. Every time a
new vehicle enters the area of interest, a new rectangle is drawn and the vehicle count is incremented. The
algorithm is easy to implement but does not handle occlusion and shadow overlapping.

Aims/Objectives of the Project

 To design a traffic light controller based on Computer Vision to adapt to the dynamic traffic situation.
 To optimize the green signal times.
 To reduce the unwanted delays, congestion, and waiting time, which will reduce fuel consumption and pollution.

Methodology

A. Proposed System Overview


Our proposed system takes an image from the CCTV cameras at traffic junctions as input for real-time
traffic density calculation using image processing and object detection. As shown in Fig. 1, this image is
passed on to the vehicle detection algorithm, which uses YOLO. The number of vehicles of each class, such
as car, bike, bus, and truck, is detected and used to calculate the traffic density. The signal switching algorithm
uses this density, among some other factors, to set the green signal timer for each lane. The red signal times
are updated accordingly. The green signal time is restricted to a maximum and minimum value in order to avoid
starvation of a particular lane. A simulation is also developed to demonstrate the system’s effectiveness and
compare it with the existing static system.

B. Vehicle Detection Module

The proposed system uses YOLO (You Only Look Once) for vehicle detection, which provides the desired
accuracy and processing time. A custom YOLO model was trained for vehicle detection, which can detect
vehicles of different classes like cars, bikes, heavy vehicles (buses and trucks), and rickshaws.

Fig. 1. Proposed System Model

YOLO is a clever convolutional neural network (CNN) for performing object detection in real-time. The
algorithm applies a single neural network to the full image, and then divides the image into regions and predicts
bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted
probabilities. YOLO is popular because it achieves high accuracy while also being able to run in real-time.

The algorithm “only looks once” at the image in the sense that it requires only one forward propagation pass
through the neural network to make predictions. After non-max suppression (which makes sure the object
detection algorithm only detects each object once), it then outputs recognized objects together with the bounding
boxes. With YOLO, a single CNN simultaneously predicts multiple bounding boxes and class probabilities for
those boxes.

The dataset for training the model was prepared by scraping images from Google and labelling them
manually using LabelImg, a graphical image annotation tool. The model was then trained using the pre-trained
weights downloaded from the YOLO website, with the configuration in the .cfg file changed in accordance
with the specifications of our model.

The number of output neurons in the last layer was set equal to the number of classes the model is supposed
to detect by changing the 'classes' variable. In our system, this was 4, viz. car, bike, bus/truck, and rickshaw.
The number of filters also needs to be changed, per the formula 5*(5 + number of classes), i.e., 45 in our case.
After making these configuration changes, the model was trained until the loss was significantly low and no
longer seemed to decrease.
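For a YOLOv2-style configuration, these edits correspond to lines like the following in the .cfg file (an illustrative excerpt; section names and layout vary between YOLO versions):

```ini
# Final detection layers of a YOLOv2-style .cfg (illustrative excerpt)
[convolutional]
filters=45       # 5 * (5 + 4 classes)

[region]
classes=4        # car, bike, bus/truck, rickshaw
```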

Fig. 2. Vehicle Detection Results

Fig. 2 shows test images on which our vehicle detection model was applied. The left side of the figure shows the original image, and
the right side shows the output after the vehicle detection model is applied, with bounding boxes and corresponding labels.

This marked the end of the training, and the weights were now updated according to our requirements. A
threshold is set as the minimum confidence required for successful detection. After the model is loaded and an
image is fed to the model, it gives the result in a JSON format i.e., in the form of key-value pairs, in which labels
are keys, and their confidence and coordinates are values. Again, OpenCV can be used to draw the bounding
boxes on the images from the labels and coordinates received.
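Parsing this key-value output into per-class counts can be sketched as follows (the field names and the threshold value are assumptions based on the description above; the coordinate values would similarly be consumed by OpenCV's cv2.rectangle for drawing):

```python
# Hypothetical detection output in the key-value form described above
# (field names are illustrative assumptions, not the exact model output):
detections = [
    {"label": "car", "confidence": 0.91},
    {"label": "car", "confidence": 0.35},   # below threshold, discarded
    {"label": "bike", "confidence": 0.88},
    {"label": "bus", "confidence": 0.77},
]

THRESHOLD = 0.5  # minimum confidence required for a successful detection

def count_vehicles(detections, threshold=THRESHOLD):
    """Count confident detections per vehicle class."""
    counts = {}
    for det in detections:
        if det["confidence"] >= threshold:
            counts[det["label"]] = counts.get(det["label"], 0) + 1
    return counts

print(count_vehicles(detections))  # {'car': 1, 'bike': 1, 'bus': 1}
```

These per-class counts are exactly what the signal switching module consumes to compute traffic density.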

C. Signal Switching Module


The Signal Switching Algorithm sets the green signal timer according to traffic density returned by the
vehicle detection module, and updates the red signal timers of other signals accordingly. It also switches
between the signals cyclically according to the timers. The algorithm takes the information about the vehicles that
were detected from the detection module, as explained in the previous section, as input. This is in JSON format,
with the label of the object detected as the key and the confidence and coordinates as the values. This input is then
parsed to calculate the total number of vehicles of each class. After this, the green signal time for the signal is
calculated and assigned to it, and the red signal times of other signals are adjusted accordingly. The algorithm
can be scaled up or down to any number of signals at an intersection.
The following factors were considered while developing the algorithm:
1) The processing time of the algorithm to calculate traffic density and then the green light duration – this
decides at what time the image needs to be acquired
2) Number of lanes
3) Total count of vehicles of each class like cars, trucks, motorcycles, etc.
4) Traffic density calculated using the above factors
5) Time added due to lag each vehicle suffers during start-up and the non-linear increase in lag suffered by
the vehicles which are at the back
6) The average speed of each class of vehicle when the green light starts i.e. the average time required to cross
the signal by each class of vehicle
7) The minimum and maximum time limit for the green light duration - to prevent starvation
When the algorithm is first run, the default time is set for the first signal of the first cycle and the times for
all other signals of the first cycle and all signals of the subsequent cycles are set by the algorithm. A separate
thread is started which handles the detection of vehicles for each direction and the main thread handles the timer
of the current signal. When the green light timer of the current signal (or the red light timer of the next green signal)
reaches 5 seconds, the detection threads take the snapshot of the next direction. The result is then parsed and
the timer of the next green signal is set. All this happens in the background while the main thread is counting
down the timer of the current green signal. This allows the assignment of the timer to be seamless and hence
prevents any lag. Once the green timer of the current signal becomes zero, the next signal becomes green for
the amount of time set by the algorithm.
The image is captured when the time of the signal that is to turn green next is 5 seconds. This gives the
system a total of 10 seconds to process the image, to detect the number of vehicles of each class present in the
image, calculate the green signal time, and accordingly set the times of this signal as well as the red signal time of
the next signal. To find the optimum green signal time based on the number of vehicles of each class at a signal,
the average speeds of vehicles at startup and their acceleration times were used, from which an estimate of the
average time each class of vehicle takes to cross an intersection was found. The green signal time is then
calculated using (1).

GST = [ Σ over vehicle classes ( NoOfVehiclesOfClass × AverageTimeOfClass ) ] / (NoOfLanes + 1)    (1)
where:
 GST is green signal time
 NoOfVehiclesOfClass is the number of vehicles of each class of vehicle at the signal as detected by the
vehicle detection module,
 AverageTimeOfClass is the average time the vehicles of that class take to cross an intersection, and
 NoOfLanes is the number of lanes at the intersection.
The average time each class of vehicle takes to cross an intersection can be set according to the location,
i.e., region- wise, city-wise, locality-wise, or even intersection-wise based on the characteristics of the
intersection, to make traffic management more effective. Data from the respective transport authorities can be
analyzed for this.
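A minimal sketch of this calculation in Python is shown below. The average crossing times, the starvation bounds, and the (NoOfLanes + 1) denominator are illustrative assumptions of this sketch, not the project's calibrated values:

```python
# Illustrative average crossing times (seconds); real values would come
# from the transport authority data mentioned above.
AVERAGE_TIME = {"car": 2.0, "bike": 1.0, "bus": 3.0, "truck": 3.0, "rickshaw": 2.5}

MIN_GST, MAX_GST = 10, 60  # assumed bounds to prevent lane starvation

def green_signal_time(vehicle_counts, no_of_lanes):
    """Compute the green signal time and clamp it to [MIN_GST, MAX_GST]."""
    total = sum(vehicle_counts.get(cls, 0) * t for cls, t in AVERAGE_TIME.items())
    gst = total / (no_of_lanes + 1)   # denominator form assumed in this sketch
    return max(MIN_GST, min(MAX_GST, round(gst)))

# e.g. 30 cars and 10 buses on a 2-lane approach:
print(green_signal_time({"car": 30, "bus": 10}, 2))  # 30
```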
The signals switch in a cyclic fashion, not according to the densest direction first. This is in accordance
with the current system, where the signals turn green one after the other in a fixed pattern; it does not require
people to alter their behaviour and causes no confusion. The order of signals is also the same as in the current
system, and the yellow signals have been accounted for as well.
Order of signals: Red → Green → Yellow → Red
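The two-thread timing described above can be sketched as follows. Sleep durations are scaled down for illustration, and `detect_next_direction` is a hypothetical stand-in for the snapshot-and-detect step:

```python
import threading
import time

def detect_next_direction(direction):
    """Placeholder for taking a snapshot and running vehicle detection."""
    time.sleep(0.1)            # stands in for YOLO processing time
    return 15                  # hypothetical green time (seconds)

def run_signal(current_green_time, next_direction):
    """Count down the current green timer; start detection when 5 s remain."""
    result = {}
    worker = None
    for remaining in range(current_green_time, 0, -1):
        if remaining == 5:     # snapshot taken 5 seconds before switching
            worker = threading.Thread(
                target=lambda: result.update(gst=detect_next_direction(next_direction)))
            worker.start()     # detection runs in the background
        time.sleep(0.01)       # one tick (1 s per tick in the real system)
    if worker:
        worker.join()          # detection has finished by switch-over time
    return result.get("gst")   # green time for the next signal
```

Because the detection thread starts well before the countdown reaches zero, the next green time is ready when the switch happens, which is what makes the assignment seamless.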
D. Simulation Module
A simulation was developed from scratch using Pygame to simulate real-life traffic. It assists in visualizing
the system and comparing it with the existing static system. It contains a 4-way intersection with 4 traffic
signals. Each signal has a timer on top of it, which shows the time remaining for the signal to switch from
green to yellow, yellow to red, or red to green. Each signal also has the number of vehicles that have crossed
the intersection displayed beside it. Vehicles such as cars, bikes, buses, trucks, and rickshaws come in from all
directions. In order to make the simulation more realistic, some of the vehicles in the rightmost lane turn to
cross the intersection. Whether a vehicle will turn or not is also set using random numbers when the vehicle is
generated. It also contains a timer that displays the time elapsed since the start of the simulation. Fig. 3 shows
a snapshot of the final output of the simulation.
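The random vehicle-generation step can be sketched as follows (the class names, lane numbering, and turn probability are illustrative assumptions):

```python
import random

VEHICLE_CLASSES = ["car", "bike", "bus", "truck", "rickshaw"]
DIRECTIONS = ["right", "down", "left", "up"]

def generate_vehicle(rng=random):
    """Create one simulated vehicle with a random class, direction, and lane.

    Only vehicles in the rightmost lane (lane 2 here) may turn; the decision
    is made by a random draw when the vehicle is generated, as described above.
    """
    vehicle = {
        "class": rng.choice(VEHICLE_CLASSES),
        "direction": rng.choice(DIRECTIONS),
        "lane": rng.randint(0, 2),
    }
    vehicle["will_turn"] = vehicle["lane"] == 2 and rng.random() < 0.4
    return vehicle
```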

Pygame is a cross-platform set of Python modules designed for writing video games. It includes computer
graphics and sound libraries designed to be used with the Python programming language. Pygame adds
functionality on top of the excellent SDL library. This allows users to create fully featured games and
multimedia programs in the Python language. Pygame is highly portable and runs on nearly every platform and
operating system. It is free and licensed under the LGPL.

Fig. 3. Simulation output

Works Done

1. Collected various journals, codebooks, and rating systems related to the control of traffic signals.

2. Understood the various parameters that cause traffic delays.

The delay has been described as a measure of performance at priority junctions and it is often used to assess
the level of service of priority junctions. Service delay is an important element of the total delay experienced
by the drivers on minor approaches at junctions controlled by stop signs. Service delay in principle depends
on the conflicting traffic volume, its composition and the right of way at the junction under consideration.
There are two methods for calculating delay at priority junctions: gap acceptance, in which delay is
calculated from the lengths of gaps in the major movement and the probability of their acceptance by
minor-road drivers; and a queue-based method, in which delay is calculated from the queue length of
minor-road vehicles, itself determined from the gap lengths on the major road.

In this research, the first method was used for the calculation of delay during day time and twilight time.
Guidelines for priority junctions have used various techniques for establishing delay models. Lu and Lall
established a non-linear multivariable model for two-way stop-controlled junctions using 34 hours of data
collected by video camera in Alaska. Their model describes minor-road traffic delay as a function of the
minor-road and major-road traffic volumes, and is said to have a modest data requirement in comparison with
the HCM delay model. Al-Omari and Benekohal established a technique to analyse delay at unsaturated
TWSC junctions.

The technique was based on 28 hours of traffic data collected at different locations using a video camera
recording technique. The model was reported to estimate delays more accurately than the 1994 HCM delay
model. Although small dissimilarities exist between the results of these models, there is no clear understanding
of which technique is more precise. In practice, the HCM delay model is generally used for the evaluation of
control delay at priority junctions. The control delays in this study are based on two theoretical methods, i.e.,
the Tanner and Highway Capacity Manual methods. Tanner's model assumes that minor-road vehicles arrive at
the junction at random; that the major-road traffic flow forms an alternating renewal process, with the time
taken for a group of vehicles to cross the junction having an arbitrary distribution and the gaps between bunches
distributed exponentially; and that minor-street vehicles cross the major street at equally spaced instants during
a gap, provided there is at least a constant time (t) before the start of the next group.
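As a concrete instance of the gap-acceptance approach, the HCM potential-capacity expression for a minor movement, c_p = v_c · e^(-v_c·t_c/3600) / (1 - e^(-v_c·t_f/3600)), can be evaluated as follows (the flow, critical gap, and follow-up values below are illustrative, not measured data):

```python
import math

def potential_capacity(conflicting_flow, t_c, t_f):
    """HCM gap-acceptance potential capacity of a minor movement (veh/h).

    conflicting_flow: conflicting major-stream flow rate (veh/h)
    t_c: critical gap (s); t_f: follow-up time (s)
    """
    v = conflicting_flow
    return v * math.exp(-v * t_c / 3600) / (1 - math.exp(-v * t_f / 3600))

# e.g. 400 veh/h conflicting flow, 6.5 s critical gap, 3.5 s follow-up time
cap = potential_capacity(400, 6.5, 3.5)   # roughly 600 veh/h
```

As expected, the capacity of the minor movement falls as the conflicting major-stream flow rises, which is what drives the service delay discussed above.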

3. Understood the benefits of a smart control system for traffic signals.

To discover the traffic situation, the system will be developed utilizing the CCTV cameras at intersections.
Smart traffic lights can also be combined with GPS, laser radars, and sensors to increase the safety of
pedestrians or to warn against potential vehicle traffic threats like irregular traffic flow or a vehicle swerving
or standing still. This technology is already used in Tesla vehicles, and nothing stops traffic lights from
sharing this information to assist or prevent potential accidents. Should an accident occur, smart traffic
light systems will automatically notify the emergency services. In doing so, the 'golden hour', the window the
medical fraternity regards as critical to survival, can be used well, and the collision can be cleared sooner.
Traffic lights are part of a city's infrastructure and therefore present a perfect opportunity to assist in creating
a smart city. Information is essential to any network, and as smart traffic lights gather more information
through monitoring traffic flow and using artificial intelligence, they will be able to share information citywide
and work out the best options for traffic across the city.

Smart or intelligent traffic lights, whichever way we prefer to call them, also offer great value for money,
as only one traffic light at any intersection has to be fitted with smart technology. This is because cities
mostly design intersections the same way: a street intersection with paving and buildings next to it. The
computer systems in these smart traffic lights only get better as technology improves, roughly every 18 months,
and can therefore make use of more sensors and process more data. The catch is that the system works best if
the vehicles also use smart technology, such as GPS and driver-assistance systems monitoring traffic, speed,
and pedestrians, and if drivers use map navigation systems in the vehicle, on a GPS device like TomTom, or on
their smartphones.

Vehicles with smart technology can work together and send information to the smart traffic lights, giving
them more to work with and therefore making the system better, as more information is available for smart
decisions. Vehicles can send information as often as they are programmed to, which can be hundreds of times
per second thanks to smart technology; the Tesla self-driving technology is witness to this. When a smart
traffic light receives the information, not only will it be able to make instant and clever decisions, it will also
be able to send this information back to the smart vehicles and, in doing so, help guide vehicles throughout the
city. This is also made possible because smart traffic lights can communicate with one another to find the best
solution for incidents such as an accident or adverse weather conditions. Top automakers including Volkswagen,
Honda, Ford, and BMW are experimenting with technology that allows cars and traffic lights to communicate
and work together to ease congestion, cut emissions, and increase safety. Smart technologies of the future will
also be able to give emergency services a green light, an innovation that will prevent accidents and casualties.

This is already happening in Helmond, a city in the southern Netherlands. In June 2019, Dynniq and its partner
KPN announced at the ITS European Congress in Eindhoven that Dynniq's innovative GreenFlow for
BlueLights invention automatically switches traffic lights to green for emergency service vehicles. There
are more and more accidents involving emergency vehicles, and most such accidents occur at junctions with
traffic lights within built-up areas, after the emergency vehicle has encountered a red light while other
motorists have a green light. Quite a few risks are encountered when driving an emergency vehicle: because
normal traffic rules are not followed, it is more difficult for other road users to predict how emergency service
vehicles will behave, which can result in accidents and casualties. It is therefore crucial that traffic safety,
flow, and efficiency be improved by developing innovative smart technologies.

Advancements in technology also mean that we no longer have to rely solely on placing smart systems in
actual cities; we can test, analyse, improve, and predict a city's traffic conditions under different scenarios
without leaving a room, using computer model simulation. This works by taking information already gathered
by smart traffic lights, for example the number of vehicles, weather conditions, periods of heavy traffic (peak
hour), and vehicle speeds, and then using computer programs to identify problem areas and not only resolve
them but also recommend changes to improve traffic flow, such as changing the signal time of a traffic light or
placing a traffic island in the middle of the road. This helps to direct traffic flow as well as to provide safety
for pedestrians.

These network simulators are similar to the flight simulators used to train pilots, and there are even
simulators that can produce the best design for a traffic light considering the location, height, and visibility.
There are weaknesses in this current and future design, as smart traffic lights use wireless connections, unlike
the wired connections of the old days. A building between two Wi-Fi signal areas may block the signal, but
this can be overcome by using more Wi-Fi receivers in the streets, the Wi-Fi signals of smart vehicles, or
satellite communication.

To appreciate the real value of smart traffic lights, consider a simple example: the smart traffic light notices a
vehicle driving too slowly or stopping a few seconds longer than necessary, and immediately not only
identifies this but makes a decision to overcome the situation and prevent an accident. This is where one can
appreciate and understand the use of smart traffic lights, or of the smart technology in them. Future use may
include the smart traffic light applying the brakes in a vehicle or group of vehicles to prevent a traffic accident
or to improve traffic flow. With the speed of technological advancement, one can only imagine how clever and
helpful these systems will become. Not only do they improve road safety, but also drivers' quality of life,
which all drivers will welcome.

The use of smart technology also raises the question of security, as these systems can be hacked; governments
are aware of this and are working hard to protect their infrastructure from risks that could bring a city to a
standstill. Although smart technology will assist the daily travel of millions of people, it does raise the issue
of privacy, as people may feel they are being watched or monitored wherever they go, since the traffic lights
may be able to identify vehicles and mobile signals and make use of CCTV. Traffic light systems in practice
have multiple vulnerabilities that can be exploited. Since attackers can see the service set identifier of the
network, they may only need to acquire a radio of the same model as the controllers, which can be done by
socially engineering the manufacturer into selling one.

Already in London, the High Court dismissed the case brought by Ed Bridges, a resident of Cardiff, Wales,
who said his rights were violated by the use of facial recognition technology. Mr. Bridges claimed that he had
been recorded without his permission by the police using systems that scan human faces in real time.
Companies including Apple, Facebook, and Google already use the technology to identify people in pictures.
So one can argue that smart or intelligent traffic lights are both a blessing and a curse. However, we cannot
stop technology and must adapt to the changes, as it does improve our daily lives and road safety. Considering
the impact vehicles have on the environment, smart traffic lights will gain even higher priority, as they not
only reduce CO2 emissions but can also encourage people to use buses by giving them priority, thereby
reducing the number of vehicles on the road.

There is also the question of the older generation versus the next generation. Will the older generation accept
technological changes like smart traffic lights or delay them, and will the next generation appreciate the older
generation's value in implementing these technologies, considering the knowledge they hold? Chris Rand, an
established campaigner, believes that the key factor in making a change is to listen to young people, for they
seem to understand, perhaps better than the transport and planning authorities, the need for proactive policies.
However, I believe that the best way to overcome this barrier is to learn from past successes and failures. This
will help us understand our shortcomings and find new solutions; that is, if artificial intelligence does not do
so for us.

Smart traffic light technology also cannot be implemented overnight, so the sooner it is adopted and
supported by governments and cities, the better. It will require lengthy consultations, possible land
purchases or access agreements, and a rolling programme over several years. Disrupting traffic will be
part of the programme for future smart cities, but the benefits will make up for it.

4. Study of software packages (Python, OpenCV, Object Detection, Pygame)


a. PYTHON

Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its
high-level built-in data structures, combined with dynamic typing and dynamic binding, make it very
attractive for Rapid Application Development, as well as for use as a scripting or glue language to connect
existing components. Python's simple, easy-to-learn syntax emphasizes readability and therefore
reduces the cost of program maintenance. Python supports modules and packages, which encourages
program modularity and code reuse. The Python interpreter and the extensive standard library are available
in source or binary form, free of charge, for all major platforms, and can be freely distributed.

Often, programmers fall in love with Python because of the increased productivity it provides. Since there
is no compilation step, the edit-test-debug cycle is incredibly fast. Debugging Python programs is easy: a
bug or bad input will never cause a segmentation fault. Instead, when the interpreter discovers an error, it
raises an exception. When the program doesn't catch the exception, the interpreter prints a stack trace. A
source level debugger allows inspection of local and global variables, evaluation of arbitrary expressions,
setting breakpoints, stepping through the code a line at a time, and so on. The debugger is written in Python
itself, testifying to Python's introspective power. On the other hand, often the quickest way to debug a
program is to add a few print statements to the source: the fast edit-test-debug cycle makes this simple
approach very effective.

b. OPENCV

The term Computer Vision (CV) is used and heard very often in artificial intelligence (AI) and deep
learning (DL) applications. The term essentially means giving a computer the ability to see the world
as we humans do.
Computer Vision is a field of study which enables computers to replicate the human visual system.
As mentioned above, it is a subset of artificial intelligence which collects information from
digital images or videos and processes it to define attributes. The entire process involves image
acquisition, screening, analysis, identification, and information extraction. This extensive processing
helps computers to understand any visual content and act on it accordingly.
Computer vision projects translate digital visual content into explicit descriptions to gather multi-
dimensional data. This data is then turned into a computer-readable language to aid the decision-
making process. The main objective of this branch of artificial intelligence is to teach machines to
collect information from pixels.

When you look at an image, say of a dog, you most probably look for shapes and colours that help you
decide it is an image of a dog. But does a computer see it in the same way? The answer is no.
A digital image is composed of picture elements, also known as pixels, each with a finite,
discrete numeric value for its intensity or grey level. The computer sees an image as the numerical
values of these pixels, and in order to recognise a certain image, it has to recognise the patterns
and regularities in this numerical data.

Consider a hypothetical example of how pixels form an image: darker pixels are represented by numbers
closer to zero and lighter pixels by numbers approaching one, with all other shades in between. In
practice, however, a colour image has three primary channels, red, green, and blue, and the value of
each channel varies from 0 to 255. In simpler terms, a digital image is formed by the combination of
three basic colour channels, while a grayscale image has only one channel whose values also vary
from 0 to 255.
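To make this concrete, here is a small illustrative sketch in plain Python. The 3x3 grayscale image below is made up for illustration; it simply shows that, to the computer, an image is nothing but numbers:

```python
# A hypothetical 3x3 grayscale image: each entry is a pixel
# intensity from 0 (black) to 255 (white).
image = [
    [0,   128, 255],
    [64,  128, 192],
    [0,   255, 255],
]

# The computer works only with these numbers. For example,
# the average brightness of the whole image:
pixels = [value for row in image for value in row]
average = sum(pixels) / len(pixels)

# A colour pixel would instead be a (red, green, blue) triple,
# each channel again in the range 0-255.
colour_pixel = (255, 0, 0)  # pure red

print(average)
```

Any pattern recognition, from edge detection to object detection, ultimately operates on arrays of such numbers.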

OpenCV (Open Source Computer Vision Library) is an open-source software library for computer
vision and machine learning. OpenCV was created to provide a shared infrastructure for computer
vision applications and to speed up the use of machine perception in consumer products. As
BSD-licensed software, OpenCV makes it simple for companies to use and modify the code. There are
some predefined packages and libraries that make our life simple, and OpenCV is one of them.

Gary Bradski created OpenCV in 1999, and the first release came in 2000. The library is based
on optimised C/C++ and supports Java and Python along with C++ through an interface. It has
more than 2500 optimised algorithms, including an extensive collection of computer vision and
machine learning algorithms, both classic and state-of-the-art. Using OpenCV, it becomes easy to
perform complex tasks such as detecting and recognising faces, identifying objects, classifying
human actions in videos, tracking camera movements, tracking moving objects, extracting 3D object
models, generating 3D point clouds from stereo cameras, stitching images together into a single
high-resolution scene, and many more.

Python is a user-friendly language and easy to work with, but this advantage comes at the cost of
speed, as Python is slower than languages such as C or C++. So we extend Python with C/C++, which
allows us to write computationally intensive code in C/C++ and create Python wrappers that can be
used as Python modules. This way the code runs fast, since it is the actual C++ code working in the
background, while remaining as easy to write as Python. OpenCV-Python is a Python wrapper for the
original OpenCV C++ implementation.

c. Object Detection

One of the important fields of Artificial Intelligence is Computer Vision, the science of computers
and software systems that can recognize and understand images and scenes. Computer Vision comprises
various tasks such as image recognition, object detection, image generation, image super-resolution,
and more. Object detection is probably the most profound aspect of computer vision due to the number
of practical use cases. This section briefly introduces modern object detection, the challenges faced
by software developers, and the approach adopted in this project.

Object detection refers to the capability of computer and software systems to locate objects in an
image/scene and identify each object. Object detection has been widely used for face detection,
vehicle detection, pedestrian counting, web images, security systems and driverless cars. There are
many ways object detection can be used as well in many fields of practice. Like every other computer
technology, a wide range of creative and amazing uses of object detection will definitely come from
the efforts of computer programmers and software developers.

Using modern object detection methods in applications and systems, as well as building new
applications based on them, is not a straightforward task. Early implementations of object
detection used classical algorithms, like the ones supported in OpenCV, the popular
computer vision library. However, these classical algorithms could not achieve enough performance
to work under different conditions.

The breakthrough and rapid adoption of deep learning in 2012 brought modern, highly accurate
object detection algorithms and methods such as R-CNN, Fast R-CNN, Faster R-CNN and RetinaNet,
as well as fast yet highly accurate ones like SSD and YOLO. Using these deep-learning-based methods
requires a solid understanding of mathematics and of deep learning frameworks. Millions of
programmers and software developers want to integrate object detection into new products, but the
technology is kept out of their reach by the long and complicated path to understanding and
applying it in practice.

d. PYGAME

Python is one of the most popular programming languages; one might even call it a next-generation
language. Python is actively present in every emerging field of computer science, with vast
libraries for areas such as machine learning (NumPy, Pandas, Matplotlib), artificial intelligence
(PyTorch, TensorFlow), and game development (Pygame, Pyglet).

Pygame is a cross-platform set of Python modules used to create video games. It consists of
computer graphics and sound libraries designed to be used with the Python programming language.
Pygame was originally written by Pete Shinners to replace PySDL, and is suitable for creating
client-side applications. Before working with Pygame, we need to understand what kind of game we
want to develop, and basic knowledge of Python is required.

Game programming is very rewarding nowadays, and it can also be used in advertising and as a
teaching tool. Game development involves mathematics, logic, physics, AI, and much more, and it
can be amazingly fun. In Python, game programming is done with Pygame, one of the best modules
for the purpose.

Pygame is a set of Python modules designed for writing video games. It adds functionality on
top of the excellent SDL library, allowing you to create fully featured games and multimedia
programs in the Python language. Pygame is highly portable, runs on nearly every platform and
operating system, and has been downloaded millions of times. Pygame is free: released under the
LGPL licence, it can be used to create open-source, freeware, shareware, and commercial games
(see the licence for full details). For a nice introduction to Pygame, examine the line-by-line
chimp tutorial and the introduction for Python programmers. Pygame can also draw with many
different backends, including an ASCII art backend; OpenGL is often broken on Linux and Windows
systems, which is why professional games use multiple backends.
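A minimal Pygame sketch (illustrative only, assuming the `pygame` package is installed; the `dummy` video driver lets it run without a real display) shows the basic draw-and-flip pattern a traffic simulation relies on:

```python
import os
os.environ["SDL_VIDEODRIVER"] = "dummy"  # run without a real display

import pygame

pygame.init()
screen = pygame.display.set_mode((200, 200))

# Draw a "traffic light": a red circle on a black background.
screen.fill((0, 0, 0))
pygame.draw.circle(screen, (255, 0, 0), (100, 60), 20)
pygame.display.flip()

# The pixel at the circle's centre is now red.
c = screen.get_at((100, 60))
print((c.r, c.g, c.b))  # (255, 0, 0)

pygame.quit()
```

A real game loop would repeat the fill/draw/flip steps every frame while polling `pygame.event.get()` for input; this sketch runs the cycle once.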

5. Developed Vehicle Detection Code Using Python Programming Language

This module is responsible for detecting the number of vehicles in the image received as input from
the camera. More specifically, it will provide as output the number of vehicles of each vehicle class
such as car, bike, bus, truck, and rickshaw.

Code for object detection
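The counting step of this module can be sketched in plain Python. The detection tuples below are hypothetical stand-ins for what a detector such as YOLO would return, and the 0.5 confidence threshold is an assumption of this sketch:

```python
from collections import Counter

# The vehicle classes this module distinguishes.
VEHICLE_CLASSES = {"car", "bike", "bus", "truck", "rickshaw"}

def count_vehicles(detections):
    """Count detected vehicles per class.

    `detections` is a list of (class_name, confidence) pairs, a
    hypothetical stand-in for an object detector's output.
    Low-confidence detections and non-vehicle classes are ignored.
    """
    counts = Counter()
    for class_name, confidence in detections:
        if class_name in VEHICLE_CLASSES and confidence >= 0.5:
            counts[class_name] += 1
    return dict(counts)

# Example: four detections from one camera frame.
frame_detections = [
    ("car", 0.91), ("car", 0.84), ("bus", 0.77), ("dog", 0.95),
]
print(count_vehicles(frame_detections))  # {'car': 2, 'bus': 1}
```

The per-class counts returned here are exactly what the signal-timing module consumes as input.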

6. Developed traffic flow simulation code using Python Programming Language
1. The following Python modules are used

 random - This module implements pseudo-random number generators for various distributions. For
integers, there is uniform selection from a range. For sequences, there is uniform selection of a random
element, a function to generate a random permutation of a list in-place, and a function for random sampling
without replacement.

On the real line, there are functions to compute uniform, normal (Gaussian), lognormal, negative
exponential, gamma, and beta distributions. For generating distributions of angles, the von Mises
distribution is available. The functions supplied by this module are actually bound methods of a hidden
instance of the random.Random class. You can instantiate your own instances of Random to get
generators that don’t share state.

Class Random can also be subclassed if you want to use a different basic generator of your own devising:
in that case, override the random(), seed(), getstate(), and setstate() methods. Optionally, a new generator
can supply a getrandbits() method; this allows randrange() to produce selections over an arbitrarily
large range.

 math - This module provides access to the mathematical functions defined by the C standard.

These functions cannot be used with complex numbers; use the functions of the same name from
the cmath module if you require support for complex numbers. The distinction between functions which
support complex numbers and those which don’t is made since most users do not want to learn quite as
much mathematics as required to understand complex numbers.

 time - This module provides various time-related functions. For related functionality, see also
the datetime and calendar modules.

Although this module is always available, not all functions are available on all platforms. Most of the
functions defined in this module call platform C library functions with the same name. It may sometimes
be helpful to consult the platform documentation, because the semantics of these functions varies among
platforms.

 threading - In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at
once (even though certain performance-oriented libraries might overcome this limitation). If you want
your application to make better use of the computational resources of multi-core machines, you are advised
to use multiprocessing or concurrent.futures.ProcessPoolExecutor. However, threading is still an
appropriate model if you want to run multiple I/O-bound tasks simultaneously.

 sys - This module provides access to some variables used or maintained by the interpreter and to
functions that interact strongly with the interpreter. It is always available. For example,
sys.addaudithook(hook) appends a callable hook to the list of active auditing hooks for the current
(sub)interpreter.

When an auditing event is raised through the sys.audit() function, each hook is called, in the order it
was added, with the event name and the tuple of arguments. Native hooks added
by PySys_AddAuditHook() are called first, followed by hooks added in the current (sub)interpreter.
Hooks can then log the event, raise an exception to abort the operation, or terminate the process entirely.

Calling sys.addaudithook() itself raises an auditing event named sys.addaudithook with no arguments.
If any existing hook raises an exception derived from RuntimeError, the new hook is not added and
the exception is suppressed. As a result, callers cannot assume that their hook has been added unless they
control all existing hooks.

 os - This module provides a portable way of using operating system dependent functionality. If you just
want to read or write a file see open(), if you want to manipulate paths, see the os.path module, and if you
want to read all the lines in all the files on the command line see the fileinput module. For creating
temporary files and directories see the tempfile module, and for high-level file and directory handling see
the shutil module.
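A minimal sketch of how these modules could combine in the simulation (the vehicle classes, the sleep interval, and the 2.5 s factor are illustrative assumptions, not the project's exact values):

```python
import random
import math
import threading
import time

VEHICLE_CLASSES = ["car", "bike", "bus", "truck", "rickshaw"]

generated = []
lock = threading.Lock()

def generate_vehicles(n, seed):
    """Background thread: generate n random vehicle arrivals."""
    rng = random.Random(seed)  # own generator, no shared state
    for _ in range(n):
        vehicle = rng.choice(VEHICLE_CLASSES)
        with lock:
            generated.append(vehicle)
        time.sleep(0.001)  # stand-in for an inter-arrival gap

# Vehicle generation runs alongside the main thread, as in the
# simulation, where signals and vehicles are updated concurrently.
t = threading.Thread(target=generate_vehicles, args=(10, 42))
t.start()
t.join()

# math is used, e.g., to round up a green-time estimate.
green_time = math.ceil(len(generated) * 2.5)
print(len(generated), green_time)  # 10 25
```

Generation being I/O-bound (it mostly sleeps), threading is an appropriate model here despite the GIL, exactly as the threading documentation quoted above advises.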

2. The following Sample images are used for real vehicles

3. Traffic Lights

4. 4 – way Junction

5. Developed green interval time calculating algorithm and signal switching algorithm

This algorithm updates the red, green, and yellow times of all signals. These timers are set based
on the count of vehicles of each class received from the vehicle detection module and several other
factors such as the number of lanes, the average speed of each class of vehicle, etc.

o Cycle: A signal cycle is one complete rotation through all of the indications provided.
o Cycle length: Cycle length is the time in seconds that it takes a signal to complete one full cycle
of indications, i.e. the interval between the start of green for one approach and the next time
that green starts. It is denoted by C.
o Interval: An interval indicates the change from one stage to another. There are two types of
intervals: the change interval and the clearance interval. The change interval, also called yellow
time, is the interval between the green and red signal indications for an approach. The clearance
interval, also called all-red, is included after each yellow interval; it is a period during which
all signal faces show red and is used for clearing vehicles out of the intersection.
o Green interval: It is the green indication for a particular movement or set of movements and is
denoted by Gi. This is the actual duration the green light of a traffic signal is turned on.
o Red interval: It is the red indication for a particular movement or set of movements and is denoted
by Ri. This is the actual duration the red light of a traffic signal is turned on.
o Phase: A phase is the green interval plus the change and clearance intervals that follow it.
Non-conflicting movements are assigned to each phase; a phase allows a set of movements to flow
and safely halts the flow before the phase of another set of movements starts.
o There is no precise methodology for the design of phases; it is often guided by the geometry of
the intersection, the flow pattern (especially the turning movements), and the relative magnitudes
of flow. Therefore, a trial-and-error procedure is often adopted. Phase design is nevertheless very
important because it affects the further design steps. Furthermore, it is easier to change the cycle
time and green time when the flow pattern changes, whereas a drastic change in the flow pattern may
cause considerable confusion to drivers. To illustrate various phase plan options, consider a
four-legged intersection with through traffic and right turns; left turns are ignored (see figure
41:1). The first issue is to decide how many phases are required. It is possible to have two, three,
four, or even more phases.
o There are two intervals, namely the change interval and the clearance interval, normally provided
in a traffic signal. The change interval, or yellow time, is provided after the green time for a
movement. Its purpose is to warn a driver approaching the intersection at the end of a green time
about the coming red signal. It normally has a value of 3 to 6 seconds.
o The design consideration is that a driver approaching the intersection at the design speed should
be able to stop at the stop line of the intersection before the start of red time. The Institute of
Transportation Engineers (ITE) has recommended a methodology for computing the appropriate length
of the change interval.
o Cycle time is the time taken by a signal to complete one full cycle of indications, i.e. one
complete rotation through all signal indications. It is denoted by C. Consider now how vehicles
depart from an intersection when the green signal is initiated. Figure 41:6 illustrates a group of
N vehicles at a signalized intersection, waiting for the green signal.
o As the signal is initiated, the time interval between two vehicles crossing the curb line,
referred to as the headway, is noted. The first headway is the time interval between the initiation
of the green signal and the instant the first vehicle crosses the curb line.
o The second headway is the time interval between the first and the second vehicle crossing the
curb line. Successive headways are then plotted as in figure 41:7. The first headway will be
relatively long, since it includes the reaction time of the driver and the time necessary to
accelerate. The second headway will be comparatively shorter, because the second driver can overlap
his or her reaction time with that of the first driver.
o After a few vehicles, the headway becomes constant. This constant headway, which characterizes
all headways beginning with the fourth or fifth vehicle, is defined as the saturation headway and
is denoted by h.

6. Snapshots of the simulation

7. Final Output of the simulation

Works To Be Done In The Next Evaluation

1. Adding emergency handling for ambulances, etc.


2. Real-time application at a major city traffic junction, and a final comparison between the static
model and our model

References

[1] TomTom, 'TomTom World Traffic Index', 2019. [Online]. Available:
https://www.tomtom.com/en_gb/traffic-index/ranking/
[2] Khushi, "Smart Control of Traffic Light System using Image Processing," 2017 International Conference
on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC), Mysore, 2017, pp.
99-103, doi: 10.1109/CTCEEC.2017.8454966.
[3] A. Vogel, I. Oremović, R. Šimić and E. Ivanjko, "Improving Traffic Light Control by Means of Fuzzy
Logic," 2018 International Symposium ELMAR, Zadar, 2018, pp. 51-56, doi:
10.23919/ELMAR.2018.8534692.
[4] A. A. Zaid, Y. Suhweil and M. A. Yaman, "Smart controlling for traffic light time," 2017 IEEE Jordan
Conference on Applied Electrical Engineering and Computing Technologies (AEECT), Aqaba, 2017, pp. 1-5,
doi: 10.1109/AEECT.2017.8257768.
[5] Renjith Soman, "Traffic Light Control and Violation Detection Using Image Processing," IOSR Journal
of Engineering (IOSRJEN), vol. 08, no. 4, 2018, pp. 23-27.
[6] A. Kanungo, A. Sharma and C. Singla, "Smart traffic lights switching and traffic density calculation
using video processing," 2014 Recent Advances in Engineering and Computational Sciences (RAECS),
Chandigarh, 2014, pp. 1-6, doi: 10.1109/RAECS.2014.6799542.
[7] Siddharth Srivastava, Subhadeep Chakraborty, Raj Kamal and Rahil Minocha, "Adaptive traffic light
timer controller," IIT Kanpur, NERD Magazine.
[8] Saili Shinde and Sheetal Jagtap, "Intelligent Traffic Management System: A Review," Vishwakarma
Institute of Technology, IJIRST, 2016.
[9] Open Data Science, 'Overview of the YOLO Object Detection Algorithm', 2018. [Online]. Available:
https://medium.com/@ODSC/overview-of-the-yolo-object-detection-algorithm-7b52a745d3e0
[10] J. Hui, 'Real-time Object Detection with YOLO, YOLOv2 and now YOLOv3', 2018. [Online]. Available:
https://medium.com/@jonathan_hui/real-time-object-detection-with-yolo-yolov2-28b1b93e2088
[11] J. Redmon, 'Darknet: Open Source Neural Networks in C', 2016. [Online]. Available:
https://pjreddie.com/darknet/
[12] Tzutalin, 'LabelImg Annotation Tool', 2015. [Online]. Available: https://github.com/tzutalin/labelImg
[13] Li, Z., Wang, B., and Zhang, J., "Comparative analysis of drivers' start-up time of the first two
vehicles at signalized intersections," J. Adv. Transp., vol. 50, 2016, pp. 228-239, doi: 10.1002/atr.1318.
[14] Arkatkar, Shriniwas, Mitra, Sudeshna and Mathew, Tom, "India," in Global Practices on Road Traffic
Signal Control, ch. 12, pp. 217-242.
[15] 'Pygame Library', 2019. [Online]. Available: https://www.pygame.org/wiki/about
[16] 'Traffic Signal Synchronization'. [Online]. Available: https://www.cityofirvine.org/signal-operations-
maintenance/traffic-signal-synchronization

Verified and approved by:

Dr. Yogeshwar Vijaykumar Navandar


Assistant Professor
Department of Civil Engineering

(Name and dated signature of Supervisor)
