
Real-Time Accident Detection in Traffic Surveillance Using Deep Learning


Hadi Ghahremannezhad, Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA. Email: [email protected]
Hang Shi, Innovative AI Technologies, Newark, NJ 07103, USA. Email: [email protected]
Chengjun Liu, Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA. Email: [email protected]

arXiv:2208.06461v1 [cs.CV] 12 Aug 2022

Abstract—Automatic detection of traffic accidents is an important emerging topic in traffic monitoring systems. Nowadays many urban intersections are equipped with surveillance cameras connected to traffic management systems. Therefore, computer vision techniques can be viable tools for automatic accident detection. This paper presents a new efficient framework for accident detection at intersections for traffic surveillance applications. The proposed framework consists of three hierarchical steps, including efficient and accurate object detection based on the state-of-the-art YOLOv4 method, object tracking based on a Kalman filter coupled with the Hungarian algorithm for association, and accident detection by trajectory conflict analysis. A new cost function is applied for object association to accommodate occlusion, overlapping objects, and shape changes in the object tracking step. The object trajectories are analyzed in terms of velocity, angle, and distance in order to detect different types of trajectory conflicts, including vehicle-to-vehicle, vehicle-to-pedestrian, and vehicle-to-bicycle. Experimental results using real traffic video data show the feasibility of the proposed method in real-time applications of traffic surveillance. In particular, trajectory conflicts, including near-accidents and accidents occurring at urban intersections, are detected with a low false alarm rate and a high detection rate. The robustness of the proposed framework is evaluated using video sequences collected from YouTube with diverse illumination conditions. The dataset is publicly available at: http://github.com/hadi-ghnd/AccidentDetection.
I. INTRODUCTION

One of the main problems in urban traffic management is the conflicts and accidents occurring at the intersections. Drivers caught in a dilemma zone may decide to accelerate at the time of phase change from green to yellow, which in turn may induce rear-end and angle crashes. Additionally, despite all the efforts in preventing hazardous driving behaviors, running the red light is still common. Other dangerous behaviors, such as sudden lane changing and unpredictable pedestrian/cyclist movements at the intersection, may also arise due to the nature of traffic control systems or intersection geometry. Timely detection of such trajectory conflicts is necessary for devising countermeasures to mitigate their potential harms.

Currently, most traffic management systems monitor the traffic surveillance cameras through manual perception of the captured footage. Since most intersections are equipped with surveillance cameras, automatic detection of traffic accidents based on computer vision technologies will mean a great deal to traffic monitoring systems. Numerous studies have applied computer vision techniques in traffic surveillance systems [1]–[10] for various tasks. Automatic detection of traffic incidents not only saves a great deal of unnecessary manual labor, but the spontaneous feedback also helps the paramedics and emergency ambulances dispatch in a timely fashion. An automatic accident detection framework provides useful information for adjusting intersection signal operation and modifying intersection geometry in order to defuse severe traffic crashes.

Considering the applicability of our method in real-time edge-computing systems, we apply the efficient and accurate YOLOv4 [11] method for object detection. The second step is to track the movements of all interesting objects that are present in the scene to monitor their motion patterns. A new set of dissimilarity measures is designed and used by the Hungarian algorithm [12] for object association, coupled with the Kalman filter approach [13] for smoothing the trajectories and predicting missed objects. The third step in the framework involves motion analysis and applying heuristics to detect different types of trajectory conflicts that can lead to accidents. The moving direction and speed of road-user pairs that are close to each other are examined based on their trajectories in order to detect anomalies that can cause them to crash. Figure 1 illustrates the system architecture of our proposed accident detection framework.

The layout of this paper is as follows. In section II, the major steps of the proposed accident detection framework, including object detection (section II-A), object tracking (section II-B), and accident detection (section II-C), are discussed. Section III provides details about the collected dataset and experimental results, and the paper is concluded in section IV.
[Figure 1: pipeline diagram — input video frame → object detection → object tracking → accident detection → output video frame. Accident detection branch: if more than one road-user is present, calculate the Euclidean distance between each pair of objects; if the distance is below a threshold, analyze the trajectories of the object pair; if a trajectory conflict is found, report an accident and keep track of the road-users involved.]

Fig. 1. The system architecture of our proposed accident detection framework.

II. METHODOLOGY

This section provides details about the three major steps in the proposed accident detection framework. These steps involve detecting interesting road-users by applying the state-of-the-art YOLOv4 [11] method with a pre-trained model based on deep convolutional neural networks, tracking the movements of the detected road-users using the Kalman filter approach, and monitoring their trajectories to analyze their motion behaviors and detect hazardous abnormalities that can lead to mild or severe crashes. The proposed framework is purposely designed with efficient algorithms in order to be applicable in real-time traffic monitoring systems.

A. Road-User Detection

As in most image and video analytics systems, the first step is to locate the objects of interest in the scene. Since here we are also interested in the category of the objects, we employ a state-of-the-art object detection method, namely YOLOv4 [11], to locate and classify the road-users at each video frame. The family of YOLO-based deep learning methods demonstrates the best compromise between efficiency and performance among object detectors.

The first version of the You Only Look Once (YOLO) deep learning method was introduced in 2015 [14]. The main idea of this method is to divide the input image into an S × S grid, where each grid cell is either considered as background or used for detecting an object. A predefined number (B) of bounding boxes and their corresponding confidence scores are generated for each cell. The intersection over union (IOU) of the ground truth and the predicted boxes is multiplied by the probability of each object to compute the confidence scores. In later versions of YOLO [15], [16], multiple modifications have been made in order to improve the detection performance while decreasing the computational complexity of the method. Although there are online implementations such as YOLOX [17], the latest official version of the YOLO family is YOLOv4 [11], which improves upon the performance of the previous methods in terms of speed and mean average precision (mAP). As illustrated in Fig. 2, the architecture of this version of YOLO is constructed with a CSPDarknet53 model as the backbone network for feature extraction, followed by a neck and a head part. The neck refers to the path aggregation network (PANet) and spatial attention module, and the head is the dense prediction block used for bounding box localization and classification. This architecture is further enhanced by additional techniques referred to as bag of freebies and bag of specials.

Here, we have applied the YOLOv4 [11] model pre-trained on the MS COCO dataset [18] for the task of object detection. Although the model is pre-trained on a dataset with different visual characteristics in terms of object sizes and viewing angles, YOLOv4 proved to generalize well to images with an overhead perspective. We are interested in trajectory conflicts among the most common road-users at regular urban intersections, namely, vehicles, pedestrians, and cyclists.
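As a concrete illustration (not part of the paper's own description), a pre-trained YOLOv4 detector of this kind can be run with OpenCV's DNN module roughly as sketched below. The configuration/weight file names, input size, and thresholds are assumptions rather than values reported in the paper.

import cv2

# Load a YOLOv4 model pre-trained on MS COCO (file names assumed; the
# official Darknet release provides yolov4.cfg and yolov4.weights).
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

# COCO class ids for the road-users of interest:
# 0 = person, 1 = bicycle, 2 = car, 3 = motorbike, 5 = bus, 7 = truck.
ROAD_USER_CLASSES = {0, 1, 2, 3, 5, 7}

def detect_road_users(frame, conf_thr=0.5, nms_thr=0.4):
    """Return [(class_id, score, (x, y, w, h)), ...] for road-users only."""
    class_ids, scores, boxes = model.detect(frame, conf_thr, nms_thr)
    return [(int(c), float(s), tuple(b))
            for c, s, b in zip(class_ids, scores, boxes)
            if int(c) in ROAD_USER_CLASSES]

Filtering to the road-user classes keeps the downstream tracking and conflict analysis restricted to vehicles, pedestrians, and cyclists.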
B. Road-User Tracking

Multiple object tracking (MOT) has been intensively studied over the past decades [19] due to its importance in video analytics applications. Here we employ a simple but effective tracking strategy similar to that of the Simple Online and Realtime Tracking (SORT) approach [20]. The Hungarian algorithm [12] is used to associate the detected bounding boxes from frame to frame. Additionally, the Kalman filter approach [13] is used as the estimation model to predict the future location of each detected object based on its current location for better association, smoothing trajectories, and predicting missed tracks.
[Figure 2: YOLOv4 architecture diagram — Backbone: CSPDarknet53 stages C1–C5; Neck: SPP block (MaxPool 5, 9, 13 with CBL×3 layers) followed by PANet (Concat + CBL×5 blocks connected through CBL + UpSampling and DownSampling paths); Head: dense prediction.]

Fig. 2. Architecture of the YOLOv4 model with three major components.

The inter-frame displacement of each detected object is estimated by a linear velocity model. The state of each target in the Kalman filter tracking approach is presented as follows:

o_i^t = [x_i, y_i, s_i, r_i, \dot{x}_i, \dot{y}_i, \dot{s}_i]   (1)

where x_i and y_i represent the horizontal and vertical locations of the bounding box center, s_i and r_i represent the bounding box scale and aspect ratio, and \dot{x}_i, \dot{y}_i, \dot{s}_i are the velocities of the parameters x_i, y_i, s_i of object o_i at frame t, respectively. The velocity components are updated when a detection is associated to a target. Otherwise, in case of no association, the state is predicted based on the linear velocity model.
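To make the linear velocity model concrete, the following is a minimal constant-velocity Kalman filter sketch over the state of Eq. (1); the class name and the covariance initializations are illustrative assumptions, not values from the paper.

import numpy as np

class KalmanBoxTracker:
    """Constant-velocity Kalman filter over the SORT-style state
    [x, y, s, r, x_dot, y_dot, s_dot] of Eq. (1); r is assumed constant."""

    def __init__(self, x, y, s, r):
        self.state = np.array([x, y, s, r, 0.0, 0.0, 0.0])
        # Transition F: x += x_dot, y += y_dot, s += s_dot each frame.
        self.F = np.eye(7)
        self.F[0, 4] = self.F[1, 5] = self.F[2, 6] = 1.0
        self.H = np.eye(4, 7)          # we observe [x, y, s, r] only
        self.P = np.eye(7) * 10.0      # state covariance (assumed)
        self.Q = np.eye(7) * 0.01      # process noise (assumed)
        self.R = np.eye(4) * 1.0       # measurement noise (assumed)

    def predict(self):
        """Propagate with the linear velocity model (no associated detection)."""
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:4]

    def update(self, z):
        """Correct the state with an associated detection z = [x, y, s, r]."""
        y = np.asarray(z) - self.H @ self.state          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(7) - K @ self.H) @ self.P

Calling predict() alone on frames with no association realizes the "predicted based on the linear velocity model" behavior described above.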
Considering two adjacent video frames t and t + 1, we will have two sets of objects detected at each frame as follows:

O^t = \{o_1^t, o_2^t, \dots, o_n^t\}, \qquad O^{t+1} = \{o_1^{t+1}, o_2^{t+1}, \dots, o_m^{t+1}\}   (2)

Every object o_i in set O^t is paired with an object o_j in set O^{t+1} that can minimize the cost function C(o_i, o_j). The index i ∈ [N] = {1, 2, ..., N} denotes the objects detected at the previous frame, and the index j ∈ [M] = {1, 2, ..., M} represents the new objects detected at the current frame.

In order to efficiently solve the data association problem despite challenging scenarios, such as occlusion, false positive or false negative results from the object detection, overlapping objects, and shape changes, we design a dissimilarity cost function that employs a number of heuristic cues, including appearance, size, intersection over union (IOU), and position. The appearance distance is calculated based on the histogram correlation between an object o_i and a detection o_j as follows:

C_{i,j}^A = 1 - \frac{\sum_b \left(H_b(o_i) - \bar{H}(o_i)\right)\left(H_b(o_j) - \bar{H}(o_j)\right)}{\sqrt{\sum_b \left(H_b(o_i) - \bar{H}(o_i)\right)^2 \sum_b \left(H_b(o_j) - \bar{H}(o_j)\right)^2}}   (3)

where C_{i,j}^A is a value between 0 and 1, b is the bin index, H_b is the histogram of an object in the RGB color-space, and \bar{H} is computed as follows:

\bar{H}(o_k) = \frac{1}{B} \sum_b H_b(o_k)   (4)

in which B is the total number of bins in the histogram of an object o_k.

The size dissimilarity is calculated based on the width and height information of the objects:

C_{i,j}^S = \frac{1}{2}\left(\frac{|h_i - h_j|}{h_i + h_j} + \frac{|w_i - w_j|}{w_i + w_j}\right)   (5)

where w and h denote the width and height of the object bounding box, respectively. The more different the bounding boxes of object o_i and detection o_j are in size, the more C_{i,j}^S approaches one. The position dissimilarity is computed in a similar way:

C_{i,j}^P = \frac{1}{2}\left(\frac{|x_i - x_j|}{x_i + x_j} + \frac{|y_i - y_j|}{y_i + y_j}\right)   (6)

where the value of C_{i,j}^P is between 0 and 1, approaching 1 as the object o_i and detection o_j get farther apart. In addition to the mentioned dissimilarity measures, we also use the IOU value to calculate the Jaccard distance as follows:

C_{i,j}^K = 1 - \frac{|Box(o_i) \cap Box(o_j)|}{|Box(o_i) \cup Box(o_j)|}   (7)

where Box(o_k) denotes the set of pixels contained in the bounding box of object k.

The overall dissimilarity value is calculated as a weighted sum of the four measures:

C_{i,j} = w_a C_{i,j}^A + w_s C_{i,j}^S + w_p C_{i,j}^P + w_k C_{i,j}^K   (8)

in which w_a, w_s, w_p, and w_k define the contribution of each dissimilarity value in the total cost function.
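For illustration, the combined cost of Eq. (8) could be assembled into a cost matrix and solved with the Hungarian algorithm along the following lines. The equal weights and the threshold value are placeholder assumptions (the paper does not report its tuned values), and SciPy's linear_sum_assignment is used as the Hungarian solver.

import numpy as np
from dataclasses import dataclass
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm [12]

@dataclass
class Obs:
    """A track prediction or a detection: box center, size, color histogram."""
    x: float
    y: float
    w: float
    h: float
    hist: np.ndarray  # RGB histogram with B bins, for Eqs. (3)-(4)

def jaccard_distance(a, b):
    """Eq. (7): 1 - IOU of two center-format boxes."""
    ax0, ay0 = a.x - a.w / 2, a.y - a.h / 2
    bx0, by0 = b.x - b.w / 2, b.y - b.h / 2
    iw = max(0.0, min(ax0 + a.w, bx0 + b.w) - max(ax0, bx0))
    ih = max(0.0, min(ay0 + a.h, by0 + b.h) - max(ay0, by0))
    inter = iw * ih
    union = a.w * a.h + b.w * b.h - inter
    return 1.0 - inter / union if union > 0 else 1.0

def total_cost(t, d, wa=0.25, ws=0.25, wp=0.25, wk=0.25):
    """Eq. (8): weighted sum of the four dissimilarities (weights assumed)."""
    ca = 1.0 - np.corrcoef(t.hist, d.hist)[0, 1]          # Eqs. (3)-(4)
    cs = 0.5 * (abs(t.h - d.h) / (t.h + d.h)
                + abs(t.w - d.w) / (t.w + d.w))           # Eq. (5)
    cp = 0.5 * (abs(t.x - d.x) / (t.x + d.x)
                + abs(t.y - d.y) / (t.y + d.y))           # Eq. (6)
    return wa * ca + ws * cs + wp * cp + wk * jaccard_distance(t, d)

def associate(tracks, detections, tau_d=0.7):
    """Match tracks to detections; costs above tau_d start new tracks."""
    if not tracks or not detections:
        return [], list(range(len(detections)))
    cost = np.array([[total_cost(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= tau_d]
    unmatched = sorted(set(range(len(detections))) - {c for _, c in matches})
    return matches, unmatched

Note that 1 minus the Pearson correlation of the histograms in total_cost is exactly the appearance term of Eq. (3).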
The total cost function is used by the Hungarian algorithm [12] to assign the detected objects at the current frame to the existing tracks. If the dissimilarity between a matched detection and track is above a certain threshold (τ_d), the detected object is initiated as a new track.

C. Accident Detection

In this section, details about the heuristics used to detect conflicts between a pair of road-users are presented. The conflicts among road-users do not always end in crashes; however, near-accident situations are also of importance to traffic management systems, as they can indicate flaws associated with the signal control system and/or intersection geometry. Logging and analyzing trajectory conflicts, including severe crashes, mild accidents, and near-accident situations, will help decision-makers improve the safety of urban intersections.

The most common road-users involved in conflicts at intersections are vehicles, pedestrians, and cyclists [22]. Therefore, for this study we focus on the motion patterns of these three major road-users to detect the time and location of trajectory conflicts.

First, the Euclidean distances among all object pairs are calculated in order to identify the objects that are closer than a threshold to each other. These object pairs can potentially engage in a conflict, and they are therefore chosen for further analysis. The recent motion patterns of each pair of close objects are examined in terms of speed and moving direction.

As there may be imperfections in the previous steps, especially in the object detection step, analyzing only two successive frames may lead to inaccurate results. Therefore, a predefined number f of consecutive video frames is used to estimate the speed of each road-user individually. The average bounding box centers associated to each track at the first half and second half of the f frames are computed. The two averaged points p and q are transformed to the real-world coordinates using the inverse of the homography matrix H^{-1}, which is calculated during camera calibration [23] by selecting a number of points on the frame and their corresponding locations on Google Maps [24]. The distance in kilometers can then be calculated by applying the haversine formula [25] as follows:

h = \sin^2\left(\frac{\varphi_q - \varphi_p}{2}\right) + \cos\varphi_p \cdot \cos\varphi_q \cdot \sin^2\left(\frac{\lambda_q - \lambda_p}{2}\right), \qquad d_h(p, q) = 2r \arcsin\left(\sqrt{h}\right)   (9)

where \varphi_p and \varphi_q are the latitudes, \lambda_p and \lambda_q are the longitudes of the first and second averaged points p and q, respectively, h is the haversine of the central angle between the two points, r ≈ 6371 kilometers is the radius of the earth, and d_h(p, q) is the distance between the points p and q on the real-world plane in kilometers. The speed S of the tracked vehicle can then be estimated as follows:

S = \frac{d_h(p, q) \times 3600 \times fps}{f}   (10)

where fps denotes the frames read per second and S is the estimated vehicle speed in kilometers per hour. Note that if the locations of the bounding box centers among the f frames do not have a sizable change (more than a threshold), the object is considered to be slow-moving or stalled and is not involved in the speed calculations.

[Figure 3: workflow diagram — video frames from scene i → object detection and object tracking → bounding boxes → 2D coordinates; camera calibration → H matrix → camera coordinates; Google Maps latitude/longitude → projected virtual grids on the ground plane; haversine formula → distance in km → speed estimation.]

Fig. 3. The workflow of the speed estimation method demonstrated on a scene from the NVIDIA AI City Challenge 2022 dataset [21].
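A sketch of this speed estimation pipeline, Eqs. (9)-(10), might look as follows. The inverse homography H_inv is assumed to be pre-computed from the camera calibration above and to map pixels directly to latitude/longitude, and the stall threshold min_px_move is an assumed value.

import math
import numpy as np

R_EARTH_KM = 6371.0  # Earth radius used in Eq. (9)

def pixel_to_latlon(point_xy, H_inv):
    """Map a pixel (x, y) to (lat, lon) with the inverse homography H^-1."""
    x, y = point_xy
    u, v, w = H_inv @ np.array([x, y, 1.0])
    return u / w, v / w

def haversine_km(p, q):
    """Eq. (9): great-circle distance between two (lat, lon) points in km."""
    phi_p, lam_p = map(math.radians, p)
    phi_q, lam_q = map(math.radians, q)
    h = (math.sin((phi_q - phi_p) / 2) ** 2
         + math.cos(phi_p) * math.cos(phi_q)
         * math.sin((lam_q - lam_p) / 2) ** 2)
    return 2 * R_EARTH_KM * math.asin(math.sqrt(h))

def estimate_speed_kmh(centers_px, H_inv, fps, min_px_move=2.0):
    """Eq. (10): speed from the bounding-box centers of the last f frames.

    The centers are split into two halves, each half is averaged, and the
    two averaged points p and q are projected to geographic coordinates."""
    f = len(centers_px)
    first = np.mean(centers_px[: f // 2], axis=0)
    second = np.mean(centers_px[f // 2 :], axis=0)
    if np.linalg.norm(second - first) < min_px_move:
        return 0.0  # slow-moving or stalled: excluded from speed calculation
    p = pixel_to_latlon(first, H_inv)
    q = pixel_to_latlon(second, H_inv)
    return haversine_km(p, q) * 3600.0 * fps / f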
Another factor to account for in the detection of accidents and near-accidents is the angle of collision. Traffic accidents include different scenarios, such as rear-end, side-impact, single-car, vehicle rollover, or head-on collisions, each of which contains specific characteristics and motion patterns. Accordingly, our focus is on the side-impact collisions at the intersection area, where two or more road-users collide at a considerable angle. The bounding box centers of each road-user are extracted at two points: (i) when they are first observed and (ii) at the time of conflict with another road-user. Then the approaching angle of a pair of road-users a and b is calculated as follows:

m_a = \frac{y_a^t - y_a^{t'}}{x_a^t - x_a^{t'}}, \qquad m_b = \frac{y_b^t - y_b^{t''}}{x_b^t - x_b^{t''}}, \qquad \theta = \arctan\left(\frac{m_a - m_b}{1 + m_a m_b}\right)   (11)

where θ denotes the estimated approaching angle, m_a and m_b are the general moving slopes of the road-users a and b with respect to the origin of the video frame, x_a^t, y_a^t, x_b^t, y_b^t represent the center coordinates of the road-users a and b at the current frame, x_a^{t'} and y_a^{t'} are the center coordinates of object a when first observed, and x_b^{t''} and y_b^{t''} are the center coordinates of object b when first observed, respectively.

If the bounding boxes of the object pair overlap each other or are closer than a threshold, the two objects are considered to be close. The trajectories of each pair of close road-users are analyzed with the purpose of detecting possible anomalies that can lead to accidents.
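Eq. (11) translates almost directly into code; the epsilon guard against vertical trajectories (zero horizontal displacement) is an implementation assumption.

import math

def approaching_angle(track_a, track_b, eps=1e-6):
    """Eq. (11): angle between the general moving directions of two road-users.

    Each track is a list of bounding-box centers (x, y); the slope is taken
    between the first observed center and the current one."""
    def slope(track):
        (x0, y0), (xt, yt) = track[0], track[-1]
        return (yt - y0) / (xt - x0 + eps)   # eps avoids division by zero

    m_a, m_b = slope(track_a), slope(track_b)
    theta = math.atan((m_a - m_b) / (1.0 + m_a * m_b + eps))
    return abs(math.degrees(theta))

# Example: two road-users moving roughly perpendicularly,
# approaching_angle([(10, 10), (50, 12)], [(40, 0), (42, 60)])  # ≈ 85 degrees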
The variations in the calculated magnitudes of the velocity vectors of each approaching pair of objects that have met the distance and angle conditions are analyzed to check for signs that indicate anomalies in the speed and acceleration. If the pair of approaching road-users move at a substantial speed towards the point of trajectory intersection during the previous f frames, and the speed of one or both shows a sudden drop at the most recent frames, a trajectory conflict is reported. Trajectory conflicts involve near-accident and accident occurrences and include three types, namely, vehicle-to-vehicle (V2V), vehicle-to-pedestrian (V2P), and vehicle-to-bicycle (V2B).

Fig. 4. Vehicle-to-Vehicle (V2V) traffic accidents at intersections detected by our proposed framework. The red circles indicate the location of the incidents.
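Combining the distance, angle, and sudden-speed-drop conditions, the conflict test could be sketched as follows; all thresholds, and the split of the f frames into an earlier and a more recent half, are illustrative assumptions rather than the paper's tuned values.

import numpy as np

def trajectory_conflict(speeds_a, speeds_b, distance_px, angle_deg,
                        dist_thr=50.0, angle_thr=30.0,
                        min_speed_kmh=10.0, drop_ratio=0.5):
    """Report a conflict for a close, angled pair whose speed suddenly drops.

    speeds_a / speeds_b: per-frame speed estimates (km/h) over the last
    f frames for road-users a and b."""
    if distance_px > dist_thr or angle_deg < angle_thr:
        return False  # not close enough, or not at a considerable angle

    for speeds in (speeds_a, speeds_b):
        s = np.asarray(speeds, dtype=float)
        earlier = s[: len(s) // 2].mean()
        recent = s[len(s) // 2 :].mean()
        # substantial approach speed followed by a sudden drop
        if earlier >= min_speed_kmh and recent <= drop_ratio * earlier:
            return True
    return False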
III. EXPERIMENTS

Due to the lack of a publicly available benchmark for traffic accidents at urban intersections, we collected 29 short videos from YouTube that contain 24 vehicle-to-vehicle (V2V), 2 vehicle-to-bicycle (V2B), and 3 vehicle-to-pedestrian (V2P) trajectory conflict cases. The dataset includes day-time and night-time videos of various challenging weather and illumination conditions. Each video clip includes a few seconds before and after a trajectory conflict. The spatial resolution of the videos used in our experiments is 1280 × 720 pixels with a frame-rate of 30 frames per second. We used a desktop with a 3.4 GHz processor, 16 GB RAM, and an Nvidia GTX-745 GPU to implement our proposed method. The average processing speed is 35 frames per second (fps), which is feasible for real-time applications.

The results are evaluated by calculating the Detection Rate (DR) and False Alarm Rate (FAR) as metrics:

DR = \frac{\text{detected conflict cases}}{\text{total number of conflicts}}, \qquad FAR = \frac{\text{number of false alarms}}{\text{total number of conflicts}}   (12)
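In code, Eq. (12) is a direct ratio; with the 29 collected conflict cases, 27 detected conflicts and 2 false alarms reproduce the reported rates below.

def detection_rate(detected_conflicts, total_conflicts):
    """Eq. (12): fraction of true conflicts that were detected, in percent."""
    return 100.0 * detected_conflicts / total_conflicts

def false_alarm_rate(false_alarms, total_conflicts):
    """Eq. (12): false alarms relative to the total conflicts, in percent."""
    return 100.0 * false_alarms / total_conflicts

print(detection_rate(27, 29))    # ≈ 93.1
print(false_alarm_rate(2, 29))   # ≈ 6.9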
The proposed framework achieved a Detection Rate of 93.10% and a False Alarm Rate of 6.89%. The performance is compared to other representative methods in Table I.

TABLE I
PERFORMANCE COMPARISON WITH OTHER REPRESENTATIVE ACCIDENT DETECTION METHODS.

Methods             Num. of videos   DR %   FAR %
Ki et al. [26]      1                63     6
Singh et al. [27]   7                77.5   22.5
Ijjina et al. [28]  45               71     0.53
Wang et al. [29]    –                92.5   7.5
Pawar et al. [30]   7                79     20.5
Proposed method     29               93.1   6.89

The object detection and object tracking modules are implemented asynchronously to speed up the calculations. The trajectory conflicts are detected and reported in real-time with only 2 instances of false alarms, which is an acceptable rate considering the imperfections in the detection and tracking results. Our framework is able to immediately report the occurrence of trajectory conflicts, along with the types of the road-users involved. Additionally, it keeps track of the location of the involved road-users after the conflict has happened. Figure 4 shows sample accident detection results by our framework given videos containing vehicle-to-vehicle (V2V) side-impact collisions. Furthermore, Figure 5 contains samples of other types of incidents detected by our framework, including near-accidents, vehicle-to-bicycle (V2B), and vehicle-to-pedestrian (V2P) conflicts.

IV. CONCLUSION

In this paper, a new framework is presented for automatic detection of accidents and near-accidents at traffic intersections.
The framework integrates three major modules, including object detection based on the YOLOv4 method, a tracking method based on the Kalman filter and Hungarian algorithm with a new cost function, and an accident detection module to analyze the extracted trajectories for anomaly detection. The robust tracking method accounts for challenging situations, such as occlusion, overlapping objects, and shape changes, in tracking the objects of interest and recording their trajectories. The trajectories are further analyzed to monitor the motion patterns of the detected road-users in terms of location, speed, and moving direction. Different heuristic cues are considered in the motion analysis in order to detect anomalies that can lead to traffic accidents. A dataset of various traffic videos containing accident or near-accident scenarios is collected to test the performance of the proposed framework against real videos. Experimental evaluations demonstrate the feasibility of our method in real-time applications of traffic management.

Fig. 5. Different types of conflicts detected at the intersections. (a) Vehicle-to-Vehicle (V2V) near-accident, (b) Vehicle-to-Bicycle (V2B) near-accident, (c) and (d) Vehicle-to-Pedestrian (V2P) accident.

REFERENCES

[1] H. Shi and C. Liu, "A new foreground segmentation method for video analysis in different color spaces," in 24th International Conference on Pattern Recognition, IEEE, 2018.
[2] G. Liu, H. Shi, A. Kiani, A. Khreishah, J. Lee, N. Ansari, C. Liu, and M. M. Yousef, "Smart traffic monitoring system using computer vision and edge computing," IEEE Transactions on Intelligent Transportation Systems, 2021.
[3] H. Ghahremannezhad, H. Shi, and C. Liu, "Automatic road detection in traffic videos," in 2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), pp. 777–784, IEEE, 2020.
[4] H. Ghahremannezhad, H. Shi, and C. Liu, "A new adaptive bidirectional region-of-interest detection method for intelligent traffic video analysis," in 2020 IEEE Third International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), pp. 17–24, IEEE, 2020.
[5] H. Ghahremannezhad, H. Shi, and C. Liu, "Robust road region extraction in video under various illumination and weather conditions," in 2020 IEEE 4th International Conference on Image Processing, Applications and Systems (IPAS), pp. 186–191, IEEE, 2020.
[6] H. Shi, H. Ghahremannezhad, and C. Liu, "A statistical modeling method for road recognition in traffic video analytics," in 2020 11th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), pp. 000097–000102, IEEE, 2020.
[7] H. Ghahremannezhad, H. Shi, and C. Liu, "A real time accident detection framework for traffic video analysis," in Machine Learning and Data Mining in Pattern Recognition, MLDM, pp. 77–92, ibai publishing, Leipzig, 2020.
[8] M. O. Faruque, H. Ghahremannezhad, and C. Liu, "Vehicle classification in video using deep learning," in Machine Learning and Data Mining in Pattern Recognition, MLDM, pp. 117–131, ibai publishing, Leipzig, 2019.
[9] H. Ghahremannezhad, H. Shi, and C. Liu, "A new online approach for moving cast shadow suppression in traffic videos," in 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), pp. 3034–3039, IEEE, 2021.
[10] H. Shi, H. Ghahremannezhad, and C. Liu, "Anomalous driving detection for traffic surveillance video analysis," in 2021 IEEE International Conference on Imaging Systems and Techniques (IST), pp. 1–6, IEEE, 2021.
[11] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "Yolov4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
[12] H. W. Kuhn, "The Hungarian method for the assignment problem," Naval Research Logistics Quarterly, vol. 2, no. 1-2, pp. 83–97, 1955.
[13] R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[14] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788, 2016.
[15] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," arXiv preprint arXiv:1612.08242, 2017.
[16] J. Redmon and A. Farhadi, "Yolov3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[17] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "Yolox: Exceeding yolo series in 2021," arXiv preprint arXiv:2107.08430, 2021.
[18] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in European Conference on Computer Vision, pp. 740–755, Springer, 2014.
[19] W. Luo, J. Xing, A. Milan, X. Zhang, W. Liu, and T.-K. Kim, "Multiple object tracking: A literature review," Artificial Intelligence, vol. 293, p. 103448, 2021.
[20] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, "Simple online and realtime tracking," in 2016 IEEE International Conference on Image Processing (ICIP), pp. 3464–3468, IEEE, 2016.
[21] "NVIDIA AI City Challenge – data and evaluation." https://www.aicitychallenge.org/2022-data-and-evaluation/. Accessed: 2022-04-27.
[22] L. Yue, M. Abdel-Aty, Y. Wu, O. Zheng, and J. Yuan, "In-depth approach for identifying crash causation patterns and its implications for pedestrian crash prevention," Journal of Safety Research, vol. 73, pp. 119–132, 2020.
[23] Z. Tang, G. Wang, H. Xiao, A. Zheng, and J.-N. Hwang, "Single-camera and inter-camera vehicle tracking and 3D speed estimation based on fusion of visual and semantic features," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 108–115, 2018.
[24] "Google Maps." https://www.google.com/maps/. Accessed: 2022-04-27.
[25] K. Gade, "A non-singular horizontal position representation," The Journal of Navigation, vol. 63, no. 3, pp. 395–417, 2010.
[26] Y.-K. Ki and D.-Y. Lee, "A traffic accident recording and reporting model at intersections," IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 2, pp. 188–194, 2007.
[27] D. Singh and C. K. Mohan, "Deep spatio-temporal representation for detection of road accidents using stacked autoencoder," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 3, pp. 879–887, 2018.
[28] E. P. Ijjina, D. Chand, S. Gupta, and K. Goutham, "Computer vision-based accident detection in traffic surveillance," in 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), pp. 1–6, IEEE, 2019.
[29] C. Wang, Y. Dai, W. Zhou, and Y. Geng, "A vision-based video crash detection framework for mixed traffic flow environment considering low-visibility condition," Journal of Advanced Transportation, vol. 2020, 2020.
[30] K. Pawar and V. Attar, "Deep learning based detection and localization of road accidents from traffic surveillance videos," ICT Express, 2021.
