

DAWN: Vehicle Detection in Adverse Weather Nature

Mourad A. Kenk, Mahmoud Hassaballah

arXiv:2008.05402v1 [cs.CV] 12 Aug 2020

Manuscript received Feb. 27, 2020; Accepted: XXX, Published: XXXX. This paper was recommended by Associate Editor XYZ. (Corresponding author: Mourad A. Kenk)
Mourad A. Kenk is with the Department of Mathematics, Faculty of Science, South Valley University, Qena 83523, Egypt (e-mail: [email protected]).
Mahmoud Hassaballah is with the Department of Computer Science, Faculty of Computers and Information, South Valley University, Qena 83523, Egypt (e-mail: [email protected]).

Abstract—Recently, self-driving vehicles have been introduced with several automated features, including lane-keeping assistance, queuing assistance in traffic jams, parking assistance, and crash avoidance. These self-driving vehicles and intelligent visual traffic surveillance systems mainly depend on cameras and sensor fusion systems. Adverse weather conditions such as heavy fog, rain, snow, and sandstorms are considered dangerous restrictions on the functionality of cameras, seriously impacting the performance of the computer vision algorithms adopted for scene understanding (i.e., vehicle detection, tracking, and recognition in traffic scenes). For example, reflections coming from rain flow and ice over roads could cause massive detection errors that affect the performance of intelligent visual traffic systems. Additionally, scene understanding and vehicle detection algorithms are mostly evaluated using datasets containing certain types of synthetic images plus a few real-world images. Thus, it is uncertain how these algorithms would perform on unclear images acquired in the wild and how the progress of these algorithms is standardized in the field. To this end, we present a new dataset (benchmark) called DAWN, consisting of real-world images collected under various adverse weather conditions. This dataset emphasizes a diverse traffic environment (urban, highway, and freeway) as well as a rich variety of traffic flow. The DAWN dataset comprises a collection of 1,000 images from real traffic environments, divided into four sets of weather conditions: fog, snow, rain, and sandstorms. The dataset is annotated with object bounding boxes for autonomous driving and video surveillance scenarios. This data helps in interpreting the effects caused by adverse weather conditions on the performance of vehicle detection systems.

Keywords—Vehicle Detection, Intelligent Transportation Systems, Autonomous Vehicles, Self-Driving Vehicles, Visual Surveillance, Vehicle Dataset, Vehicles in Adverse Weather, Vehicles in Poor Weather.

I. INTRODUCTION

The efficiency of vehicle detection is a critical step in traffic monitoring and intelligent visual surveillance in general [1, 2]. Recently, the evolution of sensors and GPUs, along with deep learning algorithms, has concentrated research on autonomous and self-driving applications based on artificial intelligence, which has become a trend [3]. Autonomous vehicles must precisely detect traffic objects (e.g., cars, cyclists, traffic lights, etc.) in real time to make the right control decisions and ensure the required safety [4]. To detect such objects, diverse sensors such as cameras and light detection and ranging (LiDAR) are commonly utilized in autonomous vehicles. Among these various types of sensors, the quality of camera images is strongly affected by adverse weather conditions such as heavy fog, sleeting rain, snowstorms, dust blasts, and low-light conditions. Consequently, visibility becomes insufficient for accurately detecting vehicles on the roads, which leads to traffic accidents. Clear visibility can be reached by developing efficient image enhancement methods that obtain a good visual appearance or discriminative features. Thus, providing detection systems with clear images can improve the performance of vehicle detection and tracking in intelligent visual surveillance systems and autonomous vehicle applications [5–7].

Recently, the computer vision community has introduced different vehicle detection approaches [8]. In particular, deep learning based traffic object detection using camera sensors has become more significant in autonomous vehicles because it achieves high detection accuracy; consequently, it has become a substantial method in self-driving applications [9]. Two essential conditions should be satisfied by the detector: real-time detection is necessary for an active echo of the vehicle's controllers, and high detection accuracy for traffic objects is mandatory, which had not been investigated under adverse weather conditions before.

Although these methods have achieved fast detection with high efficiency, they could not improve the detection accuracy [10, 11]. Lately, object detectors based on CNN models that integrate various strategies have been widely studied to take advantage of both types of deep learning categories and to compensate for their particular drawbacks. CFENet [12], a one-stage detector, used an extensive feature enhancement strategy based on SSD to increase the detection accuracy. RefineDet [13], another one-stage detector, improves the detection accuracy by using an anchor refinement strategy and an object detection module. RFBNet [14] applies a receptive field block to improve the detection accuracy. However, even under hard lighting conditions without the presence of adverse weather, and with an input image resolution of 512 × 512 or higher, these detectors have been unable to achieve a real-time detection speed above 30 frames per second, as reported in previous studies [9, 11–13]. Real-time detection is a requirement for traffic monitoring and self-driving applications under adverse weather conditions. Although a real-time detection speed is achieved in [14], it is hard to employ in adverse weather conditions because of its low detection accuracy. This indicates that the previous strategies are insufficient in terms of the trade-off between detection accuracy and detection time, which restricts their usage in applications with adverse weather conditions. This shortcoming can confuse the judgment of accurate vehicle detection, reduce the efficiency of vehicle detection under adverse weather conditions, and lead to traffic accidents. In other words, it is extremely important to employ a vehicle detector with high detection accuracy and to consider this factor along with the real-time detection speed, in order to reduce the false alarms of the detected bounding boxes and to allow enough time to improve the visibility in the traffic environment under adverse weather conditions, thus preventing traffic accidents.
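To make the 30 frames-per-second criterion concrete, a per-frame latency check can be scripted in a few lines. The sketch below is illustrative only: the off-the-shelf torchvision detector, the dummy 512 × 512 input, and the run counts are placeholder assumptions, not the detectors or protocol evaluated in the studies cited above.

```python
import time
import torch
import torchvision

# Any off-the-shelf detector serves as a stand-in here; random weights
# are fine because only latency is being measured, not accuracy.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
model.eval()

frame = [torch.rand(3, 512, 512)]  # dummy 512x512 RGB frame, batch of one

with torch.no_grad():
    for _ in range(3):        # warm-up runs to exclude one-time setup costs
        model(frame)
    n_runs = 20
    start = time.perf_counter()
    for _ in range(n_runs):
        model(frame)
    latency = (time.perf_counter() - start) / n_runs

# A detector counts as real time in the sense used above only if this
# figure stays above 30 FPS on the target hardware.
print(f"average latency: {latency * 1000:.1f} ms -> {1.0 / latency:.1f} FPS")
```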
The available vehicle datasets in the literature still do not address sufficiently challenging adverse weather conditions. Table I summarizes the available vehicle datasets, where the datasets are collected by traffic surveillance cameras (TSC), on-road vehicle cameras (OVC), web search (Web), or drone cameras. On the other hand, there are no generic datasets for the different adverse weather conditions, such as the combination of nasty winter weather, sleeting rain, and dust storms.

TABLE I: Summary of available vehicle datasets in the literature. Lighting variations as L, Occlusion as O, Crowded as C.

Dataset              Mode   No. Image  Video    Test   Train  L  O  C
UA-DETRAC [20]       TSC               10 Hour  ×      ×      ×  ×
TME Motorway [21]    OVC               28 Clip  ×      ×      ×
KITTI [22]           OVC    ×                   7518   7481   ×  ×
Stanford car [23]    web    16,185              50-50  50-50
PASCAL VOC [24]      web    ×          ×        ×      ×      ×
Rain&Snow [25]       TSC               22 Clip  ×      ×
Cityscape [26]       OVC    25,000     ×        ×      ×      ×  ×
Mapillary [27]       OVC    25,000     ×        ×      ×      ×  ×
BDD100K [28]         OVC    100,000    ×        ×      ×      ×  ×
ApolloScape [29]     OVC    143,906    ×        ×      ×      ×  ×
Stanford Drone [30]  drone  ×          ×

For instance, Sakaridis et al. [15] proposed a convolutional neural network (CNN) based model to generate synthetic fog on real vehicle images in order to investigate defogging algorithms in traffic environments. Hodges et al. [16] manipulated the dehazing model with a dehazing network to reform the full image and a discriminator network to fine-tune the enhancement weight parameters, increasing vehicle detection performance on a dataset of synthetic foggy/hazy images. Li et al. [17] presented a benchmark including both synthetic and real-world rainy images with several rain types to investigate deraining algorithms in traffic monitoring scenes and vehicle detection. Uzun et al. [18] implemented cycle-spinning with generative adversarial networks (GAN) for raindrop removal in outdoor surveillance systems and investigated object detection performance on the Raindrop dataset [19].
However, these methods are mainly evaluated on rendered synthetic fog/rain images and a few real images, assuming a specific fog/rain model. It is thus unclear how these algorithms would perform under various adverse weather conditions and how their progress could be measured in the wild.

To solve this problem, a new benchmark dataset called DAWN is introduced, consisting of real-world images collected under various adverse weather conditions (e.g., fog, rain, snow, and sandstorms). The collected images provide a diverse traffic environment (e.g., urban, crossroads, motorway, etc.) with various vehicle categories that are annotated for intelligent visual surveillance, traffic monitoring, and self-driving vehicle applications.

II. BACKGROUND

In this section, we present a performance analysis of vehicle detection methods under different adverse weather conditions, adapted to detecting vehicle categories (e.g., car, bus, truck, motorcycle, bicycle) with the presence of a human category (person) for pedestrians and cyclists in traffic scenes.

A. Summary of the available vehicle image datasets in the literature

In this part, we provide an overview of the datasets used for evaluating vehicle detector models, as well as the details of the proposed dataset for vehicle detection in adverse weather.

Fig. 1: Sample images of the KITTI dataset.

1) KITTI dataset [22]: KITTI is the most widely used dataset for on-road vehicle detection and self-driving research. It consists of 7,481 images for training and 7,518 images for testing and includes six classes: car, van, truck, tram, cyclist, and pedestrian. The input image size is 512×512, with 17,607 total ground-truth (GT) bounding boxes. The KITTI dataset considers a traffic environment that covers freeways through rural zones and urban scenes, with lighting variability, in normal weather conditions at daylight only, as shown in Figure 1.
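For reference, KITTI ships one plain-text label file per image in its standard 15-column format (type, truncation, occlusion, alpha, 2D box, 3D dimensions, 3D location, rotation). A minimal parser sketch, keeping only the fields relevant to 2D vehicle detection, could look as follows; the example path is hypothetical.

```python
from pathlib import Path

def load_kitti_labels(label_path):
    """Parse one KITTI object-label file (one object per line)."""
    objects = []
    for line in Path(label_path).read_text().splitlines():
        fields = line.split()
        if not fields:
            continue
        objects.append({
            "type": fields[0],                       # Car, Van, Truck, Tram, Cyclist, Pedestrian, ...
            "truncated": float(fields[1]),           # 0.0 (fully visible) to 1.0 (fully truncated)
            "occluded": int(fields[2]),              # 0..3 occlusion level
            "bbox": tuple(map(float, fields[4:8])),  # left, top, right, bottom in pixels
        })
    return objects

# Example usage (hypothetical path):
# boxes = load_kitti_labels("training/label_2/000042.txt")
```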
Fig. 2: Sample images for vehicles in the MS-COCO dataset.

2) The MS-COCO dataset [31]: MS-COCO poses a more challenging scene understanding task than the KITTI dataset and is often used by the current state-of-the-art deep learning models. MS-COCO includes a large scale of complex scenes annotated for 80 classes, where the traffic objects and environment scenes address general settings in normal weather situations, as shown in Figure 2.

In these two datasets, the traffic scenes mostly address normal weather conditions. Moreover, we clarify that the top-performing methods for vehicle detection and visual scene understanding do not completely apprehend the difficulty and variability of poor real-world weather conditions. For more details on scene understanding datasets, we refer the readers to [32]. The disparity between traffic images in the DAWN dataset and the state-of-the-art datasets (Rain&Snow and BDD) is compared and shown in Figure 3. The DAWN dataset includes extreme levels of weather conditions and a wide variation of traffic environments. In addition, the dataset is annotated with object bounding boxes for autonomous driving and video surveillance scenarios.
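As a concrete illustration, the vehicle-relevant subset of MS-COCO can be pulled out with the official pycocotools API; the annotation-file path below is an assumption about the local setup.

```python
from pycocotools.coco import COCO

# Local COCO annotation file (path is an assumption).
coco = COCO("annotations/instances_val2017.json")

# COCO category names matching the vehicle/person classes considered here.
names = ["car", "bus", "truck", "motorcycle", "bicycle", "person"]
cat_ids = coco.getCatIds(catNms=names)

# Images containing at least one car, and their vehicle/person boxes.
img_ids = coco.getImgIds(catIds=coco.getCatIds(catNms=["car"]))
ann_ids = coco.getAnnIds(imgIds=img_ids[:1], catIds=cat_ids, iscrowd=None)
for ann in coco.loadAnns(ann_ids):
    x, y, w, h = ann["bbox"]  # COCO boxes are [x, y, width, height]
    print(coco.loadCats([ann["category_id"]])[0]["name"], (x, y, w, h))
```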
Fig. 3: Sample traffic images for comparison with the state-of-the-art datasets (Rain&Snow and BDD).

3) DAWN dataset: To the best of our knowledge, few datasets address the problem of adverse weather conditions, and they do so through certain types of synthetic weather in images plus a few real-world images. For instance, Sakaridis et al. [15] proposed two datasets, synthetic Foggy Cityscapes and Foggy Driving, to investigate vehicle detection and defogging algorithms in traffic environments with 8 classes. Li et al. [17] introduced a benchmark to evaluate deraining algorithms in the traffic scene, consisting of rain-in-driving and rain-in-surveillance sets; it comprises synthetic and real rainy environments of 2,495 and 2,048 images, respectively. There remains a need for a dataset of real-world images addressing the shortcomings of the aforementioned datasets with respect to imaging in bad weather conditions. Currently, it is uncertain how deep learning algorithms would perform in the wild under the influence of cross-generalization across adverse weather conditions, and how the progress of these algorithms can be standardized and applied safely in ITS applications. To this end, we introduce a novel dataset of real-world images collected under various adverse weather conditions, which we call "DAWN: Detection in Adverse Weather Nature". It is designed to support research in ITS applications for safety opportunities. The unique characteristics of the DAWN dataset give researchers a chance to examine aspects of vehicle detection that have not been examined before in the literature, as well as issues that are of key importance for autonomous vehicle technology and ITS safety applications.

The goal of the DAWN dataset is to investigate the performance of vehicle detection and classification methods on a wide range of natural images of traffic scenes under the cross-generalization of adverse weather conditions; the images are divided into four categories according to the weather (i.e., fog, snow, rain, and sand). The DAWN dataset contains significant variation in terms of vehicle category, size, orientation, pose, illumination, position, and occlusion. Moreover, this dataset exhibits a systematic bias toward traffic scenes during nasty winter weather, heavy snowfall, sleeting rain, hazardous weather, and sand and dust storms. Sample images from the DAWN dataset are shown in Figure 4. To ensure an accurate evaluation, the traffic scenes are comprehensive, covering normally moving and congested traffic on combined motorways, highways, urban roads, and intersections, collected from several countries to cover the weather changes of different regions of the world. Annotations of the vehicles are consistent, accurate, and exhaustive for the vehicle classes (car, bus, truck, motorcycle, and bicycle), with the presence of humans as cyclists and pedestrians. Examples of annotations in the DAWN dataset are illustrated in Figure 5.

Images in the DAWN dataset were collected through the Google and Bing search engines during a visual search over a list of query keywords (including: foggy, haze, mist, nasty winter weather, blustery weather, heavy snow hits, sleet rain, sandstorm, duststorm, hazardous weather, adverse weather, traffic, motorway, vehicle). The candidate images were then filtered and selected by a human in the loop. The candidate images for each situation in DAWN must respect the corresponding terms of use of Google, Bing, and Flickr, where the license types include 'Free to share and use'. This dataset contains a collection of 1,000 images from real traffic environments, divided into four primary subsets according to the weather situation: Fog, Rain, Snow, and Sand. Finally, this dataset is annotated using LabelMe [33] with five types of vehicles plus a person class for cyclists/pedestrians, giving 7,845 total ground-truth (GT) bounding boxes, including car (82.21%), bus (2.05%), truck (8.22%), motorcycles + bicycles (1.36%), and person (6.07%), as reported by the charts shown in Figure 6.
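As an illustration of how the per-class statistics reported in Figure 6 can be recomputed from the LabelMe annotations, the sketch below assumes one .json file per image stored under Fog/Rain/Snow/Sand subfolders; this folder layout is our assumption for the example, not a documented requirement.

```python
import json
from collections import Counter
from pathlib import Path

def dawn_class_histogram(dawn_root):
    """Count ground-truth boxes per class in each DAWN weather subset."""
    per_subset = {}
    for subset in ["Fog", "Rain", "Snow", "Sand"]:  # assumed folder names
        counts = Counter()
        for ann_file in (Path(dawn_root) / subset).glob("*.json"):
            data = json.loads(ann_file.read_text())
            for shape in data.get("shapes", []):
                # LabelMe rectangles store two corner points per box.
                if shape.get("shape_type", "rectangle") == "rectangle":
                    counts[shape["label"]] += 1
        per_subset[subset] = counts
    return per_subset

# Example usage (hypothetical path):
# print(dawn_class_histogram("DAWN"))
```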
Fig. 4: Sample images of the DAWN dataset illustrating four cases of adverse weather conditions.

Fig. 5: Examples of annotations in the DAWN dataset. The dataset is annotated using LabelMe [33] into 7,845 total bounding boxes of five vehicle types (car, bus, truck, motorcycle, and bicycle) plus person for cyclists/pedestrians.
Fig. 6: Statistics of the DAWN dataset ground truth (GT): bounding-box counts per weather subset (Rain, Snow, Fog, Sand) for (a) vehicles (bus, truck, car) and (b) person, motorcycle, and bicycle.

III. CONCLUSION

In this paper, we proposed a novel dataset (called DAWN) for vehicle detection in adverse weather conditions, including heavy fog, rain, snow, and sandstorms. The unique characteristics of the new dataset, DAWN, give researchers a chance to examine aspects of vehicle detection that have not been examined before in the literature, as well as issues that are of key importance for autonomous vehicle technology and ITS safety applications.

ACKNOWLEDGMENT

The authors would like to thank the providers of the vehicle datasets.

REFERENCES

[1] Yue Zhang, Bin Song, Xiaojiang Du, and Mohsen Guizani, "Vehicle tracking using surveillance with multimodal data fusion," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 7, pp. 2353–2361, 2018.
[2] Kaiqi Liu, Wenguang Wang, Ratnasingham Tharmarasa, and Jun Wang, "Dynamic vehicle detection with sparse point clouds based on PE-CPD," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 5, pp. 1964–1977, 2019.
[3] Long Chen, Qin Zou, Ziyu Pan, Danyu Lai, Liwei Zhu, Zhoufan Hou, Jun Wang, and Dongpu Cao, "Surrounding vehicle detection using an FPGA panoramic camera and deep CNNs," IEEE Transactions on Intelligent Transportation Systems, 2019.
[4] Bichen Wu, Forrest Iandola, Peter H. Jin, and Kurt Keutzer, "SqueezeDet: Unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving," in IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017, pp. 129–137.
[5] Y. Cho, J. Jeong, and A. Kim, "Model-assisted multiband fusion for single image enhancement and applications to robot vision," IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 2822–2829, 2018.
[6] H. Kuang, X. Zhang, Y. Li, L. L. H. Chan, and H. Yan, "Nighttime vehicle detection based on bio-inspired image enhancement and weighted score-level feature fusion," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 4, pp. 927–936, 2017.
[7] Tianyang Dong, Guoqing Zhao, Jiamin Wu, Yang Ye, and Ying Shen, "Efficient traffic video dehazing using adaptive dark channel prior and spatial–temporal correlations," Sensors, vol. 19, no. 7, p. 1593, 2019.
[8] Weidong Min, Mengdan Fan, Xiaoguang Guo, and Qing Han, "A new approach to track multiple vehicles with the combination of robust detection and two classifiers," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 1, pp. 174–186, 2018.
[9] X. Hu, X. Xu, Y. Xiao, H. Chen, S. He, J. Qin, and P. Heng, "SINet: A scale-insensitive convolutional neural network for fast vehicle detection," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 3, pp. 1010–1019, 2019.
[10] M. Hassaballah, Mourad A. Kenk, and Ibrahim M. El-Henawy, "Local binary pattern-based on-road vehicle detection in urban traffic scene," Pattern Analysis and Applications, pp. 1–17, 2020.
[11] Zhaowei Cai, Quanfu Fan, Rogerio S. Feris, and Nuno Vasconcelos, "A unified multi-scale deep convolutional neural network for fast object detection," in European Conference on Computer Vision, 2016, pp. 354–370.
[12] Qijie Zhao, Yongtao Wang, Tao Sheng, and Zhi Tang, "Comprehensive feature enhancement module for single-shot object detector," in Asian Conference on Computer Vision, 2018, pp. 325–340.
[13] Shifeng Zhang, Longyin Wen, Xiao Bian, Zhen Lei, and Stan Z. Li, "Single-shot refinement neural network for object detection," in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4203–4212.
[14] Songtao Liu, Di Huang, et al., "Receptive field block net for accurate and fast object detection," in European Conference on Computer Vision, 2018, pp. 385–400.
[15] Christos Sakaridis, Dengxin Dai, and Luc Van Gool, "Semantic foggy scene understanding with synthetic data," International Journal of Computer Vision, vol. 126, no. 9, pp. 973–992, 2018.
[16] Cameron Hodges, Mohammed Bennamoun, and Hossein Rahmani, "Single image dehazing using deep neural networks," Pattern Recognition Letters, vol. 128, pp. 70–77, 2019.
[17] Siyuan Li, Iago Breno Araujo, Wenqi Ren, Zhangyang Wang, Eric K. Tokuda, Roberto Hirata Junior, Roberto Cesar-Junior, Jiawan Zhang, Xiaojie Guo, and Xiaochun Cao, "Single image deraining: A comprehensive benchmark analysis," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3838–3847.
[18] Ülkü Uzun and Alptekin Temizel, "Cycle-spinning GAN for raindrop removal from images," in 16th IEEE International Conference on Advanced Video and Signal Based Surveillance, 2019, pp. 1–6.
[19] Rui Qian, Robby T. Tan, Wenhan Yang, Jiajun Su, and Jiaying Liu, "Attentive generative adversarial network for raindrop removal from a single image," in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2482–2491.
[20] Longyin Wen, Dawei Du, Zhaowei Cai, Zhen Lei, Ming-Ching Chang, Honggang Qi, Jongwoo Lim, Ming-Hsuan Yang, and Siwei Lyu, "UA-DETRAC: A new benchmark and protocol for multi-object detection and tracking," Computer Vision and Image Understanding, vol. 193, 2020.
[21] Claudio Caraffi, Tomas Vojir, Jura Trefny, Jan Sochman, and Jiri Matas, "A system for real-time detection and tracking of vehicles from a single car-mounted camera," in ITS Conference, 2012, pp. 975–982.
[22] Andreas Geiger, Philip Lenz, and Raquel Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3354–3361.
[23] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei, "3D object representations for fine-grained categorization," in IEEE International Conference on Computer Vision Workshops, 2013, pp. 554–561.
[24] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results," http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html.
[25] Chris H. Bahnsen and Thomas B. Moeslund, "Rain removal in traffic surveillance: Does it matter?," IEEE Transactions on Intelligent Transportation Systems, pp. 1–18, 2018.
[26] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele, "The Cityscapes dataset for semantic urban scene understanding," in IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[27] Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and Peter Kontschieder, "The Mapillary Vistas dataset for semantic understanding of street scenes," in IEEE International Conference on Computer Vision, 2017, pp. 4990–4999.
[28] Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan, and Trevor Darrell, "BDD100K: A diverse driving video database with scalable annotation tooling," arXiv:1805.04687, 2018.
[29] Xinyu Huang, Xinjing Cheng, Qichuan Geng, Binbin Cao, Dingfu Zhou, Peng Wang, Yuanqing Lin, and Ruigang Yang, "The ApolloScape dataset for autonomous driving," in IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 954–960.
[30] Alexandre Robicquet, Amir Sadeghian, Alexandre Alahi, and Silvio Savarese, "Learning social etiquette: Human trajectory understanding in crowded scenes," in European Conference on Computer Vision, 2016, pp. 549–565.
[31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick, "Microsoft COCO: Common objects in context," in European Conference on Computer Vision, 2014, pp. 740–755.
[32] Shervin Minaee, Yuri Boykov, Fatih Porikli, Antonio Plaza, Nasser Kehtarnavaz, and Demetri Terzopoulos, "Image segmentation using deep learning: A survey," arXiv:2001.05566, 2020.
[33] Bryan C. Russell, Antonio Torralba, Kevin P. Murphy, and William T. Freeman, "LabelMe: A database and web-based tool for image annotation," International Journal of Computer Vision, vol. 77, no. 1-3, pp. 157–173, 2008.

Mourad A. Kenk received his MSc degree in Computer Science in 2015 from the Faculty of Science, South Valley University, Egypt. He is an assistant lecturer at the Department of Mathematics, Faculty of Science, South Valley University, Egypt, and is currently working toward his Ph.D. degree in Computer Science at the same faculty. His research interests include computer vision and robotics, especially object detection/tracking, pose estimation, and human-aware visual navigation for autonomous robots. In 2017, he joined the Electrotechnics and Automatics Research Group (GREAH), Normandy University, Le Havre, France, as a researcher. For two years, he worked on logistics robotics to develop a picking application in logistics warehouses and an intelligent visual surveillance system for mobile robots. He has served as a reviewer for the Pattern Recognition Letters and Machine Vision and Applications journals.

Mahmoud Hassaballah received his BSc degree in mathematics in 1997 and his MSc degree in computer science in 2003, both from South Valley University, Egypt, and his Doctor of Engineering (DEng) in computer science from Ehime University, Japan, in 2011. He is currently an associate professor of computer science at the Faculty of Computers and Information, South Valley University, Egypt. He serves as a reviewer for several journals, such as IEEE Transactions on Image Processing, IEEE Transactions on Fuzzy Systems, Pattern Recognition, Pattern Recognition Letters, IET Image Processing, IET Computer Vision, IET Biometrics, Journal of Real-Time Image Processing, and Journal of Electronic Imaging. He has published 5 books and over 50 research papers in refereed journals and conferences. His research interests include feature extraction, object detection/recognition, artificial intelligence, biometrics, image processing, computer vision, machine learning, and data hiding.
