Halimah Badioze Zaman · Peter Robinson
Alan F. Smeaton · Timothy K. Shih
Sergio Velastin · Tada Terutoshi
Azizah Jaafar · Nazlena Mohamad Ali (Eds.)
LNCS 10645
Advances in
Visual Informatics
5th International Visual Informatics Conference, IVIC 2017
Bangi, Malaysia, November 28–30, 2017
Proceedings
Lecture Notes in Computer Science 10645
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zurich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7412
Editors
Halimah Badioze Zaman, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
Peter Robinson, University of Cambridge, Cambridge, UK
Alan F. Smeaton, Dublin City University, Dublin, Ireland
Timothy K. Shih, National Central University, Jhongli, Taiwan
Sergio Velastin, Carlos III University of Madrid, Madrid, Spain
Tada Terutoshi, Toyo University, Kawagoe, Japan
Azizah Jaafar, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
Nazlena Mohamad Ali, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
LNCS Sublibrary: SL6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics
The twenty-first century is a visual century. With the advent of Industry 4.0, or the Fourth
Industrial Revolution (4IR), visual informatics has become a crucial aspect of computer
science. Specifically, it is a multidisciplinary field of computer science, information
technology, and engineering, which integrates areas such as computer vision, image
processing, pattern recognition, computer graphics, simulation, virtual reality, data
visualization and analytics, and cyber security, as well as social computing, applied in
various knowledge domains such as education, medicine and health, finance, agriculture,
and security. The Institute of Visual Informatics (IVI), Universiti Kebangsaan Malaysia
(UKM) – or The National University of Malaysia – is a centre of excellence (CoE)
established as an outcome of the first International Visual Informatics Conference (IVIC)
held in 2009. The institute, which conducts research in the aforementioned basic areas,
offers master's and doctoral (PhD) degree programs by research, as well as short
professional practical certifications, currently in the areas of data science and visual
analytics. The institute has successfully graduated five master's and 28 PhD students
between its inception in 2010 and 2017. We are indeed indebted to the international
fraternity from the last four IVIC conferences (2009, 2011, 2013, and 2015), who have
given us the support that resulted in the establishment of the institute. Our smart
partnerships, through this conference, will not only help the institute to grow but will also
enhance research in this area internationally, which can be applied by private organizations
and governments in the respective countries.
The Visual Informatics Research Group and the Institute of Visual Informatics
(IVI) at UKM once again hosted this 5th International Visual Informatics Conference
(IVIC 2017), with the objective of bringing together experts to discuss and share new
knowledge and ideas in this research area so that more concerted efforts can be
undertaken nationally and globally. Like the previous IVIC conferences, this conference
was conducted collaboratively by the visual informatics fraternity from various
public and private universities and industry from different parts of the world. This fifth
conference was co-sponsored by MyVIC Sdn. Bhd. (a local private company), the ICT
Cluster of the National Council of Professors (MPN), the Malaysian Information
Technology Society (MITS), the Malaysian Research Education Network (MyREN),
the Malaysian Digital Economy Corporation (MDeC), the Malaysian Communications
and Multimedia Commission (MCMC), and the Malaysian Administrative Modernization
Planning Unit (MAMPU). The conference was co-chaired by six professors
from the UK, Ireland, Spain, Japan, Taiwan, and Malaysia.
The theme of the conference, "Visual Informatics: Changing Landscapes in 4IR
Through Data-Driven Decisions," reflects the importance of big data in this fourth
industrial revolution digital economy. It also reflects the belief of the organizers
(both local and global) in the importance of open data and the sharing of big data and
analytics, which would lead to the creation and innovation of new products through
data-driven decisions. The changing landscapes of Industry 4.0 (4IR) require
data-driven decisions to be made based on big data analytics, which would allow for more
comprehensive and accurate visual insights and more precise predictions, resulting in
efficient data-driven decision-making for economic and social good.
Thus, the theme of the conference was relevant, apt, and timely.
The conference focused on four tracks — Visualization and Data-Driven Tech-
nology, Engineering and Data-Driven Innovation, Data-Driven Societal Well-Being
and Applications, and Data-Driven Cyber Security — which lasted for two days
(November 28 and 29, 2017) and ended with a one-day workshop (November 30,
2017). There were five keynote speakers and 68 paper presentations based on topics
covered by the four main tracks. The reviewing of the papers was conducted by experts
representing the Program Committee from Asia, Europe, Oceania, and the USA. Each
paper was reviewed by three reviewers, and the acceptance rate was 51%. The
reviewing process was managed using EasyChair.
The conference also included the first meeting of a national task force on big data
represented by stakeholders from the private sector, academia, and government
agencies in Malaysia. The objective of the task force is to consolidate big data and big
data analytics initiatives, and to help the government in the formulation of policies relating to
big data (BD), big data analytics (BDA), and open data. The conclusions and recommendations
made by the task force will be submitted to the appropriate governing body.
On behalf of the Organizing and Program Committee of IVIC 2017, we thank all
authors for their submissions and camera-ready copies of papers, and all participants
for their thought-provoking ideas and active participation in the conference. We also
thank the vice-chancellor of UKM (host university), and the vice-chancellors and deans
of all IT faculties of the IHLs for their support in organizing this conference. We also
acknowledge the sponsors, members of the Organizing Committees, Program Committee
members, support committees, and individuals who gave their continuous help
and support in making the conference a success. IVIC has grown from strength to
strength and it is our fervent hope that it can one day be held in different host countries
in Asia, Europe, Oceania, the UK or the USA.
The 5th International Visual Informatics Conference (IVIC 2017) was organized by the
Visual Informatics Research Group and Institute of Visual Informatics, Universiti
Kebangsaan Malaysia (UKM), in collaboration with 18 local public and private
universities in Malaysia, the Malaysian Information Technology Society (MITS), the
Multimedia Development Corporation (MDeC), the Malaysian Research Educational
Network (MyREN), and the ICT Cluster of the National Professors’ Council (MPN).
Deputy Chair
Zaharin Yusoff (SunwayUni)
Secretary
Azizah Jaafar (UKM)
Assistant Secretary
Nazlena Mohamad Ali (UKM)
Treasurer
Rabiah Abd. Kadir (UKM)
Assistant Treasurer
Zuraini Zainol (UPNM)
Program Committee
Program Co-chairs
Halimah Badioze Zaman Universiti Kebangsaan Malaysia, Malaysia
Peter Robinson University of Cambridge, UK
Alan F. Smeaton Dublin City University, Ireland
Timothy K. Shih National Central University, Taiwan
Sergio Velastin Universidad Carlos III de Madrid, Madrid, Spain
Tada Terutoshi Toyo University, Japan
Logistics
Riza Sulaiman (UKM) – Head
Mohd Taha Ijab (UKM)
Syed Nasir Syed Zakaria Alsagoff (UPNM)
Muslihah Wook (UPNM)
Ahmad Hanif Ahmad Baharin (UKM)
Ummul Hanan Mohamad (UKM)
Sponsorship
Azlina Ahmad (UKM) – Head
Halimah Badioze Zaman (UKM)
Wan Fatimah Wan Ahmad (UTP)
M. Iqbal Saripan (UPM)
Bahari Belaton (USM)
Ahmad Sufril Azlan Mohamed (USM)
Adriana Md Rizal (IBS/UTM)
Suziah Sulaiman (UTP)
Noor Afiza Mat Razali (UPNM)
Zaharin Yusoff (SunwayUni)
Ahmad Rafi Mohamed Eshaq (MMU)
Workshop
Mohamad Taha Ijab (UKM) – Head
Bahari Belaton (USM)
Wan Fatimah Wan Ahmad (UTP)
Amelia Ritahani Ismail (UIA)
Syed Nasir Alsagoff (UPNM)
Ho Chiung Ching (MMU)
Tour
Azreen Azman (UPM) – Head
Aliimran Nordin (UKM)
Keynote
Face Recognition with Real Time Eye Lid Movement Detection . . . . . . . . . . 352
Syazwan Syafiqah Sukri, Nur Intan Raihana Ruhaiyem,
and Ahmad Sufril Azlan Mohamed
Visual Object Interface Signifier of Museum Application for Large Display . . . 582
Fasihah Mohammad Shuhaili, Suziah Sulaiman, Saipunidzam Mahamad,
and Aliza Sarlan
Vehicle Detection Using Alex Net and Faster R-CNN Deep Learning Models

Abstract. This paper presents a comparative study of two deep learning models
used for vehicle detection. AlexNet and Faster R-CNN are compared through
the analysis of an urban video sequence. Several tests were carried out to evaluate
the quality of detections, failure rates, and the time required to complete the
detection task. The results allow important conclusions to be drawn regarding the
architectures and strategies used for implementing such networks for the task of
video detection, encouraging future research on this topic.
1 Introduction
Since 2010, the annual image recognition challenge known as the ImageNet
Large-Scale Visual Recognition Challenge (ILSVRC) [13] has been dominated by
deep learning approaches.
Several works using deep learning for vehicle detection are reported in the literature.
Earlier approaches relied on 2D Deep Belief Networks (2D-DBN) [14], learning features
by means of this architecture and using a pre-training sparse filtering process [15], or
hybrid architectures (HDNN) that overcome the single-scale feature extraction of
traditional DNNs [16]. Color as a discriminative feature is used in [17] and [18]. There
are also pre-training schemes [19] that obtain competitive results even with low-resolution
images and that can be implemented in real time, as in [20]. More recently, detection and
classification of multiple classes has been performed using integrated models such as
Fast R-CNN and Faster R-CNN [21–26]. There are reports of methods able to recognize
vehicle make and model (MMR) [27, 28], re-identification architectures for urban security
surveillance [29–31], strategies using DBNs [14, 32–34] that work with relatively little
labelled data, and models that are able to classify even the pose or orientation of the
vehicle [23, 35, 36]. In general, most detection and classification models are implemented
using different CNN architectures such as CaffeNet [37, 38], GoogLeNet [39], and
VGGNet [26], used in [27]. AlexNet [40] is used by Su et al. [18] in conjunction with
GoogLeNet [39] and NIN (Network in Network) [41].
Nevertheless, as far as we know, there are no comparative studies of deep learning
strategies for vehicle classification, nor of the use of already-trained CNNs for feature
extraction to perform vehicle discrimination in video sequences.
This work compares the results of a CNN used for feature extraction and an integrated
CNN detection model, both applied to the task of classifying vehicles in video
sequences. The paper is organized as follows: Sect. 2 gives a brief explanation of the
architecture of the convolutional neural networks, explaining the advantage of using an
already-trained network for feature extraction and the benefits of the integrated
CNN model. Section 3 presents the classification approaches, describing the
characteristics of the models built for the video detection task. Section 4 shows the
results of the two models, comparing and explaining them. Section 5 presents the
conclusions and proposes some future work.
In this section, we describe the principal characteristics of the CNN AlexNet and the
Faster R-CNN networks used in this comparative study.
2.1 AlexNet
AlexNet is considered the pioneering CNN work, even though it builds on the earlier
work of Yann LeCun [42]. The AlexNet model was introduced in the paper "ImageNet
Classification with Deep Convolutional Neural Networks", where the authors created a
"large, deep convolutional neural network" used to win the 2012 ILSVRC (ImageNet
Large-Scale Visual Recognition Challenge) [43]. The network was trained on ImageNet
data, with over 15 million annotated images from a total of over 22,000 categories.
The main characteristics of the network include the use of ReLUs for the nonlinearity
functions, which decrease the training time since ReLUs are faster than the conventional
tanh function used in MLPs. For training purposes, the authors used techniques such as
data augmentation, consisting of horizontal reflections, image translations, and patch
extractions. Dropout layers were also included to reduce overfitting. The model was
trained using batch stochastic gradient descent, with specific values for momentum and
weight decay. Training took nearly six days using two GTX 580 GPUs.
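As an illustration of this training setup, the snippet below configures batch SGD with momentum and weight decay in PyTorch. The framework choice and the concrete hyperparameter values (taken from the original AlexNet paper, e.g. momentum 0.9 and weight decay 5e-4) are assumptions for illustration, not details reported in this study.

```python
import torch
import torchvision

# Pre-trained AlexNet from torchvision (an assumption: the paper does not
# state which framework or implementation was used).
model = torchvision.models.alexnet(pretrained=True)

# Batch stochastic gradient descent with momentum and weight decay, as
# described above; the values are illustrative defaults only.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,            # initial learning rate
    momentum=0.9,       # momentum term
    weight_decay=5e-4,  # L2 weight decay
)
```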
then fed into fully connected layers, with a linear classifier and a bounding-box linear
regression module to define regions. This model remains slow at test time. Faster
R-CNN addressed this issue by combining the features of a fully convolutional network to
perform both region proposals and object detection. Since region proposals depend
on features of the image that are already calculated during the forward pass of the CNN
(the first step of classification), the model reuses the same CNN results for region proposals
instead of running a separate selective search algorithm. The region proposal network
(RPN) shares convolutional layers with the object detection network, so only one
CNN needs to be trained and region proposals are obtained almost for free. Additional
convolutional layers are then used to regress region bounds and objectness scores at each
location. The RPN works by moving a sliding window over the CNN feature map and,
at each position, generating k candidate bounding boxes together with scores indicating
how good each of those boxes is expected to be. These k boxes correspond to common
scales and aspect ratios that object candidates may fit, called anchor boxes. For each
anchor box, the RPN outputs a bounding box and a score per position in the image. This
model significantly improves both speed and object detection results.
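As a minimal sketch of the anchor mechanism just described, the snippet below enumerates k anchor boxes per feature-map position for an assumed set of scales and aspect ratios. The specific values (three scales, three ratios, a 16-pixel stride) mirror common Faster R-CNN defaults and are not taken from this paper.

```python
import itertools
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return (feat_h * feat_w * k, 4) anchor boxes as (x1, y1, x2, y2).

    k = len(scales) * len(ratios); `stride` maps feature-map cells back to
    image coordinates. All values here are illustrative defaults.
    """
    anchors = []
    for y, x in itertools.product(range(feat_h), range(feat_w)):
        cx, cy = (x + 0.5) * stride, (y + 0.5) * stride  # window centre
        for scale, ratio in itertools.product(scales, ratios):
            w = scale * np.sqrt(ratio)
            h = scale / np.sqrt(ratio)
            anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors)

# For a 38x50 feature map this yields 38 * 50 * 9 = 17100 candidate boxes,
# which the RPN then scores and regresses.
print(generate_anchors(38, 50).shape)  # (17100, 4)
```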
Besides achieving the highest accuracy on both PASCAL VOC 2007 and 2012,
Faster R-CNN was the basis of more than 125 proposed entries in ImageNet detection and
localization at ILSVRC 2016 [51], and in the COCO challenge 2015 it was the foundation of
the winners in several categories [25]. Figure 2 shows the network structure of the Faster
R-CNN framework. Both the region proposal network and the object classifier share fully
convolutional layers, and these layers are trained jointly. The region proposal network
behaves as an attention director, determining the optimal bounding boxes across a wide
range of scales and using nine candidate anchor shapes to be evaluated for object
classification. In other words, the RPN tells the unified network where to look.
An initial ROI (region of interest) step is implemented, allowing the user to select the
precise area of analysis. This step optimizes performance and speeds up the processes
of detection and classification by reducing the area of analysis in the video sequences.
Two models, AlexNet and Faster R-CNN, were used for vehicle detection and
classification in this research.
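A hedged sketch of how this pipeline could look in code: a user-defined ROI is cropped from each frame and passed to a pre-trained Faster R-CNN detector. The torchvision COCO-pretrained model, the video file name, and the ROI coordinates are placeholders for illustration, not the detector or data actually used in this work.

```python
import cv2
import torch
import torchvision

# COCO-pretrained Faster R-CNN from torchvision (an assumption; the paper's
# own model and training data are not reproduced here).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

cap = cv2.VideoCapture("urban_sequence.avi")   # hypothetical input video
x1, y1, x2, y2 = 100, 200, 900, 700            # user-defined ROI (illustrative)

ret, frame = cap.read()
if ret:
    roi = frame[y1:y2, x1:x2]                  # restrict analysis to the ROI
    rgb = cv2.cvtColor(roi, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = detector([tensor])[0]     # dict with boxes, labels, scores
    keep = detections["scores"] > 0.5          # simple confidence filter
    print(detections["boxes"][keep])
cap.release()
```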
Following the strategy described in [55], using the selected images, the
pre-trained CNN "AlexNet" is used for feature extraction; this technique is
described in detail by Razavian et al. in [56]. For this work, the AlexNet network is
only used to classify four categories. The pre-trained network was used to learn
features of motorcycles, cars, and buses from the extended dataset, with 80
images per category, plus 80 examples of the class "urbTree" created from the urban
environment. In the end, the total number of examples is only 320.
Features are extracted from the training set by propagating images through the network
up to a specific fully connected layer (fc7) and extracting the activation responses to
create a training set of features, which is later used for classification.
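A minimal sketch of this feature-extraction step, assuming a PyTorch/torchvision AlexNet (the paper does not state the implementation used): images are propagated up to the second fully connected layer (fc7) and its activations are collected as 4096-dimensional feature vectors.

```python
import torch
import torchvision

alexnet = torchvision.models.alexnet(pretrained=True)
alexnet.eval()

def fc7_features(batch):
    """Propagate images up to the fc7 layer and return its activations.

    `batch` is an (N, 3, 224, 224) tensor of preprocessed images; the result
    is an (N, 4096) feature matrix. Slicing `classifier` to stop at the
    second fully connected layer is an implementation assumption.
    """
    with torch.no_grad():
        x = alexnet.features(batch)
        x = alexnet.avgpool(x)
        x = torch.flatten(x, 1)
        # classifier[:5] covers dropout-fc6-relu-dropout-fc7, stopping before
        # the final ReLU and the 1000-way output layer.
        x = alexnet.classifier[:5](x)
    return x
```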
For classification, as in [55], a multiclass SVM classifier is trained using the image
features obtained from the CNN. Since the length of the feature vector is 4096, a fast
stochastic gradient descent solver is used as the training algorithm. In this case, the
classifier is trained with only 96 examples (24 per category). The validation set, which
corresponds to the remaining 224 examples (56 per category), is then classified. The
classifier accuracy is then evaluated with the features obtained from this set. Figure 4
shows the results in a confusion matrix. The classifier misclassifies three bus images
as cars; one car image is classified as a bus and another as a motorcycle.
The mean accuracy obtained is 0.978.
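The classification step could be sketched as follows with scikit-learn (an assumed toolkit; the original implementation is not specified): a linear multiclass SVM trained with a stochastic gradient descent solver on the fc7 features, then evaluated on the validation set via a confusion matrix.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# Placeholder arrays: in practice X_train (96, 4096) and X_val (224, 4096)
# would hold the fc7 features produced by the extraction step above, and the
# labels would be {bus, car, motorcycle, urbTree}.
X_train = np.random.rand(96, 4096);  y_train = np.random.randint(0, 4, 96)
X_val   = np.random.rand(224, 4096); y_val   = np.random.randint(0, 4, 224)

# Hinge loss gives a linear SVM; SGD keeps training fast for 4096-d features.
clf = SGDClassifier(loss="hinge", max_iter=1000, tol=1e-3)
clf.fit(X_train, y_train)

pred = clf.predict(X_val)
print(confusion_matrix(y_val, pred))
print("mean accuracy:", accuracy_score(y_val, pred))
```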
Fig. 4. Confusion matrix of the experiments (Class 1: Buses 2: Cars 3: Motorcycles 4: urbTree)
For these experiments, all classes other than bus, car, or motorbike are renamed
"unknown" to obtain comparative metrics for the evaluation of results. As a preliminary
step for video detection and classification, the user defines a ROI within which
detection and classification take place.
Table 1. Rates of Faster R-CNN results for different NMS (non-maximal suppression) thresholds.
CDR: correct detection rate; CDR bikes: CDR for motorcycles; CDR cars: CDR for cars;
CDR buses: CDR for buses or trucks; DFR: detection failure rate; FAR: false alarm rate;
PR: precision; RC: recall; F1: F1-score.
NMS threshold  CDR   CDR bikes  CDR cars  CDR buses  DFR   FAR   PR    RC    F1
0.30 0.75 0.26 0.83 0.33 0.25 0.38 0.62 0.75 0.68
0.40 0.73 0.21 0.81 0.32 0.27 0.31 0.69 0.73 0.71
0.50 0.72 0.18 0.80 0.33 0.28 0.22 0.78 0.72 0.75
0.60 0.70 0.13 0.78 0.29 0.30 0.16 0.84 0.70 0.76
0.70 0.65 0.09 0.74 0.23 0.35 0.13 0.87 0.65 0.75
0.80 0.61 0.06 0.70 0.07 0.39 0.10 0.90 0.61 0.73
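For reference, the NMS threshold swept in Table 1 controls how much overlap between detections is tolerated before the lower-scoring one is discarded. The routine below is a standard greedy non-maximal suppression written from its textbook definition, not code from the authors.

```python
import numpy as np

def nms(boxes, scores, iou_threshold):
    """Greedy non-maximal suppression.

    boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,) confidences.
    Keeps the highest-scoring box, discards any remaining box whose IoU with
    it exceeds `iou_threshold`, then repeats on the survivors.
    """
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current best box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_threshold]
    return keep
```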
Meanwhile, for the AlexNet classifier used in conjunction with GMM background
subtraction, results are reported in terms of the parameters of the background
subtraction algorithm. First, the history parameter is evaluated against a fixed
Mahalanobis distance of 128. The history corresponds to the number of frames (LoH) that
constitute the training set for the background model. The best results are obtained for a
history of 500 frames (F1 = 0.57). Fixing this number, we then proceed to change the
Mahalanobis distance parameter (Tg). This parameter is a threshold on the squared
Mahalanobis distance that helps decide when a sample is close to the existing
components. A smaller Tg value generates more components, while a higher Tg value may
result in a small number of components, but they can grow too large. The best result is
obtained with an LoH of 500 and a Tg of 20, achieving a CDR of 0.66 with a FAR of 0.32
(Fig. 7 and Tables 2, 3).
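A minimal sketch of this background-subtraction configuration, assuming OpenCV's MOG2 implementation of the GMM (the paper does not name the library): the `history` argument corresponds to LoH and `varThreshold` plays the role of the squared Mahalanobis distance threshold Tg, here set to the best-performing values reported above.

```python
import cv2

# GMM background subtractor with the best-performing parameters reported
# above: a 500-frame history and a distance threshold of 20.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=20,
                                                detectShadows=True)

cap = cv2.VideoCapture("urban_sequence.avi")   # hypothetical input video
while True:
    ret, frame = cap.read()
    if not ret:
        break
    fg_mask = subtractor.apply(frame)          # foreground (moving) pixels
    # Connected foreground blobs would then be cropped and passed to the
    # AlexNet + SVM classifier described earlier.
cap.release()
```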
Table 2. Rates of AlexNet+GMM results for different background-model history lengths (LoH).
CDR: correct detection rate; CDR bikes: CDR for motorcycles; CDR cars: CDR for cars;
CDR buses: CDR for buses or trucks; DFR: detection failure rate; FAR: false alarm rate;
MR: merge rate; PR: precision; RC: recall; F1: F1-score.
LoH  CDR   CDR bikes  CDR cars  CDR buses  DFR   FAR   MR    PR    RC    F1
100 0.32 0.45 0.29 0.24 0.66 0.43 0.02 0.57 0.34 0.41
200 0.46 0.60 0.41 0.47 0.51 0.36 0.02 0.64 0.49 0.54
300 0.50 0.57 0.45 0.68 0.47 0.33 0.01 0.67 0.53 0.57
400 0.51 0.53 0.46 0.68 0.46 0.36 0.01 0.64 0.54 0.57
500 0.52 0.53 0.47 0.66 0.44 0.36 0.01 0.64 0.56 0.58
600 0.51 0.51 0.47 0.67 0.45 0.38 0.01 0.62 0.55 0.56
Table 3. Rates of AlexNet+GMM results for different Mahalanobis distance thresholds (Tg),
with LoH fixed at 500. CDR: correct detection rate; CDR bikes: CDR for motorcycles;
CDR cars: CDR for cars; CDR buses: CDR for buses or trucks; DFR: detection failure rate;
FAR: false alarm rate; MR: merge rate; PR: precision; RC: recall; F1: F1-score.
Tg   CDR   CDR bikes  CDR cars  CDR buses  DFR   FAR   MR    PR    RC    F1
16 0.64 0.50 0.66 0.22 0.36 0.42 0.08 0.58 0.64 0.61
20 0.66 0.50 0.68 0.24 0.33 0.38 0.08 0.62 0.67 0.64
32 0.68 0.53 0.69 0.29 0.31 0.33 0.07 0.67 0.69 0.67
40 0.67 0.53 0.67 0.32 0.32 0.31 0.06 0.69 0.68 0.68
48 0.66 0.54 0.65 0.37 0.33 0.31 0.06 0.69 0.67 0.67
64 0.64 0.55 0.62 0.45 0.35 0.30 0.05 0.70 0.65 0.66
The results show that Faster R-CNN outperforms the AlexNet+GMM model, not only in
correct detection rate while producing fewer false detections, but also in analysis time.
Both models were evaluated on a Windows 10 machine with a 7th-generation Core i7 at
4.7 GHz and 32 GB of RAM, using an NVIDIA Titan X (Pascal) 1531 MHz GPU;
Faster R-CNN achieved close to real-time performance (40 ms per frame), while
AlexNet+GMM took almost 100 ms per frame.
This paper has compared the performance of two deep learning models for vehicle
detection and classification in urban video sequences. Although the AlexNet model is
used for feature extraction on an ad-hoc set of examples oriented to urban scenarios, the
pre-trained Faster R-CNN model achieves better results in correct detections according
to the F1-score measure. It is important to remark that the Faster R-CNN model does not
use any dynamic attributes for vehicle detection, whereas GMM background subtraction
is used in the AlexNet model. In fact, as the merge rate (MR) results show, GMM
background subtraction still has issues with stationary vehicles and occluded scenarios.
In Faster R-CNN, the results of the RPN component could be improved by providing some
urban context information, such as restrictions on the size of the regions.
For future work, we intend to improve the results of the RPN component of the
Faster R-CNN model by enriching it with traffic context information, and to improve the
classification component with feature extraction using a deep architecture such as AlexNet,
ZF, or VGG, over a wider set of urban road user classes (e.g., trucks, vans, cyclists,
pedestrians).
Acknowledgments. S.A. Velastin is grateful for funding received from the Universidad
Carlos III de Madrid, the European Union's Seventh Framework Programme for research,
technological development and demonstration under grant agreement no. 600371, the Ministerio
de Economía y Competitividad (COFUND2013-51509), and Banco Santander. The authors wish to
thank Dr. Fei Yin for the code for the metrics employed in the evaluations. Finally, we gratefully
acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this
research. The data and code used for this work are available upon request from the authors.
References
1. Tsai, L.W., Hsieh, J.W., Fan, K.C.: Vehicle detection using normalized color and edge
map. IEEE Trans. Image Process. 16(3), 850–864 (2007)
2. Ma, X., Grimson, W.E.L.: Edge-based rich representation for vehicle classification. In: 10th
IEEE International Conference on Computer Vision (ICCV 2005), vol. 1–2, pp. 1185–1192
(2005)
3. Buch, N., Orwell, J., Velastin, S.A.: 3D extended histogram of oriented gradients (3DHOG)
for classification of road users in urban scenes (2009)
4. Feris, R.S., et al.: Large-scale vehicle detection, indexing, and search in urban surveillance
videos. IEEE Trans. Multimed. 14(1), 28–42 (2012)
5. Chen, Z., Ellis, T.: Multi-shape descriptor vehicle classification for urban traffic. In: 2011
International Conference on Digital Image Computing Techniques and Applications
(DICTA), pp. 456–461 (2011)
6. Chen, Z., Ellis, T., Velastin, S.A.: Vehicle detection, tracking and classification in urban
traffic. In: 2012 15th International IEEE Conference on Intelligent Transportation Systems,
pp. 951–956 (2012)
7. Gupte, S., Masoud, O., Martin, R.F., Papanikolopoulos, N.P.: Detection and classification of
vehicles. IEEE Trans. Intell. Transp. Syst. 3(1), 37–47 (2002)
8. Cucchiara, R., Piccardi, M., Mello, P.: Image analysis and rule-based reasoning for a traffic
monitoring system. IEEE Trans. Intell. Transp. Syst. 1(2), 119–130 (2000)
9. Messelodi, S., Modena, C.M., Zanin, M.: A computer vision system for the detection and
classification of vehicles at urban road intersections. Pattern Anal. Appl. 8(1–2), 17–31
(2005)
10. Huang, C.-L., Liao, W.-C.: A vision-based vehicle identification system. In: Proceedings of
17th International Conference on Pattern Recognition, ICPR 2004, vol. 4, pp. 364–367
(2004)
11. Ottlik, A., Nagel, H.-H.: Initialization of model-based vehicle tracking in video sequences of
inner-city intersections. Int. J. Comput. Vis. 80(2), 211–225 (2008)