Digital Image Processing: Application For Abnormal Incident Detection
A PRESENTATION BY

D. ARAVIND
III/IV B.Tech
Sri Sarathi Institute Of Engineering & Technology
Nuzvid-521201, ANDHRA PRADESH
Ph: 9885501050
Email: aravind_devarapalli@yahoo.co.in

P. NAVEEN
III/IV B.Tech
Sri Sarathi Institute Of Engineering & Technology
Nuzvid-521201, ANDHRA PRADESH
Ph: 9866221733
Email: navin_pasupuleti@yahoo.co.in
Abstract- Intelligent vision systems (IVS) represent an exciting part of modern sensing,
computing, and engineering systems. The principal information source in an IVS is the
image, a two-dimensional representation of a three-dimensional scene. The main
advantage of using IVS is that the information is in a form that can be interpreted
by humans.
Our paper describes an image-processing application for abnormal incident detection,
which can be used in high-security installations, subways, etc. In our work, motion cues
are used to classify dynamic scenes and subsequently allow the detection of abnormal
movements, which may be related to critical situations.
Successive frames are extracted from the video stream and compared. By
subtracting the second image from the first, a difference image is obtained. This is then
segmented to aid error measurement and thresholding. If the threshold is exceeded, the
human operator is alerted so that he or she may take remedial action. Thus, by processing
the input image suitably, our system alerts operators to any abnormal incidents which
might lead to critical situations.
1. INTRODUCTION
In this system, motion is used as the main cue for abnormal incident
detection. The first concern, therefore, is obtaining the required images
from the source. In the circumstances described (subways, high-security installations),
a closed-circuit television system is usually employed.
Ordinary video systems use 25 frames per second. The system
described here uses scene motion information extracted at a rate of 8.33 times per
second. This amounts to capturing one frame out of every three produced by the video
camera system. In practical real-time operation, a hardware block-matching motion
detector is used for frame extraction.
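As a rough offline approximation of this frame extraction, the subsampling can be sketched in MATLAB; the file name and the use of 'VideoReader' are assumptions for illustration, since the real system uses a hardware block-matching motion detector.

    % Minimal sketch: keep one frame out of every three from a 25 fps video,
    % giving roughly 8.33 frames per second for motion analysis.
    % 'platform.avi' is a hypothetical file name.
    v = VideoReader('platform.avi');
    frames = {};                       % captured frames
    idx = 0;
    while hasFrame(v)
        idx = idx + 1;
        f = readFrame(v);
        if mod(idx, 3) == 1            % keep frames 1, 4, 7, ...
            frames{end+1} = f;         %#ok<SAGROW>
        end
    end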
Difference image
The second figure shows a slightly displaced version of the first figure.
The system errors mentioned above must be suppressed. If the direction of
motion is required, it can be found by constructing a cumulative difference image from
a sequence of images. This, however, is not necessary in our system, as the direction of
motion is invariably the same.
Obtaining the difference image is simplified by the MATLAB Image
Processing Toolbox. The input images are read using the 'imread' function and converted
to binary images using the 'im2bw' function, which converts the input image to grayscale
and then thresholds it. The output binary image BW is 0 (black) for all pixels in
the input image with luminance less than a user-defined level and 1 (white) for all other
pixels.
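A minimal sketch of this difference-image step, assuming two hypothetical frame files and an illustrative binarisation level of 0.5; the exclusive-OR of the two binary images is used here as the difference, since it marks pixels that changed in either direction.

    % Read two successive frames and form their binary difference image.
    I1 = imread('frame1.png');          % hypothetical file names
    I2 = imread('frame2.png');
    level = 0.5;                        % user-defined luminance level
    BW1 = im2bw(I1, level);             % 0 = black, 1 = white
    BW2 = im2bw(I2, level);
    D = xor(BW1, BW2);                  % pixels that changed between frames
    imshow(D);                          % display the difference image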
A complete segmentation divides an image into regions that satisfy a homogeneity
criterion: H(Ri) is TRUE for every region Ri, and H(Ri ∪ Rj) is FALSE for any Ri
adjacent to Rj, where S is the total number of regions in an image and H(Ri) is a binary
homogeneity evaluation of region Ri. Resulting regions of the segmented image must be
both homogeneous and maximal, where 'maximal' means that the homogeneity criterion
would not be true after merging a region with any adjacent region.
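The conditions referred to above can be stated compactly; the following is the standard formulation of a complete segmentation, restated here for clarity (R denotes the whole image):

    R1 ∪ R2 ∪ ... ∪ RS = R                        (the regions cover the image)
    Ri ∩ Rj = ∅  for i ≠ j                        (the regions do not overlap)
    H(Ri) = TRUE  for i = 1, ..., S               (each region is homogeneous)
    H(Ri ∪ Rj) = FALSE  for Ri adjacent to Rj     (no two adjacent regions can be merged)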
Region merging starts with an over-segmented image and merges similar or
homogeneous regions to form larger regions until no further merging is possible.
Region splitting is the opposite of region merging: it begins with an under-segmented
image in which the regions are not homogeneous, and the existing regions are
sequentially split until properly homogeneous regions are formed.
2.3. Region growing and segmentation
Our system uses the region-growing segmentation method to divide the
image into regions. In region-growing segmentation, a seed point is first chosen in the
image. The eight neighbours of this pixel are then checked against a specific threshold
condition; if the condition is satisfied, a neighbour is incorporated into the region. This
process is repeated for each of the eight neighbours, and it continues until every pixel
has been checked and the whole image has been segmented into regions.
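A minimal sketch of this 8-neighbour growth, assuming a grayscale image and a simple "intensity close to the seed" condition; the seed coordinates and the tolerance are illustrative values, not taken from the paper.

    % Queue-based 8-neighbour region growing from a single seed point.
    I = im2double(imread('frame1.png'));    % hypothetical frame
    if size(I, 3) == 3, I = rgb2gray(I); end
    seed = [50, 60];                        % assumed seed point [row, col]
    tol  = 0.1;                             % assumed homogeneity tolerance
    [rows, cols] = size(I);
    region  = false(rows, cols);
    visited = false(rows, cols);
    queue   = seed;                         % pixels still to be examined
    seedVal = I(seed(1), seed(2));
    while ~isempty(queue)
        p = queue(1, :);  queue(1, :) = [];
        r = p(1);  c = p(2);
        if visited(r, c), continue; end
        visited(r, c) = true;
        if abs(I(r, c) - seedVal) <= tol    % the threshold condition
            region(r, c) = true;
            for dr = -1:1                   % push the eight neighbours
                for dc = -1:1
                    rr = r + dr;  cc = c + dc;
                    if (dr ~= 0 || dc ~= 0) && rr >= 1 && rr <= rows ...
                            && cc >= 1 && cc <= cols && ~visited(rr, cc)
                        queue(end+1, :) = [rr, cc];   %#ok<AGROW>
                    end
                end
            end
        end
    end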
In our system, the MATLAB function 'bwlabel' performs this region-growing
segmentation by labelling the connected regions of the image. The function accepts the
image to be segmented as input and returns a matrix representing the segmented image
along with the number of segments. It should be noted that the image at this stage of
processing is a binary image with only two levels: black (0) and white (1).
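A short sketch of this labelling step, assuming the binary difference image D from the earlier sketch; 8-connectivity is chosen to match the 8-neighbour growth described above.

    % Label the connected regions of the binary difference image.
    [L, num] = bwlabel(D, 8);       % L: label matrix, num: number of segments
    imshow(label2rgb(L));           % optional: visualise the segments in colour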
[Figure: Difference image]
3.2. Thresholding algorithm
1. Get the total number of segments, k.
2. Repeat steps 3 to 7 for each of the k segments.
3. Scan the matrix to find the k-th segment.
4. Store the column indices of the k-th segment.
5. Find the maximum and minimum index values; subtract to find their difference.
6. If the difference is greater than or equal to 16 pixels, sound an alarm to alert the human
operator.
7. Continue with the next segment.
A MATLAB sketch of this procedure is given below.
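The sketch assumes the label matrix L and segment count num returned by 'bwlabel'; the 16-pixel threshold is taken from the paper, while the alert itself is only indicated with a message.

    % Width thresholding over all labelled segments.
    widthThresh = 16;                        % threshold in pixels
    for k = 1:num
        [~, colIdx] = find(L == k);          % column indices of the k-th segment
        width = max(colIdx) - min(colIdx);   % horizontal extent of the segment
        if width >= widthThresh
            disp(['Segment ', num2str(k), ' exceeds the threshold: alert the operator']);
            % beep;  % or trigger an external alarm in the real system
        end
    end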
An example of how the differences are stored in the form of a column vector is
shown. If any value in the difference matrix is greater than or equal to 16, the human
operator is alerted.
Sample difference matrix
6 20
8 20
13 20
17 20
20 20
20 20
20 20
20 20
20 20
20 18
20 11
In the sample matrix for the difference image shown above, the
threshold of 16 pixels is exceeded and the human operator is alerted.
RESULTS
[Figures: first image, second image, and the resulting difference image]
4. ADVANTAGES:
5. CONCLUSION:
Further, surveillance by humans depends on the quality of the
human operator, and factors such as operator fatigue and negligence may lead to
degradation of performance. These factors can make an intelligent vision system a better
option. The same approach can be extended to related applications, such as systems that
use gait signatures for recognition and in-vehicle video sensors for driver assistance.