
CONTRAST ENHANCEMENT AND BIT PLANE SLICING

To extract blood vessels from the retinal image, contrast enhancement is performed immediately after preprocessing of the input image, since contrast enhancement makes the vessels easier to identify. Here we use the green channel extracted from the original colored image, as the blood vessels are most prominent in the green channel. Contrast enhancement is a process that makes image features stand out more clearly by making optimal use of the colors available on the display or output device. Contrast manipulation changes the range of intensity values in an image in order to increase contrast. Using the default settings, we compare the effectiveness of the following three techniques:

imadjust increases the contrast of the image by mapping the values of the input intensity image to new
values such that, by default, 1% of the data is saturated at low and high intensities of the input data.

histeq performs histogram equalization. It enhances the contrast of images by transforming the values in
an intensity image so that the histogram of the output image approximately matches a specified histogram
(uniform distribution by default).

adapthisteq performs contrast-limited adaptive histogram equalization (CLAHE). Unlike histeq, it operates on small data regions (tiles) rather than on the entire image. Each tile's contrast is enhanced so that the histogram of each output region approximately matches the specified histogram (a uniform distribution by default). The contrast enhancement can be limited in order to avoid amplifying any noise present in the image.
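As a minimal MATLAB sketch (assuming a recent MATLAB with the Image Processing Toolbox; the file name is illustrative, not from the report), the three techniques can be compared side by side on the extracted green channel:

RGB = imread('retina.png');   % illustrative file name, not from the report
G = RGB(:,:,2);               % green channel, where vessels are most prominent
J1 = imadjust(G);             % saturates the bottom and top 1% by default
J2 = histeq(G);               % global histogram equalization
J3 = adapthisteq(G);          % contrast-limited adaptive histogram equalization
montage({G, J1, J2, J3});     % visual side-by-side comparison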

These are the most common contrast enhancement techniques. Histogram equalization changes images substantially: many previously hidden features are exposed, especially debris particles in the image. Unfortunately, the enhancement also over-saturates several areas of the image. To solve this problem, a different transformation would have to be applied to different portions of the image. The contrast-limited adaptive histogram equalization technique implemented in adapthisteq can accomplish this: the algorithm analyzes portions of the image and computes an appropriate transformation for each. A limit on the level of contrast enhancement can also be set, preventing the over-saturation caused by the basic histogram equalization of histeq. In this project of retinal vessel extraction, we first convert the color image to a gray scale image (by green channel extraction) and then apply the pre-processing algorithms.

Contrast enhancement improves the perceptibility of objects in a scene by enhancing the brightness difference between objects and their backgrounds. It is typically performed as a contrast stretch followed by a tonal enhancement, although both can be performed in one step. A contrast stretch improves brightness differences uniformly across the dynamic range of the image, whereas tonal enhancements improve brightness differences in the shadow (dark), midtone (gray), or highlight (bright) regions at the expense of brightness differences in the other regions. Contrast enhancement plays a vital role in image processing by bringing out information that exists within the low dynamic range of a gray level image. To improve the quality of an image, operations such as contrast enhancement and noise reduction or removal are required.

Many published papers have proposed contrast enhancement using the global mean of the entire image and the local mean of 3×3 sub-images; a local mean filter smooths the image by replacing each pixel with the mean of the pixels surrounding it. Because most of the images considered in these papers are degraded by weather, poor lighting, or the acquiring device itself, they suffer from poor contrast. Experimental results based on subjective and objective evaluation have demonstrated that such algorithms can increase the brightness of an input gray level image. Sufficient contrast in an image makes an object distinguishable from other objects and from the background. Contrast enhancement improves the quality of images for a human observer by expanding the dynamic range of the input gray levels. Although a plethora of enhancement techniques has emerged, none of them is universal; each is selective in its application.

Image enhancement techniques have been widely used in many applications of image processing where the subjective quality of images is important for human interpretation. Contrast is an important factor in any subjective evaluation of image quality. Contrast is created by the difference in luminance reflected from two adjacent surfaces; in other words, it is the difference in visual properties that makes an object distinguishable from other objects and the background. In visual perception, contrast is determined by the difference in color and brightness between an object and other objects. Our visual system is more sensitive to contrast than to absolute luminance, which is why we perceive the world similarly despite considerable changes in illumination conditions.

Many algorithms for contrast enhancement have been developed and applied to problems in image processing. Apart from the techniques mentioned above, the power-law transformation can also produce good results for image or contrast enhancement, and it is this power-law transformation that we use in our project. The general form of the power-law (gamma) transformation function is

s = c · r^γ

where ‘s’ and ‘r’ are the output and input pixel values, respectively, and ‘c’ and γ are positive constants. Like the log transformation, power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. For example, with c = 1 and γ = 0.5, a normalized input r = 0.04 maps to s = 0.2, expanding the dark end of the range. However, before applying this contrast enhancement technique, we blur the pre-processed gray scale image. If we then subtract the blurred image from the pre-processed gray scale image, we obtain an image where only the blood vessels are slightly prominent while everything else in the background becomes almost completely dark. After this, we apply the power-law transformation. We have tried two different blurring techniques for the pre-processed gray level image. They are as follows:

Opening and Closing Operations: Opening removes small objects from the foreground of an image (usually taken as the bright pixels), placing them in the background, while closing removes small holes in the foreground, changing small islands of background into foreground. These operations can also be used to find specific shapes in an image. Successive opening and closing operations on the image containing the blood vessels (i.e. the pre-processed gray scale image) blur the image, because the vessels are alternately widened and eroded. However, this does not yield very good results.
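As a hedged sketch of this alternative (the structuring element shape, its size, and the number of passes are assumptions; the report does not specify them), successive opening and closing in MATLAB might look like:

se = strel('disk', 5);        % assumed structuring element
Iblur = I;
for k = 1:3                   % number of opening/closing passes is an assumption
    Iblur = imclose(imopen(Iblur, se), se);
end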

Median filtering using a [21 21] mask: The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of the neighboring entries. The pattern of neighbors is called the "window", which slides, entry by entry, over the entire signal. For 1D signals, the most obvious window is just the first few preceding and following entries, whereas for 2D (or higher-dimensional) data the window must include all entries within a given radius or ellipsoidal region (i.e. the median filter is not a separable filter). The output of median filtering depends on the nature and size of the image as well as on the size of the mask used. For the nature and size of the images considered in this project, a [21 21] mask is found to yield the best blurring of the pre-processed gray scale image (obtained by green channel extraction) of the original colored input retinal image.

Comparing the two blurring processes above (successive opening and closing, and median filtering with a [21 21] mask), we observe that the second yields much better results than the first. We have therefore used median filtering with a [21 21] mask in our project, implemented with an inbuilt MATLAB function (MATLAB was used throughout this project):
I1 = medfilt2(I, [21 21]);
Here “I1” is the output image and “I” is the input image. The blurred output is now subtracted from the pre-processed gray scale image (obtained after green channel extraction). The image resulting from this subtraction contains slightly prominent blood vessels, while everything else in the background remains almost completely dark. To make the blood vessels more prominent, we then apply contrast enhancement; as mentioned earlier, the power-law transformation is used for contrast enhancement in our project. This type of transformation is also used for adapting images to different display devices, each of which has its own gamma: for example, the γ (gamma) of a CRT lies between 1.8 and 2.5, which means an image displayed on a CRT appears dark. Varying the value of γ varies the degree of enhancement, which is why different display devices show the same image at different intensities. The method works well for images with poor contrast, especially those whose histogram peaks corresponding to background and foreground are not widely separated. We implemented the power-law transformation in MATLAB as follows:
Ib1 = Ib.^r;    % element-wise power-law transformation
where “Ib1” represents the output image and “Ib” the input image. Here we have used r = 2, as it yields the best result. For further enhancement, i.e. to make the blood vessels still more prominent, we apply an exponential transformation to the image obtained from the power-law transformation. The exponential transformation enhances only the blood vessels, while everything else in the background remains almost completely dark, as it was after the power-law step; this is why the exponential transformation can be applied so readily.
The exponential and `raise to power' operators are two anamorphosis operators which can be applied to
grayscale images. Like the logarithmic transform, they are used to change the dynamic range of an image.
However, in contrast to the logarithmic operator, they enhance high intensity pixel values.
The exponential operator is a point process where the mapping function is an exponential curve. This
means that each pixel intensity value in the output image is equal to a basis value raised to the value of
the corresponding pixel value in the input image. The basis number to be used depends on the desired
degree of compression of the dynamic range. In order to enhance the visibility of a normal photograph,
values just above 1 are suitable. For display, the image must be scaled such that the maximum value
becomes 255 (assuming an 8-bit display). The resulting image is given by

Q(i,j) = c · b^P(i,j)

where P and Q are the input and output images, respectively, b is the basis and c is the scaling factor (chosen so that the maximum output value is 255).
We implemented this exponential transformation using an inbuilt MATLAB function as follows:
y = exp(double(x));
where “y” is the output image and “x” is the input image.
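Putting the stages described so far together, a minimal end-to-end sketch of the enhancement chain (blur, subtract, power-law with r = 2, exponential) is given below. All names are illustrative, and the normalization before exp and the final rescaling to the 0–255 display range are assumptions added to keep the sketch runnable, since raw exponentiation of large values overflows:

RGB = imread('retina.png');       % illustrative file name, not from the report
G = RGB(:,:,2);                   % pre-processed gray scale image (green channel)
B = medfilt2(G, [21 21]);         % blurring by median filtering
D = imsubtract(G, B);             % vessels slightly prominent, background dark
P = double(D).^2;                 % power-law transformation with r = 2
E = exp(P ./ max(P(:)));          % exponential transformation (normalization assumed)
out = uint8(255 * mat2gray(E));   % scale to 0..255 for display (assumption)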

The exponential transformation completes the contrast enhancement procedure. After this, image segmentation is performed. In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as image objects). The goal of segmentation is to simplify and/or change the representation of an image into something more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, it is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics. The result is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (as in edge detection). The pixels in a region are similar with respect to some characteristic or computed property, such as color, intensity, or texture, while adjacent regions differ significantly with respect to the same characteristic(s). When applied to a stack of images, as is typical in medical imaging, the resulting contours can be used to create 3D reconstructions with the help of interpolation algorithms such as marching cubes.

Here, image segmentation is performed to separate the blood vessels from the rest of the image. Segmentation partitions an image into distinct regions, each containing pixels with similar attributes. To be meaningful and useful for image analysis and interpretation, the regions should relate strongly to depicted objects or features of interest. Meaningful segmentation is the first step from low-level image processing, which transforms a greyscale or color image into one or more other images, to high-level image description in terms of features, objects, and scenes. The success of image analysis depends on reliable segmentation, but accurately partitioning an image is generally a very challenging problem. Segmentation techniques are either contextual or non-contextual. The latter take no account of spatial relationships between features in an image and group pixels together on the basis of some global attribute, e.g. grey level or color; contextual techniques additionally exploit these relationships, e.g. grouping together pixels with similar grey levels and close spatial locations.

Image segmentation, which is quite important for computer vision, partitions an image into meaningful, disjoint regions based on some criteria. It is a challenging, complex task affected by numerous factors, including noise, low contrast, illumination, and irregularity of object boundaries, and it is a long-standing problem in computer vision research. The main aim of segmentation is to extract the ROI (region of interest) for image analysis. The division of an image into meaningful structures is often an essential step in image analysis, object representation, visualization, and many other image processing tasks. Segmentation plays an important role in image processing, since separating a large image into several parts makes further processing simpler; these parts, when rejoined, cover the entire image. Segmentation may also depend on various features contained in the image, such as color or texture, and it reduces the information content of the image, making analysis easier. Several image segmentation techniques exist, which partition the image into parts based on image features such as pixel intensity value, color, and texture; these techniques are categorized by the segmentation method used. In our project, the segmentation procedure also involves binarisation of the image, so that only the blood vessels are visible while the rest of the image becomes completely dark (i.e. the blood vessels are represented in white and the rest of the image in black).

For binarisation of the image, the bit plane slicing technique is used. With this technique, the significant bits of a gray scale image can be separated out, which is useful for processing the data with very low time complexity; the resulting binary images can also be compressed with various algorithms. Bit plane slicing is a method of representing an image with one or more bits of the byte used for each pixel. One can use only the MSB to represent a pixel, which reduces the original gray level image to a binary image; converting a gray level image to a binary image is thus a principal goal of bit plane slicing.

A bit plane of a digital discrete signal (such as an image or sound) is the set of bits corresponding to a given bit position in each of the binary numbers representing the signal. For example, for a 16-bit data representation there are 16 bit planes: the first bit plane contains the set of most significant bits, and the 16th contains the least significant bits. The first bit plane gives the roughest but most critical approximation of the signal, and the higher the number of the bit plane, the smaller its contribution to the final result; adding a bit plane therefore gives a better approximation. If a bit on the nth bit plane of an m-bit dataset is set to 1, it contributes a value of 2^(m−n); otherwise it contributes nothing. Each bit plane thus contributes half the value of the previous one. "Bit plane" is sometimes used as a synonym for "bitmap"; technically, however, the former refers to the location of the data in memory and the latter to the data itself.

One aspect of using bit planes is determining whether a bit plane is random noise or contains significant information. One method is to compare each pixel (X, Y) with three adjacent pixels (X − 1, Y), (X, Y − 1) and (X − 1, Y − 1): if the pixel equals at least two of these three neighbors, it is not noise. A noisy bit plane will have 49% to 51% of its pixels failing this test.

The gray level of each pixel in a digital image is stored as one or more bytes in a computer. For an 8-bit image, 0 is encoded as 00000000 and 255 as 11111111; every value between 0 and 255 is encoded in one byte. The leftmost bit is the most significant bit (MSB), because a change in that bit significantly changes the value encoded by the byte. The rightmost bit is the least significant bit (LSB), because a change in that bit does not change the encoded gray value much. Given an image with X bits per pixel, slicing the image into its bit planes plays an important role in image processing; one application of this technique is data compression. In general, 8-bit per pixel images are processed. We can slice such an image into the following bit planes, where 0 is the least significant bit (LSB) and 7 is the most significant bit (MSB):

1. Bit plane 0 (LSB), which displays all pixels with bit 0 set: 0000.0001
2. Bit plane 1, which displays all pixels with bit 1 set: 0000.0010
3. Bit plane 2, which displays all pixels with bit 2 set: 0000.0100
4. Bit plane 3, which displays all pixels with bit 3 set: 0000.1000
5. Bit plane 4, which displays all pixels with bit 4 set: 0001.0000
6. Bit plane 5, which displays all pixels with bit 5 set: 0010.0000
7. Bit plane 6, which displays all pixels with bit 6 set: 0100.0000
8. Bit plane 7 (MSB), which displays all pixels with bit 7 set: 1000.0000
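As a hedged MATLAB sketch of both ideas above (the image name is illustrative; note that bitget numbers bits from 1, so bit plane 0 in the list corresponds to bitget's bit 1), the planes can be extracted and screened with the three-neighbor noise test as follows:

I8 = imread('cameraman.tif');       % any 8-bit grayscale image (illustrative)
for b = 1:8                         % bitget bit 1 = LSB ... bit 8 = MSB
    plane = logical(bitget(I8, b)); % one bit plane as a binary image
    % three-neighbor agreement test: compare each pixel with its left,
    % upper, and upper-left neighbors
    p = plane(2:end, 2:end);
    nW = plane(2:end, 1:end-1);
    nN = plane(1:end-1, 2:end);
    nNW = plane(1:end-1, 1:end-1);
    agree = (p == nW) + (p == nN) + (p == nNW);
    noisy = mean(agree(:) < 2);     % fraction of pixels flagged as noise
    fprintf('bit plane %d: %.1f%% noise-like pixels\n', b-1, 100*noisy);
end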

In this project, we extract only bit plane 7 (the MSB) from the image for binarisation, on the basis of the desired result. This extraction is done with the bitwise AND operator in MATLAB, which performs a bitwise AND between each pixel value (represented in 8 bits) and the 8-bit value representing the chosen bit plane. The MATLAB function used is:

y2 = bitand(y, 128);
where “y2” is the output image and “y” is the input image. The inbuilt MATLAB function bitand performs a bitwise AND between each pixel value (represented in 8 bits) and 128 (the 8-bit value 10000000). We use 128 because we want to extract bit plane 7, which displays all pixels with bit 7 set (1000.0000, as mentioned earlier). The resulting image thus contains only the information carried by the MSB of each pixel of the input image, and we obtain the binarised image by extracting this plane (bit plane 7, i.e. the 8th plane counting from one). After binarisation, in order to eliminate the remains of background elements other than the blood vessels, median filtering with a [2 2] mask is performed on the binarised image. This final step is implemented in MATLAB as follows:

y1 = medfilt2(y2, [2 2]);
where “y2” is the binarised image obtained after bit plane slicing and “y1” is the image obtained by applying the median filtering algorithm to “y2”. The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal, and such noise reduction is a typical step to improve the results of later processing (for example, edge detection). It is for this reason that median filtering is applied to the binarised image: it eliminates the remains of the undesired background elements, apart from the blood vessels, still present after binarisation.
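One detail worth noting: bitand returns pixel values of 0 and 128, not 0 and 255, so the extracted plane is not yet literally black and white. A hedged fragment for obtaining a true logical mask before the final filtering (the thresholding step is an assumption, not stated in the report) could be:

y2 = bitand(y, 128);          % values are 0 or 128 after plane extraction
bw = y2 > 0;                  % logical mask: vessels true (white), background false
y1 = medfilt2(bw, [2 2]);     % remove residual background specks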
This concludes the two consecutive processes of contrast enhancement and bit plane slicing of the image.

Fig 1: Original pre-processed image of the green channel extracted from the colored input image.
Fig 2: Blurring of the pre-processed gray scale image.
Fig 3: Image with slightly prominent blood vessels (formed by subtracting the blurred image from the original gray scale image).
Fig 4: Image obtained after contrast enhancement of the image in Fig 3 by the exponential function.
Fig 5: Image obtained by extracting the 8th bit plane from the contrast stretched image (as in Fig 4).
Fig 6: Median filtering algorithm applied on the extracted 8th bit plane.
