
Chapter 1

Introduction to Image Color Feature

Two main factors motivate the need for color in image processing. First, color is
a strong descriptor that frequently simplifies the recognition and extraction of objects
from a picture. Second, people can distinguish thousands of tones of color and intensity,
compared with only about two dozen shades of gray [1–4]. Color is one of the
most significant characteristics that enable people to recognize images. It is a
feature that relies on the reflection of light to the eye and its processing in the brain.
Color is used on a daily basis to tell apart objects, locations, and the time of day.
Comparing the color content of images is straightforward, which makes color one
of the popular choices for image retrieval tasks.
The exact method through which the human brain controls and understands color
vision is a complicated psychophysiological process. Nevertheless, the fundamental
attributes of light itself can easily be expressed on the basis of experimental and
theoretical findings. In 1666, Sir Isaac Newton noted that when white light is
transmitted through a prism, it spreads into a continuous color band from violet to red.
He also observed that no color ends sharply within this spectrum; the transition
from one color to the next is always smooth. Figure 1.1 demonstrates this process.
Each color in the spectrum corresponds to a particular, distinctive wavelength.
Different combinations of these wavelengths can be mixed to create other
colors as well.
The colors experienced by the human visual system and some other creatures are
determined by the light reflected from objects [5, 6]. As shown in Fig. 1.2, visible
light makes up only a tiny part of the entire electromagnetic spectrum. Objects that
reflect all wavelengths of the light spectrum appear white, while objects that
favor the reflection of certain wavelengths appear to the observer in some color
shade. For example, objects that reflect wavelengths in the range of 420–470 nm
while absorbing most of the others appear blue.


Fig. 1.1 The color spectrum obtained by transmitting white light via a prism

Fig. 1.2 Wavelength making up visible light or spectrum

Special sensor cells, widely known as cones, are in charge of color
perception within the human eye [7, 8]. Researchers have estimated, based
on observational evidence, that the six to seven million cones in the human eye
can be divided into three sensing groups corresponding approximately
to the colors blue, green, and red. Only about 2% of these cones belong to the blue
group, 33% to the green group, and 65% to the red group. These proportions do
not directly reflect sensitivity to each color; in fact, the blue cones
are the most sensitive, compensating for their scarcity.
Figure 1.3 demonstrates the absorption of light by the red, green, and blue cones
as a function of wavelength, together with the wavelengths at which each cone type
is most sensitive, on a normalized scale. Because of these characteristics of the cone
cells of the eye, colors are perceived as varying proportions of these primary colors.
Moreover, due to the continuous nature of the visible spectrum and the overlapping
cone sensitivities, no particular color may be classified as purely red, green, or blue.
Mixtures of such primaries based on fixed wavelengths cannot be utilized to generate
all spectrum wavelengths; the amounts of blue, red, and green must be allowed to
vary in order to generate all colors.

Fig. 1.3 Absorption of light by the human eye cones
A color model is a mathematical model that facilitates color definition in a standard
and accepted way. The model, also known as a color system or color space,
designates a coordinate system and a subspace within it
where every point corresponds to a single color [9]. Nowadays, most color
models are designed for use either in computer hardware (monitor, printer, etc.) or
in applications where the use of color is the main goal (e.g., graphic design). Presently
there are countless color models in practical use simply because color science is a
vast domain covering many applications.

1.1 Color Spaces or Color Models

A suitable technique for depicting the color signal is required in order to use color as a visual
stimulus in image processing, multimedia, statistics, and machine vision applications.
This need is addressed by the various color specification systems or color models
(color spaces or solids). Color spaces offer a rational technique for specifying,
ordering, manipulating, and displaying the colors of the object under consideration
[10]. A well-selected representation retains vital data and gives insight into the
required visual operation. The color model chosen should, therefore, be well suited
to the statement and solution of the problem at hand. Selecting the best image
representation method requires understanding how color signals are generated and
what information they carry. Even though color spaces impose restrictions on color
perception and depiction, they also help people fulfill significant
tasks. In particular, color models can be utilized to specify colors, distinguish colors,
assess color similarity, and identify image categories for a variety of applications.
Color model literature can be found across the modern sciences, such as engineering,
physics, computer science, artificial intelligence, sociology, and philosophy.
In the literature, four fundamental color model families can be defined as follows:
1. Colorimetric color models, based on physical measurements of spectral reflectance
[11]. For such models, three primary colors are identified by their coordinates,
as in the CIE chromaticity diagram. To obtain other colors, the primary colors are
mixed subtractively or additively.
2. Psychophysical color models, based on human color perception [12]. Such color spaces
are either focused on subjective observation standards and relative references
(e.g., the Munsell color space) or are constructed through studies to match human
color perception (e.g., the hue, saturation, and lightness model).
3. Physiological color representations, based on the three main cone types in
the human retina [13]. The Red-Green-Blue (RGB) color space utilized in computer
hardware is a renowned example of a physiologically inspired color model.
4. Opponent color spaces or models, relying on perception experiments and
primarily using opponent color pairs, like Yellow-Blue and
Red-Green [14].

Alternatively, color models can be split into three classes for image processing
applications:
1. Device-oriented color models, which are correlated with input, processing, and
output devices, including RGB, CMY, and YIQ [15]. In modern applications, such
spaces are of primary significance where color needs to be specified in a manner
that is compatible with the hardware devices used to provide, manipulate, or
receive color image data.
2. User-oriented color models, which serve as a link between hardware and the human
operator manipulating color information, such as HCV, HSL, HSB, HSV,
CIE-LAB, MTM, and CIE-LUV [16]. These models enable the individual to
define color through perceptual attributes and can be regarded as an
approximation of human color perception.
3. Device-independent color models, which are used to specify color signals
regardless of device or application characteristics [17]. These models are
important in applications where color comparisons are needed and perceptual
information is transmitted across networks connecting various hardware
platforms.

As commonly used image formats like GIF, BMP, and JPEG store
and display images in the RGB color model, an image retrieval scheme based on
the RGB color space requires no color space conversion and is therefore easy to
implement. However, because the RGB color space is not perceptually
uniform, the RGB color model may be converted into other models for color-based
image retrieval that offer better perceptual consistency. Even so, there is no proof
that any particular color space will always produce the best results for image
retrieval.

1.1.1 The RGB Color Space

Within the RGB color space, each color appears as a three-dimensional point in a
subspace of the conventional Cartesian coordinate system. Each axis signifies one of the
three color components (red (R), green (G), and blue (B)) that portray all colors
in the model [18]. For simplicity, these values are sometimes normalized so that all
values of R, G, and B fall in the range [0, 1]. Figure 1.4 visually portrays the RGB
subspace. RGB color images are made up of three channel images, also named
components, one for each primary color. These channels combine in an additive way
to define the color at a specified pixel (each pixel is colored by a mixture of red,
green, and blue). This translates directly into how a color monitor displays color,
which is why RGB is often the default color model in most applications. One
significant limitation of the RGB color model is that it is not perceptually uniform,
which means that computed distances in the RGB model are not really proportional
to perceived color differences.

Fig. 1.4 Normalized RGB coordinate color cube

Linear RGB Color Model

The linear R component is proportional to the physical power intensity radiated
from an object around the visible spectrum's 700 nm band [19]. Similarly, the
546.1 nm band refers to the linear G component and the 435.8 nm band to the
linear B component. As a consequence, the linear RGB space is device-independent
and is utilized to obtain color consistency across devices in several color
management systems. Using the following matrix conversion (Eq. 1.1), linear
RGB values in the range [0, 1] can be transformed into the corresponding CIE XYZ
values in the range [0, 1].
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4125 & 0.3576 & 0.1804 \\ 0.2127 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9502 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1.1)$$

The conversion from CIE XYZ to RGB values in the range [0, 1] is defined by
Eq. (1.2).
$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 3.2405 & -1.5372 & -0.4985 \\ -0.9693 & 1.8760 & 0.0416 \\ 0.0556 & -0.2040 & 1.0573 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \quad (1.2)$$

On the other hand, tristimulus XYZ values can be attained from the linear RGB
values by using the subsequent matrix represented in Eq. (1.3).
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.490 & 0.310 & 0.200 \\ 0.117 & 0.812 & 0.011 \\ 0.000 & 0.010 & 0.990 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1.3)$$

The linear RGB values are a physical depiction of the chromatic light radiated from
an object. However, the human visual system's perceptual response to emitted R, G, and B
intensities is more complicated and non-linear. The linear RGB space is perceptually
highly non-uniform and not appropriate for mathematical assessment of perceptual
attributes. Therefore, it is very rare to use linear RGB values to depict an image.
Instead, in image processing applications such as filtering, non-linear
R′G′B′ values are generally utilized.
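As an illustration, the conversions of Eqs. (1.1) and (1.2) amount to two fixed matrix products. The following Python sketch uses NumPy; the function names are ours, and the inputs are assumed to be linear RGB (or XYZ) triplets in the range [0, 1].

```python
import numpy as np

# Matrices copied from Eqs. (1.1) and (1.2).
RGB_TO_XYZ = np.array([[0.4125, 0.3576, 0.1804],
                       [0.2127, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9502]])
XYZ_TO_RGB = np.array([[ 3.2405, -1.5372, -0.4985],
                       [-0.9693,  1.8760,  0.0416],
                       [ 0.0556, -0.2040,  1.0573]])

def linear_rgb_to_xyz(rgb):
    """Linear RGB in [0, 1] -> CIE XYZ (Eq. 1.1)."""
    return RGB_TO_XYZ @ np.asarray(rgb, dtype=float)

def xyz_to_linear_rgb(xyz):
    """CIE XYZ -> linear RGB (Eq. 1.2)."""
    return XYZ_TO_RGB @ np.asarray(xyz, dtype=float)

print(linear_rgb_to_xyz([1.0, 1.0, 1.0]))  # XYZ of the RGB white point
```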

Non-linear RGB Color Model


The video camera is exposed to the linear light radiated from the object when an
image is captured. Using gamma correction, the linear RGB intensities incident on the
video camera are converted into non-linear RGB measurements. The conversion from
linear RGB values to non-linear R′G′B′ values in the range [0, 1] is defined by
Eq. (1.4).

$$R' = \begin{cases} 4.5R, & \text{if } R \le 0.018 \\ 1.099\,R^{1/\gamma} - 0.099, & \text{otherwise} \end{cases}$$

$$G' = \begin{cases} 4.5G, & \text{if } G \le 0.018 \\ 1.099\,G^{1/\gamma} - 0.099, & \text{otherwise} \end{cases}$$

$$B' = \begin{cases} 4.5B, & \text{if } B \le 0.018 \\ 1.099\,B^{1/\gamma} - 0.099, & \text{otherwise} \end{cases} \quad (1.4)$$

where γ is the gamma factor of the camera or acquisition device. Normally,
the value of γ used in video cameras is 1/0.45. In practical scanners and
cameras, the linear section near small intensities reduces the impact of sensor noise.
The digital image pixel values obtained from a scanner or camera are therefore
R′G′B′ values, generally mapped to the interval 0 to 255. The three parts R′, G′,
and B′ of a color image pixel, with one byte for each component, then require three
bytes per pixel. It is these non-linear R′G′B′ values that are recorded in computers
as image data and utilized in machine vision applications. Generally, the RGB
notation used in the machine vision literature refers to R′G′B′ values, so attention
must be paid in color model conversions and other related computations.
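As a sketch, the gamma correction of Eq. (1.4) can be written as a small NumPy function. The function name and sample values are ours; γ defaults to 1/0.45, the video camera value mentioned above.

```python
import numpy as np

def gamma_encode(c, gamma=1 / 0.45):
    """Linear RGB component(s) in [0, 1] -> non-linear R'G'B' (Eq. 1.4)."""
    c = np.asarray(c, dtype=float)
    # The linear segment near small intensities limits noise amplification.
    return np.where(c <= 0.018,
                    4.5 * c,
                    1.099 * np.power(c, 1.0 / gamma) - 0.099)

# 8-bit values as they would be stored in an image file:
pixels = np.round(255 * gamma_encode([0.005, 0.2, 0.9])).astype(np.uint8)
```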

1.1.2 The CMY and CMYK Color Space

Like the RGB color model, the CMY color model is a subspace of conventional
three-dimensional Cartesian space, taking the form of a unit cube. Each axis
represents one of the secondary colors cyan, magenta, and yellow
[20]. However, unlike RGB, CMY is a subtractive color model, which means that
the origin in RGB is pure black while the origin in CMY is pure white. In other words,
growing CMY coordinate values shift toward darker shades whereas growing RGB
coordinate values shift toward brighter colors (see Fig. 1.5). The transformation from
RGB to CMY can be done using Eq. (1.5).

Fig. 1.5 Normalized CMY coordinate color cube

$$\begin{bmatrix} C \\ M \\ Y \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1.5)$$

where all color values are assumed to be normalized to the interval [0, 1].
Equation (1.5) reaffirms the CMY model's subtractive character. Although equal
components of cyan, magenta, and yellow should produce black, it has been discovered
that this contributes to muddy outcomes in printing applications. A fourth component
of real black is therefore introduced, producing the CMYK color space used in
four-color printing. Like the RGB model, pixel distances in the CMY color model
do not conform to perceptual color differences.
The method to transform RGB color to CMYK is stated below.
The R, G, B values are divided by 255 to change the range [0, 255] to [0, 1] as
shown in Eq. (1.6).

 
$$R' = R/255, \quad G' = G/255, \quad B' = B/255 \quad (1.6)$$
 
The black key (K) color is computed from the red (R′), green (G′), and blue (B′)
values as shown in Eq. (1.7).

$$K = 1 - \max(R', G', B') \quad (1.7)$$


The cyan color (C) is computed from the red (R′) and black (K) values as
represented in Eq. (1.8).

$$C = \frac{1 - R' - K}{1 - K} \quad (1.8)$$


The magenta color (M) is computed from the green (G′) and black (K) values
as represented in Eq. (1.9).

$$M = \frac{1 - G' - K}{1 - K} \quad (1.9)$$


The yellow color (Y) is computed from the blue (B′) and black (K) values as
shown in Eq. (1.10).

$$Y = \frac{1 - B' - K}{1 - K} \quad (1.10)$$

The way to transform CMYK color to RGB is stated below; the resulting R, G, B
values lie in the range [0, 255]. The red color (R) is
computed from the cyan (C) and black (K) values as shown in Eq. (1.11).

$$R = 255 \times (1 - C) \times (1 - K) \quad (1.11)$$

The green color (G) is computed from the magenta (M) and black (K) values as
represented in Eq. (1.12).

$$G = 255 \times (1 - M) \times (1 - K) \quad (1.12)$$

The blue color (B) is computed from the yellow (Y) and black (K) values as
shown in Eq. (1.13).

$$B = 255 \times (1 - Y) \times (1 - K) \quad (1.13)$$
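A minimal Python sketch of Eqs. (1.6)–(1.13) follows. The function names are ours, and the pure-black case, which Eqs. (1.8)–(1.10) leave undefined (division by zero), is guarded explicitly.

```python
def rgb_to_cmyk(r, g, b):
    """8-bit RGB -> CMYK in [0, 1], per Eqs. (1.6)-(1.10)."""
    r_, g_, b_ = r / 255.0, g / 255.0, b / 255.0   # Eq. (1.6)
    k = 1.0 - max(r_, g_, b_)                       # Eq. (1.7)
    if k == 1.0:                                    # pure black: avoid 0/0
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r_ - k) / (1.0 - k)                  # Eq. (1.8)
    m = (1.0 - g_ - k) / (1.0 - k)                  # Eq. (1.9)
    y = (1.0 - b_ - k) / (1.0 - k)                  # Eq. (1.10)
    return c, m, y, k

def cmyk_to_rgb(c, m, y, k):
    """CMYK in [0, 1] -> 8-bit RGB, per Eqs. (1.11)-(1.13)."""
    return (round(255 * (1 - c) * (1 - k)),
            round(255 * (1 - m) * (1 - k)),
            round(255 * (1 - y) * (1 - k)))
```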

1.1.3 The HSV and HSL Color Space

The HSV (hue, saturation, value) and HSL (hue, saturation, lightness) color
models are very distinct from the RGB and CMY/K color models studied earlier,
in that both systems separate a point's total intensity value from its chromaticity.
The HSV color model can be viewed as a downward-pointing hexacone in three
dimensions [21]. The line that runs down the middle of the cone's vertical axis
represents the intensity value V. Hue is portrayed as the angle relative to the red
axis, lying in the plane perpendicular to the intensity axis. Saturation is
the perpendicular distance of a point from the intensity axis. Figure 1.6 shows this
hexacone representation of the HSV color model.
The group of equations below (Eqs. 1.14–1.18) can be utilized to convert a point
in the RGB coordinate system to the corresponding value in the HSV space.
 
$$H' = \cos^{-1}\left[\frac{\frac{1}{2}\left[(R - G) + (R - B)\right]}{\sqrt{(R - G)^2 + (R - B)(G - B)}}\right] \quad (1.14)$$

$$H = H', \quad \text{if } B \le G \quad (1.15)$$

$$H = 360^\circ - H', \quad \text{if } B > G \quad (1.16)$$


Fig. 1.6 Color hexagon for HSV depiction

$$S = \frac{\max(R, G, B) - \min(R, G, B)}{\max(R, G, B)} \quad (1.17)$$

$$V = \frac{\max(R, G, B)}{255} \quad (1.18)$$

Here the RGB value range is [0, 255].
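The conversion can be sketched in Python as below. The function name is ours; achromatic pixels, where the denominator of Eq. (1.14) vanishes, are given a hue of zero by convention, and the cosine argument is clamped against floating-point overshoot.

```python
import math

def rgb_to_hsv(r, g, b):
    """8-bit RGB -> (H in degrees, S, V), per Eqs. (1.14)-(1.18)."""
    mx, mn = max(r, g, b), min(r, g, b)
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den else 0.0
    if b > g:                                  # Eqs. (1.15)-(1.16)
        h = 360.0 - h
    s = (mx - mn) / mx if mx else 0.0          # Eq. (1.17)
    v = mx / 255.0                             # Eq. (1.18)
    return h, s, v
```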
The HSL color model is very comparable to the HSV system. A double hexacone
is utilized to visualize the subspace in three dimensions, with two apexes, at
pure black and at pure white, instead of just one at pure black. The saturation component
in HSL [22] runs from a fully saturated color to the corresponding gray, while in HSV,
with V at its maximum, saturation runs from a fully saturated color to white.
In addition, the lightness component in HSL always extends from black to white through
the selected hue, whereas in HSV the intensity component goes only from black to the
selected hue. Because chromaticity is separated from intensity in both the HSV and
HSL color spaces, intensity can be processed on its own, leaving the original
color data intact. This has resulted in the widespread utilization of HSL and HSV in
computer vision research.
The HSI machine vision literature does not state whether non-linear or linear
RGB is used in these transformations. Therefore, the non-linear (R′G′B′) values,
which are conventional in machine vision, are utilized here, but the ambiguity must
be noted. The transformation from R′G′B′ (range [0, 1]) to HSI (range [0, 1]) is highly
non-linear and complex, as shown in Eqs. (1.19–1.21).
$$H = \cos^{-1}\left[\frac{\frac{1}{2}\left[(R' - G') + (R' - B')\right]}{\left[(R' - G')^2 + (R' - B')(G' - B')\right]^{1/2}}\right] \quad (1.19)$$
$$S = 1 - \frac{3}{(R' + G' + B')}\,\min(R', G', B') \quad (1.20)$$

$$I = \frac{1}{3}\,(R' + G' + B') \quad (1.21)$$
   
where H = 360° − H if B′/I > G′/I. Hue is normalized to the range [0, 1] by
H = H/360°. Hue (H) is not defined when the saturation (S) is zero; likewise,
saturation (S) is undefined when the intensity (I) is zero.
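A Python sketch of Eqs. (1.19)–(1.21) follows. The function name is ours; hue is returned normalized to [0, 1] as described above, and the undefined achromatic cases are mapped to zero.

```python
import math

def rgb_to_hsi(r_, g_, b_):
    """Non-linear R'G'B' in [0, 1] -> HSI in [0, 1] (Eqs. 1.19-1.21)."""
    i = (r_ + g_ + b_) / 3.0                               # Eq. (1.21)
    if i == 0.0:
        return 0.0, 0.0, 0.0       # hue and saturation undefined at black
    s = 1.0 - 3.0 * min(r_, g_, b_) / (r_ + g_ + b_)       # Eq. (1.20)
    num = 0.5 * ((r_ - g_) + (r_ - b_))
    den = math.sqrt((r_ - g_) ** 2 + (r_ - b_) * (g_ - b_))
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den else 0.0
    if b_ > g_:                    # equivalent to B'/I > G'/I since I > 0
        h = 360.0 - h
    return h / 360.0, s, i         # hue normalized to [0, 1]
```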
To convert HSI values back to R′G′B′ in the range [0, 1], the hue (H) value in the
range [0, 1] is first transformed back to its un-normalized range [0°, 360°] by
H = 360°(H). The R′G′ conversion for 0° < H ≤ 120° is shown in equation set
(1.22).

$$B' = I(1 - S)$$
$$R' = I\left[1 + \frac{S\cos H}{\cos(60^\circ - H)}\right]$$
$$G' = 3I - (R' + B') \quad (1.22)$$

The G′B′ conversion for 120° < H ≤ 240° is shown in equation set (1.23).

$$H = H - 120^\circ$$
$$R' = I(1 - S)$$
$$G' = I\left[1 + \frac{S\cos H}{\cos(60^\circ - H)}\right]$$
$$B' = 3I - (R' + G') \quad (1.23)$$

The B′R′ conversion for 240° < H ≤ 360° is shown in equation set (1.24).

$$H = H - 240^\circ$$
$$G' = I(1 - S)$$
$$B' = I\left[1 + \frac{S\cos H}{\cos(60^\circ - H)}\right]$$
$$R' = 3I - (G' + B') \quad (1.24)$$
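The sector-based back conversion can be sketched as follows; the function name is ours, and the undefined hue of achromatic colors is not handled specially.

```python
import math

def hsi_to_rgb(h, s, i):
    """HSI in [0, 1] -> non-linear R'G'B' (Eqs. 1.22-1.24)."""
    h = 360.0 * h                               # back to degrees
    if h <= 120.0:                              # RG sector, Eq. (1.22)
        b = i * (1.0 - s)
        r = i * (1.0 + s * math.cos(math.radians(h))
                       / math.cos(math.radians(60.0 - h)))
        g = 3.0 * i - (r + b)
    elif h <= 240.0:                            # GB sector, Eq. (1.23)
        h -= 120.0
        r = i * (1.0 - s)
        g = i * (1.0 + s * math.cos(math.radians(h))
                       / math.cos(math.radians(60.0 - h)))
        b = 3.0 * i - (r + g)
    else:                                       # BR sector, Eq. (1.24)
        h -= 240.0
        g = i * (1.0 - s)
        b = i * (1.0 + s * math.cos(math.radians(h))
                       / math.cos(math.radians(60.0 - h)))
        r = 3.0 * i - (g + b)
    return r, g, b
```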

The vital benefits of the HSI color spaces over other color spaces are (1) good
compatibility with human perception, (2) separation of chromatic values from
achromatic values, and (3) the possibility of using only one color feature, i.e., hue,
for segmentation purposes. Several image segmentation methods take advantage of this.
Segmentation is generally done on one color feature (hue) instead of three,
permitting the use of much faster algorithms.
However, hue-oriented color models have some substantial limitations, such as (1)
singularities in the conversion, e.g., undefined hue for achromatic points, (2) sensitivity
to small deviations of RGB values near singular points, and (3) numerical
instability when operating on hue, because of the feature's angular nature.

1.1.4 The YIQ and YUV Color Space

The YIQ color system was created by and for the television industry out of a need
to compress digital imagery streams with as little image deterioration as possible
[23]. The luminance value Y is separated from the chromaticity values I and Q,
just as in the HSV and HSL models. This enables engineers to represent the luminance
value with more bits than the chromaticity attributes, as the human visual system is
much more sensitive to changes in intensity. Equation (1.25) provides
an approximate linear transformation from a set of RGB coordinates to the YIQ
space.
$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.30 & 0.59 & 0.11 \\ 0.60 & -0.28 & -0.32 \\ 0.21 & -0.52 & 0.31 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1.25)$$

A comparable color model, YUV, is also utilized for television broadcasting and in
the common compression procedures MPEG and JPEG [24]. Equation (1.26)
provides the approximate linear transformation from RGB coordinates to the YUV space.
$$\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.30 & 0.59 & 0.11 \\ -0.15 & -0.29 & 0.44 \\ 0.62 & -0.51 & -0.10 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1.26)$$
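Both transforms are plain matrix products; the NumPy sketch below uses the matrices of Eqs. (1.25) and (1.26), with function names of our choosing.

```python
import numpy as np

RGB_TO_YIQ = np.array([[0.30,  0.59,  0.11],    # Eq. (1.25)
                       [0.60, -0.28, -0.32],
                       [0.21, -0.52,  0.31]])
RGB_TO_YUV = np.array([[ 0.30,  0.59,  0.11],   # Eq. (1.26)
                       [-0.15, -0.29,  0.44],
                       [ 0.62, -0.51, -0.10]])

def rgb_to_yiq(rgb):
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

def rgb_to_yuv(rgb):
    return RGB_TO_YUV @ np.asarray(rgb, dtype=float)
```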

1.1.5 The CIE L*a*b* Color Space

The CIE L*a*b* color space or model was established to be perceptually uniform
and possesses a Euclidean metric. This implies a strong correlation between the
Euclidean distance between two points (colors) and the difference in human visual
interpretation [25]. CIE L*a*b* is based on the CIE XYZ color space, in
which the components X, Y, and Z are tristimulus values that can convey any color
the average human observer can perceive. These primaries are imaginary, which
means that they cannot be realized by actual color stimuli. Because RGB is not an
absolute color space, it is impossible to convert RGB coordinates into CIE L*a*b*
space exactly, and RGB cannot generate all humanly perceptible colors. Instead,
RGB coordinates can be projected into the CIE XYZ color space by using Eq. (1.27).
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.41 & 0.36 & 0.18 \\ 0.21 & 0.72 & 0.07 \\ 0.02 & 0.12 & 0.95 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1.27)$$

where R, G, and B are in the range [0, 1]. From the XYZ model, it is now possible
to project into the CIE L*a*b* space using Eqs. (1.28–1.30).


$$L^* = 116\left(\frac{Y}{Y_T}\right)^{1/3} - 16 \quad (1.28)$$

$$a^* = 500\left[\left(\frac{X}{X_T}\right)^{1/3} - \left(\frac{Y}{Y_T}\right)^{1/3}\right] \quad (1.29)$$

$$b^* = 200\left[\left(\frac{Y}{Y_T}\right)^{1/3} - \left(\frac{Z}{Z_T}\right)^{1/3}\right] \quad (1.30)$$

The values $X_T$, $Y_T$, and $Z_T$ signify the tristimulus values of a reference white point.
The back transformation from the L*a*b* model to the XYZ model is completed
by first calculating the luminance Y; the subsequent calculation of X and Z is
represented in Eqs. (1.31–1.33).

$$Y = Y_T\left(\frac{L^* + 16}{116}\right)^3 \quad (1.31)$$

$$X = \left[\frac{a^*}{500} + \left(\frac{Y}{Y_T}\right)^{1/3}\right]^3 X_T \quad (1.32)$$

$$Z = \left[-\frac{b^*}{200} + \left(\frac{Y}{Y_T}\right)^{1/3}\right]^3 Z_T \quad (1.33)$$
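A Python sketch of Eqs. (1.28)–(1.33) follows. The reference white is an assumption here (a D65-like white), since the text leaves it unspecified, and the function names are ours.

```python
# Assumed reference white (approximately D65); the text leaves it open.
XT, YT, ZT = 0.9505, 1.0000, 1.0890

def xyz_to_lab(x, y, z):
    """CIE XYZ -> CIE L*a*b*, cube-root form of Eqs. (1.28)-(1.30)."""
    fx, fy, fz = (x / XT) ** (1 / 3), (y / YT) ** (1 / 3), (z / ZT) ** (1 / 3)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def lab_to_xyz(L, a, b):
    """Back transformation, Eqs. (1.31)-(1.33)."""
    y = YT * ((L + 16.0) / 116.0) ** 3          # Eq. (1.31)
    fy = (y / YT) ** (1 / 3)
    x = (a / 500.0 + fy) ** 3 * XT              # Eq. (1.32)
    z = (-b / 200.0 + fy) ** 3 * ZT             # Eq. (1.33)
    return x, y, z
```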

1.1.6 The Munsell Color Space

The Munsell color space reflects an early effort to incorporate the perception of
color into a color model. The Munsell space is described as an artists' comparative
reference [26]. Its overall shape is that of a three-dimensional cylindrical depiction of
perceived lightness, hue, and saturation.

Fig. 1.7 The Munsell color space

Furthermore, in contrast to the HSI or HSV color models, which represent the color
solids by hue, saturation, and lightness, the Munsell model utilizes the color atlas
approach, where the perceptual attributes are used for sampling.
The basic concept behind the Munsell color model is equal visual
spacing between each of the three attributes. Hue is scaled to a color that is
unique to define. It is represented by a circular band divided into ten sections. The
sections are defined as red, yellow-red, yellow, green-yellow, green, blue-green, blue,
purple-blue, purple, and red-purple.
If finer distinctions of hue are needed, each section can be further divided into ten
subsections. A chromatic hue is described in terms of the one or two neighboring hues
it resembles. Value in the Munsell color model relates to the lightness or darkness of a
color and is divided into 11 steps numbered from zero (black) to ten (white). Chroma
describes the strength of the color. It is evaluated in consecutive steps beginning from
one, with low chroma values for weak colors. The highest chroma available depends on
the hue and the value in use. As shown in Fig. 1.7, the vertical axis
of the Munsell solid is the line of values V from black to white. Hue changes
along each circle perpendicular to the vertical axis. Finally, chroma
begins at zero on the V axis and changes along each circle's radius.
The Munsell space consists of a collection of 1200 color chips, each assigned
a unique combination of hue, value, and chroma. These chips are combined to create
a three-dimensional structure forming a warped sphere. There are different editions of
the fundamental Munsell book of colors, with various finishes (matte or glossy), various
sample sizes, and various numbers of samples. The glossy finish collection displays
color point chips arranged on 40 charts of constant hue. On every constant-hue chart,
the chips are arranged in rows and columns. The colors progress from light at
the top of each chart to very dark at the bottom in steps designed to be perceptually
equal. They also progress from achromatic colors such as white and gray at the chart's
inner edge to chromatic colors at the chart's outer edge in steps that are also intended
to be perceptually equal. All charts together form the color atlas, which is the color
solid of the Munsell system.
Even though the Munsell color book can be utilized to identify or label colors,
it is not used directly in machine vision applications. Generally,
stored image data, quite often in RGB format, is transformed to Munsell coordinates
using either closed formulas or lookup tables prior to the actual application.
Using the following mathematical algorithm, the transformation from the RGB
components to the Munsell hue ($M_H$), value ($M_V$) correlating to the luminance, and
chroma ($M_C$) correlating to the saturation can be attained.

$$x = 0.620R + 0.178G + 0.204B$$
$$y = 0.299R + 0.587G + 0.114B$$
$$z = 0.056G + 0.942B \quad (1.34)$$

A non-linear conversion is applied to the intermediate values, as shown in equation
set (1.35).

$$a = f(x) - f(y)$$
$$b = 0.4\,(f(z) - f(y)) \quad (1.35)$$

where $f(w) = 11.6\,w^{1/3} - 1.6$. The new variables are then converted using equation set
(1.36).

$$c = (p + q\cos\theta)\,a$$
$$d = (r + s\sin\theta)\,b \quad (1.36)$$

where $\theta = \tan^{-1}(a/b)$, p = 8.880, q = 0.966, r = 8.025, and s = 2.558. At last,
$M_H$, $M_V$, and $M_C$ are computed using Eqs. (1.37–1.39).
$$M_H = \arctan\left(\frac{c}{d}\right) \quad (1.37)$$

$$M_V = f(y) \quad (1.38)$$

$$M_C = \sqrt{c^2 + d^2} \quad (1.39)$$

Alternatively, the transformation from RGB, or other color models, to the Munsell
color model can be attained via lookup tables.
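The closed-formula route of Eqs. (1.34)–(1.39) can be sketched in Python as follows; the function names are ours, and atan2 is used so the arctangents stay defined when a denominator vanishes.

```python
import math

def f(w):
    """Non-linear lightness function used by Eqs. (1.35) and (1.38)."""
    return 11.6 * w ** (1 / 3) - 1.6

def rgb_to_munsell(r, g, b):
    """RGB in [0, 1] -> approximate Munsell (M_H, M_V, M_C)."""
    x = 0.620 * r + 0.178 * g + 0.204 * b        # Eq. (1.34)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    z = 0.056 * g + 0.942 * b
    a = f(x) - f(y)                              # Eq. (1.35)
    bb = 0.4 * (f(z) - f(y))
    theta = math.atan2(a, bb)                    # theta = arctan(a / b)
    c = (8.880 + 0.966 * math.cos(theta)) * a    # Eq. (1.36)
    d = (8.025 + 2.558 * math.sin(theta)) * bb
    m_h = math.atan2(c, d)                       # Eq. (1.37)
    m_v = f(y)                                   # Eq. (1.38)
    m_c = math.hypot(c, d)                       # Eq. (1.39)
    return m_h, m_v, m_c
```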

1.1.7 The Opponent Color Space

This group of color spaces or models is a collection of physiologically motivated
color models influenced by the nature of the human visual system [27]. According to
the opponent color vision theory, the human visual system can be described in terms
of opponent hues, blue and yellow on the one hand and red and green on the other,
that cancel each other when superimposed. The RGB signals are converted into three
channels: two opponent color channels (RG, YB) and one achromatic channel (I),
obtained using Eqs. (1.40–1.42).

$$RG = R - G \quad (1.40)$$

$$YB = 2B - R - G \quad (1.41)$$

$$I = R + G + B \quad (1.42)$$
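For a whole image, the three channels of Eqs. (1.40)–(1.42) can be computed at once, as in this NumPy sketch (the function name is ours):

```python
import numpy as np

def opponent_channels(rgb_image):
    """Split an H x W x 3 RGB array into (RG, YB, I), Eqs. (1.40)-(1.42)."""
    r, g, b = (rgb_image[..., k].astype(float) for k in range(3))
    rg = r - g              # red-green opponent channel, Eq. (1.40)
    yb = 2.0 * b - r - g    # yellow-blue opponent channel, Eq. (1.41)
    i = r + g + b           # achromatic channel, Eq. (1.42)
    return rg, yb, i
```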

At the same time, a number of efficient color features can be derived through
systematic experiments in region segmentation. According to this segmentation
method, the feature that has deep valleys on its histogram and the greatest
discriminating power to separate the image clusters in a given region need not be
one of the R, G, and B color features. As a feature is deemed to have great
discriminating power if its variance is large, color features with great discriminating
power have been obtained through the Karhunen-Loeve (KL) transformation. At each
step of segmenting a region, the new color features for the region's pixels are
calculated by the KL transformation of the R, G, and B signals. Three color features
can be determined that establish an effective set of features for segmenting color
images, as represented in equation set (1.43).

$$I_1 = \frac{R + G + B}{3}$$

$$I_2 = R - G$$

$$I_3 = \frac{2G - R - B}{2} \quad (1.43)$$
In an opponent color space, hue could be coded in a circular format ranging
from blue, green, yellow, red, and black to white. Saturation is defined as distance
from the hue circle, making hue and saturation specifiable in color classifications.
Thus, although the opponent representation is often regarded as just a linear
transformation of RGB space, it is much more appropriate for perceived image
modeling than RGB is. Figure 1.8 shows the opponent color stage of the human
visual system.

Fig. 1.8 The opponent color stage of human visual system

1.1.8 Hue-Max-Min-Difference Color Space

The Hue-Max-Min-Difference (HMMD) color space is close to a perceptually
uniform color space. The double cone shape of this color space is shown in
Fig. 1.9.
The components "Max," "Min," and "Diff" are computed
between RGB and HMMD as per the transform shown in Eq. (1.44).

$$\text{Max} = \max(R, G, B)$$
$$\text{Min} = \min(R, G, B)$$
$$\text{Diff} = \text{Max} - \text{Min} \quad (1.44)$$

Although four components are thus defined in the HMMD color space, one more
component, Sum, can be described as defined in Eq. (1.45).

Fig. 1.9 HMMD double cone color space

$$\text{Sum} = \frac{\text{Max} + \text{Min}}{2} \quad (1.45)$$

So in this color space, a total of five components are defined. A subset of three
components, {H, Max, Min} or {H, Diff, Sum}, is however enough to construct the
HMMD color space and define a color level. The definitions of the individual
components are distinct. Hue (H = [0°, 360°]) has the same property as hue in the
HSV color space. Max (= [0, 1]) determines how much black there is, providing the
flavor of shadow or blackness. Max has the same RGB-related transformation as
Value in HSV, but the valid HMMD subspace differs, so the definition is distinct
from Value. Min determines the amount of white, providing the quality of whiteness
or tint, and its range is [0, 1]. Diff determines how close a color is to the pure colors,
conveying a tonal or vivid quality, in the range [0, 1]. It has a property similar to
Saturation in HSV, but again the valid subspace is different. Lastly, the brightness of
the color is determined by Sum, with range [0, 1].
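A Python sketch of the HMMD components follows. Max, Min, Diff, and Sum implement Eqs. (1.44) and (1.45) directly; the hexagonal hue formula is an assumption borrowed from HSV, which the text says shares the same hue property.

```python
def rgb_to_hmmd(r, g, b):
    """RGB in [0, 1] -> (Hue, Max, Min, Diff, Sum)."""
    mx, mn = max(r, g, b), min(r, g, b)
    diff = mx - mn                      # Eq. (1.44)
    total = (mx + mn) / 2.0             # Sum, Eq. (1.45)
    if diff == 0.0:
        h = 0.0                         # achromatic: hue undefined
    elif mx == r:                       # HSV-style hexagonal hue (assumed)
        h = (60.0 * (g - b) / diff) % 360.0
    elif mx == g:
        h = 60.0 * (b - r) / diff + 120.0
    else:
        h = 60.0 * (r - g) / diff + 240.0
    return h, mx, mn, diff, total
```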

1.1.9 New Trends

The variety of available color models creates problems in applications. Since most
of them are designed to perform well in a particular application, their efficiency
deteriorates quickly under varying working circumstances [28]. Thus, the various
(mainly device-dependent) color spaces need to be merged into a single standard
model. The variations between device-independent spaces and monitor RGB spaces,
like the HSV and CIE L*a*b* spaces, create difficulties in applications such as face
recognition and multimedia database recognition, mainly because of the complication
of the procedures required to support the transformations from and to the
device-dependent color spaces.
An innovative standardized image model based on a colorimetric RGB (sRGB)
model is used to solve such issues and satisfy the requirements of network-centered
applications and WWW-based image processing systems. The objective of the new
color model is to complement present color management approaches
by offering a straightforward yet reliable and cost-effective image processing technique
for operating systems, device drivers, and the Web, utilizing a simple and
robust device-independent color definition.
As most computer monitors are similar in their key color characteristics, and the RGB
model is the most appropriate color model for the devices that form a modern computer-based
imaging system, the colorimetric RGB model seems the best candidate for
this standardized color model. Two factors are of primary significance in defining
a colorimetric space: (1) the viewing environment parameters with their
dependencies on the human visual system and (2) the standard colorimetric
definitions and conversions.
The descriptions of the viewing environment comprise all the transformations
necessary to support conversions between the standard and target viewing environments.
The colorimetric definitions, on the other hand, include the transformations required
to convert between the new sRGB space and the CIE-XYZ color space. Under the
parameters of the reference viewing environment, the sRGB tristimulus values are
computed from the CIE-XYZ values as shown in Eq. (1.46).
$$\begin{bmatrix} R_{sRGB} \\ G_{sRGB} \\ B_{sRGB} \end{bmatrix} = \begin{bmatrix} 3.2410 & -1.5374 & -0.4986 \\ -0.9692 & 1.8760 & 0.0416 \\ 0.0556 & -0.2040 & 1.0570 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \quad (1.46)$$

Negative sRGB tristimulus values and sRGB values greater than 1 are not
supported in practical image processing schemes and are typically removed using
some form of clipping. The linear tristimulus values are converted into non-linear
sR′G′B′ values as follows (equation sets (1.47) and (1.48)).

If $R_{sRGB}, G_{sRGB}, B_{sRGB} \le 0.0034$, then

$$sR' = 12.92\,R_{sRGB}$$
$$sG' = 12.92\,G_{sRGB}$$
$$sB' = 12.92\,B_{sRGB} \quad (1.47)$$

If $R_{sRGB}, G_{sRGB}, B_{sRGB} > 0.0034$, then

$$sR' = 1.055\,R_{sRGB}^{1.0/2.4} - 0.055$$
$$sG' = 1.055\,G_{sRGB}^{1.0/2.4} - 0.055$$
$$sB' = 1.055\,B_{sRGB}^{1.0/2.4} - 0.055 \quad (1.48)$$

With a black digital count of 0 and a white digital count of 255 for 24-bit coding,
as depicted in equation set (1.49), the non-linear sR′G′B′ values are then transformed
into digital values.

$$sR_e = 255\,sR'$$
$$sG_e = 255\,sG'$$
$$sB_e = 255\,sB' \quad (1.49)$$

The backward conversion is defined using equation sets (1.50) and (1.51): the digital
values are first mapped back to the range [0, 1], and the non-linearity is then inverted.

$$sR' = sR_e / 255, \quad sG' = sG_e / 255, \quad sB' = sB_e / 255 \quad (1.50)$$

$$R_{sRGB} = \begin{cases} sR'/12.92, & \text{if } sR' \le 0.03928 \\ \left(\dfrac{sR' + 0.055}{1.055}\right)^{2.4}, & \text{otherwise} \end{cases}$$

$$G_{sRGB} = \begin{cases} sG'/12.92, & \text{if } sG' \le 0.03928 \\ \left(\dfrac{sG' + 0.055}{1.055}\right)^{2.4}, & \text{otherwise} \end{cases}$$

$$B_{sRGB} = \begin{cases} sB'/12.92, & \text{if } sB' \le 0.03928 \\ \left(\dfrac{sB' + 0.055}{1.055}\right)^{2.4}, & \text{otherwise} \end{cases} \quad (1.51)$$

with

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} R_{sRGB} \\ G_{sRGB} \\ B_{sRGB} \end{bmatrix} \quad (1.52)$$
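The forward pipeline of Eqs. (1.46)–(1.49), including the clipping step mentioned above, can be sketched with NumPy as follows (the function name is ours):

```python
import numpy as np

XYZ_TO_SRGB = np.array([[ 3.2410, -1.5374, -0.4986],   # Eq. (1.46)
                        [-0.9692,  1.8760,  0.0416],
                        [ 0.0556, -0.2040,  1.0570]])

def xyz_to_srgb8(xyz):
    """CIE XYZ -> 8-bit sRGB digital counts, Eqs. (1.46)-(1.49)."""
    lin = XYZ_TO_SRGB @ np.asarray(xyz, dtype=float)
    lin = np.clip(lin, 0.0, 1.0)              # discard out-of-range values
    nonlin = np.where(lin <= 0.0034,          # Eqs. (1.47)-(1.48)
                      12.92 * lin,
                      1.055 * lin ** (1.0 / 2.4) - 0.055)
    return np.round(255.0 * nonlin).astype(np.uint8)   # Eq. (1.49)
```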

Adding a new standardized color model that supports Web-based imaging
technologies, device drivers, monitors, and printers, while complementing current
color management support, can help both manufacturers and users by providing a
clear path to an enhanced color management scheme.

1.2 Color Quantization

Color quantization is the method by which the number of colors used to depict an
image is reduced. A quantization scheme is determined by the color space used and
by the segmentation (i.e., splitting up) of that color space [29]. Implementing a
conventional quantization scheme on a color space splits each axis into a set
of segments. When the axes are divided into p, q, and r parts, the number of colors (n)
used to depict an image will be n = p × q × r. Quantization of a color space into n
colors is often referred to as an n-bin quantization scheme. Figure 1.10 shows the
impact of color image quantization. The segmentation of each axis depends on the
color space used. In the next section, different color spaces and their quantization
methods are described.

Fig. 1.10 Color quantization: a the original image using 256³ colors, b quantized in 64 bins,
c quantized in 8 bins, using RGB color space

1.2.1 Scalar or Uniform Color Quantization

In scalar quantization, each component of the color space is handled separately.
Each axis is split into segments of equal size. The planes that pass through the
division points perpendicular to the axis then define regions in the color space. The
number of these regions depends on the color space division scheme used. One feasible
strategy is to split the red and green axes into 8 sections each and the blue axis into 4,
producing 256 regions. Another option is to divide the red and blue axes into 6 sections
each and the green into 7, which produces 252 regions. Each of these regions generates a
color for the color map.
After dividing the color space, each of the original colors is mapped to the region
in which it falls. The representative color of each region is then the average of all
the colors mapped to that region. Since each region represents an entry in the color
map, the same method used for mapping the original colors to a region can be repeated
to map the original colors to the color map. While this algorithm is fast and easy
to implement, it does not produce excellent results. Often no colors are mapped to
some parts of the color space, causing colormap entries to be wasted.
If the axes are divided on a logarithmic scale rather than a linear one, this algorithm
can also be implemented in a non-uniform way. This generates slightly improved
outcomes, since the human eye cannot distinguish differences among dark colors
and among bright colors equally well.
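A minimal NumPy sketch of the uniform scheme follows. As a simplification, it maps every value to the center of its segment rather than to the average of the colors that actually fall in each region; the default 3-3-2 bit split realizes the 8 × 8 × 4 strategy described above.

```python
import numpy as np

def uniform_quantize(img, bits=(3, 3, 2)):
    """Uniform scalar quantization of an 8-bit RGB image (H x W x 3)."""
    out = np.empty_like(img)
    for ch, b in enumerate(bits):
        step = 256 >> b                  # size of each axis segment
        # replace every value by the centre of the segment it falls in
        out[..., ch] = (img[..., ch] // step) * step + step // 2
    return out
```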

1.2.2 Vector Color Quantization

This method does not treat the individual components separately; every color pixel is
treated as a single entity. The goal is to discover a set of m representative color
vectors and substitute each original color with one of the new color vectors, where m is
generally pre-determined. The resulting deviation should be minimal. This method
turns out to be an expensive global optimization problem, so the following approaches
are used to compute local optima.

Popularity Algorithm
The popularity algorithms are another type of uniform quantization. Rather than splitting
the color space into 256 regions, these algorithms split the color model into much
smaller areas, and therefore many more regions are created. One feasible implementation
is to split the space into regions of size 4 × 4 × 4 (yielding 262,144 regions). The
original colors are again mapped to the region in which they fall, and the representative
color of each region is the average of the colors mapped to it. The color map is chosen
by taking the representative colors of the 256 most popular regions (the regions
with the most colors mapped to them). If a non-empty region is not chosen for the color
map, its index in the color map (the index assigned to colors mapping to that region)
is that of the color map element nearest to its representative color (in Euclidean distance).
These algorithms are also simple to implement and produce better outcomes than
the uniform quantization algorithm. However, they take longer to run and may have
a considerably higher storage demand, depending on the region size. Depending on
the image characteristics, they may still not generate a good outcome. This can
be said of all uniform sub-division schemes, since the technique of dividing the
color space uses no information about the image.

Median-Cut Algorithm
This algorithm is considered the classical technique for color quantization and is
implemented in several applications. First, the color histogram of the original image
is computed. Then the RGB color space is split recursively until the number of boxes
equals the anticipated number of representative colors. At every step of the recursion,
the box with the maximum number of pixels is split at the median of the longest of its
three axes, so that half the pixels are left in each sub-box. In the final step, the
mean color of the pixels in each sub-box is calculated and utilized as the representative
color (each contained pixel is substituted by this mean). Since these algorithms utilize
image information in splitting the color space, this class of algorithms consistently
generates good outcomes while consuming memory and time complexity no worse
than the popularity algorithms.
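The recursion can be sketched compactly in Python. This version operates directly on the pixel array rather than on a histogram and assumes far more pixels than requested colors; both are simplifications of the classical scheme.

```python
import numpy as np

def median_cut(pixels, n_colors=256):
    """pixels: (N, 3) array of RGB values -> (n_colors, 3) color map."""
    boxes = [pixels]
    while len(boxes) < n_colors:
        # split the box that currently holds the most pixels
        box = boxes.pop(int(np.argmax([len(b) for b in boxes])))
        axis = int(np.argmax(box.max(axis=0) - box.min(axis=0)))  # longest axis
        box = box[box[:, axis].argsort()]
        mid = len(box) // 2               # median: half the pixels per sub-box
        boxes += [box[:mid], box[mid:]]
    # the representative color of each final box is its mean color
    return np.array([b.mean(axis=0) for b in boxes])
```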

1.2.3 Octree and Other Color Quantization Algorithms

The octree algorithm's concept is to read the image in sequentially. Each color is
then inserted into an octree of depth 8 (a distinct color is represented by each leaf
at depth 8). A threshold of K (in this case K = 256) leaves is placed on the tree.
Inserting a color into the tree can lead to two results.
1. If there are fewer than K leaves, the color is filtered down the tree until either
it hits a leaf node with a representative color associated with it or it reaches the
leaf node depicting its distinct color.
2. If there are more than K leaves in the tree, some groups of leaves in the tree must be
merged together (their representative colors averaged) and the new representative
color is stored in the parent.

In the literature, there are two possible criteria for selecting the leaves to be merged.
1. First, select reducible nodes with the largest depth in the tree. They depict colors
nearest to each other.
2. If there is more than one group of leaves at the largest depth, the algorithm could
A. Merge the leaves that denote the least number of pixels. This helps to keep
the error small.
B. Reduce the leaves that signify the most pixels. In this case, big areas will be
uniformly filled in a slightly wrong color while thorough shadings are preserved.

Once the whole image has been handled this way, the color map comprises the
representative colors of the leaves and clusters in the tree. The color map index is
then stored at each leaf, and the image quantization method simply filters each color
down the tree until a leaf is hit.
Other approaches use 10% of the pixels, chosen at random, as representative
colors. Statistical and clustering methods, such as the k-means method, can also be
used to choose the representative colors.

1.3 Pseudocolor Image Processing

The purpose of pseudocolor processing is to color a grayscale image by assigning
different colors to different intensity ranges of a gray-level image. Pseudocolor is
also called false color, as the colors are not originally present in the grayscale image
[30]. The human eye can interpret only about two dozen gray shades in a grayscale
image, but nearly 1000 variations of color in a color image. So, given a grayscale
image, converting it to a color image by pseudocolor processing makes the
interpretation of different intensities much more convenient than in the ordinary
grayscale image. Pseudocoloring can be done by the intensity slicing method.
Suppose there are L intensity values in a grayscale image I(x, y), ranging over
0, …, L − 1, where l0 represents black (I(x, y) = 0) and lL−1 represents white
(I(x, y) = L − 1). Suppose P planes perpendicular to the intensity axis, where
0 < P < L − 1, are placed at the intensity levels l1, l2, …, lP. The P planes divide
the intensities into P + 1 intervals. The color Ck assigned to the gray-level intensity
at position (x, y) is given by f(x, y) = Ck if I(x, y) ∈ Dk, where Dk is the
intensity range between lk and lk+1. So, the P planes divide the intensities into
P + 1 intervals denoted D1, D2, …, DP+1.

By using this concept, the gray-level intensity range can be divided into intervals,
and a particular color can be assigned to each interval. In this way a grayscale image
can be colored, and this procedure is known as pseudocoloring. Figure 1.11 shows
the pseudocolored version of a grayscale image. With the pseudocolor image, we can
visualize different intensities of the image regions with different colors, even where
the grayscale image is almost flat. So, using the pseudocolor image, the intensities of
the image are much more interpretable and distinguishable than in the grayscale image.
In the case of an RGB image, colors are added to the R, G, and B channels separately,
and the combination of the R, G, and B channels gives the interpretation of the
pseudocolor image.
Grayscale to color image conversion can be done by the transformations shown
in Fig. 1.12.
Fig. 1.11 a Grayscale image, b pseudocolor image

Fig. 1.12 Grayscale to color transformation

In Fig. 1.12, I(x, y) is the grayscale image, which is transformed by three
different transformations: a RED transformation, a GREEN transformation, and a BLUE
transformation [6]. The RED, GREEN, and BLUE transformations give the red,
green, and blue plane outputs of the input grayscale image, denoted
I_R(x, y), I_G(x, y), and I_B(x, y).

Fig. 1.13 a Grayscale image, b pseudocolor transformed image

When these three planes are combined and displayed on a color display system,
the result is known as a pseudocolor image. For example, Eq. (1.53) gives the
transformation functions used to generate the color image, and Fig. 1.13 shows the
resulting color transformation of a grayscale image.

$$I_R(x, y) = I(x, y)$$
$$I_G(x, y) = 0.33 \times I(x, y)$$
$$I_B(x, y) = 0.11 \times I(x, y) \quad (1.53)$$

In this example, to convert the grayscale image to color, the exact intensities of
the grayscale image are copied to the red plane, while degraded versions of the
intensities of the original grayscale image are used in the green and blue planes. The
combination of the red, green, and blue planes is shown in Fig. 1.13.
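A Python sketch of the Eq. (1.53) transformations follows; the function name is ours and the input is assumed to be a 2-D uint8 grayscale array.

```python
import numpy as np

def pseudocolor(gray):
    """Apply the Eq. (1.53) plane transformations to a grayscale image."""
    red = gray.astype(float)       # identity: exact intensities
    green = 0.33 * gray            # degraded copy for the green plane
    blue = 0.11 * gray             # degraded copy for the blue plane
    return np.stack([red, green, blue], axis=-1).astype(np.uint8)
```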

1.4 Full-Color Image Processing

There are two major categories of full-color image processing as mentioned below:
1. Process each component of the image separately and then generate a composite
processed color image. Each component can be processed utilizing grayscale
processing methods.
2. Work with color pixels directly by considering each pixel as a vector.

$$\mathbf{c} = \begin{bmatrix} c_R \\ c_G \\ c_B \end{bmatrix} = \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1.54)$$

Since each pixel is a function of coordinates (x, y), Eq. (1.54) can be expressed
as shown in Eq. (1.55).
$$\mathbf{c}(x, y) = \begin{bmatrix} c_R(x, y) \\ c_G(x, y) \\ c_B(x, y) \end{bmatrix} = \begin{bmatrix} R(x, y) \\ G(x, y) \\ B(x, y) \end{bmatrix} \quad (1.55)$$

where each vector component is a spatial variable in x and y.
The two approaches may or may not generate equivalent outcomes. For the
per-component approach to be valid, the procedure used must apply to both scalars
and vectors, and the processing of each component of the vector must be independent
of the other components. Neighborhood processing is an example where these two
methods can generate dissimilar outcomes: averaging the images separately in the
individual planes and averaging the vectors can produce unalike results. In theory,
any conversion can be done in any color space; in practice, some conversions are
better suited to specific color spaces, and the cost of color space transformations
must be taken into account.

1.5 Summary

The need for image color in the field of image retrieval is discussed in this chapter.
Color image processing includes pseudocolor and full-color (true-color) processing.
The purpose of pseudocolor processing is to color a grayscale image by assigning
different colors to different intensity ranges of a gray-level image. In the case of an
RGB image, colors are added to the R, G, and B channels separately, and the
combination of the R, G, and B channels gives the interpretation of the pseudocolor
image. With the pseudocolor image, we can visualize different intensities of the
image regions with different colors, even where the grayscale image is almost flat;
intensities are thus much more interpretable and distinguishable than in the grayscale
image. In a full-color image, the actual colors of the image are considered. In such
images, the colors can be specified using different color models like RGB (linear and
non-linear), HSI, HSV, CMY, CMYK, CIE L*a*b*, YUV, YIQ, Munsell, HMMD,
Opponent, etc. Algorithms for the conversion from one color space to another are
also given in this chapter, along with different color quantization techniques such as
scalar or uniform quantization, vector quantization, and octree quantization.

References

1. Rama Varior R, Wang G, Lu J, Liu T (2016) Learning invariant color features for person
reidentification. IEEE Trans Image Process 25(7):3395–3410. https://doi.org/10.1109/tip.2016.
2531280
2. Benitez-Quiroz F, Srinivasan R, Martinez AM, Discriminant functional learning of color
features for the recognition of facial action units and their intensities. IEEE Trans Pattern
Anal Mach Intell. https://doi.org/10.1109/tpami.2018.2868952
3. Tonmoy TH, Hanif MA, Rahman HA, Khandaker N, Hossain I (2016) Error reduction in arsenic
detection through color spectrum analysis. In: 2016 19th international conference on computer
and information technology (ICCIT). Dhaka, pp 343–350. https://doi.org/10.1109/iccitechn.
2016.7860221
4. Dey N, Ashour AS, Hassanien AE (2017) Feature detectors and descriptors generations with
numerous images and video applications: a recap. In: Feature detectors and motion detection
in video processing, pp 36–65. https://doi.org/10.4018/978-1-5225-1025-3.ch003
5. Wang C, Li Z, Dey N, Li Z, Ashour AS, Fong SJ, Shi F (2018) Histogram of oriented gradient
based plantar pressure image feature extraction and classification employing fuzzy support
vector machine. J Med Imaging Health Inform 8(4):842–854. https://doi.org/10.1166/jmihi.
2018.2310
6. Pi JK, Yang J, Zhong Q, Wu MB, Yang HC, Schwartzkopf M, Roth SV, Muller-Buschbaum
P, Xu ZK (2019) Dual-layer nanofilms via mussel-inspiration and silication for non-iridescent
structural color spectrum in flexible displays. ACS Appl Nano Mater. https://doi.org/10.1021/
acsanm.9b00909
7. Devlin RC, Khorasaninejad M, Chen WT, Oh J, Capasso F (2016) Broadband high-efficiency
dielectric metasurfaces for the visible spectrum. Proc Natl Acad Sci 113(38):10473–10478.
https://doi.org/10.1073/pnas.1611740113
8. Morley CV, Fortney JJ, Marley MS, Zahnle K, Line M, Kempton E, Lewis N, Cahoy K (2015)
Thermal emission and reflected light spectra of super Earths with flat transmission spectra.
Astrophys J 815(2):110
9. Perlman I (2016) Absorption, light, spectra of for visual pigments. Encyclopedia of Ophthal-
mology, pp 1–2. https://doi.org/10.1007/978-3-642-35951-4_1036-1
10. Wang S et al (2015) Micro-expression recognition using color spaces. IEEE Trans Image
Process 24(12):6034–6047. https://doi.org/10.1109/TIP.2015.2496314
11. Fedenko VS, Shemet SA, Landi M (2017) UV–vis spectroscopy and colorimetric models for
detecting anthocyanin-metal complexes in plants: an overview of in vitro and in vivo techniques.
J Plant Physiol 212:13–28. https://doi.org/10.1016/j.jplph.2017.02.001
12. Cyriac P, Bertalmio M, Kane D, Vazquez-Corral J (2015) A tone mapping operator based
on neural and psychophysical models of visual perception. In: Human vision and electronic
imaging. International Society for Optics and Photonics, vol 9394, p 93941I. https://doi.org/
10.1117/12.2081212
13. Gao S, Yang K, Li C, Li Y (2015) Color constancy using double-opponency. IEEE Trans Pattern
Anal Mach Intell 37(10):1973–1985. https://doi.org/10.1109/tpami.2015.2396053
14. Ganasala P, Kumar V, Prasad AD (2016) Performance evaluation of color models in the fusion
of functional and anatomical images. J Med Syst 40(5):122. https://doi.org/10.1007/s10916-
016-0478-5
15. Ganesan P, Sathish BS, Vasanth K, Sivakumar VG, Vadivel M, Ravi CN (2019) A compre-
hensive review of the impact of color space on image segmentation. In: 2019 5th international
conference on advanced computing & communication systems (ICACCS). Coimbatore, India,
pp 962–967. https://doi.org/10.1109/icaccs.2019.8728392
16. Ganesan P, Sajiv G (2017) User oriented color space for satellite image segmentation using
fuzzy based techniques. In: 2017 international conference on innovations in information,
embedded and communication systems (ICIIECS). Coimbatore, pp 1–6. https://doi.org/10.
1109/iciiecs.2017.8275977
17. Zhang Z, Huang W, Li W, Tian J (2017) Illumination-based and device-independent imaging
model and spectral response functions. In: 2017 IEEE 7th annual international conference on
cyber technology in automation, control, and intelligent systems (CYBER). Honolulu, HI, pp
47–52. https://doi.org/10.1109/cyber.2017.8446589
18. Vaishnavi D, Subashini TS (2015) Robust and invisible image watermarking in RGB color
space using SVD. Procedia Comput Sci 46:1770–1777. https://doi.org/10.1016/j.procs.2015.
02.130
19. Kolkur S, Kalbande D, Shimpi P, Bapat C, Jatakia J (2017) Human skin detection using RGB,
HSV and YCbCr color models. arXiv preprint arXiv:1708.02694
20. Bao X, Song W, Liu S (2017) Research on color space conversion model from CMYK to CIE-
LAB based on GRNN. In: Pacific-Rim symposium on image and video technology. Springer,
Cham, pp 252–261. https://doi.org/10.1007/978-3-319-75786-5_21
21. Shaik KB, Ganesan P, Kalist V, Sathish BS, Jenitha JMM (2015) Comparative study of skin color
detection and segmentation in HSV and YCbCr color space. Procedia Comput Sci 57:41–48.
https://doi.org/10.1016/j.procs.2015.07.362
22. Saravanan G, Yamuna G, Nandhini S (2016) Real time implementation of RGB to
HSV/HSI/HSL and its reverse color space models. In: 2016 international conference on commu-
nication and signal processing (ICCSP). Melmaruvathur, pp 0462–0466. https://doi.org/10.
1109/iccsp.2016.7754179
23. Ma J, Fan X, Yang SX, Zhang X, Zhu X (2018) Contrast limited adaptive histogram
equalization-based fusion in YIQ and HSI color spaces for underwater image enhancement. Int
J Pattern Recognit Artif Intell 32(07):1854018. https://doi.org/10.1142/S0218001418540186
24. Prema CE, Vinsley SS, Suresh S (2016) Multi feature analysis of smoke in YUV color space
for early forest fire detection. Fire Technol 52(5):1319–1342. https://doi.org/10.1007/s10694-
016-0580-8
25. del Mar Pérez M, Ghinea R, Rivas MJ, Yebra A, Ionescu AM, Paravina RD, Herrera LJ (2016)
Development of a customized whiteness index for dentistry based on CIELAB color space.
Dental Mater 32(3):461–467. https://doi.org/10.1016/j.dental.2015.12.008
26. Paramei GV, Griber YA, Mylonas D (2018) An online color naming experiment in Russian using
Munsell color samples. Color Res Appl 43(3):358–374. https://doi.org/10.1002/col.22190
27. Gong J, Guo J (2016) Image copy-move forgery detection using SURF in opponent color space.
Trans Tianjin Univ 22(2):151–157. https://doi.org/10.1007/s12209-016-2705-z
28. Sağ T, Çunkaş M (2015) Color image segmentation based on multiobjective artificial bee colony
optimization. Appl Soft Comput 34:389–401. https://doi.org/10.1016/j.asoc.2015.05.016
29. Valenzuela G, Celebi ME, Schaefer G (2018) Color quantization using Coreset sampling. In:
2018 IEEE international conference on systems, man, and cybernetics (SMC). Miyazaki, Japan,
pp 2096–2101. https://doi.org/10.1109/smc.2018.00361
30. Jiang N, Wu W, Wang L, Zhao N (2015) Quantum image pseudocolor coding based on the
density-stratified method. Quantum Inf Process 14(5):1735–1755. https://doi.org/10.1007/s11128-015-0986-0
