
sensors

Article
Analysis of the Possibilities of Tire-Defect Inspection Based on
Unsupervised Learning and Deep Learning
Ivan Kuric 1, Jaromír Klarák 1,*, Milan Sága 1, Miroslav Císar 1, Adrián Hajdučík 1 and Dariusz Wiecek 2

1 Department of Automation and Production Systems, Faculty of Mechanical Engineering, University of Zilina,
010 26 Zilina, Slovakia; [email protected] (I.K.); [email protected] (M.S.);
[email protected] (M.C.); [email protected] (A.H.)
2 Faculty of Mechanical Engineering and Computer Science, ATH–University of Bielsko Biala,
43-309 Bielsko-Biała, Poland; [email protected]
* Correspondence: [email protected]

Abstract: At present, inspection systems process visual data captured by cameras, with deep learning
approaches applied to detect defects. Defect detection results usually have an accuracy higher
than 94%. Real-life applications, however, are not very common. In this paper, we describe the
development of a tire inspection system for the tire industry. We provide methods for processing
tire sidewall data obtained from a camera and a laser sensor. The captured data comprise visual
and geometric data characterizing the tire surface, providing a real representation of the captured
tire sidewall. We use an unfolding process, that is, a polar transform, to further process the camera-
obtained data. The principles and automation of the designed polar transform, based on polynomial
regression (i.e., supervised learning), are presented. Based on the data from the laser sensor, the
detection of abnormalities is performed using an unsupervised clustering method, followed by
the classification of defects using the VGG-16 neural network. The inspection system aims to
detect trained and untrained abnormalities, namely defects, as opposed to using only supervised
learning methods.

Keywords: tire inspection; deep learning; unsupervised learning; polynomial regression; laser sensor;
defect detection; polar transform

1. Introduction
In the mass production of tires, which is characterized by a large number of manufactured
items, it is very difficult to conduct the final quality inspection, considering the visual and
qualitative aspects of the tires before they are placed on the market. Qualitative aspects focus
on materials, geometry, tire appearance, and final function, while the visual aspects of tires are
mainly formal (but necessary) features, such as annotations, barcodes, or other features necessary
to identify the product. Visual content without defects is representative of a high-quality product.
To ensure high quality in the mass production of tires, it is necessary to obtain data from the
manufacturing process and to digitize the final quality inspection before the product leaves the
factory. The current trend is to apply processes from Industry 4.0, focused on automation, machine
learning, sensory systems, digitization within manufacturing processes, data visualization, etc. [1].
Most applications of machine learning are based on integrating supervised methods, such as
convolutional neural networks (CNN), deep learning, and regression, with unsupervised methods,
such as clustering algorithms. A combination of final tire inspection and monitoring of the
technological processes during production makes it possible to capture any damaged product by
classifying its defects and can even identify the possible origins of the defect. This paper is focused
on capturing defects during the final inspection of tires. The final inspection of tires is defined
based on the complex manufacturing processes in the rubber industry.


The tire is considered the object of interest, which is compounded from various materials and has
a rugged surface including letters, symbols, and so on, which makes it suitable for evaluating the
possibility of using an inspection system. From the view of quality assurance, there are many
requirements, due to the high demands during vehicle operations and safety of the vehicle operator
and crew. Inspection tasks and requirements have mainly been described in terms of processing
3D data obtained from laser sensors into visual content using pattern recognition [2,3]. A closer
description of tire defects has been described in a Ph.D. thesis [4] conducted by the Department of
Automation and Production Systems at the University of Zilina. There are defined statements that
categorize tire defects in terms of their origin during the manufacturing processes. A catalog
describing possible tire defects is generally available to employees at the inspection stand in the
factory. To simplify categorizing the most frequent defects, we defined six basic categories of
defects (Figure 1):
• Impurity with the same material as the tire material (CH01);
• Impurity with a different material from the tire material (CH02);
• Material damaged by temperature and pressure (CH03);
• Crack (CH04);
• Mechanical damage to integrity (CH05); and
• Etched material (CH06).

Figure 1. Samples of the most frequent defects [4].

The defects can be captured by a camera, such as those manufactured by the Balluff company.
These images, with a resolution of 12 Mpx, were integrated into the experimental inspection stand
described in [2] and used in the paper “Design of laser scanners data processing and their use in
a visual inspection system,” which will be presented at the ICIE2020 conference (2021). In this
thesis, we demonstrate the capturing of defined defects through the use of MATLAB 2020a software
(MathWorks company), for which a convolutional neural network (CNN) was trained using a
transfer learning method based on the AlexNet network [5]. The achieved defect detection accuracy
was in the range from 85.15% (CH04) to 99.34% (CH03).
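For illustration, the transfer-learning setup summarized above can be reproduced in a few lines of
code. The paper itself used MATLAB 2020a; the sketch below is an equivalent in PyTorch, and the
data-set folder, loader settings, and hyperparameters are assumptions rather than the authors'
actual configuration.

# Illustrative sketch (PyTorch) of transfer learning on AlexNet for six defect classes.
# Paths, loaders, and hyperparameters are assumptions for demonstration only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 6  # CH01-CH06 defect categories

# Load AlexNet pretrained on ImageNet and replace the final classifier layer.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Hypothetical data set of defect image patches arranged in class sub-folders.
preprocess = transforms.Compose([
    transforms.Resize((227, 227)),
    transforms.Grayscale(num_output_channels=3),  # camera images are monochrome
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("defect_patches/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

The classifier head is replaced for the six defect categories, while the convolutional features
pretrained on ImageNet are fine-tuned together with it.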
Similar work regarding an inspection system for the tire industry has been described in [6], where
the inspection system was constructed using image processing techniques, and the principle of
processing data was described as obtaining 3D images, converting them into 2D images, and
conducting analysis through two different approaches. The first approach combined discrete
Fourier transform and K-means clustering, while the second approach utilized artificial neural
networks. Discrete Fourier transform was used to detect the defects in 3D images by inferring
patterns with spatial domains. K-means clustering separated the pixels into clusters, where
complications occurred when defining the specific number of clusters to obtain relevant data. The
second approach, in particular, used LSTM neural networks. It is possible to use a CNN [7];
however, due to the necessity of defining a large learning data set, it is difficult to perform this
task. Therefore, it is impractical to adapt a CNN to every item occurring in a wide range of products.
LSTM neural networks work in real-time on data from the inspected object. In the commercial
field, the Tekna Automazione e Controllo company specializes in the detection of defects
that occur on the tire sidewall; however, their publicly available information does not
allow for the identification of the algorithms used to perform the inspection. In general,
visual inspection of the surface is mainly based on the use of 3D-captured surfaces by a
triangulation principle, where laser sensors are used as final devices; for example, the
Keyence laser sensor–laser profilometer mentioned in [8–10], or a camera vision system
supported by a laser beam [11–13].
Techniques using the visual data obtained from cameras and devices with laser beams
are currently widely used [14], including those utilizing CNNs (supervised learning)
or clustering methods (unsupervised learning). As mentioned above [15], the use of
convolutional neural networks can be complicated, especially when there is a wide range
of items to search for on the captured products. However, in the case of mass production of uniform
items, they can be deployed profitably: one simply creates the data set and then trains the neural
network to perform product inspection. Furthermore, there can be many
complications when detecting defects. In [16], it was mentioned that the change in color
scale from color to grayscale due to defects includes many different colors, which can be
exploited to perform defect detection tasks in a shorter time. In [17], the authors described
the use of a CNN for wafer defect detection, describing it as a feasible alternative to manual
inspection, but there were still misclassifications. Another similar work is [18], where three
models were used for wafer defect detection. The lowest detection accuracy was 94.63%
for WDD-Net when detecting mechanical damage, while the highest accuracy was 100%,
which occurred in many of the cases mentioned in the paper. In the category of mechanical
damage, it is necessary to manage significant amounts of data. Another drawback is the need to
manually label defects to prepare training data for a supervised model. In [19], the
use of a CNN for aircraft maintenance by defect detection was described. The authors
described complications when collecting a data set for each defect that occurs in aircraft
maintenance, to train the model. Other papers have generally described using a CNN or
R-CNN to classify defects with accuracy higher than 90% [20–24].
In summary, CNNs show potential for defect detection. The associated results are
good, but these systems were mainly in experimental use, not in practical deployment.
Based on experiments, the use of such systems is feasible; however, improvement or re-
designing the logic of inspection systems based on CNN methods is necessary. This is
mainly due to the nature of supervised learning, as mentioned in [18], where a large amount
of data must be captured and labeled, which is a difficult task, and the resulting model detects only
the trained defects or those which are very similar. Defects in real-life conditions are not simple to
categorize with a fixed number of shapes, as the differences in defects are additionally
manifested in terms of their position, lighting conditions, and so on. The described systems
work properly when defects in the analyzed data appear very similar to those in the
training data set. In the case of different defects, there is a very low probability of detection.
One study [25] has described methods for defect detection in steel products using various
technologies applied to visual data captured by a CCD camera. The results obtained
through using more methods led to the statement: “the fusion of multiple technologies
is an expected trend,” which can be interpreted as a necessity to develop systems that
combine more technologies. As such, prospective defect detection methods should be
able to conduct unsupervised learning within the inspection system. For instance, defect
detection on wafers using the K-means method has been described [26]. Non-industrial
areas describing methods combining supervised and unsupervised approaches include
threat detection workflow [27]. In the medical area, research has been published regarding
the use of unsupervised methods for gathering biological data to perform tumor detection
with MRI data [28] or clustering and testing based on IRIS [29].
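As a simple illustration of the kind of unsupervised step referred to above, K-means can group
image patches of an inspected surface by basic intensity statistics and flag sparse clusters as
candidate abnormalities. The sketch below uses scikit-learn; the image path, patch size, and
cluster count are illustrative assumptions, not values from the cited studies.

# Minimal sketch of unsupervised grouping with K-means (scikit-learn); the
# image path, patch size, and cluster count are illustrative assumptions.
import numpy as np
import cv2
from sklearn.cluster import KMeans

img = cv2.imread("surface.png", cv2.IMREAD_GRAYSCALE)

# Describe each 16x16 patch by its mean and standard deviation of intensity.
h, w = img.shape
ps = 16
patches = img[: h - h % ps, : w - w % ps].reshape(h // ps, ps, w // ps, ps)
feats = np.stack([patches.mean(axis=(1, 3)).ravel(),
                  patches.std(axis=(1, 3)).ravel()], axis=1)

# Cluster the patches; small clusters are candidates for abnormal surface regions.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(feats)
counts = np.bincount(labels)
rare = np.argmin(counts)
print("patches assigned to the rarest cluster:", np.sum(labels == rare))

The rarest cluster is only a candidate region; as in the workflow proposed in this paper, a
supervised classifier is still needed afterwards to decide whether such a region is a defect.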
A possible solution is developing a comparative system, where the recognition of
defects would be replaced by a comparison of the inspected data with standard reference
data without defects. This involves conditioning through use in controlled conditions,
where strictly defined attributes of the inspected object are considered. This condition is
fulfilled in industrial conditions, specifically in mass production. In this case, every product
should be identical. In this manner, abnormalities can be found, being characterizable as
objects of interest without information regarding what they represent. Then, in the following step,
CNN-based methods can be used to classify the abnormalities as defects or as acceptable
elements. This paper focuses on describing a comparative method and data fusion to
improve inspection performance, where geometrical data are necessary to identify the
geometry of the captured object and to determine geometric abnormalities, while the
camera system is used to obtain the visual characteristics of the captured surface. At present,
systems that are able to recognize pre-trained patterns or letters in the visual content of data
are available; for example, the OCR function (MATLAB), or specific functions in various
Python libraries. For this purpose, “normalized” visual content is necessary. The term
“normalized” involves the managed modification of data in order to orient it similarly to
standard text in an affine manner. The standard procedure involves unfolding the captured
rotary object using a polar transform—a process that may be automated. The objects
captured by the camera may decrease in quality when using an inappropriate hardware
setup, which can affect the accuracy of the inspection system. Based on these considerations
and previous works, we defined the following hypotheses:

Hypothesis 1 (H1). Is it possible to automate the polar transform procedure for the captured part
of the tire sidewall?

Hypothesis 2 (H2). Is it possible to compensate for inaccuracies in visual data captured by camera
due to the use of inappropriately set up hardware?

Hypothesis 3 (H3). Are deep learning techniques applicable to defect detection in visual data from
laser sensors?

Hypothesis 4 (H4). What is an appropriate design and application for a hybrid system integrating
unsupervised learning for defect detection on tire sidewalls?

2. Materials and Methods


According to previous work [3,30,31], which has described methods for processing
point clouds captured by laser sensors from the Micro-Epsilon company, multiple data can
be modified and adapted, fusing them into one whole with the focus of obtaining better
results in a defect inspection system. Previous work has mainly been oriented to processing
data from laser sensors. Methods to transform geometrical data to 2D data, such as visual
content, have been described. Such transformations are feasible in many ways. In this
paper, the method described in [31] is considered.
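To make this data flow concrete, the sketch below rasterizes a laser-scanner point cloud into a 2D
grayscale height image so that camera-oriented algorithms can be reused. The input file and grid
resolution are assumptions; the cited works [3,31] describe the transformation actually used.

# Minimal sketch: rasterize a laser-scanner point cloud (x, y, z) into a 2D
# grayscale "height image" so that camera-oriented algorithms can be reused.
# The input array and grid resolution are assumptions for illustration.
import numpy as np

points = np.load("scan.npy")           # assumed shape (N, 3): x, y, z coordinates
x, y, z = points[:, 0], points[:, 1], points[:, 2]

cell = 0.05                             # assumed grid resolution
cols = np.floor((x - x.min()) / cell).astype(int)
rows = np.floor((y - y.min()) / cell).astype(int)

height = np.zeros((rows.max() + 1, cols.max() + 1), dtype=float)
height[rows, cols] = z                  # keep the last z value that falls in a cell

# Normalize to 8-bit grayscale for use with image-processing pipelines.
zmin, zmax = height.min(), height.max()
gray = ((height - zmin) / (zmax - zmin + 1e-9) * 255).astype(np.uint8)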

2.1. Camera Vision


Geometric data are insufficient when mapping the visual aspects of scanned objects.
Instead, conventional data obtained by cameras are considered optimal. In the market, stan-
dard cameras with a resolution from 1–12 Mpx (monochrome) or 1–5 Mpx (color) are avail-
able. For our purposes, 12 Mpx monochrome industrial cameras BVS CA-M4112Z00-35-000
from the Balluff company were used. The first test utilized pictures of the whole sidewall
of the tire, as shown in Figure 2. When adapting to pictures obtained from laser sensors, it
is necessary to perform a polar transform of the sidewall and unfold it from an annulus
shape to a rectangular one. In the Python language, there exist a few suitable frameworks
to perform this task. The processing consists of two operations: The first is the detection of
the center of the sidewall, and the second is the unfolding. Center detection was performed
using circle Hough transform (CHT), implemented in the OpenCV library as an optimized
algorithm [32,33], see Figure 2 (right). The outputs were circles characterizing the borders
of the sidewall and, thus, defining the annulus of the sidewall. The second step is unfolding
the annulus using the polarTransform library. The function is performed using the four
parameters defined in the previous step of circle detection. An example of the unfolding result is
displayed in Figure 3. After analysis of Figure 3, there is a sinusoidal character to the picture. The
reason for this is the eccentricity of the detected circle characterizing the tire, as shown in Figure 2.
The fusion of linear pictures, that is, from the laser sensor and the unfolded tire sidewall, is
possible, but there is no added value; in fact, fusion could decrease the detail and quality of the
final picture. The solutions rely on a centered picture before the polar transform using parameters
of the center base annulus. The second attribute is resolution in the columns of the unfolded
picture. The size of the unfolded picture is 8224 × 791 pixels. The pixel count in the column is 791,
in comparison to that of the laser sensor, with a maximum of 640 pixels in the column, as the laser
sensor captured a smaller area. Therefore, when using a laser sensor scanning 1280 points, the
unfolded picture is insufficient. The solution relies on capturing a smaller area of the tire or
capturing the whole sidewall of the tire using a camera with a higher resolution.
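A minimal sketch of this two-step procedure is given below: the circle Hough transform locates the
sidewall center, and a polar unwrap produces the rectangular image. OpenCV's cv2.warpPolar is
used here in place of the polarTransform library, and the Hough parameters, radii, and output size
(tied to the 8224 × 791 example above) are assumptions.

# Sketch of the two-step unfolding: circle Hough transform for the sidewall
# centre, then a polar unwrap. cv2.warpPolar stands in for the polarTransform
# library; all parameter values are illustrative assumptions.
import cv2
import numpy as np

img = cv2.imread("sidewall.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.medianBlur(img, 5)

# Step 1: detect the circles bounding the sidewall annulus.
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=2,
                           minDist=1000, param1=120, param2=60,
                           minRadius=800, maxRadius=1600)
assert circles is not None, "no circle found - tune the Hough parameters"
cx, cy, r_outer = np.round(circles[0, 0]).astype(int)

# Step 2: unfold the annulus; in the warpPolar output, rows index the angle and
# columns the radius, so the image is rotated to put the angle along the x-axis.
unfolded = cv2.warpPolar(img, (791, 8224), (float(cx), float(cy)),
                         float(r_outer), cv2.WARP_POLAR_LINEAR)
unfolded = cv2.rotate(unfolded, cv2.ROTATE_90_CLOCKWISE)
cv2.imwrite("unfolded.png", unfolded)

In practice the center returned by the Hough transform has to be refined, since the eccentricity
discussed above otherwise produces the sinusoidal distortion visible in Figure 3.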

Figure 2. The sidewall of the tire.

Figure 3. Unfolded sidewall image.

To increase the resolution, we captured smaller areas of the sidewall, as displayed in Figure 4,
with a size of 4112 × 3008 pixels. This method is suitable for capturing the sidewall during rotation
during the process of scanning. In the next step, we performed the unfolding, the result of which
is fused with the associated pictures from the laser sensor. Unfolding can be carried out by any
feasible conventional method, such as those in the OpenCV library or the polarTransform library.
This process can offer sufficient results, but there is no calibration process. To improve the quality
of visual data, an appropriate system design to modify data should be conducted when setting up
the capture hardware. In the experimental stand, attributes such as the position of the camera,
lights, and so on were not rigorously managed. This resulted in the observed eccentricity and the
angle having a non-linear radius. The angle ε indicates the angle created by the noncollinearity
between the normal to the sidewall and the normal to the camera, resulting in a difference between
the radii of the out-circle and in-circle of the annulus, as shown in Figure 5. In addition, eccentricity
indicates the picture is not aligned with the axis of the sidewall. According to the above, it is
necessary to define parameters to compensate for such attributes (i.e., angle, eccentricity, and axis
centering of the picture), as shown in Figures 4 and 5.

Figure 4. Part of the sidewall.

Figure 5. Illustration of eccentricity and angle when capturing part of the sidewall.

Preparation for capturing the sidewall area in the picture to unfold involves defining the border
edges, as displayed in Figure 4, where the purple pixels indicate the outer radius and the red pixels
indicate the inner radius, while the green pixels characterize the values halfway between the red
and purple pixels. According to the red and purple pixels, it is possible to define the parameters
for unfolding.

The first step is to define the principle of the unfolding process, where pixels are chosen in a
specific way and transformed to a certain row (red) or column (blue), as illustrated in Figure 6.
The principle shown by the blue color is based on pixels defined by the purple and red pixels, and
the coordinates of the pixels fit the general circle equation. The second principle, shown by the red
color, is characterized as an arc approximating a sinusoid function. In both principles, it is
necessary to define the parameters to compensate for axis eccentricity and the angles between
normal vectors.

Figure 6. Possible unfolding of the sidewall area.
The blue principle is based on the linear choice of pixels from the picture bounded by the annulus
part of the tire. The transformation is based on lines of pixels in angular iteration (ϕi). The general
equation for the circle is defined in the Cartesian coordinate system. The pixel positions are defined
by the coordinate pixels xi and yi. The blue line in Figure 6 is definable in the polar coordinate
system. Angle iteration (ϕi) of the polar coordinate system characterizes the resolution in the
x-axis, while ρ characterizes the resolution in the y-axis and the basic pixel coordination for ϕ0.
Computing the angle iteration is a solvable method, as displayed in Figure 7. The main point
involves defining the zero-point, that is, the start point of the circle in a polar coordination system.
The zero-point is characterized by coordinates xm and ym, defined as the highest point. When
adapted to the pixel coordinates in the picture, there exists a function to find a minimal position
in the x-axis in the pixel edge polyline:

xm = xi, ym = yi = min(Y). (1)

Figure 7. Scheme for the definition of ϕi.

The next step allows for the definition of other features, such as the center of the partial annulus,
defined by coordinates xt and yt, expressed in Equation (2), and angle iterations. The coordinates
xi and yi represent the pixel edge polyline for every iteration of ϕi, based on the main
characteristics of circles and analyzing the points in the circle, according to its center point. The
angle iteration is described in Equation (3), where the differences ∆x and ∆y are computed as
differences from the zero-point and, from them, the angle iteration ϕi is computed, as depicted in
Figure 7.

xt = xm, yt = ym + ρ, (2)

tan ϕi = ∆y/∆x, ∆x = xm − xi, ∆y = ym − yi. (3)

The red principle, displayed in Figure 6, is described as choosing part of the circle
with a specific radius, where pixel coordinates are defined in the x-axis with a specific
radius. For every part of the circle (arc), the radius must be defined.
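The zero-point and angle-iteration computation in Equations (1)–(3) can be sketched in a few lines
of NumPy; the edge-polyline file and the radius value below are assumptions used only to make
the example self-contained.

# Sketch of Equations (1)-(3): find the zero-point of the boundary polyline and
# the angle iteration phi_i for every column. The polyline file and the radius
# value are hypothetical inputs.
import numpy as np

edge_x = np.arange(4112, dtype=float)        # column indices of the picture
edge_y = np.load("edge_polyline.npy")        # assumed: detected y-coordinate per column

# Equation (1): the zero-point is the highest point of the arc (minimal y).
i0 = int(np.argmin(edge_y))
x_m, y_m = edge_x[i0], edge_y[i0]

# Equation (2): centre of the partial annulus for a given radius rho.
rho = 10000.0                                # declared or estimated radius
x_t, y_t = x_m, y_m + rho

# Equation (3): angle iteration from the differences to the zero-point.
dx = x_m - edge_x
dy = y_m - edge_y
phi = np.arctan2(dy, dx)                     # sign-safe form of tan(phi_i) = dy/dx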
In the process of carrying out the polar transformation, it is necessary to choose an
appropriate method. The blue method is, upon the first view, simpler than the red method;
however, the blue method has complications involving defining the basic positions of pixels
in the picture. In every case, it is necessary to adapt the system of Cartesian coordinates to
pictures with the positions of pixels being closer to reality than that with polar coordinates.
The following solution uses the red method. To gain relevant data, it is necessary to modify
the data, as real data are noisy due to the captured geometric relief of the inspected objects.
To address this, we conducted the smoothing of ϕi , as defined in Equation (4). Modification
of the result obtained from this equation was suitable, through the application of minimal
and maximal functions, not to the first and last items in the angle iterations, but to find
minimal and maximum values for the specific number of ϕi . The coordinates in the x-axis
are defined, from the picture, as every column of the picture matrix. It is necessary to
calculate the pixel coordinates in the y-axis. Calculation of the radius is defined in Equation
(5) for every iteration, where the number of iterations (n) defines the size of the picture, that
is, 4112 × 3008 (x,y). Calculating the average radius (ρ) in the numerator of Equation (5)
was carried out at every iteration, where the final was a mean value of all radius iterations.
In the radius analysis, the result had a large variance, due to the calculation style, as the
input values were not continuous, but discrete values of pixel coordinates. This feature
generates rounding, as demonstrated by the high variation of the results.

ϕi ≅ min(ϕi) + ((max(ϕi) − min(ϕi)) / num(ϕi)) · i, (4)

ρ ≅ (1/n) ∑i=1..n [∆x / sin(tan⁻¹(∆y/∆x))], n = 4112. (5)
As a result, it is necessary to adapt to boundary features, such as edges and objects of
interest. These edges are highlighted, in Figure 4, by red and purple curves, while the green
curve represents the mean values of the red and purple curves. Calculating the radius
values is inevitable when considering the mentioned boundary features. The expression of
pixel positions in the y-axis can be performed in two ways: from the general circle equation
or for the equation based on the principle of the polar transform. Modified equations for
the captured picture are expressed in Figure 8, with Equation (6) in green and Equation (7)
in blue, where ygi represents the y-coordinates when computing the green polyline and ybi
represents the y-coordinates of blue polyline:
ygi = −√(ρ² − (xi − xm)²) + ym + ρ, (6)

ybi = ρ(1 − cos ϕi) + ym. (7)


The radius result for the red curve displayed in Figure 4, according to Equation (5),
is 10,000, which represents the average value of all calculated radii at every iteration. A
comparison of the calculated radii is illustrated in Figure 8 (left). There were significant
errors in the computed green (6) and blue (7) curves, compared with the red curve, in
the calculated radius values, as shown on the left side of Figure 8. Solving for the radius
is possible through empirical or analytical methods, where empirical methods are based
on a manually entered radius value with the main goal of approximating the computed
curves to red curves as closely as possible. Modified radius values are demonstrated in
Figure 8 (right), along with the declared radius values.
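The empirical procedure just described (entering a radius manually and checking how closely
Equations (6) and (7) follow the detected boundary) can be sketched as follows; the synthetic
polyline stands in for the real detected edge, so the numbers are illustrative only.

# Sketch of the empirical radius check: for manually entered radius values, build
# the green (Eq. 6) and blue (Eq. 7) approximations and compare them with the
# boundary polyline. The polyline here is a synthetic stand-in.
import numpy as np

edge_x = np.arange(4112, dtype=float)
edge_y = 1995.0 + 0.00005 * (edge_x - 2056.0) ** 2     # synthetic boundary polyline
i0 = int(np.argmin(edge_y))
x_m, y_m = edge_x[i0], edge_y[i0]

def curve_deviation(rho):
    """Max deviation of the green (Eq. 6) and blue (Eq. 7) curves from the polyline."""
    # Equation (6): Cartesian circle arc (green principle)
    under_root = np.clip(rho ** 2 - (edge_x - x_m) ** 2, 0.0, None)
    y_g = -np.sqrt(under_root) + y_m + rho
    # Equation (3) followed by Equation (7): polar form adapted to pixel coordinates
    phi = np.arctan(np.abs(y_m - edge_y) / np.maximum(np.abs(x_m - edge_x), 1e-9))
    y_b = rho * (1.0 - np.cos(phi)) + y_m
    return np.max(np.abs(y_g - edge_y)), np.max(np.abs(y_b - edge_y))

# Manually entered radius values, as in the empirical approach (Figure 8, right).
for rho in (5000.0, 10000.0, 15000.0):
    dev_g, dev_b = curve_deviation(rho)
    print(f"rho = {rho:.0f}: green deviation = {dev_g:.1f} px, blue deviation = {dev_b:.1f} px")

The analytical method described below removes the need for this manual search by deriving the
radius directly from a polynomial fit of the boundary curve.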
Figure 8. Circle part approximations, with radius (Ro) values.
The differences in radius value were due to the computed curve coordinates, where
both methods
The applied in
The differences
differences were
in based
radius
radius on were
value
value different
were duecoordinate
due to
to the
the computed
computedsystems and coordinates,
curve
curve equations; Equa-
coordinates, where
where
tion
both(6) is based
methods on the
applied Cartesian
were based coordinate
on different system,
coordinate while Equation
systems
both methods applied were based on different coordinate systems and equations; Equa- and (7) corresponds
equations; Equation to a(6)
polar
is coordinate
based on the system
Cartesian adapted to
coordinate the pixel
system,coordinates
while in
Equation the
tion (6) is based on the Cartesian coordinate system, while Equation (7) corresponds to picture.
(7) On
corresponds the left
toside
a of a
polar
Figure 9,
coordinate apparent deviations in y-axis can be seen, even after
polar coordinate system adapted to the pixel coordinates in the picture. On the left side of
system adapted to the pixel coordinates in the approximation
picture. On the left of
sidethe of
curves
Figureto9,
Figure 9,the target curve.
apparent
apparent The shapes
deviations
deviations in
in the of the can
the y-axis
y-axis curves
can be were even
be seen,
seen, displaced
after in
even after the x-axis, caused
approximation
approximation of
of the
the
bycurves
the method
curves to
to the for gaining
the target
target curve.the
curve. The𝑥 shapes
The value,of
shapes ofdue
the to
the the uneven
curves
curves were characterin
were displaced
displaced inofthe
thex-axis,
the curve.caused
x-axis, The
caused
manual
by the corrections
by the method
method for forwere carried
gaining
gaining the x𝑥m value,
the out by displacing
value, due
dueto the
tothe 𝑥uneven
theuneven value for (9), the
character
character of renumbering
of the
thecurve.
curve. The
The
𝜑
ofmanualby corrections
a value in were
(6), and carried out
renumbering by of 𝜑
displacing by athe x
value
manual corrections were carried out by displacing the 𝑥 value for (9), the renumbering
m value
in (4). for
The (9), the
results renumbering
after such
of
of ϕ𝜑i bybyaavalue
replacements in
in (6),
are shown
value (6), and renumbering
in Figure
and 9 (right),
renumbering of ϕ𝜑i by
ofwith by aavalue
the valueinin(4).
displacement (4). The results
values
The forafter
results such
specific
after such
replacements
curves. The are shown
method in Figure
described above 9 (right),
was with the
similar to displacement
the standard values for
method, andspecific
it was curves.
nec-
replacements are shown in Figure 9 (right), with the displacement values for specific
The method
essary to declare described
parameters above aswas
input similartotoperform
the standard method, and it was necessary to
curves. The method described abovedatawas similar the polar
to the standardtransform,
method,mainlyand it accord-
was nec-
declare
ing to the parameters as input data to perform the polar transform, mainly according to the
essary totype
declareof picture
parameters (as illustrated
as input data in Figure
to perform4), namely
the polar part of the annulus
transform, mainlyof any
accord-
type ofobject.
rotating picture (as illustrated in Figure 4), namely part of the annulus of any rotating object.
ing to the type of picture (as illustrated in Figure 4), namely part of the annulus of any
rotating object.

Figure
Figure9.9.Application
Applicationofof
displacement
displacementcorrections.
corrections.
Figure 9. Application of displacement corrections.
The second mentioned method is an analytical method. The analytical method was
constructed using boundary conditions defined by the red and the purple curves displayed
in Figure 4. According to these curves, it is possible to perform a polar transform on
part of an annulus. Knowledge from an empirical method was applied in the design of
the analytical method, with important parameters as displayed in Figure 10, where the
main parameters were ρ0 (the red curve) and ρj (the purple curve). Calculating radius
parameters was possible, but it required mathematical expressions of the boundary curves.
Sensors 2021, 21, 7073 10 of 25

Sensors 2021, 21, 7073 The second mentioned method is an analytical method. The analytical method 10 ofwas
24
constructed using boundary conditions defined by the red and the purple curves dis-
played in Figure 4. According to these curves, it is possible to perform a polar transform
on part of an annulus. Knowledge from an empirical method was applied in the design of
The approximation of curves was carried out using polynomial regression. The type of
the analytical method, with important parameters as displayed in Figure 10, where the
circle definition, selected as the most suitable, was second-degree polynomial regression,
main parameters were 𝜌 (the red curve) and 𝜌 (the purple curve). Calculating radius
as shown in Equation (8), as it was considered closest to the circle equation.
parameters was possible, but it required mathematical expressions of the boundary
curves. The approximation of curves was
− 1 carried out using polynomial regression. The
yri = X T X X T xi = axi 2 + bxi + c. (8)
type of circle definition, selected as the most suitable, was second-degree polynomial re-
gression, as shown in Equation (8), as it was considered closest to the circle equation.

Figure10.
Figure 10.Basic
Basicprinciple
principleand
andparameters
parametersofofthe
theproposed
proposedanalytical
analyticalmethod.
method.

This regression applies to 𝑦both curves.


= (𝑋 𝑋) 𝑋From 𝑥 = 𝑎𝑥the polynomial
+ 𝑏𝑥 + 𝑐. expression of curves,
(8)
we defined dx0 using the coordinates xm according to Equation (9) and ym according to
Equation
This(10). This mathematical
regression applies to both regression managed
curves. From to center the
the polynomial computed
expression curves we
of curves, to
the boundary curves and no other displacement corrections were
defined 𝑑𝑥 using the coordinates 𝑥 according to Equation (9) and 𝑦 according to needed, such as in the
empirical
Equation method,
(10). Thiswhere the correction
mathematical operation
regression managedis required.
to centerThe
thecorrection
computedof radiusto
curves
values
the boundary curves and no other displacement corrections were needed, such astointhe
was performed using the result of Equation (8), adapting the radius value the
approximate curveswhere
empirical method, obtainedtheby Equations
correction (6)–(8). This
operation modification
is required. was carried
The correction of out to
radius
determine the differences in the y-axis between curves described in Equation
values was performed using the result of Equation (8), adapting the radius value to the (11) for the
green method, and Equation (13) for the blue method. Based on these equations, it was
approximate curves obtained by Equations (6)–(8). This modification was carried out to
possible to express the exact value of the radius to approximate a regression polynomial
determine the differences in the y-axis between curves described in Equation (11) for the
equation with a defined boundary condition. The equations for the blue and the green
green method, and Equation (13) for the blue method. Based on these equations, it was
methods expressed as Equations (12) and (14), were built on the condition of zero difference
possible to express the exact value of the radius to approximate a regression polynomial
between the computed radius values and the polynomials. The application of the analytical
equation with a defined boundary condition. The equations for the blue and the green
method for the red curve is displayed in Figure 11, where it is compared to basic radius
methods expressed as Equations (12) and (14), were built on the condition of zero differ-
values derived from Equation (4) (the left side) and the computed radius values including
ence between the computed radius values and the polynomials. The application of the
the displacement parameters in the y-axis such as ∆dygi (green), ∆dybi (blue) and the
analytical method for the red curve is displayed in Figure 11, where it is compared to basic
equation of polynomial regression (yri ). The same principle, used for the purple curve,
radius values derived from Equation (4) (the left side) and the computed radius values
is displayed in Figure 12, with the same information as in Figure 11 on the right side.
including the displacement parameters in the y-axis such as ∆d𝑦 (green), ∆d𝑦 (blue)
Figure 5 illustrates a possible camera placement, with respect to the captured object, which
andcause
can the equation of polynomial
deformation or distortionregression (Yri). Theimage
of the captured same byprinciple,
the camerausedsystem.
for the These
purple
curve, is displayed in Figure 12, with the same information as in Figure
placement parameters emerged mainly in the green method. Displacement in the x-axis, 11 on the right
as
side. Figure 5 illustrates a possible camera placement, with respect to the captured
indicated in Figures 4 and 5, is visualized as the displacement values in Figure 11 (right). object,
which
The canαcause
angle deformation
resulted in differentor displacement
distortion of the captured
values for theimage
greenby the camera
method system.
in Figure 11
These
(122 placement
pixels) parameters
and Figure 12 (149emerged
pixels). Inmainly in the
the blue greenitmethod.
method, Displacement
is possible in the
to see that there
was a displacement difference, but it was lost by rounding to 4 decimal places. The angle ε
manifested in radius values (Ro green), where the difference (1901) between the radii of the
x-axis,as
x-axis, asindicated
indicatedin inFigures
Figures44and and5,5,isisvisualized
visualizedas asthe
thedisplacement
displacementvalues
valuesininFigure
Figure
11(right).
11 (right).The
Theangle
angleααresulted
resultedin indifferent
differentdisplacement
displacementvalues
valuesfor
forthe
thegreen
greenmethod
methodin in
Figure 11 (122 pixels) and Figure 12 (149 pixels). In the blue method, it is
Figure 11 (122 pixels) and Figure 12 (149 pixels). In the blue method, it is possible to see possible to see
Sensors 2021, 21, 7073 11 of 24
thatthere
that therewas
wasaadisplacement
displacementdifference,
difference,but butititwas
waslost
lostby
byrounding
roundingto to44decimal
decimalplaces.
places.
The angleεεmanifested
Theangle manifestedin inradius
radiusvalues
values(Ro (Rogreen),
green),where
wherethe
thedifference
difference(1901)
(1901)between
between
theradii
the radiiof
ofthe
thered
red(5467)
(5467)andandpurple
purple(3566)
(3566)curves
curvesshould
shouldhave
havethe
thesame
samedifference
differenceasasfor
for
the 𝑦𝑦 (5467)
thered values
values (1995)
(1995) asas for
for the
the red
red (337)
(337) and
and the
the purple
purple (2332)
(2332) curves.
curves.
and purple (3566) curves should have the same difference as for the ym values
(1995) as for the red (337) and the purple (2332) curves.
𝜕𝑦
𝜕𝑦
𝑥𝑥 == 𝑥𝑥, , 𝑤ℎ𝑒𝑟𝑒
𝑤ℎ𝑒𝑟𝑒 ==00 (9)
(9)
𝜕𝑥
xm𝜕𝑥
∂y
= xi , where ri = 0 (9)
∂xi
𝜕𝑦
𝜕𝑦
𝑦𝑦 == (𝑥 ))
(𝑥 (10)
(10)
𝜕𝑥
𝜕𝑥
∂y
ym = ri ( xm ) (10)
∂xi
∆d𝑦 == 𝑦𝑦 −−𝑦𝑦 ==𝑎𝑥
∆d𝑦 𝑎𝑥 ++𝑏𝑥 𝑏𝑥 ++𝑐𝑐−− −− 𝜌𝜌 −−(𝑥 (𝑥 −−𝑥𝑥 )) ++ 𝑦𝑦 ++ 𝜌𝜌 , , 𝑖𝜖〈1,4112〉
𝑖𝜖〈1,4112〉
 q 
(11)
(11)
∆dy gi = yri − y gi = axi 2 + bxi + c − − ρ2 − ( xi − xm )2 + ym + ρ , i ∈ h1, 4112i (11)
𝑎𝑥 ++𝑏𝑥
𝑎𝑥 (𝑥 −−𝑥𝑥 ))
𝑏𝑥 ++𝑐𝑐−−𝑦𝑦 2++(𝑥
ifif∆d𝑦
∆d𝑦 ==0,0, 𝜌𝜌== axi + bxi + c − ym + ( xi − xm )2 (12)
(12)
if ∆dy gi2(𝑎𝑥
2(𝑎𝑥
= 0, ρ++=𝑏𝑥 ++𝑐𝑐−−𝑦𝑦 2))
𝑏𝑥 (12)
2( axi + bxi + c − ym )
∆d𝑦 == Yri
∆d𝑦 Yri−−𝑦𝑦 ∆dy
==𝑎𝑥
𝑎𝑥 + ri𝑏𝑥−++
bi =+y𝑏𝑥 𝑐𝑐−
ybi = (𝜌(1
−(𝜌(1
ax 2 −cos
i −
cos
+ bx i+φφc))− 1),−𝑖𝜖〈1,4112〉
++(𝑦ρ𝑦(), 𝑖𝜖〈1,4112〉
cos ϕi ) + ym ), i ∈ h1, 4112i (13)
(13)
(13)

axi 2 + bxi + c − ym
∆d𝑦 ==00 𝜌𝜌=
= if ∆dybi = 0, ρ = (14)
ifif∆d𝑦 (1 − cos ϕi ) (14)
(14)
(( ))

Figure
Figure 11.11.
11.
Figure Comparison
Comparison ofofcomputed
Comparison computed
of radius
radius
computed and
and
radius modified
modified
and radius
radius
modified values
values
radius byanalytical
by
values analytical
by method
method
analytical forfor
for
method thethe
the red
red curve.
curve.
red curve.

Figure
Figure
Figure 12. 12. Radius
12.Radius
Radius values
values
values modified
modified
modified by
thethe
bythe
by analytical
analytical
analytical method
method
method forfor
for thethe
the purple
purple
purple curve.
curve.
curve.

The methods described above allowed us to obtain the boundary parameters necessary to perform the polar transformation when considering imperfectly placed camera hardware. Based on the above-mentioned aspects, the analytical method based on the green principle defined in the Cartesian coordinate system was chosen. Figure 10 defines another variable, Xk, as a matrix of pixel coordinates in the x-axis. The index k represents the row in the picture, for which the start is defined as ym in the red curve (y0) and the end as ym in the purple curve (yj). Under this principle, we computed various k-variables, including the radius ρk (15), ∆dxk (16), nk (17), Xk (18), and Yk (19) in the kth iteration. Together, they represent a gradual transition from the red to the purple parameters. The parameter nk defines the number of pixels in a row, that is, how many pixels will be used from the basic picture. Excluding the nk value can lead to deformation in the transformed picture. In Figure 10, we illustrate a shorter purple curve than the red curve. The parameter nk, representing the shortening of the transformed curve, was computed based on the boundary parameters of the radius values, as defined in Equation (17). This value, along with the ∆dxk value for displacement correction, expresses Xk as the indices of pixels in the x-axis. Coordinates in the y-axis, Yk, were computed by Equation (19). The results of using the above-mentioned green analytical method in the polar transformation are displayed in Figure 13, where the original image is shown on the left side and the transformed picture is on the right side. The inequalities in the transformed picture are negligible, and the final picture appears to be linear.

ρk = ρ0 − (ρj − ρ0)·k/(yj − y0), k ∈ ⟨0, yj − y0⟩ (15)

∆dxk = dx0 − (dxj − dx0)·k/(yj − y0), k ∈ ⟨0, yj − y0⟩ (16)

nk = n(1 − ((ρ0 − ρk)/ρ0)·(k/(yj − y0))), k ∈ ⟨0, yj − y0⟩ (17)

Xk = ⟨n/2 + ∆dxk − nk/2, n/2 + ∆dxk + nk/2⟩, k ∈ ⟨0, yj − y0⟩, n = 4112 (18)

Yk = −√(ρk² − (Xk − xm)²) + ym + ρk (19)

Figure 13. Result of polar transformation by analytical method.
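To make the row-wise remapping concrete, the following minimal Python/NumPy sketch evaluates Equations (15)–(19) for one row k and samples the corresponding pixels from the source picture. The function and variable names are illustrative, and the boundary parameters (rho_0, rho_j, dx_0, dx_j, y_0, y_j, x_m, y_m) are assumed to come from the polynomial-regression step described above; they are not values taken from our implementation.

import numpy as np

def unfold_row(img, k, rho_0, rho_j, dx_0, dx_j, y_0, y_j, x_m, y_m, n=4112):
    # Gradual transition from the red-curve to the purple-curve parameters, Eqs. (15)-(17)
    span = y_j - y_0
    rho_k = rho_0 - (rho_j - rho_0) * k / span                  # Eq. (15): radius used for row k
    dx_k = dx_0 - (dx_j - dx_0) * k / span                      # Eq. (16): displacement correction
    n_k = int(n * (1 - (rho_0 - rho_k) / rho_0 * k / span))     # Eq. (17): number of pixels in the row
    # Eq. (18): x-indices of the source pixels, centred around n/2 and shifted by dx_k
    x_k = np.arange(n // 2 + dx_k - n_k // 2, n // 2 + dx_k + n_k // 2)
    # Eq. (19): y-coordinates of the arc with radius rho_k passing through the row
    y_k = -np.sqrt(np.maximum(rho_k**2 - (x_k - x_m)**2, 0.0)) + y_m + rho_k
    # Nearest-neighbour sampling of the source picture into one straight output row
    rows = np.clip(np.round(y_k).astype(int), 0, img.shape[0] - 1)
    cols = np.clip(np.round(x_k).astype(int), 0, img.shape[1] - 1)
    return img[rows, cols]

Iterating k from 0 to yj − y0 and stacking the returned rows produces the unfolded strip shown on the right side of Figure 13.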

2.2. Point Cloud from the Laser Sensor
Previously published works have described the principles of working with point clouds obtained by a laser sensor [34] to obtain visual content compatible with algorithms designed for camera-obtained imagery [3,31]. The further work described in this paper is based on the results of these works. The described procedures to generate visual content from point clouds used data consisting of 12 scans, with the shape of 25,000 × 640, representing a specific part of the tire sidewall area. It focuses on the connection of specific parts of the scan, in order to obtain a scan of the whole tire sidewall. The first principle is based on finding matching geometrical data between scans, as performed by matrix matching in a stepwise manner (design of laser scanner data processing and their use in the visual inspection system, ICIE2020). This matching was accurate, but the algorithm is computationally intensive. Such a procedure took approximately 10 min on an Intel Core i7-8700. Better results were obtained by considering the pattern recognition of visual content [2], where the convolution principle was used through the function MatchTemplate, integrated into the OpenCV library [35]. The task time was reduced to 3 s, including GUI procedures, using similar hardware as in the case of matrix matching. Further work has utilized the pattern recognition method for two main reasons: the speed of the procedure and the possibility to recognize pre-defined patterns [36–38]. For this purpose, a database of pre-defined patterns, composed of letters and other features regularly occurring on tire sidewalls (as mentioned in Table 1), was created. This database contains these data:
• Coordinates of the snipped pattern;
• Pattern from the grayscale image generated from the point cloud;
• Point cloud of the pattern;
• Positions in the string chain;
• Pattern from a color image converted from grayscale to color.
Table 1. Number of samples for specific scans.

Number of Scans    Number of Samples
Scan 1             107
Scan 2             126
Scan 4             91
This database was used to identify the basic patterns and basic objects identifiable on the tire surface, as illustrated in Figure 14. The main advantage is its fast application and low computational intensity when compared to a CNN. This procedure can save time and computational power, which is a necessary aspect of inspection tasks for mass production under factory conditions. According to the obtained data, it is possible to assume the position of other patterns or to correlate the positions of recognized patterns with pre-defined patterns, where irregularities between defined patterns and inspected areas indicate the possibility of an abnormality, defect, or misidentified object occurring.

Figure 14. Application of recognizing pre-defined patterns.
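As an illustration of how a pre-defined pattern from the database can be located in a picture generated from the point cloud, the following Python sketch uses cv2.matchTemplate from the OpenCV library; the function name, the correlation method, and the 0.8 threshold are illustrative assumptions rather than the exact values used in our system.

import cv2

def find_pattern(scan_image, template, threshold=0.8):
    # Normalized cross-correlation between the generated grayscale image and the stored pattern
    result = cv2.matchTemplate(scan_image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None                      # pattern not found: possible abnormality or misidentified object
    h, w = template.shape[:2]
    return max_loc[0], max_loc[1], w, h, max_val   # top-left corner, pattern size, and match confidence

The returned position can then be compared with the coordinates stored in the database; a missing match or a large offset flags the inspected area for further processing.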

The ability to use pattern recognition is conditioned by the homogeneity of the data, where any variance leads to an inability to recognize patterns; for instance, this method is not frequently used when analyzing visual content from a camera, as the variance in pictures containing the same objects is typically very high. The deployment of this method in the camera vision field relies on strict adherence to conditions during capturing, such as light conditions, position, and so on. The difference in our approach lies in the application of pictures generated from laser sensors, thus being based on scanned surfaces. In contrast, the variance in these data is low; it was computed by subtracting the matched area of the first scan from the second scan, under the conditions of normalized matrices and filtered values, mainly for missing data. In matched scans (e.g., Scan 1 and Scan 3 mentioned in [2]), the differences mainly ranged between −0.2 and 0.2 mm, as displayed in Figure 15. There were occurrences of larger absolute differences, almost up to 3 mm, where the number of these values corresponded with the number of differences up to 0.4 mm. Therefore, an absolute difference in the range between 0.4 and 3 mm likely indicates a defect occurring on the scanned surface.
Figure 15. Different values in matched areas of scans.
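A minimal sketch of the comparison behind Figure 15 could look as follows, assuming the matched areas of two scans are already normalized to the same grid; the names and the histogram binning are illustrative.

import numpy as np

def scan_differences(z_first, z_second):
    diff = z_second - z_first                      # matched area of the first scan subtracted from the second
    valid = diff[~np.isnan(diff)]                  # filter out missing values
    hist, edges = np.histogram(valid, bins=np.arange(-3.0, 3.05, 0.05))   # distribution of deviations in mm
    return valid, hist, edges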

Based on the above, it is possible to design an inspection system that is capable of detecting defects occurring on the scanned surfaces of the inspected objects. The principle is based on the definition of the correct data, denoted as the standard reference data, which represents correctly scanned objects without any defect or abnormality. Such a system is applicable in cases where the final product has a homogenous character. A particularly suitable industry for the deployment of such an inspection system is the production of printed circuit boards (PCBs) or similar objects. In the case of soft materials, such as polyurethane or rubber, it is more complicated, due to slight changes in the shapes of objects and their features. Despite non-ideal properties, we were able to use the comparative system to evaluate tires. The results are displayed in Figure 16, where three abnormalities were found. The abnormalities were all observed as deviations higher than 0.5 mm in absolute value. To process such data, it is necessary to use value clustering algorithms. Appropriate clustering algorithms are part of unsupervised learning methods in the field of artificial intelligence. The appropriate type of clustering depends on parameters, which can limit the applicability of a given method. One of these is of very high importance, especially in mass production: the time requirements. Another is the ability to separate different values into specific numbers that correspond to the number of defects occurring in the scanned object. In terms of time consumed, the best method is K-means [39,40]. Another possibility is the DBSCAN algorithm, which is a little slower (mainly when considering huge data sets) but can separate point clouds into clusters without defining the number of clusters first [41,42]. As displayed in Figure 16, this inspection system can detect abnormalities or defects. The figure shows three clusters, while more clusters were found in the data; however, they were too small and, therefore, irrelevant. Furthermore, the threshold for categorizing clusters as important or irrelevant depends on the expectation of the user of the system. A special case is Abnormality 4, an item occurring in both scans. Recognition of this abnormality was based on the way of scanning, where a geometrically diverse object is represented by small scan values but has large enough differences to have them categorized as abnormalities.

Figure 16. Detected abnormalities.
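A possible implementation of this clustering step, under the assumptions that the difference matrix is available in millimetres and that the eps and min_samples values are only illustrative, is sketched below using the DBSCAN implementation from scikit-learn.

import numpy as np
from sklearn.cluster import DBSCAN

def detect_abnormalities(diff_mm, threshold_mm=0.5, eps=3.0, min_samples=30):
    # Keep only the points whose absolute deviation from the reference data exceeds the threshold
    rows, cols = np.nonzero(np.abs(diff_mm) > threshold_mm)
    if rows.size == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.column_stack([rows, cols]))
    clusters = []
    for label in set(labels) - {-1}:               # label -1 marks noise points
        mask = labels == label
        clusters.append({
            "points": int(mask.sum()),
            "bbox": (int(rows[mask].min()), int(cols[mask].min()),
                     int(rows[mask].max()), int(cols[mask].max())),
        })
    return clusters                                # small clusters can still be discarded as irrelevant

The number of clusters does not have to be defined in advance, which is the property that makes DBSCAN suitable for detecting an unknown number of abnormalities.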
Figure 16 shows abnormalities in 2D generated from a point cloud (i.e., 3D data). Based on this, it is possible to visualize abnormalities in 3D, as shown in Figure 17, using the Open3D library [43]. The time necessary to perform clustering ranges from 1 to 3 s, depending on the number of differences larger than 0.5 mm, which, in Scan 2, was 74,507 different points. Points are shown in the colors green and red, where green indicates points within the tolerances regarding the standard reference data, while red highlights points that are out of the tolerance band. Furthermore, it is possible to see the real surface captured by the laser sensor. This feature is mainly interesting for personnel outside of the company, who are able to see the surface defects in real-time. The 3D data can offer more possibilities for analyzing defects, while the use of 2D data may complicate the understanding of the character of defects, as they may not be obvious. Only visual and geometric defects, such as missing or excess material, can be detected [3]. The analysis of defects and manufacturing processes can reveal the origin of the defect, leading to modification of the processes in such a way that the frequency and severity of defects due to the problems in the production process can be decreased. Such an implementation would also help to better manage product quality.
Figure 17. 3D visualization of detected abnormalities.
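The 3D view in Figure 17 can be reproduced with a few lines of Python using the Open3D library; the coloring rule below (green within tolerance, red outside) follows the description above, while the variable names are illustrative.

import numpy as np
import open3d as o3d

def show_tolerance_cloud(points_xyz, deviation_mm, tolerance_mm=0.5):
    colors = np.tile([0.0, 1.0, 0.0], (points_xyz.shape[0], 1))     # green: within tolerance
    colors[np.abs(deviation_mm) > tolerance_mm] = [1.0, 0.0, 0.0]   # red: out of the tolerance band
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(points_xyz)
    cloud.colors = o3d.utility.Vector3dVector(colors)
    o3d.visualization.draw_geometries([cloud])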
2.3. Fusion of Geometric Data and Pictures from the Camera

Sections 2.1 and 2.2 described the data captured from the tire sidewall using a camera (2.1) and by a laser sensor (2.2). Each of these methods has specific advantages and disadvantages. As such, it is possible to obtain a synergic effect by combining these data and, thus, by combining the advantages of the individual methods. The fusion of these data is shown in Figure 18, which compares individual and merged pictures (merged data). To merge data, it was necessary to normalize the data types and obtain the same shape. For this reason, we developed an analytical method of unfolding, where the main aim was to suppress and minimize the inaccuracies caused by the hardware depicted in Figure 5. In the case when we did not apply this method, blurred areas occurred in the merged data, mainly at edges, such as the borders of letters, symbols, and the tire tread. The blurring was caused by non-corresponding object positions, such as edges in different places in the visual and geometric data. To perform the merging process, it was necessary to unify the size of the fused data through resizing. Resizing the smaller data types and unifying the dimension to the larger image offered better results; otherwise, the resolution would be lost in the case of pictures with higher resolution. To fuse the data, it was necessary to define merging points, such as defined samples recognized in the captured data.
Figure 18. Visualization of merged data types.
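A simplified sketch of the merging step is shown below, assuming both pictures are single-channel arrays of the same data type and that the merging point is the upper-left corner of the same recognized pattern in each picture; the scaling and blending choices are illustrative, not those of our implementation.

import cv2
import numpy as np

def fuse_pictures(camera_img, laser_img, camera_anchor, laser_anchor):
    # Resize the smaller (laser-generated) picture up to the camera resolution to keep detail
    scale = camera_img.shape[1] / laser_img.shape[1]
    laser_resized = cv2.resize(laser_img, (camera_img.shape[1], int(laser_img.shape[0] * scale)))
    # Translate so that the merging points (matched pattern corners) coincide
    dx = camera_anchor[0] - int(laser_anchor[0] * scale)
    dy = camera_anchor[1] - int(laser_anchor[1] * scale)
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    laser_aligned = cv2.warpAffine(laser_resized, shift, (camera_img.shape[1], camera_img.shape[0]))
    # Equal-weight blend of the visual and geometric content
    return cv2.addWeighted(camera_img, 0.5, laser_aligned, 0.5, 0)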

2.4. Defect Detection by RCNN with VGG-16 Network
According to previous works [18,23,44], existing defect detection methods have mainly been based on deep learning (supervised learning) using specifically designed or pre-trained CNN architectures, such as AlexNet [45], Resnet-50 [46], or VGG-16 [18]. For this reason, evaluations have been performed using these methods on specific data, such as visual content generated from laser sensor data [2]. We chose to test the VGG-16 CNN architecture, based on results presented in [18,47,48]. Training was performed on 5000 samples, including two categories of defects: impurity with the same material as the tire material (CH01), and mechanical damage to integrity (CH05). These defects were defined as the simplest to detect, due to their size and visibility in the data. Training was performed using MATLAB software. The training options were set as follows: Stochastic Gradient Descent with momentum (sgdm), minibatch size = 16, max epochs = 5, initial learn rate = 0.000001, and execution environment = parallel. After training, the detector reached an accuracy of 93.75%. The detection was performed on 11 scans [2], where the 10 possible defects with the highest accuracies were detected. In this way, there were many incorrect detections. For instance, in Figure 19, we show the result of detection based on VGG-16 in SCAN 1, where four areas were detected to have a rubber impurity defect (CH1), but the areas were not bounded very well. The same results were observed for every scan. The other important parameter was the time to perform detection which, for one scan, ranged approximately from 100 to 125 s (i.e., SCAN 1, 116.258 s). The hardware used for training and detection was a GPU (Intel Core i7-8700, RAM 32 GB DDR4, NVIDIA GeForce RTX 2070). The size of the scan was 640 × 25,000 pixels (16 MPX). The results of this experiment appeared not to be very promising, given the long time required for detection and the incorrect detections while declaring high accuracy.
Figure 19. Defect detection by VGG-16 in SCAN 1.

2.5. Classification of Detected Abnormalities

Based on the results presented in Section 2.4, we decided to design a new VGG-16 model; however, not for performing the detection of defects, but, instead, performing the classification of defects from the data set. The model training was performed on 4000 samples, including two types of defects (CH1 and CH5). The validation data set contained 1000 samples. Training settings were as follows: Optimizer = ADAM, batch size = 96, and epochs = 20. Training was performed using KERAS. All samples were resized to 64 × 64 pixels. The process of training is displayed in Figure 20. The evaluation of the trained model is depicted in Figure 21, as a confusion matrix.
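A minimal Keras sketch of such a classifier is given below; the classification head, loss, and data-loading details are assumptions, while the input size (64 × 64), batch size (96), number of epochs (20), and the ADAM optimizer follow the settings listed above.

import tensorflow as tf

def build_vgg16_classifier(input_shape=(64, 64, 3), num_classes=2):
    base = tf.keras.applications.VGG16(weights=None, include_top=False, input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)   # label 0 = CH1, label 1 = CH5
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_vgg16_classifier()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), batch_size=96, epochs=20)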

Figure 20. Process of training VGG-16 for classification.

Figure 21. Confusion matrix for classification by VGG-16 network.
The trained model was used for the classification of abnormalities obtained from the process described in Section 2.2, where three abnormalities were detected (Figure 16). The difference between “abnormalities” and “defects” is that an abnormality is detected by an unsupervised method (DBSCAN) as something different, but without other information. In real conditions, an abnormality should be labeled as a specific defect. To define and classify recognized abnormalities, it is necessary to perform the classification of abnormalities; for example, by training a CNN. In this case, VGG-16 was used. The classification of abnormalities was performed, where Abnormality 7 and Abnormality 2 were classified as rubber impurity (CH1) (Figure 22), and Abnormality 4 was classified as mechanical damage to integrity (CH5). The accuracy of classification for Abnormality 7 as rubber impurity was 73.11% in the graph (Figure 22; left), corresponding to the label “0”. The time necessary for the classification of one item was approximately one second, using the same hardware as mentioned above.

Figure 22. Classification of Abnormality 7.

3. Results
In Section 2, we described five main methods. The first involved processing visual data captured by a camera (2.1 Camera vision). The second involved processing data captured by a laser sensor (2.2 Point cloud from laser sensor). The third involved fusing the data obtained from the above. The fourth involved the use of a deep learning method (i.e., VGG-16) for defect detection in visual data from the laser sensor as part of tire inspection. The final section described the application of VGG-16 for the classification of abnormalities obtained by the process described in Section 2.3. In the Introduction, we posed four hypotheses answered in the following:
Hypothesis 1 (H1). Yes, it is possible to automate the polar transform procedure of the captured partial tire sidewall using a system based on polynomial regression.

Hypothesis 2 (H2). Yes, it is possible to compensate for the inaccuracies in visual data captured by a camera, with a minimum of two boundary conditions, through the polynomial expression of the detected edges. The concept is illustrated in the Ro green values corresponding to the Cartesian coordinate system, where Ro green for the red curve was 5467, and for the purple curve was 3566. The positions of the curves were dx0 = 337, dxj = 2332 (Figure 10), where the difference in Ro values was 1901 and the difference in position of the curves was 1995, which represents compensation for the inappropriately set up hardware.

Hypothesis 3 (H3). In the Material and Methods section, we applied a deep learning method, that is, an R-CNN based on the VGG-16 network. The results for the visual data generated from the laser sensors showed little potential for application.
Hypothesis 4 (H4). We described a tire inspection system using unsupervised learning (in partic-
ular, the DBSCAN algorithm), which showed good potential for application. The main advantage
lies in detecting abnormalities, which can be further classified as defects or acceptable items.

An overview of Section 2 is provided in Figure 23, illustrating the principle of the designed tire inspection system based on unsupervised learning; specifically, the DBSCAN algorithm. The sensor devices are the camera system and the laser sensor.

Figure 23. Overall design of proposed tire inspection system.

In the Materials and Methods, we described the processing of the camera-obtained data using a method for unfolding the specific area of pictures capturing the tire sidewall. The resolution of the captured pictures was 4112 × 3008 pixels. The first unfolding method used a picture of the whole tire sidewall, and the unfolding was divided into two steps. During the first step, circles in the picture were detected. The second step was a polar transformation that used the parameters of the circles obtained in the first step. The result of this processing is shown in Figure 3. The detected tire sidewalls were usually not fully straight, but, instead, slightly wavy. The harmonic analysis indicated the occurrence of a non-zero first harmonic component, as caused by the eccentricity of the detected circle when compared to the real center point of the tire sidewall. Circle detection was carried out using Hough circles implemented in the OpenCV library. For the polar transformation and conversion to the Cartesian system, the polarTransform library was used. The hardware used for computing was as follows: Intel Core i7-8700, RAM 32 GB DDR4, and NVIDIA GeForce RTX 2070. The whole process took less than 3 s. The most notable disadvantage of this unfolding process is setting up the parameters for circle detection and polar transformation. For this, the empirical setting of the mentioned parameters is necessary and, even still, some eccentricity persists, which can be further removed by the manual centering of the picture before the transformation. The size of Figure 3 is 8224 × 791 pixels. To increase the resolution, only part of the tire sidewall was considered at the same time, as shown in Figure 4. The complications with setting up the empirical parameters were the same as in Figure 2. To solve this, a method constructed using boundary conditions based on edge detection to define the part of an annulus was designed, as displayed in Figure 4. Another possible way to perform polar transform from empirical to analytical is by using second-order polynomial regression, as mentioned above. The main advantage of this lies in the possibility of automating the definition of parameters and compensation for inaccuracies of the camera system (as described in Figure 5). This analytical method is based on regression, and it seems to allow for the automation of unfolding pictures. For this reason, it was chosen for use in the tire inspection system.
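For completeness, a condensed Python sketch of this conventional, empirically parameterized unfolding is shown below; the Hough parameters are illustrative, and cv2.warpPolar stands in here for the polarTransform package used in our implementation.

import cv2
import numpy as np

def unfold_by_hough(gray):
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=1000,
                               param1=100, param2=50, minRadius=1000, maxRadius=1600)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)         # strongest detected circle
    # Unfold the area around the detected centre into a rectangular strip
    return cv2.warpPolar(gray, (r, 8224), (float(x), float(y)), float(r), cv2.WARP_POLAR_LINEAR)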
The second subsection described processing 3D geometrical data, captured by a laser
sensor, into 2D data, as depicted in Figure 14. The results of pattern recognition with respect
to pre-defined samples are described in Table 1. Pattern recognition was closely described
for processing 3D data from laser sensors into visual content [2], for which we used the
function cv2.matchTemplate integrated into the OpenCV library. The condition for using
this method is that the data must be homogenous, as illustrated in Figure 15. In the case of
conventional images, the implementation of pattern recognition is complicated by the high
dispersion caused by the light conditions and requires the strict positioning of the object
during the process of capturing images. The other advantages of using geometrical data for
the tire inspection system are explicitly defining the correct areas and the high possibility
of capturing geometrical abnormalities, as displayed in Figure 16. Abnormalities were
classified as clusters, separated by the unsupervised learning algorithm DBSCAN, which
is able to separate data into a specific number of clusters based on density without a pre-
defined number of clusters. Data chosen for clustering were defined based on differences in
3D data; specifically, in the z-matrix, where the threshold was defined as 0.5 mm, according
to the data displayed in Figure 15. The detected abnormalities can be described in 3D,
as displayed in Figure 17. The classification of detected abnormalities is the subject of
Section 2.5, where the VGG-16 network architecture was utilized by means of KERAS in
Python. The detected abnormalities were correctly classified as rubber impurity (CH1)
for Abnormalities 2 and 7, and mechanical damage to integrity (CH5) for Abnormality 4.
Additionally, we described the use of the R-CNN method for defect detection, performed
in MATLAB, with the goal to design a tire inspection system based only on supervised
learning. The results of this experiment did not show much promise, and the design of the
model (neural network) needed much improvement. In Section 2.3, we described the fusion
of the captured data (by camera and laser). Fusion was performed by manual matching
and resizing data to uniform size. Centering was based on the upper left rectangular corner
of the specified area, representing the same object pattern in both types of data.
An overview of the constructed tire inspection system is shown in Figure 23, where
the described methods include the camera image polar transform based on polynomial
regression (supervised learning). For the data from the laser sensor, the visual content
generated from geometric data is used, pattern recognition of pre-defined samples by
cv2.matchTemplate is carried out, the detection of abnormalities is performed by the
DBSCAN algorithm (unsupervised learning), and the classification of abnormalities to
defects relies on the VGG-16 network architecture (deep learning).

4. Discussion
As mentioned in the results, Section 2 was divided into five subsections. The first was
focused on the polar transform for the unfolding of the part of the annulus, capturing part
of the tire sidewall. We covered the conventional method of unfolding by circle Hough
transform and polar transform to the Cartesian coordinate system. The disadvantage
of this method is the non-concentricity of the detected circle and the real tire sidewall,
which manifested as the first harmonic component shown in Figure 3. The next step
was the development and description of a method for the polar transform of the part of
the sidewall based on boundary conditions. The boundary conditions were set as the
detected edges describing an area of the captured surface. Edge detection is sensitive to
the characteristics of the original captured picture. This paper presents the use of picture
modification to suppress the background of the original picture. In a fully automated
implementation of the process of capturing pictures, the camera should be enhanced by
optimizing the background of the tire inspection stand: the background should allow for
clear separation of the tire sidewall from the background. Figure 5 shows inaccuracies
in the capturing of tire sidewalls when using a camera. The eccentricity of the ε angle
causes the projection of the captured circle as an ellipse, deforming the captured shape.
For this reason, a more accurate method could modify the circle equation to that of an
ellipse by adding appropriate parameters. In the case of a very low angle, the impact is
negligible. Using polynomial regression, the parameters of the ellipse, compensated by
values “a” and “b,” are shown in Equation (8). Therefore, we can perform unfolding while compensating for the inaccuracies caused by inappropriate hardware conditions, which can be suppressed as part of the calibration process. This polar transform process, which
suppresses inaccuracies, is appropriate as part of tire inspection stand calibration, where
the computed parameters could be set up as constants and applied in the process of the
automated tire inspection system.
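The regression step itself is straightforward; a sketch of it, assuming the detected boundary pixels are available as coordinate arrays, is shown below (NumPy's polyfit is used here as one possible implementation of the second-order fit; the variable names are illustrative).

import numpy as np

def fit_edge_curve(edge_x, edge_y):
    a, b, c = np.polyfit(edge_x, edge_y, 2)       # least-squares fit of y = a*x^2 + b*x + c
    y_fit = np.polyval([a, b, c], edge_x)         # smoothed boundary used as a boundary condition
    residuals = edge_y - y_fit                    # large residuals reveal badly detected edge pixels
    return (a, b, c), y_fit, residuals

The coefficients a, b, and c then enter Equations (13) and (14) and can be stored as calibration constants for the inspection stand.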
The second subsection focused on the geometrical data. The main aim was to perform
pattern recognition, using cv2.matchTemplate, in compensation for the disadvantages
associated with the conventional use of CNN. The common feature of both mentioned
approaches is convolution. However, the implementation of pattern recognition is simpler
than CNN (deep learning) and, thus, is more effective from a time–cost point of view.
Using pattern recognition in conventional pictures captured by cameras is almost useless,
due to the condition of capturing images and the character of the examined object. In
the case of pictures generated from 3D data, the images are more stable and, therefore,
more suitable to perform pattern recognition using the above-mentioned function. The
other advantage is the possibility of processing 3D data from the laser sensor in the same
way as 2D when a picture is generated from a point cloud. The detected abnormalities
displayed in Figure 16 are displayed in 3D in Figure 17. We determined that using the
DBSCAN method (an unsupervised learning approach) provides an effective means to
detect abnormality clusters. The main advantage of this method is automatic clustering to
a specific number of clusters; however, the disadvantage is its time performance. Based
on the fact that DBSCAN clustering is performed only on the part of data characterized
by differences larger than 0.5 mm in the z-axis, the time necessary to perform such a
process is proportional to the number of differences. The third subsection was focused
on merging 3D and 2D data. The main advantage was obtaining both the geometric and
visual character of the scanned surface. According to this feature, it is possible to describe
the abnormalities as a geometric feature, as well as a visual feature, without defining its
character or category. The classification of abnormalities was described in Section 2.5. Very
good classification results were observed, as shown in Figure 21. Using classification is
necessary, as clustering algorithms alone are generally unable to categorize the character
of abnormalities. We also described using an R-CNN-based approach to detect defects;
unfortunately, the results were not very good. The declared detection accuracies were high,
but the detected regions of defects were not appropriate or in the right place, as illustrated
in Figure 19. In the introduction, we mentioned various categories of defects, which are
illustrated in Figure 1. In real conditions, defects occur with many variations in shape,
position, orientation, and so on. For this reason, is it appropriate to use only supervised
methods for defect detection? Is it possible for a trained system to detect untrained objects?
We found that the proposed tire inspection system can resolve this situation, by using an
unsupervised learning approach—specifically, the DBSCAN algorithm—to separate points
into clusters and detect every detectable abnormality in a specific place, according to the
quality of obtained data. The issue of size in defect detection was suppressed in the tire
inspection system’s classification procedure, where the input layer was declared to have a
strict shape, and every abnormality was accordingly resized to perform classification. To
ensure high defect detection accuracy in many orientations, it is appropriate to include
these samples into the data set intended for training the CNN; however, there is still a
high possibility of correct classification, due to the affinity of a given abnormality to its
associated category.
In the Ph.D. thesis [4], tire inspection was performed, where the defect detection
accuracy was in the range of 85.15% (CH04) to 99.34% (CH03). It should be noted that the
accuracy of classification for Abnormality 7 as rubber impurity was 73.11% (Figure 22), less
than that achieved by the conventional method described in the Ph.D. thesis.

5. Conclusions
In this paper, we described the design of a hybrid tire inspection system utilizing both
3D and 2D data. The applied algorithm combines both supervised and unsupervised learn-
ing methods. In terms of supervised learning, it uses pattern recognition and polynomial
regression, while, in terms of unsupervised learning, the DBSCAN algorithm is used for the
clustering task. Polynomial regression was used to automate the process compensating for
the inaccuracies described in Figure 5, thus, replacing the conventional methods described
in Section 2. Further work should involve managing and modifying the tire inspection
stand and design background in order to identify and possibly automatically separate
areas of the tire sidewall from the background. Another possible improvement would be to
replace the circle with an ellipse in the relevant equation provided in Section 2. To capture
the visual character of the surface, a conventional camera was used. The polar transform
of images was necessary in order to allow for its unification with the data obtained from
a laser sensor. In further work, the conventional camera may be replaced by a line-scan
camera, which would allow for obtaining a much higher resolution and the same character
of the captured surface as that derived from the laser sensor. The reason that we used a
conventional camera was due to availability and, possibly, the use of color too. In the case
of using a color line camera, the situation is more complicated, due to their availability on
the market—such a type of line camera is quite rare.
Considering the laser sensor, this feature greatly improves the possibilities of hybrid
tire inspection systems, based mainly on the use of geometric data. The geometric data
reflect the real topology of the scanned surface and the much higher resistance of the laser
sensor to the light conditions means that the data obtained from the laser sensor are more
stable when compared to data obtained using a standard camera. Laser sensors can work
as line cameras but, in comparison to conventional cameras, the resolution is lower, and
they offer only grayscale images captured solely in the wavelength of the laser used. The
application of a camera allows for the use of devices that operate on color images. In
the introduction, we mentioned the use of CNNs for defect detection in wafers in the
semiconductor industry. There is very important information in conventional inspection
systems that works well when applying deep learning methods. The results of these
systems are generally very good, achieving 94% and higher defect-recognition accuracy.
However, there is also a need to manage larger data sets in order to train such models.
Deep learning methods are in the category of supervised learning, as part of artificial
intelligence. This means that defect detection is achieved through the use of a training
data set and manual labeling of the defects. Therefore, in the case of very specific or
abnormal defects occurring, the supervised system may not be able to capture the defect
due to the absence of training data guiding the model to detect these types of defects.
The proposed method works on 3D data and uses an unsupervised learning approach—a
specific DBSCAN method—to identify abnormalities without the need to train or adapt tire
inspection systems to capture such defects. In the future, it will be necessary to implement
the described tire inspection system on a larger scale for every conceivable type of defect,
including items such as barcodes and labels, and to perform verification of the defect.
The system described in this paper can, in the future, replace conventional methods in
inspection systems; however, it is still necessary to fulfill the mentioned aspects missing
from this inspection system and verify its use in other industrial fields; for example, in the
semiconductor industry, for such tasks as detecting defects on PCB wafers. Furthermore,
in the future, exploration of the possibility of evaluating the angles α and ε mentioned in
Figure 5 should be considered, leading to the possibility of designing a calibration system
for the camera to obtain higher accuracy.

Author Contributions: Conceptualization: J.K.; methodology, J.K.; software, J.K.; validation, I.K.
and M.C.; formal analysis, M.C. and J.K.; investigation, A.H. and J.K.; resources, I.K. and J.K.; data
curation, I.K. and J.K.; writing—original draft preparation, J.K.; writing—review and editing, M.C.
and I.K.; visualization, M.C., J.K. and I.K.; supervision, A.H. and M.S.; project administration, I.K.
and M.S.; funding acquisition, I.K., D.W. and M.S. All authors have read and agreed to the published
version of the manuscript.
Funding: This work was supported by the Slovak Research and Development Agency under contract
No. APVV-16-0283.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the
corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Kuric, I.; Císar, M.; Tlach, V.; Zajačko, I.; Gál, T.; Więcek, D. Technical Diagnostics at the Department of Automation and
Production Systems. In Advances in Ergonomics in Design; Springer: Berlin/Heidelberg, Germany, 2019; Volume 835, pp. 474–484.
[CrossRef]
2. Klarak, J.; Kuric, I.; Cisar, M.; Stancek, J.; Hajducik, A.; Tucki, K. Processing 3D Data from Laser Sensor into Visual Content Using
Pattern Recognition. In Proceedings of the 2021 IEEE 8th International Conference on Industrial Engineering and Applications
(ICIEA), Kyoto, Japan, 23–29 April 2021; pp. 543–549. [CrossRef]
3. Klarák, J.; Hajdučík, A.; Bohušík, M.; Kuric, I. Methods of Processing Point Cloud to Achieve Improvement Data Possibilities.
In Projektowanie, Badania i Eksploatacja’ 2020; p. 419. Available online: http://www.engineerxxi.ath.eu/book/projektowanie-
badania-i-eksploatacja2020/ (accessed on 19 February 2021).
4. Kandera, M. Design of Methodology for Testing and Defect Detection Using Artificial Intelligence Methods. 2020. Available
online: http://opac.crzp.sk/?fn=detailBiblioForm&sid=D51B1947951498618DF67753D437&seo=CRZP-detail-kniha (accessed
on 24 October 2019).
5. Transfer Learning Using AlexNet—MATLAB & Simulink—MathWorks United Kingdom. Available online: https://uk.mathworks.com/help/deeplearning/examples/transfer-learning-using-alexnet.html (accessed on 14 October 2019).
6. Massaro, A.; Dipierro, G.; Cannella, E.; Galiano, A.M. Comparative Analysis among Discrete Fourier Transform, K-Means
and Artificial Neural Networks Image Processing Techniques Oriented on Quality Control of Assembled Tires. Information
2020, 11, 257. [CrossRef]
7. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in
Neural Information Processing Systems; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates, Inc.:
New York, NY, USA, 2012; pp. 1097–1105.
8. Borish, M.; Post, B.K.; Roschli, A.; Chesser, P.C.; Love, L.J. Real-Time Defect Correction in Large-Scale Polymer Additive
Manufacturing via Thermal Imaging and Laser Profilometer. Procedia Manuf. 2020, 48, 625–633. [CrossRef]
9. Borish, M.; Post, B.K.; Roschli, A.; Chesser, P.C.; Love, L.J.; Gaul, K.T. Defect Identification and Mitigation Via Visual Inspection in
Large-Scale Additive Manufacturing. JOM 2019, 71, 893–899. [CrossRef]
10. Mullan, F.; Mylonas, P.; Parkinson, C.; Bartlett, D.; Austin, R. Precision of 655 nm Confocal Laser Profilometry for 3D surface
texture characterisation of natural human enamel undergoing dietary acid mediated erosive wear. Dent. Mater. 2018, 34, 531–537.
[CrossRef] [PubMed]
11. Lung, C.W.; Chiu, Y.C.; Hsieh, C.W. A laser-based vision system for tire tread depth inspection. In Proceedings of the 2016 IEEE
International Symposium on Computer, Consumer and Control, IS3C 2016, Xi’an, China, 4–6 July 2016; pp. 850–853. [CrossRef]
12. Li, J.; Huang, Y. Automatic inspection of tire geometry with machine vision. In Proceedings of the 2015 IEEE International
Conference on Mechatronics and Automation, ICMA 2015, Beijing, China, 2–5 August 2015; pp. 1950–1954. [CrossRef]
13. Mital’, G.; Dobránsky, J.; Ružbarský, J.; Olejárová, Š. Application of Laser Profilometry to Evaluation of the Surface of the
Workpiece Machined by Abrasive Waterjet Technology. Appl. Sci. 2019, 9, 2134. [CrossRef]
14. Wang, G.; Zheng, B.; Li, X.; Houkes, Z.; Regtien, P. Modelling and calibration of the laser beam-scanning triangulation measurement system. Robot. Auton. Syst. 2002, 40, 267–277. [CrossRef]
15. Guberman, N. On Complex Valued Convolutional Neural Networks. February 2016. Available online: https://arxiv.org/abs/1602.09046v1 (accessed on 20 October 2021).
16. Tao, X.; Zhang, D.; Ma, W.; Liu, X.; Xu, D. Automatic Metallic Surface Defect Detection and Recognition with Convolutional
Neural Networks. Appl. Sci. 2018, 8, 1575. [CrossRef]
17. Chien, J.-C.; Wu, M.-T.; Lee, J.-D. Inspection and Classification of Semiconductor Wafer Surface Defects Using CNN Deep
Learning Networks. Appl. Sci. 2020, 10, 5340. [CrossRef]
18. Chen, X.; Chen, J.; Han, X.; Zhao, C.; Zhang, D.; Zhu, K.; Su, Y. A Light-Weighted CNN Model for Wafer Structural Defect
Detection. IEEE Access 2020, 8, 24006–24018. [CrossRef]
19. Doğru, A.; Bouarfa, S.; Arizar, R.; Aydoğan, R. Using Convolutional Neural Networks to Automate Aircraft Maintenance Visual
Inspection. Aerospace 2020, 7, 171. [CrossRef]
20. Shi, J.; Li, Z.; Zhu, T.; Wang, D.; Ni, C. Defect Detection of Industry Wood Veneer Based on NAS and Multi-Channel Mask R-CNN.
Sensors 2020, 20, 4398. [CrossRef]
21. Wang, T.; Chen, Y.; Qiao, M.; Snoussi, H. A fast and robust convolutional neural network-based defect detection model in product
quality control. Int. J. Adv. Manuf. Technol. 2018, 94, 3465–3471. [CrossRef]
22. Su, B.; Chen, H.; Zhou, Z. BAF-Detector: An Efficient CNN-Based Detector for Photovoltaic Cell Defect Detection. IEEE Trans.
Ind. Electron. 2021. [CrossRef]
23. Jing, J.; Ma, H.; Zhang, H. Automatic fabric defect detection using a deep convolutional neural network. Color. Technol.
2019, 135, 213–223. [CrossRef]
24. Yun, J.P.; Shin, W.C.; Koo, G.; Kim, M.S.; Lee, C.; Lee, S.J. Automated defect inspection system for metal surfaces based on deep
learning and data augmentation. J. Manuf. Syst. 2020, 55, 317–324. [CrossRef]
25. Sun, X.; Gu, J.; Tang, S.; Li, J. Research Progress of Visual Inspection Technology of Steel Products—A Review. Appl. Sci.
2018, 8, 2195. [CrossRef]
26. Chang, C.-Y.; Li, C.; Chang, J.-W.; Jeng, M. An unsupervised neural network approach for automatic semiconductor wafer defect
inspection. Expert Syst. Appl. 2009, 36, 950–958. [CrossRef]
27. Le, D.C.; Zincir-Heywood, A.N. Evaluating Insider Threat Detection Workflow Using Supervised and Unsupervised Learning.
In Proceedings of the 2018 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 24 May 2018; pp. 270–275.
[CrossRef]
28. Chen, C.-C.; Juan, H.-H.; Tsai, M.-Y.; Lu, H.H.-S. Unsupervised Learning and Pattern Recognition of Biological Data Structures
with Density Functional Theory and Machine Learning. Sci. Rep. 2018, 8, 1–11. [CrossRef]
29. Zivkovic, Z.; Van Der Heijden, F. Recursive unsupervised learning of finite mixture models. IEEE Trans. Pattern Anal. Mach. Intell.
2004, 26, 651–656. [CrossRef]
30. Kuric, I.; Kandera, M.; Klarák, J.; Ivanov, V.; Więcek, D. Visual Product Inspection Based on Deep Learning Methods. In Advances
in Mechanical Engineering; Springer: Berlin/Heidelberg, Germany, 2020; pp. 148–156. [CrossRef]
31. Klarák, J.; Kandera, M.; Kuric, I. Transformation of Point Cloud into the Two-Dimensional Space Based on Fuzzy Logic Principles.
2019. Available online: http://www.engineerxxi.ath.eu/book/designing-researches-and-exploitation-2019-vol-1/ (accessed on
26 October 2020).
32. Davies, E. A modified Hough scheme for general circle location. Pattern Recognit. Lett. 1988, 7, 37–43. [CrossRef]
33. Seo, S.-W.; Kim, M. Efficient architecture for circle detection using Hough transform. In Proceedings of the International
Conference on ICT Convergence 2015: Innovations Toward the IoT, 5G and Smart Media Era, ICTC 2015, Jeju Island, Korea,
28–30 October 2015; pp. 570–572. [CrossRef]
34. Che, E.; Jung, J.; Olsen, M.J. Object Recognition, Segmentation, and Classification of Mobile Laser Scanning Point Clouds: A State
of the Art Review. Sensors 2019, 19, 810. [CrossRef] [PubMed]
35. OpenCV: Template Matching. Available online: https://docs.opencv.org/master/d4/dc6/tutorial_py_template_matching.html
(accessed on 1 June 2021).
36. Briechle, K.; Hanebeck, U.D. Template Matching using Fast Normalized Cross Correlation. Opt. Pattern Recognit. XII
2001, 4387, 95–103.
37. Tsai, D.-M.; Lin, C.-T. Fast normalized cross correlation for defect detection. Pattern Recognit. Lett. 2003, 24, 2625–2631. [CrossRef]
38. Friemel, B.H.; Bohs, L.N.; Trahey, G.E. Relative performance of two-dimensional speckle-tracking techniques: Normalized
correlation, non-normalized correlation and sum-absolute-difference. In Proceedings of the IEEE Ultrasonics Symposium,
Seattle, WA, USA, 7–10 November 1995; Volume 2, pp. 1481–1484. [CrossRef]
39. Bholowalia, P.; Kumar, A. EBK-Means: A Clustering Technique Based on Elbow Method and K-Means in WSN. Int. J. Comput.
Appl. 2014, 105, 17–24.
40. Visualizing K-Means Clustering. Available online: https://www.naftaliharris.com/blog/visualizing-k-means-clustering/
(accessed on 14 October 2019).
41. Arlia, D.; Coppola, M. Experiments in Parallel Clustering with DBSCAN. Lect. Notes Comput. Sci. 2001, 2150, 326–331. [CrossRef]
42. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with
Noise. 1996. Available online: www.aaai.org (accessed on 10 October 2019).
43. Zhou, Q.-Y.; Park, J.; Koltun, V. Open3D: A Modern Library for 3D Data Processing. Available online: http://www.open3d
(accessed on 26 October 2020).
44. Wang, J.; Xu, C.; Yang, Z.; Zhang, J.; Li, X. Deformable Convolutional Networks for Efficient Mixed-Type Wafer Defect Pattern
Recognition. IEEE Trans. Semicond. Manuf. 2020, 33, 587–596. [CrossRef]
45. Zhang, Y.; Cui, X.; Liu, Y.; Yu, B. Tire Defects Classification Using Convolution Architecture for Fast Feature Embedding. Int. J.
Comput. Intell. Syst. 2018, 11, 1056–1066. [CrossRef]
46. Wen, L.; Li, X.; Gao, L. A transfer convolutional neural network for fault diagnosis based on ResNet-50. Neural Comput. Appl.
2020, 32, 6111–6124. [CrossRef]
47. Perez, H.; Tah, J.H.M.; Mosavi, A. Deep Learning for Detecting Building Defects Using Convolutional Neural Networks. Sensors
2019, 19, 3556. [CrossRef]
48. Xu, X.; Zheng, H.; Guo, Z.; Wu, X.; Zheng, Z. SDD-CNN: Small Data-Driven Convolution Neural Networks for Subtle Roller
Defect Inspection. Appl. Sci. 2019, 9, 1364. [CrossRef]