Sensors: Analysis of The Possibilities of Tire-Defect Inspection Based On Unsupervised Learning and Deep Learning
Article
Analysis of the Possibilities of Tire-Defect Inspection Based on
Unsupervised Learning and Deep Learning
Ivan Kuric 1, Jaromír Klarák 1,*, Milan Sága 1, Miroslav Císar 1, Adrián Hajdučík 1 and Dariusz Wiecek 2
1 Department of Automation and Production Systems, Faculty of Mechanical Engineering, University of Zilina,
010 26 Zilina, Slovakia; [email protected] (I.K.); [email protected] (M.S.);
[email protected] (M.C.); [email protected] (A.H.)
2 Faculty of Mechanical Engineering and Computer Science, ATH–University of Bielsko Biala,
43-309 Bielsko-Biała, Poland; [email protected]
* Correspondence: [email protected]
Abstract: At present, inspection systems process visual data captured by cameras, with deep learning
approaches applied to detect defects. Defect detection results usually have an accuracy higher
than 94%. Real-life applications, however, are not very common. In this paper, we describe the
development of a tire inspection system for the tire industry. We provide methods for processing
tire sidewall data obtained from a camera and a laser sensor. The captured data comprise visual
and geometric data characterizing the tire surface, providing a real representation of the captured
tire sidewall. We use an unfolding process, that is, a polar transform, to further process the camera-
obtained data. The principles and automation of the designed polar transform, based on polynomial
regression (i.e., supervised learning), are presented. Based on the data from the laser sensor, the
detection of abnormalities is performed using an unsupervised clustering method, followed by
the classification of defects using the VGG-16 neural network. The inspection system aims to detect trained and untrained abnormalities, namely defects, as opposed to using only supervised learning methods.

Keywords: tire inspection; deep learning; unsupervised learning; polynomial regression; laser sensor; defect detection; polar transform

Citation: Kuric, I.; Klarák, J.; Sága, M.; Císar, M.; Hajdučík, A.; Wiecek, D. Analysis of the Possibilities of Tire-Defect Inspection Based on Unsupervised Learning and Deep Learning. Sensors 2021, 21, 7073. https://doi.org/10.3390/s21217073
Academic Editors: Roberto Teti and Jianbo Yu
Received: 6 July 2021; Accepted: 21 October 2021; Published: 25 October 2021

1. Introduction

In the mass production of tires, which is characterized by a large number of manufactured items, it is very difficult to conduct the final quality inspection, considering the
visual and qualitative aspects of the tires before they are placed on the market. Qualitative
aspects focus on materials, geometry, tire appearance, and final function, while the visual
aspects of tires are mainly formal (but necessary) features, such as annotations, barcodes, or other features necessary to identify the product. Visual content without defects is representative of a high-quality product. To ensure high quality in the mass production of tires, it is necessary to obtain data from the manufacturing process and to digitize the final quality inspection before the product leaves the factory. The current trend is to apply processes from Industry 4.0, focused on automation, machine learning, sensory systems, digitization within manufacturing processes, data visualization, etc. [1]. Most applications of machine learning are based on integrating supervised methods, such as convolutional neural networks (CNNs), deep learning, and regression, with unsupervised methods, such as clustering algorithms. A combination of final tire inspection and monitoring of the technological processes during production makes it possible to capture any damaged product by classifying its defects and can even identify the possible origins of the defect. This paper is focused on capturing defects during the final inspection of tires. The final inspection of tires is defined based on the complex manufacturing processes in the rubber industry. The tire is considered the object of interest: it is compounded from various materials and has a rugged surface including letters, symbols, and so on, which makes it suitable for evaluating the possibility of using an inspection system. From the view of quality assurance, there are many requirements, due to the high demands during vehicle operation and the safety of the vehicle operator and crew. Inspection tasks and requirements have mainly been described in terms of processing 3D data obtained from laser sensors into visual content using pattern recognition [2,3]. A closer description of tire defects has been given in a Ph.D. thesis [4] conducted by the Department of Automation and Production Systems at the University of Zilina, which defines statements that categorize tire defects in terms of their origin in the manufacturing processes. A catalog describing possible tire defects is generally available to employees at the inspection stand in the factory. To simplify categorizing the most frequent defects, we defined six basic categories of defects (Figure 1):
• Impurity with the same material as the tire material (CH01);
• Impurity with a different material from the tire material (CH02);
• Material damaged by temperature and pressure (CH03);
• Crack (CH04);
• Mechanical damage to integrity (CH05); and
• Etched material (CH06).
Figure 1. Samples of the most frequent defects [4].
The defects can be captured by a camera, such as those manufactured by the Balluff company. These images, with a resolution of 12 Mpx, were integrated into the experimental inspection stand described in [2] and used in the paper “Design of laser scanners data processing and their use in a visual inspection system,” which will be presented at the ICIE2020 conference (2021). In this thesis, we demonstrate the capturing of defined defects through the use of MATLAB 2020a software (MathWorks company), for which a convolutional neural network (CNN) was trained using a transfer learning method based on the AlexNet network [5]. The achieved defect detection accuracy was in the range from 85.15% (CH04) to 99.34% (CH03).
Similar work regarding an inspection system for the tire industry has been described in [6], where the inspection system was constructed using image processing techniques, and the principle of processing data was described as obtaining 3D images, converting them into 2D images, and conducting analysis through two different approaches. The first approach combined the discrete Fourier transform and K-means clustering, while the second approach utilized artificial neural networks. The discrete Fourier transform was used to detect the defects in 3D images by inferring patterns with spatial domains. K-means clustering separated the pixels into clusters, where complications occurred when defining the specific number of clusters to obtain relevant data. The second approach, in particular, used LSTM neural networks. It is possible to use a CNN [7]; however, due to the necessity of defining a large learning data set, it is difficult to perform this task. Therefore, it is impractical to adapt a CNN to every item occurring in a wide range of products. LSTM
neural networks work in real-time on data from the inspected object. In the commercial
field, the Tekna Automazione e Controllo company specializes in the detection of defects
that occur on the tire sidewall; however, their publicly available information does not
allow for the identification of the algorithms used to perform the inspection. In general,
visual inspection of the surface is mainly based on the use of 3D-captured surfaces by a
triangulation principle, where laser sensors are used as final devices; for example, the
Keyence laser sensor–laser profilometer mentioned in [8–10], or a camera vision system
supported by a laser beam [11–13].
Techniques using the visual data obtained from cameras and devices with laser beams
are currently widely used [14], including those utilizing CNNs (supervised learning)
or clustering methods (unsupervised learning). As mentioned above [15], the use of
convolutional neural networks can be complicated, especially when there is a wide range
of items to search for on the captured products. However, in the case of specific, monotonous mass production, such networks can be profitably deployed: simply create the data set, then train the neural network to perform product inspection. Furthermore, there can be many
complications when detecting defects. In [16], it was mentioned that changing the color scale from color to grayscale, although defects include many different colors, can be exploited to perform defect detection tasks in a shorter time. In [17], the authors described
the use of a CNN for wafer defect detection, describing it as a feasible alternative to manual
inspection, but there were still misclassifications. Another similar work is [18], where three
models were used for wafer defect detection. The lowest detection accuracy was 94.63%
for WDD-Net when detecting mechanical damage, while the highest accuracy was 100%,
which occurred in many of the cases mentioned in the paper. In the category of mechanical
damage, it is necessary to manage significant amounts of data. Another difficulty lies in manually labeling defects to prepare training data for a supervised model. In [19], the
use of a CNN for aircraft maintenance by defect detection was described. The authors
described complications when collecting a data set for each defect that occurs in aircraft
maintenance, to train the model. Other papers have generally described using a CNN or
R-CNN to classify defects with accuracy higher than 90% [20–24].
In summary, CNNs show potential for defect detection. The associated results are
good, but these systems were mainly in experimental use, not in practical deployment.
Based on experiments, the use of such systems is feasible; however, improving or redesigning the logic of inspection systems based on CNN methods is necessary. This is mainly due to the nature of supervised learning, as mentioned in [18]: a large amount of data must be captured and labeled, which is a difficult task, and the trained model detects only the trained defects or those very similar to them. Defects in real-life conditions are not simple to categorize with a fixed number of shapes, as they additionally differ in position, lighting conditions, and so on. The described systems
work properly when defects in the analyzed data appear very similar to those in the
training data set. In the case of different defects, there is a very low probability of detection.
One study [25] has described methods for defect detection in steel products using various
technologies applied to visual data captured by a CCD camera. The results obtained
through using more methods led to the statement: “the fusion of multiple technologies
is an expected trend,” which can be interpreted as a necessity to develop systems that
combine more technologies. As such, prospective defect detection methods should be
able to conduct unsupervised learning within the inspection system. For instance, defect
detection on wafers using the K-means method has been described [26]. Non-industrial
areas where methods combining supervised and unsupervised approaches have been described include threat detection workflows [27]. In the medical area, research has been published regarding
the use of unsupervised methods for gathering biological data to perform tumor detection
with MRI data [28] or clustering and testing based on IRIS [29].
A possible solution is developing a comparative system, where the recognition of
defects would be replaced by a comparison of the inspected data with standard reference
data without defects. This requires operation in controlled conditions,
where strictly defined attributes of the inspected object are considered. This condition is
fulfilled in industrial conditions, specifically in mass production. In this case, every product
should be identical. In this manner, abnormalities can be found and characterized as objects of interest, without information regarding what they mean. Then, in the following step,
CNN-based methods can be used to classify the abnormalities as defects or as acceptable
elements. This paper focuses on describing a comparative method and data fusion to
improve inspection performance, where geometrical data are necessary to identify the
geometry of the captured object and to determine geometric abnormalities, while the
camera system is used to obtain the visual characteristics of the captured surface. At present,
systems that are able to recognize pre-trained patterns or letters in the visual content of data
are available; for example, the OCR function (MATLAB), or specific functions in various
Python libraries. For this purpose, “normalized” visual content is necessary. The term
“normalized” involves the managed modification of data in order to orient it similarly to
standard text in an affine manner. The standard procedure involves unfolding the captured
rotary object using a polar transform—a process that may be automated. The objects
captured by the camera may decrease in quality when using an inappropriate hardware
setup, which can affect the accuracy of the inspection system. Based on these considerations
and previous works, we defined the following hypotheses:
Hypothesis 1 (H1). Is there the possibility to automate the polar transform procedure of the
captured part of the tire sidewall?
Hypothesis 2 (H2). Is it possible to compensate for inaccuracies in visual data captured by camera
due to the use of inappropriately set up hardware?
Hypothesis 3 (H3). Are deep learning techniques applicable to defect detection in visual data from
laser sensors?
Hypothesis 4 (H4). What is an appropriate design and application for a hybrid system integrating
unsupervised learning for defect detection on tire sidewalls?
Figure 4. Part of the sidewall.

Figure 5. Illustration of eccentricity and angle when capturing part of the sidewall.
Preparation for capturing the sidewall area in the picture to unfold from involves defining the border edges, as displayed in Figure 4, where the purple pixels indicate the outer radius and the red pixels indicate the inner radius, while the green pixels characterize the values halfway between the red and purple pixels. According to the red and purple pixels, it is possible to define the parameters for unfolding.
The first step is to define the principle of the unfolding process, where pixels are chosen in a specific way and transformed to a certain row (red) or column (blue), as illustrated in Figure 6. The principle shown by the blue color is based on pixels defined by the purple and red pixels, and the coordinates of the pixels fit the general circle equation. The second principle, shown by the red color, is characterized as an arc approximating a sinusoid function. In both principles, it is necessary to define the parameters to compensate for axis eccentricity and the angles between normal vectors.
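Both unfolding principles ultimately map each output (row, column) position back to a source pixel through the circle geometry. As a rough illustration of that idea, the sketch below shows a generic nearest-neighbour polar unfolding in numpy, applied to a synthetic ring image with a hypothetical centre, radii, and arc span; it does not include the eccentricity and angle compensation discussed above:

```python
import numpy as np

def polar_unfold(image, cx, cy, r_inner, r_outer, width, height):
    """Unfold an annulus centred at (cx, cy) into a height x width rectangle.

    Columns sweep the captured arc (assumed here to span 0..pi), rows sweep
    the radius; each output pixel is a nearest-neighbour lookup in the source.
    """
    phi = np.linspace(0.0, np.pi, width)
    rho = np.linspace(r_inner, r_outer, height)
    P, R = np.meshgrid(phi, rho)                       # (height, width) grids
    xs = np.clip(np.round(cx + R * np.cos(P)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(cy - R * np.sin(P)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]

# Synthetic test image: a bright ring of radius 60 px around the centre.
img = np.zeros((200, 200))
yy, xx = np.mgrid[0:200, 0:200]
img[np.abs(np.hypot(xx - 100, yy - 100) - 60) < 3] = 1.0
flat = polar_unfold(img, cx=100, cy=100, r_inner=55, r_outer=65, width=90, height=11)
# The ring becomes a straight bright band in the middle rows of `flat`.
```

A real implementation would replace the nearest-neighbour lookup with interpolation and feed in the boundary parameters estimated in the following sections.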
Figure 6. Possible unfolding of the sidewall area.
The blue principle is based on the linear choice of pixels from the picture bounded by the annulus part of the tire. The transformation is based on lines of pixels in angular iteration (ϕ_i). The general equation for the circle is defined in the Cartesian coordinate system. The pixel positions are defined by the coordinate pixels x_i and y_i. The blue line in Figure 6 is definable in the polar coordinate system. The angle iteration (ϕ_i) of the polar coordinate system characterizes the resolution in the x-axis, while ρ characterizes the resolution in the y-axis and the basic pixel coordination for ϕ_0. Computing the angle iteration is a solvable method, as displayed in Figure 7. The main point involves defining the zero-point, that is, the start point of the circle in a polar coordination system. The zero-point is characterized by coordinates x_m and y_m, defined as the highest point. When adapted to the pixel coordinates in the picture, there exists a function to find a minimal position in the y-axis on the pixel edge polyline:

x_m = x_i, \quad y_m = y_i = \min(Y). \tag{1}
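In code, Equation (1) amounts to picking the polyline pixel with minimal y, i.e., the highest point in image coordinates. A minimal sketch; the arc below is synthetic, not the measured edge polyline:

```python
import numpy as np

def find_zero_point(edge_pixels):
    """Return (x_m, y_m), the edge pixel with minimal y-coordinate (Equation (1))."""
    edge = np.asarray(edge_pixels, dtype=float)
    i = np.argmin(edge[:, 1])      # index where y_i = min(Y)
    return edge[i, 0], edge[i, 1]  # highest point of the arc in image coordinates

# Synthetic edge polyline: an arc of a circle of radius 1500 px, centre (2000, 2500).
theta = np.linspace(np.pi / 3, 2 * np.pi / 3, 51)
arc = np.stack([2000 + 1500 * np.cos(theta), 2500 - 1500 * np.sin(theta)], axis=1)
x_m, y_m = find_zero_point(arc)    # apex of the arc, here (2000, 1000)
```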
The red principle, displayed in Figure 6, is described as choosing part of the circle (an arc) with a specific radius, where the pixel coordinates are defined along the x-axis. For every arc, the radius must be defined.
In the process of carrying out the polar transformation, it is necessary to choose an
appropriate method. The blue method is, at first view, simpler than the red method;
however, the blue method has complications involving defining the basic positions of pixels
in the picture. In every case, it is necessary to adapt the system of Cartesian coordinates to
pictures with the positions of pixels being closer to reality than that with polar coordinates.
The following solution uses the red method. To gain relevant data, it is necessary to modify
the data, as real data are noisy due to the captured geometric relief of the inspected objects.
To address this, we conducted the smoothing of ϕ_i, as defined in Equation (4). It was suitable to modify the result of this equation by applying the minimal and maximal functions not to the first and last items in the angle iterations, but to the minimal and maximal values over a specific number of ϕ_i. The coordinates in the x-axis
are defined, from the picture, as every column of the picture matrix. It is necessary to
calculate the pixel coordinates in the y-axis. Calculation of the radius is defined in Equation
(5) for every iteration, where the number of iterations (n) defines the size of the picture, that
is, 4112 × 3008 (x,y). Calculating the average radius (ρ) in the numerator of Equation (5)
was carried out at every iteration, where the final value was the mean of all radius iterations.
In the radius analysis, the result had a large variance, due to the calculation style, as the
input values were not continuous, but discrete values of pixel coordinates. This feature
generates rounding errors, as demonstrated by the high variation of the results.
\varphi_i \cong \min(\varphi_i) + \frac{\max(\varphi_i) - \min(\varphi_i)}{\operatorname{num}(\varphi_i)} \cdot i, \tag{4}

\rho \cong \frac{1}{n} \sum_{i=1}^{n} \frac{\Delta x}{\sin\!\left(\tan^{-1}\frac{\Delta y}{\Delta x}\right)}, \quad n = 4112. \tag{5}
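Equations (4) and (5) transcribe almost literally into code. The sketch below uses hypothetical per-iteration pixel differences dx and dy instead of the real boundary data, and with discrete pixel inputs it inherits the rounding-induced variance noted above:

```python
import numpy as np

def smooth_angle_iterations(phi):
    """Equation (4): replace noisy angle samples by a linear ramp
    from min(phi) towards max(phi) over num(phi) iterations."""
    phi = np.asarray(phi, dtype=float)
    i = np.arange(phi.size)
    return phi.min() + (phi.max() - phi.min()) / phi.size * i

def mean_radius(dx, dy):
    """Equation (5): average the per-iteration radius estimate
    dx / sin(arctan(dy / dx)) over all n iterations."""
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    return float(np.mean(dx / np.sin(np.arctan(dy / dx))))

phi_s = smooth_angle_iterations([0.10, 0.13, 0.11, 0.18])   # ramp 0.10, 0.12, 0.14, 0.16
rho = mean_radius([3.0, 3.0], [4.0, 4.0])                   # sin(arctan(4/3)) = 0.8, so 3.75
```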
As a result, it is necessary to adapt to boundary features, such as edges and objects of
interest. These edges are highlighted, in Figure 4, by red and purple curves, while the green
curve represents the mean values of the red and purple curves. Calculating the radius
values is inevitable when considering the mentioned boundary features. The expression of
pixel positions in the y-axis can be performed in two ways: from the general circle equation
or from the equation based on the principle of the polar transform. Modified equations for
the captured picture are expressed in Figure 8, with Equation (6) in green and Equation (7)
in blue, where ygi represents the y-coordinates when computing the green polyline and ybi
represents the y-coordinates of the blue polyline:
y_{gi} = -\sqrt{\rho^2 - (x_i - x_m)^2} + y_m + \rho, \tag{6}
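Equation (6) evaluates directly over a row of pixel columns; the zero-point (x_m, y_m) and radius ρ below are hypothetical stand-ins for the detected values:

```python
import numpy as np

def circle_row_y(x, x_m, y_m, rho):
    """Equation (6): y-coordinate of a circle of radius rho whose highest
    point (the zero-point) lies at (x_m, y_m), evaluated at columns x."""
    x = np.asarray(x, dtype=float)
    return -np.sqrt(rho**2 - (x - x_m) ** 2) + y_m + rho

# Hypothetical zero-point (300, 50) and radius 500 px:
y_g = circle_row_y([0.0, 300.0, 600.0], x_m=300.0, y_m=50.0, rho=500.0)
# At x = x_m the curve passes through y_m itself.
```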
Figure 9. Application of displacement corrections.
The second mentioned method is an analytical method. The analytical method was
constructed using boundary conditions defined by the red and the purple curves displayed
in Figure 4. According to these curves, it is possible to perform a polar transform on
part of an annulus. Knowledge from an empirical method was applied in the design of
the analytical method, with important parameters as displayed in Figure 10, where the
main parameters were ρ0 (the red curve) and ρj (the purple curve). Calculating radius
parameters was possible, but it required mathematical expressions of the boundary curves.
The approximation of curves was carried out using polynomial regression. The type of circle definition, selected as the most suitable, was second-degree polynomial regression, as shown in Equation (8), as it was considered closest to the circle equation.

y_{ri} = \left(X^T X\right)^{-1} X^T x_i = a x_i^2 + b x_i + c. \tag{8}
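The second-degree fit in Equation (8) is ordinary least squares with the design matrix X = [x², x, 1]. A sketch via the normal equations, applied to a hypothetical noisy boundary polyline (the coefficients and noise level are illustrative, not the paper's data):

```python
import numpy as np

def fit_quadratic(x, y):
    """Second-degree polynomial regression (Equation (8)) via the normal
    equations: coefficients = (X^T X)^(-1) X^T y with X = [x^2, x, 1]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    X = np.stack([x**2, x, np.ones_like(x)], axis=1)   # design matrix
    a, b, c = np.linalg.solve(X.T @ X, X.T @ y)        # least-squares solution
    return a, b, c

def predict_quadratic(x, a, b, c):
    """Evaluate the fitted boundary curve y_ri = a*x^2 + b*x + c."""
    x = np.asarray(x, dtype=float)
    return a * x**2 + b * x + c

# Hypothetical boundary polyline: a known parabola plus small pixel noise.
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 100.0, 200)
ys = 0.002 * xs**2 - 0.3 * xs + 40.0 + rng.normal(0.0, 0.05, xs.size)
a, b, c = fit_quadratic(xs, ys)       # recovers approx. (0.002, -0.3, 40.0)
```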
Figure 10. Basic principle and parameters of the proposed analytical method.
\text{if } \Delta d_{y_{bi}} = 0: \quad \rho = \frac{a x_i^2 + b x_i + c - y_m}{1 - \cos \varphi_i}. \tag{14}
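Equation (14) likewise transcribes directly. In the sketch below the coefficients are hypothetical, chosen so that the fitted depth a·x_i² + b·x_i + c − y_m corresponds to a circle of known radius R, in which case the ratio recovers R:

```python
import math

def radius_from_fit(x_i, phi_i, a, b, c, y_m):
    """Equation (14): radius implied by the fitted boundary polynomial
    at column x_i and angle iteration phi_i."""
    return (a * x_i**2 + b * x_i + c - y_m) / (1.0 - math.cos(phi_i))

# Hypothetical check against a perfect circle of radius 500 px: for such a
# circle the depth below the zero-point equals R * (1 - cos(phi)).
R, phi = 500.0, 0.2
depth = R * (1.0 - math.cos(phi))          # stand-in for a*x^2 + b*x + c - y_m
rho = radius_from_fit(0.0, phi, a=0.0, b=0.0, c=depth, y_m=0.0)   # approx. 500.0
```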
Figure 11. Comparison of computed radius and modified radius values by analytical method for the red curve.
Figure 12. Radius values modified by the analytical method for the purple curve.
Sensors 2021, 21, 7073

The methods described above allowed us to obtain the boundary parameters necessary to perform polar transformation when considering non-well-placed camera hardware. Based on the above-mentioned aspects, the analytical method based on the green principle defined in the Cartesian coordinate system was chosen. Figure 10 defines another variable, Xk, as a matrix of pixel coordinates in the x-axis. The index k represents the row in the picture, for which the start is defined as y0 in the red curve and the end as yj in the purple curve. Under this principle, we computed various k-variables, including the radius ρk (15), ∆dxk (16), nk (17), Xk (18), and Yk (19) in the kth iteration. Together, they represent a gradual transition from the red to the purple parameters. The parameter nk defines the number of pixels in a row, or how many pixels will be used from the basic picture. Excluding the nk value can lead to deformation in the transformed picture. In Figure 10, we illustrate a shorter purple curve than the red curve. The parameter nk, representing the shortening of the transforming curve, was computed based on the boundary parameters of radius values, as defined in Equation (17). This value, along with the ∆dxk value used for displacement correction, expresses Xk as indices in the x-axis of pixels. Coordinates in the y-axis, Yk, were computed by Equation (19). The results of the above-mentioned green analytical method in the polar transformation are displayed in Figure 13, where the original image is shown on the left side and the transformed picture is on the right side. The inequalities in the transformed picture are negligible, and the final picture appears to be linear.
$\rho_k = \rho_0 - (\rho_0 - \rho_j)\,\dfrac{k}{y_j - y_0}, \qquad k \in \langle 0,\ y_j - y_0 \rangle$   (15)

$\Delta dx_k = dx_0 - (dx_0 - dx_j)\,\dfrac{k}{y_j - y_0}, \qquad k \in \langle 0,\ y_j - y_0 \rangle$   (16)

$n_k = n\left(1 - \dfrac{\rho_0 - \rho_k}{\rho_0}\,\dfrac{k}{y_j - y_0}\right), \qquad k \in \langle 0,\ y_j - y_0 \rangle$   (17)

$X_k = \left\langle \dfrac{n}{2} + \Delta dx_k - \dfrac{n_k}{2},\ \dfrac{n}{2} + \Delta dx_k + \dfrac{n_k}{2} \right\rangle, \qquad k \in \langle 0,\ y_j - y_0 \rangle,\quad n = 4112$   (18)

$Y_k = -\sqrt{\rho_k^2 - (X_k - x_m)^2} + y_m + \rho_k$   (19)
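Equations (15)–(19) can be sketched numerically as a per-row sampling loop. The boundary radii and displacements below reuse the values quoted later in Hypothesis 2, while the band height yj, the apex ym, and the assumption that the arc centre xm follows the shifted row centre are illustrative placeholders rather than the paper's calibration:

```python
import numpy as np

# Illustrative sketch of Equations (15)-(19); rho_0, rho_j, dx_0, dx_j reuse
# the Hypothesis 2 values, while y_j, y_m, and the arc-centre choice x_m are
# placeholder assumptions, not the paper's calibration.
n = 4112                        # pixels per row, as fixed in Equation (18)
rho_0, rho_j = 5467.0, 3566.0   # radii of the red and purple boundary curves
dx_0, dx_j = 337.0, 2332.0      # displacements of the red and purple curves
y_0, y_j = 0, 1000              # first and last rows of the band (illustrative)
y_m = 0.0                       # apex y of the sampled arc (placeholder)

unfolded_rows = []
for k in range(y_j - y_0 + 1):
    t = k / (y_j - y_0)
    rho_k = rho_0 - (rho_0 - rho_j) * t            # Eq. (15): radius, red -> purple
    ddx_k = dx_0 - (dx_0 - dx_j) * t               # Eq. (16): displacement correction
    n_k = n * (1.0 - (rho_0 - rho_k) / rho_0 * t)  # Eq. (17): shortened row length
    # Eq. (18): x-indices sampled for row k, centred at n/2 and shifted by ddx_k
    X_k = np.linspace(n / 2 + ddx_k - n_k / 2, n / 2 + ddx_k + n_k / 2, int(round(n_k)))
    x_m = n / 2 + ddx_k                            # assumed arc centre for row k
    # Eq. (19): y-coordinate on the source arc for every sampled x
    Y_k = -np.sqrt(rho_k ** 2 - (X_k - x_m) ** 2) + y_m + rho_k
    unfolded_rows.append((X_k, Y_k))
```

Each (X_k, Y_k) pair gives the source-image coordinates whose intensities fill row k of the unfolded picture; the first row keeps all n pixels, while the last row shrinks in proportion to the purple radius.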
Figure 13. Result of polar transformation by analytical method.
2.2. Point Cloud from the Laser Sensor
Previously published works have described the principles of working with point clouds obtained by a laser sensor [34] to obtain visual content compatible with algorithms designed for camera-obtained imagery [3,31]. The further work described in this paper is based on the results of these works. The described procedures to generate visual content from point clouds used data consisting of 12 scans, with the shape of 25,000 × 640, representing a specific part of the tire sidewall area. It focuses on the connection of specific parts of the scan, in order to obtain a scan of the whole tire sidewall. The first principle is based on finding matching geometrical data between scans, as performed by matrix matching in a stepwise manner (design of laser scanner data processing and their use in the visual inspection system, ICIE2020). This matching was accurate, but the algorithm is computationally intensive. Such a procedure took approximately 10 min on an Intel Core i7-8700. Better results were obtained by considering the pattern recognition of visual content [2], where the convolution principle was used through the function MatchTemplate, integrated into the OpenCV library [35]. The task time was reduced to 3 s, including GUI procedures, using similar hardware as in the case of matrix matching. Further work has utilized the pattern recognition method for two main reasons: the speed of the procedure and the possibility to recognize pre-defined patterns [36–38]. For this purpose, a database of pre-defined patterns, composed of letters and other features regularly occurring on tire sidewalls (as mentioned in Table 1), was created. This database contains these data:
• Coordinates of the snipped pattern;
• Pattern from the grayscale image generated from the point cloud;
• Point cloud of the pattern;
• Positions in string chain;
• Pattern from a color image converted from grayscale to color.
Table 1. Number of samples for specific scans.

Scan      Number of Samples
Scan 1    107
Scan 2    126
Scan 4    91
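The MatchTemplate-based search described above works by normalised cross-correlation; a self-contained sketch of that principle (synthetic data, not the actual pattern database) might look as follows:

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalised cross-correlation: the principle behind the
    MatchTemplate function (cf. cv2.matchTemplate with TM_CCOEFF_NORMED)."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = t_norm * np.sqrt((wz * wz).sum())
            if denom > 0:
                scores[y, x] = (wz * t).sum() / denom
    return scores

# Synthetic check: snip a patch out of a random "scan" and recover its position.
rng = np.random.default_rng(0)
scan = rng.random((40, 60))
patch = scan[12:20, 25:35].copy()
scores = match_template(scan, patch)
best = np.unravel_index(np.argmax(scores), scores.shape)  # -> (12, 25)
```

OpenCV's implementation computes the same score map far faster via convolution, which is what makes the 3 s task time reported above achievable on whole scans.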
This database was used to identify the basic patterns and basic objects identifiable on the tire surface, as illustrated in Figure 14. The main advantage is its fast application and low computational intensity when compared to a CNN. This procedure can save time and computational power, which is a necessary aspect of inspection tasks for mass production under factory conditions. According to the obtained data, it is possible to assume the position of other patterns or to correlate the positions of recognized patterns with pre-defined patterns, where irregularities between defined patterns and inspected areas indicate the possibility of an abnormality, defect, or misidentified object occurring.
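One entry of such a database could be sketched as a simple record holding the five items listed above; the field and helper names are illustrative, not taken from the authors' implementation:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PatternRecord:
    """One pre-defined pattern entry; field names are illustrative only."""
    xy: tuple                 # coordinates of the snipped pattern in the scan
    gray_patch: np.ndarray    # pattern from the grayscale image (from the point cloud)
    point_cloud: np.ndarray   # point cloud of the pattern (N x 3)
    string_position: int      # position in the string chain of sidewall symbols
    color_patch: np.ndarray   # pattern converted from grayscale to color

def make_record(xy, gray_patch, string_position):
    # Naive grayscale-to-color conversion (channel replication) and an empty
    # point cloud stand in for the real data-generation steps.
    color = np.repeat(gray_patch[..., None], 3, axis=2)
    cloud = np.zeros((gray_patch.size, 3))
    return PatternRecord(xy, gray_patch, cloud, string_position, color)
```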
Figure 14. Application of recognizing pre-defined patterns.
The ability to use pattern recognition is conditioned by the homogeneity of the data, where any variance leads to an inability to recognize patterns; for instance, this method is not frequently used when analyzing visual content from a camera, as the variance in pictures containing the same objects is typically very high. The deployment of this method in the camera vision field relies on strict adherence to conditions during capturing, such as light conditions, position, and so on. The difference in our approach lies in the application of pictures generated from laser sensors, thus being based on scanned surfaces. By contrast, the variance in these data is low; it was computed by subtracting the matched area of the first scan from the second scan, with the conditions being normalized matrices and filtered values, mainly for missing data. In matched scans (e.g., Scan 1 and Scan 3 mentioned in [2]), the differences mainly ranged between −0.2 and 0.2 mm, as displayed in Figure 15. There were occurrences of larger absolute differences, almost up to 3 mm, where the number of these values corresponded with the number of differences up to 0.4 mm. Therefore, an absolute difference in the range between 0.4 and 3 mm likely indicates a defect occurring on the scanned surface.
Figure 15. Different values in matched areas of scans.
Based on the above, it is possible to design an inspection system that is capable of detecting defects occurring on the scanned surfaces of the inspected objects. The principle is based on the definition of the correct data, denoted as the standard reference data, which represent correctly scanned objects without any defect or abnormality. Such a system is applicable in cases where the final product has a homogenous character. A particularly suitable industry for the deployment of such an inspection system is the production of printed circuit boards (PCBs) or similar objects. In the case of soft materials, such as polyurethane or rubber, it is more complicated, due to slight changes in the shapes of objects and their features. Despite non-ideal properties, we were able to use the comparative system to evaluate tires. The results are displayed in Figure 16, where three abnormalities were found. The abnormalities were all observed as deviations higher than 0.5 mm in absolute value. To process such data, it is necessary to use value clustering algorithms. Appropriate clustering algorithms are part of unsupervised learning methods in the field of artificial intelligence. The appropriate type of clustering depends on parameters, which can limit the applicability of a given method. One of these is of very high importance, especially in mass production: the time requirements. Another is the ability to separate different values into specific numbers that correspond to the number of defects occurring in the scanned object. In terms of time consumed, the best method is K-means [39,40]. Another possibility is the DBSCAN algorithm, which is a little slower (mainly when considering huge data sets) but can separate point clouds into clusters without defining the number of clusters first [41,42]. As displayed in Figure 16, this inspection system can detect abnormalities or defects. The figure shows three clusters, while more clusters were found in the data; however, they were too small and, therefore, irrelevant. Furthermore, the threshold for categorizing clusters as important or irrelevant depends on the expectations of the user of the system. A special case is Abnormality 4, an item occurring in both scans. Recognition of this abnormality was based on the way of scanning, where a geometrically diverse object is represented by small scan values but has large enough differences to have them categorized as abnormalities.
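The thresholding-plus-clustering step can be sketched as follows; the DBSCAN routine below is a minimal brute-force stand-in for the library implementations cited above, the difference matrix is synthetic, and the eps/min_pts values are illustrative:

```python
import numpy as np

def dbscan(points, eps=1.5, min_pts=3):
    """Minimal brute-force DBSCAN; a stand-in for the library implementations
    cited in the text, with illustrative eps/min_pts values."""
    n = len(points)
    labels = np.full(n, -1)  # -1 = noise / not yet assigned
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or (dists[i] <= eps).sum() < min_pts:
            continue                      # not an unassigned core point
        labels[i] = cluster
        seeds = list(np.flatnonzero(dists[i] <= eps))
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster       # reachable point joins the cluster
                if (dists[j] <= eps).sum() >= min_pts:
                    seeds.extend(np.flatnonzero(dists[j] <= eps))
        cluster += 1
    return labels

# Synthetic difference matrix (mm): two deviation blobs above the 0.5 mm threshold.
diff = np.zeros((30, 30))
diff[4:7, 5:8] = 1.2        # abnormality 1 (positive deviation)
diff[20:24, 18:21] = -0.9   # abnormality 2 (negative deviation)
ys, xs = np.nonzero(np.abs(diff) > 0.5)
labels = dbscan(np.column_stack([ys, xs]).astype(float))
n_clusters = labels.max() + 1  # two abnormalities for this synthetic case
```

No cluster count is supplied in advance, which is exactly the property that makes DBSCAN preferable here despite being slower than K-means.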
Figure 17. 3D visualization of detected abnormalities.

2.3. Fusion of Geometric Data and Pictures from the Camera
Sections 2.1 and 2.2 described the data captured from the tire sidewall using a camera (Section 2.1) and a laser sensor (Section 2.2). Each of these methods has specific advantages and disadvantages. As such, it is possible to obtain a synergic effect by combining these data and, thus, combining the advantages of the individual methods. The fusion of these data is shown in Figure 18, which compares individual and merged pictures (merged data). To merge the data, it was necessary to normalize the data types and obtain the same shape. For this reason, we developed an analytical method of unfolding, where the main aim was to suppress and minimize the inaccuracies caused by the hardware depicted in Figure 5. When we did not apply this method, blurred areas occurred in the merged data, mainly at edges, such as the borders of letters, symbols, and the tire tread. The blurring was caused by non-corresponding object positions, such as edges in different places in the visual and geometric data. To perform the merging process, it was necessary to unify the size of the fused data through resizing. Resizing the smaller data types and unifying the dimension to the larger image offered better results; otherwise, the resolution would be lost in the case of pictures with higher resolution. To fuse the data, it was necessary to define merging points, such as defined samples recognized in the captured data.
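The normalisation and resizing steps described here might be sketched as follows; the nearest-neighbour resize and the averaging blend are illustrative choices under stated assumptions, not the authors' exact procedure:

```python
import numpy as np

def to_uint8(a):
    """Normalise any numeric matrix into the common 0-255 picture range."""
    a = a.astype(float)
    span = a.max() - a.min()
    if span == 0:
        return np.zeros(a.shape, dtype=np.uint8)
    return ((a - a.min()) / span * 255).astype(np.uint8)

def resize_nearest(a, shape):
    """Nearest-neighbour resize, used here to bring the smaller data up to the
    dimension of the larger image (so no resolution is lost)."""
    ys = (np.arange(shape[0]) * a.shape[0] / shape[0]).astype(int)
    xs = (np.arange(shape[1]) * a.shape[1] / shape[1]).astype(int)
    return a[ys][:, xs]

# Illustrative fusion of a high-resolution "camera" image with a low-resolution
# "laser" depth map; the averaging blend is an assumption, not the authors' rule.
camera = np.random.default_rng(1).random((200, 300))
depth = np.random.default_rng(2).random((50, 75))
cam8 = to_uint8(camera)
dep8 = to_uint8(resize_nearest(depth, cam8.shape))
merged = ((cam8.astype(np.uint16) + dep8) // 2).astype(np.uint8)
```

In practice the merging points recognised in both data types would also be used to align the two arrays before blending; that registration step is omitted here.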
2.4. Defect Detection by RCNN with VGG-16 Network
According to previous works [18,23,44], existing defect detection methods have mainly been based on deep learning (supervised learning) using specifically designed or pre-trained CNN architectures, such as AlexNet [45], Resnet-50 [46], or VGG-16 [18]. For this reason, evaluations have been performed using these methods on specific data, such as visual content generated from laser sensor data [2]. We chose to test the VGG-16 CNN architecture, based on results presented in [18,47,48]. Training was performed on 5000 samples, including two categories of defects: impurity of the same material as the tire material (CH01), and mechanical damage to integrity (CH05). These defects were defined as the simplest to detect, due to their size and visibility in the data. Training was performed using MATLAB software. The training options were set as follows: Stochastic Gradient Descent with momentum (sgdm), minibatch size = 16, maxepochs = 5, initial learn rate = 0.000001, and execution environment = parallel. After training, the detector reached an accuracy of 93.75%. The detection was performed on 11 scans [2], where the 10 possible defects with the highest accuracies were detected in each scan. In this way, there were many incorrect detections. For instance, in Figure 19, we show the result of detection based on VGG-16 in SCAN 1, where four areas were detected to have a rubber impurity defect (CH1), but the areas were not bounded very well. The same results were observed for every scan. The other important parameter was the time to perform detection which, for one scan, ranged approximately from 100 to 125 s (i.e., SCAN 1, 116.258 s). The hardware used for training and detection was a GPU (Intel Core i7-8700, RAM 32 GB DDR4, NVIDIA GeForce RTX 2070). The size of the scan was 640 × 25,000 pixels (16 MPX). The results of this experiment appeared not to be very promising, owing to the long time required for detection and the incorrect detections despite the declared high accuracy.
2.5. Classification of Detected Abnormalities
Based on the results presented in Section 2.4, we decided to design a new VGG-16 model; however, not for performing the detection of defects but, instead, for the classification of defects from the data set. The model training was performed on
4000 samples, including two types of defects (CH1 and CH5). The validation data set
contained 1000 samples. Training settings were as follows: Optimizer = ADAM, batch
size = 96, and epochs = 20. Training was performed using KERAS. All samples were resized
to 64 × 64 pixels. The process of training is displayed in Figure 20. The evaluation of the
trained model is depicted in Figure 21, as a confusion matrix.
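The confusion-matrix evaluation shown in Figure 21 can be reproduced generically; the counts below are synthetic stand-ins for the 1000-sample validation set, not the paper's results:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=2):
    """Rows = true class, columns = predicted class (0 = CH1, 1 = CH5)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Synthetic labels standing in for the validation set; the counts are invented
# for illustration and are not the paper's measured results.
y_true = [0] * 500 + [1] * 500
y_pred = [0] * 470 + [1] * 30 + [1] * 480 + [0] * 20
cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()  # (470 + 480) / 1000 = 0.95
```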
Figure 19. Defect detection by VGG-16 in SCAN 1.

Figure 20. Process of training VGG-16 for classification.

Figure 21. Confusion matrix for classification by VGG-16 network.
The trained model was used for the classification of abnormalities obtained from the
process described in Section 2.2, where three abnormalities were detected (Figure 16). The
difference between “abnormalities” and “defects” is that an abnormality is detected by an
unsupervised method (DBSCAN) as something different, but without other information. In
real conditions, an abnormality should be labeled as a specific defect. To define and classify
recognized abnormalities, it is necessary to perform the classification of abnormalities;
for example, by training a CNN. In this case, VGG-16 was used. The classification of
abnormalities was performed, where Abnormality 7 and Abnormality 2 were classified
as rubber impurity (CH1) (Figure 22), and Abnormality 4 was classified as mechanical
damage
Figure 21. to integritymatrix
Confusion (CH5). The accuracy
for classification by of classification
VGG-16 network. for Abnormality 7 as rubber
impurity was 73.11% in the graph (Figure 22; left), corresponding to the label “0”. The time
necessary for the classification of one item was approximately one second, using the same hardware as mentioned above.
Figure 22. Classification of Abnormality 7.
3. Results
In Section 2, we described five main methods. The first involved processing visual data captured by a camera (Section 2.1, Camera vision). The second involved processing data captured by a laser sensor (Section 2.2, Point cloud from the laser sensor). The third involved fusing the data obtained from the above. The fourth involved the use of a deep learning method (i.e., VGG-16) for defect detection in visual data from the laser sensor as part of tire inspection. The final section described the application of VGG-16 for the classification of abnormalities obtained by the process described in Section 2.2. In the Introduction, we posed four hypotheses, answered in the following:
Hypothesis 1 (H1). Yes, it is possible to automate the polar transform procedure of the captured partial tire sidewall using a system based on polynomial regression.
Hypothesis 2 (H2). Yes, it is possible to compensate for the inaccuracies in visual data captured by a camera, with a minimum of two boundary conditions, through the polynomial expression of the detected edges. The concept is illustrated by the green Ro values corresponding to the Cartesian coordinate system, where Ro for the red curve was 5467 and for the purple curve was 3566. The positions of the curves were dx0 = 337 and dxj = 2332 (Figure 10), where the difference in Ro values was 1901 and the difference in the positions of the curves was 1995, which represents the compensation for the inappropriately set up hardware.
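The compensation figures quoted above follow from simple differences of the stated boundary parameters:

```python
# Quick check of the Hypothesis 2 figures: the difference in Ro values and the
# difference in curve positions quantify the misalignment being compensated.
ro_red, ro_purple = 5467, 3566   # Ro of the red and purple curves
dx_0, dx_j = 337, 2332           # positions of the curves
ro_difference = ro_red - ro_purple       # 1901
position_difference = dx_j - dx_0        # 1995
```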
Hypothesis 3 (H3). In the Materials and Methods section, we applied a deep learning method, that is, an R-CNN based on the VGG-16 network. The results for the visual data generated from the laser sensors showed little potential for application.
Hypothesis 4 (H4). We described a tire inspection system using unsupervised learning (in particular, the DBSCAN algorithm), which showed good potential for application. The main advantage lies in detecting abnormalities, which can be further classified as defects or acceptable items.
Figure 23. Overall design of proposed tire inspection system.
to pre-defined samples are described in Table 1. Pattern recognition was closely described
for processing 3D data from laser sensors into visual content [2], for which we used the
function cv2.matchTemplate integrated into the OpenCV library. The condition for using
this method is that the data must be homogenous, as illustrated in Figure 15. In the case of
conventional images, the implementation of pattern recognition is complicated by the high
dispersion caused by the light conditions and requires the strict positioning of the object
during the process of capturing images. The other advantages of using geometrical data for
the tire inspection system are explicitly defining the correct areas and the high possibility
of capturing geometrical abnormalities, as displayed in Figure 16. Abnormalities were
classified as clusters, separated by the unsupervised learning algorithm DBSCAN, which
is able to separate data into a specific number of clusters based on density without a pre-
defined number of clusters. Data chosen for clustering were defined based on differences in
3D data; specifically, in the z-matrix, where the threshold was defined as 0.5 mm, according
to the data displayed in Figure 15. The detected abnormalities can be described in 3D,
as displayed in Figure 17. The classification of detected abnormalities is the subject of
Section 2.5, where the VGG-16 network architecture was utilized by means of KERAS in
Python. The detected abnormalities were correctly classified as rubber impurity (CH1)
for Abnormalities 2 and 7, and mechanical damage to integrity (CH5) for Abnormality 4.
Additionally, we described the use of the R-CNN method for defect detection, performed in MATLAB, with the goal of designing a tire inspection system based only on supervised learning. The results of this experiment did not show much promise, and the design of the model (neural network) would need considerable improvement. In Section 2.3, we described the fusion
of the captured data (by camera and laser). Fusion was performed by manual matching
and resizing data to a uniform size. Centering was based on the upper-left corner of the specified rectangular area, which represents the same object pattern in both types of data.
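The normalized cross-correlation behind cv2.matchTemplate can be illustrated in a few lines. The sketch below is a pure-Python stand-in for the TM_CCORR_NORMED mode; the toy intensity grid and pattern are invented for illustration, and OpenCV's optimized implementation is what the system actually uses:

```python
import math

def match_template(image, template):
    """Slide `template` over `image` and return the top-left position with
    the highest normalized cross-correlation score (the idea behind
    cv2.matchTemplate with TM_CCORR_NORMED)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    t_norm = math.sqrt(sum(v * v for row in template for v in row))
    best_pos, best_score = (0, 0), -1.0
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            num = 0.0      # cross-correlation numerator
            win_sq = 0.0   # squared energy of the image window
            for j in range(th):
                for i in range(tw):
                    pix = image[y + j][x + i]
                    num += template[j][i] * pix
                    win_sq += pix * pix
            denom = t_norm * math.sqrt(win_sq)
            score = num / denom if denom else 0.0
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score

# Toy homogeneous "depth image" with a bright 2x2 pattern at (2, 1).
img = [[0, 0, 0, 0, 0],
       [0, 0, 9, 9, 0],
       [0, 0, 9, 9, 0],
       [0, 0, 0, 0, 0]]
pattern = [[9, 9],
           [9, 9]]
pos, score = match_template(img, pattern)
print(pos)  # (2, 1)
```

With geometric data rendered as homogeneous images, the highest-scoring position locates the pre-defined sample, which is why the homogeneity condition above matters.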
An overview of the constructed tire inspection system is shown in Figure 18, where
the described methods include the camera image polar transform based on polynomial
regression (supervised learning). For the data from the laser sensor, the visual content
generated from geometric data is used, pattern recognition of pre-defined samples by
cv2.matchTemplate is carried out, the detection of abnormalities is performed by the
DBSCAN algorithm (unsupervised learning), and the classification of abnormalities into defects relies on the VGG-16 network architecture (deep learning).
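The abnormality-detection step described above (threshold the z-differences, then cluster the exceeding points by density) can be sketched with a minimal DBSCAN. The implementation and the sample points below are illustrative only; in practice a library version such as sklearn.cluster.DBSCAN would be used:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each 2D point with a cluster id, or -1 for noise."""
    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # noise (may still become a border point later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:  # core point: keep expanding
                queue.extend(k for k in j_seeds if labels[k] is None)
    return labels

# Points whose z-difference exceeded the 0.5 mm threshold:
# two dense groups (candidate abnormalities) and one isolated outlier.
pts = [(0, 0), (0, 1), (1, 0), (1, 1),   # abnormality A
       (10, 10), (10, 11), (11, 10),     # abnormality B
       (30, 30)]                         # noise
labels = dbscan(pts, eps=1.6, min_pts=3)
print(labels)  # [0, 0, 0, 0, 1, 1, 1, -1]
```

Each returned cluster id corresponds to one candidate abnormality, which is then cropped and passed to the VGG-16 classifier.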
4. Discussion
As mentioned in the results, Section 2 was divided into five subsections. The first was
focused on the polar transform for the unfolding of the part of the annulus, capturing part
of the tire sidewall. We covered the conventional method of unfolding by circle Hough
transform and polar transform to the Cartesian coordinate system. The disadvantage
of this method is the non-concentricity of the detected circle and the real tire sidewall,
which manifested as the first harmonic component shown in Figure 3. The next step
was the development and description of a method for the polar transform of the part of
the sidewall based on boundary conditions. The boundary conditions were set as the
detected edges describing an area of the captured surface. Edge detection is sensitive to
the characteristics of the original captured picture. This paper presents the use of picture
modification to suppress the background of the original picture. In a fully automated image-capturing implementation, the camera should be complemented by an optimized background on the tire inspection stand, one that allows for clear separation of the tire sidewall. Figure 5 shows inaccuracies
in the capturing of tire sidewalls when using a camera. Eccentricity by the angle ε causes the captured circle to be projected as an ellipse, deforming the captured shape.
For this reason, a more accurate method could modify the circle equation to that of an
ellipse by adding appropriate parameters. In the case of a very low angle, the impact is
negligible. Using polynomial regression, the parameters of the ellipse are compensated by the values “a” and “b” shown in Equation (8). Therefore, we can perform unfolding while compensating for the described inaccuracies.
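The unfolding step itself amounts to an inverse polar mapping: each pixel of the output rectangle is sampled from the annulus at a given radius and angle. The following is a minimal sketch with hypothetical center, radii, and angular range, where the factors a and b stand in for the ellipse compensation of Equation (8):

```python
import math

def unfold(image, cx, cy, r_in, r_out, ang0, ang1, out_w, out_h, a=1.0, b=1.0):
    """Inverse polar transform: sample the annular sector between radii
    r_in..r_out and angles ang0..ang1 (radians) into an out_w x out_h
    rectangle. `a` and `b` stretch the x/y radii to approximate the
    ellipse compensation (both 1.0 gives a plain circle)."""
    h, w = len(image), len(image[0])
    out = [[0] * out_w for _ in range(out_h)]
    for row in range(out_h):
        r = r_in + (r_out - r_in) * row / (out_h - 1)
        for col in range(out_w):
            theta = ang0 + (ang1 - ang0) * col / (out_w - 1)
            # Nearest-neighbour sampling on the (possibly elliptical) arc.
            x = int(round(cx + a * r * math.cos(theta)))
            y = int(round(cy + b * r * math.sin(theta)))
            if 0 <= x < w and 0 <= y < h:
                out[row][col] = image[y][x]
    return out

# Synthetic 21x21 image whose pixel value equals its rounded radius from (10, 10).
img = [[round(math.hypot(x - 10, y - 10)) for x in range(21)] for y in range(21)]
strip = unfold(img, cx=10, cy=10, r_in=4, r_out=8,
               ang0=0.0, ang1=math.pi / 2, out_w=8, out_h=5)
# Each output row now holds pixels sampled at one constant radius.
```

On the synthetic image, the first output row contains only values near radius 4 and the last only values near radius 8, which is exactly the rectification the subsequent processing relies on.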
Sensors 2021, 21, 7073 21 of 24
5. Conclusions
In this paper, we described the design of a hybrid tire inspection system utilizing both
3D and 2D data. The applied algorithm combines both supervised and unsupervised learning methods. In terms of supervised learning, it uses pattern recognition and polynomial
regression, while, in terms of unsupervised learning, the DBSCAN algorithm is used for the
clustering task. Polynomial regression was used to automate the compensation of the inaccuracies described in Figure 5, thus replacing the conventional methods described in Section 2. Further work should involve modifying the tire inspection stand and its background design so that areas of the tire sidewall can be identified and, possibly, automatically separated from the background. Another possible improvement would be to
replace the circle with an ellipse in the relevant equation provided in Section 2. To capture
the visual character of the surface, a conventional camera was used. The polar transform of the images was necessary to allow their unification with the data obtained from
a laser sensor. In further work, the conventional camera may be replaced by a line-scan
camera, which would allow for obtaining a much higher resolution and the same character
of the captured surface as that derived from the laser sensor. We used a conventional camera because of its availability and, additionally, its ability to capture color. A color line-scan camera would complicate this choice, as that type of camera is quite rare on the market.
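The polynomial regression used for the compensation reduces to an ordinary least-squares fit. Below is a self-contained sketch via the normal equations; the degree and sample values are illustrative, and in practice numpy.polyfit would be used:

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (a tiny stand-in for numpy.polyfit; adequate for low degrees)."""
    n = degree + 1
    # Build the normal-equation system A c = b for coefficients c0..c_degree.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs  # [c0, c1, c2, ...] for c0 + c1*x + c2*x^2 + ...

# Points lying exactly on y = 1 + 2x + 3x^2 are recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1 + 2 * x + 3 * x * x for x in xs]
c = polyfit(xs, ys, 2)
print([round(v, 6) for v in c])  # [1.0, 2.0, 3.0]
```

In the inspection pipeline, such a fit over detected edge points yields the smooth boundary curve that drives the automated compensation.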
The laser sensor greatly expands the possibilities of hybrid tire inspection systems, which are based mainly on the use of geometric data. Geometric data reflect the real topology of the scanned surface, and the much higher resistance of the laser sensor to lighting conditions means that its data are more stable than those obtained using a standard camera. Laser sensors can work
as line cameras but, in comparison to conventional cameras, the resolution is lower, and
they offer only grayscale images captured solely in the wavelength of the laser used. The
application of a camera allows for the use of devices that operate on color images. In
the introduction, we mentioned the use of CNNs for defect detection in wafers in the
semiconductor industry. An important observation is that conventional inspection systems work well when deep learning methods are applied: their results are generally very good, achieving defect-recognition accuracies of 94% and higher.
However, there is also a need to manage larger data sets in order to train such models.
Deep learning methods of this kind belong to the category of supervised learning. This means that defect detection is achieved through the use of a training
data set and manual labeling of the defects. Therefore, in the case of very specific or
abnormal defects occurring, the supervised system may not be able to capture the defect
due to the absence of training data guiding the model to detect these types of defects.
The proposed method works on 3D data and uses an unsupervised learning approach—a
specific DBSCAN method—to identify abnormalities without the need to train or adapt tire
inspection systems to capture such defects. In the future, it will be necessary to implement
the described tire inspection system on a larger scale for every conceivable type of defect,
including items such as barcodes and labels, and to perform verification of the defect.
The system described in this paper can, in the future, replace conventional methods in
inspection systems; however, it is still necessary to address the aspects noted above as missing from this inspection system and to verify its use in other industrial fields; for example, in the semiconductor industry, for such tasks as detecting defects on PCBs and wafers. Furthermore,
in the future, evaluating the angles α and ε mentioned in Figure 5 should be explored, which could lead to a camera calibration system that obtains higher accuracy.
Author Contributions: Conceptualization, J.K.; methodology, J.K.; software, J.K.; validation, I.K.
and M.C.; formal analysis, M.C. and J.K.; investigation, A.H. and J.K.; resources, I.K. and J.K.; data
curation, I.K. and J.K.; writing—original draft preparation, J.K.; writing—review and editing, M.C.
and I.K.; visualization, M.C., J.K. and I.K.; supervision, A.H. and M.S.; project administration, I.K.
and M.S.; funding acquisition, I.K., D.W. and M.S. All authors have read and agreed to the published
version of the manuscript.
Funding: This work was supported by the Slovak Research and Development Agency under contract
No. APVV-16-0283.
References
1. Kuric, I.; Císar, M.; Tlach, V.; Zajačko, I.; Gál, T.; Więcek, D. Technical Diagnostics at the Department of Automation and Production Systems. In Advances in Ergonomics in Design; Springer: Berlin/Heidelberg, Germany, 2019; Volume 835, pp. 474–484.
[CrossRef]
2. Klarak, J.; Kuric, I.; Cisar, M.; Stancek, J.; Hajducik, A.; Tucki, K. Processing 3D Data from Laser Sensor into Visual Content Using
Pattern Recognition. In Proceedings of the 2021 IEEE 8th International Conference on Industrial Engineering and Applications
(ICIEA), Kyoto, Japan, 23–29 April 2021; pp. 543–549. [CrossRef]
3. Klarák, J.; Hajdučík, A.; Bohušík, M.; Kuric, I. Methods of Processing Point Cloud to Achieve Improvement Data Possibilities.
In Projektowanie, Badania i Eksploatacja’ 2020; p. 419. Available online: http://www.engineerxxi.ath.eu/book/projektowanie-badania-i-eksploatacja2020/ (accessed on 19 February 2021).
4. Kandera, M. Design of Methodology for Testing and Defect Detection Using Artificial Intelligence Methods. 2020. Available
online: http://opac.crzp.sk/?fn=detailBiblioForm&sid=D51B1947951498618DF67753D437&seo=CRZP-detail-kniha (accessed
on 24 October 2019).
5. Transfer Learning Using AlexNet—MATLAB & Simulink—MathWorks United Kingdom. Available online: https:
//uk.mathworks.com/help/deeplearning/examples/transfer-learning-using-alexnet.html (accessed on 14 October 2019).
6. Massaro, A.; Dipierro, G.; Cannella, E.; Galiano, A.M. Comparative Analysis among Discrete Fourier Transform, K-Means
and Artificial Neural Networks Image Processing Techniques Oriented on Quality Control of Assembled Tires. Information
2020, 11, 257. [CrossRef]
7. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in
Neural Information Processing Systems; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates, Inc.:
New York, NY, USA, 2012; pp. 1097–1105.
8. Borish, M.; Post, B.K.; Roschli, A.; Chesser, P.C.; Love, L.J. Real-Time Defect Correction in Large-Scale Polymer Additive
Manufacturing via Thermal Imaging and Laser Profilometer. Procedia Manuf. 2020, 48, 625–633. [CrossRef]
9. Borish, M.; Post, B.K.; Roschli, A.; Chesser, P.C.; Love, L.J.; Gaul, K.T. Defect Identification and Mitigation Via Visual Inspection in
Large-Scale Additive Manufacturing. JOM 2019, 71, 893–899. [CrossRef]
10. Mullan, F.; Mylonas, P.; Parkinson, C.; Bartlett, D.; Austin, R. Precision of 655 nm Confocal Laser Profilometry for 3D surface
texture characterisation of natural human enamel undergoing dietary acid mediated erosive wear. Dent. Mater. 2018, 34, 531–537.
[CrossRef] [PubMed]
11. Lung, C.W.; Chiu, Y.C.; Hsieh, C.W. A laser-based vision system for tire tread depth inspection. In Proceedings of the 2016 IEEE
International Symposium on Computer, Consumer and Control, IS3C 2016, Xi’an, China, 4–6 July 2016; pp. 850–853. [CrossRef]
12. Li, J.; Huang, Y. Automatic inspection of tire geometry with machine vision. In Proceedings of the 2015 IEEE International
Conference on Mechatronics and Automation, ICMA 2015, Beijing, China, 2–5 August 2015; pp. 1950–1954. [CrossRef]
13. Mital’, G.; Dobránsky, J.; Ružbarský, J.; Olejárová, Š. Application of Laser Profilometry to Evaluation of the Surface of the
Workpiece Machined by Abrasive Waterjet Technology. Appl. Sci. 2019, 9, 2134. [CrossRef]
14. Wang, G.; Zheng, B.; Li, X.; Houkes, Z.; Regtien, P. Modelling and calibration of the laser beam-scanning triangulation measure-
ment system. Robot. Auton. Syst. 2002, 40, 267–277. [CrossRef]
15. Guberman, N. On Complex Valued Convolutional Neural Networks. February 2016. Available online: https://arxiv.org/abs/1602.09046v1 (accessed on 20 October 2021).
16. Tao, X.; Zhang, D.; Ma, W.; Liu, X.; Xu, D. Automatic Metallic Surface Defect Detection and Recognition with Convolutional
Neural Networks. Appl. Sci. 2018, 8, 1575. [CrossRef]
17. Chien, J.-C.; Wu, M.-T.; Lee, J.-D. Inspection and Classification of Semiconductor Wafer Surface Defects Using CNN Deep
Learning Networks. Appl. Sci. 2020, 10, 5340. [CrossRef]
18. Chen, X.; Chen, J.; Han, X.; Zhao, C.; Zhang, D.; Zhu, K.; Su, Y. A Light-Weighted CNN Model for Wafer Structural Defect
Detection. IEEE Access 2020, 8, 24006–24018. [CrossRef]
19. Doğru, A.; Bouarfa, S.; Arizar, R.; Aydoğan, R. Using Convolutional Neural Networks to Automate Aircraft Maintenance Visual
Inspection. Aerospace 2020, 7, 171. [CrossRef]
20. Shi, J.; Li, Z.; Zhu, T.; Wang, D.; Ni, C. Defect Detection of Industry Wood Veneer Based on NAS and Multi-Channel Mask R-CNN.
Sensors 2020, 20, 4398. [CrossRef]
21. Wang, T.; Chen, Y.; Qiao, M.; Snoussi, H. A fast and robust convolutional neural network-based defect detection model in product
quality control. Int. J. Adv. Manuf. Technol. 2018, 94, 3465–3471. [CrossRef]
22. Su, B.; Chen, H.; Zhou, Z. BAF-Detector: An Efficient CNN-Based Detector for Photovoltaic Cell Defect Detection. IEEE Trans.
Ind. Electron. 2021. [CrossRef]
23. Jing, J.; Ma, H.; Zhang, H. Automatic fabric defect detection using a deep convolutional neural network. Color. Technol.
2019, 135, 213–223. [CrossRef]
24. Yun, J.P.; Shin, W.C.; Koo, G.; Kim, M.S.; Lee, C.; Lee, S.J. Automated defect inspection system for metal surfaces based on deep
learning and data augmentation. J. Manuf. Syst. 2020, 55, 317–324. [CrossRef]
25. Sun, X.; Gu, J.; Tang, S.; Li, J. Research Progress of Visual Inspection Technology of Steel Products—A Review. Appl. Sci.
2018, 8, 2195. [CrossRef]
26. Chang, C.-Y.; Li, C.; Chang, J.-W.; Jeng, M. An unsupervised neural network approach for automatic semiconductor wafer defect
inspection. Expert Syst. Appl. 2009, 36, 950–958. [CrossRef]
27. Le, D.C.; Zincir-Heywood, A.N. Evaluating Insider Threat Detection Workflow Using Supervised and Unsupervised Learning.
In Proceedings of the 2018 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 24 May 2018; pp. 270–275.
[CrossRef]
28. Chen, C.-C.; Juan, H.-H.; Tsai, M.-Y.; Lu, H.H.-S. Unsupervised Learning and Pattern Recognition of Biological Data Structures
with Density Functional Theory and Machine Learning. Sci. Rep. 2018, 8, 1–11. [CrossRef]
29. Zivkovic, Z.; Van Der Heijden, F. Recursive unsupervised learning of finite mixture models. IEEE Trans. Pattern Anal. Mach. Intell.
2004, 26, 651–656. [CrossRef]
30. Kuric, I.; Kandera, M.; Klarák, J.; Ivanov, V.; Więcek, D. Visual Product Inspection Based on Deep Learning Methods. In Advances
in Mechanical Engineering; Springer: Berlin/Heidelberg, Germany, 2020; pp. 148–156. [CrossRef]
31. Klarák, J.; Kandera, M.; Kuric, I. Transformation of Point Cloud into the Two-Dimensional Space Based on Fuzzy Logic Principles.
2019. Available online: http://www.engineerxxi.ath.eu/book/designing-researches-and-exploitation-2019-vol-1/ (accessed on
26 October 2020).
32. Davies, E. A modified Hough scheme for general circle location. Pattern Recognit. Lett. 1988, 7, 37–43. [CrossRef]
33. Seo, S.-W.; Kim, M. Efficient architecture for circle detection using Hough transform. In Proceedings of the International
Conference on ICT Convergence 2015: Innovations Toward the IoT, 5G and Smart Media Era, ICTC 2015, Jeju Island, Korea,
28–30 October 2015; pp. 570–572. [CrossRef]
34. Che, E.; Jung, J.; Olsen, M.J. Object Recognition, Segmentation, and Classification of Mobile Laser Scanning Point Clouds: A State
of the Art Review. Sensors 2019, 19, 810. [CrossRef] [PubMed]
35. OpenCV: Template Matching. Available online: https://docs.opencv.org/master/d4/dc6/tutorial_py_template_matching.html
(accessed on 1 June 2021).
36. Briechle, K.; Hanebeck, U.D. Template Matching using Fast Normalized Cross Correlation. Opt. Pattern Recognit. XII
2001, 4387, 95–103.
37. Tsai, D.-M.; Lin, C.-T. Fast normalized cross correlation for defect detection. Pattern Recognit. Lett. 2003, 24, 2625–2631. [CrossRef]
38. Friemel, B.H.; Bohs, L.N.; Trahey, G.E. Relative performance of two-dimensional speckle-tracking techniques: Normalized
correlation, non-normalized correlation and sum-absolute-difference. In Proceedings of the IEEE Ultrasonics Symposium,
Seattle, WA, USA, 7–10 November 1995; Volume 2, pp. 1481–1484. [CrossRef]
39. Bholowalia, P.; Kumar, A. EBK-Means: A Clustering Technique Based on Elbow Method and K-Means in WSN. Int. J. Comput.
Appl. 2014, 105, 17–24.
40. Visualizing K-Means Clustering. Available online: https://www.naftaliharris.com/blog/visualizing-k-means-clustering/
(accessed on 14 October 2019).
41. Arlia, D.; Coppola, M. Experiments in Parallel Clustering with DBSCAN. Lect. Notes Comput. Sci. 2001, 2150, 326–331. [CrossRef]
42. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with
Noise. 1996. Available online: www.aaai.org (accessed on 10 October 2019).
43. Zhou, Q.-Y.; Park, J.; Koltun, V. Open3D: A Modern Library for 3D Data Processing. Available online: http://www.open3d
(accessed on 26 October 2020).
44. Wang, J.; Xu, C.; Yang, Z.; Zhang, J.; Li, X. Deformable Convolutional Networks for Efficient Mixed-Type Wafer Defect Pattern
Recognition. IEEE Trans. Semicond. Manuf. 2020, 33, 587–596. [CrossRef]
45. Zhang, Y.; Cui, X.; Liu, Y.; Yu, B. Tire Defects Classification Using Convolution Architecture for Fast Feature Embedding. Int. J.
Comput. Intell. Syst. 2018, 11, 1056–1066. [CrossRef]
46. Wen, L.; Li, X.; Gao, L. A transfer convolutional neural network for fault diagnosis based on ResNet-50. Neural Comput. Appl.
2020, 32, 6111–6124. [CrossRef]
47. Perez, H.; Tah, J.H.M.; Mosavi, A. Deep Learning for Detecting Building Defects Using Convolutional Neural Networks. Sensors
2019, 19, 3556. [CrossRef]
48. Xu, X.; Zheng, H.; Guo, Z.; Wu, X.; Zheng, Z. SDD-CNN: Small Data-Driven Convolution Neural Networks for Subtle Roller
Defect Inspection. Appl. Sci. 2019, 9, 1364. [CrossRef]