Being able to identify defects is an essential step in manufacturing processes. Yet not all defects are known and sufficiently well described in image databases. The challenge we address in this paper is to detect any defect by fitting a model using only normal samples of industrial parts. For this purpose, we evaluate the fast AnoGAN (f-AnoGAN) approach, which is based on a generative adversarial network (GAN). The method is an unsupervised learning algorithm comprising two phases: first, we train a generative model using only normal images, which provides a fast mapping of new data into the latent space; second, we add and train an encoder to reconstruct images. The anomaly score is defined by the reconstruction error between the defective data and the reconstructed data, together with the residual error of the discriminator. For our experiments, we use two sets of industrial data: the MVTec Anomaly Detection dataset and a private dataset based on thermal-wave imaging used for non-destructive testing, a technique widely used in research for the evaluation of industrial materials. Applying f-AnoGAN in this domain yields high anomaly detection accuracy.
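The scoring rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the generator `G`, encoder `E`, and discriminator feature map `D_feat` are toy linear stand-ins for the trained networks, and the weighting term `kappa` is an assumed hyperparameter.

```python
import numpy as np

def anomaly_score(x, G, E, D_feat, kappa=1.0):
    """f-AnoGAN-style score: image reconstruction error plus
    discriminator feature residual."""
    z = E(x)                       # fast mapping of the image into latent space
    x_hat = G(z)                   # reconstruction via the generator
    img_err = np.mean((x - x_hat) ** 2)                    # residual in image space
    feat_err = np.mean((D_feat(x) - D_feat(x_hat)) ** 2)   # residual in feature space
    return img_err + kappa * feat_err

# Toy stand-ins (placeholders for trained networks):
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))
G = lambda z: z @ W              # latent (8-d) -> image (64-d)
E = lambda x: x @ W.T / 64.0     # image -> latent (rough pseudo-inverse)
D_feat = lambda x: x[:4]         # crude "discriminator features"

x_normal = G(E(rng.normal(size=64)))   # lies near the generator manifold
x_defect = x_normal + 5.0              # perturbed, off-manifold sample
print(anomaly_score(x_normal, G, E, D_feat) < anomaly_score(x_defect, G, E, D_feat))
```

A sample the generator can reproduce scores low; a sample off the learned manifold cannot be reconstructed and scores high.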
This work presents how deflectometry can be coupled with a light-field camera to better characterize and quantify the depth of anomalies on specular surfaces. In our previous work [1], we proposed a new scanning scheme for the detection and 3D reconstruction of defects on reflective objects. However, the quality of the reconstruction was strongly dependent on the object-camera distance, which was required as an external input parameter. In this paper, we propose a new approach that integrates an estimation of this distance into our system by replacing the standard camera with a light-field camera.
3D modeling of scene contents is of increasing importance for many computer-vision-based applications. In particular, industrial applications of computer vision require efficient tools for computing this 3D information. Stereo-vision is routinely used as a powerful technique to obtain the 3D outlines of imaged objects from the corresponding 2D images; however, outlines alone provide only a poor and partial description of the scene contents. On the other hand, structured-light-based reconstruction techniques can often compute the 3D surfaces of imaged objects with high accuracy, but the resulting active range data fail to characterize the object edges. Thus, in order to benefit from the strengths of both acquisition techniques, we introduce in this paper promising approaches for computing a complete 3D reconstruction based on the cooperation of two complementary acquisition and processing techniques, in our case stereoscopic and structured-light-based methods, providing two 3D data sets that describe respectively the outlines and the surfaces of the imaged objects. We present, accordingly, the principles of three fusion techniques and compare them using evaluation criteria related to the nature of the workpiece and the type of application addressed. The proposed fusion methods rely on geometric characteristics of the workpiece, which favor the quality of the registration. Further, the results obtained demonstrate that the developed approaches are well suited to 3D modeling of manufactured parts including free-form surfaces and, consequently, to quality control applications using these 3D reconstructions.
In previous works, we extended the principles of “variable homography”, defined by Zhang and Greenspan, to measure the height of emergent fibers on glass and non-woven fabrics. This method was designed for fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we retain the advantages of variable homography for measurements along the Z axis, but we reduce the number of acquisitions to a single one by developing an acquisition device characterized by four lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device thus becomes a plenoptic, or light-field, camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation to this device, and we propose a new formulation for computing depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.
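As a rough illustration of the underlying idea (not the paper's exact formulation), depth can be recovered by searching for the plane distance whose induced homography best transfers a point between two views. The intrinsics `K`, baseline `t`, and plane normal `n` below are illustrative values.

```python
import numpy as np

K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
R = np.eye(3)                       # pure lateral translation between views
t = np.array([0.05, 0.0, 0.0])      # baseline in metres (illustrative)
n = np.array([0.0, 0.0, -1.0])      # normal of a fronto-parallel plane

def H_of_Z(Z):
    """Plane-induced homography for a plane at distance Z."""
    return K @ (R + np.outer(t, n) / Z) @ np.linalg.inv(K)

def transfer(H, p):
    """Apply a homography to a 2-D pixel coordinate."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def depth_from_match(p1, p2, z_grid):
    """Pick the plane distance whose homography best maps p1 onto p2."""
    errs = [np.linalg.norm(transfer(H_of_Z(Z), p1) - p2) for Z in z_grid]
    return z_grid[int(np.argmin(errs))]

# Synthetic check: a point lying on a plane at Z = 0.8 m
Z_true = 0.8
p1 = np.array([300.0, 200.0])
p2 = transfer(H_of_Z(Z_true), p1)
z_grid = np.linspace(0.5, 1.2, 701)
print(depth_from_match(p1, p2, z_grid))  # close to 0.8
```

With a four-lens plenoptic device, the same search can be run over the sub-image pairs sharing the single sensor.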
During foreign operations, Improvised Explosive Devices (IEDs) are one of the major threats that soldiers may encounter along their itineraries. Based on a vehicle-mounted camera, we propose an original image-comparison approach to detect significant changes on these roads. Classic 2D image registration techniques do not take parallax into account, so misregistration errors can be detected as changes. Following stereovision principles, our automatic method compares intensity profiles along corresponding epipolar lines by extrema matching. An adaptive space warping compensates for scale differences in the 3D scene. Once the signals are matched, the signal difference highlights changes, which are marked in the current video.
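A minimal sketch of the profile-comparison step described above, with the extrema matching and adaptive space warping replaced for brevity by a simple uniform resampling; the threshold `thresh` and the synthetic profiles are assumptions.

```python
import numpy as np

def compare_profiles(ref, cur, thresh=0.3):
    """Resample `cur` onto `ref`'s sampling (crude scale compensation),
    then flag samples whose intensity difference exceeds `thresh`."""
    x_ref = np.linspace(0.0, 1.0, len(ref))
    x_cur = np.linspace(0.0, 1.0, len(cur))
    cur_w = np.interp(x_ref, x_cur, cur)   # warp the current profile
    diff = np.abs(ref - cur_w)             # residual after matching
    return diff > thresh                   # change mask along the epipolar line

# Synthetic epipolar profiles: same scene seen at two scales, plus one change
ref = np.sin(8 * np.pi * np.linspace(0, 1, 200))
cur = np.sin(8 * np.pi * np.linspace(0, 1, 260))   # rescaled view
cur[130:140] += 2.0                                # an inserted object
mask = compare_profiles(ref, cur)
print(mask.sum() > 0)
```

After warping, the two profiles coincide except where the scene actually changed, so the residual isolates the inserted object.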
A fabric's smoothness is a key factor in determining the quality of finished textile products and has a great influence on the functionality of industrial textiles and high-end textile products. With the popularization of the zero-defect industrial concept, identifying and measuring defective material in the early stages of production is of great interest to the industry. In the current market, many systems are able to achieve automatic monitoring and control of fabric, paper, and nonwoven material during the entire production process; however, online measurement of hairiness is still an open topic and highly desirable for industrial applications. We propose a computer vision approach that computes the epipole by using variable homography, which can be used to measure emergent fiber length on textile fabrics. The main challenges addressed in this paper are the application of variable homography to textile monitoring and measurement, as well as the accuracy of the estimated calculation. We propose that a fibrous structure can be considered a two-layer structure, and we then show how variable homography combined with epipolar geometry can estimate the length of fiber defects. Simulations are carried out to show the effectiveness of this method. The true length of selected fibers is measured precisely using a digital optical microscope, and the same fibers are then tested with our method. Our experimental results suggest that smoothness monitoring by variable homography is an accurate and robust method of quality control for important industrial fabrics.
KEYWORDS: Light sources, 3D modeling, Image processing, 3D image processing, 3D image reconstruction, Image quality, Sensors, Control systems, 3D acquisition, Cameras
Accuracy of 3D vision-based reconstruction tasks depends both on the complexity of the analyzed objects and on good viewing/illumination conditions, which ensure image quality and consequently minimize measurement errors after processing of the acquired images. In this contribution, as a complement to an autonomous cognitive vision system automating 3D reconstruction and using Situation Graph Trees (SGTs) as a planning/control tool, these graphs are optimized in two steps. The first (off-line) step addresses the placement of lighting sources, with the aim of finding positions that minimize processing errors during the subsequent reconstruction steps. In the second step, on-line application of the SGT-based control module focuses on adjusting illumination conditions (e.g., intensity), eventually leading to process re-planning and further enabling optimal extraction of the contour data required for 3D reconstruction. The whole illumination optimization procedure has been fully automated and included in the dynamic (re-)planning tool for vision-based reconstruction tasks, e.g., for quality control applications.
A fabric's smoothness is a key factor in determining the quality of finished textile products and has a great influence on the functionality of industrial textiles and high-end textile products. With the popularization of the 'zero defect' industrial concept, identifying and measuring defective material in the early stages of production is of great interest to the industry. In the current market, many systems are able to achieve automatic monitoring and control of fabric, paper, and nonwoven material during the entire production process; however, online measurement of hairiness is still an open topic and highly desirable for industrial applications.
In this paper we propose a computer vision approach, based on variable homography, which can be used to measure the length of emergent fibers on textile fabrics. The main challenges addressed in this paper are the application of variable homography to textile monitoring and measurement, as well as the accuracy of the estimated calculation. We propose that a fibrous structure can be considered a two-layer structure, and we then show how variable homography can estimate the length of fiber defects. Simulations are carried out to show the effectiveness of this method for measuring emergent fiber length. The true lengths of selected fibers are measured precisely using a digital optical microscope, and the same fibers are then tested with our method. Our experimental results suggest that smoothness monitoring by variable homography is an accurate and robust method for quality control of important industrial fabrics.
The paper presents an approach to error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and is built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGTs) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation, and comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether or not the reconstruction results fulfill tolerance rules. Thus, the goal is to evaluate the error of each step of the stereo-vision-based 3D reconstruction independently (e.g., calibration, contour segmentation, matching, and reconstruction) and then to estimate the error of the whole system. In this contribution, we analyze in particular the segmentation error due to localization errors of extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates makes it possible to evaluate the quality of the 3D reconstruction, as illustrated by the experimental results shown.
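The use of fitting parameters as a quality measure can be illustrated with a least-squares line fit whose residuals yield 1-sigma confidence intervals on the line parameters. This is a simplified stand-in for the full error propagation, with synthetic edge points in place of real segmentation output.

```python
import numpy as np

def fit_line_with_uncertainty(x, y):
    """Least-squares line fit y = a*x + b with 1-sigma parameter errors
    derived from the residuals (the fit's quality measure)."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    dof = len(x) - 2                             # degrees of freedom
    sigma2 = float(res[0]) / dof                 # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)        # parameter covariance
    return coef, np.sqrt(np.diag(cov))

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=50)   # noisy edge points
(a, b), (da, db) = fit_line_with_uncertainty(x, y)
print(a, "+/-", da)   # slope with its 1-sigma half-width
```

The parameter covariance can then be pushed through the subsequent reconstruction steps to obtain position uncertainties on the final 3D result.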
KEYWORDS: 3D modeling, Light sources and illumination, Image segmentation, 3D image reconstruction, 3D image processing, Computer aided design, 3D acquisition, Solid modeling, Cognitive modeling, Visual process modeling
This paper presents an original approach for the optimal 3D reconstruction of manufactured workpieces based on a priori planning of the task, enhanced on-line through dynamic adjustment of the lighting conditions, and built around a cognitive intelligent sensory system using so-called Situation Graph Trees. The system explicitly takes into account structural knowledge related to image acquisition conditions, the type of illumination sources, the contents of the scene (e.g., CAD models and tolerance information), etc. The approach relies on two steps. First, a so-called initialization phase, leading to the a priori task plan, collects this structural knowledge. This knowledge is conveniently encoded, as a sub-part, in the Situation Graph Tree that forms the backbone of the planning system and exhaustively specifies the behavior of the application. Second, the image is iteratively evaluated under the control of this Situation Graph Tree. The information describing the quality of the piece to be analyzed is thus extracted and further exploited for, e.g., inspection tasks. Lastly, the approach enables dynamic adjustment of the Situation Graph Tree, allowing the system to adjust itself to the actual run-time conditions of the application and thus providing it with a self-learning capability.
This paper presents an original approach for a vision-based quality control system, built around a cognitive intelligent sensory system. The principle of the approach relies on two steps. First, a so-called initialization phase leads to structural knowledge on image acquisition conditions, type of illumination sources, etc. Second, the image is iteratively evaluated using this knowledge and complementary information (e.g., CAD models and tolerance information). Finally, the information describing the quality of the piece under evaluation is extracted. A further aim of the approach is to enable building strategies that determine, for instance, the “next best view” required for completing the currently extracted object description through dynamic adjustment of the knowledge base including this description. Such techniques primarily require investigation of three areas, dealing respectively with intelligent self-reasoning 3D sensors, 3D image processing for accurate reconstruction, and evaluation software for comparison of image-based measurements with CAD data. However, an essential prior step, dealing with the modeling of lighting effects, is required. As a starting point, we first modeled pinpoint light sources. After having introduced in Sections 1 and 2 the objectives and principles of the approach, we present in Sections 3 and 4 the implementation and modeling approach for illumination. Some first results illustrating the approach are presented in Section 5. Finally, we conclude with some future directions for improving this approach.
KEYWORDS: Electronic filtering, Defect detection, Signal to noise ratio, Image segmentation, Signal detection, Light sources and illumination, Machine vision, Gaussian filters, Digital filtering, Bandpass filters
Quality control by artificial vision is becoming more and more widespread in industry. Indeed, in many cases, industrial applications require control with high stability performance while satisfying high production rates. The purpose of this paper is to present a method for real-time detection of defects located on the circumference of industrial parts with a circular shape. However, production steps can result in oval shapes or in parts of different sizes, and these two phenomena can cause defects to be missed; a constant mask is therefore unable to detect such defects correctly. The control of the circularity of these parts can be achieved in two steps.
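One plausible way to build an adaptive mask, sketched here under our own assumptions rather than as the paper's two-step method, is to fit a circle to the extracted contour (the Kasa algebraic fit) and flag radial outliers; this copes with size variations, while ovality would call for an ellipse fit instead.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: returns centre and radius.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r

def radial_defects(xs, ys, tol=0.5):
    """Flag contour points deviating from the fitted circle by more than tol."""
    cx, cy, r = fit_circle(xs, ys)
    dev = np.abs(np.hypot(xs - cx, ys - cy) - r)
    return dev > tol

# Synthetic contour of a circular part with one nick on the circumference
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
xs = 10.0 + 25.0 * np.cos(theta)
ys = -5.0 + 25.0 * np.sin(theta)
xs[0] += 2.0                        # radial nick at theta = 0
print(radial_defects(xs, ys).sum())
```

Because the circle is re-fitted for every part, the tolerance band follows the actual size of the part instead of a fixed mask.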
Quality control by artificial vision is becoming more and more widespread in industry. Indeed, in many cases, industrial applications require control with high stability performance while satisfying high production rates. For texture control, some major problems may occur: difficulty in revealing different textures, and segmentation, classification, and decision phases that still require too much computation time. This article presents a comparison between two non-parametric classification methods used for real-time control of textured objects moving at a rate of 10 pieces per second. Four types of flaws have to be detected indifferently: smooth surfaces, bumps, hollow knocked surfaces, and lacks of material. These defects generate texture variations that have to be detected and sorted by our system, each flaw occurrence being registered to carry out a survey over the production cycle. We previously presented a search for an optimal lighting system, with which the acquired images were tremendously improved, and, on these optimal images, a method for selecting the best segmentation features. The third step, which is presented here, is a comparison between two multi-class classification algorithms: the Parzen estimator and the so-called 'stressed polytopes' method. These two algorithms, which require a learning phase, are both based on a non-parametric discrimination of the flaw classes. On the one hand, they are both relatively inexpensive in computation time; on the other hand, they present different assets regarding the ease of the learning phase and the number of usable segmentation features. They also behave differently with respect to the partitioning of the feature space, especially at the 'cross-class' borders. Their comparison is made on the aforementioned points, which are relevant for evaluating discrimination efficiency.
Finally, we present the results of such a comparison on an industrial example. The control system, a PC-based machine, includes the calculation of five classification features (computed on the local neighborhood of each pixel), five distinct classes for the classification phase, and the decision phase. This led to a 3.63% classification error rate for the best compromise.
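The Parzen estimator used in the comparison can be sketched as a kernel-density classifier. The two-dimensional feature vectors, class names, and bandwidth `h` below are illustrative stand-ins, not the five features of the actual system.

```python
import numpy as np

def parzen_density(x, samples, h=0.5):
    """Parzen estimate of a class density at x with a Gaussian kernel."""
    d = samples - x                                   # (n, dim) differences
    k = np.exp(-np.sum(d**2, axis=1) / (2 * h**2))    # kernel responses
    return k.mean() / ((2 * np.pi) ** 0.5 * h) ** samples.shape[1]

def classify(x, classes, h=0.5):
    """Assign x to the class with the highest estimated density."""
    return max(classes, key=lambda c: parzen_density(x, classes[c], h))

# Toy learning phase: labelled feature samples per flaw class
rng = np.random.default_rng(2)
classes = {
    "smooth": rng.normal([0, 0], 0.3, size=(50, 2)),
    "bump":   rng.normal([3, 0], 0.3, size=(50, 2)),
    "hollow": rng.normal([0, 3], 0.3, size=(50, 2)),
}
print(classify(np.array([2.9, 0.1]), classes))
```

The learning phase reduces to storing the labelled samples; the cost per decision is one kernel evaluation per stored sample, which is what makes the method cheap enough for real-time control.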
Wavelet transforms are efficient tools for texture analysis and classification. Separable techniques are classically used but present several drawbacks: first, the diagonal coefficients contain little information; second, the other coefficients contain useful information only if the texture is oriented in the vertical or horizontal directions. We therefore propose an approach to texture analysis based on a non-separable transform. The quincunx scheme allows an improved inter-scale resolution, and this analysis leads to a single detail image in which no particular orientation is favored. New orthogonal isotropic decomposition filters are constructed by applying the McClellan transform to one-dimensional B-spline filters. The resulting wavelet functions have better isotropy and frequency properties than those previously proposed by Feauveau. Since IIR filters are obtained, the whole transform is integrated in the Fourier domain. Texture analysis is performed on the wavelet detail coefficients: simple parameters are calculated at each scale, and the evolution of these parameters over the scales yields a multiscale parameter used to characterize the different textures. An application of this method is proposed for the analysis of human cells, where the aim is to distinguish states of evolution. While monoscale classical methods provide no information on these images, the proposed process allows several states to be identified. In this process, a reference curve is constructed for each state from the multiscale variance of known images. When a new image is analyzed, a new evolution curve is calculated and its distance to the references is measured. This technique is more efficient than classical ones because multiscale information is used.
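The multiscale-signature idea can be sketched as follows, with the quincunx B-spline decomposition replaced for brevity by a simple box-blur detail pyramid; the scales and the synthetic textures are assumptions.

```python
import numpy as np

def blur(img, k):
    """Separable box blur of half-width k (stand-in for the wavelet lowpass)."""
    ker = np.ones(2 * k + 1) / (2 * k + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="same"), 0, out)

def multiscale_signature(img, scales=(1, 2, 4, 8)):
    """Variance of detail coefficients (image minus lowpass) at each scale."""
    return np.array([np.var(img - blur(img, s)) for s in scales])

def texture_distance(img, ref_sig):
    """Distance between an image's evolution curve and a reference curve."""
    return np.linalg.norm(multiscale_signature(img) - ref_sig)

# Two synthetic textures that monoscale variance cannot fully separate
rng = np.random.default_rng(3)
fine = rng.normal(size=(64, 64))                # fine-grained texture
coarse = blur(rng.normal(size=(64, 64)), 3)     # coarser texture
ref_fine = multiscale_signature(fine)           # reference curve for one state

probe = rng.normal(size=(64, 64))               # new image of the fine state
print(texture_distance(probe, ref_fine) < texture_distance(coarse, ref_fine))
```

The per-scale variances form the evolution curve; classification then reduces to finding the nearest reference curve.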
This paper presents a method for selecting segmentation parameters, developed for an industrial study. The problem consists of detecting four types of defects on textured industrial parts: smooth surfaces, bumps, lacks of material, and hollow knocked surfaces. The lighting system used in this application is not described in this paper but was presented in a previous study on the characterization of lighting.
Quality control in industrial applications has greatly benefited from the development of tools such as artificial vision. To obtain a good-quality image of the object under investigation, the first step is to use a good lighting system. This paper presents a reliable method for comparing several lighting systems with respect to their capability to bring out defects. This study was conducted on textured industrial parts on which four types of defects have to be detected: smooth surfaces, bumps, lacks of material, and hollow knocked surfaces. The aim is to determine the best lighting among various experimental setups. The method has two stages: the first is the definition, according to the knowledge and shape of the defects, of a pertinent attribute vector whose components are defect-sensitive; in the second, the discrimination power of this vector is computed and compared under various illuminations using Parzen kernels. This method ensures a well-suited illumination in numerous defect-detection applications and leads to an efficient combination of lighting system and segmentation parameters. Work is under way to generalize this method to multidimensional cases in order to allow for interactions between the components of the attribute vector.