Computer Aided Quality Control (CAQC)


Computer aided quality control
• Inspection is the means by which poor quality is detected and good quality is assured in products produced in a production process.
• Inspection is usually carried out manually, using various technologies that examine specific variables (quality characteristics of the product) or product attributes, to ensure product conformance to previously set standards.
• Inspection is the process of presenting, examining, deciding upon, and acting upon an item so that poor quality is detected in product attributes and good quality is assured.
• Various technologies support the inspection procedure, enabled by
various sensors, instruments, and gauges.
• Some inspection techniques use manually operated devices such as micrometers, callipers, protractors, and go/no-go gauges; whilst other techniques are based upon modern technologies such as co-ordinate measuring machines (CMMs) and machine vision, which use computer-controlled systems that allow the inspection procedure to be automated.
• This unit focuses on automated inspection techniques.
Characteristics of measuring instruments
Contact vs. Non-contact Inspection Techniques
Contact Inspection Techniques
In contact inspection, physical contact occurs between the object to be inspected and the measurement device; this is typically achieved by means of a mechanical probe or other device that touches the item and allows the measurement to be taken.
• Conventional measuring and gauging instruments
• Co-ordinate measuring machines (CMMs) and related techniques to measure mechanical dimensions
• Stylus-type surface texture measuring machines to measure surface characteristics such as roughness and waviness
• Electrical contact probes for testing integrated circuits and printed circuit boards
CMM

• The most common probe design is the touch-trigger type, which actuates when the probe makes contact with the item's surface.
• Various trigger mechanisms are available, including: high-sensitivity electrical contact switch triggers; contact switch triggers that activate when electrical contact is made between the part surface and the probe tip; and piezoelectric sensor switches that operate by sensing tension loads on the probe.
• After contact between the probe and the part surface, displacement transducers associated with the three linear axes record the co-ordinate position of the probe and pass the results to the CMM controller.
• Compensation is made for the radius of the probe tip (a small compensation sketch follows this list), and over-travel of the probe due to momentum is neglected.
• The probe returns to a neutral position when it leaves the part surface.
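To make the tip-radius compensation concrete, the following is a minimal Python sketch of the idea: the recorded probe-centre coordinate is shifted by the tip radius along the probe's approach direction to estimate the contacted surface point. The function name, coordinate values, and the assumption that the approach direction is known are illustrative, not taken from any particular CMM controller.

```python
# Minimal sketch of CMM probe tip-radius compensation (illustrative only).
# Assumes the approach direction is known for each touch point.
import numpy as np

def compensate_tip_radius(probe_center, approach_dir, tip_radius):
    """Shift the recorded probe-centre coordinate by the tip radius along
    the (unit) approach direction to estimate the contacted surface point."""
    d = np.asarray(approach_dir, dtype=float)
    d = d / np.linalg.norm(d)                      # normalize to a unit vector
    return np.asarray(probe_center, dtype=float) + tip_radius * d

# Example: probe centre recorded at (10.0, 5.0, 2.0) mm while approaching the
# top surface along -Z with a 1.0 mm radius ruby tip.
surface_point = compensate_tip_radius((10.0, 5.0, 2.0), (0.0, 0.0, -1.0), 1.0)
print(surface_point)                               # -> [10.  5.  1.]
```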
The overall CMM construction is illustrated in the figure, and consists of a mechanical structure that supports the probe head and probe, and a worktable that passes underneath the probe, upon which the item to be inspected is placed.
To the side is the associated computer system that takes and records the probe results as they occur.
The two principal components of this construction are the probe and its mechanical structure.
The tip of the probe in the CMM is usually a ruby ball. Ruby is a
form of corundum (aluminium oxide) with high hardness for wear
resistance, and low density for minimum inertia, thus making it
ideal for probing applications.
Probes can have a single tip or multiple tips.
Non-contact Inspection Techniques
• Non-contact inspection techniques use sensors instead of the mechanical probe favoured by contact inspection methodologies.
• The sensor is located at a certain distance from the object to be inspected, to measure or gauge the desired features of the object.
• Optical inspection technologies use light to accomplish the measurement or gauging cycle. The most important technique is machine vision.
• Non-optical inspection technologies use forms of energy other than light to perform the inspection; the energies utilized include electrical fields, radiation, and ultrasonics.
• Non-contact techniques avoid the possible surface damage that can be caused by contact.
• Inspection cycle times are faster, because a contact probe must be re-positioned for each new part inspected, whereas a non-contact sensor remains stationary.
• Parts handling is reduced with non-contact inspection, as parts in contact inspection usually require special handling and adjustment so that inspection can occur.
• For the above reasons, non-contact inspection allows for the possibility of 100% automated inspection.
Non-contact non-optical inspection techniques
Machine Vision
• Machine vision is the creation of an image, the collection of data derived from the image, and the subsequent processing and interpretation of those data by a computer for some useful application.
• Machine vision is also known as computer vision, and its principal
application is in industrial inspection.
• Machine vision exists in two-dimensional (2D) and three-
dimensional (3D) formats, with 2D being most common in
industrial applications.
• Examples of its usage include dimensional measuring and
gauging, verifying the presence of components, and checking for
features on a flat (or semi-flat) surface.
• The operation of machine vision involves three functions: image acquisition and digitization; image processing and analysis; and interpretation.
• Image acquisition and digitization: this is typically performed by deploying a video camera to capture the image, and a digitizing system to store the image data for subsequent analysis.
• The camera is focused upon the surface of the item of interest, and an image
consisting of discrete pixel elements is captured in the viewing area; each pixel
has a value proportional to the light intensity of that portion of the scene.
• The intensity value of each pixel is converted into its equivalent digital value by
an analogue-to-digital converter.
• The simplest type of machine vision is called binary vision, so called because each pixel can only be assigned a black or white intensity value, with no values in between.
• A more sophisticated vision system captures different light intensities as different shades of grey; this is called a grey-scale system.
• This type of system is used not only to pick out dimensional features and the item's size and shape, but also the item's colour and other surface characteristics.
• Grey-scale vision systems typically use 4, 6, or 8 bits of memory per pixel; 8 bits corresponds to 2^8 = 256 intensity levels, more than either the human eye or the video camera can really distinguish (a short digitization sketch follows the figure caption below).
• Each set of digitized pixel values is referred to as a frame, and each frame is
stored in a computer memory device called a frame buffer.
• The process of reading all the pixel values in a frame is performed with a
frequency of 30 times per second.
Machine vision: (a) scene presentation; (b) 12 x 12 matrix superimposed; and (c) creation of the pixelated scene and assignment of intensity values, in black or white
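As a concrete illustration of the digitization step described above, the following Python sketch quantizes a small array of made-up analogue intensity values into a binary image and into 8-bit grey-scale values; the intensity data and threshold are assumptions chosen only for illustration.

```python
# Minimal sketch of image digitization: binary vision vs. 8-bit grey scale.
import numpy as np

# Normalized analogue light intensities for a tiny 3 x 3 viewing area
# (0 = dark, 1 = bright); these values are invented for the example.
analogue = np.array([[0.10, 0.85, 0.40],
                     [0.92, 0.05, 0.55],
                     [0.33, 0.78, 0.20]])

# Binary vision: each pixel is assigned black (0) or white (1) via a threshold.
binary = (analogue > 0.5).astype(np.uint8)

# Grey-scale vision with 8 bits per pixel: 2**8 = 256 intensity levels (0..255).
grey_8bit = np.round(analogue * 255).astype(np.uint8)

print(binary)
print(grey_8bit)
```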
Cameras
Solid-state cameras have, to a great extent, replaced vidicon cameras
(also used as TV cameras) as the prime image-capturing devices used
in machine vision.
Solid-state cameras operate by focusing the image onto a 2D array of
very small, finely spaced photosensitive elements, which
subsequently form the matrix of pixels seen in the scene image.
An electrical charge is generated by each element according to the
intensity of light striking the element; and this charge is subsequently
stored by a storage device consisting of an array of storage elements
corresponding one-to-one with the photosensitive picture elements.
Charge values accumulate, and are ultimately read sequentially in
the data processing and analysis function of machine vision.
Illumination
• The scene that the camera is focused upon must be well illuminated if an image of sufficient quality is to be captured.
•Illumination must be well-placed and constant over the time
required to capture the image; this usually means that special
lighting must be deployed for a machine vision application, rather
than relying upon ambient lighting.
Image Processing and Analysis
• A number of techniques have been developed so that data produced
during the first phase of machine vision may be processed and analysed.
• These general techniques are called segmentation (a technique intended
to define and separate regions of interest within the image), and feature
extraction (which follows on from various segmentation processes).
• Image processing and analysis techniques under these general headings are outlined in the accompanying table.
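A minimal Python sketch of these two steps is given below, using simple thresholding as the segmentation technique and taking the region's area and centroid as the extracted features; the image values and threshold are illustrative assumptions.

```python
# Minimal sketch of segmentation (thresholding the object from the background)
# followed by feature extraction (area and centroid of the segmented region).
import numpy as np

image = np.array([[ 10,  12, 200, 210],
                  [ 11, 205, 220, 215],
                  [  9, 198, 202,  13],
                  [ 10,  11,  12,  10]], dtype=np.uint8)   # invented grey values

# Segmentation: pixels brighter than the threshold belong to the object.
mask = image > 100

# Feature extraction: area (pixel count) and centroid (mean row, mean column).
rows, cols = np.nonzero(mask)
area = int(mask.sum())
centroid = (rows.mean(), cols.mean())

print(area, centroid)
```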
Interpretation
• The extracted features of the image are the guide from which interpretation of the image emerges; that is, interpretation is concerned with recognizing the object (object recognition) and/or recognizing the major features of the object (pattern recognition).
• Predefined models or standard values are used to identify the object
in the image. Two commonly-used interpretation techniques are:
• Template matching—a method whereby the features of the image are
compared against corresponding features of a model or template
stored in the computer memory
• Feature weighting—a technique in which several features are
combined into a single measure by assigning a weight to each
feature according to its relative importance in identifying the object,
and where the resultant score is compared against an ideal object
score stored in computer memory, to achieve proper identification
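The feature-weighting idea can be sketched as follows in Python; the feature names, weights, and tolerance are hypothetical values chosen only to show how a weighted score is formed and compared against a stored ideal score.

```python
# Minimal sketch of the feature-weighting interpretation technique: several
# measured features are combined into one weighted score and compared against
# the stored score of a known ("ideal") object. All values are hypothetical.
measured = {"area": 1520.0, "perimeter": 168.0, "hole_count": 2}
ideal    = {"area": 1500.0, "perimeter": 170.0, "hole_count": 2}
weights  = {"area": 0.5, "perimeter": 0.3, "hole_count": 0.2}

def weighted_score(features, weights):
    # Combine the features into a single measure using their weights.
    return sum(weights[k] * features[k] for k in weights)

score_measured = weighted_score(measured, weights)
score_ideal    = weighted_score(ideal, weights)

# The part is identified as the ideal object if the scores agree within tolerance.
tolerance = 0.05 * score_ideal
identified = abs(score_measured - score_ideal) <= tolerance
print(score_measured, score_ideal, identified)
```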
Scanning laser method

Conventional optical instruments include optical comparators and microscopes;


while laser systems can be used for scanning.
In the figure a scanning laser device is depicted. The system uses a laser beam that is deflected by a rotating mirror to produce a beam of light that sweeps past the object; on the other side of the object, a photo-detector senses the light sweep, except when it is interrupted by the object. This interruption time can be measured and related to the size and shape of the object with great accuracy.
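A short worked example of that relationship in Python is given below: the object dimension is the sweep speed of the beam at the object multiplied by the measured interruption time. The sweep speed and interruption time are assumed values, not taken from the text.

```python
# Minimal sketch of how a scanning laser gauge relates the beam-interruption
# time to an object dimension: size = sweep speed x interruption time.
sweep_speed_mm_per_s = 500.0      # assumed linear speed of the swept beam at the object
interruption_time_s  = 0.024      # assumed time during which the photo-detector sees no light

object_size_mm = sweep_speed_mm_per_s * interruption_time_s
print(object_size_mm)             # -> 12.0 mm
```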
Laser devices include the scanning laser, which comes in a number of formats. Linear array devices may also be used, whereby an array of closely spaced photo diodes is placed behind an object and used to capture planar light that is directed at the object from the other side.
• A variation of this technique uses optical triangulation methods (refer to the figure).
• A laser is used to focus a narrow beam at an object to form a spot of light on the object; meanwhile, a linear array of photo diodes is used to determine the location of the spot by triangulation.
• The angle A and the distance L between the light source and the photo diodes are known; thus, by means of simple trigonometry, the range R of the object can be determined from the equation:

R = L cot A
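A short worked example of the range equation in Python, using assumed values for L and A:

```python
# Minimal sketch of the triangulation range equation given above, R = L cot A.
# The baseline distance L and angle A are example values, not from the text.
import math

L_mm = 100.0           # distance between the laser source and the photo diode array
A_deg = 30.0           # angle between the baseline and the reflected spot direction

R_mm = L_mm / math.tan(math.radians(A_deg))   # cot A = 1 / tan A
print(round(R_mm, 2))  # -> 173.21 mm
```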
Digital Image Correlation (DIC) Methods

The two-dimensional structure of a DIC system, shown in the figure, is used for surface displacement measurement on a flat plane and needs only one camera.
The three-dimensional structure of a DIC signal detection system, shown in the figure, requires two or more cameras to capture the surface.
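The core idea behind 2D DIC, matching a small reference subset in the deformed image to recover the in-plane displacement, can be sketched as follows in Python using normalized cross-correlation; the images, subset size, and search window are synthetic assumptions made purely for illustration.

```python
# Minimal sketch of 2D digital image correlation: a reference subset is matched
# in the deformed image by maximizing normalized cross-correlation, and the
# offset of the best match gives the in-plane displacement.
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized patches.
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
reference = rng.random((50, 50))                            # synthetic speckle image
deformed = np.roll(reference, shift=(3, 5), axis=(0, 1))    # known 3, 5 pixel shift

sub = reference[10:20, 10:20]                 # 10 x 10 reference subset
best, best_shift = -1.0, (0, 0)
for dy in range(-8, 9):                       # search window around the subset
    for dx in range(-8, 9):
        cand = deformed[10 + dy:20 + dy, 10 + dx:20 + dx]
        if cand.shape == sub.shape:
            c = ncc(sub, cand)
            if c > best:
                best, best_shift = c, (dy, dx)

print(best_shift)   # -> (3, 5): the recovered in-plane displacement
```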
Infrared Thermography Technique

Infrared thermography is a process in which an infrared imaging system (an infrared camera) converts the spatial variations in infrared radiance from a surface into a two-dimensional image, in which variations in radiance are displayed as a range of colors or tones.
As a general rule, objects in the image that are lighter in color are warmer, and darker objects are cooler.
An image produced by an infrared camera is called a thermograph.
There are two kinds of thermography:
1.) Active Thermography (AT) is defined as applying a stimulus to a target to cause the target to heat or cool in such a way as to allow characteristics of the target to be observed when viewed by thermal imagery.
2.) Passive Thermography (PT) is defined as measuring the temperature differences between the target materials and the surroundings under different ambient temperature conditions.
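As a small illustration of how a thermograph maps radiance to display tones, the following Python sketch normalizes an assumed array of surface temperatures to 8-bit grey values, with warmer pixels shown lighter; the temperature values are invented for the example.

```python
# Minimal sketch of thermograph display mapping: warmer regions map to lighter
# grey values, cooler regions to darker ones. Temperatures are assumed values.
import numpy as np

temps_c = np.array([[22.0, 23.5, 40.0],
                    [24.0, 55.0, 41.5],
                    [22.5, 23.0, 25.0]])   # surface temperatures in deg C

# Normalize to the 0..255 grey-scale range: hottest pixel -> white (255),
# coolest pixel -> black (0).
tones = np.round(255 * (temps_c - temps_c.min()) /
                 (temps_c.max() - temps_c.min())).astype(np.uint8)
print(tones)
```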
