This PDF file contains the front matter associated with SPIE Proceedings Volume 7877, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
In this work, we propose a technique for restoring a signal from its noise-corrupted observation. The main difficulty in most noise-removal approaches is separating the singularities that belong to the signal from those introduced by noise. To overcome this problem, the proposed method measures the Lipschitz exponent of signal transitions in order to identify the noise elements. Unlike many noise-removal techniques, the present method works in a non-orthogonal domain. The noise elements are identified from the decay slope of the wavelet modulus maxima lines, which defines the Lipschitz exponent. The main contribution of the work is the reconstruction process: by exploiting the properties of the Lipschitz exponents, the smooth signal can be reconstructed through a nonlinear mapping. Statistical results are promising, and the method performs better than conventional shrinkage methods in the case of high-variance noise. Furthermore, the proposed method is not tied to a particular choice of wavelet function for extracting the noise elements.
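The core idea can be sketched as follows (this is an illustration, not the paper's implementation): for a signal with a Lipschitz-α singularity at x0, the wavelet transform modulus at x0 decays as |Wf(s, x0)| ∝ s^(α+1/2) across scales s when the wavelet is L2-normalised, so α can be read off the slope of a log-log fit. The signal, wavelet, and scale range below are illustrative choices.

```python
import numpy as np

def mexican_hat(t):
    # Second derivative of a Gaussian (Ricker wavelet): two vanishing moments,
    # enough to detect Lipschitz exponents alpha < 2
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

x = np.linspace(-4.0, 4.0, 4001)
dx = x[1] - x[0]
alpha_true = 0.5
f = np.abs(x) ** alpha_true            # Lipschitz-alpha singularity at x = 0

scales = np.geomspace(0.05, 0.4, 8)    # illustrative scale range
coeffs = []
for s in scales:
    psi = mexican_hat(x / s) / np.sqrt(s)   # L2-normalised wavelet at scale s
    w = np.sum(f * psi) * dx                # wavelet coefficient at x0 = 0
    coeffs.append(abs(w))

# |W f(s, x0)| ~ s^(alpha + 1/2): the log-log slope gives alpha + 1/2
slope = np.polyfit(np.log(scales), np.log(coeffs), 1)[0]
alpha_est = slope - 0.5
print(alpha_est)
```

Coefficients decaying faster than this law across fine scales indicate smooth signal structure, while noise produces maxima whose amplitude grows as the scale decreases, which is the separation criterion the abstract alludes to.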
Automatic banknote sheet cut-and-bundle machines are widely used in banknote production. Besides the cutting and bundling, which is a mature technology, image-processing-based quality inspection for this type of machine is attractive. We present in this work a new real-time touchless counting and perspective cutting-blade quality-assurance system, based on a color CCD camera and a dual-core computer, for cut-and-bundle applications in banknote production. The system, which applies wavelet-based multi-scale filtering, is able to count the banknotes inside a 100-note bundle within 200-300 ms, depending on the window size.
This paper focuses on characterizing the GNSS signal reception environment by estimating the percentage of visible sky in real time. In previous work, a segmentation technique based on a color watershed using an adaptive combination of color and texture information was proposed. This information was represented by two morphological gradients: a classical color gradient and a morphological texture gradient based on mathematical morphology or co-occurrence matrices. The segmented images were then classified into two regions, sky and non-sky. However, this approach has a high computational cost and thus cannot be applied in real time. In this paper, we present the adaptive segmentation method with a texture gradient computed by Gabor filtering, together with a region-tracking method based on block-matching estimation. This last step reduces the execution time of the application so that the real-time conditions are met. Since the application works on fish-eye images, a calibration and rectification method is required before tracking and is also presented in this paper. The calibration method is based on the straight-line condition and thus does not use real-world coordinates, which avoids measurement errors. The tracking results are compared with the results of the classification method (already evaluated in previous work). The evaluation shows that the proposed method has a very low error and reduces the execution time by a factor of ten.
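The block-matching estimation the tracking relies on can be illustrated with a minimal exhaustive-search implementation; the block size, search radius, and SAD cost below are common defaults, not the paper's settings.

```python
import numpy as np

def block_match(prev, curr, top, left, size=16, radius=4):
    """Find the displacement of the block at (top, left) in `prev` by
    exhaustive search over a (2*radius+1)^2 window in `curr`,
    minimising the sum of absolute differences (SAD)."""
    block = prev[top:top+size, left:left+size]
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue                      # candidate window outside the frame
            sad = np.abs(curr[y:y+size, x:x+size] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

rng = np.random.default_rng(3)
prev = rng.random((64, 64))
curr = np.roll(prev, shift=(2, -3), axis=(0, 1))   # whole frame shifted by (2, -3)
print(block_match(prev, curr, top=24, left=24))    # recovers the shift
```

Tracking segmented sky/non-sky regions with such displacements between frames avoids re-running the full watershed segmentation on every frame, which is where the speed-up comes from.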
The problem of circular object detection and localisation arises quite often in machine vision applications, for example in semiconductor component inspection. We propose two novel approaches for the precise centre localisation of circular objects, e.g. p-electrodes of light-emitting diodes. The first approach is based on image gradients, for which we provide an objective function that relies solely on dot products and can be maximised by gradient ascent. The second approach is inspired by the concept of isophotes, for which we derive an objective function based on the definition of radial symmetry. We evaluate our algorithms on synthetic images with several kinds of noise and on images of semiconductor components, and we show that they perform better and are faster than state-of-the-art approaches such as the Hough transform. The radial symmetry approach proved to be the most robust, especially for low-contrast images and strong noise, with a mean error of 0.86 pixels on synthetic images and 0.98 pixels on real-world images. The gradient approach yields more accurate results than the Hough transform (mean error of 4 pixels versus 8 pixels) for almost all images. Concerning runtime, the gradient-based approach significantly outperforms the other approaches, being 5 times faster than the Hough transform; the radial symmetry approach is 12% faster.
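A minimal sketch of the gradient-based idea: pair each edge pixel's unit gradient with the unit direction to a candidate centre and sum the squared dot products, which peaks when the candidate is the true centre of a circular edge. The paper maximises such an objective by gradient ascent; a coarse grid search is used here for brevity, and the synthetic disk is an illustrative stand-in for a p-electrode image.

```python
import numpy as np

# Synthetic image: bright disk with true centre (row, col) = (20, 29)
h, w = 48, 64
yy, xx = np.mgrid[0:h, 0:w]
img = ((yy - 20.0) ** 2 + (xx - 29.0) ** 2 < 12 ** 2).astype(float)
img += 0.02 * np.random.default_rng(0).standard_normal((h, w))   # mild noise

gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)
mask = mag > 0.2 * mag.max()                 # keep only strong edge pixels
py, px = yy[mask], xx[mask]
ngy, ngx = gy[mask] / mag[mask], gx[mask] / mag[mask]

def objective(cy, cx):
    # Squared dot product of unit gradient and unit direction to (cy, cx):
    # equals 1 per pixel when the gradient points radially w.r.t. the centre
    dy, dx = py - cy, px - cx
    norm = np.hypot(dx, dy) + 1e-9
    return np.sum((ngy * dy / norm + ngx * dx / norm) ** 2)

best = max(((cy, cx) for cy in range(h) for cx in range(w)),
           key=lambda c: objective(*c))
print(best)
```

On a clean circular edge every gradient is radial with respect to the true centre, so each term of the sum reaches its maximum of 1 there; squaring makes the objective indifferent to whether gradients point inward or outward.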
Designing a suitable image representation is one of the most fundamental issues in computer vision. The popular bag-of-words image representation involves three steps: feature extraction, coding, and pooling. In the final step, current methods degrade an M x K encoded feature matrix to a K-dimensional vector (histogram), where M is the number of features and K is the size of the codebook; a great deal of information is lost here. In this paper, a novel pooling method based on a 2-D histogram representation is proposed to retain more information from the encoded image features. This pooling method can be easily incorporated into state-of-the-art computer vision frameworks. Experiments show that our approach improves on current pooling methods and achieves satisfactory image classification and image reranking performance even with a small codebook and an inexpensive linear SVM.
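The information loss and the 2-D histogram remedy can be sketched as follows; the sparse random codes, codebook size, and bin count are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, B = 200, 8, 4                # M local features, K codewords, B value bins

# Sparse encoded features: each of the M features activates ~20% of codewords
codes = rng.random((M, K)) * (rng.random((M, K)) < 0.2)

# Conventional pooling collapses the M x K matrix to just K numbers
sum_pool = codes.sum(axis=0)       # shape (K,)
max_pool = codes.max(axis=0)       # shape (K,)

# 2-D histogram pooling: per codeword, histogram the nonzero coefficient
# values into B bins, keeping a distribution instead of a single statistic
edges = np.linspace(0.0, 1.0, B + 1)
hist2d = np.stack([np.histogram(codes[codes[:, k] > 0, k], bins=edges)[0]
                   for k in range(K)])   # shape (K, B)

print(sum_pool.shape, max_pool.shape, hist2d.shape)
```

The K x B result can still be flattened into a fixed-length vector for a linear SVM, but unlike sum or max pooling it preserves how the coding coefficients of each codeword were distributed.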
This paper presents a new approach to non-rigid registration. A hierarchical subdivision approach is applied, with
local normalized phase correlation for patch registration. The major improvement is achieved by implementing a
suitable decimation at each level. The decimation is implemented via a Gram polynomial basis. Both global and
local polynomial approximation are considered and compared with the use of a Fourier basis. The issue of Gibbs
error in polynomial decimation is examined. It is shown that the Gram basis is superior when applied to signals
with strong gradient, i.e., a gradient which generates a significant Gibbs error with a Fourier basis. A bivariate
Gram polynomial tensor product approximation is used to implement regularization. It is demonstrated that
the new method performs well on both synthetic and real image data. The procedure requires approximately
1.3 sec. to register an image with 800 × 500 pixels.
Morphological and microstructural features visible in microscopy images of nuclear materials can give information about the material's processing history. Extracting these attributes currently requires a subject matter expert in both microscopy and nuclear material production processes, and is a time-consuming, at least partially manual task, often involving multiple software applications. One of the primary goals of computer vision is to find ways to extract and encode the domain knowledge associated with imagery so that parts of this process can be automated. In this paper we describe a user-in-the-loop approach that attempts both to improve the efficiency of domain experts during image quantification and to capture their domain knowledge over time. This is accomplished through a sophisticated user-monitoring system that accumulates user-computer interactions as users exploit their imagery. We provide a detailed discussion of the interactive feature extraction and segmentation tools we have developed and describe our initial results in exploiting the recorded user-computer interactions to improve user productivity over time.
We present an efficient measure of overlap between two collinear segments which considerably decreases the overall computation time of a segment-based motion estimation and reconstruction algorithm from the literature. We also discuss the special cases in which sparse sampling of the motion space for initializing the algorithm does not yield a good solution, and suggest using dense sampling instead to overcome the problem. Finally, we demonstrate our work on a real data set.
In general, since smoke appears before flames, smoke detection is particularly important for early fire detection systems. Detecting fire smoke with a video camera is difficult because the main characteristics of smoke, its patterns of shape and color, are uncertain, vague, and inconstant. This paper therefore proposes a new fire-smoke detection method, aimed especially at forest smoke, that analyzes the temporal patterns of smoke with Fuzzy Finite Automata (FFA). To capture the smoke characteristics over time, temporal patterns of intensity entropy, wavelet energy, and motion orientation are used to generate multivariate probability density functions (PDFs), which are then fed into the FFA for smoke verification. The proposed FFA consists of a set of fuzzy states (VH, H, L, VL) and a transition mapping that describes which event can occur in which state and the resulting new state. For smoke verification, an FFA is an appropriate model when the variables are time-dependent and uncertain. The proposed algorithm is successfully applied to various fire-smoke videos and shows good detection performance.
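A toy illustration of the fuzzy-automaton machinery over the four states: state membership is a fuzzy vector updated by max-min composition with a transition matrix. The transition degrees below are made up for illustration; the paper derives its transition behaviour from the PDFs of the temporal features.

```python
import numpy as np

states = ["VL", "L", "H", "VH"]
# T[i, j]: degree to which a "smoke-like" observation moves state i to state j.
# These transition degrees are illustrative, not the paper's values.
T_smoke = np.array([[0.2, 0.8, 0.1, 0.0],
                    [0.0, 0.3, 0.8, 0.1],
                    [0.0, 0.0, 0.4, 0.9],
                    [0.0, 0.0, 0.1, 1.0]])

def step(mu, T):
    # Max-min composition: new membership of state j is
    # max over i of min(mu[i], T[i, j])
    return np.max(np.minimum(mu[:, None], T), axis=0)

mu = np.array([1.0, 0.0, 0.0, 0.0])    # start fully in VL (no smoke evidence)
for _ in range(4):                     # four consecutive smoke-like frames
    mu = step(mu, T_smoke)

verdict = states[int(np.argmax(mu))]
print(verdict)
```

Sustained smoke-like observations gradually shift the membership mass toward the VH state, which is how time-dependent, uncertain evidence accumulates into a detection decision.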
This paper presents a new approach for estimating fire front volume in indoor laboratory experiments. The work deals with fire spreading on inclinable tables. The method is based on two synchronized stereovision systems positioned behind and in front of the fire along the propagation direction. The two vision systems are used to extract complementary 3D fire points. The obtained data are projected into a common reference frame and used to build a global shape of the fire front. An inter-system calibration procedure is presented that permits the computation of the projection matrix needed to bring all the data into a single reference frame. From the obtained 3D fire points, a three-dimensional surface rendering is performed and the fire volume is estimated.
Automatic recognition of road distresses has been an important research area, since it reduces economic losses by catching cracks and potholes before they become too severe. Existing systems for automated pavement defect detection commonly require special devices such as lights, lasers, etc., which dramatically increase the cost and limit the system to certain applications. Therefore, in this paper, a low-cost automatic pavement distress evaluation approach is proposed. This method provides real-time pavement distress detection and evaluation results based on color images captured by a camera installed on a survey vehicle. The entire process consists of two main parts: pavement surface extraction, followed by pavement distress detection and classification. In the first part, a novel color segmentation method based on a feed-forward neural network is applied to separate the road surface from the background. In the second part, a thresholding technique based on probabilistic relaxation is utilized to separate distresses from the road surface. Then, by feeding the geometrical parameters of the detected distresses into a neural-network-based pavement distress classifier, the defects are classified into different types. Simulation results show that the proposed method is both effective and reliable on a variety of pavement images.
Face classification in an uncontrolled setting has wide applications in security and surveillance systems. Multiple frames are often available for this purpose, captured by multiple sensors or by a single sensor generating video clips. Data fusion techniques for face classification have the advantage that a considerable amount of information can be used to achieve high recognition performance. This paper investigates the efficacy of multi-frame decision-level fusion for face classification based on photon-counting linear discriminant analysis, which realizes Fisher's criterion without dimensionality reduction. Decision-level fusion comprises two stages: score validation and score combination. During score validation, candidate symbols (classes) are selected by a screening process. During score combination, the candidate scores are combined to make a final decision. In the experiments, a facial image database is employed to show the preliminary results of the proposed technique.
Unconstrained environments with variable ambient illumination and changes of head pose are still challenging for many face recognition systems. To recognize a person independent of pose, we first fit an active appearance model to a given facial image. Shape information is used to transform the face into a pose-normalized representation. We decompose the transformed face into local regions and extract texture features from these not necessarily rectangular regions using a shape-adapted discrete cosine transform. We show that these features contain sufficient discriminative information to recognize persons across changes in pose. Furthermore, our experimental results show a significant improvement in face recognition performance on faces with pose variations when compared with a block-DCT based feature extraction technique in an access control scenario.
This study proposes a new white blood cell (WBC) segmentation method using a region-merging scheme and GVF (gradient vector flow) snakes. WBC segmentation consists of two stages: nucleus segmentation and cytoplasm segmentation. For nucleus segmentation, we create a probability map using a probability density function estimated from samples of WBC nuclei, and crop sub-images containing a nucleus by exploiting the fact that nuclei have a salient color against the background and red blood cells. Then, mean-shift clustering is performed for region segmentation, and merging rules are applied to merge particle clusters into the nucleus. For cytoplasm segmentation, a hybrid approach is proposed that combines the spatial characteristics of cytoplasm with GVF snakes to delineate the boundary of the region of interest. Unlike previous algorithms, the main contribution of this study is to improve the accuracy of WBC segmentation and reduce the computation time by cropping sub-images and applying different segmentation rules to different parts of the cell. The proposed method was evaluated on five WBC types, and it produced accurate segmentation results for most types of WBCs.
A multi-sensor image fusion method based on a redundant lifting non-separable wavelet transform is presented. This method can accurately capture the detailed information of an image. The wavelet coefficients of the approximation sub-images are fused based on a local-area-gradient fusion rule. The method is effectively used to fuse multi-spectral infrared images and multi-sensor medical images. The results show that the proposed method has the property of shift invariance and achieves good fusion effects.
The detection of abnormalities is a very challenging problem in computer vision, especially if these abnormalities
must be detected in images of textured surfaces such as textile, stone, or wood. We propose a novel, non-parametric
approach for defect detection in textures that only employs two features. We compute the two
parameters of a Weibull fit for the distribution of image gradients in local regions. Then, we perform a simple
novelty detection algorithm in order to detect arbitrary deviations from the reference texture. To this end, we evaluate the Euclidean distances of all local patches to a reference point in the Weibull space, where the reference point is determined individually for each texture image. Thus, our approach is independent of the particular texture type and also of the specific defect type.
For performance evaluation we use the highly challenging database provided by Bosch for a contest on
industrial optical inspection with different classes of textures and different defect types. By using the Weibull
parameters we can detect local deviations of texture images in an unsupervised manner with high accuracy.
Compared to existing approaches such as Gabor filters or grey-level statistics, our approach is not only powerful but also efficient enough to be used in real-time applications.
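The two-parameter idea can be sketched as follows, with simulated gradient-magnitude distributions standing in for real texture patches and a simple probability-plot fit standing in for whichever Weibull estimator the authors use. All distribution parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def weibull_fit(x):
    # Probability-plot (least-squares) fit: for a Weibull CDF
    # F(x) = 1 - exp(-(x/lam)^k), log(-log(1 - F)) = k*log(x) - k*log(lam)
    x = np.sort(x)
    n = x.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions
    y = np.log(-np.log(1.0 - F))
    k, b = np.polyfit(np.log(x), y, 1)
    return k, np.exp(-b / k)                      # shape k, scale lam

def sample(k, lam, n=4000):
    # Inverse-transform sampling of a Weibull(k, lam) distribution,
    # standing in for a patch's gradient-magnitude distribution
    return lam * (-np.log(1.0 - rng.random(n))) ** (1.0 / k)

reference = [weibull_fit(sample(1.2, 2.0)) for _ in range(10)]  # defect-free patches
ref_point = np.mean(reference, axis=0)            # reference point in Weibull space

good = np.array(weibull_fit(sample(1.2, 2.0)))    # another defect-free patch
bad = np.array(weibull_fit(sample(2.5, 4.0)))     # patch with deviating statistics

d_good = np.linalg.norm(good - ref_point)
d_bad = np.linalg.norm(bad - ref_point)
print(d_good < d_bad)
```

A patch is flagged as defective when its Euclidean distance in the (shape, scale) plane exceeds a threshold relative to the image's own reference point, which is what makes the scheme unsupervised and texture-agnostic.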
At the Spallation Neutron Source (SNS), an accelerator-based neutron source located at the Oak Ridge National
Laboratory (Tennessee, USA), the production of neutrons is obtained by accelerating protons against a mercury
target. This self-cooling target, however, undergoes rapid heat deposition from each beam pulse, leading to large pressure changes and thus to cavitation that may damage the container. To locally compensate for pressure increases, a population of small bubbles is added to the mercury flow using gas bubblers. Since the best bubbler geometry is unknown, we are testing several bubbler configurations and using machine vision techniques to characterize their efficiency by quantitatively measuring the created bubble population. In this
paper we thoroughly detail the experimental setup and the image processing techniques used to quantitatively
assess the bubble population. To support this approach we are comparing our preliminary results for different
bubblers and operating modes, and discuss potential improvements.
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and
resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a
magnified imaging system is designed with small spot size aperture holes (10 and 100μm) for improved resolution
beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the
small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A CSI system design is developed, along with equations for the system's limitations based on the coded-image requirements and on the neutron source's size and divergence. The design has been simulated using McStas to provide qualitative measures of performance on pinhole-array objects, followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation
transfer function (MTF) from the line spread function. MTF results for both 100μm and 10μm aperture hole
diameters show resolutions matching the hole diameters.
Machine vision applications today require skilled users to configure, tune, and maintain them. Because such users are scarce, the robustness and reliability of applications often suffer. Autonomic
computing offers a set of principles such as self-monitoring, self-regulation, and self-repair which can be used
to partially overcome those problems. Systems which include self-monitoring observe their internal states, and
extract features about them. Systems with self-regulation are capable of regulating their internal parameters
to provide the best quality of service depending on the operational conditions and environment. Finally, self-repairing
systems are able to detect anomalous working behavior and to provide strategies to deal with such
conditions. Machine vision applications are the perfect field to apply autonomic computing techniques. This
type of application has strong constraints on reliability and robustness, especially when working in industrial
environments, and must provide accurate results even under changing conditions such as luminance, or noise.
In order to exploit the autonomic approach of a machine vision application, we believe the architecture of
the system must be designed using a set of orthogonal modules. In this paper, we describe how autonomic
computing techniques can be applied to machine vision systems, using as an example a real application: 3D
reconstruction in harsh industrial environments based on laser range finding. The application is based on
modules with different responsibilities at three layers: image acquisition and processing (low level), monitoring
(middle level) and supervision (high level). High level modules supervise the execution of low-level modules.
Based on the information gathered by mid-level modules, they regulate low-level modules in order to optimize
the global quality of service, and tune the module parameters based on operational conditions and on the
environment. Regulation actions involve modifying the laser extraction method to adapt to changing conditions
in the environment.
We propose a method and a system to measure the distance to a target while simultaneously obtaining an image of the target. The system is based on a wavefront-coded lens and an image processing unit. The method requires a calibration phase in which a blur metric is related to the distance of the target. The distance to the target at any position is then obtained by computing the blur metric of the target and using the calibration data. The system is easy to manufacture, and the method provides an alternative to other distance measuring methods and devices, while also producing an image of the scene. The target is a printed image of pseudo-random black and white elements which can be stuck on or placed near objects whose distance is to be evaluated.
We present a machine vision system for automatic identification of the class of firearms by extracting and analyzing two
significant properties from spent cartridge cases, namely the Firing Pin Impression (FPI) and the Firing Pin Aperture
Outline (FPAO). Within the framework of the proposed machine vision system, a white light interferometer is employed
to image the head of the spent cartridge cases. As a first step of the algorithmic procedure, the Primer Surface Area
(PSA) is detected using a circular Hough transform. Once the PSA is detected, a customized statistical region-based
parametric active contour model is initialized around the center of the PSA and evolved to segment the FPI.
Subsequently, the scaled version of the segmented FPI is used to initialize a customized Mumford-Shah based level set
model in order to segment the FPAO. Once the shapes of FPI and FPAO are extracted, a shape-based level set method is
used in order to compare these extracted shapes to an annotated dataset of FPIs and FPAOs from varied firearm types. A
total of 74 cartridge case images, non-uniformly distributed over five different firearms, are processed using the aforementioned scheme, and the promising results (95% classification accuracy) demonstrate the efficacy of the proposed approach.
This paper presents a method that can extract and visualize anatomical structures from volumetric medical images by
using a 3D level set segmentation method and a hybrid volume rendering technique. First, the segmentation using the
level set method was conducted through a surface evolution framework based on a geometric variational principle. This
approach addresses the topological changes in the deformable surface by using the geometric integral measures and level
set theory. These integral measures contain a robust alignment term, an active region term, and a mean curvature term.
By using the level set method with a new hybrid speed function derived from the geometric integral measures, an
accurate deformable surface can be extracted from a volumetric medical data set. Second, we employed a hybrid volume
rendering approach to visualize the extracted deformable structures. Our method combines indirect and direct volume
rendering techniques. Segmented objects within the data set are rendered locally by surface rendering on an object-by-object
basis. Globally, all the results of subsequent object rendering are obtained by direct volume rendering (DVR).
Then the two rendered results are finally combined in a merging step. This is especially useful when inner structures
should be visualized together with semi-transparent outer parts. This merging step is similar to the focus-plus-context
approach known from information visualization. Finally, we verified the accuracy and robustness of the proposed
segmentation method for various medical volume images. The volume rendering results of segmented 3D objects show
that our proposed method can accurately extract and visualize human organs from various multimodality medical volume
images.
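The mean curvature term of such a speed function can be illustrated with a toy 2D curvature-flow update on a level set function. The grid size, time step, and initial signed distance function below are assumptions for demonstration, not the paper's actual model:

```python
import numpy as np

def curvature_flow_step(phi, dt=0.1):
    """One explicit step of mean-curvature motion for a 2D level set
    function phi: phi_t = kappa * |grad phi|, with the curvature kappa
    computed from central differences of phi."""
    phi_y, phi_x = np.gradient(phi)
    phi_yy, phi_yx = np.gradient(phi_y)
    phi_xy, phi_xx = np.gradient(phi_x)
    eps = 1e-8  # guards the division where the gradient vanishes
    grad2 = phi_x**2 + phi_y**2
    kappa = (phi_xx * phi_y**2 - 2 * phi_x * phi_y * phi_xy
             + phi_yy * phi_x**2) / (grad2**1.5 + eps)
    return phi + dt * kappa * np.sqrt(grad2)

# Signed distance to a circle of radius 20: under pure curvature flow
# the zero level set (the contour) contracts toward the centre.
yy, xx = np.mgrid[0:64, 0:64]
phi = np.sqrt((yy - 32.0)**2 + (xx - 32.0)**2) - 20.0
area0 = (phi <= 0).sum()
for _ in range(50):
    phi = curvature_flow_step(phi)
print(area0, (phi <= 0).sum())  # enclosed area shrinks
```

In a full segmentation model this regularising term is combined with data-driven terms (here, the alignment and active region terms) so that the surface stops on object boundaries instead of collapsing.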
Many methods have been developed in image processing for face recognition, especially in recent years with the increase
of biometric technologies. However, most of these techniques are used on grayscale images acquired in the visible range
of the electromagnetic spectrum.
The aims of our study are to improve existing tools and to develop new methods for face recognition. The techniques
used take advantage of the different spectral ranges, the visible, optical infrared and thermal infrared, by either
combining them or analyzing them separately in order to extract the most appropriate information for face recognition.
We also verify the consistency of several keypoints extraction techniques in the Near Infrared (NIR) and in the Visible
Spectrum.
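One simple way to quantify the consistency of keypoint detections across registered NIR and visible images is a nearest-neighbour repeatability score. The sketch below, with hypothetical keypoint coordinates, illustrates the general idea rather than the authors' protocol:

```python
import numpy as np

def repeatability(kps_a, kps_b, tol=3.0):
    """Fraction of keypoints in image A that have a keypoint in image B
    within `tol` pixels -- a basic consistency score for comparing
    detections in two registered spectral bands."""
    kps_a = np.asarray(kps_a, dtype=float)
    kps_b = np.asarray(kps_b, dtype=float)
    if len(kps_a) == 0 or len(kps_b) == 0:
        return 0.0
    # Pairwise distances between all keypoints of the two bands
    d = np.linalg.norm(kps_a[:, None, :] - kps_b[None, :, :], axis=2)
    return float((d.min(axis=1) <= tol).mean())

# Toy example: 4 of 5 visible-band keypoints reappear, slightly
# shifted, in the NIR band.
vis = [(10, 10), (50, 40), (80, 20), (30, 70), (90, 90)]
nir = [(11, 9), (49, 41), (79, 22), (31, 69)]
score = repeatability(vis, nir)
print(score)  # 0.8
```

With real detectors (e.g. SIFT or Harris corners run independently on each band), such a score indicates how much of the keypoint structure survives the change of spectral range.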
Multidisciplinary research conducted at the Oak Ridge National Laboratory aims at understanding the molecular
controls of partitioning, transport, and fate of carbon fixed by photosynthesis in plants, and their correlation
with other measured plant system properties. Ultimately, we intend to develop a modeling framework to assess,
correlate, and predict which spatiotemporal changes in system dynamics are key to predicting emergent
properties of the system. Within this research, this paper addresses the quantitative morphological imaging of the
main structures forming a plant (stem, roots, and leaves), their internal sub-structures, and changes occurring
over time.
Owing to its low cost, the ball crater method is widely used in both the laboratory and industry for thickness
measurements of coatings. A crucial step of this technique is the determination of the radii of the crater circles from a
microscope image, and the accuracy of the thickness value is largely determined by the accuracy of these radius measurements.
Traditional methods of measuring the radii involve human operators, who inevitably introduce measurement uncertainty.
In this work, we propose an automated method of estimating the crater radii with the aid of image processing. It
measures the radius by fitting a circle to all edge points of the crater circle, which is extracted from the microscope
image of the crater by using image processing techniques. It enables automating the process of determining the coating
thickness from the image of a ball crater on a coating. Furthermore, because it utilizes all edge points to estimate the
radius value, it increases robustness and measurement accuracy. Experimental results confirm the feasibility of our
method and its potential in reducing measurement uncertainty and increasing measurement accuracy of the ball crater
method.
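The circle-fitting step can be sketched with an algebraic least-squares (Kåsa) fit over all extracted edge points, followed by the standard calotest relation between the two crater radii and the coating thickness. The ball radius, noise level, and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares (Kasa) circle fit: rewrite
    (x-a)^2 + (y-b)^2 = r^2 as the linear system
    x^2 + y^2 = 2*a*x + 2*b*y + c and solve for (a, b, c),
    with r = sqrt(c + a^2 + b^2)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def coating_thickness(r_outer, r_inner, ball_radius):
    """Spherical-cap geometry of the crater: the interface circle
    (r_inner) lies at depth t below the surface circle (r_outer),
    giving t = sqrt(R^2 - r_inner^2) - sqrt(R^2 - r_outer^2)."""
    R = ball_radius
    return np.sqrt(R**2 - r_inner**2) - np.sqrt(R**2 - r_outer**2)

# Noisy edge points on a crater circle of radius 0.5 mm at (1.0, 1.0)
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
pts = np.stack([1.0 + 0.5 * np.cos(t), 1.0 + 0.5 * np.sin(t)], axis=1)
pts += rng.normal(scale=0.005, size=pts.shape)
cx, cy, r = fit_circle(pts)
print(round(float(r), 3))  # close to 0.5
```

Because the fit averages over all edge points, the estimated radius is far less sensitive to the noise on any individual point than a manual two- or three-point measurement, which is the robustness argument made above.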