OPTICAL
BIOMEDICAL
DIAGNOSTICS
SECOND EDITION
Volume 2: Methods
Valery V. Tuchin
EDITOR
Library of Congress Cataloging-in-Publication Data
Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: [email protected]
Web: http://spie.org
The content of this book reflects the work and thought of the authors and editors.
Every effort has been made to publish reliable and accurate information herein, but the
publisher is not responsible for the validity of the information or for any outcomes
resulting from reliance thereon. All known errata will be posted on the book’s page on
our website.
Images on the cover were reprinted with permission from the following publications:
• A. Doronin, C. Macdonald, and I. Meglinski, “Propagation of coherent polarized
light in highly scattering turbid media,” J. Biomed. Opt. 19(2), 025005 (2014).
• S. J. Kirkpatrick and D. D. Duncan, “Noncontact microstrain measurements in
orthodontic wires,” J. Biomed. Mater. Res. 29, 1437–1442 (1995).
Table of Contents
Preface xv
List of Contributors xix
Index 639
Preface
This Handbook is the second edition of the monograph initially published in
2002. The first edition described some aspects of laser–cell and laser–tissue
interactions that are basic for biomedical diagnostics and presented many
optical and laser diagnostic technologies promising for clinical applications.
The main reason for publishing such a book was the achievements of the last
millennium in light scattering and coherent light effects in tissues, and in the
design of novel laser and photonics techniques for the examination of the
human body. Since 2002, biomedical optics and biophotonics have undergone rapid
and extensive development, leading to technical advances that increase the
utility and market growth of optical technologies. Recent developments in the
field of biophotonics are wide-ranging and include novel light sources,
delivery and detection techniques that can extend the imaging range and
spectroscopic probe quality, and the combination of optical techniques with
other imaging modalities.
The innovative character of photonics and biophotonics is underscored by the
two Nobel Prizes awarded in 2014 to Eric Betzig, Stefan W. Hell, and William
E. Moerner “for the development of super-resolved fluorescence microscopy”
and to Isamu Akasaki, Hiroshi Amano, and Shuji Nakamura “for the
invention of efficient blue light-emitting diodes which has enabled bright and
energy-saving white light sources.” The authors of this Handbook have made a
strong contribution to the development of new solutions in biomedical optics and
biophotonics and have conducted cutting-edge research and development
over the last 10–15 years, the results of which were used to modify and update
the earlier chapters. Many new, world-recognized experts in the field have
joined the team of authors, bringing fresh blood to the book and providing
a new perspective on many aspects of optical biomedical diagnostics.
The optical medical diagnostic field covers many spectroscopic and laser
technologies based on near-infrared (NIR) spectrophotometry, fluorescence
and Raman spectroscopy, optical coherence tomography (OCT), confocal
microscopy, optoacoustic (photoacoustic) tomography, photon-correlation
spectroscopy and imaging, and Doppler and speckle monitoring of biological
flows.1–45 These topics—as well as the main trends of the modern laser
diagnostic techniques, their fundamentals and corresponding basic research
Valery V. Tuchin
April 2016
References
1. F. A. Duck, Physical Properties of Tissue: A Comprehensive Reference
Book, Academic, London (1990).
2. A. P. Shepherd and P. A. Oberg, Laser Doppler Blood Flowmetry,
Kluwer, Boston (1990).
3. J. B. Pawley (Ed.), Handbook of Biological Confocal Microscopy, Plenum
Press, New York (1990).
4. T. Wilson (Ed.), Confocal Microscopy, Academic Press, London (1990).
5. K. Frank and M. Kessler (Eds.), Quantitative Spectroscopy in Tissue, pmi
Verlag, Frankfurt am Main (1992).
23. E. Baron (Ed.), Light-Based Therapies for Skin of Color, Springer, New
York (2009).
24. K.-E. Peiponen, R. Myllylä, and A. V. Priezzhev, Optical Measurement
Techniques, Innovations for Industry and the Life Science, Springer-Verlag,
Berlin, Heidelberg (2009).
25. L. Wang (Ed.), Photoacoustic Imaging and Spectroscopy, CRC Press,
Taylor & Francis Group, London (2009).
26. V.V. Tuchin (Ed.), Handbook of Optical Sensing of Glucose in Biological
Fluids and Tissues, CRC Press, Taylor & Francis Group, London (2009).
27. A. Wax and V. Backman (Eds.), Biomedical Applications of Light
Scattering, McGraw-Hill, New York (2010).
28. V. V. Tuchin, Lasers and Fiber Optics in Biomedical Science, 2nd ed.,
Fizmatlit, Moscow (2010).
29. X.-C. Zhang and J. Xu, Introduction to THz Wave Photonics, Springer,
New York (2010).
30. V.V. Tuchin (Ed.), Handbook of Photonics for Medical Science, CRC
Press, Taylor & Francis Group, London (2010).
31. F. S. Pavone (Ed.), Laser Imaging and Manipulation in Cell Biology,
Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim (2010).
32. V.V. Tuchin (Ed.), Advanced Optical Flow Cytometry: Methods and
Disease Diagnoses, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim
(2011).
33. D. A. Boas, C. Pitris, and N. Ramanujam (Eds.), Handbook of Biomedical
Optics, CRC Press, Taylor & Francis Group, London (2011).
34. J. Popp, V.V. Tuchin, A. Chiou, and S.H. Heinemann (Eds.), Handbook
of Biophotonics, vol. 1: Basics and Techniques, WILEY-VCH Verlag
GmbH & Co. KGaA, Weinheim (2011).
35. J. Popp, V.V. Tuchin, A. Chiou, and S.H. Heinemann (Eds.), Handbook
of Biophotonics, vol. 2: Photonics for Health Care, WILEY-VCH Verlag
GmbH & Co. KGaA, Weinheim (2012).
36. J. Popp, V.V. Tuchin, A. Chiou, and S.H. Heinemann (Eds.), Handbook
of Biophotonics, vol. 3: Photonics in Pharmaceutics, Bioanalysis and
Environmental Research, WILEY-VCH Verlag GmbH & Co. KGaA,
Weinheim (2012).
37. V.V. Tuchin, Dictionary of Biomedical Optics and Biophotonics, SPIE
Press, Bellingham, WA (2012).
38. M. J. Leahy (Ed.), Microcirculation Imaging, Wiley-VCH Verlag GmbH
& Co. KGaA, Weinheim (2012).
39. R.K. Wang and V.V. Tuchin (Eds.), Advanced Biophotonics: Tissue
Optical Sectioning, CRC Press, Taylor & Francis Group, London (2013).
40. H. Jelinkova (Ed.), Lasers for Medical Applications: Diagnostics, Therapy
and Surgery, Woodhead Publishing, Ltd., Cambridge (2013).
and urinary bladder. The main goal of this chapter is to provide the readers
with basic tools necessary to understand the potentials of biomedical light
scattering spectroscopy, including sufficient medical and biological back-
ground and principles of light scattering by cells and sub-cellular structures.
The relative roles of single and multiple scattering in tissue are given particular
consideration. Finally, the applications of various types of light scattering in
detection of early cancer and precancerous conditions are reviewed. In
addition, several recently developed clinical tools are described including the
endoscopic polarized scanning spectroscopy (EPSS) instrument, which is
compatible with existing endoscopes. It scans large areas of the esophagus
chosen by the physician and has the software and algorithms necessary to
obtain quantitative, objective data about tissue structure and composition,
which can be translated into diagnostic information in real time. This process
enables the physician to take confirming biopsies at suspicious sites and
minimize the number of biopsies taken at nondysplastic sites. Another newly
developed technique, called confocal light absorption and scattering
spectroscopic (CLASS) microscopy, combines light-scattering spectroscopy
(LSS) with confocal microscopy. In CLASS microscopy, light-scattering
spectra are the source of the contrast. Another important aspect of LSS is its
ability to detect and characterize particles well beyond the diffraction limit.
Chapter 3 discusses the applications of reflectance and fluorescence
spectroscopies for the assessment of the optical properties of human skin in
relation to different diseases, environmental factors, and the effectiveness of
various treatments. Applied to the skin in vivo, these techniques provide
information on the structure of epidermis and dermis, on the quantity and
density of blood vessels, on the concentration and spatial distribution of
chromophores and fluorophores in skin, and on the nature of skin metabolic
processes. The authors discuss the potential advantages and possible
applications of the combined use of reflectance and fluorescence spectroscopy
of skin for the evaluation of erythema and pigmentation indices, the
determination of hemoglobin oxygenation and concentration, and the
investigation of the efficacy of topical sunscreens. Simple models are used
to analyze changes in skin reflectance and fluorescence spectra as a result of
morphological and functional alterations in skin, or as a result of treatment
effects. Such changes can be monitored by imaging techniques, in particular
in polarized light, and by analyzing the color characteristics of the reflected light.
Ways to improve the accuracy of skin diagnostics and the efficiency of skin
therapy by analyzing and controlling the skin optical parameters are also
discussed in this chapter. In particular, the authors demonstrate how to
control the sensitivity of skin reflectance spectra by compression and
stretching. Special emphasis is placed on the potential of immersion
optical clearing and the corresponding decrease in the scattering coefficient in
tissue studies. Ways to raise the efficiency of optical clearing, e.g., by
Alexander V. Priezzhev
Juergen Lademann
Co-editors
Chapter 1
Optical Study of RBC
Aggregation in Whole Blood
Samples and Single Cells
Alexander V. Priezzhev and Kisung Lee
Lomonosov Moscow State University, Moscow, Russia
Nikolai N. Firsov
Russian State Medical University, Moscow, Russia
Juergen Lademann
University Clinic Charité, Berlin, Germany
creasing the cell membranes, which reduces the aggregation of RBCs. The
studies of RBC deformability showed24–26 that as the shear rate is increased
from zero to 200 s⁻¹, the relative elongation p = (a − b)/(a + b), where a and b
are the long and short semi-axes, respectively, of the ellipsoid modeling the
elongated cell, increases quite rapidly, whereas at shear rates from
200 to 600 s⁻¹ the relative elongation changes more slowly and does not
exceed the value of p = 0.7. Long-lasting mechanical stress reduces the RBC
deformability. However, the deformability is gradually restored after some
time depending on the duration and intensity of the applied stress. Computer
analysis of the experimental data obtained by means of laser diffractometry27
allows one to conclude that at high shear rates the major factor of RBC
deformability is the viscosity of the intracellular medium (the hemoglobin
solution), while at low shear rates the major factors are the viscoelastic
properties of the membrane and the shape of the cells.
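As a minimal illustration of the elongation index defined above (not part of the original text), the sketch below computes p from the semi-axes of the ellipse that models the sheared cell. In laser diffractometry these axes would typically be obtained by fitting an ellipse to the measured diffraction pattern; the numerical values used here are placeholders.

```python
# Minimal sketch (not from the original text): the RBC elongation index
# p = (a - b) / (a + b) for an ellipsoid with long semi-axis a and short
# semi-axis b. The axis values below are illustrative placeholders.

def elongation_index(a_um: float, b_um: float) -> float:
    """Return p = (a - b) / (a + b) for semi-axes a >= b (micrometers)."""
    return (a_um - b_um) / (a_um + b_um)

# Hypothetical axes of a moderately sheared cell:
print(elongation_index(a_um=5.2, b_um=1.9))   # ~0.46, below the ~0.7 plateau
```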
Figure 1.3 Basic approaches for the determination of the erythrocyte aggregation and
disaggregation by optical measurements.
and Stolz et al.36 and Donner et al.37 Later Firsov et al.,38–41 Priezzhev
et al.,42,43 Hardeman et al.,44 and Potron et al.45 developed this approach
enabling researchers to perform quantitative measurements of kinetic indices
of RBC aggregation and estimate the hydrodynamic strength of the
aggregates. A cylindrical Couette cell is typically used for such measurements.
Numerical simulations of the backscattering signal from model blood
samples in different geometries, performed using the Monte Carlo algorithm,
were used to optimize the operation of the device as well as the signal
processing and interpretation.43,46
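The Monte Carlo model of Refs. 43 and 46 itself is not reproduced in this chapter. Purely as an illustration of the general approach, the sketch below runs a simplified photon random walk in a semi-infinite scattering medium with a Henyey-Greenstein phase function and tallies the diffusely backscattered fraction; the optical parameters are placeholders rather than whole-blood values, and the Couette geometry is not modeled.

```python
import numpy as np

# Illustrative sketch only: random-walk photon transport in a semi-infinite
# scattering medium with a Henyey-Greenstein phase function, tallying the
# diffusely backscattered weight. This is NOT the Couette-geometry simulation
# of Refs. 43 and 46; the optical parameters are placeholders, not blood values.

rng = np.random.default_rng(1)
mu_s, mu_a, g = 80.0, 0.5, 0.98        # scattering/absorption coeff. (1/mm), anisotropy
mu_t = mu_s + mu_a
albedo = mu_s / mu_t

def hg_cos_theta():
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def run_photon():
    pos = np.zeros(3)
    dirn = np.array([0.0, 0.0, 1.0])            # launched into the medium (+z)
    weight = 1.0
    while True:
        step = -np.log(1.0 - rng.random()) / mu_t
        pos = pos + step * dirn
        if pos[2] < 0.0:                        # escaped back through the surface
            return weight
        weight *= albedo                        # implicit absorption
        if weight < 1e-4:                       # terminate very weak photons
            return 0.0
        ct = hg_cos_theta()                     # new direction: HG polar angle,
        st = np.sqrt(max(0.0, 1.0 - ct * ct))   # uniform azimuth
        phi = 2.0 * np.pi * rng.random()
        if abs(dirn[2]) > 0.99999:              # avoid a degenerate basis
            dirn = np.array([st * np.cos(phi), st * np.sin(phi), ct * np.sign(dirn[2])])
        else:
            u = np.cross([0.0, 0.0, 1.0], dirn); u /= np.linalg.norm(u)
            v = np.cross(dirn, u)
            dirn = ct * dirn + st * (np.cos(phi) * u + np.sin(phi) * v)

n_photons = 20000
print("diffuse reflectance ~", sum(run_photon() for _ in range(n_photons)) / n_photons)
```

A realistic simulation would additionally have to model the gap geometry, the detector apertures, and the aggregation-dependent scattering properties of the RBCs.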
sample ranges from 2.3 to 2.5 mL. The outer (hollow) cylinder wall is
transparent to the illuminating light. The surface of the inner cylinder is
specially coated to eliminate reflection. The inner or outer cylinder can be
rotated with varying step rates, so that the shear rate in the fluid flow
inside the gap changes from a minimum value ranging from 1.5 to 2.5 s⁻¹
to a maximum value ranging from 840 to 1500 s⁻¹, depending upon the
design. A specific feature of Couette flows is that the magnitude of the shear
rate is the same everywhere inside. It is determined only by the rotation rate
of the cylinder and the radii of the cylinders. This means that all RBCs in
the gap are in similar hydrodynamic stress conditions, independent of
their exact location, which is very important for aggregation
measurements.
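A minimal sketch of the narrow-gap estimate implied above, assuming the shear rate is set only by the rotation rate and the cylinder radii; the radii and rotation rates below are illustrative placeholders, not the dimensions of a specific instrument.

```python
import math

# Minimal sketch, assuming a narrow-gap Couette cell in which the shear rate is
# set only by the rotation rate and the cylinder radii:
#   gamma_dot ~ omega * R_mean / (R_outer - R_inner).
# The radii and rotation rates below are illustrative placeholders.

def couette_shear_rate(rpm: float, r_inner_mm: float, r_outer_mm: float) -> float:
    """Approximate (narrow-gap) shear rate in 1/s for a given rotation rate."""
    omega = 2.0 * math.pi * rpm / 60.0            # angular velocity, rad/s
    r_mean = 0.5 * (r_inner_mm + r_outer_mm)
    gap = r_outer_mm - r_inner_mm
    return omega * r_mean / gap

print(couette_shear_rate(rpm=10.0,  r_inner_mm=19.0, r_outer_mm=20.0))  # ~20 1/s
print(couette_shear_rate(rpm=600.0, r_inner_mm=19.0, r_outer_mm=20.0))  # ~1200 1/s
```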
To perform the measurements, a collimated probe beam of an intensity-
modulated laser (λp = 670 or 780 nm) illuminates a small volume of blood
(typically V = 1–2 mm³, depending on the focusing) at the half-height level of
the gap. The incident light peak intensity in the probe beam is I0 < 0.3 mW/
mm². Multiple tests performed with blood samples showed that laser
irradiation of blood at these wavelengths and of such an intensity affects
neither the rheological nor the optical properties of a sample throughout the
measurement procedure, which typically lasts about 10 minutes.
After passing through the transparent wall of the outer cylinder, the
probing beam is scattered multiple times by RBCs all throughout the depth of
the blood sample. A fiber-optic unit consisting of several detecting apertures
placed outside the chamber in the vicinity of the probing beam detects a
portion of the incident light scattered in the backward direction. The process
of diffusion and the multiple scattering of the photons in the blood volume
can be considered as follows. Part of the probe light is scattered by the first
layer of RBCs. As discussed in Chapter 2, due to the high anisotropy of blood,
most of the photons are scattered at low angles. They are sequentially
scattered in the deeper and deeper layers. Only a small portion of the photons
that are multiply scattered at large angles reach the detector apertures at the
outer surface of the sample and contribute to the output signal. Experiments
show that the apertures located at different positions relative to the incident
probe beam and flow direction detect different numbers of photons
backscattered from flowing RBCs. This phenomenon of scattering asymmetry
is considered in more detail in Ref. 47.
Figure 1.5 Typical time course of remission signal measured with a blood sample drawn
from a healthy individual (Hct = 40).
Figure 1.6 Kinetics of the remission signal during the aggregation process.
Figure 1.7 Approximation of the remission signal during the aggregation process.
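The approximation procedure of Fig. 1.7 is not specified here in detail. As an illustration only, the sketch below fits a bi-exponential relaxation, whose two time constants play the role of the characteristic aggregation times T1 and T2 discussed below, to a synthetic remission-signal trace; the functional form, the synthetic data, and all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustration only: approximating an aggregation-phase remission signal with a
# bi-exponential relaxation and extracting two characteristic times that play
# the role of T1 and T2. The functional form, the synthetic "data," and all
# parameter values below are assumptions, not the model or data of this chapter.

def remission_model(t, i_inf, a1, t1, a2, t2):
    return i_inf + a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2)

t = np.linspace(0.0, 120.0, 400)                        # time, s
truth = remission_model(t, 0.30, 0.25, 5.0, 0.20, 40.0) # synthetic signal
data = truth + 0.005 * np.random.default_rng(0).normal(size=t.size)

p0 = (0.3, 0.2, 3.0, 0.2, 30.0)                         # initial guess
popt, _ = curve_fit(remission_model, t, data, p0=p0)
print("fitted characteristic times (s):", popt[2], popt[4])
```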
shown that the temperature change ranging from 28 to 37°C has no effect on
the remission signal at flow conditions. As for the temperature dependence of
the aggregation properties of RBCs in stasis or at low shear stress, the
experimental results obtained by different authors are rather controversial.
Our experiments, performed with a Couette system and blood samples of
healthy individuals,49 show that though the scatter of the measured
parameters is rather high, the characteristic times T1 and T2 do not regularly
depend on temperature from 2 to 37°C. The main conclusion is that the
aggregation parameters of RBCs can be reproducibly measured at room
temperatures (20–25°C) without any corrections. Temperatures higher than
37°C induce a quick increase in the aggregation rate (by about three times at
45°C). In the temperature range of 45 to 50°C, adequate measurements are
hardly possible due to the thermal denaturation of blood. In some
pathological cases (e.g., cryoglobulinemia, Sjögren's disease), a sharp decrease
of T1 and an increase of the strength of aggregates, β, were recorded.
However, other authors using different techniques and conducting
experiments in different conditions report somewhat different and variable
18 Chapter 1
results (see, e.g., Refs. 50–52). In one such experiment performed by means of
photometric aggregometry,50 20 blood samples of normal donors and 20
blood samples of patients with venous ulcers of the leg were examined at 3, 10,
20, 30, and 37°C. It was shown that with decreasing temperature, red cell
aggregates become more resistant to hydrodynamic dispersion and more
prone to growing under low shear stress. It is concluded that a decrease in
temperature causes an increase in the adsorptive energy of red cell
aggregation, which is most likely due to an increase in molecular adsorption
stress. RBC aggregate formation as an overall process is retarded by a
decrease in temperature, which is primarily due to an increase in plasma
viscosity that causes an increased damping of aggregate formation.
Accordingly, the rate constant of aggregate formation corrected for plasma
viscosity increases with decreasing temperature. The authors explain the
obtained temperature dependence of the kinetic parameters by a theoretical
model that suggests an increase in the contact area between aggregating red
blood cells as the rate-limiting step of red cell aggregation. They conclude that
as a whole, red cell aggregation is favored by a lowering of temperature.
The effect of oxygenation on the level of the remission signal can be reduced if
the measuring wavelength is chosen near the isosbestic point of blood at 805 nm.
Whether or not the determined dependence of the remission signal on the
degree of oxygenation influences the kinetic measurements of whole blood
samples in the rotating cylinder system was checked using two different diode
lasers. One of them emitted at a wavelength of 670 nm, at which the
oxygenation effect has a significant influence on the remission signal, whereas
the other one was at 780 nm, at which the influence can be neglected. Long-
term measurements showed no difference in the time course of the remission
signal during aggregation and disaggregation at these wavelengths. Taking
into consideration these results, the effect of oxygenation on the remission
signals could be neglected under the conditions of the standard procedure of
blood sample preparation. This result is comprehensible, because blood
samples are kept in very limited contact with air during sampling and
preparation. The open surface of the sample in the gap, i.e., in contact with
the air, is very small in comparison to the whole sample surface.
Figure 1.9 Effect of RBC sedimentation process on the remission signal intensity
(variation limits).
trap—a light field gradient that traps the nearby microparticles. Typically a
microparticle in the optical trap is considered as a particle attached to a spring
(optical trap) with stiffness k. The farther away the particle is from the beam
focus, the higher is the returning force applied by the optical trap. In a typical
study of RBCs’ interactions, the forces are matched to the maximum returning
force of the optical trap. When a trapped cell escapes from the optical trap, the
interaction force has exceeded the trapping force.
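A minimal numerical sketch of the spring model described above: the restoring force grows linearly with the displacement from the focus up to a maximum trapping force, and the cell escapes once the interaction force exceeds that maximum. The stiffness and displacement values are hypothetical.

```python
# Minimal sketch of the Hookean trap model described above: restoring force
# F = k * x for small displacements x from the beam focus, and escape when the
# external (cell-cell interaction) force exceeds the maximum trap force.
# The stiffness, linear range, and forces below are hypothetical placeholders.

def trap_force_pn(stiffness_pn_per_um: float, displacement_um: float) -> float:
    """Hookean restoring force (pN) at a given displacement from the focus."""
    return stiffness_pn_per_um * displacement_um

def escapes(interaction_force_pn: float, max_trap_force_pn: float) -> bool:
    """True if the interaction force pulls the cell out of the trap."""
    return interaction_force_pn > max_trap_force_pn

k = 50.0                          # pN/um, hypothetical trap stiffness
x_max = 0.4                       # um, hypothetical linear range of the trap
f_max = trap_force_pn(k, x_max)   # maximum restoring force, here 20 pN
print(f_max, escapes(25.0, f_max), escapes(10.0, f_max))   # 20.0 True False
```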
The problem of cell heating is solved by matching the laser wavelength
to the minimum of the RBC absorption. The calculations made in Ref. 62
showed that the effect of heating is a few K per every 100 mW of laser power
(depending on the chamber size and geometry). The laser power used in the
experiments typically does not exceed 50 mW. Therefore, the laser heating
effect on cells is negligible. Experimental observations proved that trapping an
RBC for several minutes causes no visible changes in the cell. The typical
measurement time for one cell is 1–2 minutes. Using a laser beam power of
50 mW is sufficient for measuring interaction forces ranging from 0.5 to
30 pN with 10–20% deviation.
The setup records images formed by the forward-scattered light with a video
camera. The dynamic video fragments are analyzed using dedicated software
to obtain detailed information about the measurement procedure and the
results. Typical images of the RBCs before and after trapping are presented in
Fig. 1.11. It is seen that the cells tend to rotate to a side-oriented state because
the optical trap exerts more longitudinal forces than lateral forces.
The measurements of RBCs’ interactions in vitro with LT are typically
performed with cells suspended in the autologous plasma or some other
solution. Therefore, the parameters of interaction can be precisely measured
Figure 1.11 Sequences of cell trapping by laser tweezers in a diluted solution of RBCs:
(a) single cells on the glass surface, (b) two individual RBCs trapped in double-channel laser
tweezers.
24 Chapter 1
depending on both the cell membrane properties and the solution content.
LTs allow for measuring the pure aggregational interaction between the two
RBCs during the formation of a doublet as the first step in the formation of
larger aggregates, independent of the other cells.
attached to each other with the desired initial interaction surface and interaction
times; (4) laser tweezers are moved away from each other to separate the RBCs
at a constant trapping power; (5) the minimum force required to separate the
RBCs is found by repeating step 4 while slowly increasing the trapping power.
The typical value of the force required to separate the overlapping cells of an
RBC doublet in autologous plasma was found to be around 10 pN.62–64
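A hedged sketch of the logic of step 5: if one assumes that the maximum trapping force scales roughly linearly with laser power, the lowest power at which the doublet separates converts directly into a disaggregation force. The calibration constant, the power ramp, and the separation criterion below are all hypothetical stand-ins for the actual experiment.

```python
# Hedged sketch of step 5 above: ramp the trapping power until the RBC doublet
# separates, then convert the minimum separating power into a force, assuming
# the maximum trap force scales roughly linearly with laser power. The
# calibration constant, power grid, and "separates" criterion are hypothetical.

CAL_PN_PER_MW = 0.5               # hypothetical calibration: trap force per mW

def separates(power_mw: float, doublet_strength_pn: float) -> bool:
    """Stand-in for the observed outcome of step 4 at a given trapping power."""
    return CAL_PN_PER_MW * power_mw >= doublet_strength_pn

def disaggregation_force_pn(doublet_strength_pn: float) -> float:
    for power_mw in range(1, 51):                  # ramp 1..50 mW in 1-mW steps
        if separates(power_mw, doublet_strength_pn):
            return CAL_PN_PER_MW * power_mw        # force at the minimum power
    return float("nan")                            # not separable in this range

print(disaggregation_force_pn(10.0))               # 10.0 pN for a typical doublet
```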
In a number of works, it was shown that the disaggregation process could
proceed in three different ways as shown in Fig. 1.13.62–64 In the first case, an
unbreakable point contact between cells is observed at the final stage of
disaggregation; in the second case, the disaggregation process is characterized
by strong interaction forces within a certain area of overlap of the cells' surfaces;
in the third case, the RBC doublet is easily and completely disaggregated.
The force required to disaggregate RBCs in autologous plasma is
dependent on the interaction time. During the first 2 seconds, the cells
disaggregate easily, and after that the interaction becomes stronger.9 The
quantitative measurements proved that the disaggregation force increases
during the first few seconds.58,64
1.7.2.2 Measurement of the aggregation force
In the case of measuring the aggregation process, the velocity of
spontaneous aggregation is determined as shown in Fig. 1.14. The measurement
sequences are the following: (1) two independent non-interacting cells are
trapped in separate laser tweezers; (2) the cells are lifted from the surface of
the chamber; (3) the cells are attached to each other with a local interaction
surface; (4) laser tweezers are turned off; (5) RBCs start to spontaneously
overlap; (6) the velocity of the process is measured. The typical value of the
Figure 1.14 Typical measurement steps: 1) two individual RBCs are trapped in different
laser tweezers (a); 2) the cells are attached with laser tweezers after which the latter are
switched off (b); 3-5) the velocity of RBCs spontaneous overlapping is measured (c-e).
26 Chapter 1
[Bar chart: values of the parameters T1, T2, β1, and β2 for aggregation and disaggregation, on a 0–100 scale.]
Figure 1.15 Aggregation and disaggregation parameters of blood samples from healthy
donors and patients suffering from diabetes.
Figure 1.16 Aggregation and disaggregation kinetics in a blood sample obtained from a
patient in a strong pathological state.68
cases are usually completely destroyed, as is shown in Fig. 1.16 for a blood
sample obtained from a patient suffering from cancer.68 The aggregation and
disaggregation parameters differ strongly from those of healthy persons.
Significant variations of the results obtained by different authors using
different equipment, sample preparation procedures, and experimental
protocols when measuring the RBC aggregation and disaggregation
parameters raised the issue of standardization of this work, which has turned
out to be a very difficult issue to solve. As a partial solution to this problem, an
international group of experts has set up new guidelines for hemorheological
laboratory techniques, including RBC aggregometry.75
Summarizing the outlined results, we can state that the described optical
measurement techniques designed for studying blood aggregation and
disaggregation are well suited for the determination of the pathological status
of patients suffering from different diseases. The class of disease can also be
estimated. This method will find broad practical application in medical
healthcare once it becomes possible to standardize both the sample
preparation and measurement protocols and the algorithms for the presentation
and analysis of the results.
Acknowledgments
Dr. Priezzhev acknowledges the support of this work by the Russian Science
Foundation grant #14-15-00602.
References
1. J. S. K. Fung and P. B. Canham, “The mode and kinetics of the human
red cell doublet formation,” Biorheology, 11, 241–251 (1974).
2. M. J. Dunlop, M. Martin, P. B. Canham, and C. P. S. Taylor, “Kinetics
of adhesive interaction in vitro of human erythrocytes in plasma,”
Microvasc. Res., 28, 62–74 (1983).
Vadim Backman
Northwestern University, Evanston, USA
2.1 Introduction
Over the past decades, substantial progress has been made in medical
diagnostic technologies that target anatomic changes at the organ level.
Techniques such as magnetic resonance imaging (MRI) and spectroscopy
(MRS), x-ray computed tomography (X-ray CT), and ultrasound make it
possible to “see through the human body.” At the same time, there is clearly a
need for the development of diagnostic techniques that use our current
knowledge of the cellular and subcellular bases of disease. The diagnostic
techniques applicable in situ (inside the human body) that can provide
structural and functional information about the tissue at the cellular and
subcellular levels, the kind of information that is currently obtainable using
only in vitro methods requiring tissue removal, will have great implications for
the detection and prevention of diseases as well as targeted therapy.
Recently various optical techniques, such as elastic1–3 and Raman
scattering,4 absorption,5,6 and fluorescence7–9 have been used to study human
tissue in situ noninvasively. The goal of these studies is not only to provide
accurate diagnosis of various diseases, but also to better understand the
genesis of diseases. In this chapter we describe novel techniques capable of
identifying and characterizing pathological changes in human tissues at the
cellular and subcellular levels that are based on light scattering. Light
scattering spectroscopy (LSS) has long been used to study a great variety of
0.2 μm to 0.8 μm. The mitochondria are quite flexible and may easily change
their shape. Their major function is to generate adenosine triphosphate
(ATP) via oxidative phosphorylation, thus providing the energy required by
the cell. Since the metabolic requirements of different cells vary, the numbers of
mitochondria differ depending on the cell size and its energy needs. For
example, nonsecreting cells of the epithelial lining of the internal surface of
the colon have few mitochondria. In contrast, the hepatocytes—liver cells—
contain a few thousand mitochondria. A mitochondrion is composed of a
7-nm-thick outer membrane and an inner membrane that is about 6 nm in
thickness. The inner membrane is folded to form cristae. It is studded with
numerous mushroom-like 15-nm particles that possess a 10-nm head and 5-
nm tail connecting them to the membranous wall. The 10- to 20-nm-wide
space between the outer and the inner membranes is called the membrane
space and appears relatively empty. The space between the cristae of the
inner membrane is wider, ranging from 40 to 200 nm, and is filled with the
mitochondrial matrix. This matrix is not uniform, but rather is a meshwork
of nearly spherical matrix granules composed of densely packed phospho-
lipoproteins. These granules are 30 to 50 nm in diameter and have a complex
internal structure. They contain multiple 1- to 3-nm compartments separated
by dense septa.
2. Endoplasmic reticulum (ER) is composed of tubules and flat sheets of
membranes distributed over the intracellular space. The outer diameter of
these tubules ranges from 30 to 100 nm. Their wall thickness is about
10 nm. There are two types of endoplasmic reticulum: rough endoplasmic
reticulum (RER), which functions in the synthesis of proteins, and smooth
endoplasmic reticulum (SER), which functions in the synthesis of cholesterol
and lipids. The RER differs from the SER in that it bears 20- to 25-nm
columnar epithelial cells of the intestine, are all of similar size and shape. In
some cases, the standard deviation of the nuclear diameters is less than 5% of
the average diameter. On the contrary, the sizes of other organelles,
mitochondria for instance, vary widely even within a single cell. (Note that
the statistical properties of the population of mitochondria and other small
organelles are roughly preserved over a cell line.)
The major components of the cell nucleus are the nuclear envelope,
chromatin, nucleolus, and nuclear matrix, all embedded in the nucleoplasm.
1. The nucleus is enclosed by the nuclear envelope, a system of two
membranes separated by a 10- to 30-nm space, the perinuclear cisterna.
The outer membrane may have ribosomes attached to it. Some parts of
the membrane contain fenestrations called nuclear pores, which are complex
structures with a circular shape about 100 nm in diameter covered by 15-
to 20-nm spherical particles.
2. Chromatin is one of the major components of the nucleus. It contains
the strands of deoxyribonucleic acid (DNA) that encode the genetic
information of the cell. Although the DNA is organized in the
chromosomes, these are not appreciable in the interphase, or nondividing
nucleus, and become apparent only with cell division. Two types of
chromatin are distinguished. The portion of the DNA that is not being
transcribed forms condensed or heterochromatin. The transcribed part of
the DNA comprises an extended form, or euchromatin. This distinction was
brought into use by the histologists who microscopically examined stained
tissue sections. When stained with basophilic contrast dye, the nucleus
appears bluish due to the high affinity of the heterochromatin to such dyes.
On the other hand, the euchromatin is poorly stained and is not visible with
the microscope. The fundamental etiology of this difference between two
types of chromatin has yet to be explained. Although it is known that the
euchromatin is less dense than the heterochromatin, this fact itself is not
sufficient to explain the difference in the staining pattern. The clumps of the
heterochromatin are irregular and may vary from 0.5 to 2 μm in size. The
heterochromatin is made up of closely tangled 30-nm fibrils. The fibrils are
composed of beaded strands with regularly spaced smaller subunits, the
nucleosomes. These have a cylindrical shape and are connected by the
filaments, which are 4 nm in diameter. In the euchromatin, the strands of
the nucleosomes are uncoiled and do not form the fibrillar structure.
3. The nucleolus is responsible for the transcription of ribosomal RNA
(rRNA) and the assembly of the subunits of ribosomes. The size of the nucleolus
may range from 0.5 μm up to about 1 μm. It appears to be a more or less
round network of anastomosing strands, called the pars granulosa. These
strands are made from 15-nm ribonucleoprotein particles surrounded by
even smaller filaments. There are a few rounded structures within the
nucleolus filled with fibrillar material of a lower density, called fibrillar
centers, which have a dimension of about 80 nm.
The cell nucleus, like most other cell organelles, is not a uniform object
and possesses a complex internal structure. The larger inhomogeneities are
formed by smaller structures, which in turn are composed of even smaller
building blocks. We note, however, that studies have established
that the density variations within the cell nucleus are smaller than the difference
between the nucleus and the cytoplasm. The implications of this fact for light
scattering by the cells will be discussed in the following sections.
Figure 2.2 Low-power microphotograph of a section of the wall of the esophagus illustrating
the major structures of the gastrointestinal tube. L–lumen, E–epithelium, CT–connective
tissue, MM–muscularis mucosa, BV–submucosal blood vessel, SM–submucosa, MP–
muscularis propria. (See color plates.)
Figure 2.3 Microphotograph of the H&E-stained section of the bronchial wall illustrating
major components of the mucosa: epithelium and connective tissue. (See color plates.)
Figure 2.4 Examples of epithelia: (a) simple columnar epithelium, (b) stratified squamous
epithelium, (c) pseudostratified epithelium, (d) transitional epithelium. (See color plates.)
notice that more than 85% of all cancers originate from the epithelia; colon,
lung, and cervical cancers are examples. Such cancers are referred to as
carcinomas. If diagnosed at one of the preinvasive stages, most would-be
carcinomas are readily curable. Unfortunately, detection of such lesions is not
always possible with conventional diagnostic techniques. During the last few
years, LSS has appeared to be a powerful tool to considerably advance the
detection of early cancer. Its success is based on the fact that the earliest and
most ubiquitous changes occurring with precancer are alterations of the
histology of the affected epithelial cells, and their nuclei in particular. The
purpose of the following section is to outline the basic histopathology of early
cancerous and precancerous changes happening in the epithelia.
While carcinomas may originate from perfectly healthy epithelium, it has
been shown that the probability of a malignant transformation is much
greater for the epithelium that has already been damaged. For example, many
cancers grow on the basis of metaplasia. The term metaplasia describes a
pathological condition in which a normal mucosa is replaced by mucosa of
another type that is not normally present in the affected organ. One of the
examples of metaplasia is a replacement of normal stratified squamous
epithelium of the esophagus by intestinal mucosa covered by the simple
columnar epithelium, which is believed to be a consequence of the persistent
gastroesophageal reflux commonly known as “heartburn.” This condition is
referred to as Barrett’s esophagus. Patients with Barrett’s esophagus are
estimated to have a 40-fold higher risk of developing esophageal
adenocarcinoma than the rest of the population. Squamous cell carcinoma of
the lungs is another example. It originates from stratified squamous
epithelium that has replaced respiratory (pseudostratified ciliated) epithelium
normally found in the large bronchi.
Carcinogenesis is a complex process. It starts from the exposure of a cell
to a carcinogenic agent, such as ionizing radiation, a chemical, a virus, etc.
This exposure results in DNA damage and mutation. It is said that the cell
becomes “initiated.” Many genes such as oncogenes and cancer suppressor
genes whose mutations may lead to a progression to malignancy have been
identified. By definition, the cells become malignant when they are able to
penetrate the basement membrane separating the epithelium from the
underlying connective tissue. The cells are almost never able to penetrate
the basement membrane immediately after their initiation. Thus, the
malignancy is usually preceded by a precursor stage that is usually referred
to as dysplasia. The dysplastic cells exhibit disorderly but not malignant
proliferation. Dysplasia almost always precedes cancer but does not necessarily
result in it. Thus, it would be proper to say that dysplastic cells have malignant
potential. Depending on the severity of the disease, the dysplasia is classified
as either low grade or high grade or as mild, moderate, or severe. Sometimes
pathologists use the term “indefinite for dysplasia” in cases when identifying
Figure 2.5 Dysplasia in the cervical stratified squamous epithelium. On the left,
nondysplastic epithelium is preserved. (See color plates.)
region of refractive index variation. One can think of two types of problems
related to light scattering by a single particle: direct and inverse scattering
problems. The direct problem is formulated as follows: given a particle of
known structure (alternatively, given complete information about the
distribution of the refractive index inside the particle) illuminated by a plane
wave of particular polarization, find the electromagnetic field inside and
outside the particle. Usually, however, the simplified formulation of the
problem suffices: find the electromagnetic field at large distances from the
particle. The inverse problem is to find the distribution of the particle’s
refractive index based on the measurable electromagnetic wave scattered by
this particle. Although not solvable in a closed form in most cases, the first
problem is much easier than the second one. In fact, it is virtually impossible
to solve the inverse problem in the case when the solution of the direct
problem is not known. While the inverse problem is not solvable in most
relevant cases, one can get at least partial information about the particle from
the scattering data. In order to achieve this, understanding the direct problem
is crucial.
$$\mathbf{E}_i = \mathbf{E}_0\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}, \qquad (2.1)$$

where E_0 is the wave amplitude, k is the wave vector, and ω is the frequency.
In the wave zone, distances r are large compared to the wavelength l and the
particle’s characteristic dimension a; the radiation scattered by the particle is a
spherical wave Es. The amplitudes of both the incident and scattered waves
can be expressed as a combination of two components, parallel, Ei2 and Es2,
and perpendicular, Ei1 and Es1, to the scattering plane. The scattering
amplitude matrix S(θ,φ) relates the components of the scattered wave to
those of the incident wave, E_s = S E_i, or, more explicitly,16

$$\begin{pmatrix} E_{s2} \\ E_{s1} \end{pmatrix} = \frac{e^{i(kr-\omega t)}}{-ikr}\begin{pmatrix} S_2 & S_3 \\ S_4 & S_1 \end{pmatrix}\begin{pmatrix} E_{i2} \\ E_{i1} \end{pmatrix}, \qquad (2.2)$$
$$\sigma_s = k^{-2}\int_{0}^{2\pi}\!\!\int_{-1}^{1}\bigl(|S_1 + S_4|^2 + |S_2 + S_3|^2\bigr)\, d\cos\theta\, d\phi. \qquad (2.3)$$
$$I_{\perp s} = I_{\perp i}\,\frac{|S_1(\theta)|^2}{k^2 r^2}. \qquad (2.5)$$
Generally speaking, to find the matrix elements, one needs to solve
Maxwell’s wave equations with proper boundary conditions of the electric
and magnetic field continuities. The methods of solving the wave equations
are not discussed here, since they can be found elsewhere.17 Such a solution is
the most rigorous and the one that is usually the most difficult to obtain. In
fact, there are just a few cases where the analytical solutions to the wave
equation have been found. In 1907, Gustav Mie obtained the solution for the
scattering of a plane wave by a uniform sphere. The functions S1 and S2 are
expressed as an infinite series of Bessel functions of two parameters, kd and
kmd, with k the wave number, d the diameter of the sphere, and m the relative
refractive index of the sphere.18 We note that the parameters the Mie solution
depends on are the maximal and the minimal phase shifts of the light wave
passing through the system. If the sphere is optically denser than the
surrounding medium, m > 1, the maximal phase shift, kmd, is achieved when
a photon propagates through the center of the sphere. On the other hand, the
photon that is not scattered by the particle has undergone a minimal phase
shift, kd. Other examples of particles for which the scattering problem has
been analytically solved are cylinders, coated spheres, uniform and coated
spheroids, strips, and planes.19 In all these cases, the amplitudes are expressed
as an infinite series. The coefficients of some of these series are obtained as yet
another series. Some of the series are ill converging.
$$\sigma_s = \frac{8}{3}\pi k^4 \alpha^2. \qquad (2.7)$$
with n(r) the refractive index at a point r and V′ the volume encompassing
the volume of the particle V and its immediate surroundings.
2. The phase shift is small, 2ka|m − 1| ≪ 1, with a the linear dimension of
the particle. Such a “soft” particle can be considered an array of
noninteracting dipoles.
The refractive index of most cell organelles ranges from 1.38 to 1.42.21–23
The refractive index of the cytoplasm of most cells varies from 1.34 to
1.36. Therefore, both conditions of the Rayleigh-Gans approximation are
satisfied for most small organelles. This approximation is derived by applying
Rayleigh’s equations (2.6) to any volume element dV within the particle. It
can be easily shown that
$$\begin{pmatrix} S_2 & S_3 \\ S_4 & S_1 \end{pmatrix} = \frac{ik^3 V}{2\pi}\, R(\theta,\phi)\begin{pmatrix} \cos\theta & 0 \\ 0 & 1 \end{pmatrix}, \qquad (2.9)$$

with

$$R(\theta,\phi) = \frac{1}{V}\int_V \bigl(m(\mathbf{r}) - 1\bigr)\, e^{i\delta(\mathbf{r},\theta,\phi)}\, d\mathbf{r}, \qquad (2.10)$$
where m(r) is the relative refractive index at a point r, and δ is the phase of the
wave scattered in direction (θ,φ) by the dipole positioned at the point r.
If a particle is sufficiently homogenous,
scattering. For θ = 0, the form factor equals unity. In other directions, |R| < 1,
so scattering in the forward direction prevails.
While most subcellular structures are smaller than or comparable to the
wavelength, there is one whose size is significantly larger than that of the other
organelles: the nucleus. None of the above-mentioned approximations could
be applied to the cell nucleus. Rather it falls into a different limiting case,
namely it is much larger compared to the wavelength. Such large particles
produce interesting but complicated scattering patterns. The approximate
theory of light scattering by large particles was first proposed by van de Hulst
in 1957.18 The van de Hulst approximation was originally formulated for
spherical particles only. However, it can be extended to large particles of an
arbitrary shape. Although the van de Hulst theory does not provide a
universal means for finding the scattering matrix for all scattering angles even
in the case of a homogenous sphere, it does enable scattering amplitudes to be
obtained in the near-forward direction as well as the scattering cross section.
Consider a particle that satisfies the following two conditions. The first is
the same as condition 1 of the Rayleigh-Gans approximation, |m − 1| ≪ 1.
The other is directly opposite to condition 2: we require that the phase shift be
large, 2ka|m − 1| ≫ 1. The phase shift of the ray emerging from the particle
depends on where this ray entered the particle and how much it was deflected.
The difference in the phase shifts will create constructive or destructive
interference on a screen positioned far from the particle. Applying the
Huygens’ principle, one can obtain16
$$S(\theta) = \frac{k^2}{2\pi}\iint_A \bigl(1 - e^{i\varphi(\mathbf{r})}\bigr)\, e^{i\delta(\mathbf{r},\theta)}\, d^2\mathbf{r}, \qquad (2.12)$$
Figure 2.6 δ, the phase shift between the two light rays, is shown.
Equation (2.12) enables one to obtain the scattering amplitude for a soft
large particle of an arbitrary shape. In the case of a sphere, substituting the
expressions for the phase shifts Eqs. (2.13) and (2.14) into Eq. (2.12), and
performing the integration in the spherical coordinates, one can obtain14
$$S(\theta) = k^2 \int_0^{a}\Bigl(1 - e^{2ik(m-1)\sqrt{a^2 - r^2}}\Bigr) J_0(kr\theta)\, r\, dr. \qquad (2.15)$$
As one can see, the scattered intensity is highly forward directed. The width of
the first scattering lobe is about λ/a. Generally speaking, the larger the
particle, the stronger and narrower the first lobe.
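Equation (2.15) is straightforward to evaluate numerically. The sketch below does so for a nucleus-scale soft sphere to show the forward-peaked lobe of width on the order of λ/a; the radius, wavelength, and relative refractive index are placeholder values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Numerical sketch of Eq. (2.15): near-forward scattering amplitude S(theta) of
# a large "soft" sphere. The radius, wavelength, and relative refractive index
# below are illustrative placeholders (roughly nucleus-scale), not values from
# the text.

a = 5.0e-6      # sphere radius, m
wl = 0.6e-6     # wavelength in the surrounding medium, m
m = 1.04        # relative refractive index
k = 2.0 * np.pi / wl

def S(theta):
    """Evaluate Eq. (2.15) by splitting the integrand into real/imaginary parts."""
    def integrand(r, part):
        val = (1.0 - np.exp(2j * k * (m - 1.0) * np.sqrt(a * a - r * r))) * j0(k * r * theta) * r
        return val.real if part == "re" else val.imag
    re, _ = quad(integrand, 0.0, a, args=("re",), limit=200)
    im, _ = quad(integrand, 0.0, a, args=("im",), limit=200)
    return k * k * (re + 1j * im)

for theta in (0.0, 0.02, 0.05, 0.10):      # radians; first lobe width ~ wl/a ~ 0.12
    print(theta, abs(S(theta)) ** 2)       # strongly forward-peaked intensity
```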
The scattering cross section can be obtained using the optical theorem16
either from Eq. (2.16) or directly from Eq. (2.12) and equals
$$\sigma_s \approx 2\pi a^2\left\{1 - \frac{\sin[2x(m-1)]}{x(m-1)} + \left(\frac{\sin[x(m-1)]}{x(m-1)}\right)^2\right\}. \qquad (2.17)$$
This is a famous expression first obtained by van de Hulst. It shows that large
spheres give rise to a very different type of scattering than the small particles
considered above do. Both the intensity of the forward scattering and the
scattering cross section are not monotonous functions of wavelength. Rather,
they exhibit oscillations with the wavelength, called an interference structure
(Fig. 2.7). The origin of these oscillations is indeed interference between the
light ray passing through the center of the sphere and one not interacting with
it. The frequency of these oscillations is proportional to x(m − 1), so it
increases with the sphere size and refractive index.
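The interference structure is easy to reproduce from Eq. (2.17). The sketch below evaluates the cross section across the visible range for a soft sphere with placeholder, roughly nucleus-like parameters; the oscillations get denser as the product of size and relative refractive index contrast increases, as stated above.

```python
import numpy as np

# Sketch of the interference structure predicted by Eq. (2.17),
#   sigma_s ~ 2*pi*a^2 * {1 - sin[2x(m-1)]/[x(m-1)] + (sin[x(m-1)]/[x(m-1)])^2},
# with size parameter x = 2*pi*a/lambda. The radius and refractive index are
# placeholders chosen to be roughly nucleus-like.

a = 4.0e-6                                   # sphere radius, m
m = 1.05                                     # relative refractive index
wavelengths = np.linspace(400e-9, 700e-9, 301)

x = 2.0 * np.pi * a / wavelengths            # size parameter
q = x * (m - 1.0)
sigma_s = 2.0 * np.pi * a**2 * (1.0 - np.sin(2.0 * q) / q + (np.sin(q) / q) ** 2)

# sigma_s oscillates with wavelength; the oscillations get denser as a and
# (m - 1) increase, which is the "interference structure" of Fig. 2.7.
print(sigma_s.min(), sigma_s.max())          # cross sections in m^2
```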
Before we leave the topic of light scattering by a sphere, let us briefly
discuss scattering in the near-backward direction. Unfortunately, obtaining
the intensity of light scattered at large angles is difficult, since one has to
consider several effects that influence the scattering pattern. For example,
multiple internal reflections and surface waves all contribute to the intensity in
the backward direction.18 It can be shown that the scattering in the near-
backward direction is given by
$$\sigma_s = \frac{4\pi}{k^2}\,\mathrm{Re}\,S(0), \qquad (2.19)$$
one can obtain the scattering cross section of an arbitrarily shaped and not
necessarily homogenous large particle:
It can easily be shown that the value of this integral, the scattering cross
section, and other integral properties of the scattering pattern depend mostly
on the maximal phase shift that a light ray can gain while traveling along a
straight trajectory inside the particle, rather than on the specific shape of the
particle. This fact was confirmed in a great number of experiments. For
example, it was found that light scattering by red blood cells can accurately be
described using the van de Hulst approximation, despite the fact that red
blood cells are highly nonspherical.24
might give some insight into the impact of the internal structure of a particle
on the scattering pattern, it is by no means rigorous.
The FDTD presents a more general approach that does not suffer from
the limitation of the discrete particle model. It enables the computation of
scattering amplitudes for inhomogeneous objects of arbitrary shape. The
FDTD aims to find the numerical solution of Maxwell’s equations for an
electromagnetic wave propagating in a medium with given variations of the
refractive index. The equations are discretized in time and space on a four-
dimensional grid. The value of the refractive index is given for each of the
grid’s pixels. Apparently, the higher the spatial and temporal frequencies of
the grid, the more accurate the model is. However, there is a price to pay. The
complexity of the algorithm increases as l⁻⁶t⁻² with a reduction of the grid
spatial spacing l and temporal interval t. It was established that l < λ/10,
where λ is the wavelength of the incident light, guarantees reasonable
accuracy. Thus, the step is chosen as l = min{λ/10, d_n}, with d_n the spatial
dimension of the refractive index variations. Proper boundary conditions are to
be applied as well. For example, the Liao boundary conditions have been used
to simulate unbounded media.28 The incident wave is considered to be a
sinusoidal plane wave source. Thus, the system of first-order finite difference
equations is constructed. The values of six components of the electromagnetic
wave (Ex, Ey, Ez, Hx, Hy, Hz) are to be found at each of the grid elements
using conventional algorithms.
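To make the update scheme concrete, the following is a deliberately minimal one-dimensional leapfrog (Yee-type) sketch with a single dielectric region; it is not the three-dimensional code with Liao boundary conditions discussed in the text, it uses simple reflecting grid edges instead of absorbing boundaries, and the grid size and refractive index are placeholders.

```python
import numpy as np

# Toy one-dimensional FDTD sketch (leapfrog updates for Ex and Hy) illustrating
# the kind of scheme described above. A sinusoidal source launches a wave that
# crosses a dielectric region of refractive index ~1.4 (organelle-like). This is
# not the 3D code with Liao boundary conditions referenced in the text: the grid
# edges simply reflect, and the grid size and material values are placeholders.

nz, nsteps = 400, 600
ex = np.zeros(nz)
hy = np.zeros(nz)
eps_r = np.ones(nz)
eps_r[220:260] = 1.4 ** 2            # relative permittivity of the "particle"

courant = 0.5                        # normalized Courant number, c*dt/dz
src_pos, wavelength_cells = 50, 40   # source position and wavelength in cells

for t in range(nsteps):
    # magnetic field update (half a step ahead of E in the leapfrog scheme)
    hy[:-1] += courant * (ex[1:] - ex[:-1])
    # electric field update, scaled by the local permittivity
    ex[1:] += (courant / eps_r[1:]) * (hy[1:] - hy[:-1])
    # soft sinusoidal source
    ex[src_pos] += np.sin(2.0 * np.pi * courant * t / wavelength_cells)

print("field sampled beyond the dielectric region:", ex[300])
```

The full three-dimensional scheme follows the same leapfrog pattern, with all six field components and proper boundary conditions.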
Despite its generality, the FDTD suffers from a few drawbacks. First, it is
computationally intensive. Second, although it does allow a solution for the
scattering problem, it does not necessarily help with the understanding of the
scattering characteristics. And, after all, the foremost important reason to
solve the direct scattering problem is to elucidate the general properties of the
scattering that enable at least a partial solution of the inversion problem. On
the other hand, the FDTD can be exceedingly useful in comparing various
approximate analytical models with otherwise difficult to perform experi-
ments and, therefore, in developing the most accurate analytical descriptions.
In the end, a word of caution: it would be tempting to think that the use of
numerical modeling avoids the necessity for making any simplifying
assumption regarding the origin of tissue scattering. While this is partially true
and numerical modeling can relax some of the assumptions, it must always
be realized that the model is only as good as its input parameters.
understanding of light scattering by living cells and tissues. Our goal will be
twofold. First, we will examine which cell structures are responsible for
various scattering patterns. Second, we will address the question of what kind
of simplified models can be used to predict the scattering patterns produced by
various subcellular structures.
As we have seen, the cells and tissues have complex structures with a
very broad range of the scatterers’ sizes: from a few nanometers, which is the
size of a macromolecule, to 7–10 μm, which is the size of a nucleus, and to
20 μm, which is the size of a cell itself. The picture is complicated even
further by the fact that only a few cell structures are homogenous with a
uniformly distributed refractive index. Most cell organelles and inclusions
are themselves complex objects with a spatially varying refractive index.
Thus, picturing a cell as being composed of a number of distinct uniform
objects of proper geometrical shape is, generally speaking, not correct. On
the other hand, several studies have confirmed that many organelles, such as
mitochondria, lysosomes, and nuclei, do possess an average refractive index
substantially different from that of their surroundings (Table 2.1); therefore,
viewing a cell as an object with a continuously or randomly varying
refractive index is not accurate either. A more accurate model acknowledges
subcellular compartments of various sizes whose refractive indices, though not
constant over the compartment's volume, differ from that of the
surroundings. A similar model would describe an epithelium or a mucosal
tissue as well.
Studies of light scattering by cells have a long history. The first works in
this area investigated the angular dependence of the scattered light. Most of
the experiments were performed at a single wavelength, and the angular
distribution of the scattered light was measured either with an array of
photodetectors, fiber optics, or charge-coupled devices (CCD). Brunsting et al.
initiated a series of experiments aiming to relate the internal structure of
living cells with the scattering pattern by measuring forward and near-forward
scattering by cell suspensions.29 This became one of the first attempts to explain
light scattering by cells using rigorous quantitative approaches.
The researchers used cells of several types, such as Chinese hamster ovary
(CHO) cells, HeLa cells, and nucleated blood cells. They compared the
resulting angular distribution of the scattered light with the one predicted by
the Mie theory and found that a very good agreement between the theory and
$$\theta_f = \frac{\lambda}{a}.$$
Therefore, it is not surprising that the structures with the biggest dimensions,
namely cells and their nuclei, are the major scatterers in the forward direction.
Another interesting conclusion can be drawn from these experiments:
despite the nonhomogeneity and a not perfectly spherical shape of the cells
and their nuclei, the experimental results were explained using the Mie theory,
which deals with uniform spheres. This finding should not be surprising as
well. It can be anticipated in the light of the van de Hulst theory. Indeed, the
scattering pattern produced by a large particle (for simplicity, we refer to
particles that are large compared to the wavelength as “large” particles) in the
near-forward direction depends mostly on the maximal phase shift undergone
by the incident light wave rather than variations of the refractive index inside
the particle [see Eq. (2.15)]. This result was later supported by Sloot et al.23 in
the experiments with white blood cells (leukocytes) and Hammer et al. in the
experiments with red blood cells.24 The former group of researchers found
that light scattering by the leukocytes in the near forward direction could be
explained if each cell was approximated as being composed of two concentric
spheres, one being the cell itself and the other being the nucleus. The latter
group showed that the near-forward scattering of light by red blood cells can
accurately be described using the van de Hulst approximation, which was
derived for large particles of spherical shapes rather than the concave-convex
disks that the red blood cells really are. One important comment must be
made. There is an apparent distinction between the experiments with cell
suspensions and those involving living epithelia: when the scattering by a cell
suspension is measured, the cells might provide an important contribution to the
scattering. On the contrary, the epithelia consist of one or several layers of
contiguous cells with minute (a few nanometers) or no extracellular space at
all. Thus, the contribution to the scattering from a cell as a whole is expected
to be much less and the subcellular structures are to be viewed as major
scatterers.
Figure 2.8 Schematic of the goniometer system used for measuring angular scattering
distribution, P(θ). A collimated light beam from the He-Ne laser is incident on a cylindrical
sample cell containing a suspension of biological cells. The scattered light is measured as a
function of angle by a photomultiplier tube, which is rotated around the sample cell. The thick
black line represents a thin tube that is black on the inside and is critical for angular
resolution and elimination of stray light. The intensity as a function of wavelength must be
multiplied by the geometrical factor, cos θ, to account for the change in acceptance angle as
the detector is rotated around the sample.
Figure 2.9 Measured angular scattering distribution, P(θ), for cells, nuclei, and mitochon-
dria. Values below 9 deg and above 168 deg were extrapolated. All curves are normalized
so that the integral of P(θ) over the 4π solid angle equals unity.
backscattering peak. Their intensity profile is rather flat at these angles,
which is illustrated by the flat P(θ) for angles θ > 150 deg. For comparison,
the intensity of light scattered by the cell nuclei and the cells themselves, whose
sizes are more than an order of magnitude larger than the wavelength, does exhibit a
prominent peak in the near-backward direction, as expected.
These studies showed that the cell structures responsible for light
scattering might be correlated with the angle of scattering. The following
generalizations, though by no means universal, can provide a convenient
paradigm to think about light scattering by biological cells:
1. When a cell is suspended in a buffer solution of lower refractive index, the
cell itself is responsible for small angle scattering. This result has been used
in flow cytometry to estimate the cell sizes.31 However, this is not the case
for contiguous layers of cells such as epithelia, where all cells are
surrounded by the other cells with similar optical properties rather than a
medium of a different refractive index.32 In the latter case, most of the
scattering is attributable to particular structures within the cells.
2. At slightly larger angles, the nucleus is primarily responsible for
scattering. Moreover, it is the major scatterer in all forward and
near-forward directions, in the case when the cell is a part of a contiguous
layer.
3. Smaller organelles, cell inclusions, and suborganellar and subnuclear
inhomogeneities are likely responsible for scattering at larger angles.
Unfortunately, it is difficult to be more specific due to the fact that there is
too great a variety of cell components with sizes smaller than or
comparable to the wavelength. To name a few: mitochondria, lysosomes,
Golgi apparatus, mostly all cell organelles other than the nucleus, plus
[Plot: Rs versus λ (nm), 350–650 nm.]
Figure 2.10 Oscillatory component of light backscattered by a monolayer of intestinal
cells. The dotted line shows the spectrum measured in the experiment. The solid line shows
the signal predicted by the Mie theory.
Figure 2.11 Scattering pattern of angular dependence of the intensity of light scattered by
two cells with randomly assigned dielectric structure. The spatial frequency of the refractive
index fluctuations is higher in the top curve (labeled high frequency) than in the lower curve
(labeled low frequency). The mean refractive index is the same for both curves.
all with dimensions smaller than the optical wavelength. However, larger
structures, such as cell nuclei, typically 5 to 15 μm in diameter, can also
scatter light. Because of the ubiquity of light scattering, its effects are
enmeshed with those of absorption, making interpretation of tissue spectra
difficult. On the other hand, this feature can actually enrich the information
provided by spectroscopic techniques.
Consider a beam of light incident on an epithelial layer of tissue.
A portion of this light is backscattered from the epithelial cells, while the
remainder is transmitted to deeper tissue layers where it undergoes multiple
scattering and becomes randomized. All of the diffusive light that is not
absorbed in the tissue eventually returns to the surface, passing once more
through the epithelium where it is again subject to scattering from the cells.
Thus, the emerging light will consist of a large component of multiply
scattered light and a smaller component of singly scattered light. For a thin
slab of epithelial tissue, an approximate solution of the transport equation for
the signal R(λ) collected within the acceptance solid angle Ωc is given by the
following expression:10
$$\frac{R(\lambda)}{R_d(\lambda)} = e^{-\tau(\lambda)} + \frac{1 - e^{-\tau(\lambda)}}{\langle I_d(\lambda,\mathbf{s})\rangle_{\Omega_c}}\,\Big\langle \langle I_i(\lambda,\mathbf{s}')\,p(\lambda,\mathbf{s},\mathbf{s}')\rangle_{\Omega_i} + \langle I_d(\lambda,\mathbf{s}')\,p(\lambda,\mathbf{s},\mathbf{s}')\rangle_{2\pi}\Big\rangle_{\Omega_c}, \qquad (2.21)$$
where τ is the optical thickness, I_i(λ,s) is the intensity of the incident light
delivered in solid angle Ω_i, I_d(λ,s) is the intensity of the light emerging from
the underlying tissue, and
⟨f⟩_Ω denotes the integral of f over the solid angle Ω, for any function f and solid
angle Ω, with s a unit vector pointing outward from the tissue surface in an
arbitrary direction. The quantity R_d(λ) is the reflectance of the diffusive background. In the case when the scatterers
are distributed uniformly throughout the thickness of the epithelium and
discrete particle approximation is applicable, the optical thickness is simply
$$\tau(\lambda) = \sum_j \sigma_s(\lambda, j)\, N_j,$$
with σ_s(λ, j) the scattering cross section of a particle of type j (the type of a
particle may be defined by its shape, size, refractive index, etc.) and N_j the
distribution function of the particles depending on their type (number of
scatterers of type j per unit area). In the same approximation, the effective
scattering phase function is
$$p(\lambda,\mathbf{s},\mathbf{s}') = \frac{1}{\tau}\sum_j p(\lambda, j,\mathbf{s},\mathbf{s}')\,\sigma_s(\lambda, j)\, N_j,$$
with p(λ, j, s, s′) the scattering phase function for a particle of type j. In the case
of spherical scatterers, this function is determined by the Mie theory. The first
term in Eq. (2.21) describes the attenuation of the diffusive component, and
the terms in brackets describe backscattering of the incident light and forward
scattering of the diffusive component by the scatterers within the epithelium,
respectively.
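To make the discrete-particle bookkeeping above concrete, the following sketch (in Python, with entirely hypothetical particle types, cross sections, and surface densities chosen only for illustration) accumulates the optical thickness τ(λ) and the cross-section-weighted effective phase function for a mixture of scatterer types; a Henyey-Greenstein shape stands in for the Mie phase functions that would be used in practice.

import numpy as np

theta = np.linspace(0.0, np.pi, 181)

def hg_phase(theta, g):
    # Henyey-Greenstein phase function, normalized over the 4*pi solid angle
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * np.cos(theta))**1.5)

# Hypothetical scatterer types (values assumed for illustration only):
#   sigma_s -- scattering cross section per particle (um^2)
#   N       -- number of scatterers of this type per unit area (1/um^2)
particle_types = [
    {"sigma_s": 40.0, "N": 0.02, "p": hg_phase(theta, 0.95)},   # nucleus-like
    {"sigma_s": 1.5,  "N": 0.60, "p": hg_phase(theta, 0.80)},   # small-organelle-like
]

# Optical thickness: tau(lambda) = sum_j sigma_s(lambda, j) * N_j
tau = sum(pt["sigma_s"] * pt["N"] for pt in particle_types)

# Effective phase function: p = (1/tau) * sum_j p_j * sigma_s(lambda, j) * N_j
p_eff = sum(pt["p"] * pt["sigma_s"] * pt["N"] for pt in particle_types) / tau

# Sanity check: the integral of p_eff over the full solid angle should be ~1
dtheta = theta[1] - theta[0]
print("tau =", round(tau, 3))
print("solid-angle integral =", round(float(2.0 * np.pi * np.sum(p_eff * np.sin(theta)) * dtheta), 3))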
For a small Ωc, the forward-scattering and backscattering terms in
Eq. (2.21) can be expanded in τ(λ). The forward-scattering term varies with
wavelength in phase with τ(λ), as required by the optical theorem,16
whereas the backscattering term is out of phase. Thus, the light scattered in
the epithelium introduces a fine wavelength-dependent component into the
signal emerging from the tissue, whose spectral shape depends on the
optical properties of the scatterers and can be accounted for using
scattering theory. In turn, some of the properties of the scattering particles
within the epithelium can be revealed by analyzing the spectral variations of
the reflected signal R(λ).
The fine structure component is typically a few percent or less of the total
reflected signal and is ordinarily masked by the background of diffusely
scattered light from underlying tissue, which itself exhibits spectral features
due to absorption and scattering. The spectrum of the diffusely scattered light
can be described using a quantity called the diffuse reflectance. Diffuse
reflectance is determined by the absorption and reduced scattering
coefficients, μa and μs', respectively. The absorption and scattering coefficients, in
turn, depend linearly on the concentrations of the various tissue components.
Several researchers have employed models, often based on the diffusion
approximation, to extract optical parameters from diffuse reflectance.
Quantitative results have been obtained in blood oximetry33,34 and in the
study of the optical properties of various animal tissues such as rat prostate,35
chicken breast,36 and canine gut.37 Zonios et al. developed a method for
modeling the reflectance of colon tissue and extracting such properties as
hemoglobin concentration, hemoglobin oxygen saturation, effective scatterer
density, and effective scatterer size.38 The method is based on the model
derived from the diffusion approximation and assumes colon tissue to be a
homogeneous semi-infinite turbid medium. Part of the incident light is
absorbed in the tissue, whereas the nonabsorbed part is subject to multiple
scattering and eventually emerges from the surface as diffuse reflectance.
A certain fraction of this emerging light is collected by the probe, whereas the
remaining part escapes undetected. The amount of the light collected depends
on the optical properties of tissue as well as on the probe radius r_c. This radius
serves as a scale length, enabling μa and μs' to be determined.
Starting with an expression derived by Farrell et al.,39 Zonios et al. obtained
an analytical expression for the diffuse reflectance collected by the probe:
$$R_p(\lambda) = \frac{\mu_s'}{\mu_s' + \mu_a}\left[ e^{-\mu z_0} + e^{-\left(1 + \frac{4}{3}A\right)\mu z_0} - z_0\,\frac{e^{-\mu r_1'}}{r_1'} - \left(1 + \frac{4}{3}A\right) z_0\,\frac{e^{-\mu r_2'}}{r_2'} \right], \qquad (2.22)$$
with
$$z_0 = \frac{1}{\mu_s' + \mu_a},$$
μ = [3μ_a(μ_a + μ_s')]^{1/2} the effective attenuation coefficient, A a parameter
accounting for internal reflection at the tissue surface, and r_1' and r_2' effective
distances that depend on the probe radius r_c. The absorption coefficient μ_a(λ) is
determined by the total hemoglobin concentration c_Hb and the hemoglobin oxygen
saturation parameter α via the molar extinction coefficients ε_HbO2(λ) and
ε_Hb(λ), which are well known.40
In general, the reduced scattering coefficient μ_s'(λ) is the sum of
contributions from the various tissue scatterers. Detailed information about
these individual scatterers is not well known. Therefore, one can express
μ_s'(λ) in terms of an effective scatterer size and an effective scatterer density.
For the measurement of the diffuse reflectance, the fibers are chosen to be less than
200 μm in diameter with NA = 0.22.
For protection of the probe surface, a shield a few millimeters thick made of an
optically transparent material such as quartz is put on the tip of the probe.
A thicker shield allows photons with shorter pathlengths to be collected.
On the other hand, a thinner shield selects photons traveling longer paths
inside the tissue and, therefore, samples deeper tissue. Thus, the thickness of
the shield can be chosen depending on how deep the tissue of greatest interest
is located. A probe based on a frequently used design consists of a 200-μm
central delivery fiber concentrically surrounded by six collection fibers of the
same diameter. The probe tip is covered by a 1-mm quartz shield beveled to
prevent specular reflection from the surface of the probe.
Figure 2.12 shows an example of a system designed to collect the diffuse
reflectance from a tissue in a broad spectral range. The light from a white light
source, such as a xenon or tungsten lamp, is delivered onto the tissue surface
via the delivery arm of the probe. The other arm collects the returned light and
directs it to the spectrophotometer. The data are then transferred to a
computer for display and analysis. The reflectance signal R(λ) is defined as
follows:
$$R(\lambda) = \frac{S(\lambda) - D(\lambda)}{S_{ref}(\lambda) - D(\lambda)}, \qquad (2.25)$$
where S(λ) is the signal of the reflected light measured with the spectrophotometer,
S_ref(λ) is a reference signal defined as 100% reflectance measured with a
white standard, and D(λ) is the spectrophotometer's dark-current signal.
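As a sketch of how Eq. (2.25) is applied in practice (the array names and numbers below are illustrative, not taken from the instrument described above):

import numpy as np

def calibrated_reflectance(S, S_ref, D):
    # Eq. (2.25): R(lambda) = (S - D) / (S_ref - D)
    #   S     -- raw spectrum measured from tissue
    #   S_ref -- spectrum of the 100% reflectance white standard
    #   D     -- spectrometer dark-current spectrum
    # All inputs are 1D arrays sampled on the same wavelength grid.
    S, S_ref, D = map(np.asarray, (S, S_ref, D))
    return (S - D) / (S_ref - D)

# Example with made-up counts at three wavelength samples:
R = calibrated_reflectance([1200, 1500, 1400], [4000, 4100, 3900], [100, 100, 100])
print(R)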
Figure 2.12 An example of equipment used to acquire reflectance spectra from skin
tumors in situ.
$$R_E(\lambda) = \frac{R_L(\lambda)}{R_H(\lambda)}, \qquad (2.26)$$
and found that the best discrimination was achieved if four spectral features
were used: 1) the slope of the reflectance spectrum from R_E(420) to R_E(600); 2) the
area enclosed between the straight line that joins R_E(600) to R_E(780) and
R_E(λ) in the range from 600 to 700 nm; 3) the mean of R_E(λ) between 700 and
780 nm; and 4) the area enclosed between the straight line obtained by linear
fitting of R_E(λ) and R_E(λ) itself in the 700- to 780-nm wavelength range. Mourant et al.
demonstrated that reflectance spectroscopy was able to diagnose
transitional carcinoma of the urinary bladder and adenoma and adenocarcinoma
of the colon and rectum with good accuracy.2,46 In these cases, the
hemoglobin absorption metric was used.
As we have already mentioned in the previous section, Zonios et al.
studied the capability of the diffuse reflectance spectroscopy to diagnose
colonic precancerous lesions, adenomatous polyps in vivo.38 Figure 2.13 shows
typical diffuse reflectance spectra from one adenomatous polyp site and one
normal mucosa site. Significant spectral differences are readily observed,
particularly in the short-wavelength region of the spectrum, where the
hemoglobin absorption valley around 420 nm stands out as the prominent
spectral feature. This valley is much more prominent in the spectrum of the
adenomatous polyp. This feature, as well as more prominent dips around 542
and 577 nm, which are characteristic of hemoglobin absorption as well, are all
indicative of the increased hemoglobin presence in the adenomatous tissue.
Apparently, the differences between these spectra are due to changes in
the scattering and absorption properties of the tissues. Both the absorption
dips and the slopes of the spectra are sensitive functions of the absorption and
scattering coefficients, providing a natural way to introduce an inverse
algorithm that is sensitive to such features. The authors quantified the
absorption and scattering properties using the diffusion-based model
Figure 2.13 Typical normal and adenomatous polyp spectra (thick curves) and modeled
spectra (thin curves).
Figure 2.14 Scattering spectra obtained from the data shown in Fig. 2.13 (thin curves) and
corresponding Mie theory spectra (thick curves). The effective scattering sizes are indicated.
discussed in Sec. 2.5. Equation (2.22) was fit to the data using the Levenberg-
Marquardt minimization method. Thus, the total hemoglobin concentration
cHb and the hemoglobin oxygen saturation α were obtained. The optimal reduced
scattering coefficient μs'(λ) was also found for each wavelength λ in the range
from 360 to 685 nm. It was found that μs'(λ) has a spectral dependence that
resembles a straight line declining with wavelength λ. The slope of μs'(λ)
decreases with increasing effective size of the scatterers, ds (Fig. 2.14).
Figure 2.15 Binary plot of total hemoglobin concentration cHb vs. effective scatterer size
ds. Open circles–nondysplastic colon, solid circles–adenomatous polyps (dysplasia).
This allowed the effective scatterer size to be determined from the known
μs'(λ). The model fits shown in Fig. 2.13 are in very good agreement with the
experimental data.
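A minimal sketch of this fitting procedure is given below, assuming the reconstructed form of Eq. (2.22), a hemoglobin-dominated absorption coefficient, and a power-law parameterization of μs'(λ); the extinction spectra, probe radius, internal-reflection parameter A, and initial guesses are placeholders rather than the values used by Zonios et al.

import numpy as np
from scipy.optimize import least_squares

def model_Rp(wl_nm, c_hb, alpha, a_scat, b_scat, eps_oxy, eps_deoxy, rc=0.025, A=3.2):
    # Diffuse reflectance collected by a probe of effective radius rc (cm), following
    # the reconstructed form of Eq. (2.22). eps_* are molar extinction spectra
    # (1/(cm*M)); a_scat and b_scat parametrize mu_s' = a_scat*(wl/600)**(-b_scat).
    # All numerical values here are placeholders, not the authors' calibration.
    mu_a = np.log(10.0) * c_hb * (alpha * eps_oxy + (1.0 - alpha) * eps_deoxy)  # assumed form
    mu_s = a_scat * (wl_nm / 600.0) ** (-b_scat)          # reduced scattering, 1/cm
    z0 = 1.0 / (mu_s + mu_a)
    mu = np.sqrt(3.0 * mu_a * (mu_a + mu_s))              # effective attenuation coefficient
    zb = (1.0 + 4.0 * A / 3.0) * z0
    r1 = np.sqrt(z0**2 + rc**2)
    r2 = np.sqrt(zb**2 + rc**2)
    return (mu_s / (mu_s + mu_a)) * (np.exp(-mu * z0) + np.exp(-mu * zb)
                                     - z0 * np.exp(-mu * r1) / r1
                                     - zb * np.exp(-mu * r2) / r2)

def fit_spectrum(wl_nm, R_meas, eps_oxy, eps_deoxy):
    # Levenberg-Marquardt style fit for c_hb (M), saturation alpha, and scattering.
    def residuals(p):
        c_hb, alpha, a_scat, b_scat = p
        return model_Rp(wl_nm, c_hb, alpha, a_scat, b_scat, eps_oxy, eps_deoxy) - R_meas
    p0 = [20e-6, 0.6, 15.0, 1.0]                          # illustrative initial guesses
    res = least_squares(residuals, p0, method="lm")
    return res.x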
The authors applied this algorithm to analyze the spectra collected from
several adenomatous as well as nondysplastic tissue sites, obtaining values
of the four parameters for each site probed. Figure 2.15 shows a binary plot
of the effective scatterer size versus hemoglobin concentration. Note that the
normal mucosa data tend to form a cluster, while the adenomatous polyp
data are separated and characterized by a wider spread and irregular
distribution. Adenomatous colon polyps were characterized by increased
hemoglobin concentration, in agreement with published results that
precancerous tissues such as adenomatous polyps exhibit increased
microvascular volume.47,48 The hemoglobin oxygen saturation was found
to be approximately 60% on average for both normal mucosa and
adenomatous polyps. This result is reasonable, as the measurements were
performed in the capillary network of the mucosa, where oxygen is
transferred from hemoglobin to tissue.
Characteristic differences in the scattering properties were also observed
between the two tissue types studied. For adenomatous polyps, the average
effective scattering size was larger, and the average effective scatterer density
was smaller as compared to normal mucosa. The range of effective scattering
sizes was in good agreement with that reported for average scatterer sizes of
biological cell suspensions.30 Although the reason for the increase of the
scatterers’ sizes in the adenomas is not clear, one can speculate that it is due
to the increased cellular content of the polyps. These studies have
demonstrated the potential of diffuse reflectance spectroscopy to obtain
quantitative information about tissue structural composition in vivo and in
real time.
Figure (schematic): the polarization LSS system, comprising a broad-band source, lens, polarizer, aperture, beam splitter, tissue sample, beam-splitting polarization analyzer, mirror, and multichannel spectroscope.
Figure 2.17 Size distributions and refractive index distributions, N(d,m), of epithelial cell
nuclei obtained with LSS for the (a) normal and (b) cancerous colon tissue samples.
Figure 2.18 Dysplasia/CIS classifications for four types of tissue obtained clinically with
LSS, compared with histologic diagnosis. In each case the ordinate indicates the percentage
of enlarged nuclei and the abscissa indicates the population density of the nuclei, which
parametrizes nuclear crowding. (a) Barrett’s esophagus: nondysplastic Barrett’s mucosa
(○), indefinite for dysplasia (▪), low-grade dysplasia (•), high-grade dysplasia (▴); (b) colon:
normal colonic mucosa (○), adenomatous polyp (▴); (c) urinary bladder: benign bladder
mucosa (○), transitional cell carcinoma in situ (▴); (d) oral cavity: normal (○), low-grade
dysplasia (•), squamous cell carcinoma in situ (▴).
Kidder et al. and Cabib et al. developed Fourier transform infrared (FTIR)
spectroscopic imaging to study the biochemical composition of tissues.53,54
Kidder et al. applied FTIR spectroscopic imaging to observe biochemical
modifications in brain tissue.55 Sowa et al. showed that spectroscopic imaging
can be used to study tissue perfusion.56 Farkas et al. developed a new
modality of spectroscopic imaging by combining it with analytical cytol-
ogy57,58 and applied spectral imaging for cancer detection and diagnosis.59 In
spectroscopic optical coherence tomography (OCT), the capabilities of OCT
were enhanced by combining conventional OCT with Fourier transform
spectroscopy.60
LSS was extended to allow imaging applications as well.61 This LSS-based
imaging allows mapping variations in the size of epithelial cell nuclei of living
tissues over wide surface areas. The resulting images provide direct
quantitative measurements of nuclear enlargement and chromatin content,
which can be translated into clinical diagnoses. The technique can be used for
noninvasive or minimally invasive detection of precancerous changes in a
variety of organs, such as the colon and oral cavity.
In LSS imaging, a light source with a broad illumination spectrum is used
to illuminate the imaged tissue. The light from this source is collimated,
polarized, and transmitted through one of several narrow-band filters to select
the desired wavelengths covering the visible spectral range. A pair of equifocal
achromatic lenses separated by twice their focal length collects the light
backscattered by the sample. This so-called 4-f system ensures that the spatial
distribution of light in the plane located one focal length from each of the
collecting lenses depends on the angular distribution of light emerging from
the tissue. Therefore, an aperture positioned at the center of the lens system
determines the angular distribution of light scattered by the sample and
collected by the CCD, which is placed one focal length away from the outer
lens. The single scattering component is distinguished from the multiple
scattering component by means of polarization discrimination using an
analyzing polarizer, as in the polarization LSS. The CCD collects images for
each of the illumination wavelengths. After all the filters are used, each pixel is
represented by an LSS spectrum, which is analyzed using the Mie theory as in
the other LSS modalities.
LSS imaging was applied to study ex vivo colon tissue samples that were
obtained immediately after resection from patients undergoing colectomy
for familial adenomatous polyposis. Colonic adenomas are precancerous
dysplastic lesions exhibiting all of the characteristics of dysplastic lesions,
including cell nuclear enlargement, pleomorphism, and hyperchromasia.
The adenomas are surrounded by normal tissue covered by a single layer of
epithelial cells. For each pixel (25 μm × 25 μm) of the imaged field
(1.3 cm × 1.3 cm), the analysis of the LSS spectra enabled the size and
refractive index of the nuclei in each pixel to be obtained. Then the imaged
field was divided into 125 μm × 125 μm regions, and the percentage of nuclei
larger than 10 μm was obtained for each of these regions. As discussed
above, this statistic, which characterizes the degree of nuclear enlargement,
is highly significant for the diagnosis of dysplastic lesions in the colon and
several other organs. The resulting color-coded plot is shown in Fig. 2.19. As
expected, the nuclei are enlarged in the central, adenomatous region, but not
in the surrounding nondysplastic tissue.
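The per-region statistic described above can be computed with a simple binning step; the sketch below assumes a 2D array of extracted nuclear diameters (one value per 25-μm pixel, NaN where no nucleus was recovered) and is illustrative only.

import numpy as np

def percent_large_nuclei(diam_map_um, pixel_um=25.0, block_um=125.0, thresh_um=10.0):
    # Divide a per-pixel map of nuclear diameters (um) into block_um x block_um regions
    # and return, for each region, the percentage of pixels whose nuclei exceed thresh_um.
    # NaN pixels (no nucleus extracted) are ignored in the denominator.
    n = int(round(block_um / pixel_um))                   # pixels per block edge (5)
    rows = (diam_map_um.shape[0] // n) * n
    cols = (diam_map_um.shape[1] // n) * n
    d = diam_map_um[:rows, :cols].reshape(rows // n, n, cols // n, n)
    large = (d > thresh_um).sum(axis=(1, 3)).astype(float)
    valid = np.isfinite(d).sum(axis=(1, 3)).astype(float)
    return 100.0 * large / np.maximum(valid, 1.0)

# Example: a 520 x 520 pixel (1.3 cm x 1.3 cm) map of random "diameters"
demo = np.random.uniform(4.0, 12.0, size=(520, 520))
print(percent_large_nuclei(demo).shape)                   # -> (104, 104) regions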
These results demonstrate that LSS has the potential to provide a means
for detecting epithelial precancerous lesions and preinvasive cancers
throughout the body. LSS is advantageous compared to conventional
diagnostic techniques in that it can provide objective, quantitative results in
real time without the need for tissue removal. The first clinical application
of this approach is the endoscopic polarized scanning spectroscopy (EPSS) instrument
described below (Fig. 2.20).
Figure 2.19 LSS images of colon tissue sample showing the spatial distribution of the
percentage of enlarged nuclei. The adenoma observed histologically is marked by an
ellipse. (See color plates.)
Figure 2.20 Clinical EPSS instrument. The EPSS instrument is shown in the endoscopy
suite before the clinical procedure, with the scanning probe inserted into the working channel
of an endoscope. The insets show details of the scanning probe tip and the control box.
For use during endoscopy, the polarized scanning fiber optic probe is
inserted into the working channel of a standard gastroendoscope
(e.g., Olympus GIF-H180 used in the procedures reported below) and the
gastroenterologist introduces the endoscope through the mouth. Spectroscopy
of the entire Barrett’s segment is performed by scanning adjacent sections, 2 cm
in length, with the polarized scanning probe as follows. The endoscope tip is
positioned and the probe is extended 2 cm beyond the endoscope tip, placing it
at the distal boundary of a BE region chosen for examination. One complete
rotary scan of the esophageal wall is completed. The probe is withdrawn
linearly 2 mm back into the endoscope tip and another rotary scan is
completed. This is repeated for 10 rotary scans, so that an entire 2 cm length of
BE is scanned; then, the endoscope tip is withdrawn 2 cm and the next length of
BE is examined. The instrument collects 30 data points for each rotary scan
and performs ten steps during a linear scan (2 mm per step), collecting 300 data
points in 2 min for each 2-cm segment of BE. The scanning time can be reduced
to as little as 20 sec by utilizing a more efficient scanning mechanism.
Qiu et al.62 performed in vivo measurements using EPSS during 10 routine
clinical endoscopic procedures for patients with suspected dysplasia at the
Interventional Endoscopy Center (IEC) at Beth Israel Deaconess Medical
Center (BIDMC). Patients reporting to the IEC at BIDMC had undergone
initial screening at other institutions and were referred with confirmed BE and
suspicion of dysplasia. Protocols were reviewed and approved by the BIDMC
Institutional Review Board.
Patients reporting for routine screening of Barrett’s esophagus who had
consented to participate in the study were examined. The EPSS polarized fiber
optic probe was inserted into the working channel of the gastroendoscope and
the gastroenterologist introduced the endoscope through the mouth. The
EPSS instrument performed optical scanning of each complete, continuous
region of the luminal esophageal wall chosen for examination by the
gastroenterologist. Data from the optical scans were recorded for each linear
and angular position of the probe tip as parallel and perpendicular
polarization reflectance spectra, corrected for light source intensity and
lineshape. The backscattering spectrum at each individual spatial location was
extracted by subtracting perpendicular from parallel polarized reflectance
spectra. The backscattering spectra were then normalized to remove
amplitude variations due to peristalsis. The mean of the normalized spectra
was calculated. The difference from the mean for each site was calculated,
squared, and summed over all spectral points. A site was considered likely to
be dysplastic if this parameter was greater than 10% of the summed mean
squared. No data points are needed for calibration of this simple diagnostic
rule. This analysis is straightforward and can be done in real time. By
extracting the nuclear size distributions from the backscattering spectra for
each individual spatial location, the researchers found that this simple rule
agrees with the extracted nuclear morphology: sites flagged as suspicious
exhibit size distributions shifted toward enlarged nuclei (Fig. 2.21).
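A sketch of this diagnostic rule is given below; it is not the authors' code, and the normalization used to suppress amplitude variations (here, dividing each residual spectrum by its summed magnitude) is an assumption.

import numpy as np

def flag_suspicious_sites(par, perp, threshold=0.10):
    # par, perp: arrays of shape (n_sites, n_wavelengths) holding the parallel and
    # perpendicular polarized reflectance spectra, already corrected for the light
    # source intensity and lineshape.
    back = par - perp                                          # backscattering spectra
    back = back / np.sum(np.abs(back), axis=1, keepdims=True)  # assumed normalization
    mean_spec = back.mean(axis=0)
    deviation = ((back - mean_spec) ** 2).sum(axis=1)          # summed squared difference
    return deviation > threshold * np.sum(mean_spec ** 2)      # >10% of summed mean squared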
Figure 2.21 Nuclear size distributions for one high-grade dysplasia site and one non-
dysplastic site in BE of one of the patients. Dark (red and pink online) regions of the map
indicate areas suspicious for dysplasia based on nuclear size distributions extracted from
the backscattering spectra for each individual spatial location. Non-dysplastic BE sites had
nuclear size distributions centered about 5-6 μm diameters, while sites marked as suspicious
for dysplasia have nuclear size distributions with a main peak centered from 9 to 15 μm. The
arrows indicate the specific locations on the esophageal surface for which the size
distributions are extracted from the polarized LSS data.
The locations of the biopsied sites were recorded by their distances along the esophagus
and their angles relative to the start of the EPSS scan. Pathological examination
revealed a total of 13 dysplastic sites out of which 9 were high-grade dysplasia
(HGD). The rest of the sites were diagnosed as non-dysplastic BE.
The diagnostic parameters for each EPSS location were extracted from the
backscattering spectra, i.e., the residuals of the parallel and perpendicular
spectral components collected by the EPSS instrument. The results were
presented in the form of pseudo-color maps. Double blind comparison of the
EPSS maps with the biopsy reports revealed 11 true positive (TP) sites, 3 false
positive (FP) sites, 80 true negative (TN) sites, and 1 false negative (FN) site.
Thus EPSS measurements are characterized by a sensitivity of 92% and
specificity of 96%.
Several of the BE patients enrolled in the study who underwent routine
endoscopy and biopsy with EPSS presented with pathologies that revealed no
dysplasia, and the patients were dismissed. However, in some of these patients
the EPSS scan indicated probable sites of focal dysplasia, which were located
in regions where biopsies had not been taken. One of the patients was recalled
and biopsies were taken at the three sites indicated by EPSS in addition to the
standard-of-care protocol. Pathology confirmed HGD in all three EPSS
directed biopsies and one more HGD at a point located between two EPSS
indicated sites (Fig. 2.22). The latter site, considered to be a false negative, is
Figure 2.22 Biopsies taken during the initial and follow-up endoscopy procedures for patient
A, overlaid on the EPSS map acquired during the initial procedure (left panel). Three follow-up
biopsies were guided by the EPSS map and pathology confirmed HGD for each (indicated at
360°). High-resolution endoscopic (HRE) image of a location with invisible HGD (right panel)
with narrow band imaging (NBI) enabled. The video capture was acquired in subject A at one of
the locations where invisible dysplasia was missed by visual examination by HRE with NBI
but was located by EPSS and later confirmed by pathology. The site is marked by an arrow.
Note that the site is visually indistinguishable from the surrounding nondysplastic BE tissue.
(See color plates.)
very close to the sites indicated by EPSS and may arise from imperfect
correspondence of the actual biopsy site with an EPSS mapped site (a problem
that will be addressed in future instrument and algorithm development). The
patient now has been given radio frequency ablation (RFA) treatment.
These focal dysplasias were missed by standard-of-care procedures that
blindly biopsy a tiny fraction of esophageal tissue according to a prescribed
protocol, but they were caught and confirmed by the capability of EPSS to
examine the entire esophageal epithelium millimeter by millimeter and detect
dysplastic cells, enabling early treatment that in all likelihood saved the
patients' esophagi, and perhaps their lives.
The frequency of dysplasia in the patient sample was consistent with that of
the pre-screened patient population referred to the BIDMC IEC for
confirmation and treatment, but was higher than would be expected in the
general BE patient population. In fact, the rarity of HGD detection in the
general population of BE patients underscores the importance of having a more
comprehensive and effective method for gastroesophageal cancer screening.
Figure 2.24 Depth sectioning of the CLASS microscope along the vertical axis: signal S (a.u.) vs. z (μm) at five wavelengths from 500 to 700 nm.
The five spectra shown in Fig. 2.24 are almost identical (500 nm, 550 nm,
600 nm, 650 nm, and 700 nm), which demonstrates the excellent chromatic
characteristics of the instrument. Small maxima and minima on either side of
the main peak are due to diffraction from the pinhole. The asymmetry is due
to spherical aberration in the reflective objective.66
1.9 μm mixed in a ratio of 4:2:1. Note the misleading size information evident
in the conventional fluorescence images. A 0.5-μm microsphere that is either
close to the focal plane of the fluorescence microscope or carries a high load of
fluorescent label produces a spot that is significantly larger than the
microsphere’s actual size. The CLASS image [middle of Fig. 2.25(c)] on the
other hand does not make this error and correctly reconstructs the real size of
the microsphere. One also can see that prior fluorescence labeling does not
affect the determination of the objects with CLASS measurements.
To confirm the ability of CLASS to detect and identify specific organelles in
a live cell, Itzkan et al.64 performed simultaneous CLASS and fluorescence
imaging of live 16HBE14o- human bronchial epithelial cells, with the lysosomes
stained with a lysosome-specific fluorescent dye. The fluorescence image of the
bronchial epithelial cell, the CLASS reconstructed image of the lysosomes, and
the overlay of two images are provided in Fig. 2.25. The overall agreement is
very good. However, as expected, there is not always a precise, one-to-one
correspondence between organelles appearing in the CLASS image and the
fluorescence image. This is because the CLASS image comes from a single, well
defined confocal image plane within the cell, while the fluorescence image
comes from several focal “planes” within the cell throughout the thicker depth
of field produced by the conventional fluorescence microscope. Thus, in the
fluorescence image, Itzkan et al.64 observe the superposition of several focal
“planes,” and thus additional organelles above and below those in the single,
well-defined confocal image plane of the CLASS microscope.
Figure 2.26 shows the ability of CLASS microscopy to do time sequencing
on a single cell. The cell was incubated with DHA for 21 hours. The time
Figure 2.26 The time sequence of CLASS microscope reconstructed images of a single
cell. The cell was treated with DHA and incubated for 21 hours. The time indicated in each
image is the time elapsed after the cell was removed from the incubator. (See color plates.)
indicated in each image is the time elapsed after the cell was removed from the
incubator. In this figure, the nucleus, which appears as the large blue organelle,
has its actual shape and density reconstructed from the CLASS spectra. The
remaining individual organelles reconstructed from the CLASS spectra are
represented simply as spheroids whose size, elongation, and color indicate
different organelles. The shape of the nucleus has changed dramatically by the
third hour and the nuclear density, indicated by color depth, has decreased with
time. The organelles almost completely vanish by 4 hours.
Because CLASS microscopy requires no exogenous labels, and thus avoids
their potential interference with cell processes, it is applicable to viable cell
and tissue preparations, enabling the observation of cells and organelles
functioning at scales on the order of 100 nm. Applications for CLASS
microscopy in such diverse areas as prenatal diagnosis, in vitro fertilization
(IVF) or drug discovery are all linked by the potential of this technique to
nondestructively observe functional intracellular processes.
Human embryo development and quality, as well as response to
environmental factors, might be monitored progressively at all critical stages
using CLASS. Since the CLASS measurement is nondestructive and requires
no exogenous chemicals, a given embryo in vitro could be monitored over time
before implantation. These kinds of progression studies are not possible with
the techniques currently available.
An important part of the drug discovery process is to monitor changes in
organelle morphology in cells treated with compounds being screened for
therapeutic or toxic effects. Currently, imaging using numerous fluorescence
markers67 or electron microscopy using non-viable cell preparations68–70 are
being used to detect these changes. However, CLASS could be used to
monitor organelle responses in multiple cell lines in parallel, in real time, using
viable cells with no exogenous markers. Not only would CLASS enable more
rapid screening, but it would also provide results more likely to be predictive
of animal and ultimately human outcomes.
Acknowledgments
We would like to thank Irving Itzkan for encouragement and Olga Perelman
and Eugenia Fingerman for inspiration and patient help.
References
1. A.G. Yodh and B. Chance, “Spectroscopy and imaging with diffusing
light,” Physics Today, 48(3), 34–40 (1995).
2. J.R. Mourant, I. J. Bigio, and J. Boyer et al., “Spectroscopic diagnosis of
bladder cancer with elastic light scattering spectroscopy,” Lasers Surg.
Med., 17, 350–357 (1995).
3. S.L. Jacques, J.R. Roman, and K. Lee, “Imaging superficial tissues with
polarized light,” Lasers Surg. Med., 26, 119–129 (2000).
4. R.P. Rava, J.J. Baraga, and M.S. Feld, “Near-infrared Fourier-
transform Raman spectroscopy of human artery,” Spectrochimica Acta
A 47(3-4), 509–512 (1991).
5. M.S. Patterson, B.C. Wilson, and J.W. Feather et al., “The
measurement of dihematoporhyrin ether concentration in tissue by
reflectance spectrophotometry,” Photochem. Photobiol., 46(3), 337–343
(1987).
6. R.R. Anderson and J.A. Parrish, “The optics of human skin,” J. Invest.
Dermatol. 77, 13–19 (1981).
7. R.R. Alfano, A. Pradhan, G.C. Tang, and S.J. Wahl, “Optical
spectroscopic diagnosis of cancer and normal breast tissues,” J. Opt.
Soc. Amer. B 6(5), 1015–1023 (1989).
8. R.R. Richards-Kortum, R. Rava, and M. Fitzmaurice et al.,
“A one-layer model of laser-induced fluorescence for diagnosis of
disease in human-tissue— applications to atherosclerosis,” IEEE Trans.
Biomed. Eng. 36, 1222–1232 (1989).
9. K.T. Schomacker, J.K. Frisoli, and C.C. Compton et al., “Ultraviolet
laser-induced fluorescence of colonic tissue–basic biology and diagnostic
potential,” Lasers Surg. Med. 12(1), 63–78 (1992).
10. L.T. Perelman, V. Backman, and M. Wallace et al., “Observation of
periodic fine structure in reflectance from biological tissue: a new
technique for measuring nuclear size distribution,” Phys. Rev. Lett., 80,
627–630 (1998).
11. V. Backman, M. Wallace, and L.T. Perelman et al., “Diagnosing cancers
using spectroscopy,” Nature 405 (2000).
12. D.W. Fawcett, A Textbook of Histology, Charman & Hall, New York
(1994).
13. L.D. Hiatt, Color Atlas of Histology, Williams & Wilkins, Baltimore
(1994).
14. R.S. Cotran, S.L. Robbins, and V. Kumar, Robbins Pathological Basis of
Disease, W.B. Saunders Company, Philadelphia (1994).
15. B.J. Reid, R.C. Haggitt, and C.E. Rubin et al., “Observer variation in
the diagnosis of dysplasia in Barrett’s esophagus,” Hum. Pathol., 19,
166–178 (1988).
16. R.G. Newton, Scattering Theory of Waves and Particles, McGraw-Hill
Book Company, New York (1969).
17. J.D. Jackson, Classical Electrodynamics, John Wiley & Sons, New York
(1975).
18. H.C. van de Hulst, Light Scattering by Small Particles, Dover
Publications, New York (1957).
19. M. Kerker, The Scattering of Light, Academic Press, New York (1969).
20. B.T. Draine and P.J. Flatau, “Discrete dipole approximation for
scattering calculations,” J Opt. Soc. America, A11, 1491–1499 (1994).
21. B. Beauvoit, T. Kitai, and B. Chance, “Contribution of the mitochon-
drial compartment to the optical properties of rat liver: a theoretical and
practical approach,” Biophys. J., 67, 2501–2510 (1994).
22. J. Beuthan, O. Minet, and J. Helfmann et al., “The spatial variation of
the refractive index in biological cells,” Phys. Med. Biol., 41, 369–382
(1996).
23. P.M.A. Sloot, A.G. Hoekstra, and C.G. Figdor, “Osmotic response of
lymphocytes measured by means of forward light-scattering-theoretical
considerations,” Cytometry, 9, 636–641 (1988).
24. M. Hammer, D. Schweitzer, and B. Michel et al., “Single scattering by
red blood cells,” Appl. Opt., 37, 7410–7418 (1998).
25. J.M. Schmitt and G. Kumar, “Optical scattering properties of soft
tissue: a discrete particle model,” Appl. Opt. 37, 2788–2797 (1998).
26. A. Taflove, Computational Electrodynamics: The Finite-Difference Time-
Domain Method, Artech House, Boston (1995).
27. A. Dunn and R. Richards-Kortum, “Three-dimensional computation
of light scattering from cells,” IEEE J. Sel. Top. Quantum Electron., 2,
898–905 (1996).
28. Z. Liao, H. Wong, B. Yang, and Y. Yuan, “A transmitting boundary for
transient wave analysis,” Sci Sin Ser. A 27, 1063–1076 (1984).
29. A. Brunsting and F. Mullaney, “Differential light scattering from
spherical mammalian cells,” Biophys. J., 14, 439–453 (1974).
30. J.R. Mourant, J. P. Freyer, and A. H. Hielscher et al., “Mechanisms of
light scattering from biological cells relevant to noninvasive optical-
tissue diagnosis,” Appl. Opt., 37, 3586–3593 (1998).
31. J.V. Watson, Introduction to Flow Cytometry, Cambridge Univ. Press,
Cambridge (1991).
32. R. Drezek, A. Dunn, and R. Richards-Kortum, “Light scattering
from cells: finite-difference time-domain simulations and goniometric
measurements,” Appl. Opt., 38, 3651–3661 (1999).
33. J.M. Schmitt, “Simple photon diffusion analysis of the effects of multiple
scattering on pulse oximetry,” IEEE Trans. Biomed. Eng. 38, 1194–1203
(1991).
34. S. Takatani and J. Ling, “Optical oximetry sensors for whole blood and
tissue,” IEEE Eng. Med. Biol. 3, 347–357 (1994).
35. M.R. Arnfield, J. Tulip, and M.S. McPhee, “Optical propagation in tissue
with anisotropic scattering,” IEEE Trans Biomed Eng 35, 372–381 (1988).
36. A. Kienle, L. Lilge, and M.S. Patterson et al., “Spatially resolved
absolute diffuse reflectance measurements for noninvasive determination
of the optical scattering and absorption coefficients of biological tissue,”
Appl. Opt. 35, 2304–2314 (1996).
Chapter 3
Reflectance and Fluorescence Spectroscopy of Human Skin in vivo
Nikiforos Kollias
University of British Columbia, Vancouver, Canada
George I. Zonios
University of Ioannina, Ioannina, Greece
Sergei R. Utz
Saratov State Medical University, Saratov, Russia
Valery V. Tuchin
Saratov National Research State University, Saratov, Russia
Tomsk National Research State University, Tomsk, Russia
Institute of Precision Mechanics and Control, Russian Academy of Sciences,
Saratov, Russia
3.1 Introduction
Assessment of the optical properties of the skin is very helpful for the
quantification of the content and spatial distribution of the various biological
components in skin. It is also useful for the diagnosis of skin diseases,
investigation of the impact of different environmental factors (chemical agents,
drugs, UV radiation, temperature, etc.), and evaluation of the effectiveness of
skin treatments.
Currently, reflectance and fluorescence spectroscopy are probably the most
developed among the available optical methods for investigation of skin in vivo.
Reflectance and fluorescence from skin carry information about the structures of
the epidermis and dermis, the quantity and density of blood vessels, the
concentration and spatial distribution of chromophores and fluorophores in
skin, and the nature of skin metabolic processes.
The latest improvements in fiber optics, electronics, and computer
technologies have made reflectance spectroscopy a common and popular
technique for skin analysis. Typical applications include in vivo quantitative
analysis of skin erythema and pigmentation, determination of cutaneous color
variation, monitoring of dermatological treatment effects, and study of skin
biophysics.1–28
Fluorescence spectroscopy has also benefited from recent technological ad-
vances that made available new light sources, supersensitive optical multichannel
analyzers, and charge-coupled device (CCD) or complementary metal-oxide
semiconductor (CMOS) detectors with high temporal and spatial resolution.
A number of particular applications of fluorescence spectroscopy have
already been identified as being very useful in the study of skin. These
include ultraviolet A (UVA) light excited fluorescence,29–35 the use of
fluorescence for diagnostic purposes, determination of skin photoaging,29,36,37
determination of the level of primary melanin deposits,38 assessment of skin
erythema and pigmentation,34,35 and diagnosis of skin tumors.30,31,39–41
Methods of skin imaging for histological purposes, based on skin
autofluorescence (AF), are also being intensively developed.30,42,43
The absorption and scattering properties of skin affect both the AF and the
reflectance spectra. Therefore, the combined use of fluorescence and reflectance
may provide additional information for the analysis of skin tissue biophysics.
In this chapter, we discuss the potential advantages of the combined use of
reflectance and fluorescence spectroscopy of skin for the evaluation of erythema
and pigmentation indices, determination of total hemoglobin and its oxygen-
ation, and investigation of the efficacy of topically applied sunscreens. Skin
reflectance and fluorescence spectra alterations, caused by morphological and
functional changes in skin with aging, disease development, or therapy, can be
adequately analyzed with the help of simple skin models. Such an approach often
leads to new diagnostic methods that utilize skin optical imaging techniques and
color measurements often combined with AF and polarization-sensitive
techniques for the examination of skin. Improvement of skin diagnostic accuracy
and therapy depends largely on the analysis of the skin optical properties.
Figure 3.1 Spectral absorption of the main skin chromophores. For chromophores marked
with (*), the range indicated is a half-width of the band. FAD: flavin adenine dinucleotide;
NADH: reduced form of nicotinamide adenine dinucleotide; NAD: nicotinamide adenine
dinucleotide; DNA: deoxyribonucleic acid.
Figure 3.3 Molar attenuation spectra for solutions of major visible light-absorbing human
skin pigments. (1) DOPA-melanin (H2O); (2) oxyhemoglobin (H2O); (3) hemoglobin (H2O);
(4) bilirubin (CHCl3).44
primary absorption band at 430 nm, and it has a single secondary absorption
band at 555 nm. Both hemoglobins exhibit the lowest absorption at
wavelengths longer than 620 nm.2,6,44,46,50 Bilirubin has two relatively broad
absorption bands near 330 and 460 nm (Fig. 3.3).44
In the infrared (IR) spectral range, the skin absorption spectrum is
essentially determined by the absorption of water.46
In addition to absorption, skin is also characterized by strong light
scattering properties.2,44–47,51 The scattering results from inhomogeneities in
Figure 3.5 3D plots of the human skin in vitro AF spectra at different excitation
wavelengths: (a) a 40-year-old man; (b) and (c) 60- and 87-year-old women, respectively.61
Figure 3.7 Spectral ranges of fluorescence of the main skin chromophores. For
chromophores marked with (*), the range indicated is a half-width of the band.
Figure 3.8 Fluorescence spectrum of (1) collagen (in vitro), (2) AF spectrum of human skin
(in vivo), and the fluorescence spectrum of collagen after optical filtering by the dermal blood
plexus [thickness: (3) 50 mm and (4) 100 mm; blood content of 5%; oxygenation 50%;
modeling]. Excitation wavelength 337 nm.35
Figure 3.9 3D plot of human skin AF after UVA irradiation with 4 minimal erythema
doses (MEDs).35
Figure 3.10 Temporal dependence of the AF intensity for in vivo skin with developing
erythema: (○) lFL = 460 nm; (•) lFL = 420 nm; and (□) lFL = 500 nm.35
Figure 3.11 In vivo human skin AF with UVA-erythema for different values of external
mechanical pressure: (a) 7 days after UV irradiation; (b) 14 days after irradiation; (□) AF of
normal skin; (□) 1.4 × 10⁴ Pa; (○) 8.4 × 10⁴ Pa; (•) 14 × 10⁴ Pa; and (▴) erythema without
pressure.35
blue spectral ranges. In the red, excitation maximum is located near 400 nm,
and emission maximum is near 600 nm. In addition to this AF maximum,
there are several less pronounced peaks in the 580 to 640 nm range.
Fluorescence in this range is mostly due to porphyrins.30,31,41,73
Figure 3.12 AF intensity of human skin (λFL = 460 nm) vs. external mechanical pressure:
(□) 7 days after UVA irradiation; (•) 14 days after irradiation. The AF intensity of human skin
without erythema and compression (dotted line) is marked as a reference value.35
Figure 3.13 Simplified model of the layered structure of the skin (Reproduced with
permission from Ref. 6).
light to scatter forward. In the epidermis (layer 2), light is strongly absorbed
by melanin, and the nonabsorbed part reaches the hemoglobin-rich
papillary dermis (layer 3). The remaining part of the light is then diffusely
reflected by dermal collagen (layer 4). The light that is diffusely scattered by
the dermis reaches the skin surface after passing twice through the
hemoglobin and melanin-rich layers.
Assuming that the reflectance due to the first three layers is significantly
lower than that of the fourth layer, i.e., Rd,1, Rd,2, Rd,3 ≪ Rd,4, the total skin
reflectance is given by the equation:
$$R_d = \frac{I}{I_0} = T_1^2\, T_2^2\, T_3^2\, R_{d,4}, \qquad (3.1)$$
which is based on the assumption that absorption by melanin and hemoglobin
is more prominent than scattering in the top three layers; therefore, the light
transmitted through these layers, T_i (i = 1, 2, 3), can be described by the Beer-
Lambert law, and T_1 ≈ 1.
For a turbid medium such as human skin, the reflectance Rd determines
the apparent optical density (OD) of the medium:2
$$\mathrm{OD} = -\log R_d. \qquad (3.2)$$
The optical density of the skin can be expressed as50,54
$$\mathrm{OD} = 2\log e \sum_i \varepsilon_i d_i - \log(R_{d,4}), \qquad (3.3)$$
where εi is the extinction coefficient of the i’th layer with thickness di. Since
OD is defined as a sum, in vivo reflectance measurements allow for the in vivo
estimation of skin chromophore content (melanin and hemoglobin, located in
the second and third layers, respectively):
$$\mathrm{OD} = 2\log e\left[(\varepsilon_{ext} c_1 d_2)_{mel} + (\varepsilon_{ext} c_2 d_3)_{oxy} + (\varepsilon_{ext} c_3 d_3)_{deoxy}\right] - \log(R_{d,4}), \qquad (3.4)$$
where εext is the extinction coefficient, ci the concentration, and di the optical
thickness of the i’th chromophore layer.
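The layered-model estimate of Eq. (3.4) can be evaluated directly; in the sketch below, all extinction coefficients, concentrations, thicknesses, and the dermal reflectance are placeholders to be replaced with measured or tabulated values.

import numpy as np

def skin_optical_density(eps_mel, c_mel, d_epi,
                         eps_oxy, c_oxy, eps_deoxy, c_deoxy, d_derm, R_d4):
    # Eq. (3.4): OD = 2*log10(e)*[(eps*c*d)_mel + (eps*c*d)_oxy + (eps*c*d)_deoxy]
    #                 - log10(R_d4)
    # eps_* are Napierian extinction coefficients, c_* concentrations, d_* layer
    # thicknesses, and R_d4 the diffuse reflectance of the dermal collagen layer;
    # units must be mutually consistent.
    absorbers = (eps_mel * c_mel * d_epi
                 + eps_oxy * c_oxy * d_derm
                 + eps_deoxy * c_deoxy * d_derm)
    return 2.0 * np.log10(np.e) * absorbers - np.log10(R_d4)

# Illustrative (made-up) values:
print(skin_optical_density(10.0, 0.02, 0.01, 5.0, 0.001, 4.0, 0.001, 0.1, 0.6))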
Figure 3.14 shows the temporal dependence of OD for in vivo skin with
developing erythema, and Fig. 3.15 shows the in vivo OD spectral distribution
for erythematous human skin for different values of external mechanical
pressure. Curves in Figs. 3.14 and 3.15 demonstrate the erythema and
pressure effect on skin absorption. In particular, Fig. 3.14 shows the increase
of blood content in the skin, which rises by 80% within the first day after UVA
irradiation.
The application of an external mechanical compression on the order
of 10⁵ Pa to skin in vivo leads to a decrease of both its scattering
and its absorption, caused by a decrease of the hemoglobin of the blood contained
Figure 3.14 Temporal dependence of OD for in vivo skin with developing erythema:
(•) λFL = 460 nm; (▴) λFL = 575 nm; and (▪) λFL = 650 nm.54
Figure 3.15 The erythematous skin (three days after irradiation) OD spectral distribution
for different values of external mechanical pressure: (1) without pressure; (2) 5.6 × 10⁴ Pa;
(3) 8.4 × 10⁴ Pa; (4) 1.4 × 10⁵ Pa.54
in the skin. This process is inertial and proceeds over a period of about 3 to
4 min. After the removal of the compression, the recovery time of the skin
tissue (about 40 to 50 min) considerably exceeds the stabilization time of its
parameters after the application of external mechanical compression (several
minutes). Immediately after the removal of the compression, the blood filling
of the skin vessels increases considerably (by a factor of about 2.4 compared
to normal skin).74
When an external mechanical pressure is applied to human skin in vivo,
the influence of hemoglobin on the reflection spectra is effectively reduced and
after a short optical clearing time the carotenoid absorption becomes easily
discernable in a 460 to 500 nm spectral window and its optical density can be
calculated with high accuracy.75
where λEX and λFL are the excitation and emission wavelengths, F(λEX, z)
is the fluence distribution of the excitation light within the fourth layer, and
η(λFL, λEX) is the fluorescence quantum yield of dermal collagen,
where εi(λFL) is the absorption of the i'th layer with thickness di at the
wavelength of fluorescence emission λFL.
The average value of the excitation light intensity in the fourth layer
can be expressed as
$$I(\lambda_{EX}) = I_0(\lambda_{EX}) \exp\left[-\sum_i \varepsilon_i(\lambda_{EX})\, d_i\right], \qquad (3.7)$$
where I0(λEX) is the intensity of the excitation light incident on the skin surface as
a collimated beam, and εi(λEX) is the extinction coefficient of the i'th layer at
the wavelength of the excitation light. Then the intensity of the escaping
fluorescence can be calculated as follows:
$$I_4(\lambda_{FL}) = \eta(\lambda_{FL},\lambda_{EX})\, I_0(\lambda_{EX}) \exp\left\{-\sum_i \left[\varepsilon_i(\lambda_{EX}) + \varepsilon_i(\lambda_{FL})\right] d_i\right\}. \qquad (3.8)$$
The fluorescence signal normalized to the excitation intensity and quantum yield,
$$R^* = \frac{I_4(\lambda_{FL})}{\eta(\lambda_{FL},\lambda_{EX})\, I_0(\lambda_{EX})}, \qquad (3.9)$$
can then be formed, and the quantity OD* can be introduced:
$$\mathrm{OD}^* = \log\frac{1}{R^*} = \log e \sum_i \left[\varepsilon_i(\lambda_{EX}) + \varepsilon_i(\lambda_{FL})\right] d_i. \qquad (3.10)$$
The quantity OD* can be called the “apparent effective optical density”
because it is directly proportional to absorption. As compared to OD, changes
of OD* are affected by changes in the absorption properties of the skin at the
excitation and emission wavelengths, thus OD* provides information about
the absorption properties of skin (blood and melanin) at two different
wavelengths.
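The forward evaluation of OD* in Eq. (3.10) is straightforward; the sketch below uses arbitrary per-layer values purely for illustration.

import numpy as np

def apparent_effective_od(eps_ex, eps_fl, d):
    # Eq. (3.10): OD* = log10(e) * sum_i [eps_i(l_EX) + eps_i(l_FL)] * d_i
    # eps_ex, eps_fl -- per-layer Napierian extinction coefficients at the
    # excitation and emission wavelengths; d -- per-layer thicknesses.
    eps_ex, eps_fl, d = map(np.asarray, (eps_ex, eps_fl, d))
    return np.log10(np.e) * np.sum((eps_ex + eps_fl) * d)

# Example (arbitrary illustrative values for three overlying layers):
print(apparent_effective_od([5.0, 12.0, 8.0], [3.0, 6.0, 4.0], [0.002, 0.01, 0.02]))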
Since it is generally difficult to obtain absolute quantitative information
regarding the fluorophores from AF measurements, mainly because
of difficulties involved in the determination of the fluorescence quantum
yield η, fluorescence spectroscopy is usually employed in the analysis of
the relative fluorophore content. While in reflectance spectroscopy the
reflected light is normalized to a reflectance standard (such as BaSO4), in
fluorescence measurements the change in skin absorption between two
states (1 and 2) can be characterized by the difference of the corresponding
OD* values:
$$\Delta\mathrm{OD}^* = \mathrm{OD}^*_2 - \mathrm{OD}^*_1 = \log\frac{\eta I_0}{I^2_{FL}} - \log\frac{\eta I_0}{I^1_{FL}} = \log\frac{I^1_{FL}}{I^2_{FL}}, \qquad (3.11)$$
Figure 3.17 In vivo OD* spectral distribution for human skin with UVA-erythema (seven
days after irradiation) for different values of external mechanical pressure: (▪) normal
skin; (▵) 14 × 10⁴ Pa; (○) 2.8 × 10⁴ Pa; and (•) erythema without pressure.35
Figure 3.18 OD* (λFL = 460 nm) for erythematous human skin (seven days after
UV irradiation) vs. external mechanical pressure; the dashed line corresponds to normal skin
under a pressure of 10⁵ Pa.35
spectrum between 365 and 395 nm correlates with melanin content.7,90 The
melanin index M is defined as
$$M = 100(\mathrm{OD}_{365} - \mathrm{OD}_{395}), \qquad (3.12)$$
where subscripts denote the wavelength in nanometers. However, hemoglobin
also absorbs in this region (rising slope of the Soret band) and should be taken
into account as well.
Similarly, Dwyer et al. estimated the fraction of the epidermis that
contains melanin using the reflectance values at 400 and 420 nm.91 Although
they suggested that this parameter when measured at the upper inner arm may
be used as a predictor of risk for skin cancer, it should be noted that skin
remittance at these wavelengths is determined to a great extent by the Soret
band of oxyhemoglobin.
Due to minimal influence of hemoglobin, the near-infrared spectral range
is an alternative region for determining the melanin index. Kollias and
Baqer4,48 reported that the slope of the skin OD in the spectral range of 620 to
720 nm is directly proportional to the melanin content in the epidermis.
Specifically, the melanin index is defined by the slope of OD(λ) in the region
above 620 to 640 nm:4,6,48
$$M = 100(\overline{\mathrm{OD}}_{650} - \overline{\mathrm{OD}}_{700}), \qquad (3.13)$$
where the overbars denote the mean value of OD measured near 650 and 700 nm.
Changes in hemoglobin content in the dermis (or changes in hemoglobin
oxygenation) lead to changes in absorption in the 535- to 585-nm spectral
range (oxygenated hemoglobin has a double-peak maximum at 542 nm and
577 nm, while deoxygenated hemoglobin has a single peak at 555 nm).
Therefore, skin reflectance in this spectral range can be used to obtain
information about blood content and hemoglobin oxygenation.
Methods for the determination of the erythema index, E, using in vivo
reflectance spectroscopy are also well developed. The most widely used
definition for the erythema index is given by the area under the spectral curve
OD(λ) in the region of 510 to 610 nm:6
$$E = 100\left[\mathrm{OD}_{560} + 1.5(\mathrm{OD}_{545} + \mathrm{OD}_{575}) - 2.0(\mathrm{OD}_{510} + \mathrm{OD}_{610})\right], \qquad (3.14)$$
where the subscripts denote wavelengths in nanometers (Fig. 3.20).
Simplified methods for erythema index determination are based on
comparison of skin optical density in the green (560 nm) (high hemoglobin
absorption) and red (650 nm) (low hemoglobin absorption) spectral
ranges,87
$$E = 100(\mathrm{OD}_{560} - \mathrm{OD}_{650}). \qquad (3.15)$$
This simplified method has the disadvantage that it ignores the contribution
of melanin absorption in the green spectral range. The three-wavelength
method corrects for the melanin contribution:
Figure 3.20 (a) Reflectance spectrum R and (b) spectrum of optical density OD of
(1) erythematous and (2) normal human skin.88
$$E = 100\left(\mathrm{OD}_1 - \mathrm{OD}_2 - \frac{M}{k}\,\Delta\lambda_{12}\right), \qquad (3.17)$$
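A sketch of how the melanin and erythema indices of Eqs. (3.13)-(3.15) can be computed from a measured OD(λ) spectrum is given below; the windowed-averaging helper is an assumption, not part of the original definitions.

import numpy as np

def od_at(wl, od, target_nm, half_width_nm=5.0):
    # Mean OD in a small window around target_nm (wl, od are 1D arrays).
    sel = np.abs(np.asarray(wl) - target_nm) <= half_width_nm
    return float(np.asarray(od)[sel].mean())

def melanin_index(wl, od):
    # Eq. (3.13): M = 100*(mean OD near 650 nm - mean OD near 700 nm)
    return 100.0 * (od_at(wl, od, 650) - od_at(wl, od, 700))

def erythema_index(wl, od):
    # Eq. (3.14): E = 100*[OD560 + 1.5*(OD545 + OD575) - 2.0*(OD510 + OD610)]
    o = lambda nm: od_at(wl, od, nm)
    return 100.0 * (o(560) + 1.5 * (o(545) + o(575)) - 2.0 * (o(510) + o(610)))

def erythema_index_simple(wl, od):
    # Eq. (3.15): E = 100*(OD560 - OD650); ignores melanin in the green range.
    return 100.0 * (od_at(wl, od, 560) - od_at(wl, od, 650))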
The major advantages of the skin diffuse reflectance spectra and “pigment
indices” are their versatility and specificity, owing to the rich information
contained in the full spectrum. Furthermore, the skin diffuse reflectance
spectral properties can be presented as biologically relevant parameters
(melanin content, blood oxygenation, blood stasis, etc.).93
The specific features of diffuse reflectance from different areas of human
face skin for laser and non-laser sources of visible and NIR light have been
investigated94 to localize the closed-eye (eyelid) region. In the visible spectral
range, the eyelid skin surface can be discriminated by determining the melanin
index. Its values for the eyelid skin exceed those for other parts of the faces of
volunteers with skin types II and III.
The eyelid skin can also be differentiated from the other facial skin areas
by comparing the skin reflectance in the NIR at the wavelengths corre-
sponding to the presence and absence of water absorption bands. The skin
reflectance in the eyelid area, measured at a wavelength corresponding to the
water absorption band and normalized to the reflectance at a wavelength at
which the water absorption is minimal, is much lower than that for the
other face regions. This can be explained by the light penetration into eyeball
tissue with a high water content.94
The skin AF spectra can also be used for determination of erythema and
melanin skin indices. Equations (3.9) and (3.10) express OD* of normal
(ODN*) and erythematous (ODE*) skin:
$$\mathrm{OD}^*_N = \log\frac{\eta I_0}{I^N_{FL}}, \qquad \mathrm{OD}^*_E = \log\frac{\eta I_0}{I^E_{FL}}, \qquad (3.18)$$
where λFL = 460 nm.
The erythema index, which is proportional to absorption of fluorescence
(i.e., blood content), can be calculated from the equation
$$E \equiv k(\mathrm{OD}^*_E - \mathrm{OD}^*_N) = k\,\log\frac{I^N_{FL}}{I^E_{FL}}, \qquad (3.19)$$
where the value of k is chosen in such a way that erythema index values
obtained with reflection and fluorescence methods do not differ dramatically.
The melanin index can be defined by a similar equation, but the fluorescence
intensities I^N_FL and I^E_FL are measured under an external mechanical
pressure on the skin of about 10⁵ Pa (bloodless skin).
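Equation (3.19) reduces to a one-line computation once the AF intensities at λFL = 460 nm have been measured for normal and erythematous skin; in the sketch below the scaling constant k and the intensities are placeholders.

import numpy as np

def erythema_index_from_af(I_fl_normal, I_fl_erythema, k=100.0):
    # Eq. (3.19): E = k * log10(I_FL^N / I_FL^E), using AF intensities at
    # lambda_FL = 460 nm for normal (N) and erythematous (E) skin.
    # The scaling constant k is chosen to match the reflectance-based index;
    # the default value here is only a placeholder.
    return k * np.log10(I_fl_normal / I_fl_erythema)

print(erythema_index_from_af(1.00, 0.62))   # illustrative intensities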
Figure 3.21 Near-infrared attenuation [log10] for a 1-cm layer of (1) oxyhemoglobin,
(2) deoxyhemoglobin, and (3) water; 210-μM hemoglobin concentration in water.98
$$\mu_a(\lambda_1) = \varepsilon^{\lambda_1}_{deoxy}\, c_{deoxy} + \varepsilon^{\lambda_1}_{oxy}\, c_{oxy}, \qquad (3.21)$$
$$\mu_a(\lambda_2) = \varepsilon^{\lambda_2}_{deoxy}\, c_{deoxy} + \varepsilon^{\lambda_2}_{oxy}\, c_{oxy}, \qquad (3.22)$$
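Given the molar extinction coefficients at the two wavelengths, Eqs. (3.21) and (3.22) form a 2 × 2 linear system for the two hemoglobin concentrations, as sketched below (the variable names are assumptions):

import numpy as np

def hb_concentrations(mu_a_l1, mu_a_l2,
                      eps_deoxy_l1, eps_oxy_l1, eps_deoxy_l2, eps_oxy_l2):
    # Solve Eqs. (3.21)-(3.22) for c_deoxy and c_oxy given the absorption
    # coefficients measured at two wavelengths and the corresponding molar
    # extinction coefficients (consistent units assumed).
    A = np.array([[eps_deoxy_l1, eps_oxy_l1],
                  [eps_deoxy_l2, eps_oxy_l2]])
    c_deoxy, c_oxy = np.linalg.solve(A, [mu_a_l1, mu_a_l2])
    saturation = c_oxy / (c_oxy + c_deoxy)
    return c_deoxy, c_oxy, saturation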
$$\mathrm{OD} = \frac{\mu_a L}{2.3} - \log(a), \qquad (3.24)$$
where L represents the pathlength and a is an unknown factor depending on
the efficiency of the backscattered light from skin, the geometry of the incident
beam, and the wavelength of light. As long as a and L are independent of the
chromophore concentrations, this equation provides a simple relation for
calculating concentration changes from measured OD variations at two
wavelengths. As a result, this method provides only relative changes in
hemoglobin saturation and so far has not been successfully used to accurately
quantify absolute blood oxygenation.
Another technique used is the introduction of a differential pathlength
factor, DPF, to compensate for pathlength increase due to multiple light
scattering in tissue.101 In this method, spatially resolved optical density
measurements are related to the skin absorption coefficient, μa, via a simple
linear equation:
$$\mathrm{OD} = \mu_a\, s\, r_{sd} + G, \qquad (3.25)$$
where s is the differential pathlength factor, r_sd is the source-detector
separation, and G is a geometry-dependent constant.
Figure 3.22 (a) Changes in μeff of the blood liquid model measured at 760 nm and 850 nm
during a deoxy → oxy → deoxy cycle. (b) Corresponding changes in hemoglobin
saturation of the liquid model. The calibration sample is the initial deoxygenated blood-yeast
model solution (used with permission; see Ref. 96).
spectrum in the range 620-720 nm.4 After subtracting the melanin contribution,
the concentrations of HbO2 and Hb are calculated by a three-point fit (560,
577, and 630 nm).
Feather et al.20,100 have developed indices for hemoglobin concentration
(H) and oxygen saturation (Y) based on the gradient of the OD spectrum at
selected wavelengths:
$$H = 100\left[\frac{\mathrm{OD}_{544} - \mathrm{OD}_{527.5}}{16.5} + \frac{\mathrm{OD}_{573} - \mathrm{OD}_{544}}{29}\right]; \qquad (3.30)$$
$$Y = \frac{5.1\times 10^{3}}{H}\left[\frac{\mathrm{OD}_{573} - \mathrm{OD}_{558.5}}{14.5} - \frac{\mathrm{OD}_{558.5} - \mathrm{OD}_{544}}{14.5}\right] + 42, \qquad (3.31)$$
where ODλ is the skin optical density at wavelength λ (in nm). The above indices
can be clinically valuable for monitoring skin response to various
dermatological treatments.
In vivo measured hemoglobin indices for a white skin normally do not
exceed a value of 0.5. The hemoglobin index of skin as calculated from
Eq. (3.30) is principally a function of the hemoglobin content in the sub-
papillary plexus. Assuming that this function is linear and that the in vitro
model represents the in vivo optical properties of skin, then a conversion factor
from the hemoglobin index H to the approximate amount of hemoglobin per
unit area may be derived.106
The perceived color of skin depends on many factors, including its structure
and constituent elements, the light source used for illumination, and the
detector (eyes, camera, film, PC monitor, or printer) by which it is observed.107,108
Skin is characterized by its surface structure, such as scale, wrinkles, and
hair (both color and texture); the concentration and distribution of absorbers
in the stratum corneum, epidermis (melanin), and dermis (hemoglobin,
bilirubin, etc.); and the concentration and distribution of scatterers in the
stratum corneum (melanin dust), epidermis (melanosomes and melanin
granules, cellular structures), and dermis (collagen, erythrocytes, etc.). Light
transport in skin (including propagation, absorption, scattering, and
remittance of light at different wavelengths) is the main factor determining
skin color.107,109 An ideal light source would be one with spectral characteristics similar to those of sunlight, and the ideal spectral sensitivity of the detector
would be that which is similar to the human eye. Moreover, for a complete
description of color as perceived by a human observer, the exact physiology of
the eye must be taken into account.109,110
Skin color change resulting from different skin diseases or local/systemic
drug administration has been one of the major subjects of clinical and
experimental skin chromametry. Chromametry is based on the interpretation
of color perception due to the reflection from skin under white-light
illumination.11,13,18,19,111 It also deals with interpretation of skin color under
conditions other than white-light illumination, such as human skin
autofluorescence.112
Besides color, the exact shape of the lesion plays an important role
in performing a diagnosis. Visualization, documentation, monitoring,
measurement, and classification of morphologic manifestations of various
cutaneous processes have attracted the use of digital imaging tech-
niques that are based on computer processing of skin images.102 Recently,
digital color imaging techniques have found various applications in
dermatology, such as melanoma screening, psoriasis and erythema
detection, color analysis of nevi, monitoring of wheal and flare reactions,
etc.25,26,113,114
$$X = \sum_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} R_d(\lambda)\,x(\lambda)\,\Delta\lambda, \qquad
Y = \sum_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} R_d(\lambda)\,y(\lambda)\,\Delta\lambda, \qquad
Z = \sum_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} R_d(\lambda)\,z(\lambda)\,\Delta\lambda, \qquad (3.33)$$
The x, y parameters are sufficient for characterizing the skin color under fixed
illumination conditions and given a standard observer. In order to obtain an
orthogonal coordinate system for color description, the CIE1976 (L*a*b*)
system was introduced, in which color is represented as a vector in a three-
dimensional space defined by the coordinates L*, a*, and b* that form a
Cartesian coordinate system116 (Fig. 3.23):
$$L^{*} = 116\left(\frac{Y}{Y_0}\right)^{1/3} - 16; \qquad
a^{*} = 500\left[\left(\frac{X}{X_0}\right)^{1/3} - \left(\frac{Y}{Y_0}\right)^{1/3}\right]; \qquad
b^{*} = 200\left[\left(\frac{Y}{Y_0}\right)^{1/3} - \left(\frac{Z}{Z_0}\right)^{1/3}\right]. \qquad (3.35)$$
X0, Y0, and Z0 are the nominally white object-color stimuli given by the CIE standard illuminant C, with Y0 = 100. For the above equations to yield valid results, X/X0, Y/Y0, and Z/Z0 must be greater than 0.008856. The coordinate L* correlates with "psychometric lightness" (relative brightness or "gray value"), while a* and b* are chromatic coordinates (a* varies from green to red, and b* varies from blue to yellow); they are related to the two other quantities commonly used in color description, the "hue" and the color saturation (vividness) or "chroma":
$$\mathrm{Hue} = \arctan\!\left(\frac{b^{*}}{a^{*}}\right), \qquad \mathrm{Chroma} = \sqrt{(a^{*})^{2} + (b^{*})^{2}}. \qquad (3.36)$$
The total color difference ΔE* between two color stimuli [(L*, a*, b*)i and (L*, a*, b*)j] is calculated from the following equation:
$$\Delta E_{ij}^{*} = \sqrt{(\Delta L_{ij}^{*})^{2} + (\Delta a_{ij}^{*})^{2} + (\Delta b_{ij}^{*})^{2}}, \qquad (3.37)$$
where $\Delta L_{ij}^{*} = L_{i}^{*} - L_{j}^{*}$, $\Delta a_{ij}^{*} = a_{i}^{*} - a_{j}^{*}$, and $\Delta b_{ij}^{*} = b_{i}^{*} - b_{j}^{*}$.
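The colorimetric chain of Eqs. (3.33)–(3.37) can be sketched in a few lines of Python; the color-matching functions and the reference white must be supplied by the user (e.g., from CIE tables), and arctan2 is used for the hue so that all four quadrants are handled.

```python
import numpy as np

def xyz_from_reflectance(wl_nm, r_d, xbar, ybar, zbar):
    """Tristimulus values from a diffuse reflectance spectrum, Eq. (3.33);
    xbar, ybar, zbar are CIE color-matching functions sampled on wl_nm."""
    dl = np.gradient(wl_nm)                       # wavelength step, nm
    return (np.sum(r_d * xbar * dl),
            np.sum(r_d * ybar * dl),
            np.sum(r_d * zbar * dl))

def lab_from_xyz(X, Y, Z, white=(100.0, 100.0, 100.0)):
    """CIE1976 L*a*b*, Eq. (3.35); valid when each ratio exceeds 0.008856."""
    X0, Y0, Z0 = white
    fx, fy, fz = np.cbrt(X / X0), np.cbrt(Y / Y0), np.cbrt(Z / Z0)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def hue_chroma(a, b):
    """Hue angle (rad) and chroma, Eq. (3.36)."""
    return np.arctan2(b, a), np.hypot(a, b)

def delta_e(lab_i, lab_j):
    """Total color difference, Eq. (3.37)."""
    return float(np.linalg.norm(np.subtract(lab_i, lab_j)))
```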
Erythema is often evaluated using the a* parameter, whereas pigmentation is evaluated by the values of L*, b*, or combinations of them, e.g., ΔE*.11,13,26,117–120
Although usually a* correlates closely with the erythema index of the
narrow-band instruments, L* and b* show weak correlations with the melanin
index.117,121 In particular, an increase in hemoglobin concentration can
decrease both values of L* and b* in the absence of any change in melanin
pigmentation,93 for example, during application of sub-diastolic pressure with
a pressure cuff.122 Moreover, a* values are influenced by melanin concentra-
tion.11,123,124 In UVA-induced persistent pigment darkening, the b* value was
found to initially decrease and later increase, as the yellow component of
newly generated melanin becomes prominent.119
The L*a*b* parameters provide a measure of the perception of skin color
and can, therefore, emulate how the dermatologist or the average person
perceives skin. On the other hand, such a system of parameters cannot be
expected to simulate an analytical tool that could explain the physiological
reason for the skin appearance.
Chardon et al. have proposed the use of a vector representation for the
UV-induced tanning reaction in the L*a*b* space.119,125 The authors showed
that in the three-dimensional L*a*b* space, all skin colors of subjects with fair
complexion fall within a “banana” shaped volume termed skin color volume.
Increases in skin pigmentation can be graphed as a shift on the L*–b* plane,
whereas skin reddening (erythema) is represented as a shift on the L*–a*
plane. In an attempt to quantify skin pigmentation, the “individual typology
angle (ITA)” or “alpha characteristic angle” has been proposed, defined as the
vector direction in the L*–b* plane:
$$\mathrm{ITA} = \arctan\!\left(\frac{L^{*} - 50}{b^{*}}\right)\cdot\frac{180}{\pi}, \qquad (3.38)$$
where ITA is given in degrees. This parameter has been validated as an
expression of skin pigmentation by the analysis of diffuse reflectance
measurements. However, it has also been found that an increase in the local
concentration of deoxyhemoglobin (e.g., by application of a pressure cuff) has
a similar effect on ITA as an increase in melanin pigmentation and can,
therefore, visually simulate pigmentation.122 Therefore, although ITA may be
a measure of perceived skin pigmentation, it still lacks the information about
the molecular origin of skin color (e.g., whether the perception of ‘pigment’ is
due to an increase of melanin or deoxyhemoglobin concentration).93
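For completeness, the ITA of Eq. (3.38) reduces to a one-line computation; the sketch below assumes the inputs are CIE1976 coordinates obtained as above.

```python
import numpy as np

def ita_degrees(L_star, b_star):
    """Individual typology angle, Eq. (3.38), returned in degrees."""
    return float(np.degrees(np.arctan((L_star - 50.0) / b_star)))
```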
An approach that has attracted attention in the last decade involves the
determination of color as a psychophysical parameter, by taking into account
the way it is perceived by the human eye and brain.13 Instruments for color
measurement based on this approach are called either chromaticity meters or
chroma meters, and they employ illumination based on a mixture of the three
basic primary colors. They provide a good description of the color variability
of different skin types,13,126 the color changes associated with erythema and
tanning,11,13 the colorimetric assessment of drug effects on the skin,18,19 the
effects of age and sun exposure,16 etc.
Another aspect associated with the color of human skin involves skin
fluorescence. UVA-induced AF emission in skin occurs in the visible
spectral range and hence may contribute to skin color. Changes in AF may
lead to changes in color perception, which can be expressed in terms of the
three color parameters: L*, chroma, and hue. These parameters, along with
the color difference DE, can be calculated using Eqs. (3.33)–(3.37) where
the AF spectrum IFL(l) is substituted in place of the diffuse reflectance
spectrum Rd(l).
Figure 3.24 shows color changes in fluorescence and reflectance of skin
with UVA-erythema and skin with external application of mechanical
pressure. The changes in color perception of both AF and reflected white
light are related to blood content in the dermis. In the case of reflected white
light, the hue decreases during erythema development and increases with
applied pressure to the skin. These results are expected because erythematous
skin has a reddish tinge and skin with pressure applied has a less reddish tinge
compared to normal skin. In contrast, changes in skin blood content only
Figure 3.24 Colors of human skin AF radiation (3,4) and white light reflected from the skin
(1,2) with developed UVA-erythema (1,3) and the skin with applied mechanical pressure
(2,4) presented in L*a*b* color space.112
Figure 3.25 Changes of color parameters for human skin with developed UVA-erythema:
(1) autofluorescence, (2) reflectance.112
Figure 3.26 Changes of color parameters for human skin with applied mechanical
pressure: (1) autofluorescence, (2) reflectance.112
Then, according to the simple skin reflectance model [Eqs. (3.3)–(3.4)], when
selecting a green (G) and a red (R) band, the following equations can be
written:25
$$\mathrm{OD}_R \cong 2\,\varepsilon_{\mathrm{ext},R}^{\mathrm{mel}}\, d_2\, c_{\mathrm{mel}} + \log\frac{1}{R_{4,R}}, \qquad (3.41)$$
$$\mathrm{OD}_G - 1.44\,\mathrm{OD}_R \cong 2\,\varepsilon_{\mathrm{ext},G}^{\mathrm{hemo}}\, d_3\, c_{\mathrm{hemo}} + \log\frac{1}{R_{4,G}} - 1.44\log\frac{1}{R_{4,R}}. \qquad (3.42)$$
Equations (3.41) and (3.42) have been used to evaluate spatial alterations in
the distribution of melanin and hemoglobin content in skin.25
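A hedged sketch of how Eqs. (3.41) and (3.42) translate into 2D chromophore maps is given below; it returns maps only up to the calibration constants (extinction coefficients, layer thicknesses, and background terms), which are not reproduced here.

```python
import numpy as np

def chromophore_maps(reflectance_rgb):
    """Relative melanin and hemoglobin distribution maps of a skin image,
    following the structure of Eqs. (3.41)-(3.42): OD_R tracks melanin and
    OD_G - 1.44*OD_R tracks hemoglobin, each only up to calibration constants.
    `reflectance_rgb` is an (H, W, 3) array of reflectance-like values in (0, 1]."""
    eps = 1e-6
    od_r = -np.log10(np.clip(reflectance_rgb[..., 0], eps, 1.0))
    od_g = -np.log10(np.clip(reflectance_rgb[..., 1], eps, 1.0))
    return od_r, od_g - 1.44 * od_r   # (melanin map, hemoglobin map), arbitrary units
```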
The green color component of the images was used for calculating the wheal and flare areas.113 The light intensity threshold of the green color was used
for discriminating between normal unaffected skin and skin with wheal or
flare or skin with intracutaneous PAF-acether injection. For each pixel, light
intensities above the threshold were manually set to maximum white, and light
intensities below the threshold were set to zero (black). This resulted in the
generation of 1-bit resolution images (black and white images) calculated for
the wheal and flare areas.
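The thresholding step described above amounts to a simple binarization of the green channel; a sketch is given below (the threshold value itself has to be chosen manually, as in the original procedure).

```python
import numpy as np

def binarize_green(green_channel, threshold):
    """1-bit (black/white) image obtained by thresholding the green color
    component: pixels above the threshold are set to white (1), pixels below
    to black (0). The affected area then follows from the pixel count and the
    known pixel size of the imaging system."""
    mask = (np.asarray(green_channel) > threshold).astype(np.uint8)
    return mask, int(mask.sum())
```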
The psoriatic skin lesions were assessed from photographs using a
computer image analysis system, with green color normalized to the original
color image.114
The skin color parameters in the CIE1976 (L*a*b*) system can also be estimated using a color-camera-based technique.128 For this purpose, the R, G, B color coordinates of the images are converted into quasi-L*, a*, b* parameters using equations similar to Eq. (3.35), where X/X0 ≡ RR, Y/Y0 ≡ RG, and Z/Z0 ≡ RB.
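Under these substitutions, the quasi-L*a*b* conversion is a direct application of Eq. (3.35); a sketch is shown below, assuming the camera channels have already been normalized to a white standard.

```python
import numpy as np

def quasi_lab_from_rgb(r_r, r_g, r_b):
    """Quasi-L*, a*, b* parameters from calibrated R, G, B reflectance ratios,
    i.e., Eq. (3.35) with X/X0 = R_R, Y/Y0 = R_G, Z/Z0 = R_B. Each ratio is
    assumed to exceed 0.008856 (the validity limit quoted above)."""
    fr, fg, fb = np.cbrt(r_r), np.cbrt(r_g), np.cbrt(r_b)
    return 116.0 * fg - 16.0, 500.0 * (fr - fg), 200.0 * (fg - fb)
```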
Barel et al. have studied skin color changes after application of a commercial tanning milk using an imaging system (Visi-Chroma VC-100; Biophotonics, Lessines, Belgium) that records a typical RGB image and can convert it to several CIE standard scales including L*a*b* and Yxy.129 Image
analysis software is available that can convert images from the RGB space to
the L*a*b* space provided that the acquired images have been properly
calibrated.
Erythema and melanin content in port wine stain lesions have been
evaluated by a similar method of converting an RGB image to the L*a*b*
space, assuming that the a* index image represents the erythema distribution
map and the L* index image the inverse of the melanin distribution map.130
A new method for quantitative analysis of 2D distributions of erythema and pigmentation in human skin from a computer image of the skin was reported in Ref. 131. Skin images recorded using two interference filters at wavelengths of 650 nm and 700 nm make it possible to form, according to Eq. (3.16), a new image of the skin in which the pigmentation index is the visualization parameter. Similarly, images recorded using three interference filters at 560 nm, 650 nm, and 700 nm make it possible to form, according to Eq. (3.17), a new image of the skin in which the erythema index is the visualization parameter.
The contrast of an erythematous skin image, when the erythema index is
used as the visualization parameter, as well as the contrast of the skin image
with melanin pigmentation, when the pigmentation index is used as
visualization parameter, was three times larger than in the case of the color
image (R-, G-, or B-images).131
Fluorescence photography appears to be a useful tool in the evaluation of
topical acne therapy.42 It also allows monitoring of patient compliance, and a
comparative analysis of images. UV-excited fluorescence is useful as an
ancillary technique in the evaluation of epidermal melanin. The maximum of
$$P_{L}^{r}(\lambda) = \frac{R_{\parallel}(\lambda) - R_{\perp}(\lambda)}{R_{\parallel}(\lambda) + R_{\perp}(\lambda)}. \qquad (3.44)$$
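Equation (3.44) is evaluated per wavelength directly from the two measured reflectance spectra; a minimal sketch:

```python
import numpy as np

def residual_polarization(r_par, r_perp):
    """Degree of residual linear polarization, Eq. (3.44), computed per
    wavelength from the co-polarized (r_par) and cross-polarized (r_perp)
    diffuse reflectance spectra (1D arrays on the same wavelength grid)."""
    r_par = np.asarray(r_par, dtype=float)
    r_perp = np.asarray(r_perp, dtype=float)
    return (r_par - r_perp) / (r_par + r_perp)
```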
Figure 3.27 Spectra of the degree of residual polarization of linearly polarized probing radiation diffusely reflected by human skin (type II according to Fitzpatrick) with erythema of different grades (a): 1 corresponds to the erythema index E = 157, 2 to E = 223, 3 to E = 249, 4 to E = 275, and 5 to E = 290. Spectra of the apparent optical density of human skin with erythema of different grades (b): 1 corresponds to the erythema index E = 137, 2 to E = 157, 3 to E = 213, 4 to E = 249, and 5 to E = 288.136
Figure 3.28 Variation in the difference polarization spectra of skin with the layer-by-layer removal of surface layers of the epidermis: 1 corresponds to normal skin, 2 to skin with stripped layers of thickness 40 μm, 3 to 50 μm, 4 to 60 μm, and 5 to 70 μm.136
polarization degree are similar to the spectra of the effective optical thickness
OD of skin, presented in Fig. 3.27(b), and are indicative of the presence of
melanin (increased polarization degree in the short wavelength part of the
spectrum) and blood (increased polarization degree in the spectral absorption
bands of hemoglobin) in skin. An increase in the erythema grade, which reflects stronger absorption in skin caused by an increased blood concentration in the papillary dermis, leads to an increase in the residual degree of polarization within the absorption bands of hemoglobin.
Blood in the papillary dermis is hardly manifested in the difference polarization spectrum of normal skin. However, as the epidermis is thinned by its layer-by-layer removal, the polarization state of light interacting with the papillary dermis is retained to a remarkable degree. This is seen as the appearance of hemoglobin absorption bands in the difference polarization spectrum (Fig. 3.28). The intensity of the hemoglobin absorption bands in the difference polarization spectrum makes it possible to estimate the thickness of the epidermal layer of skin or, more exactly, the depth of blood vessels in the skin. The thickness of the peeled-off epidermal layer was estimated from the number of sequential strippings (a layer about 4 μm thick was peeled off with each stripping).
Figure 3.29 A freckle. A polarization image removes the melanin from a freckle. Freckle
melanosomes do not appear to influence the image.116
Figure 3.30 A benign pigmented nevus. A polarization image removes the melanin and
shows apparent scatter from the nevus structure.134
Figure 3.31 Polarization images of a burn-injured site of skin surface: (a) the co-polarized
component of the diffusely reflected radiation; (b) the cross-polarization component; (c) the
image obtained by using the degree of residual polarization of diffusely reflected light as the
visualization parameter.131
The reflectance of in vivo skin is relatively low in the UV range; and the
ability to measure sunscreen optical characteristics can be significantly
impaired by strong light absorption and scattering within the epidermis. In
addition, for in vivo phototesting of sunscreen formulations with a high sun protection factor in the UVA range, SPF(A), many (seven or more) hours of UVA exposure are required.
An alternative method is the fluorescence technique based on the assess-
ment of changes in fluorescence intensity of skin labeled with exogenous,
highly fluorescent dyes (dansyl chloride, acridine orange, etc), when excited by
UVA radiation.151
The following equation is used for the calculation of the SPF(A):152
$$\mathrm{SPF(A)} = \frac{\displaystyle\sum_{320\,\mathrm{nm}}^{400\,\mathrm{nm}} \left(CIE_{\lambda}\, E_{\lambda}\right)}{\displaystyle\sum_{320\,\mathrm{nm}}^{400\,\mathrm{nm}} \left(T_{\lambda}\, CIE_{\lambda}\, E_{\lambda}\right)}, \qquad (3.46)$$
where $T_{\lambda}$ is the sunscreen transmission at wavelength $\lambda$; $CIE_{\lambda}$ is the CIE action spectrum value at wavelength $\lambda$; and $E_{\lambda}$ is the spectral irradiance of terrestrial midday midsummer sunlight for southern Europe at wavelength $\lambda$.153–155
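A sketch of the SPF(A) evaluation by Eq. (3.46) is given below; the CIE action spectrum and the solar spectral irradiance must be supplied from standard tables, and all arrays are assumed to share one wavelength grid.

```python
import numpy as np

def spf_a(wl_nm, t_sunscreen, cie_action, irradiance):
    """UVA protection factor SPF(A), Eq. (3.46): the ratio of the CIE- and
    irradiance-weighted sums without and with the sunscreen transmission
    T_lambda, evaluated over 320-400 nm."""
    band = (wl_nm >= 320.0) & (wl_nm <= 400.0)
    weight = cie_action[band] * irradiance[band]
    return float(np.sum(weight) / np.sum(t_sunscreen[band] * weight))
```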
Changes in dye fluorescence intensity when sunscreen is applied to the
skin allow for determination of the sunscreen transmission in the UVA range,
and therefore, determination of SPF(A) as well. Skin AF excited by UVA
light may also be used for this purpose.83 Assuming that collagen is
responsible for most of the AF emission of skin, the fluorescence intensity of
normal skin and skin with sunscreen applied can be expressed as
$$I_{FL}^{N,S}(\lambda_{FL}) = I_0(\lambda_{EX})\, T_1^{N,S}(\lambda_{EX})\, T_2^{N,S}(\lambda_{EX})\, T_3^{N,S}(\lambda_{EX})\, \eta\, T_3^{N,S}(\lambda_{FL})\, T_2^{N,S}(\lambda_{FL})\, T_1^{N,S}(\lambda_{FL}), \qquad (3.47)$$
where I0(lEX) is the intensity of the excitation radiation incident on the skin
surface; T1, T2, and T3 are the transmittances of corresponding skin layers at
the excitation and emission wavelengths (see Section 3.3.2.); h is the quantum
efficiency of collagen fluorescence; superscript N denotes normal skin with no
sunscreen applied, and superscript S denotes skin with sunscreen.
Assuming that the diffusion of the sunscreen molecules takes place only within the epidermis (layer 1), then $T_2^{N} = T_2^{S}$ and $T_3^{N} = T_3^{S}$. In this case, the ratio of the fluorescence intensities is
$$\frac{I_{FL}^{S}}{I_{FL}^{N}} = \frac{T_1^{S}(\lambda_{EX})\, T_1^{S}(\lambda_{FL})}{T_1^{N}(\lambda_{EX})\, T_1^{N}(\lambda_{FL})}. \qquad (3.48)$$
$T_1^{N,S}(\lambda_{FL})$ is the epidermal transmittance measured in the visible spectral
range. It can be determined from the in vivo reflectance spectra of normal skin
and that of skin with sunscreen applied.
The reflection factors RN (for normal skin) and RS (for skin with sun-
screen applied) can be defined as (see Section 3.3.1.)
$$T = \sum_{320\,\mathrm{nm}}^{400\,\mathrm{nm}} T(\lambda_{EX}), \qquad (3.52)$$
and it is normalized with reference to skin with no sunscreen applied. For all
studied sunscreens, the maximum reduction of skin AF, i.e., the photoprotec-
tion maximum, was observed 1 h after application.
Figure 3.33 In vivo transmittance spectra of tested sunscreens 20 min after application.
(1) Contralum Ultra; (2) pH5-Eucerin; (3) Ilrido Plus.83
Figure 3.34 AF changes following the application of the tested sunscreens, λFL = 500 nm. (1) Contralum Ultra; (2) pH5-Eucerin; (3) Ilrido Plus.83
Table 3.2 Change of the optical properties at 500 and 810 nm of human skin samples of approximately 2 × 2 cm² at pressures of 0.0, 0.1, and 1.0 kg/cm² uniformly distributed over the sample surface.161 In vitro measurements were done using an integrating sphere technique and the adding-doubling method for deconvolution of absorption and reduced scattering coefficients. N is the type of the sample: 1–3, skin from a Caucasian male; 4–6, skin from a Hispanic male; 7–9, skin from a Caucasian female. D is the thickness of the sample.
No pressure / Pressure 0.1 kg/cm² / Pressure 1 kg/cm²
N  λ, nm  D, mm  μa, cm⁻¹  μ's, cm⁻¹  D, mm  μa, cm⁻¹  μ's, cm⁻¹  D, mm  μa, cm⁻¹  μ's, cm⁻¹
1 500 0.40 4.7 34.0 0.15 11.7 86.4 0.12 10.0 81.9
810 0.2 10.2 0.5 22.9 0.1 24.2
2 500 0.23 4.8 69.1 0.05 22.6 320.9 0.05 19.3 309.5
810 0.2 19.6 0.3 108.1 0.3 105.7
3 500 0.25 5.9 47.0 0.13 11.8 94.9 0.13 11.7 78.6
810 0.4 17.6 1.1 27.0 1.3 30.2
4 500 0.70 3.4 23.9 0.35 6.2 44.1 0.28 7.5 47.7
810 0.6 7.5 0.9 15.1 1.5 12.7
5 500 0.78 3.6 21.8 0.62 4.6 37.1 0.48 4.7 35.5
810 0.9 6.8 1.6 12.0 0.8 9.1
6 500 0.63 4.4 27.0 0.48 4.4 31.6 0.33 6.5 37.9
810 1.1 8.4 0.3 7.2 0.3 8.7
7 500 0.42 5.3 21.3 0.30 6.7 26.1 0.27 7.7 28.7
810 0.5 9.7 1.5 13.2 1.5 14.8
8 500 0.50 5.6 24.2 0.30 9.7 36.8 0.20 13.1 49.4
810 1.3 8.1 1.9 12.2 2.3 14.0
9 500 0.50 4.8 26.3 0.34 5.8 31.8 0.23 9.2 42.4
810 1.1 6.9 0.8 8.6 1.2 10.6
500 to 600 nm also decreases (Fig. 3.35a) due to a decrease of skin absorption
caused by the hemoglobin of the blood contained in the skin. This process has a noticeable inertia and proceeds over a period of about 3 to 4 min.74
After removal of the external mechanical compression, the recovery of the
skin occurs within 40 to 50 min, and, in this case, during the first several
seconds, a considerable increase in the content of blood (hemoglobin) (by a
factor of 2.4 compared to the norm) occurs [Figs. 3.35(b) and 3.36]. This can
also be seen from Fig. 3.37, which presents temporal changes in the
reflectance coefficient of the skin at two wavelengths (540 and 700 nm).
It may be noted that during the first several seconds after removal of the external mechanical compression, a considerable increase in the degree of oxygenation (by a factor of 2 to 4 compared to the norm) is observed (Fig. 3.38), which can be caused by a sharp inflow of arterial blood into the volume of skin tissue that was subjected to the compression.
In the NIR, two processes control the in vivo skin reflectance:
alterations of the absorbing and the scattering properties of the skin. The
external compression decreases both absorption and scattering of the skin,
Figure 3.35 Temporal changes in in vivo reflection spectra of skin from the human forearm
(a) upon application of an external mechanical compression (p ¼ 110 kPa) and (b) after its
removal. Curves in (a): (1) norm, t ¼ (2) 15, (3) 105, (4) 200, and (5) 290 s; curves in
(b): t ¼ (1) 0, (2) 10, (3) 150 s, (4) 40 min, and (5) norm.74
Figure 3.36 Dynamics of erythema index E of skin tissue under conditions of an external
mechanical compression (within the time interval of 0 to 330 s) and after its removal (within a
time interval longer than 330 s). p = (1) 13.9 and (2) 110 kPa.74
Figure 3.37 Temporal changes in the reflectance coefficient of the skin at 540 and 700 nm (curves 1 and 2) under external mechanical compression and after its removal.
Figure 3.38 Dynamics of the degree of oxygenation of blood hemoglobin of skin tissue
under conditions of an external mechanical compression (p ¼ 110 kPa) (within the time
interval of 0–290 s) and after its removal (within a time interval longer than 290 s). The
dashed line shows the time interval within which there is no blood in the skin tissue.74
where $b_s$ accounts for additional irradiation of the upper layers of a tissue due to backscattering (photon recycling effect), and $\mu_{\mathrm{eff}} = \sqrt{3\mu_a(\mu_a + \mu_s')}$.
Owing to the fibrous structure of the skin dermis (the main portion of skin), it is quite reasonable to assume that the kinetics of fluid diffusion within the skin can be approximated by free diffusion. Therefore, to describe the kinetics of agent transport, the one-dimensional diffusion equation can be used:
$$\frac{\partial C_a(x,t)}{\partial t} = D_a \frac{\partial^2 C_a(x,t)}{\partial x^2}, \qquad (3.54)$$
where Ca(x, t) is the fluid concentration, Da is the agent coefficient of
diffusion, and x is the spatial coordinate.
The volume-averaged concentration of an agent, Ca(t), in the first-order approximation has the form197,203
$$C_a(t) = \frac{1}{d}\int_0^d C_a(x,t)\,dx \cong C_{a0}\left[1 - \exp\left(-\frac{t}{\tau}\right)\right], \qquad (3.55)$$
When an agent is administered through only one sample surface (topical agent application), the characteristic diffusion time is
$$\tau \cong \frac{d^2}{D_a}. \qquad (3.56)$$
During tissue impregnation by a chemical agent, the refractive index of the background (interfibrillar) medium n0 is a time-dependent function of the concentration of the agent penetrating into the sample, Ca(t), defined by Eq. (3.55). The time-dependent volume fraction of the agent within the tissue sample, fa, is proportional to its concentration Ca; thus, using the Gladstone and Dale law, we can get:
As the SC thickness is in the range of d = 10–20 μm, it follows from Eq. (3.55) that τ can range from 0.9 to 3.6 hrs. For a living epidermis of thickness ~100 μm and an OCA diffusivity of Da = 3 × 10⁻⁸ cm²/sec, a diffusion time of τ ≈ 0.9 hr is obtained. The two-orders-of-magnitude higher diffusivity of the living epidermis in comparison with the SC is due to the higher permeation ability of the epidermal cell membrane, which is similar to the permeability of membranes of other epithelial cells. For a 1-mm dermis thickness and a typical diffusivity of fibrous tissue, Da = 3 × 10⁻⁶ cm²/sec,212 τ can be estimated as 0.9 hr. In accordance with these estimations, 2.7–5.4 hrs is needed for OCA diffusion through a skin layer. Evidently, depending on tissue condition and site on the body, this time can differ. Approximately equal contributions to the time delay of OCA permeation, about 0.9 hr each (2.7 hrs in total), are provided by the three major skin layers: a 10-μm SC, a 100-μm living epidermis, and a 1-mm dermis.
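The layer-by-layer estimates above follow directly from Eq. (3.56); a sketch reproducing them is given below (the SC diffusivity is taken two orders of magnitude below that of the living epidermis, as stated in the text).

```python
def diffusion_time_hr(thickness_cm, diffusivity_cm2_per_s):
    """Characteristic diffusion time from Eq. (3.56), tau ~ d^2 / D_a, in hours."""
    return thickness_cm ** 2 / diffusivity_cm2_per_s / 3600.0

# Thicknesses and diffusivities quoted in the text above.
layers = {
    "stratum corneum (10 um)":   (10e-4,  3e-10),
    "living epidermis (100 um)": (100e-4, 3e-8),
    "dermis (1 mm)":             (0.1,    3e-6),
}
total = 0.0
for name, (d, da) in layers.items():
    tau = diffusion_time_hr(d, da)
    total += tau
    print(f"{name}: tau = {tau:.1f} hr")
print(f"total trans-skin delay = {total:.1f} hr")   # about 2.7-2.8 hr, as estimated above
```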
For a thicker, 20-μm SC, the resistance of the SC dominates; thus, various
vehicles and methods for reversible disruption of the SC protective function
should be used to provide a reasonable time for OCA diffusion. The SC is
functioning not only as a barrier against OCA penetration into skin, but also
as a reservoir for topically applied substances.202,233 Skin appendages, in
particular sebaceous glands, also serve as a reservoir and pathway for clearing
agents.234
Table 3.3 Efficiency of the immersion agent application expressed as a ratio of mean
transmitted intensities before and after application of lotion to the skin stripping; n is the
index of refraction of the used lotion.173
Immersion 1 2 3 4 5 6 7
agent n = 1.449 n = 1.380 n = 1.356 n = 1.348 n = 1.354 n = 1.337 n = 1.396
Figure 3.39 The time-dependent collimated transmittance of the rat skin samples
measured at different wavelengths at administration of immersion solution - 40%-glucose
in a bath. (a) Sample thickness 0.73 mm, with hypodermic fatty layer; (b) sample thickness
0.57 mm, with removed hypodermic fatty layer.205
increased by 41% and 58%, respectively, for 30- and 60-min treatment, while
the diffuse reflectance decreased by 39% and 47%, respectively.
A method of accelerating penetration of the index-matching com-
pounds by enhancing skin permeability through creating a lattice of micro-
zones (islets) of limited thermal damage in the SC was proposed.208 A
combination of a flashlamp system (EsteLux, Palomar Medical Technolo-
gies, Inc.) and a specially designed appliqué with a pattern of absorbing
Figure 3.40 Transmittance spectra recorded using a Franz diffusion chamber: left panel shows the experimental arrangement (1, physiological saline; 2, OCA; 3, tissue sample; 4, lens; 5, water jacket; 6, illuminating and collecting fibers); right panel shows spectral kinetics of the optical transmittance of a pig skin specimen of 1-mm thickness at topical application of 40%-glucose (see inset); 2 pulses of 27 J/cm² (20 ms) with the islet damage mask + 2 pulses of 27 J/cm² (20 ms) were applied to perforate the stratum corneum.208
centers (center size 75–100 μm, lattice pitch 450–500 μm) has been
used to create the lattice of islets of damage (LID). Several index-matching
agents, including glucose and glycerol, have been tested. A high degree of
optical clearance of a full-thickness pig, rat, and human skin in vitro and
in vivo has been demonstrated with 40%-glucose and 88%-glycerol solution
after creating an LID with a few optical pulses (fluence 14-36 J/cm2, 20 ms
pulse duration) (Fig. 3.40).
One of the possible mechanisms of more effective optical clearing of skin
by an osmotic OCA at LID application is connected with more effective
dehydration of skin due to local disruption of the SC. A special experiment
with Yucatan pig skin samples at LID (two 20-millisecond pulses of 30 J/cm2
each with appliqué and two 20-msec pulses of 30 J/cm2 each after appliqué
removal) done for a sample non-treated by OCA and treated with 40%-
glucose showed that the sample area strictly corresponding to the LID area
was dehydrated more effectively than the surrounding area free of LID. The
dehydrated area was clearly seen as a more translucent area with less
thickness. Tissue shrinkage was of 20–25% in the thickness of the sample with
LID kept at room temperature for 2 hrs and up to 40% in the thickness of the
sample treated with 40%-glucose and kept in an oven at a temperature of 51°C
during 2 hrs. Thus, more effective skin dehydration is expected at LID due
to SC partial ablation. Besides, the local heating of the living epidermis
under the SC may enhance skin permeability due to induced phase transition
of epidermal intercellular lipids from the gel phase to liquid crystalline
phase.235,236
700 nm. It was shown that a glycerol injection causes a more prolonged tissue optical clearing effect, but the reflectance decreased somewhat less than for the glucose injection. This can be explained by the higher viscosity of glycerol and by its mostly indirect action via tissue dehydration.
Within one hour after a 40%-glucose intradermal injection applied to a healthy human volunteer, the skin reflection coefficient decreases on average by a factor of 3.8 and then exhibits a slow increase, which indicates that glucose is eliminated from the observation area and the skin reflectance tends to restore itself to the initial level.212 Based on these results and the proposed skin clearing model, the main contribution to clearing was the refractive index matching between the collagen fibrils of the dermis (n = 1.46) and the interstitial space (initially n = 1.36) into which glucose (n = 1.39) diffuses. The diffusion coefficient of glucose in dermis estimated from these experimental data is DG = (2.56 ± 0.13) × 10⁻⁶ cm²/s; this value is 3.6-fold less than that for glucose diffusion in water at 37°C, DG ≈ 9.2 × 10⁻⁶ cm²/s, and reflects the character of dermis permeability for glucose. This and other data on diffusion coefficients
are presented in Table 3.4.
Water loss or increase by means of moisturizing substances seriously
influences skin optical properties.190–194 NIR reflectance spectroscopy is used
as a method to directly determine changes in free, bulk, and protein-bound
water and to assess scattering effects in skin for the evaluation of skin care
products.191 The following spectral bands are associated with water: free
water, 1879 nm; bulk water, 1890 nm; and protein-bound water, 1909 and
1927 nm. The effect of increases in ambient humidity is associated with
increased levels of free water in the skin, while moisturizers containing
hydroxyethyl cellulose, propylene glycol, dipropylene glycol, and glycerol
contribute to a decrease in the light scattering.191 The water observed in such
experiments is primarily in the stratum corneum (SC) since only a small part
of the reflected light comes from the epidermis or below.
Noninvasive measurement of the SC hydration can be performed using
attenuated total reflectance Fourier transform infrared (ATR FTIR)
spectroscopy.192,193 Three absorption bands are relevant for determination
of the water content in the SC: 3300 cm–1 (3030 nm), O–H and N–H vibra-
tions; 1645 cm–1 (6079 nm), amide I band; and 1545 cm–1 (6472 nm), amide II
band. The amide I band intensity is pronounced in the presence of water due
Table 3.4 Diffusion coefficient of glucose and glycerol in rat and human dermis evaluated
on the basis of in vitro and in vivo experimental data.195,199,205,212
OCA Da, cm2/sec (in vitro) Da, cm2/sec (in vivo)
to the strong absorption of water at 1645 cm–1 and the changes in the carbonyl
absorption under the influence of water, while the amide II band intensity is
due to protein alone. The intensity ratio of the amide I/amide II bands, also
called the moisture factor, is assumed to be a relative measure of SC
hydration.192 Various SC moisturizers based on glycerol, propylene glycol,
sodium lactate, natural moisturizing vegetal liposomes, butylene glycol,
polyglycerylmethacrylate, and urea were used for an in vivo SC hydration
study.192 Depending on the composition and concentration, the maximal
SC hydration could be reached in 0.5–2 hrs after application of the substance
to the skin surface. For some substances, a considerable moisturizing effect
was detectable up to 8 hours following application.
Dual wavelength (1300 and 1450 nm) optical coherence reflectance
measurement is a prospective technique for depth profiling of water
absorption within the skin.194
Figure 3.41 Frequency-domain back-reflectance measurements for the small source-
detector separations.171,188 The time-dependent changes of the (a) amplitude and (b) phase
shift of signal for several source-detector separations (1.14, 2.08, and 3.5 mm) for in vivo
study of human arm under 20 min glycerol administration.
Figure 3.42 Frequency-domain back-reflectance measurements for the large source-
detector separation (2.5 cm).171,188 (a) The raw experimental data of the phase and ac
amplitude of the optical signal and (b) calculation of the absorption and the scattering
coefficients.
useful procedure for topical application of the gel. Only about a 6 % reduction
of the scattering coefficient averaged over the large measuring volume was
observed. This means that the scattering coefficient of the upper (superficial)
layers of the skin changed more effectively. Refractive index matching of the
fiber tips and tissue surface is also important.
In vivo frequency domain measurements for immersed tissues show that
the refractive index matching technique provided by the appropriate chemical
agent or cosmetic preparation application can be successfully used in tissue
spectroscopy and imaging, when reduction of the scattering properties is
needed.
Figure 3.43 Ex vivo rat skin OCT imaging in the course of optical clearing using topical
application of 80%-glycerol onto the rat skin: left set of images presents OCT images at
times (a) 0, (b) 3, (c) 10, (d) 15, (e) 20, and (f) 40 min after application of glycerol; all the units
presented are in millimeters, and the vertical axis presents the imaging depth; right set of
graphs presents OCT in-depth reflectance profiles (a) 0, (b) 5, (c) 15, (d) 25 min after
glycerol application.213
Figure 3.44 Confocal reflectance microscopy, Monte Carlo (MC) modeling of skin optical clearing by 40%-glucose solution at intradermal injection: (a), (b), and (c) focusing depth is 600 μm, (a) in 5 min, (b) in 10 min, and (c) in 20 min after injection; left and right lower panels: axial point spread function (PSF) before (left) and 20 min after (right) injection at focusing depths of (1) 300 μm, (2) 600 μm, and (3) 900 μm into the skin.216
Figure 3.45 Transcutaneous Raman spectroscopy of bone: spectra of rat tibia without
and with skin optical clearing by glycerol.221
epidermis for the incident long wavelength light (800 nm) and especially for
the backward SHG short wavelength light (400 nm) improves the SHG
images of dermis collagen structures. At 100%-glycerol application to rodent
skin dermis and tendon samples, as well as to an engineered tissue model
(raft), a high efficiency of tissue optical clearing was achieved in the
wavelength range from 400 to 700 nm, but the SHG signal was significantly degraded in the course of glycerol application and returned to its initial state after tissue rehydration by the application of saline.224 This SHG signal reduction could be considered a linear scattering effect.199,225
Table 3.5 Possible mechanisms responsible for inducing reversible changes in tissue
optical response on laser long-pulsed irradiation175
Mechanism | Description | Optical response
Thermal lensing, n(T) = n(273 K) + ΔT(r, z, t)(dn/dT) | Gradient in the index of refraction caused by nonuniform heating | Decrease in Tt and increase in Rd
Temperature dependence of the reduced scattering coefficient, μ's(T) = μs(T)[1 − g(T)] | Changes in the size and/or shape of scatterers due to temperature rise | Increase in Tt and decrease in Rd (as μ's decreases)
Water transport | Temporary local dehydration during laser heating | Increase in Tt and decrease in Rd
Thermal expansion | Decrease in tissue density and increase in tissue thickness caused by thermal expansion of tissue | Decrease in Tt and increase in Rd
3.9.10 UV radiation
UV irradiation causes erythema, stimulates melanin synthesis, and can induce
edema and tissue proliferation if the radiation dose is sufficiently
large.35,44,54,160,196 All these photobiological effects may be responsible for
variations in the optical properties of skin and need to be taken into
consideration when prescribing phototherapy or optical diagnostics. See data
in Figs. 3.9–3.12, 3.14, 3.17, and 3.18.
3.9.11 Applications
The concept of noninvasive blood glucose sensing using the scattering
properties of blood and tissue as an alternative to spectral absorption and
polarization methods for the monitoring of physiological glucose concentra-
tions in blood of diabetic patients is under intensive discussion.215
Noninvasive determination of glucose was attempted using light scattering
of skin tissue components measured by a spatially-resolved diffuse
reflectance173,227 or NIR frequency-domain reflectance techniques.172 Both
approaches are based on changes in glucose concentration, which affect the
Reflectance and Fluorescence Spectroscopy of Human Skin in vivo 169
refractive index mismatch between the interstitial fluid and tissue fibers, and
hence m0s .
OCT was proposed for noninvasive assessment of glucose concentration
in tissues.214,215 High resolution of the OCT technique may allow high
sensitivity, accuracy, and specificity of glucose concentration monitoring due
to precise measurements of glucose-induced changes in the tissue optical
properties from the layer of interest (dermis).
Reversible tissue optical clearing technology has valuable features that
can be applied to optical diagnostics, where the reduction of scattering
significantly improves the information content. For example, nanobiophotonic approaches based on gold nanoparticle interactions with individual cells may be beneficial for more precise cell structure imaging under immersion conditions.228
Skin impregnation by a biocompatible clearing agent allows one to
significantly improve the image quality and accuracy of spectroscopic
information, to provide an effective monitoring of trans-epidermal membrane
drug delivery, a precision laser photothermolysis of skin lesions and tattoo
removal229,237,238 (see Fig. 3.46), and an effective phototherapy.
Some other applications, such as near-infrared fluorescence image-guided
surgery239 or luminescence detection of Salmonella typhimurium through porcine skin,219 are under intensive study. Indeed, optical clearing technology should be applied with account taken of the many factors that can accompany optical clearing effects. For instance, the authors of Ref. 239 failed to obtain a statistically significant increase in the fluorescence signal from Indocyanine Green (ICG) dye in in vivo studies aimed at image-guided surgery with topical application of OCAs. The reasons for this could be the rather slow OCA diffusion through the SC and other skin layers, which, as estimated above, takes a few hours (2.7–5.4 hrs for normal human skin); the impossibility of optical clearing of the underlying tissues; and the washing out of both the clearing agent and ICG (over a time of around 1 hr) in living tissues. However, this problem could be solved by injection of a clearing agent into the tissue layer lying above the surgical area that is contrasted by ICG fluorescence. This injection should be done carefully to prevent the OCA from penetrating during surgery into the area where ICG is fluorescing; otherwise, its fluorescence will be damped, since the decreased scattering of the surrounding tissue reduces the bulk absorption cross-section of the ICG molecules.222
3.9.12 Conclusion
Control of the optical properties of skin has the potential to provide:
1. a significant improvement of the image quality and accuracy of spectro-
scopic and morphological information using spectrophotometry,197–201
Raman spectroscopy,221 confocal microscopy,216 two-photon-223 and
SHG-225,226 imaging;
2. an effective monitoring of trans-epidermal membrane drug delivery and
glucose sensing;214,215
3. a precision laser photothermolysis of skin lesions and tattoos;229,237,238,240
4. an effective phototherapy.
For topical application of OCAs, fractional ablation of the stratum corneum is one of the promising techniques.208,241,242 OCT and multiphoton imaging provide in vivo measurements of agent diffusivity for different skin layers within the range of their in-depth probing.230–244
Recent achievements of skin optical clearing and the study of OCA
impact on skin and underlying tissues are summarized in Refs. 158, 243–252.
3.10 Conclusion
The optical properties of skin are dynamic, variable, and complex, but they
can be quantitatively determined under certain simplifying assumptions.
Simplified skin models are used for the analysis of spectral reflection and
fluorescence, measured in a certain spectral range, where chromophores
and fluorophores with optical characteristics known a priori (melanin,
hemoglobin, bilirubin, water, collagen, etc.) dominate. These models enable
quantitative assessment of skin chromophores and fluorophores with high
precision and in real time. Combined fluorescence/reflectance techniques
make it possible to obtain additional information on the state of the skin.
Color and polarization imaging are among the most promising methods of skin diagnosis in the clinic. The control of optical properties is a promising approach for increasing the effectiveness of skin disease diagnosis and therapy.
Acknowledgments
We would like to thank all our colleagues and students from Research-
Educational Institute of Optics and Biophotonics at Saratov State University,
especially A.N. Bashkatov, L.E. Dolotov, E.A. Genina, and A.B. Pravdin.
We are grateful to R.K. Wang, G.B. Altshuler, K.V. Larin, S. Tanev, and
I. Yaroslavsky for collaboration. The support from the Government of the
Russian Federation (grant No. 14.Z50.31.0004), Russian Presidential grant
NSh-703.2014.2, and The Tomsk State University Academic D.I. Mendeleev
Fund Program is appreciated.
References
1. G. J. Müller and D. H. Sliney (eds.), “Dosimetry of laser radiation in
medicine and biology,” Advanced Optical Technology IS5, SPIE Press,
Bellingham, WA (1989).
2. S. L. Jacques, “The role of skin optics in diagnostic and therapeutic uses
of lasers,” in Lasers in Dermatology, R. Steiner (ed.), Springer-Verlag,
Berlin, 1–21 (1991).
3. S. Wan, K. F. Jaenicke, and J. A. Parrish, “Comparison of the
erythemogenic effectiveness of ultraviolet-B (290-320 nm) and
ultraviolet-A (320-400 nm) radiation by skin reflectance,” Photochem.
Photobiol. 37, 547–552 (1983).
4. N. Kollias and A. N. Baqer, "On the assessment of melanin in human skin in vivo," Photochem. Photobiol. 43, 49–54 (1986).
5. N. Kollias and A. N. Baqer, “Quantitative assessment of UV-induced
pigmentation and erythema,” Photodermatology 5, 53–60 (1988).
6. J. B. Dawson, D. J. Barker, E. Grassam, and J. A. Cotterill et al.,
“A theoretical and experimental study of light absorption
and scattering by in vivo skin,” Phys. Med. Biol. 25, 695–702
(1980).
7. P. H. Andersen and P. Bjerring, “Non invasive computerized analysis of
skin chromophores in vivo by reflectance spectroscopy,” Photodermatol.
Photoimmunol. Photomed 7(6), 249–257 (1990).
8. H. Kopola and A. Lahti, “Two-channel fiber optic skin erythema
meter,” Opt. Eng. 32(2), 222–226 (1993).
9. J. W. Feather, K. S. Ryatt, and J. B. Dawson, “Reflectance
spectrophotometric quantification of skin color changes induced by
topical corticosteroid preparations,” Br J. Dermatol. 106, 436–443
(1982).
10. K. S. Ryatt, J. W. Feather, J. B. Dawson, and J. A. Cotrell, “The
usefulness of reflection spectrophotometric measurements during psor-
alens and ultraviolet A therapy for psoriasis,” J. Am. Acad. Dermatol.
85, 558–562 (1985).
11. J. C. Seitz and C. G. Whitmore, “Measurement of erythema and tanning
responses in human skin using a tri-stimulus colorimeter,” Dermatolo-
gica 177(2), 70–75 (1988).
12. C. Edwards and R. Heggie, “A small solid state meter for
measuring melanin pigmentation,” in The Environmental Threat to
visual appearance of normal skin and port wine stains,” Las. Med. Sci.
10, 55–65 (1995).
25. H. Takiwaki, S. Shirai, Y. Kanno, Y. Watanabe, and S. Arase,
“Quantification of erythema and pigmentation using a videomicroscope
and a computer,” Brit. J. Dermatol. 131, 85–92 (1994).
26. H. Takiwaki, H. Miyamoto, and K. Ahsan, “A simple method to
estimate CIE-L*a*b* values of the skin from its videomicroscope
image,” Skin Res. Technol. 3, 42–44 (1997).
27. G. B. Altshuler and V. V. Tuchin, Physics behind the light-based
technology: Skin and hair follicle interactions with light in Light Based
Systems for Cosmetic Application, ed. Gurpreet Ahluwalia, William
Andrew, Inc., Norwich, NY, USA, 2008.
28. M. F. Yang, V. V. Tuchin, and A. N. Yaroslavsky, “Principles of light
skin interactions,” Light-Based Therapies for Skin of Color, Ed.
E. Baron, Springer, N. Y., 2009
29. D. J. Leffell, M. L. Stetz, L. M. Milstone, and L. I. Deckelbaum, “In
vivo fluorescence of human skin,” Arch. Dermatol. 124, 1514–1518
(1988).
30. H. J. C. M. Sterenborg, M. Motamedi, R. F. Wagner, J. R. M. Duvic,
S. Thomsen, and S. L. Jacques, “In vivo fluorescence spectroscopy and
imaging of human skin tumors,” Lasers Med. Sci. 9, 344–348 (1994).
31. H. J. C. M. Sterenborg, S. Thomsen, S. L. Jacques, and M. Motamedi,
“In vivo autofluorescence of an unpigmented melanoma in mice.
Correlation of spectroscopic properties to microscopic structure,”
Melanoma Res. 5, 211–216 (1995).
32. H. Zeng, C. MacAulay, D. I. McLean, and B. Palcic, “Spectroscopic
and microscopic characteristics of human skin autofluorescence emis-
sion,” Photochem. Photobiol. 61(6), 639–645 (1995).
33. R. R. Anderson, “In vivo fluorescence of human skin [letter, comment],”
Arch. Dermatol. 125, 999–1000 (1989).
34. Yu. P. Sinichkin, S. R. Utz, I. V. Meglinsky, and H. A. Pilipenko,
“In vivo human skin spectroscopy: II. Fluorescence spectra,” Opt.
Spectrosc. 80, 431–438 (1996).
35. Yu. P. Sinichkin, S. R. Utz, A. H. Mavlutov, and H. A. Pilipenko,
“In vivo fluorescence spectroscopy of the human skin: experiments and
models,” J. Biomed. Opt. 3, 201–211 (1998).
36. I. Miksik, R. Struzinsky, and Z. Deyl, “Change with age of UV
absorbance and fluorescence of collagen and accumulation of
ε-hexosyllysine in collagen from Wistar rats living on different food
restriction regimes,” Mech. Ageing. Development 57, 163–174 (1991).
37. P. R. Odetti, A. Borgoglio, and R. Rolandi, “Age-related increase of
collagen fluorescence in human subcutaneous tissue,” Metabolism 41(6),
655–658 (1992).
skin for laser and non-laser sources of visible and near-IR light,” Quant.
Electron. 41(4), 329–334 (2011).
95. F. F. Jobsis, “Noninvasive, infrared monitoring of cerebral and
myocardial oxygen sufficiency and circulatory parameters,” Science 19,
1264 (1977).
96. H. Liu, D. A. Boas, Yu. Zhang, A. G. Yodh, and B. Chance,
“Determination of optical properties and blood oxygenation in tissue
using continuous NIR light,” Phys. Med. Biol. 40, 1983–1993 (1995).
97. S. J. Matcher and C. E. Cooper, “Absolute quantification of
deoxyhaemoglobin concentration in tissue near infrared spectroscopy,”
Phys. Med. Biol. 39, 1–17 (1994).
98. B. Chance, M. Cope, E. Gratton, N. Ramanujam, and B. Tromberg,
“Phase measurement of light absorption and scatter in human tissue,”
Rev. Sci. Instrum. 69, 3457–3481 (1998).
99. E. M. Sevick, B. Chance, and J. Leigh et al., “Quantitation of time- and
frequency- resolved optical spectra for the determination of tissue
oxygenation,” Anal. Biochem. 195, 330–351 (1991).
100. M. Thorniley, L. Livera, Y. Wickramasinghe, S. A. Spencer, and P. Rolfe, "The non-invasive monitoring of cerebral tissue oxygenation,"
Adv. Exp. Med. Biol. 277, 323 (1990).
101. I. V. Meglinsky and S. J. Matcher, “The determination of absorption
coefficient of skin melanin in visible and NIR spectral region,” Proc.
SPIE 3907, 143–150 (2000).
102. T. J. Farrell, M. S. Patterson, and B. Wilson, "A diffusion theory model of
spatially resolved, steady-state diffuse reflectance for the noninvasive
determination of tissue optical properties in vivo,” Med. Phys. 19, 879–
888 (1992).
103. N. Kollias, A. Baqer, I. Sadig, and R. M. Sayer, “In vitro and in vivo
ultraviolet-induced alterations of oxy- and deoxyhemoglobin,” Photo-
chem. Photobiol. 56, 223–227 (1992).
104. N. Kollias, R. Gillies, and J. A. Muccini et al., “A single parameter,
oxygenated hemoglobin, can be used to quantify experimental irritant-
induced inflammation,” J. Invest. Dermatol. 90, 421–424 (1991).
105. N. Kollias, R. Gillies, J. A. Muccini, S. B. Phillips, and L. A. Drake,
“Oxyhemoglobin is a quantifiable measure of experimentally induced
chronic tretinoin inflammation and accommodation in photodamaged
skin,” Skin Pharmacol. 10, 97–104 (1997).
106. M. Hajizadeh-Saffar, J. W. Feather, and J. B. Dawson, "An investigation of
factors affecting the accuracy of in vivo measurements of skin pigments
by reflectance spectrophotometry,” Phys. Med. Biol. 35, 1301–1315
(1990).
107. N. Kollias, “Physical basis of skin color and its evaluation,” Clinics
Dermatol. 13, 361–367 (1995).
where ν0 is the frequency of the incident radiation and νvib is the frequency of the excited vibrational mode. It follows that at room temperature the intensity of the Stokes-Raman lines in the most informative spectral region (>400 cm–1) is much higher than that of the anti-Stokes-Raman lines.
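The dominance of the Stokes lines follows from the Boltzmann population of the excited vibrational level; the sketch below evaluates the factor exp(−hcνvib/kBT) at room temperature (the weak (ν0 ± νvib)⁴ frequency prefactor of the full ratio is neglected here).

```python
import numpy as np

H = 6.626e-34       # Planck constant, J s
C_CM = 2.998e10     # speed of light, cm/s (so wavenumbers in cm^-1 can be used directly)
K_B = 1.381e-23     # Boltzmann constant, J/K

def anti_stokes_to_stokes(nu_vib_cm1, temperature_K=295.0):
    """Boltzmann factor exp(-h*c*nu_vib/(k_B*T)) that governs the anti-Stokes
    to Stokes intensity ratio; the (nu0 +/- nu_vib)^4 prefactor is omitted."""
    return float(np.exp(-H * C_CM * nu_vib_cm1 / (K_B * temperature_K)))

for nu in (400, 1000, 1650, 3000):
    print(f"{nu:4d} cm^-1: I_antiStokes/I_Stokes ~ {anti_stokes_to_stokes(nu):.2e}")
```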
Different selection rules apply for excitation of molecular vibrational
states through absorption of an IR photon or through Raman scattering of an
incident photon. Some vibrations can be excited by both Raman and IR
processes, others can only be excited by either a Raman scattering process or
by IR absorption. For symmetric molecules, the selection rules are mutually
exclusive for all vibrations. Molecules exhibit IR activity when a change in the
molecular dipole moment occurs during the vibration. Raman activity
occurs when there is a change in polarizability. Therefore, the band intensity
in IR and Raman spectra of the same molecular vibration frequency can be
quite different. As a rule of thumb, symmetric vibrational modes are often
strong in Raman, whereas antisymmetric vibrational modes are strong
in IR. Depending on the polarization state of the incident and analyzed
(observed) light, information on the symmetry of the molecules can also be
obtained.
The subject of this chapter is to analyze IR and Raman vibrational
characteristics of human skin in vivo. In Sec. 4.2, IR spectra of human skin
stratum corneum, of water, and of hydrated skin are analyzed. Section 4.3
deals with the analysis of Raman spectra of human skin constituents
and of water concentration profiles in human skin. Finally, Sec. 4.4 presents
conclusions and outlook for clinical applications of IR and Raman spectro-
scopic diagnostics.
Figure 4.2 Simplified schematic of the ATR-FTIR setup. The infrared light is (totally)
reflected inside the ZnSe crystal (n ¼ 2.4). At the skin-crystal interface, part of the light is
absorbed in the skin, which attenuates the beam inside the crystal. An infrared (MCT)
detector picks up the transmitted beam.
Figure 4.3 Measured ATR-FTIR spectrum of in vivo human skin stratum corneum on the volar aspect of the forearm: (top) the 4000–1750 cm–1 spectral range, (bottom) the 1750–750 cm–1 spectral range. Assignments are indicated in the figure (see also Table 4.1).
Table 4.1 FTIR frequencies (cm–1) and assignments of vibrational bands of human stratum
corneum according to Barry et al.18a and Scherer et al.16,17b and frequencies found in our
study (Fig. 4.3).
Frequency (cm–1), Barry(a)/Scherer(b) | Frequency (cm–1), this study | Assignment
presence of sebum in the SC. In spectra taken on the forehead, cheek, or neck,
two peaks with different amplitudes belonging to the two carbonyl groups in
lipid ester appear at 1726 cm–1 and 1740 cm–1 (spectra not shown). Generally,
the bending modes have smaller frequencies as compared to most stretching
frequencies. For example, the C-H bending of lipids is found at 1451 cm–1,
and C-H stretches in asymmetric and symmetric CH2 and CH3 modes of
lipids at 2851 cm–1 and 2919 cm–1, respectively. The broadband around
3300 cm–1 contains OH stretches and an NH stretch. In the normal SC
spectrum, water bands are present, indicating that the normal SC contains a
certain amount of water. These water bands will be described in more detail in
Sec. 4.2.3.
There are not many examples of ATR-FTIR spectra of skin SC in the
literature to compare with. The in vivo skin stratum corneum spectrum shown
in Fig. 4.3 agrees well with that shown by Potts et al. (Fig. 1 in Ref 10). In
comparison with the in vivo spectrum by Wichrowski et al.,13 it is apparent
that their spectrum contains a large double C¼O peak at 1720 and 1740 cm–1,
a situation we only observed in spectra measured on the cheek and forehead
(spectra not shown). The difference between in vitro and in vivo spectra is
obvious from a comparison with the FT-IR spectrum shown by Barry et al.
(Fig. 2 in Ref. 18). Barry et al. assigned the bands in the skin SC spectrum on
the basis of a comparison with in vitro FT-IR and FT-Raman spectra. In our
in vivo ATR-FTIR spectrum (see Fig. 4.3) we observe some bands in the lower
wave-number range that may correspond to some of the assignments of
Raman bands that are not visible in their IR spectrum. For example, the 852 cm–1 δCCH band and the 883 cm–1 ρCH2 band, the νCC stretches at 1034 cm–1 and 1118 cm–1, and the 1168 cm–1 νCC stretch and δCOH bending. Some very weak bands indicated in the figure by their wave numbers are not assigned in the list of Barry et al., e.g., 917, 975, and 1337 cm–1. Barry et al. assign two peaks at 1650 and 1656 cm–1 to α-helix and disordered amide I νC=O stretches of the carbonyl groups of keratin proteins and ceramide lipids. We do not observe such a doublet. These bands are probably obscured by the strong band with a maximum absorbance at 1640 cm–1. The weak shoulder at 1677 cm–1, which becomes more visible in the derivative of the spectrum in Fig. 4.3 (not shown), has not been assigned. It could point to a partly β-sheet conformation of the amide I keratins.
Figure 4.4 Measured ATR-FTIR spectrum of water at room temperature (thick solid line)
and spectral fit (dashed line) built up from individual vibrational bands (thin solid lines). Most
important bands are the bending mode d(H2O) at 1640 cm–1, the combination band d + nL at
2124 cm–1, the symmetric nsd(OH) stretch at 3225 cm–1, the strong bond nb(OH) stretch at
3420 cm–1, and the weak nw(OH) stretch at 3615 cm–1.
molecular level are not completely understood. The high density of hydrogen
bonds and the quasi-tetragonal symmetry of water make such a description
very complex.21,22,23,28,29
A measured ATR-FTIR spectrum (solid line) and fit (dashed line) of
water at room temperature are shown in Fig. 4.4. Water band assignments
from the literature and fit parameters are given in Table 4.2.
Table 4.2 Fit parameters of the FTIR water spectrum given in Fig. 4.4: band frequency
(in cm–1), band width (in cm–1), amplitude (A in a.u.), and assignments after Scherer.16,17
Frequency (cm–1) | Bandwidth (cm–1) | Amplitude (a.u.) | Assignment
685 cm–1 ± 15 cm–1 was found that was assigned to a "hindered rotation" mode νR. Also, a translational mode νT at 170 cm–1 ± 15 cm–1 could be identified. Williams20 proposed that the 2125 cm–1 band would be a combination of ν2 + νR − νT, which is a transition starting from the first excited translational state. However, the difference νR − νT = 515 cm–1 can only be matched to 485 cm–1 if the frequency accuracy ranges of the νR and νT bands are coadded.
Figure 4.5 Sequential spectra of hydrated human skin stratum corneum measured every
minute during half an hour of occlusion. The thick line represents the water spectrum (scaled)
from Fig. 4.4. Spectral changes can be clearly identified: an increased contribution of the water
bending mode at 1640 cm–1 and a pronounced increase of the OH stretches in the high wave
number band around 3300 cm–1. Also, the water combination band around 2125 cm–1 is
clearly visible in the hydrated spectra.
stretches, and without the asymmetric νOH stretch at 3440 cm–1, which has
been taken together with the 3422 cm–1 νbOH stretch band as one band. If
more than three bands are used in the broad OH region it is always possible to
obtain a better fit.30 Some imperfections in the fit are seen in the wings of the
1640 cm–1 bending band and around 2750 cm–1 where intensity is lacking.
These may be due to the presence of another nearby water band or to an
actual lineshape different from a pure Gaussian.31
A Lorentzian component would give broader wings. Seshadri and Jones31
state that the lineshapes of infrared absorbance bands usually are a mixture of
Lorentzian and Gaussian bandshapes with a dominant Lorentzian contribu-
tion. In trying pure Lorentzians, we found that it was impossible to get
acceptable fits, especially in the high wave number wing of the 1640 cm–1
band where the intensity is rather low. As shown, using pure Gaussians, the
error between our measured spectrum and the fit is small and we, therefore,
have chosen to continue using pure Gaussian bandshapes, instead of
introducing three more parameters per band.
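The band decomposition described above can be reproduced with a standard nonlinear least-squares routine. The following sketch, assuming a measured absorbance spectrum on a wavenumber grid, fits a sum of Gaussian bands; the starting amplitudes and widths are illustrative placeholders, not the fitted values of Table 4.2.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(nu, amp, center, fwhm):
    # Single Gaussian band on the wavenumber axis (fwhm = full width at half maximum).
    return amp * np.exp(-4.0 * np.log(2.0) * ((nu - center) / fwhm) ** 2)

def band_model(nu, *params):
    # Sum of Gaussian bands; params are flattened triples (amp, center, fwhm).
    model = np.zeros_like(nu)
    for i in range(0, len(params), 3):
        model += gaussian(nu, *params[i:i + 3])
    return model

# Illustrative starting values for the main water bands discussed in the text
# (amplitude in a.u., center in cm-1, width in cm-1).
p0 = [0.5, 1640.0, 90.0,    # bending mode
      0.1, 2125.0, 200.0,   # combination band
      0.6, 3225.0, 260.0,   # symmetric OH stretch
      0.9, 3420.0, 260.0,   # strong-bond OH stretch
      0.2, 3615.0, 120.0]   # weak-bond OH stretch

# In practice, nu and absorbance come from the measured ATR-FTIR spectrum;
# a synthetic spectrum is generated here only so that the example runs.
nu = np.linspace(650.0, 4000.0, 1700)
absorbance = band_model(nu, *p0) + 0.005 * np.random.randn(nu.size)

popt, _ = curve_fit(band_model, nu, absorbance, p0=p0)
residual = absorbance - band_model(nu, *popt)
print("fitted amplitudes:", np.round(popt[0::3], 3))
print("rms residual:", np.sqrt(np.mean(residual ** 2)))
```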
The two broad bands at 1150 and 1520 cm–1 are necessary to fit the broad
plateau between the libration band and the bending band. These bands are not
assigned by Scherer. Nonetheless, there are some possibilities of combination
bands that would add to this region.32 For example, the 1150 cm–1 band
matches exactly the δ − νL combination band. The ν3 − ν2 − νL, ν2 + ν3 − ν1,
and ν1 − ν2 − νL combination bands could possibly contribute to the other
band.
We did not find any information on the intensities of these combinations
in the literature, and we have not studied these further.
amplitudes in the three water band regions of interest. Good fits are
obtained by interactively adjusting the parameters while monitoring the
least-squares values of the fits.
An example of a fit to the fully hydrated SC spectrum of Fig. 4.5 is
presented in Fig. 4.6. Parameters of the fits are listed in Table 4.3. Following
Scherer, we have fitted the broad OH band of water using the symmetric
stretch νsd(OH) at 3220 cm–1, the strong bond νb(OH) stretch at 3422 cm–1,
and the weak bond νw(OH) stretch at 3615 cm–1. The νasd(OH) at 3440 cm–1
has been merged with the strong bond νb(OH) stretch at 3420 cm–1
because these bands are close together. The amplitudes of some of the water
bands (fit results) versus time are plotted in Fig. 4.7. The amplitudes of the
bending mode at 1640 cm–1 and the combination band at 2125 cm–1 are seen
to “saturate,” while the OH stretch amplitude continuously increases with
time. The shapes of the combination band curve and that of the bending
mode and the first part of the OH amplitude-time curve turn out to be
very similar. The accuracy has been estimated from the band amplitudes.
In Fig. 4.7 for example, the relative error in the 1640 cm–1 band amplitude is
less than 2%, which can be inferred from the saturated part after 20 minutes.
The relative error increases for lower signals, e.g., 5% for the 2125 cm–1 band.
The sizes of the symbols in the figure are an indication of the errors.
The spectra in Fig. 4.5 clearly show the increase in infrared signal intensity
with time in the amide regions and in the OH region. The thick line
Table 4.3 Fit parameters of the fits on hydrated stratum corneum spectra of Fig. 4.5: band
frequency (in cm–1) and amplitude (in a.u.) versus time (min).
freq. (cm–1)    Amplitudes (a.u.) at time t = 1, 5, 10, 15, 20, 25, 30, 34 min
Figure 4.7 Water band amplitudes from the fitting results versus time. Note that the
amplitudes of the bending mode at 1640 cm–1 and the combination band at 2125 cm–1
saturate while the OH stretch amplitude continuously increases with time.
water in the outermost layers of the skin SC, which, in fact, dilutes the skin SC
spectrum with that of water.
One might wonder whether a changed penetration depth with water
content would also add to the spectral changes observed. Hydration of the
outermost skin layers affects the refractive index of these layers since the
refractive index of dry SC is about 1.55,33 and that of water is about 1.33.34
Therefore, hydration continuously decreases the refractive index of the
skin SC.
4.2.5.1 Penetration depth of the IR beam
The penetration depth dp of the IR beam in the skin is given by35
dp(ν) = 1 / [2πν nc √(sin²θ − (ns/nc)²)],   (4.3)
with θ = 45 deg the angle of incidence, ns the refractive index of the sample
(skin), nc the refractive index of the crystal (nc = 2.42 for ZnSe), ν = 1/λ the
wave number, and λ the wavelength in vacuum. At the penetration depth, the
evanescent field amplitude has dropped to 1/e of its original value. According
to Eq. (4.3), the penetration depth is dp = 1.317 μm at 650 cm–1 and dp =
0.214 μm at 4000 cm–1 for dry SC (using ns = 1.55), and dp = 1.21 μm at
650 cm–1 and dp = 0.197 μm at 4000 cm–1 for water (ns = 1.33), see Fig. 4.8.
Figure 4.8 Penetration depth dp of the IR beam as a function of the wave number
according to Eq. (4.3) for a ZnSe crystal (nc = 2.42). The solid line represents a sample with
constant refractive index ns = 1.33 (water), and the dashed line denotes a sample with
refractive index ns = 1.55 (dry stratum corneum).
For the Ge crystal (nc = 4.0) the penetration depths are about 50% lower than
those for the ZnSe crystal. If fully hydrated skin SC had a refractive index
close to 1.33, the penetration depth would only be 8% less than the
penetration depth in dry SC.
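A minimal numerical sketch of Eq. (4.3) is given below; it only illustrates the functional dependence of dp on wavenumber and refractive indices, with θ = 45 deg and nc = 2.42 (ZnSe) as in the text, and makes no claim to reproduce the exact quoted values.

```python
import numpy as np

def penetration_depth_um(nu_cm, n_sample, n_crystal=2.42, theta_deg=45.0):
    # Evanescent-field penetration depth according to Eq. (4.3);
    # nu_cm is the wavenumber in cm-1, the result is converted to micrometers.
    theta = np.radians(theta_deg)
    root = np.sqrt(np.sin(theta) ** 2 - (n_sample / n_crystal) ** 2)
    dp_cm = 1.0 / (2.0 * np.pi * nu_cm * n_crystal * root)
    return dp_cm * 1.0e4

nu = np.linspace(650.0, 4000.0, 200)
dp_dry_sc = penetration_depth_um(nu, n_sample=1.55)   # dry stratum corneum
dp_water = penetration_depth_um(nu, n_sample=1.33)    # water
```

For a Ge crystal, n_crystal = 4.0 would be passed instead.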
We tried to find evidence for the presence of a thin layer of water between
the skin and crystal, which we expected to be formed during occlusion. A
water film at the surface would show up in the spectrum immediately,
especially in the OH stretch region and by diminished lipid band
contributions. However, since the CH2 and CH3 bending bands of lipids do
not disappear relative to the OH band, we estimate that such a thin water film
would probably be less than 0.05–0.1 times the penetration depth of about
0.2 μm around 3300 cm–1, i.e., less than 20 nm thick.
Figure 4.9 Moisturizing factor MF (•), the ratio of the absorbances at 1640 cm–1 and
1545 cm–1 according to Gloor;15 the infrared absorption ratio (□) from the 2125 cm–1
combination band according to Potts,4 shifted along the y axis to match the highest MF
value; and the amplitude ratio (○) of the amide I (1650 cm–1) to amide II (1545 cm–1)
amplitudes obtained from the fits, scaled by a factor of 1.8.
match the MF curve. From the figure it can be inferred that the shapes of the
MF and IR curves with time are similar. The curve of the amide amplitude
ratio (scaled by 1.8 to match the MF curve) has a more pronounced slope in
the first part of the curve. This part of the curve is related to the maximization
of the contact area between the skin and crystal during the occlusion. The second
part shows similar slopes for the three ratios and represents the water content
increase after maximal contact area between the skin and crystal has been
reached.
The fit analyses of the hydration spectra have shown the separate
contributions from water bands in the (hydrated) skin SC spectra. For
determining the water content, the fit method has the advantage that the
water bands (bending mode at 1640 cm–1, combination band at 2125 cm–1,
and the OH stretches) are varied individually and possible variations in, for
example, strong and weak bond stretches can be obtained. However, in the
time span of measurement in this study, no significant variations in strong or
weak bond stretch amplitudes have been observed. Further details on separate
contributions probably need longer (i.e., hours) hydration and corresponding
monitoring.
Compared to the other three methods used, the fit analysis gives more
insight into the changes due to contributions from the individual water bands and
changes of the amide I and amide II bands. In the Potts method using the
[Setup components shown in the schematic: laser with filters, piezo translator with LVDT sensor and servo controller, spectrometer, CCD camera, and PC (RS-232 interface).]
Figure 4.10 Experimental confocal Raman microspectrometer used for in vivo measure-
ments of human skin.
730 nm. In this way the Raman signal is always detected in the spectral range
between approximately 880 and 1010 nm.
Conventionally the axial resolution of a confocal microscope is derived
from the signal response of a point source that is moved through the focal
plane in the axial direction. The axial resolution is then defined as the full-
width-at-half-maximum (FWHM) of the response curve. However, for
confocal Raman spectroscopic experiments, a more practical measure of the
axial resolution is the response of the system that would be obtained if it were
scanned through an infinitesimally thin plane with Raman scatterers, rather
than a point source. We, therefore, used the following method to determine
the depth resolution. The Raman signal of oil was recorded, as the laser focus
was scanned across a CaF2-oil interface. The step response function thus
obtained was differentiated to obtain the impulse response of the system (i.e.,
the response of the system that would be obtained if it were scanned through
an infinitesimally thin plane with Raman scatterers). The FWHM of this
impulse response is a measure of the depth resolution. For excitation
wavelengths of 730 and 850 nm, the depth resolution was approximately
5 μm. In other words, an infinitely thin tissue section that is located 2.5 μm
above or below the focal plane contributes half the Raman signal of the
infinitely thin section that is in focus.
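The depth-resolution procedure described above (differentiating the measured step response and taking the FWHM of the resulting impulse response) can be summarized by a short sketch; the error-function-like step used below is synthetic and only stands in for the measured oil signal across the CaF2-oil interface.

```python
import numpy as np

def depth_resolution_um(z_um, oil_signal):
    # Differentiate the step response to obtain the impulse response of the system,
    # then return its full width at half maximum as the depth resolution.
    impulse = np.abs(np.gradient(oil_signal, z_um))
    above = np.where(impulse >= impulse.max() / 2.0)[0]
    return z_um[above[-1]] - z_um[above[0]]

# Synthetic step response of roughly 5-um width (placeholder for measured data).
z = np.linspace(-15.0, 15.0, 601)
step = 0.5 * (1.0 + np.tanh(z / 2.5))
print("estimated depth resolution (um):", round(depth_resolution_um(z, step), 2))
```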
In vivo confocal Raman spectra of the skin show a considerable decrease
in the absolute signal intensity if the distance from the laser focus to the skin
surface is increased. This is mainly due to diffuse light scattering, which is a
much stronger effect in the skin than light absorption. Confocal detection is,
therefore, particularly useful to study the outer skin layers, i.e., the stratum
corneum and the viable epidermis. The in vivo Raman signal of the dermis is
strongly reduced due to scattering in the epidermis. Confocal detection of the
dermis Raman signal, therefore, requires considerably longer signal collection
times than for the epidermis.
This is illustrated in Fig. 4.11, showing in vivo confocal Raman spectra of
the arm, measured in the stratum corneum and in the dermis. The signal
collection time is the same for both spectra, but the dermis signal is
significantly weaker than the signal of the stratum corneum. However, since
the dermis is much thicker than the epidermis (1–4 mm thick) it can easily be
studied using a nonconfocal detection scheme with a detection volume that is
large compared to the thickness of the epidermis. In this case the dermis will be
the dominant source of the Raman signal, which is illustrated by Fig. 4.12. The
figure shows the confocal spectrum from Fig. 4.11 and a nonconfocal spectrum
measured with a fiber-optic probe. The spectra are scaled to equal intensity. It
is clear that the spectrum obtained with the fiber-optic probe is almost entirely
determined by the Raman signal of the dermis. For equal signal collection
time, the S/N of the dermis spectrum obtained with the probe is considerably
higher than that of the spectrum that was measured confocally.
of the skin that were obtained in vivo. The first part of this paragraph presents
an overview of the molecular compounds that are relevant for the Raman
signature of the stratum corneum. The second part describes the in vitro
Raman spectra that were used for the analysis of the in vivo experiments.
The major constituents of the stratum corneum are water and keratin. The
water content (grams of water per 100 grams of wet tissue) ranges from
approximately 15% at the outer stratum corneum cells to 70% in the stratum
granulosum.62 The protein keratin represents 75–80% of the stratum corneum
dry weight (see also Sec. 4.2.2.) and accounts for most of the in vivo Raman
signal of SC. Here we use the term keratin to describe the entire class of
keratins, although keratins in fact form a group of similar and closely related
structural proteins. In the literature a multiplicity of terms, such as α- and
β-keratin, hard/soft keratin, and low-/high-sulfur keratin, is used to refer to
different classes of keratins.
Free amino acids and derivatives of amino acids form another major
group of stratum corneum constituents. This group is commonly referred to as
natural moisturizing factor (NMF). The concentration of NMF can reach as
much as 10% of the stratum corneum dry weight. It has been demonstrated
that a number of NMF constituents can have significant contributions to in
vivo Raman spectra of stratum corneum.59 These are pyrrolidone-5-carboxylic
acid (PCA), arginine (Arg), ornithine (Orn), citrulline (Cit), serine (Ser),
proline (Pro), glycine (Gly), histidine (His), alanine (Ala) and urocanic acid
(UCA). Also lactate and urea, which are excreted in sweat, can strongly
contribute to the Raman signal of stratum corneum.
Lipids comprise approximately 5–15% of the dry weight of the stratum
corneum. About 50% of the stratum corneum lipids are ceramides.
Ceramides thus represent the most abundant class of lipids in the stratum
corneum. Lipids clearly contribute to the Raman signal of the stratum
corneum; however, on certain body sites such as at the palm of the hand, their
contribution may be small.
Contributions of other stratum corneum compounds, which represent
only minor fractions of the total stratum corneum weight, were not found to
significantly contribute to the Raman signal.
The following section describes the in vitro Raman spectra that were used
for the determination of molecular concentration gradients in the stratum
corneum by in vivo Raman spectroscopy.
Keratin is an insoluble protein and cannot be extracted without destroying
its secondary structure. We, therefore, used the Raman spectrum of stratum
corneum, devoid of lipids and water-soluble compounds, as a model for the
keratin fraction of the stratum corneum. A stratum corneum sample was
obtained from the peeled stratum corneum of a male volunteer who suffered a
mild sunburn. Lipids and water-soluble compounds were largely removed by
soaking the stratum corneum sample in methanol/ethanol (2:1 volume ratio)
and water, so that the remaining sample consisted solely of keratin. The
Raman spectrum of the purified stratum corneum is shown in Fig. 4.13.
A number of prominent spectral bands are indicated in the figure. The amide I
vibration at 1650 cm–1, the amide III vibration at approximately 1275 cm–1,
and the CH2 scissoring vibration at 1450 cm–1 all give rise to strong
bands in the Raman spectrum. The positions of the amide I and III bands
indicate that keratin in human stratum corneum adopts a predominantly
α-helical structure.38,42 A comparison of the Raman spectrum of stratum
corneum to the IR spectrum (see Fig. 4.3) nicely illustrates the complementary
aspect of both techniques. Whereas, for example, the amide III band is weak
in the IR spectrum, it is a prominent feature in the Raman spectrum. On the
other hand, the amide II band at approximately 1550 cm–1 is very strong in the IR
spectrum and very weak in the Raman spectrum. Other clear features in the
Raman spectrum are the S-S stretch vibration around 525 cm–1 and the
aromatic ring vibrations of the amino acids phenylalanine (622, 1003, and
1032 cm–1) and tyrosine (644 cm–1).
For the experiments described in this chapter, the Raman spectrum of
ceramide 3 was used as a model for the total stratum corneum lipid content.
This is justified by the fact that ceramides form the most abundant class of
lipids in the stratum corneum. The spectrum of ceramide 3 is shown in
Fig. 4.14. Also shown in Fig. 4.14 are the Raman spectra of the dominant
NMF and sweat constituents. This selection consists of the spectra of lactate,
Figure 4.14 In vitro Raman spectra of skin constituents. The spectrum of ceramide was
measured in the solid state. All other spectra are from solutions in water at pH 5.5.
dominant factor, showing a broad band between 3100 and 3700 cm–1 due to a
triplet of ν(OH) vibrations. Figure 4.15 shows the Raman spectra of pure
water and of dried SC in the spectral range between 2500 and 3800 cm–1. The
prominent features of the stratum corneum spectrum are indicated in the
figure.
The absolute water content in mass-% (gram water per 100 grams of
tissue) was derived as follows. The intensity ratio between the integrated
Raman signals of water and protein (IOH/ICH) is proportional to the mass
ratio between water and protein (mw/mp):
IOH / ICH = (mw / mp) · R.   (4.4)
With this proportionality constant R, the absolute water content follows
directly from the Raman ratio (IOH/ICH) using the following equation:
water content (%) = [mw / (mw + mp)] · 100% = [(IOH/ICH) / ((IOH/ICH) + R)] · 100%.   (4.5)
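Equations (4.4) and (4.5) translate directly into a small helper function; the proportionality constant R has to be determined from a calibration measurement, and the values used below are purely illustrative.

```python
def water_content_percent(i_oh, i_ch, r_const):
    # Absolute water content (grams of water per 100 grams of wet tissue)
    # from the integrated OH and CH band intensities, Eqs. (4.4) and (4.5).
    ratio = i_oh / i_ch
    return ratio / (ratio + r_const) * 100.0

# Illustrative numbers only: integrated band intensities and an assumed R.
print(water_content_percent(i_oh=1.5, i_ch=1.0, r_const=2.0))  # ~42.9%
```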
Figure 4.16 In vivo water concentration profiles for normal (solid) and hydrated skin (open)
at the volar aspect of the forearm. The vertical line indicates the estimated boundary
between the stratum corneum and the viable epidermis for normal skin. Experimental
conditions: exposure time: 5 s per data point, 10 s at depths greater than 20 μm. Scan step
size: 2 μm; laser power: 100 mW; excitation wavelength: 720 nm.
signal integration times in this experiment were relatively long (5–10 s). Since
the Raman signals are integrated over large spectral intervals, the quality of the
spectra, in terms of S/N, was much better than was needed for the
determination of water concentration profiles. Therefore, the signal collection
time could be reduced to less than 1 s per spectrum without affecting the results.
These results illustrate that confocal Raman microspectroscopy offers a
refined and subtle method for spatially resolved investigation of water in the
stratum corneum and the effects of hydration on the skin in vivo.
4.3.4.2 NMF
In the CH-stretch region, the Raman spectrum of the stratum corneum is largely
determined by only two compounds: keratin and water. This enables the use of
the simple method to determine the water content. In the fingerprint region
(approximately 400–1850 cm–1) the in vivo Raman spectrum of the stratum
corneum is composed of contributions from a number of compounds with
multiple and partially overlapping bands (see Fig. 4.14). We, therefore, used the
Figure 4.17 In vivo concentration profiles for the stratum corneum of the palm showing the
three dominant NMF constituents and lactate.
[Figures: absorption and resonance Raman spectra of cutaneous carotenoids; characteristic Raman bands at 1523 cm–1 (C=C), 1156 cm–1 (C–C), and 1005 cm–1 (C–CH3).]
Figure 4.20 Scheme of the setup for in vivo resonance Raman spectroscopy of
carotenoids. (1): Ar+ multiline laser, (2) and (5): optical fibers, (3): optical imaging system,
(4): measuring object (skin), (6): spectrograph, (7): CCD camera, (8): personal computer.
by a personal computer (8). The power density on the skin surface was
adjusted between 30 and 45 mW/cm2, depending on the excitation wavelength,
and remained within the safety standard. The experimental arrangement for the
Raman measurements of cutaneous carotenoids is presented in Figure 4.20.
To eliminate the negative influence of pigmentation and heterogeneities
of the skin on the optical measurements, the excitation beam was expanded
up to 6.5 mm in diameter.78 The presented value of the excitation beam
size was evaluated experimentally and provided the optimal stability of
the Raman signal obtained from the skin (the scatter of the measured data is
around 10%).
The intensity of the Raman peak was determined by fitting a Gaussian line
shape after subtraction of the fluorescence background.
4.4.9 Conclusions
Resonance Raman spectroscopy is a well-suited, noninvasive, and quick
method, without analogues, for the highly sensitive in vivo determination of
carotenoid antioxidant substances in human skin. It provides the possibility
to perform qualitative and quantitative measurements on any site of the
human skin. The performed measurements show that cutaneous carotenoids
reflect the lifestyle and stress conditions of volunteers. “Negative” stress
factors, such as irradiation, illness, alcohol abuse, smoking, and others,
decrease the antioxidant status of the skin, which can then be restored only by
antioxidant-rich supplementation, for example, a diet rich in fruit and
vegetables. The amount of destroyed cutaneous carotenoids correlates well
with the amount of free radicals generated in the skin, as confirmed by
electron paramagnetic resonance measurements, showing that carotenoids
can serve as marker substances for the whole antioxidant status of the human
epidermis.
Acknowledgments
The authors are indebted to the late Dr. H.E. Boddé for very stimulating
discussions on the FTIR spectra.
References
1. G. Herzberg, Molecular Spectra and Molecular Structure, 2nd Ed., van
Nostrand Reinhold Company, New York (1950).
2. N. B. Colthup, L. H. Daly, and S. E. Wiberley, Introduction to Infrared
and Raman Spectroscopy, 3rd Ed., Academic Press, New York (1990).
3. D. Lin-Vien, The Handbook of Infrared and Raman Characteristic
Frequencies of Organic Molecules, Academic Press, San Diego (1991).
4. R. O. Potts, “Stratum corneum hydration: experimental techniques and
interpretations of results,” J. Soc. Cosmet. Chem. 37, 9–33 (1986).
5. J. Serup and G. B. E. Jemec, Eds., Handbook of Non-invasive Methods
and the Skin, CRC Press, Boca Raton (1995).
5.1 Introduction
5.1.1 Fundamentals
Non-destructive optical techniques are increasingly used in biomedical
diagnostics. Fundamental mechanisms of interaction between optical radiation
and biological specimens are absorption, reflection, elastic or inelastic light
scattering, and luminescence. The latter is subdivided into (1) fluorescence,
corresponding to allowed optical transitions with rather high quantum yields
and short (nanosecond) lifetimes and (2) phosphorescence corresponding to
optical transitions with low probability (“forbidden transitions”) and thus
with low quantum yields and long lifetimes in the microsecond to millisecond
range.
Absorption of light is connected with an electronic transition from the
ground state S0 to an excited state of a molecule. Light passing through a layer
of thickness d is thereby attenuated according to the equation

I(λ) = I0(λ) · 10^[−ε(λ) c d],   (5.1)

where I(λ) is the transmitted light intensity (quantum flux), I0 is the incident
intensity, ε(λ) is the molar extinction coefficient, and c is the concentration of
absorbing molecules. In scattering samples, the absorption coefficient μa and
the scattering coefficient μs [omitted in Eq. (5.1)] sum up, thus causing a
further reduction of transmitted light, as described in detail elsewhere1 (and
references therein).
Fluorescence arises upon light absorption and is related to an electronic
transition from the first excited singlet state S1 to the ground state of a
molecule. Its intensity (quantum flux) corresponds to

IF(λ) = I0(λ) [1 − 10^(−ε(λ) c d)] η Ω/4π,   (5.2)

with η being the fluorescence quantum yield and Ω the solid angle of detection
of isotropic fluorescence radiation. In the case of thin samples, e.g., cell
monolayers or biopsies of a few micrometers in diameter, Eq. (5.2) can be
approximated by

IF(λ) ≈ I0(λ) ln 10 ε(λ) c d η Ω/4π.   (5.3)
Figure 5.1 Potential diagram of electronic states (S0, S1) with their vibrational levels (νi).
Electronic wave functions and optical transitions are indicated (excitation: S0ν0 → S1νn;
fluorescence: S1ν0 → S0νn).
where τ is the lifetime of the excited state S1. The ratio kF/k corresponds to the
fluorescence quantum yield η. Although only radiative transitions can be
monitored by optical spectroscopy, changes of kIC or kET are
often deduced from fluorescence lifetime measurements. For example, an
Figure 5.2 Jablonski diagram of molecular energy levels and transition rates. Straight
lines: radiative transitions; wavy lines: non-radiative transitions.
P = (I∥ − I⊥) / (I∥ + I⊥)   (5.5)

or by fluorescence anisotropy

r = (I∥ − I⊥) / (I∥ + 2I⊥),   (5.6)

with I∥ and I⊥ being the fluorescence intensities of light polarized parallel or
perpendicular to the exciting electric field vector, respectively. Usually P and r
τr = ηV / (kBT),   (5.7)
where kB is the Boltzmann constant and T the absolute temperature. Time
constants of rotational diffusion of about 13 ns were correlated with a
molecular weight of proteins around 50,000 daltons,9 whereas a time constant
around 300 ps was attributed to an aggregated species of a photosensitizing
porphyrin (protoporphyrin) with a 1.6-nm diameter.10 A pronounced decrease
of fluorescence anisotropy with increasing temperature was recently described
for the membrane marker laurdan, thus indicating an increase of membrane
fluidity.11
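Equations (5.5)–(5.7) lend themselves to a compact numerical sketch; the viscosity, molecular volume, and intensities below are illustrative values, not data from the cited experiments.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant in J/K

def polarization(i_par, i_perp):
    # Degree of fluorescence polarization, Eq. (5.5).
    return (i_par - i_perp) / (i_par + i_perp)

def anisotropy(i_par, i_perp):
    # Fluorescence anisotropy, Eq. (5.6).
    return (i_par - i_perp) / (i_par + 2.0 * i_perp)

def rotational_time(viscosity_pa_s, volume_m3, temperature_k):
    # Rotational correlation time tau_r = eta * V / (k_B * T), Eq. (5.7).
    return viscosity_pa_s * volume_m3 / (KB * temperature_k)

# Example: parallel/perpendicular intensities and a ~3-nm-radius molecule in water.
volume = 4.0 / 3.0 * np.pi * (3.0e-9) ** 3
print(polarization(1.2, 0.8), anisotropy(1.2, 0.8))
print("tau_r (ns):", rotational_time(1.0e-3, volume, 293.0) * 1.0e9)
```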
array or a CCD camera. The main advantage of OMAs is that optical spectra
can be recorded very rapidly and repeatedly with sequences in the millisecond
range as compared with a few seconds for scanning monochromators. This
permits on-line detection of fast molecular or cellular kinetics, if the
fluorescence intensity is strong enough. Often a compromise between spectral
resolution, sensitivity, and recording time has to be found in order to detect
comparably weak signals of fluorophores in cells or tissues. As a first step
in that direction, a “Rapid Scan Spectrometer” (RSS) has recently been
developed.61 It is based on the scanning principle, using a mirror that oscillates
at 100 Hz. Therefore, fluorescence spectra can be recorded every 10 ms while
retaining the sensitivity of a photomultiplier.
Usually, fluorescence spectra are “uncorrected,” i.e., they contain
wavelength dependent sensitivities of monochromator and detector (emission
spectroscopy) or excitation source and monochromator (excitation spectros-
copy). Correction may not be necessary if only relative changes of individual
emission bands are measured. However, to obtain correct spectral intensities
IF(λ), a reference source with a well-known spectrum S(λ) (e.g., a black body)
has to be used. Both signals, IF(λ) and S(λ), are “folded” by a response
function G(λ). When measuring the experimental functions IF′(λ) = IF(λ) G(λ)
and S′(λ) = S(λ) G(λ), the “true” fluorescence spectrum can be calculated
using the algorithm

IF(λ) = IF′(λ) S(λ) / S′(λ).   (5.9)
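As a sketch of the correction algorithm of Eq. (5.9), assuming the measured fluorescence spectrum, the measured reference spectrum, and the known (tabulated) reference spectrum are available on a common wavelength grid:

```python
import numpy as np

def correct_spectrum(i_f_measured, s_measured, s_known):
    # "True" fluorescence spectrum according to Eq. (5.9):
    # I_F(lambda) = I'_F(lambda) * S(lambda) / S'(lambda),
    # where S is the known reference spectrum and S' its measured counterpart,
    # both folded by the same response function G(lambda).
    return i_f_measured * s_known / s_measured

# Placeholder spectra on a common wavelength grid (nm).
wavelength = np.linspace(400.0, 700.0, 301)
s_known = np.ones_like(wavelength)                       # tabulated reference spectrum
s_measured = 0.5 + 0.5 * (wavelength - 400.0) / 300.0    # reference as seen by the setup
i_f_measured = np.exp(-((wavelength - 550.0) / 40.0) ** 2) * s_measured
i_f_true = correct_spectrum(i_f_measured, s_measured, s_known)
```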
Fluorescence spectrometers for in vivo diagnostics are commonly based on
fiber-optic systems.14,62–64 The excitation light of a lamp, LED, or a laser is
guided to the tissue (e.g., some specific organ) via a glass fiber using appropriate
optical filters (instead of an excitation monochromator). Fluorescence spectra
are usually measured either via the same fiber or via a second fiber or fiber
bundle in close proximity to the excitation fiber. Scanning monochromators or
OMA systems as reported above are used for emission spectroscopy.
[Setup components shown in the schematic: XBO lamp and halogen lamp, excitation monochromator 1 (240–850 nm), emission monochromator 2 (380–850 nm), filter, variable diaphragm, mirrors M1 and M2, sample, and photometer.]
Figure 5.4 Microspectroscopic set-up for fluorescence excitation and emission spectroscopy with additional transillumination (M1 = mirror; M2 = dichroic mirror).
where f represents the focal length and D is the aperture of the objective lens
of the microscope. Assuming f = 1.6 mm, D = 6 mm, and the laser wavelength
λex = 488 nm, a radius r = 160 nm of the focused laser beam is calculated
according to Eq. (5.10). Usually this radius is slightly smaller than the lateral
(diffraction-limited) resolution in conventional fluorescence microscopy,
since the excitation wavelength λex is generally shorter than the emission
wavelength λem (dmin is the minimum object size that can be resolved, whereas
AN corresponds to the numerical aperture of the objective lens). The difference
between r and dmin also explains why the lateral resolution in laser scanning
microscopy is slightly enhanced compared to conventional microscopy.
Considering a depth of focus around or even below 1 μm, large parts of a
sample are out of focus in conventional microscopy. These out-of-focus parts
can be eliminated using specific methods of wide-field 3D microscopy or
confocal laser scanning microscopy as reported in Sections 5.7 and 5.8.
be operated in a “reverse mode,” i.e., each single photon gives a start signal
that is followed by a (delayed) stop signal of the excitation pulse.
Fluorescence intensity (number of photon counts per time unit) excited by
short light pulses usually shows an exponential behavior according to

I(t) = I0 exp(−kt) = I0 exp(−t/τ),

where k is the total rate of deactivation of the excited electronic state and τ the
fluorescence lifetime. If several molecular species contribute to the fluores-
cence decay, their intensities sum up according to

I(t) = Σi I0i exp(−t/τi).
Figure 5.5 Fluorescence decay curve of BKEz-7 endothelial cells incubated with acridine
orange (5 μM, 30 min) and response curve of the apparatus measured by the single photon
counting method (λex = 390 nm, repetition rate = 1 MHz, fluorescence detected at λ ≥
590 nm).
iterative process until the best fit is obtained. The quality of a fit χ² is
calculated from

χ² = Σi vi [I(t) − IC(t)]²,   (5.17)

where I(t) is the measured and IC(t) the calculated fluorescence intensity. vi =
1/F(t) is a statistical weighting factor to account for the expected error in each
value of I(t). R(t − t′) corresponds to the measured response curve (scattered
light of the picosecond laser diode). A minimum of χ² is obtained for the best
fit. χ² values larger than 2 indicate a poor fit, and χ² values less than 1.2 indicate a
good fit.9 A detailed overview on advanced time-correlated single photon
counting techniques, including those with high spatial resolution, has recently
been given.77
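A minimal sketch of such a weighted least-squares (reconvolution) fit is given below; it assumes a mono-exponential model, uses 1/counts as the statistical weight, and generates synthetic data in place of a measured decay and response curve.

```python
import numpy as np
from scipy.optimize import minimize

def model_decay(t, response, amplitude, tau):
    # Mono-exponential decay convolved with the measured response curve R(t - t').
    decay = amplitude * np.exp(-t / tau)
    return np.convolve(response, decay)[: t.size] * (t[1] - t[0])

def chi_square(params, t, counts, response):
    # Weighted sum of squared residuals in the spirit of Eq. (5.17).
    amplitude, tau = params
    calc = model_decay(t, response, amplitude, tau)
    weights = 1.0 / np.clip(counts, 1.0, None)
    return np.sum(weights * (counts - calc) ** 2)

# Synthetic example: narrow response curve, 2-ns lifetime, Poisson counting noise.
t = np.linspace(0.0, 25.0, 500)                 # time axis in ns
response = np.exp(-((t - 1.0) / 0.2) ** 2)      # instrument response (placeholder)
counts = np.random.poisson(model_decay(t, response, 400.0, 2.0))
fit = minimize(chi_square, x0=[300.0, 1.0], args=(t, counts, response),
               method="Nelder-Mead")
print("fitted amplitude and lifetime (ns):", np.round(fit.x, 2))
```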
Figure 5.6 Principle of a phase-resolved fluorescence measurement (A, B = continuous
and variable intensities of emission; a, b = continuous and variable intensities of
excitation).
tan φ = ω τp,   (5.18)

where ω corresponds to 2π times the modulation frequency (in Hz).9 Only for
monoexponential decays is τp = τm = τ, i.e., the lifetimes obtained from phase
shift and demodulation correspond to the real fluorescence lifetime. If the
fluorescence decay is more complex, τp and τm are “apparent” lifetimes, with
τp < τm for multi-exponential decays.
Fitting of multi-exponential decays becomes possible if the modulation
frequency is varied, and if the phase angle φ or the demodulation factor m is
measured as a function of this frequency, as described in Ref. 78. Frequencies up
to the gigahertz range are required to resolve decay times in the subnanosecond
range. Since typical electro-optic modulators (e.g., Pockels cells) are usually
limited to some hundred MHz, specific techniques, e.g., frequency multiplication
and cross correlation have to be used, as further described in other work.78,79
Applications of phase and frequency modulation techniques for imaging fluoro-
phores in various samples, cells, and tissues have been described elsewhere.80–82
Frequency-domain fluorometry is also the basis for the so-called “phasor
approach,”83 which permits one to distinguish clusters of different fluorescence
lifetimes or gives some evidence of non-radiative energy transfer (see Section 5.6).
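For a mono-exponential decay, the apparent lifetimes from phase shift and demodulation coincide; the helper functions below evaluate Eq. (5.18) and the corresponding demodulation relation (the latter is quoted here as a common textbook form, not reproduced from the text).

```python
import numpy as np

def tau_phase(phi_rad, f_mod_hz):
    # Apparent lifetime from the phase shift, Eq. (5.18): tan(phi) = omega * tau_p.
    return np.tan(phi_rad) / (2.0 * np.pi * f_mod_hz)

def tau_modulation(m, f_mod_hz):
    # Apparent lifetime from the demodulation factor m (standard frequency-domain
    # relation assumed here): tau_m = sqrt(1/m**2 - 1) / omega.
    return np.sqrt(1.0 / m ** 2 - 1.0) / (2.0 * np.pi * f_mod_hz)

# Consistency check for a mono-exponential 4-ns decay modulated at 100 MHz.
f_mod, tau = 100.0e6, 4.0e-9
omega_tau = 2.0 * np.pi * f_mod * tau
phi = np.arctan(omega_tau)
m = 1.0 / np.sqrt(1.0 + omega_tau ** 2)
print(tau_phase(phi, f_mod), tau_modulation(m, f_mod))  # both ~4e-9 s
```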
Figure 5.7 Decay kinetics of autofluorescence of BKEz-7 endothelial cells from calf aorta
after excitation by short pulses (70 ps) of a laser diode. Excitation wavelength: λex = 375 nm;
fluorescence measured at λem ≥ 435 nm (reproduced from Ref. 80 with modifications).
predominant, whereas in a “later” time gate (gate B), the emission maxima
around 435 nm (bound NADH) and 515 nm (flavins) became more obvious.
In contrast, CW spectra of autofluorescence were broad and exhibited only
a little substructure.91 Using time-resolved fluorescence detection, free and
bound NADH could be studied separately, e.g., as a function of oxygen
content36,37 or upon inhibition of either the respiratory chain or the
glycolytic pathway.21,85
Further applications of time-gated fluorescence spectroscopy include
studies of the dynamics of various dyes within vesicles,86 as well as the
detection of tumor-localizing porphyrins within tissues of strong autofluor-
escence.87,88 Due to their fluorescence lifetime around 15 ns, these porphyrins
could be easily distinguished from short-lived autofluorescence using a
detection window that was delayed by about 15 ns with respect to the exciting
laser pulse. Monomeric and aggregated porphyrin molecules as well as ionic
species located at different cellular sites were also distinguished by time-gated
fluorescence spectroscopy.89,90 Further applications of this technique include
the detection of membrane markers whose fluorescence is sensitive to a phase
transition of cellular lipids as well as measurements of porphyrin metabolites
in plants, whose fluorescence is strongly superposed by the emission of
chlorophyll.91 Due to the rapid fluorescence decay of chlorophyll (usually
below 1 ns), porphyrins like uroporphyrin or protoporphyrin become
detectable at delay times around 15 ns between excitation and fluorescence
measurement.
Figure 5.9 Fluorescence intensity (a) and fluorescence lifetime (b; scale from black to
white: 0–8 ns) of BKEz-7 endothelial cells incubated with 50 μM Rhodamine 123 (excitation
wavelength: 488 nm; detection range: λem ≥ 515 nm; image size: 220 μm × 160 μm).
Reproduced from Ref. 111 with modifications.
Figure 5.11 (a) Fluorescence intensity IF(Θ) for different distributions of fluorophores:
Ic: continuum with homogeneous distribution; Ie: homogeneous distribution in the cytoplasm;
Im: homogeneous distribution in the cell membrane (nS = 1.52 for the substrate and nC =
1.37 for the cytoplasm; assumed cell-substrate distance: 100 nm); (b) cytoplasm marker
calcein in a single U373-MG glioblastoma cell upon variable-angle TIRFM; (c) cell-substrate
topology (color scale: 0–200 nm for cell-substrate distances). See color plates.
Figure 5.12 Technical realization of TIR illumination. 1: optical coupling of the cell
substrate using cube-shaped (1a) or hemispheric prism (1b); 2: prismless configuration
using a high numerical aperture objective lens.
It should be emphasized that TIRFM has meanwhile become one of the most
powerful techniques in single-molecule detection.122,123
(5-ALA) was slightly different,140 but also in this case, a considerable amount of
photosensitizer was localized in the plasma membrane.140,141 Furthermore,
association of photosensitizers of different hydrophilicity, but comparable
photophysical properties, with the plasma membrane was investigated. Results
obtained using the TIRFS technique provided for the first time direct evidence
for a plasma membrane-associated fraction of the hydrophilic compounds meso-
tetraphenylporphyrine trisulfonate (TPPS3) and meso-tetraphenylporphyrine
tetrasulfonate (TPPS4).142 In contrast, the more lipophilic drugs meso-
tetraphenylporphyrine monosulfonate (TPPS1) and meso-tetraphenylporphyrine
disulfonate (TPPS2) could only be detected using epi-illumination. Further
techniques and applications of TIRFM are summarized in Ref. 143.
Figure 5.13 Principle of non-radiative energy transfer from the coenzyme NADH to the
mitochondrial marker R123 (top); absorption (full lines) and emission (broken lines) spectra
of NADH and R123; spectra of R123 are red-shifted as compared with NADH, with a
pronounced overlap of NADH emission and R123 absorption (bottom).
Förster mechanism146 is shown in Fig. 5.13 (upper part) for the example
of NADH → rhodamine 123 (R123) as a donor-acceptor pair, where the
donor is a coenzyme of the mitochondrial respiratory chain and the
acceptor a well known mitochondrial marker.147 Light is absorbed by
the donor and emitted by the acceptor, whereas intermolecular energy
transfer is non-radiative.
The absorption and emission spectra of NADH and R123 are depicted in
the lower part of Fig. 5.13. The absorption spectra of NADH (left) and R123
(right) correspond to the full lines, whereas the emission spectra are represented
by the broken lines. A broad overlap of the emission spectrum of NADH
(maximum around 470 nm) and the absorption spectrum of R123 (maximum
around 510 nm) indicates that the condition of resonance is fairly fulfilled.
According to Eq. (5.26), fluorescence resonance energy transfer (FRET) is
limited to short intermolecular distances of a few nanometers. As a
quantitative measure, the so-called Förster radius r0 is used, which is defined
such that the rate of energy transfer kET and the sum of all other rates of
deactivation of the first excited molecular state S1 are equal for r = r0. This
can be described by the equation

kET = τ0⁻¹ (r/r0)⁻⁶,   (5.27)

with τ0 being the lifetime of the excited electronic state in the absence of
energy transfer. When using non-radiative energy transfer it is an advantage if
270 Chapter 5
the fluorescence quantum yield η of the acceptor is rather high (e.g., η = 0.9
for R123). In principle, kET also depends on the orientation of the transition
dipoles of the donor and the acceptor molecule. In solution as well as in most
cellular systems, however, these dipoles can be assumed to be randomly
oriented. Usually, energy transfer by dipole-dipole interaction is limited to
singlet states, i.e., to a transition S1 → S0 for the donor and S0 → S1 for the
acceptor molecule. In nature, non-radiative singlet energy transfer plays an
important role within the antenna complexes of photosynthetic organisms.4–6
For excited triplet states, the rate of this transition is becoming very low due
to a change of electron spins.
Energy transfer rates can be determined from stationary as well as from
time-resolved fluorescence measurements. In the first case, one measures the
fluorescence quantum flux of the donor

ID = I0 ln 10 d εD cD ηD Ω/4π,   (5.28)

and that of the acceptor

IA = I0 ln 10 d (εD cD ηET + εA cA) ηA Ω/4π,   (5.29)
which considers that the acceptor may be excited via energy transfer from
the donor with the quantum yield ηET = kET/k as well as by direct
absorption of light.21 The acceptor itself fluoresces with the quantum yield
ηA (εD, εA = extinction coefficients, cD, cA = concentrations of donor and
acceptor molecules, d = diameter of the sample). Assuming an almost linear
relationship between kET and the acceptor concentration cA,148 the
quantum flux of acceptor fluorescence is proportional to the acceptor
concentration. Therefore, from the linear relationship between IA and cA,
the concentration of donor molecules can be deduced according to
Eq. (5.29). The energy transfer rate kET = k ηET can be obtained by
division of Eqs. (5.29) and (5.28). If direct absorption of light by
acceptor molecules becomes negligible, and if the total rate k of
deactivation of the excited state is replaced by the lifetime τ = 1/k of this
excited state, one can easily calculate
1/τ0 (in the absence of energy transfer), the energy transfer rate can be
calculated according to

kET = 1/τ − 1/τ0,   (5.31)

with τ and τ0 being the fluorescence lifetimes of the donor in the presence and
absence of energy transfer, respectively. Since 1/τ and kET depend (almost
linearly) on the acceptor concentration, various concentrations cA can be used for
optimal fitting of kET.
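The relations above can be combined in a few lines; the sketch below, with illustrative lifetimes and distances, evaluates Eqs. (5.27) and (5.31) and the transfer quantum yield ηET = kET/k = 1 − τ/τ0 that follows from them.

```python
def fret_rate_from_lifetimes(tau, tau0):
    # Energy transfer rate from donor lifetimes, Eq. (5.31): k_ET = 1/tau - 1/tau0.
    return 1.0 / tau - 1.0 / tau0

def fret_rate_from_distance(r, r0, tau0):
    # Distance dependence of the Foerster mechanism, Eq. (5.27).
    return (1.0 / tau0) * (r / r0) ** -6

def transfer_efficiency(tau, tau0):
    # Quantum yield of energy transfer eta_ET = k_ET / k = 1 - tau/tau0.
    return 1.0 - tau / tau0

# Example: donor lifetime drops from 2.5 ns to 1.0 ns in the presence of the acceptor.
tau0, tau = 2.5e-9, 1.0e-9
print(fret_rate_from_lifetimes(tau, tau0), transfer_efficiency(tau, tau0))
print(fret_rate_from_distance(r=5.0e-9, r0=5.0e-9, tau0=tau0))  # equals 1/tau0 at r = r0
```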
• In excited triplet states where dipole-dipole interaction becomes
negligible, non-radiative energy transfer may arise from an electron
exchange mechanism. This process requires a considerable overlap of the
electron orbitals of the excited donor molecule and the acceptor molecule
in the ground state with spectral characteristics being rather irrelevant.
The distance over which this exchange can act is small (typically not more
than 1 nm), so the process requires direct contact between donor and
acceptor molecules, e.g., during diffusion of one of these molecules.
Energy transfer rates kET are often small compared with dipole-dipole
interactions, but since the lifetime τ of excited triplet states usually is
rather long (microseconds to milliseconds), the quantum yield of energy
transfer ηET = kET/k = kET τ may still be large. Energy transfer from the
excited triplet state of porphyrins, chlorins, or related molecules to
oxygen molecules is a well known process that occurs in native cells as
well as in cells incubated with photosensitizers. Oxygen is thereby excited
from the triplet ground state to the singlet state S1. Cytotoxic reactions
initiated by singlet oxygen are well described in the literature and used
e.g., for photodynamic therapy of tumors (for an overview on intra-
cellular reactions see Ref. 149). Quantum yields of singlet oxygen genera-
tion (including S1 → T1 intersystem crossing of the donor and energy
transfer to oxygen) have been summarized for various photosensitizers in
solution.150 Typical values are 0.57 for protoporphyrin dimethylester,
0.61 for meso-tetrakis(m-hydroxyphenyl)chlorin (m-THPC), 0.36 for
aluminium phthalocyanine or 0.42 for porphycene, i.e., 30 to 60% of
excited molecules of a photosensitizer may generate cytotoxic singlet
oxygen. Damage of plants may also be related to cytotoxic reactions
originating from chlorophyll molecules within the photosystems and
involving singlet oxygen. Some reduced protection against photosensiti-
zation may, therefore, account for light-induced yellowing of plants and
partly explain the phenomenon of “forest decline.” 6,151
Figure 5.14 Cultivated Chinese hamster ovary (CHO) cells after incubation with acridine
orange (5 μM; 30 min): (a) original image, (b) original image with structured illumination,
(c) image from the focal plane, as calculated from 3 phase-shifted images with structured
illumination. See color plates.
about 100 nm (lateral) and 200 nm (axial) have been obtained.167,168 These
values were further reduced by a combination of structured illumination and
single molecule localization microscopy.169
Figure 5.15 Principle of light sheet fluorescence microscopy (a), and images of various layers
of 10 μm thickness of a CHO-pAcGFP1-Mem cell spheroid recorded at distances of 20 μm,
40 μm, and 60 μm from its edge (b–d); light incidence from top to bottom (excitation wavelength:
488 nm; fluorescence detected at λ ≥ 515 nm). Reproduced from Ref. 174 with modifications.
See color plates.
Figure 5.16 Set-up of a confocal laser scanning microscope with S corresponding to the
scanning device, T to a telecentric optical system, O to the microscope objective lens, Ph to
the confocal aperture, and D to the detector.
modification with an array of beams has been realized using either a line or an
array of pinholes (Nipkow Spinning Disk; Andor Technology). The principle
of image generation by an array of pinholes was first proposed in 1883 by the
German physicist Paul Nipkow and was realized in 1884.
Δz ≈ 1.67 n λ / NA².   (5.35)
In principle, these values are met within a few percent by high power
objective lenses. Increase of the confocal aperture in low light level
fluorescence microscopy (from a diameter of about 15 μm up to several
hundred μm), however, diminishes the degree of out-of-focus blur rejec-
tion.178 The dependence of the lateral and axial resolution on the numerical
aperture NA is plotted in Fig. 5.17.
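A short sketch of Eq. (5.35), which reproduces the trend of the axial-resolution curve in Fig. 5.17 for λ = 488 nm (the refractive index n of the medium is a free parameter here, set to 1.0 for simplicity):

```python
def axial_resolution_um(wavelength_um, numerical_aperture, n_medium=1.0):
    # Axial resolution (FWHM) of a confocal microscope, Eq. (5.35):
    # delta_z ~ 1.67 * n * lambda / NA**2.
    return 1.67 * n_medium * wavelength_um / numerical_aperture ** 2

for na in (0.4, 0.6, 0.8, 1.0, 1.2, 1.4):
    print(na, round(axial_resolution_um(0.488, na), 2))
```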
In contrast to the conventional microscope where the complete field of
view and the whole object is illuminated, only a diffraction-limited point
source illuminates the object in CLSM. The intensity distribution within this
spot is described by the point spread function (PSF). The intensity distribution
gives the probability that a photon at a certain distance from the center
(normalized to intensity 1) is present to excite a fluorescent molecule. This
probability decreases rapidly with increasing distance from the center. The
probability distribution of a fluorescent molecule that is registered by the
detector is more widespread than the probability of illumination. As an
example, if the probability of illumination at a point P apart from the center
(focal point) is 0.1, and the probability for the detection of an emitted photon
is also 0.1, then the resulting probability of detecting a fluorescence photon is
Figure 5.17 Plot of the lateral and axial resolution of a confocal laser scanning microscope
according to Eqs. (5.32) and (5.34) for λ = 488 nm.
0.01. Multiplication of the two probabilities of all points P leads to the PSF
for fluorescence detection in a confocal microscope, which is roughly the
square of the intensity distribution of illumination. This explains the “narrow”
PSF and the rejection of blur from out-of-focus regions.
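The statement that the confocal detection PSF is roughly the square of the illumination intensity distribution can be illustrated numerically; for the Gaussian approximation used below, squaring narrows the FWHM by a factor of about √2.

```python
import numpy as np

def fwhm(x, y):
    # Full width at half maximum of a sampled, single-peaked curve.
    idx = np.where(y >= y.max() / 2.0)[0]
    return x[idx[-1]] - x[idx[0]]

x = np.linspace(-1.0, 1.0, 4001)                            # lateral coordinate in um
illumination = np.exp(-4.0 * np.log(2.0) * (x / 0.25) ** 2) # 0.25-um FWHM (illustrative)
confocal_psf = illumination ** 2                            # excitation x detection probability

print("illumination FWHM (um):", round(fwhm(x, illumination), 3))
print("confocal PSF FWHM (um):", round(fwhm(x, confocal_psf), 3))  # narrower by ~sqrt(2)
```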
Light sources: Laser light sources are well suited for fluorescence
excitation in CLSM. Argon/Krypton ion lasers, HeNe lasers of different
wavelengths, and also blue laser diodes are used and incorporated in the
system. Several wavelengths are necessary when biological objects are labelled
with dyes of different absorption wavelengths. In this case, multiple detection
channels are also needed for simultaneous detection of the fluorescence signals
within one specimen. Beam splitters and filters, or acousto-optic modulators,
which are adapted to the emission spectrum of each specific dye, are used for
spectral discrimination. For lifetime imaging, laser diodes modulated in the
MHz range or femtosecond lasers with multiphoton excitation are used.
Scanners: To create a 3D data set, the illuminating beam must scan all
three dimensions in space. One can distinguish between beam scanner, object
scanner, and objective lens scanner. The velocity to move an object is
limited; therefore, only slow movements (z scan) are performed with the
object. In a single beam CLSM high speed scanning is performed either by
galvo-scanners, piezo-crystals, acousto-optic modulators, or by electrome-
chanical devices. Beam scanners are classified as single beam scanners,
multiple beam scanners, or slit scanners. The last two options, however, have
a reduced signal-to-noise ratio, because unspecific light from adjacent points
can contribute to the registered signal. The already mentioned rotating
Further developments
• New developments permit the combination of morphometry with
spectral imaging, providing multi-pixel information from a specimen.180
Spectral information (SpectraCubeTM) obtained from a specimen is
stored as a third dimension of an xy-plot. By mathematical analysis a
spectral similarity map (SSM) is created that permits a selection of
specific molecules with well defined intracellular interactions.
• Excitation of fluorescent dyes or of autofluorescence in biological
samples by ultrashort laser pulses gives the possibility of fluorescence
lifetime imaging. This allows one to distinguish between molecules
showing identical fluorescence spectra but exhibiting different lifetimes.
The method also provides information on the microenvironment of
these molecules. The resolution of confocal microscopes can still be
enhanced by a 4Pi-configuration of the microscope optics, as reported,
e.g., in Ref. 181.
available femtosecond lasers range between 45 and 200 fs, and those of
picosecond systems between 1 and 10 ps.
When using a femtosecond system (80 fs, 80 MHz), endogenous fluor-
ophores, e.g., fluorescent coenzymes, can be excited with a mean power of
about 2 mW,186 whereas fluorescence of intracellularly accumulated
fluorophores (DAPI, Hoechst 33258, Rhodamine 123) is excited already at
an average power between 100 μW and 1 mW. A power level above 10 mW of
such a laser system leads to cell damage. At mean powers of about 30–50 mW
an optical breakdown can be induced, which is used to drill holes in cellular
structures or to cut chromosomes.
In NIR two-photon microscopy the spatial resolution is about the same as
in one-photon microscopy in spite of doubling the wavelength (lateral
FWHM: 0.3 μm; axial FWHM: 0.9 μm). With the introduction of a pinhole
in the detection path, the resolution can be slightly improved, and in Ref. 187
it was shown that the axial resolution reaches about 100 nm when using 4Pi
two-photon microscopy (with two opposing objective lenses). Recently even
higher resolutions have been achieved with two-photon SNOM (scanning
near-field optical microscope) technology.188,189
One of the advantages of multiphoton NIR microscopy is multifluor-
ophore excitation, where the visible fluorescence of a wide range of
fluorophores is excited simultaneously. This method is used for multicolor
detection of genes and chromosomes in combination with fluorescence in situ
hybridization (FISH). An example deduced from Ref. 182 is depicted in
Fig. 5.19, showing a 3D reconstruction from a set of multicolor images with
intranuclear localization of the FISH-labelled centromeric regions of
Figure 5.20 Principle of STED microscopy using a regular focused excitation beam (blue)
that is superimposed by a doughnut-shaped STED beam (orange) that instantly quenches
excited molecules at the periphery of the excited spot, thus confining the fluorescence
emission to the doughnut zero (20 nm) (adapted from Ref. 194).
Figure 5.22 The principle behind PALM: A sparse subset of PA-FP molecules that are
attached to proteins of interest and then fixed within a cell are activated (A and B) with a brief
laser pulse at λact = 405 nm and then imaged at λexc = 561 nm until most are bleached (C).
See color plates.
References
1. C. M. Gardner, S. L. Jacques, and A. J. Welch, “Light transport in
tissue: accurate expressions for one-dimensional fluence rate and escape
function based upon Monte Carlo simulation,” Lasers Surg. Med. 18,
129–138 (1996).
J. David Briers
Sean J. Kirkpatrick
Co-editors
Chapter 6
Laser Speckles, Doppler, and
Imaging Techniques for Blood
and Lymph Flow Monitoring
Ivan V. Fedosov
Saratov National Research State University, Saratov, Russia
Yoshihisa Aizu
Muroran Institute of Technology, Muroran, Japan
Valery V. Tuchin
Saratov National Research State University, Saratov, Russia
Tomsk National Research State University, Tomsk, Russia
Institute of Precision Mechanics and Control, Russian Academy of Sciences,
Saratov, Russia
Naomichi Yokoi
Asahikawa National College of Technology, Asahikawa, Japan
Izumi Nishidate
Tokyo University of Agriculture and Technology, Tokyo, Japan
6.1 Introduction
Blood is a body fluid that circulates through the body via a network of
vessels to support the vital functions (e.g., nutrition, oxygenation, immunity)
of all organs. Blood is a two-phase suspension: (1) the liquid plasma, an
aqueous solution of organic molecules, proteins, and salt, and (2) solid
corpuscles such as normal and abnormal cells, their microparticles, and
aggregates. Normal blood consists of erythrocytes (so-called red blood cells
[RBCs]; 4.6–5.1 × 10⁶ per μl for humans), leukocytes (so-called white blood
example, the features of intra- and peri-tumor lymphatics (e.g., gaps in the
lymphatic wall due to tumor-induced interstitial pressure and dilation) suggest
relatively easy entry of tumor cells into lymph vessels.66,133 However, only a
few studies (vs. hundreds of studies of blood CTCs) have demonstrated CTCs
in lymph.119,134–137
Furthermore, blood and lymph systems have close relationships with
multiple cross-pathways for lymph and blood, allowing passage of cells and
other compounds from one system to another (Fig. 6.1).138–140
Thus, detection and imaging of blood and lymph flows and circulating
individual cells, especially in vivo in their native state, is highly important for
the early diagnosis and therapy (theranostics) of many diseases (e.g., cancer,
diabetes) and for the study of the impact of environmental and therapeutic
agents (e.g., radiation, drugs) on living organisms at the single-cell level.
Because of this, both blood and lymph flows and their composition should be
examined to provide a correct understanding of disease and, in turn, to lead to
the development of new advanced diagnostic and therapeutic methods. In this chapter we
discuss optical approaches used to date for characterization of blood flow at
various structure levels of an organism. On a tissue or an organ scale, multiple
scattering of light typically prevents exact characterization of blood
independent of the whole tissue structure. But on this scale, laser Doppler
and laser speckle based techniques provide useful ways for quantification of
relative blood flow and concentration changes under various physiological
conditions, as outlined in Secs. 6.1 to 6.4. At the level of individual
microvessels, multiply scattered light can be filtered out, so that the absolute
flow becomes accessible with laser Doppler techniques as well as with particle
image velocimetry (PIV) methods, both of which are discussed in Sec. 6.5.
Finally, at the cellular level, corresponding to individual blood and
lymphatic vessels accessible with numerous methods of intravital microscopy,
a detailed characterization of circulating cells becomes available, as discussed
in Secs. 6.6 to 6.12 and in Sec. 6.13, which overviews clinical applications
of intravital microscopy of blood and lymphatic vessels.
Figure 6.2 Schematic illustration of detected signals, frequency spectra, and correlation
functions for typical cases of (a) low, (b) moderate, and (c) high RBC concentration.
the area should contain a number of randomly distributed RBCs inside. The light
waves scattered by moving RBCs are coherently superposed in random phases
and interfere with each other in the observation plane. This results in a biospeckle
pattern that is time-varying with the RBCs’ motion. Consider a scattered wave Aj
coming from the j-th RBC (j = 1, …, N). The resultant amplitude A(x, t) detected at a
certain point x and a time t in the observation plane is given by

A(x, t) = Σj=1,…,N |Aj(x, t)| exp[iφj(x, t)],   (6.2)

where φj(x, t) is a random phase. The randomness in the phase originates from
the randomized optical path length of scattered waves due to multiple
scattering by randomly distributed moving RBCs and their random
interference. Equation (6.2) means that the biospeckle intensity in the pattern
shows space-time random fluctuations. An increased concentration of RBCs
enhances the randomization, and the phase-correlation time becomes shorter
than that of ordinary speckle dynamics obtained with a diffuser. Therefore,
this situation usually generates higher-frequency
speckle fluctuations than the ordinary one, and it may be regarded as an
extended scheme of the ordinary speckle technique. In this optical system, the
heterodyne beat component can hardly be detected even if the reference beam
is introduced and the concentration of RBCs is low, since the incident and
scattered wave vectors span a range of directions and the coherence
condition is not satisfied. Despite this randomness, biospeckle
fluctuations reflect the blood flow velocity, and their autocorrelation function
or power spectrum can be used for monitoring the blood flow.
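As a concrete illustration of this processing (not from the chapter; the sampling rate, trace length, and synthetic signal below are assumptions), the following Python sketch estimates the normalized autocorrelation function and the power spectrum of a biospeckle intensity trace; a shorter correlation time or a broader spectrum indicates faster RBC motion.

import numpy as np

def autocorrelation(intensity):
    # Normalized autocorrelation of the zero-mean intensity fluctuation
    x = intensity - intensity.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    return acf / acf[0]

def power_spectrum(intensity, fs):
    # One-sided power spectrum of the intensity fluctuation; fs is the sampling rate in Hz
    x = intensity - intensity.mean()
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / x.size
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs, spectrum

# Synthetic trace standing in for a measured photodetector signal
fs = 10_000.0                                       # assumed sampling rate, Hz
rng = np.random.default_rng(0)
trace = 1.0 + 0.3 * rng.standard_normal(20_000)     # placeholder speckle intensity
acf = autocorrelation(trace)
freqs, spec = power_spectrum(trace, fs)
tau_c = np.argmax(acf < 1.0 / np.e) / fs            # crude 1/e correlation time, s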
When the incident beam is strongly focused, the number of scatterers
becomes smaller than in the above case. Coherent addition of an
insufficient number of scattered waves with random phases produces an
inhomogeneous speckle pattern. As the diameter of the blood vessel becomes
large, the number of scatterers that contribute to the speckle formation
increases and the phase randomization is further enhanced. This results in,
again, higher-frequency speckle fluctuation or possibly speckled speckles.
For these different types of speckles, the dynamics still originates from the
motion of the RBCs. Thus, their correlation or spectral properties carry
information on the velocity of the RBCs.
6.2.3 Interrelation
As far as blood flow measurements are concerned, the heterodyne Doppler
beat component cannot be clearly obtained. Recent techniques using optical
coherence tomography have ingeniously provided a solution to this problem.
In conventional techniques, however, high-frequency speckle fluctuations, or
possibly homodyne components, are usually dominant in blood flow
measurements. Providing that the Doppler technique reserves only the
Figure 6.3 (a) Schematic diagram of the optical system for detecting speckle patterns,
(b) picture of the volunteer’s left wrist being subjected to measurements of blood flow and
blood pressure, and (c) illustration of a time table of measurements.
The scattered light from the subject is divided into two light fields with
equivalent power. Each of the two fields passes through a polarizer,
enters an interference filter whose center wavelength is 780 nm or 830 nm,
respectively, passes through a camera lens, and then reaches the CCD camera,
at which a speckle pattern corresponding to a wavelength of 780 nm or 830 nm
is separately observed.
The blood flow is estimated from the relative standard deviation (RSD) of the
speckle intensity within a square processing unit of p × p pixels in each frame:

RSD_n = sqrt[ (1/p²) Σ_{x=1}^{p} Σ_{y=1}^{p} (I_{x,y,n} − ⟨I⟩_n)² ] / ⟨I⟩_n,   ⟨I⟩_n = (1/p²) Σ_{x=1}^{p} Σ_{y=1}^{p} I_{x,y,n},
where I_{x,y,n} is the signal intensity at a pixel in the nth frame whose
horizontal and vertical coordinates inside the processing unit are x and y,
respectively, and p is the number of pixels forming the processing unit in the
x and y directions. The RSD is expected to increase as the blood flow speed
becomes higher.
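For illustration, the following Python sketch (an assumed implementation, not the authors' code; the frame format and the choice of non-overlapping units are assumptions) computes an RSD map from one speckle frame by dividing the standard deviation by the mean within each p × p processing unit.

import numpy as np

def rsd_map(frame, p=5):
    # RSD per non-overlapping p x p processing unit: std / mean of pixel intensities
    h, w = frame.shape
    cropped = frame[: h - h % p, : w - w % p].astype(float)
    units = cropped.reshape(h // p, p, w // p, p).transpose(0, 2, 1, 3)
    mean = units.mean(axis=(2, 3))
    std = units.std(axis=(2, 3))
    return std / np.maximum(mean, 1e-12)   # larger RSD corresponds to faster flow

# Usage on a stack of frames (n_frames, height, width):
# rsd_maps = np.stack([rsd_map(f, p=5) for f in frames])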
Figure 6.4 (a) Temporal variation of RSD value averaged over the area on the artery and
(b) temporal variations of the maximum values of RSD and sphygmomanometer output.
Figure 6.5 (a) Picture of an anesthetized rat used for observation of blood flow and
(b)–(e) blood flow images of the ear of the anesthetized rat obtained by RSD prior to dosing,
immediately after dosing, 5 min. after dosing, and 10 min. after dosing, respectively.
inside the wrist were contracted and stiffened by the cold stimulation, so that
the systolic blood pressure rose. At the same time, blood flow in the artery
decreased as the systolic blood pressure rose owing to the increased
peripheral vascular resistance. The results in Fig. 6.4(b) are therefore
consistent with the physiological behavior described above.
γ_t = (ΔHbT_V / HbT_{V,0}) / (ΔHbT / HbT_0).    (6.9)
Figure 6.7 Measured images of hemoglobin concentration changes (ΔC_total·l, ΔC_deoxy·l,
ΔC_oxy·l) and blood flow (SBR) on a human finger during occlusion (see color plates).
Figure 6.8 (a) Picture of the exposed pharyngeal area of an anesthetized rat, and (b) and
(c) blood flow images of the anesthetized rat obtained by RSD at t = 3.9
and 18.0 s, respectively.
Figure 6.9 (a) Temporal variation of the RSD value averaged over the area on the carotid
artery and (b) electrocardiogram in the exposed pharyngeal area of the anesthetized rat.
Figures 6.8(b) and (c) show examples of the RSD images of the observation
area shown in Fig. 6.8(a) at t = 3.9 and 18.0 s, respectively, under
illumination by an LD with a wavelength of 830 nm. Comparing
Figs. 6.8(b) and (c), the brightness on the carotid artery, which runs vertically
through the center of both images, clearly changes. Figures 6.9(a) and (b)
Figure 6.10 (a) and (b) Examples of imaging of blood concentration change at t = 10 and
20 s with 5 × 5-pixel median filtering, respectively, and (c) blood concentration change
versus time for the artery and the surrounding tissue with 5 × 5-pixel median filtering.
show temporal variations of RSD value averaged over the measurement area
shown in Fig. 6.8(b) and the corresponding electrocardiogram, respectively.
Over 0–10 s and 20–30 s, the RSD value in Fig. 6.9(a) shows periodic
fluctuations. The electrocardiogram in Fig. 6.9(b) also shows periodic pulses
according to the heartbeat. The peaks of the RSD value and pulses in the
electrocardiogram are almost synchronized with each other. In contrast, the
RSD value during 10–20 s in Fig. 6.9(a) significantly decreases and shows no
periodical fluctuation due to the reduction of blood flow speed according to
the stimulation.
Figures 6.10(a) and (b) demonstrate images of blood concentration
change at t = 10 and 20 s, respectively, obtained by applying 5 × 5-pixel median
filtering to the speckle patterns in advance. As seen from Fig. 6.10, the
blood concentration change on the carotid artery is significantly decreased in
(b). This is due to the reduction of the blood flow according to the stimulation.
Figure 6.10(c) shows temporal variations of the blood concentration change
on the artery and the surrounding tissue. The results in Fig. 6.10(c) indicate that
the temporal variation of the blood concentration change on both the artery and
the tissue is reproduced in detail. This variation is considered to be derived
mainly from the change in blood flow associated with the heartbeat.
Figure 6.11 (a) Illustration of a low-coherence speckle interferometer for detecting speckle
images and (b) sample model used for distinguishing between objects moving at different
velocities in the same plane in a depth-resolved manner.
One beam illuminates a reference mirror and the other illuminates a moving
object behind a scattering medium. The light fields from the reference mirror
and the object are superimposed again, then reach the CCD camera by which
a speckle pattern is observed. The speckle contrast is enhanced only when the
optical pathlength difference between two arms of the interferometer is within
a coherence length of the SLD. This allows us to make a depth-resolved
enhancement of speckle patterns by scanning the reference mirror along the
optical axis of the interferometer. Figure 6.11(b) illustrates a sample model
used for depth-resolved motion imaging of two moving objects in layers
behind a scattering medium. A diffusive plate A has a 2-mm-wide painted bar
of 1% Intralipid solution that was dried on a glass plate and was vertically
oscillated. A diffusive plate B was composed of two horizontally separated
aluminum diffusive plates moving at different velocities. Both
plates were illuminated by a beam passing through a 4% Intralipid solution.
Figure 6.12(a) shows a speckle difference pattern obtained for the
diffusive plate A, and (b) and (c) show patterns obtained for the diffusive plate
B with the upper-plate velocities of 33 and 100 mm/s, respectively. When the
interference position is adjusted to plate A, the speckle difference pattern is
found only along the vertical bar on plate A, as seen in Fig. 6.12(a), unaffected
by either the Intralipid solution or plate B. When plate B is selected
by the interferometer, it is imaged with almost no effect from either the Intralipid
solution or plate A, as seen in Figs. 6.12(b) and (c). Moreover, as seen in the
upper plate in Figs. 6.12(b) and (c), the magnitude of the speckle difference
increases as the upper-plate velocity becomes larger. Thus, the present method
is useful not only for imaging object motion in a depth-resolved manner
but also for discriminating between objects moving at different velocities in
the same plane.
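A minimal sketch of this kind of processing (an assumed formulation, not necessarily the authors' exact algorithm) computes a speckle difference pattern as the mean absolute frame-to-frame intensity difference over a short image sequence; faster motion decorrelates the speckle and increases the difference magnitude.

import numpy as np

def speckle_difference(frames):
    # frames: ndarray (n_frames, height, width) -> mean |I_{n+1} - I_n| per pixel
    frames = frames.astype(float)
    return np.mean(np.abs(np.diff(frames, axis=0)), axis=0)

# Example: diff_pattern = speckle_difference(depth_gated_frames)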
Figure 6.12 Speckle difference patterns obtained for (a) diffusive plate A and (b) and
(c) upper diffusive plate B with velocities of 33 and 100 mm/s, respectively, while the lower
plate has a velocity of 11 mm/s.
Figure 6.15 Power spectra of LDA:38 1 – P12(f); 2 – P1(f); 3 – P2(f); 4 – |P′12(f)|.
the spectrum P12(f ) of a signal obtained when the probe volume is illuminated
with both beams of a differential LDA as shown in Fig. 6.13. The black box
denotes f_d = ω_d/2π. Lines 2 and 3 are the spectra P1(f) and P2(f) obtained when
only beam 1 or beam 2 was used. These are the spectra of the first and second
terms of Eq. (6.15) that obviously form a low-frequency pedestal for P12(f).
Because each term of Eq. (6.15) contributes additively to each spectral component
of P12(f), it was recently proposed to minimize the low-frequency pedestal by
calculating the difference spectrum:38,45
P′12(f) = P12(f) − P1(f) − P2(f).    (6.17)
The modulus of the resulting spectrum |P′12(f)| is shown in Fig. 6.15 as line
4. The frequency component centered around f_d is clearly visible, and the flow
velocity can be measured. Broadening of the spectral component is
determined by the nonuniform velocity distribution over the probe volume
and by the size of the probe volume, as has been discussed in the extensive
literature on LDA metrology.44 We should also note the contribution of
multiple light scattering to the broadening of the spectrum, which must be
taken into consideration.38
Technical implementation of the proposed approach is shown in Figs. 6.13
and 6.16. It includes a simple mechanical modulator rotated by a motor.
The modulator blade blocks the light over one-third of the circumference, and three
commutations are performed during a full turn of the modulator: 1) both
laser beams are open; 2) beam 1 is closed; 3) beam 2 is closed. The light
Figure 6.17 Superposition of raw photodetector signal (Channel 1) and the blade position
encoder data (Channel 2): 1 is the opening of both beams; 2 is the closing of beam 1; 3 is the
closing of beam 2.45
emitting diode and reference photodetector are used to encode the modulator
position. Figure 6.17 shows superposition of the raw detector signal and the
blade position encoder. The signal processing procedure includes calculation
of three periodograms per cycle of the modulator. Periodograms were
calculated with FFT using a Hanning data window over 521 samples.106 Then
the estimates of P12(f ), P1(f ), and P2(f ) were calculated over a set of
corresponding periodograms and finally, P012(f ) was calculated using Eq. (6.17).
Figure 6.18 shows spectra of the LDA signal registered at the centerline of
the flow of whole rat blood in a 250-µm-thick rectangular channel at a depth of
125 µm from the glass window surface. A blood sample of 2 mL was taken
Figure 6.18 Power spectra of the LDA signal for whole blood flow38 at the centerline of a
250-µm rectangular channel: 1 – P12(f); 2 – |P′12(f)|; 3 – positive values of P′12(f);
4 – positive values of P′12(f) magnified 10×.
immediately prior to the experiment from the anesthetized rat’s aorta. The red
blood cell concentration in the sample was 7.5 × 10⁶ µl⁻¹, which is normal
for the animal.107 A syringe pump was used to drive blood flow through the
channel.
Spectra were calculated over 170 periodograms of the data sequence acquired
during 13 seconds. Because of the multiple scattering of light in blood, the power
spectrum P12(f) [Fig. 6.18(1)] decreases monotonically over the amplifier
bandwidth of 0.5 to 20 kHz. The modulus of the difference spectrum |P′12(f)| and
especially the positive values of P′12(f) [Figs. 6.18(2) and (3)] have a maximum
centered near 3 kHz, which corresponds to an average blood flow velocity of
4 mm/s in the channel.
Line 4 in Fig. 6.18 shows the positive values of P′12(f) magnified ten times with
respect to line 3. Because the probe volume length is comparable to the channel
thickness, the spectrum P′12(f) is broad. It contains numerous DFS components
corresponding to all velocities of RBCs over the LDA probe volume.
The rotation frequency of the optical modulator was 13 Hz, which was
sufficient for measurements in a flow with a relatively constant velocity. In
reality, however, the flow velocity in biological tissues varies with time. The velocity of
blood flow in arteries varies periodically in accordance with the cycles of the
heart contractions. Therefore, the frequency of the Doppler shift can change
over time. For example, the heart rate of a rat is 300–600 beats per minute, so
the cardiac cycle period is 0.1–0.2 s.107 To resolve blood flow
velocity changes in different phases of the cardiac cycle (systolic and
diastolic), it is necessary to have at least 10 measurements at regular
intervals for one cycle.
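As a quick check of this requirement (illustrative arithmetic only, not from the text), the sketch below converts the rat heart rate into the minimum measurement rate:

# Heart rate 300-600 beats/min -> cardiac period 0.2-0.1 s; resolving the cycle
# with at least 10 measurements requires 50-100 measurements per second,
# well above the 13-Hz modulator rotation used here.
for heart_rate_bpm in (300, 600):
    period_s = 60.0 / heart_rate_bpm
    required_rate_hz = 10 / period_s
    print(f"{heart_rate_bpm} bpm: period {period_s:.2f} s, required rate {required_rate_hz:.0f} Hz")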
an object within the object plane. Displacements along the microscope optical
axis cause defocusing of the image, thus they can be handled only with a
hardware autofocus arrangement.
The effect of involuntary movement on µPIV measurements was
demonstrated on a capillary loop of a human nailfold.48 Figure 6.19 shows
the experimental setup for in vivo imaging and µPIV measurements of
microcapillary blood flow.48 It consists of a digital microscope, an adjustable
optical table for alignment of the object under study, and a light source. The
object is illuminated with a 3-W white light-emitting diode with a condenser
incorporated into the light source module. The image of the capillary is
captured by a microscope built of an objective (magnification 10× and
numerical aperture NA = 0.24) and a digital monochrome CMOS camera
DCC1545M (Thorlabs, Germany) with an adjustable microscope tube
adaptor. The microscope can be focused on the object with a precise linear
translation stage (sensitivity 1 µm).
The arrangement was used for in vivo imaging of nailfold capillaries of a
human hand. The volunteer’s right forearm in a relaxed position was located
on a semi-rigid armrest at chest height. The right hand was placed palm down
on the adjustable table of the microscope. The ring finger was fixed in a
lodgment of modeling clay with all the necessary hygiene certificates. The
lodgment, attached to the microscope stage, was tightly fitted to the finger
shape, but was not squeezing it. Such a fixation has no significant effect on
capillary blood flow. It does not totally exclude the involuntary movement of
the finger as a result of breathing, palpitations, tremors, etc., but was very
useful to avoid relatively large translational movements of the finger.
Positioning of capillary vessels within the microscope field of view was
performed by translation of a stage with lodgment in a horizontal plane
perpendicular to the optical axis of the microscope with micrometer screws.
To reduce the light scattering by the epidermis of the skin layer, glycerol as an
Figure 6.20 Image of human nailfold capillary loop and stabilized window position.48
Figure 6.21 Blood flow velocity field calculated for stabilized (a) and nonstabilized
(b) series of capillary loop images.48
Figure 6.22 Rat stomach microvasculature image (a) and the same image superposed
with a flow velocity color map (b) (see color plates).
light along its path in tissue, and it does not exceed a few millimeters.29
Typically, the depth is much shorter than the limit because of the effect of
scattered light that introduces ambiguity in either the object position or
velocity detection. This limitation concerns the optical velocimetry of
superficial blood vessels that are parallel to the tissue surface and thus
are examined with optical instruments in a direction perpendicular to the
blood flow velocity. Therefore, optical imaging techniques sensitive to
transversal displacement of the object with respect to the optical axis are
more useful for blood flow velocity measurement than an arrangement
based on a Michelson interferometer, like Doppler OCT. The latter
technique ensures efficient suppression of multiply scattered light, provides
close to the maximal possible probing depth, and is capable of measuring
the velocity component parallel to the instrument axis. Thus, it is not
suitable for unambiguous absolute flow velocity measurements without
modification41 for bi- or three-directional detection. In contrast, a
differential laser Doppler anemometer that is equivalent to an imaging
system29 can be used for absolute flow velocity measurements even in its
simplest implementation.38,45
Figure 6.24 Imaging blood flow at 25 fps. (a, b) Rolling WBCs in the small mesenteric
veins of healthy rats at low (10×) magnification (a) and at high (100×, water immersion)
magnification (b). (c) Platelets in slow blood flow (100×, water immersion). (d) Slowly moving
RBCs and WBCs (100×, water immersion).
Figure 6.25 High-speed, high-resolution ITDM. (a) Three subsequent images of RBCs in a
small artery at 10,000 fps; blood flow velocity up to 2.5 mm/s. (b) 1D image of a blood cell
using 512 × 1 pixels at 40,000 fps.
2. Detection and counting of circulating cells in blood and lymph flows, and
measurement of cell-to-cell interactions in vivo by labeling cells of interest
with different fluorescent markers. Marked cells can be of animal or
human origin, or infectious pathogens, including circulating tumor cells
(CTCs), their aggressive subpopulations such as tumor-initiating cancer
stem CTCs, circulating apoptotic cells, and circulating sepsis-induced
bacteria (e.g., Staphylococcus aureus);
3. Exploring physiological processes at the cellular level in living animals,
using special fluorescent indicators sensitive to changes in the microenvi-
ronment, such as pH, ion fluxes, membrane electrical potential, redox
levels, protease activity, and more;
4. Studying the anatomical structure and function of blood or lymphatic
vessels (e.g., angiogenesis);
5. Monitoring in vivo blood and lymphatic responses to therapy, including
pharmacological drugs and therapeutic laser irradiation.
Finally, the obtained images and video records can be combined (if
necessary) and processed in Adobe Photoshop 7.0.1 software (Adobe Systems,
San Jose, CA, USA), Adobe Premiere Pro CS6 software, and ImageJ 1.46 for
Windows.
Despite the promising capability of IV microscopy, high-resolution
imaging of individual flowing cells, especially in label-free mode, can be
obtained only in relatively transparent animal models (e.g., the mesentery of the
frog, mouse, and rat).9,10,144,149,184–190 High-resolution imaging of blood and
lymph flow in other widely used animal models (e.g., mouse ear, skin-fold,
tumors at different locations) is limited because of significant light scattering
from surrounding tissue (e.g., skin) and/or the relatively deep location of
vessels from the skin (or other tissue).
Figure 6.26 Imaging rolling WBCs in a small vein (10×) and RBCs in capillaries (100×) of
mouse ear using integration of ITDM and optical clearing with glycerol.144
Figure 6.27 (a) Schematic of two-color in vivo FFC. (b) Signal traces in green and red
channels; each signal is associated with a single fluorescently labeled cell traveling in blood
(or lymph) flow. (c) Laser beam focused on a lymph vessel of the mouse ear; EB dye was used
for mapping the lymph vessel (see details below).
Figure 6.28 Principle of lymph FC in vivo. (a) General schematic of natural cell focusing
with lymphatic valve. (b) ITDM imaging of valve-induced cell focusing in real lymph flow
in vivo at 10× (top) and 100× (bottom) magnifications. (c) Imaging of cell focusing (bottom)
induced by natural lymphatic contractility: top image is non-focused cell flow in the relaxed
lymph vessel; bottom image is the same vessel during constriction with the focused cell flow.
Figure 6.29 Animal models of rat mesentery and nude mouse ear (see color plates).
700 nm is lower than the refractive index of skin (1.40–1.42 for rats) and,
especially, of the epidermis (1.55 for humans); thus, it is close to the refractive
index of water (1.33). As a result, these optical and geometric features
significantly reduce unwanted scattered light, allowing the use of a microscope
objective with a high numerical aperture (up to 1.4) and high magnification
(60×–100×). An additional advantage of the rat mesenteric model is the
good penetration of reagents into the mesenteric tissue. Thus, the responses of
the microvascular network to the impact of different environmental factors
can be studied relatively easily with the use of a simple topical application.
In general, on the basis of our experience and the experiences of other
groups, this easy-to-access mesenteric model is a very promising model for
real-time monitoring of individual static, migrating, and circulating cells (e.g.,
WBCs, RBCs, CTCs, and many others) with the highest optical resolution.
Such a model is essential for studying cell traffic in the blood and the lymph
systems under normal and pathologic (lymphedema, metastasis, and many
other lymph-related diseases) conditions. The capability of this model for
blood and lymphatic research is demonstrated in detail below. The limitation
of the mesenteric model is its invasiveness.
To overcome this problem, most FFC studies have been performed
noninvasively on the thin ear (~250 µm) of nude mice, which has well-
distinguished blood vessels (e.g., 30 to 70 µm in diameter) and lymph vessels
(Fig. 6.29, bottom).129,153–161,169,174,175
We have also used numerous animal models to study metastatic tumors,
including melanoma and breast cancer.16,17,64,65,129,130,161,169,175 To accurately
reflect the clinical problems of metastatic progression, it is necessary to
employ an animal model in which the disease is similar to that in humans.
Clearly, to date, there is no model that faithfully reproduces all aspects of
human cancer. Among others, a well-established model of choice is an
orthotopic xenograft metastatic mouse model that has already made major
Figure 6.30 Optical lymphography. (a) Mapping of mouse ear lymph vessels with EB (5 µl
of 1% EB dye in 0.9% NaCl was injected intradermally into the ear tip with a 10-µl Hamilton
syringe). (b) Mapping of mouse skin lymph vessels and their visualization using a skin
chamber. (c) Fluorescent mapping of blood (green; FITC) and lymph (red; RITC) vessels in
mouse ear.
Figure 6.31 ICG blood and lymph angiography: ITDM image before ICG injection (left) and
fluorescence images (excitation 805 nm; emission 830 nm) at the 5th (middle; no dye in
lymph flow) and 45th min (right; dye in lymph flow) after ICG injection; LV – lymph vessel,
BVs – blood vessels.
Figure 6.32 FITC-dextran blood and lymph angiography: ITDM image before injection
(left) and fluorescence images at the 15th (middle; dye in lymph flow) and 120th min (right; no
dye in lymph flow) after injection; LV – lymph vessel, BVs – blood vessels.
Figure 6.33 Monitoring of the lymph flow profile after injection of a bolus of FITC-dextran
solution into a lymph vessel. (a) Imaging of the microinjection into lymph flow. (b) Changes of
the lymph flow profile over time.
North America) (Fig. 6.33). This local injection allowed us, for the first time,
to estimate lymph plasma flow velocity in individual lymph vessels in vivo and
to measure the real-time dynamics of the lymph flow profile.
Figure 6.34 In vivo imaging of mesenteric lymph vessels. (a) Schematic of vascular
network in tissue. (b,c) Initial lymphatic (4× and 100×); (d) Mesenteric tissue with valvular
lymph vessel and surrounding blood vessels (10×). (e) High-resolution imaging of single
cells in lymph flow (100×).
lymphatics with phasic contractions (78%) had active valves that periodically
opened and closed. Cells moved in 85% of the lymphangions.9,144
6.13.2.1 Lymph flow
Using the principles of the PIV approach, we calculated the lymph flow velocity as an
average cell velocity (Fig. 6.35):9,10,129,144

Flow velocity = Distance travelled (mm) / Time (s).
The recording of lymph flow in real time showed that cell motion had an
oscillatory character. Lymph usually moved forward for a short time; the
motion was then interrupted, and lymph movement stopped for up to 1 to
1.5 s. After that, the lymph started to flow in the reverse direction. Lymph
usually oscillated at a mean rate of 64 ± 8 oscillations/min.
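A minimal sketch of this velocity estimate follows (cell detection and tracking are assumed to be done separately; the track format and frame rate are placeholders): it averages the displacement-over-time ratio over the tracked cells.

import numpy as np

def cell_velocities(tracks_mm, fps):
    # tracks_mm: list of ndarrays (n_points >= 2, 2) with (x, y) cell positions in mm
    velocities = []
    for track in tracks_mm:
        path_length = np.linalg.norm(np.diff(track, axis=0), axis=1).sum()   # mm
        elapsed = (len(track) - 1) / fps                                     # s
        velocities.append(path_length / elapsed)                             # mm/s
    return np.array(velocities)

# Example: lymph_flow_velocity = cell_velocities(tracks_mm, fps=25).mean()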
Figure 6.35 Real-time dynamics of cell velocity in the axial lymph flow of a mesenteric lymph
vessel with a mean diameter of 170 ± 5 µm.
Figure 6.36 Extension of lymphatic vessels in experimental lymphedema. (a) Mesenteric
lymph vessel before lymphedema. (b) The same vessel after 1 week of lymphedema.221
was developed in one half of the animals after lymphadenectomy and was
accompanied by lymphatic dilation (increasing diameter) and inhibition of
contractility and lymph flow (Fig. 6.36). Notably, the rest of the animals without
tissue edema also demonstrated lymphatic dysfunction, which may indicate a
latent stage of lymphedema. Thus, a possible preventive treatment might be
directed at the compensation of vascular disturbances.
We also obtained experimental evidence of the involvement of blood vessels
in the development of lymphedema after lymphadenectomy. The dilation of
blood microvessels and the expansion of the microvascular net without
significant hemorrhage into the interstitium were detected at the stage of
well-developed tissue edema and significant lymphatic disturbances. The links
obtained between the functions of lymph vessels, blood vessels, and tissue
edema in vivo are important for the basic understanding of lymphedema and
may be used for developing new treatment strategies.
Figure 6.37 Laser-induced thermal effects on blood and lymph vessels in vivo. (a) Intact
mesenteric vein (V) and arteries (A1 and A2) with fast blood flow. (b) Damage of these
vessels immediately after a laser pulse: localized hemorrhage (arrow) around the vein (V)
and stasis in small arteries (A2). (c) Intact lymph vessel (L) before a laser pulse (black
dashed line, internal margin of lymphatic wall). (d) Laser-induced constriction of a lymph
vessel, which coincided with stasis in neighboring veins. Pulse parameters: wavelength,
585 nm; pulse duration, 10 ms; radiant exposure, 0.5–30 J/cm².
Figure 6.38 High-resolution image (100×, water immersion) of RBC aggregates in lymph
flow.
entry of many RBCs into the lymph flow, which formed aggregates that inhibited
the lymph flow up to complete stoppage (Fig. 6.38).
Figure 6.39 High speed imaging (2,500 fps) of a WBC in the mesenteric lymph flow.
RBCs. Rare RBCs can be transported by lymph flow in humans and other
mammals under physiological conditions.66,67,229–233 The features of initial
lymphatics66,234–238 suggest easy entry for highly deformable and relatively
small-diameter (5 to 7 µm) RBCs1–3 when they extravasate into tissue from
blood vessels [Figs. 6.34(b) and (c)]. Compared to the high-speed packed flow
of RBCs in blood, which induces their significant deformation, the relatively
slow lymph flow with a small cell concentration prevents transient deformation
of the rarely moving RBCs and allows them to keep their original biconcave
shape (Fig. 6.40).10,11,129,144
The number of RBCs in the peripheral lymph increases as a result of aging,
muscular exercise, exposure to heat/cold and radiation, as well as in some
pathologies associated with increased extravasation of RBCs into
tissues.144,221,233,243 In experimental venous insufficiency (ligation of the collecting
vein) and after high-power laser irradiation, we monitored hemorrhages into the
interstitium and, as a result, many RBCs entered the lymph flow; they can
change their behavior in flow and affect lymphatic function (Fig. 6.38).144,221
CTCs. It has been well established that the metastatic cascade for many
tumors involves the lymphatic system.117,119,129–131,134,135,137,139,140,169,244–248
However, the role of lymphatic CTCs in metastatic progression is poorly
understood. To solve this problem, we have developed different FC-based in
vivo approaches and demonstrated their feasibility on animal cancer models to
count and image lymphatic CTCs naturally and spontaneously shedding from
Figure 6.40 In vivo high-resolution, high-speed imaging of a single RBC in lymph flow of
a healthy rat (indexed colors).
Figure 6.41 Image lymph flow cytometry in vivo. (a) Metastatic melanoma tumor in the mouse
ear. (b) High-resolution, high-speed images of melanoma CTCs (an aggregate of two cells), a
partly lysed necrotic melanoma CTC, and a normal WBC in lymph flow (magnification, 40×
and 100×, water immersion; 500–2,500 fps).
Figure 6.42 High-resolution, high-speed monitoring of cells in blood flow. RBCs are
indicated by conventional arrows and arrowheads, rolling cells by arrows originating from
filled circles, and direction of flow by arrows with dashed lines. (a) Parachute-shaped RBC
traveling at 0.4 mm/s; 1,250 fps. (b) Interaction of fast-moving RBC with rolling cell;
2,500 fps. (c) Two RBCs in an area of merging flow streams with a velocity of 0.3 mm/s;
2,500 fps. (d) Fast-moving RBCs, 2,500 fps; magnification 40×.
Figure 6.44 Lymph image FFC: (a) In vivo CD45+ WBC exiting a valve aperture. (b) In
vivo FITC-labeled apoptotic cell in the lymph flow; dashed lines indicate lymph vessel
structure. (c) Apoptotic (green) and necrotic (red) WBCs in suspension in vitro.
Figure 6.45 (a) Apoptotic (arrowhead, red) and normal (two arrow triangles, green) cells in
two blood vessels (dashed lines). (b) Apoptotic cells (arrowhead, red) in interstitium (arrow
shows one vessel). (c) Normal (arrow, green) and apoptotic (red) cells in a lymph node
(dashed line).
blood vessels (50 µm deep) of a mouse ear. Later, Lin’s, our, and other
groups used one- and two-color FFC for counting various types of fluorescently
tagged circulating blood cells, including RBCs, WBCs, apoptotic cells, and
CTCs.153–161,169,174,175,252
Furthermore, using the mesentery model, we performed counting and
imaging of apoptotic cells in the blood flow (Fig. 6.45).144 Fluorescently labeled
apoptotic WBCs (red) were injected into a rat’s tail vein. The high-speed IFDM
was used to image apoptotic cells in blood flow [Fig. 6.45(a)] and tissues, and,
additionally, the PIV approach was used to count these cells in small mesenteric
vessels. Our results demonstrated rapid clearing of apoptotic cells from the
circulation, with a half-life of 8 minutes, which is in line with data from C.
Lin.252 We also observed (1) rolling apoptotic cells in veins within 10 minutes
after the injections [Fig. 6.45(a), bottom vessel], (2) the appearance of apoptotic
cells in the interstitium at the 15th min, and (3) accumulation of apoptotic cells
in mesenteric lymph nodes [Fig. 6.45(c)] at the 30th min.
(Section 6.10 In vivo Flow Cytometry), the sensitivity of these tools is limited
for detecting rare cells. Additionally, blood sampling, which is required for
any in vitro test, only provides snapshots of CTCs and thus, prevents
monitoring the behavior of individual CTCs over disease progression. As a
result, the multistep process of CTC dissemination in the entire body at the
single-cell level is far from being understood, thus many early mechanisms of
metastasis remain elusive.
The problem can be solved by in vivo CTC detection16,143,169 as a first
step, in preclinical animal models of human disease. One of the advanced
technical platforms to study CTCs in animal models in vivo is
FFC.153–161,174,175 As in conventional fluorescence microscopy, to be detectable
by FFC, cells should be labeled with fluorescent contrast agents. The most
promising approach involves using CTCs expressing fluorescent proteins, in
particular green fluorescent protein (GFP).16,145,154,156,161,162 Genetic encod-
ing of GFP results in fluorescence in all cancer cells produced during disease
progression. This enables easy and noninvasive detection of CTCs by in vivo
FFC, and of primary tumor and metastases by intravital whole-body imaging.
More specific molecular targeting, involving exogenous labels bioconjugated
with antibodies against a cell-surface marker of CTCs, can identify a specific
cell subpopulation (e.g., stem CTCs).
Over the last decade, in vivo FFC has been widely used in preclinical
cancer research to study i.v. injected and spontaneous (i.e., naturally shed
from the primary tumor) CTCs.17,153–161,169,175 It was shown that 80 to 90% of
tumor cells injected into the bloodstream of healthy animals were depleted from
the circulation within 0.5 to 4 h, while a small number of CTCs continued to
circulate for days. Furthermore, cells from highly metastatic cell lines
disappeared faster than those from low-metastatic ones.16,159
Recently, we used FFC to find links between counts of spontaneous
CTCs, primary tumor size, and metastasis progression. For this, immunode-
ficient nude mice were inoculated in the mammary glands with human breast
cancer cells expressing GFP and luciferase (MDA-MB-231-luc2-GFP), which
caused orthotopic primary tumor growth and metastatic disease development
(Fig. 6.46).161
With FFC, we were able to detect bulk CTCs expressing GFP
(excitation wavelength 488 nm, emission wavelength 509 nm). Luciferase was
used to identify metastases by whole-body imaging (e.g., with the IVIS system).
In the majority (>90%) of mice, CTCs appeared starting at week 1 of tumor
development and were monitored during the entire
course of the disease. The in vivo whole-body imaging, ex vivo high-resolution
fluorescence imaging, and in vitro histology, as expected, demonstrated the
earliest metastasis in the sentinel lymph node (SLN). In week 2 after tumor
inoculation, metastatic lesions of the SLN affected 20% of the mice, and CTC
colonies in the lung were detected in 40% of the mice [Fig. 6.46(b)]. In weeks 5
Figure 6.46 Development of metastatic breast cancer. (a) Growth of primary tumor;
(b) Colony of cancer cells (green) in the lung at week 2 after tumor inoculation; (c) A CTC
(green) in the blood sample obtained from a mouse at week 1 after tumor inoculation;
(d,e) Lung metastases at week 8 after tumor inoculation confirmed by two independent
methods: fluorescence image of lung ex vivo (d) and histological staining (H&E) of a lung
section (e). The black arrow and green line in (e) show metastasis (see color plates).
Figure 6.47 Correlation of CTC kinetics with tumor growth in breast cancer based on FFC
data and whole-body imaging, respectively. (a) Individualized CTC dynamics with a maximum
before the primary tumor starts progressively growing; (b) Individualized CTC dynamics with
two peaks. Tumor volume was calculated as ½ × long diameter × (short diameter)².161
was almost 30 times less than the number of inoculated bulk tumor cells
(5 × 10⁶ cells per mouse), all mice from the first group developed metastatic
disease (e.g., lung, liver) within 4 weeks after inoculation. In this group, CTCs
were detected in the circulation starting from week 1 after inoculation, and their
dynamics were characterized by a maximum CTC number at an early stage of
disease and a gradual decrease during primary tumor growth and
metastasis progression (Fig. 6.48).161
In summary, our findings hold promise for providing new insights into the
mechanisms of metastatic disease and may have diagnostic implications for
developing advanced diagnostics, allowing for well-timed therapy that
is more effective. Further studies with comprehensive statistical analysis and
trials in humans will provide further insight into these phenomena.
Figure 6.48 Dynamics of CTCs in mice after inoculation of CSCs. (a) A typical example of
the dynamics of CTCs with their maximum number before the primary tumor starts growing
progressively; (b) Lung micrometastases (green).
6.14 Summary
Optical techniques provide minimally invasive insight into the physiology and
biophysics of the blood and lymph circulation systems. At the tissue level, coherent
radiation scattering can be employed to detect blood perfusion that causes
specific spatiotemporal variations of scattered light intensity known as
dynamic laser speckles. Because of high light scattering in tissues, the laser
speckle technique does not provide quantitative data on the volumetric
blood flow rate, but it is applicable to the monitoring and mapping of short-
term relative variations of blood perfusion caused by external factors, as
demonstrated in Section 6.3. The effect of light scattering within a tissue can
be decreased with certain optical techniques. Section 6.4 illustrates a possible
way to improve the laser speckle technique by means of a coherent selection of
light scattered at a specific depth. Quantitative optical measurements of the
blood flow velocity within an individual blood vessel are available at length
scales less than 1 mm. As discussed in Section 6.5.1, in this case one can detect
the fraction of light that retains its rectilinear propagation while passing
through thin tissue layers. The further decrease of length scale toward cellular
levels makes the effect of light scattering negligible, and superficial blood
vessels can be visualized in a common way. At the capillary-network level, the
main issue is the complex dynamics of a living object. The problem of the
effect of involuntary movements on blood velocity measurements is discussed
in Section 6.5.2. Finally, at very high magnification, intravital microscopy
provides a wealth of data at the cellular level, as overviewed in
Sections 6.7 to 6.13. Although intravital microscopy at the single-cell level
has become a routine tool with its highest impact in physiology and biophysics,
the authors hope that these optical techniques will also find challenging
applications in understanding life functions at the larger scales of tissues,
organs, and whole organisms.
Acknowledgments
This work was supported by the grants R01 EB000873, R01 CA131164, R01
EB009230, R21 CA139373; R01EB017217 from the National Institutes of
Health (NIH); grant IDBR 085 2737 from the National Science Foundation
(NSF); grant W81XWH-11-1-0129 from the Department of Defense (DOD),
grant UL1TR000039 from the Arkansas Breast Cancer Research Program,
and grants from the Arkansas Biosciences Institute and the Translational
Research Institute at UAMS.
This work has been performed in part in the framework of RF
governmental contracts №2014/203 and №1490, competitive contract of The
Ministry of Science and Education of RF №3.1340.2014/K, grant of The
President of RF NSh-703.2014.2, The Tomsk State University Academic D.I.
Mendeleev Fund Program, and RFBR grant 14-02-00526a.
References
1. K. S. Saladin, “Anatomy and Physiology: the unity of form and
function,” 6th Edition, McGraw-Hill Higher Education, New York (2012).
2. C. Pallister, “Blood: physiology and pathophysiology,” Oxford University
Press, Oxford, 585 (1994).
3. K. Rogers (Ed.), “Blood: physiology and circulation (The Human
body).” Rosen Education Service, Britannica, ISBN-10: 1615301216,
239 (2010).
4. S. E. Charm and G. S. Kurland, “Blood flow and microcirculation,”
John Wiley & Sons, Boston (1974).
5. H. H. Lipowsky, “Microvascular rheology and hemodynamics,”
Microcirculation (N.Y.) 12, 5–15 (2005).
6. R. Skalak and P. I. Branemark, “Deformation of red blood cells in
capillaries,” Science 164, 717–719 (1969).
7. J. C. Firrell and H. H. Lipowsky, “Leukocyte margination and
deformation in mesenteric venules of rat,” Am. J. Physiol. 256,
H1667–H1674 (1989).
8. H. Minamitani, K. Tsukada, E. Sekizuka, and C. Oshio, “Optical
bioimaging: from living tissue to a single molecule: imaging and
functional analysis of blood flow in organic microcirculation,”
J. Pharmacol. Sci. 93, 227–233 (2003).
9. E. I. Galanzha, G. E. Brill, Y. Aizu, S. S. Ulyanov, and V. V. Tuchin,
“Speckle and Doppler Methods of Blood and Lymph Flow Monitor-
ing,” In: V. V. Tuchin (Ed.), Handbook of Optical Biomedical
Diagnostics, SPIE Press, PM107, Bellingham, USA, 875–937 (2002).
10. V. V. Tuchin, E. I. Galanzha, and V. P. Zharov, “In vivo image flow
cytometry,” In: V. V. Tuchin (Ed.), Advanced Optical Flow Cytometry:
Wiley-VCH Verlag GmbH & Co. KGaA, 387–433 (2011).
11. E. I. Galanzha, V. V. Tuchin, and V. P. Zharov, “In vivo integrated flow
image cytometry and lymph/blood vessels dynamic microscopy,”
J. Biomed. Opt. 10, 54018 (2005).
12. V. P. Zharov, E. I. Galanzha, Yu. Menyaev, and V. V. Tuchin, “In vivo
high-speed imaging of individual cells in fast blood flow,” J. Biomed.
Opt. 11 (5), 054034 (2006).
13. D. E. Sosnovik, “Molecular imaging in cardiovascular magnetic
resonance imaging: current perspective and future potential,” Top.
Magn. Reson. Imaging. 19, 59–68 (2008).
14. H. Markus, “Transcranial Doppler detection of circulating cerebral
emboli. A review,” Stroke 24, 1246–1250 (1993).
15. Y. Yang, D. G. Grosset, Q. Li, A. Shuaib, and K. R. Lees. “Turbulence
and circulating cerebral emboli detectable at Doppler ultrasonography: a
differentiation study in a stenotic middle cerebral artery model,” Am. J.
Neuroradiol. 23, 1229–1236 (2002).
364 Chapter 6
66. M. Foldi and E. Kubik Foldi (Eds.), Textbook of Lymphology, Urban &
Fischer, Munchen, Germany (2003).
67. W.L. Olszewski, “The lymphatic system in body homeostasis: physio-
logical conditions,” Lymphat. Res. Biol. 1, 11–21 (2003).
68. P. Brown, “Lymphatic system: unlocking the drains,” Nature 436(7050),
456–458 (2005).
69. T. Godal and A. Engeset, “A preliminary note on the composition of
lymphocytes in human peripheral lymph,” Lymphology 11, 208–10
(1978).
70. J. G. Hall and B. Morris, “The origin of the cells in the efferent
lymph from a single lymph node,” J. Exp. Med. 121, 901–910 (1965).
71. Y. Aizu, K. Ogino, T. Sugita, T. Yamamoto, N. Takai, and T. Asakura,
“Evaluation of blood flow at ocular fundus by using laser speckle,”
Applied Optics 31(16), 3020–3029 (1992).
72. S. S. Ulyanov, “A new type of manifestation of doppler effect. An
application to blood and lymph flow measurements,” Opt. Eng. 34(10),
2850–2855 (1995).
73. L. E. Drain, The Laser Doppler Technique, John Wiley & Sons,
New York (1980).
74. A. Serov, W. Steenbergen, and F. de Mul, “Laser Doppler perfusion
imaging with a complimentary metal oxide semiconductor image
sensor,” Opt. Lett. 27, 300–302 (2002).
75. A. Serov, B. Steinacher, and T. Lasser, “Full-field laser Doppler
perfusion imaging and monitoring with an intelligent CMOS camera,”
Opt. Exp. 13, 3681–3689 (2005).
76. A. Serov and T. Lasser, “High-speed laser Doppler perfusion imaging
using an integrating CMOS image sensor,” Opt. Exp. 13, 6416–6428
(2005).
77. N. Konishi, Y. Tokimoto, K. Kohra, and H. Fujii, “New laser speckle
flowgraphy system using CCD camera,” Opt. Rev. 9, 163–169 (2002).
78. M. Lee, N. Konishi, and H. Fujii, “Blood flow analysis of skin tissue
under the sacrum using laser speckle flowgraphy,” Opt. Rev. 10, 562–566
(2003).
79. B. Choi, J. C. Ramirez-San-Juan, J. Lotfi, and J. S. Nelson, “Linear
response range characterization and in vivo application of laser speckle
imaging of blood flow dynamics,” J. Biomed. Opt. 11, 041129-1–7 (2006).
80. A. Serov, W. Steenbergen, and F. de Mul, “Prediction of the
photodetector signal generated by Doppler-induced speckle fluctuations:
theory and some validations,” J. Opt. Soc. Am. A 18, 622–639 (2001).
81. V. Rajan, B. Varghese, T. G. van Leeuwen, and W. Steenbergen, “Speckles
in laser Doppler perfusion imaging,” Opt. Lett. 31, 468–470 (2006).
82. V. Rajan, B. Varghese, T. G. van Leeuwen, and W. Steenbergen,
“Influence of tissue optical properties on laser Doppler perfusion
Techniques for Blood and Lymph Flow Monitoring 369
imaging, accounting for photon penetration depth and the laser speckle
phenomenon,” J. Biomed. Opt. 13, 024001-1–9 (2008).
83. N. Konishi and H. Fujii, “Real-time visualization of retinal microcircu-
lation by laser flowgraphy,” Opt. Eng. 34, 753–757 (1995).
84. N. Yokoi, T. Maeda, Y. Shimatani, M. Kyoso, H. Funamizu, and
Y. Aizu, “Improvement of estimation parameter for frame-rate analysis
of blood flow using laser speckle image sensing,” Opt. Lasers Eng. 27,
156–166 (2014).
85. N. Yokoi, J. Sato, Y. Shimatani, M. Kyoso, H. Funamizu, and Y. Aizu,
“Frame-rate analysis of arterial blood flow in human and rat using laser
speckle image sensing,” Opt. Rev. 21, 345–352 (2014).
86. P. Liu, H. Huang, N. Rollins, L. F. Chalakd, T. Jeon, C. Halovanic, and
H. Lu, “Quantitative assessment of global cerebral metabolic rate of
oxygen (CMRO2) in neonates using MRI,” NMR Biomed. 27, 332–340
(2014)
87. K. L. Leenders, A. J. Palmer, N. Quinn, J. C. Clark, G. Firnau, E. S.
Garnett, C. Nahmias, T. Jones, and C. D. Marsden, “Brain dopamine
metabolism in patients with Parkinson’s disease measured with positron
emission tomography,” J. Neurol. Neurosurg. Psychiatry 49, 853–860
(1986).
88. R. L. Buckner, A. Z. Snyder, B. J. Shannon, G. LaRossa, R. Sachs,
A. F. Fotenos, Y. I. Sheline, W. E. Klunk, C. A. Mathis, J. C. Morris, and
M. A. Mintun, “Molecular, structural, and functional characterization of
Alzheimer’s disease: evidence for a relationship between default activity,
amyloid, and memory,” J. Neurosci. 25, 7709–7717 (2005).
89. M. D’Esposito, L. Y. Deouell, and A. Gazzaley, “Alterations in the
BOLD fMRI signal with ageing and disease: a challenge for
neuroimaging,” Nat. Rev. Neurosci. 4, 863–872 (2003).
90. F. Hyder, “Neuroimaging with calibrated FMRI,” Stroke 35, 2635–2641
(2004).
91. K. Dunn, A. Devor, A. M. Dale, and D. Boas, “Spatial extent of oxygen
metabolism and hemodynamic changes during functional activation of
the rat somatosensory cortex,” NeuroImage 27, 279–290 (2005).
92. M. Jones, J. Berwick, D. Johnston, and J. Mayhew, “Concurrent optical
imaging spectroscopy and Laser-Doppler flowmetry: The relationship
between blood flow, oxygenation, and volume in rodent barrel cortex,”
NeuroImage 13, 1002–1015 (2001).
93. M. Kohl, U. Lindauer, G. Royl, M. Kuhl, L. Gold, A. Villringer, and
U. Dirnagl, “Physical model for the spectroscopic analysis of cortical
intrinsic optical signals,” Phys. Med. Biol. 45, 3749–3764 (2000).
94. P. B. Jones, H. K. Shin, D. A. Boas, B. T. Hyman, M. A. Moskowitz,
C. Ayata, and A. K. Dunn, “Simultaneous multispectral reflectance
imaging and laser speckle flowmetry of cerebral blood flow and oxygen
211. C. Li, S. Meng, X. Yang, D. Zhou, J. Wang, and J. Hu, “Sentinel lymph
node detection using magnetic resonance lymphography with conven-
tional gadolinium contrast agent in breast cancer: a preliminary clinical
study,” BMC Cancer 15, 213 (2015).
212. M. Nakagawa, M. Morimoto, H. Takechi, Y. Tadokoro, and
A. Tangoku, “Preoperative diagnosis of sentinel lymph node (SLN)
metastasis using 3D CT lymphography (CTLG),” Breast Cancer. [Epub
ahead of print] PubMed PMID: 25814093 (2015).
213. C. Shah, D. Arthur, J. Riutta, P. Whitworth, and F. A. Vicini, “Breast-
cancer related lymphedema: a review of procedure-specific incidence
rates, clinical assessment aids, treatment paradigms, and risk reduction,”
Breast J. 18, 357–361 (2012).
214. N. L. Browse and G. Stewart, “Lymphoedema: pathophysiology and
classification,” J. Cardiovasc. Surg. (Torino) 26, 91–106 (1985).
215. “The diagnosis and treatment of peripheral lymphedema,” Consensus
Document of the International Society of Lymphology. Lymphology 36,
84–91 (2003).
216. A. A. Ramelet, “Pharmacologic aspects of a phlebotropic drug in CVI-
associated edema,” Angiology 51, 19–23 (2000).
217. P. A. Hurst and J. M. Edwards, “Chylous ascites and obstructive
lymphoedema of the small bowel following abdominal radiotherapy,”
Br. J. Surg. 66, 780–781 (1979).
218. R. L. Ahmed, K. H. Schmitz, A. E. Prizment, and A. R. Folsum,
“Risk factors for lymphedema in breast cancer survivors, the Iowa
Women’s Health Study,” Breast Canc. Res. Treatment 130, 981–991
(2011).
219. J. L. Bevilacqua, M. W. Kattan, Y. Changhong, S. Koifman, I. E.
Mattos, R. J. Koifman, and A. Bergmann, “Nomograms for predicting
the risk of arm lymphedema after axillary dissection in breast cancer,”
Ann. Surg. Oncol. 19, 2580–2589 (2012).
220. S. A. Norman, A. R. Localio, M. J. Kallan, A. L. Weber, H. A. Simoes
Torpey, S. L. Potashnik, L. T. Millers, K. R. Fox, A. DeMichele, and
L. J. Solins, “Risk factors for lymphedema after breast cancer
treatment,” Cancer Epidemiol. Biomarkers Prev. 19, 2734–2746
(2010).
221. E. I. Galanzha, V. V. Tuchin, and V. P. Zharov, “Optical monitoring of
microlymphatic disturbances at experimental lymphedema,” Lymphat.
Res. Biol. 5, 11–27 (2007).
222. Report of Surgeon General: The Health Consequences of Smoking.
Department of Health & Human Services, Public Health Services,
Centers for Disease Control & Prevention, National Center for Chronic
Disease Prevention and Health Promotion, Office of Smoking & Health,
Washington, D.C. (2004).
52 patents, and more than 200 papers in the field of laser spectroscopy,
biophotonics, and nanomedicine including five publications in the Nature
family journals. Dr. Zharov has been the principal investigator on 16 NIH,
NSF, DoD and other agency grants including 5 R01 awards. He is one of the
pioneers of high resolution photoacoustic spectroscopy and the inventor of
photoacoustic tweezers, pulse nanophotothermolysis of infections and cancer,
and in vivo multicolor flow cytometry for detection, magnetic capturing, and
photothermal elimination of circulating tumor cells (CTCs), bacteria, and
viruses with unprecedented sensitivity down to a few CTCs (1 CTC/500 mL)
in the whole blood volume (5 liters in an adult). These technologies can provide
breakthroughs in the diagnosis of cancer, infections, and cardiovascular
disorders at an early stage when well-timed therapy is more effective. Dr.
Zharov is the State Prize Winner, the most prestigious national award in
Russia, and the first recipient of the U.S. Maiman Award, named after the
inventor of the first laser.
Andrew M. Rollins
Case Western Reserve University, Cleveland, USA
7.1 Introduction
Optical coherence tomography (OCT) is a noninvasive imaging modality that
provides high-resolution, depth-resolved imaging of tissue microstructure in
real time.1,2 Images are generated by detecting back-reflected light, where
contrast is generated by optical index changes in the sample. By measuring
singly backscattered light as a function of depth, OCT fills a valuable niche in
the imaging of tissue microstructure, providing subsurface imaging to depths
of 1 to 3 mm with high spatial resolution (~10 µm) in three dimensions and
high sensitivity (>110 dB) in vivo with no contact needed between the probe
and the tissue. With high imaging speeds, high resolution, optical fiber-based
implementations, and functional extensions for measuring tissue birefringence
and flow, OCT has made a significant impact in clinical and biomedical
applications, including ophthalmology, cardiology, and oncology. In these
applications OCT provides the physician with near-histological resolution
imaging of subsurface tissue morphology, potentially aiding in monitoring
treatment, biopsy site selection, or even approaching the goal of “optical
biopsy.” With these specifications, OCT is quickly being translated into
commercial products readily available to the clinical community, and in some
cases, they are becoming the standard of care.
l_c = (2 ln 2/π) (λ0²/Δλ).    (7.1)
Here, λ0 is the center wavelength of the source, with full-width at half-
maximum (FWHM) bandwidth Δλ. Hence, low-coherence (broadband)
sources are employed in OCT to achieve microstructural imaging approaching
the cellular level.3 On the other hand, the lateral resolution is determined by
the focused beam spot size in the tissue. This decoupling of axial and lateral
resolution is an advantage of OCT, allowing for high axial resolution in
applications where high-numerical-aperture optics is challenging, such as
ophthalmic and catheter-based imaging. Using conventional optics (assuming
Gaussian beam focusing to a spot diameter Δx), the confocal parameter is

b = π Δx²/(2λ).    (7.2)
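As a numerical illustration of Eqs. (7.1) and (7.2), the short Python sketch below evaluates both; the source and optics parameters are assumed examples, not values from the text.

import numpy as np

def coherence_length(lambda0, dlambda):
    # Eq. (7.1): axial resolution (round-trip coherence length)
    return (2 * np.log(2) / np.pi) * lambda0**2 / dlambda

def confocal_parameter(dx, lambda0):
    # Eq. (7.2): depth of focus for a focused spot of diameter dx
    return np.pi * dx**2 / (2 * lambda0)

lambda0 = 840e-9   # m, assumed center wavelength
dlambda = 50e-9    # m, assumed FWHM bandwidth
dx = 15e-6         # m, assumed focused spot diameter
print(f"axial resolution ~ {coherence_length(lambda0, dlambda) * 1e6:.1f} um")
print(f"confocal parameter ~ {confocal_parameter(dx, lambda0) * 1e6:.0f} um")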
Figure 7.2 OCT image generation of mouse epicardium. (a) 1-D axial scan. (b) 2-D B-scan
image generated by transverse scanning and collection of multiple axial scans. (c) 3-D
volume reconstruction from raster scanning across surface, collecting a series of B-scan
images.
Figure 7.3 (a) Time-domain OCT system, in which axial scans are generated by
mechanical scanning of the reference mirror. (b) A sample with three reflecting sites at
different depths z. (c) Interference fringes are localized to within the coherence length of the
light source. (d) The envelope of the fringe pattern results in an axial reflectivity profile, or A-scan.
Figure 7.4 Fourier-domain OCT can be implemented using either (a) a broadband source
and spectrometer (spectral-domain OCT) or (b) a rapidly tunable laser (swept-source OCT
or optical frequency-domain imaging).
Figure 7.5 Axial scan generation with FDOCT. (a) Three reflection sites within the sample.
(b) Spectral interferogram represents summation of sinusoidal modulations of spectrum.
(c) Axial scan obtained by computing the inverse-Fourier transform of a spectral
interferogram that is evenly spaced in wavenumber.
Figure 7.6 Full range imaging, complex conjugate ambiguity resolved in the anterior
chamber of the eye. Reprinted from Sarunic et al.17
ΔD = (1/4)(λ_c²/Δλ) N.    (7.6)
The SNR for an FDOCT system, assuming it is shot-noise limited, is

SNR_FDOCT = ρ P_s R_s Δt / (2e).    (7.7)
This is comparable to Eq. (7.4), replacing the detector bandwidth with the
inverse of the integration time of the camera or sweep time of the swept
source, Δt. The increase in sensitivity of the FDOCT systems is approximately
a factor of N/2, and allows significantly increased A-line rates, enabling
acquisition of 3-D image sets in short time periods. For many clinical and
biomedical applications, a high imaging speed is critical to reduce motion
artifacts and observe dynamic biological processes.
Within FDOCT systems, the sensitivity degrades with increasing distance
from the zero path length match.18,19 This sensitivity falloff is related to the
spectrometer optics and the pixel width of the detection cameras in SDOCT or
the instantaneous linewidth of the swept light source. The theoretical 6-dB
fall-off can be calculated using Eq. (7.8), where δr_k is the spectral resolution:

Δz_{6 dB} = ln 2 / (π δr_k).    (7.8)
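A short Python sketch evaluating Eqs. (7.6) and (7.7), with Eq. (7.8) included for completeness; the spectrometer pixel count, source bandwidth, sample power and reflectivity, and integration time below are assumed, illustrative values, not figures from the text.

import numpy as np

E_CHARGE = 1.602e-19  # C

def imaging_depth(lambda_c, dlambda, n_samples):
    # Eq. (7.6): maximum depth range of an FDOCT system
    return 0.25 * lambda_c**2 / dlambda * n_samples

def snr_fdoct(responsivity, sample_power, sample_reflectivity, integration_time):
    # Eq. (7.7): shot-noise-limited SNR
    return responsivity * sample_power * sample_reflectivity * integration_time / (2 * E_CHARGE)

def falloff_6db(spectral_resolution_k):
    # Eq. (7.8): depth at which sensitivity has dropped by 6 dB
    return np.log(2) / (np.pi * spectral_resolution_k)

depth_mm = imaging_depth(840e-9, 50e-9, 2048) * 1e3             # roughly 7 mm
snr_db = 10 * np.log10(snr_fdoct(0.5, 1e-3, 1e-4, 10e-6))       # example numbers
print(f"depth range ~ {depth_mm:.1f} mm, SNR ~ {snr_db:.0f} dB")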
7.2.2.1 Spectrometers
SDOCT uses a spectrometer to detect the broadband interference spectrum
between the sample and reference. The spectral interferogram recorded by the
spectrometer must be inverse Fourier transformed to generate an OCT image.
Spectrometers used for FDOCT typically consist of a diffraction grating,
collimating and objective optics, and a line-scan camera with pixels
approximately evenly spaced as a function of wavelength. The diffraction
angle of light dispersed by the grating is a nonlinear function of wavenumber
k ¼ 2p/l. Therefore, the spectrum recorded by the line-scan camera is
unevenly spaced in k. However, a spectral interferogram that is a linear
function of k is needed to inverse Fourier transform the data into the spatial
domain. One common practice is to interpolate the nonlinearly spaced
spectral interferogram and rescale the data into the wavenumber-domain
prior to the inverse-Fourier transform. Another option is to use a
spectrometer with a dispersion that is linear in wavenumber (linear k), which
improves sensitivity falloff and reduces computing time as compared to an
equivalent conventional spectrometer.20 To linearize the spectral dispersion in
Figure 7.8 Broadband continuum light source: (a) Measured optical spectrum of photonic
crystal fiber (PCF) output (solid line), input pump laser spectrum (dotted blue line) and
(b) numerically simulated spectrum generated by using the same parameters as used in the
experiment. Simulations including (solid line) and not including (dashed line) stimulated
Raman scattering. Reprinted from Wang et al.25
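To make the processing chain described in Sec. 7.2.2.1 concrete, the following minimal sketch resamples a wavelength-sampled spectral interferogram onto a uniform wavenumber grid and then inverse Fourier transforms it into an A-scan. The function and its inputs are illustrative assumptions and do not correspond to any particular system; dispersion compensation and spectral windowing are omitted.

    import numpy as np

    def ascan_from_spectrum(spectrum, wavelengths):
        """Resample a spectral interferogram sampled in wavelength onto a uniform
        wavenumber (k) grid, then inverse-FFT to obtain the depth profile (A-scan)."""
        k = 2.0 * np.pi / wavelengths                    # nonuniformly spaced k samples
        k_uniform = np.linspace(k.min(), k.max(), k.size)
        # np.interp needs ascending abscissae; k decreases with wavelength, so reverse
        spec_k = np.interp(k_uniform, k[::-1], spectrum[::-1])
        spec_k = spec_k - spec_k.mean()                  # suppress the DC component
        ascan = np.abs(np.fft.ifft(spec_k))              # depth-domain reflectivity profile
        return ascan[: k.size // 2]                      # keep one half (complex-conjugate ambiguity)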
Figure 7.9 (a) Doppler and (b) intensity OCT image of patient with proliferative diabetic
retinopathy. Reprinted from Wang et al.44
Figure 7.10 Three different applications of speckle variance measurements for vascular
mapping are shown in the breast, brain, and skin. See color plates. Reprinted from Vakoc et al.
Figure 7.11 PS-OCT images of a retina with age-related macular degeneration. (a) Reflectivity; (b) retardation (color bar: 0–90 deg); (c) degree of polarization uniformity (color bar: 0–1); (d) reflectivity overlaid with segmented retinal pigment epithelium. Image size: … deg (horizontal) × … mm (vertical). See color plates. Reprinted from Götzinger et al.
damage due to ablation therapy,67–69 infarction,66 and the nerve fiber layer
within the retina70–73 (Fig. 7.11). When imaging through fiber catheters, the
rotation of the fiber may cause stress-induced birefringence, which can
adversely affect PSOCT measurements. Through the use of frequency
multiplexing, an OFDI system has been developed that allows catheter-based
PSOCT independent of the fiber birefringence.74 Spectral binning has also
been implemented for reducing polarization mode dispersion artifacts in
catheter-based imaging.75
Figure 7.12 High-resolution image of the anterior chamber angle. Clearly visible is the
trabecular meshwork and Schlemm’s canal. Reprinted from Sarunic et al.17
Figure 7.13 Comparison of histology (a) with in vitro ultra-high-resolution OCT (b) in the
monkey (Macaca fascicularis) fovea. Ganglion cell axons (gglc ax); ganglion cell body layer (gglc); inner plexiform layer (ipl); inner nuclear layer (inl); outer plexiform layer (transition between Henle fibers and inl; not labeled); outer nuclear layer (onl); foveal cone inner segments (cis); foveal cone outer segments (cos); pigment epithelium + processes (pe);
choriocapillaris (chc); choroid (ch). Reprinted from Drexler and Fujimoto.83
7.4.2 Cardiology
Cardiovascular disease is the leading cause of morbidity and mortality in the
United States.90 Imaging has played a vital role for early diagnosis of
cardiovascular diseases, monitoring and guidance of procedures, and
characterization of preclinical models of disease. The unique features of
OCT have made it a powerful tool for cardiovascular imaging, from basic
scientific research to clinical applications. In particular, cardiovascular OCT
is a potentially disruptive technology in the detection and treatment of
atherosclerotic plaques. Postmortem studies of patients who experienced a
Figure 7.14 C-mode slab images of the outer plexiform layer of the perimacular retina
provide a detailed image of the retinal capillaries. Reprinted from Kagemann et al.84
Figure 7.15 Mosaicking of ultrahigh resolution C-scans. En face images show retinal
nerve fibers and in some areas, loss of fibers, with corresponding decrease in retinal nerve
fiber layer thickness measures. See color plates. Reprinted from Hood et al.
Figure 7.16 (a) Perspective cutaway view of entire 3D volume OFDI data set,
demonstrating the stent, a side branch, and a large calcific nodule. (b) Longitudinal section
through a portion of the dataset. The side branch and calcific nodule are evident. The scale
bar in B is 1.0 mm. Reprinted from Tearney et al.102
Figure 7.18 Correlation of optical coherence tomography catheter-based pullback images (a, b) within a human cadaveric coronary artery compared with near-infrared fluorescence: (c) cylindrical rendering of the near-infrared fluorescence signal and (d) validation with fluorescence reflectance imaging to identify regions with increased fibrin. See color plates. Reprinted from Yoo et al.
Figure 7.19 (A1–A3) 3D OCT imaging of fiber orientation in the rabbit right ventricular free
wall. (B1–B3) The fiber structure is visible within the en face slices. (C1–C3) Quantification
of the fiber orientation in the plane parallel to the wall. Slightly modified from Fleming et al.116
7.4.3 Oncology
The use of OCT in the field of oncology is increasing, including clinical
applications such as endoscopy and tumor margin detection, and small animal
imaging for the evaluation of tumor growth and response to drugs. The
potential for clinical application of OCT in gastro-intestinal (GI) endoscopy is
particularly promising since many common GI lesions occur within the depth
range of OCT imaging (1–2 mm). Endoscopic ultrasound (EUS), which is
currently used in clinical practice for evaluating larger lesions, lacks the
resolution to image small lesions in this depth range.
OCT catheters have been extensively developed for minimally invasive
procedures such as gastrointestinal endoscopy and intravascular imaging.
For endoscopic imaging, OCT light is delivered to the GI tract via small
diameter catheter probes that are often passed through the accessory
channel of standard GI endoscopes. Probes are usually developed with
a specialized shaft, which is axially flexible and torsionally rigid to
mechanically support the optical elements within the probe housing. The
probe sheath materials are selected for biocompatibility, compatibility with
standard cleaning and sterilization procedures, and optical and mechanical
properties. A variety of designs have been employed, including forward
Figure 7.20 OCT imaging in a developing embryo. (a) 2D OCT image in diastole cut
coronal to the body of the embryo. (b) 2D OCT image in systole cut coronal to the body of the
embryo. (c) Sum voxel projection image in diastole. (d) Sum voxel projection image in
systole. The OCT images clearly show the compact myocardial and endocardial borders of
the heart. CM is the compact myocardium, endo is the endocardium, and CJ is the cardiac
jelly. Reprinted from Jenkins et al.128
Figure 7.21 Catheter schematics of (A) a compound-lens configuration and (B) a single
gradient-index (GRIN) lens configuration. A cylindrical reflector is used to redirect the light by 90 deg and to correct the astigmatism caused by the inner balloon lumen. (C) and (D) are the
calculated focused spot size and the maximum beam diameter within the GRIN lens versus
the GRIN lens pitch number for the compound- and single-lens configurations, respectively.
Reprinted from Xi et al.148
Figure 7.22 Representative real-time OCT images of a pig esophagus in vivo acquired
with the balloon imaging catheter in conjunction with an FDML SS-OCT system. (A) 2-D
snapshot and (B) 3-D cutaway view. Reprinted from Xi et al.148
Figure 7.25 Example of positive tumor margins from patient diagnosed with ductal
carcinoma in situ. Foci of high back reflection (arrows) indicate areas with cancerous cells.
Reprinted from Nguyen et al.157
This would provide rapid feedback to the surgeon as to whether more tissue
needs to be removed.
7.5 Conclusions
Optical coherence tomography has advanced rapidly over the last two
decades. From its origins as a laboratory tool to its present status in a variety
of clinical and scientific applications, the technique has enjoyed rapid
technological growth and commercialization.
Several extensions of OCT technology have been developed that supple-
ment morphological imaging with functional imaging capabilities. In addition to the Doppler flow sensing and polarization measurements described above, functional
imaging research using OCT also includes technologies for analyte concentra-
tions or scatter size determination by spectral analysis,158–162 and tissue
electrical activity.163–166 Ongoing work to develop molecular sensitive/contrast
OCT,167–168 with either absorption or scattering agents, may highlight the pre-
sence of molecular labels in a manner comparable to fluorescence microscopy.
Multimodality imaging is an increasing trend, with
the development of integrated systems with techniques such as multiphoton
microscopy,169 second harmonic generation,170,171 or Raman spectroscopy.172,173
Future applications of OCT in biomedical research, disease diagnosis, and
procedural guidance are vast. With continued and rapid technology
developments, increased imaging speed, high resolution, small-diameter
catheters, automated analysis tools, and complementary functional data, the
future of OCT biomedical and clinical imaging looks bright. The rapid
technological development that has characterized OCT research and its
extension into clinical and basic science applications since its inception shows
no sign of abating.
References
1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson,
W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G.
Fujimoto, “Optical Coherence Tomography,” Science 254(5035), 1178–
1181 (1991).
2. “Optical Coherence Tomography: Technology and Applications,”
ed. W. Drexler and J. G. Fujimoto, Springer (2008).
3. W. Drexler, U. Morgner, F. X. Kartner, C. Pitris, S. A. Boppart,
X. D. Li, E. P. Ippen, and J. G. Fujimoto, “In Vivo Ultrahigh-
Resolution Optical Coherence Tomography,” Optics Letters 24, 1221–
1223 (1999).
4. E. A. Swanson, D. Huang, M. R. Hee, J. G. Fujimoto, C. P. Lin, and
C. A. Puliafito, “High-speed optical coherence domain reflectometry,”
Optics Letters 17, 151–153 (1992).
5. A. M. Rollins and J. A. Izatt, “Optimal interferometer designs for
optical coherence tomography,” Optics Letters 24, 1484–1486
(1999).
6. A. F. Fercher, C. K. Hitzenberger, G. Kamp, and S. Y. El-Zaiat,
“Measurement of intraocular distances by backscattering spectral
interferometry,” Optics Communications 117(1-2), 43–48 (1995).
7. M. Wojtkowski, R. Leitgeb, A. Kowalczyk, T. Bajraszewski, and A. F.
Fercher, “In vivo human retinal imaging by Fourier domain optical
coherence tomography,” J. Biomed. Opt. 7(3), 457–463 (2002).
8. M. Choma, M. Sarunic, C. Yang, and J. Izatt, “Sensitivity advantage of
swept source and Fourier domain optical coherence tomography,” Opt.
Express 11(18), 2183–2189 (2003).
9. R. Leitgeb, C. Hitzenberger, and A. Fercher, “Performance of Fourier
domain vs. time domain optical coherence tomography,” Opt. Express,
11(8), 889–894 (2003).
10. J. F. de Boer, B. Cense, B. H. Park, M. C. Pierce, G. J. Tearney, and B. E. Bouma, “Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography,” Optics Letters 28(21), 2067–2069 (2003).
11. M. Wojtkowski, A. Kowalczyk, R. Leitgeb, and A. F. Fercher, “Full
range complex spectral optical coherence tomography technique in eye
imaging,” Opt. Lett. 27(16), 1415–1417 (2002).
12. R. A. Leitgeb, C. K. Hitzenberger, A. F. Fercher, and T. Bajraszewski,
“Phase-shifting algorithm to achieve high-speed long-depth-range
probing by frequency-domain optical coherence tomography,” Opt.
Lett. 28(22), 2201–2203 (2003).
13. E. Götzinger, M. Pircher, R. Leitgeb, and C. Hitzenberger, “High speed
full range complex spectral domain optical coherence tomography,” Opt.
Express 13(2), 583–594 (2005).
David J. Briers
Kingston University, London, UK
Valery V. Tuchin
Saratov National Research State University, Saratov, Russia
Tomsk National Research State University, Tomsk, Russia
Institute of Precision Mechanics and Control, Russian Academy of Sciences,
Saratov, Russia
8.1 Introduction
The appearance of random interference patterns, or speckle fields, in the spatial
distributions of laser light scattered by weakly ordered media such as tissues, is
probably the most evident and simply observed manifestation of coherence
phenomena in tissue-light interactions. In the case of dynamic light scattering,
when temporal speckle intensity fluctuations are induced by stochastic or
regular motions of the scattering structure elements in the probed tissue volume,
it is reasonable to try to apply the statistical analysis of these fluctuations to the
monitoring or functional imaging of the tissue structure or the dynamics. As a
result, laser speckle methods have recently become one of the most universally
adopted and familiar technologies in biology and medicine (e.g., laser Doppler
flowmetry is an example of such an analysis of dynamic laser speckle as applied
to the monitoring and functional imaging of in vivo blood microcirculation).
typical for the vast majority of biological systems such as individual organelles
and cells, cell aggregates, and whole tissues), the corresponding Doppler shifts
of the individual components of the scattered light will be negligibly small
compared with the probe light frequency. Thus, the resulting spectral
broadening of the probe light passing through the scattering system is usually
classified as a result of quasi-elastic light scattering (QELS) and can be
detected only by means of so-called light-beating spectroscopy.4
QELS methods as applied to the monitoring of dynamic systems are based
mainly on the correlation or spectral analysis of the temporal fluctuations of
the speckle intensity at a fixed detection point; these fluctuations are caused by
the motions of scattering centers in the probed volume. In the case of single
scattering disordered media (i.e., when the scattered optical field can be
considered as the result of the superposition of the statistically independent
components and each of them appears due only to one scattering event), the
spectral width of the speckle intensity fluctuations depends on the scattering
angle that characterizes the detector position with respect to the probe beam
axis and the dynamic parameters of the scattering ensemble. In particular, for
disordered systems of scattering particles that undergo Brownian motion, the
spectral density of the speckle intensity fluctuations has a typical Lorentzian form with a half-width evaluated as D_T|q̄|², where D_T is the self-diffusion coefficient of the scattering particles and |q̄| is the momentum-transfer modulus, equal to (4πn/λ)sin(θ/2), where n is the refractive index of the scattering medium, θ is the scattering angle, and λ is the wavelength of the probe light in free space.
Thus, measurement of the spectral width of the speckle intensity fluctuations
for a given scattering angle allows the determination of the value of the self-
diffusion coefficient of the scattering centers. A similar technique, known as
light-beating spectroscopy or correlation spectroscopy (if the correlation time of
the speckle intensity fluctuations is determined during the QELS experiment) is
widely used for various biophysical and medical applications.4,5
When laser light is used to probe an optically thick medium consisting of
relatively closely packed scattering centers, each contribution to the scattered
optical field should be considered as a result of at least a few scattering
events and, consequently, this single-scattering approach cannot be used to
determine the parameters of the scattering system. In this case, to find the
relationship between the parameters of the experimentally obtained speckle
intensity fluctuations I(t) and the parameters describing the dynamics of the
scattering sites in the probed volume, the following assumptions are usually
made:
1. The temporal autocorrelation function of the scattered field fluctuations
g₁(τ) = ⟨E(t + τ)E*(t)⟩/⟨|E(t)|²⟩ and the corresponding normalized intensity autocorrelation function g₂(τ) = ⟨I(t + τ)I(t)⟩/⟨I(t)⟩² are related to each other by the Siegert relation4

g_2(\tau) = 1 + \beta\,|g_1(\tau)|^{2},  (8.1)
E_{si}(t) = E_0 \exp(j\omega t)\prod_{k=1}^{M_i} a_{ik}\,\exp\{j\,\bar q_{ik}\cdot\bar r_{ik}(t)\},  (8.2)
where E₀ and ω are the amplitude and frequency of the incident probe light, respectively; M_i is the number of scattering events for the i'th scattering sequence; a_{ik} is determined by the amplitude of the scattered light for the k'th scattering event in the i'th scattering sequence; q̄_{ik} is the momentum transfer for the k'th scattering event; and r̄_{ik}(t) is the current position of the k'th scattering site.
It is necessary to note that each partial contribution can be associated with
the effective optical path in the scattering volume and can, therefore, be
characterized by the path length value defined as s_i ≈ M_i l.
Expressing the path-dependent field correlation function as

G_{1s}(\tau) = \langle E_{si}(t+\tau)\,E^{*}_{si}(t)\rangle,

and carrying out the averaging over the pairs of noncorrelated scattering events, one can obtain

G_{1s}(\tau) = \prod_{k=1}^{M_i} |a_{ik}|^{2}\exp(j\omega\tau)\,\big\langle \exp\{j\,\Delta\bar q_{ik}\cdot\Delta\bar r_{ik}(\tau)\}\big\rangle.
For each scattering event, the momentum transfer Δq̄_{ik} and the displacement of the elementary scatterer Δr̄_{ik} for the delay time τ can be considered as statistically independent vectors and, consequently, we can express g_1^{(s)}(τ) = G_{1s}(τ)/G_{1s}(0) as

g_1^{(s)}(\tau) = \exp\!\left[-\frac{1}{3}\,k_0^{2}\,\langle\Delta r^{2}(\tau)\rangle\,\frac{s}{l^{*}}\right],  (8.3)
where the factor exp(jωτ) is excluded without loss of generality, and k₀ is the wavenumber
wavenumber of the probe light in the scattering medium.
To obtain the temporal correlation function of the observed field,
averaging over the ensemble of partial contributions should be done; the final
form of the normalized temporal correlation function can be obtained by
introducing the probability density of the effective optical paths and replacing
the discrete path statistics by continuous statistics:
g_1(\tau) = G_1(\tau)/G_1(0) = \int_0^{\infty}\rho(s)\,\exp\!\left[-\frac{1}{3}\,k_0^{2}\,\langle\Delta r^{2}(\tau)\rangle\,\frac{s}{l^{*}}\right]ds.  (8.4)
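A minimal numerical sketch of Eq. (8.4) is given below, assuming Brownian motion of the scatterers with ⟨Δr²(τ)⟩ = 6D_Bτ; the sampled path-length density ρ(s), the transport length l*, the wavenumber k₀, and the diffusion coefficient D_B are user-supplied quantities, not values from the text.

    import numpy as np

    def dws_field_correlation(tau, s_grid, rho_s, l_star, k0, D_B):
        """Field correlation g1(tau) from Eq. (8.4) for Brownian scatterers,
        <dr^2(tau)> = 6*D_B*tau. rho_s is assumed normalized so that its
        integral over s_grid equals 1."""
        msd = 6.0 * D_B * tau                                  # mean-square displacement
        decay = np.exp(-(k0**2 * msd / 3.0) * (s_grid / l_star))
        return np.trapz(rho_s * decay, s_grid)                 # numerical integral over paths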
Figure 8.5 Schematic of burn depth diagnostics (by Boas et al. taken from Ref. 9).
the underlying layers of tissue where there is blood flow. This will be
manifested as an increase in the slope of the intensity correlation function and,
correspondingly, g1(t).
This promising possibility of DWS burn depth diagnostics was
demonstrated by D. Boas et al.9 using the pig burn model suggested by
Nishioka and Schomacker at the Wellman Institute in Boston (Fig. 8.5).
A He-Ne laser with an output power of 8 mW was used as the light source
in their experiments. The laser light was coupled into a multimode optical
fiber with a core diameter of 200 μm. This fiber delivered the probe light to
the burn surface being diagnosed; after passing through the layer of burned
tissue, the backscattered laser light was collected by the light-collecting system
consisting of a single-mode fiber assembly: several single-mode fibers were
positioned at different distances varying from 0.2 mm to 2.4 mm with respect
to the source fiber. The light-collecting fibers were connected to a photo-
detector [photon-counting photomultiplier tube (PMT)] via an electronically
controlled fiber optical switch. The PMT output signal was processed by a
digital autocorrelator to obtain the temporal correlation function for a given
burn depth and source-detector separation. The burn depth was controlled by
applying a hot metal block (100 °C) to the surface of pig skin for a given duration. In this experiment, five different burn durations, producing correspondingly increasing burn depths (60–100 μm, 400–500 μm, 500–600 μm, 1500–2000 μm, and 2100–2200 μm), were used. The strong dependence of the
Figure 8.6 Temporal field correlation functions obtained from 48-hour-old burns for a
source–detector separation of 800 μm (by Boas et al., reprinted from Ref. 9); the correlation functions for burn durations of 3 s (solid line), 5 s (dotted line), 7 s (dashed line), 12 s
(dot-dash line) and 20 s (dot-dot-dot-dash line) are presented.
decay rate of the correlation function on the burn depth for a given source–
detector separation allows the different grades of tissue burns to be
distinguished (Fig. 8.6).
To summarize the data for all source–detector separations and to
produce the criteria for burn depth estimations, the following technique was
suggested: decay rates of the field correlation functions were determined
for 0 < τ < 100 μs by fitting a line to the data, and these values of decay rate
were plotted against the source-detector separation. The tendencies of the
decay rate behavior can be summarized as follows (Ref. 9, Fig. 8.7): for
shallow burns, the decay rate increases linearly with the source–detector
separation as observed for healthy tissue and as would be expected for a
homogeneous system, i.e., the shallow burn does not perturb the correlation
function. On the contrary, for deeper burns, the decay rate is smaller and no
longer increases linearly with the source–detector separation.
One of the advantages of the developed DWS technique is that it makes it
possible to distinguish burns with thicknesses that vary by 100 μm. However,
it is necessary to have an adequate calibration technique in order to obtain
reliable diagnostic results. One possibility is to use the DWS technique in
combination with other optical technologies (e.g., with diffuse reflectance
measurements in the time domain and/or frequency domain).
In the detection of backscattered light, analytical problems related to the so-called breakdown of the diffusion approximation (see, e.g., Ref. 11), namely the underestimation of ballistic and low-order scattering contributions and the nonphysical behavior of the diffusion-equation solution, cause strong discrepancies between the experimentally observed and the calculated correlation functions of the detected intensity fluctuations.
Figure 8.7 Dependence of the decay rates of the field correlation functions on the source–
detector separations for different burn depths.
distribution of the scatterers.13 Thus, the two approaches are just different
ways of looking at the same physical phenomenon.
Figure 8.8 Basic setup for laser speckle contrast analysis (LASCA).
sides of an odd number of pixels was chosen so that the computed contrast
could be assigned to the central pixel.) The speckle contrast is quantified by the usual parameter of the ratio of the standard deviation to the mean (σ/⟨I⟩) of the intensities recorded for each pixel in the square [see Eq. (8.9)].
The pixel square is then moved along by one pixel and the calculation
repeated: this overlapping of the pixel squares results in a much smoother
image than would be obtained by using contiguous squares, and at little cost
in terms of additional processing time. It must be remembered, though, that
this overlapping of the squares does not lead to an increase in resolution,
which is determined by the size of the square used: there is a trade-off between
spatial resolution and reliable statistics.
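The sliding-window computation described above can be written compactly. The following sketch, with an illustrative 7 × 7 window (not a value prescribed in the text), produces a speckle-contrast map in which each value is assigned to the central pixel of its square.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lasca_contrast_map(speckle_image, window=7):
        """Local speckle contrast K = sigma/<I> computed in an odd-sized moving
        square; overlapping windows give the smoother map discussed in the text."""
        img = speckle_image.astype(float)
        local_mean = uniform_filter(img, size=window)          # <I> in each square
        local_mean_sq = uniform_filter(img**2, size=window)    # <I^2> in each square
        local_var = np.clip(local_mean_sq - local_mean**2, 0.0, None)
        return np.sqrt(local_var) / (local_mean + 1e-12)       # contrast assigned to central pixel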
If the object under investigation contains moving scatterers, such as blood
cells, each speckle will be fluctuating in intensity. A time-integrated image,
therefore, shows a reduction in speckle contrast because of the averaging of
the intensity of each speckle over the integration time. In practice, the
exposure time can be very short, typically 0.02 seconds, and the processing
time is less than one second for the whole frame,33 making it effectively a real-
time technique.
compromise is to arrange for the speckle size to be equal to the pixel size.34
This is achieved by choosing an appropriate setting for the aperture of the
camera lens. However, this removes the possibility of using the aperture to
control the amount of light reaching the detector, the usual function of a
variable lens aperture. Further, the shutter speed cannot be used to do this, as
it is determined by the need to match the velocities being measured, and some
other method of controlling the amount of light reaching the detector must be
found.
8.3.7 Theory
The principle of LASCA is very simple. A time-integrated image of a moving
object exhibits blurring. In the case of a laser speckle pattern, this appears as a
reduction in the speckle contrast, defined (and measured) as the ratio of
the standard deviation of the intensity to the mean intensity. This occurs
whatever is the “movement” of the speckle. For random velocity distribu-
tions, each speckle fluctuates in intensity. For lateral motion of a solid object,
on the other hand, the speckles also move laterally and become “smeared” on
the image, but a reduction in speckle contrast still occurs. For fluid flow, the
situation might be a combination of both these types of “movement.” In each
case, the problem for quantitative measurements is the establishment of a
relationship between speckle contrast and velocity (or velocity distribution).
It is clear that there must be a link between the flow velocity and the
amount of blurring. The higher the velocity, the faster are the fluctuations and
the more blurring occurs in a given integration time. By making certain
assumptions, the following mathematical relationship can be established
between speckle contrast and the temporal statistics of the fluctuating
speckle:21
\sigma_s^{2}(T) = \frac{1}{T}\int_0^{T}\tilde g_2(t)\,dt,  (8.10)
where σ_s² is the spatial variance of the intensity in the speckle pattern, T is the integration time, and g̃₂(t) is the autocovariance of the temporal fluctuations
in the intensity of a single speckle. The autocovariance is a normalized version
of the autocorrelation function and is defined as follows:
time-varying speckle techniques measure the quantity on the right side. (It is
also worth noting that LASCA uses image speckle, whereas most of the
temporal techniques use far-field speckle. However, this does not detract from
the fundamental equivalence of the two approaches expressed in Eq. (8.11)).
Provided the assumptions made in establishing Eq. (8.11) are valid,
LASCA is now on an equal footing with all of the temporal techniques
(photon correlation, Doppler, time-varying speckle) so far as linking the
measurements to actual velocities is concerned. All the techniques now allow
the correlation time τ_c to be determined. In the case of photon correlation, this
parameter is measured directly. In the case of LASCA, some further
assumptions must be made in order to link the measurement of speckle
contrast (defined as σ_s/⟨I⟩) with τ_c.
Various models can be used, depending on the type of motion being
monitored. For the case of a Lorentzian velocity distribution, for example, the
equation becomes21
\frac{\sigma_s}{\langle I\rangle} = \left\{\frac{\tau_c}{2T}\left[1-\exp\!\left(-\frac{2T}{\tau_c}\right)\right]\right\}^{1/2}.  (8.12)
Equation (8.12) (and similar equations for other types of scatterer motion) relates the speckle contrast, defined as the ratio of the standard deviation of the speckle pattern intensity to its mean intensity, to the correlation time τ_c for a given integration time T. The relationship is plotted in Fig. 8.9. Figure 8.9 shows that the speckle contrast rises from near zero to near its maximum value of 1 over about three orders of magnitude of τ_c (and hence of velocity). (For a single exposure, of course, T is a constant.) For flow velocities corresponding to values of τ_c less than about 0.04T, the speckle contrast is very low, i.e., the speckles are completely blurred out by the
Figure 8.9 Variation of speckle contrast with the ratio of decorrelation time to integration time for the Lorentzian model of LASCA.
motion. For velocities corresponding to values of τ_c greater than about 4T, the
speckle pattern has high contrast. Between these limits, flow velocity will be
mapped as variations in speckle contrast. If a different velocity distribution is
assumed, for example, a Gaussian or a uniform distribution, the curve is
slightly different but still shows this characteristic S-shape. The dynamic range
– the range of velocities that can be covered – remains similar.
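In practice, Eq. (8.12) is inverted numerically to map a measured contrast onto a correlation time. A minimal sketch, assuming the Lorentzian model and a measured contrast strictly between 0 and 1, is given below; the bracketing interval is an arbitrary illustrative choice.

    import numpy as np
    from scipy.optimize import brentq

    def contrast_lorentzian(tau_c, T):
        """Speckle contrast predicted by Eq. (8.12) for a Lorentzian velocity distribution."""
        x = tau_c / (2.0 * T)
        return np.sqrt(x * (1.0 - np.exp(-1.0 / x)))

    def tau_c_from_contrast(K, T):
        """Recover the correlation time tau_c from a measured contrast K (0 < K < 1)
        at exposure time T by root finding; a sketch only."""
        f = lambda tau_c: contrast_lorentzian(tau_c, T) - K
        return brentq(f, 1e-6 * T, 1e6 * T)   # assumed search window around T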
Other problems with the statistics occur as a result of such effects as the
Gaussian profile of the laser beam and the nonlinearity of the CCD camera.28
However, many of these problems also affect the laser Doppler, photon
correlation, and other time-varying speckle techniques.
Other parameters can be varied. The effect of using different exposure
times (integration times) has been investigated and times as short as 1 ms have
been used. Wavelengths other than 633 nm could also be used. There is a need
in some clinical situations, for example, eczema and other dermatological
problems, to measure the superficial dermal blood flow.35 This has prompted
several groups to use other wavelengths with the laser Doppler technique.35–37
Penetration of tissue is highly wavelength-dependent, with infrared light
penetrating the furthest (several mm) and green and blue hardly at all
(typically 150 μm for green light).
Figure 8.10 LASCA image of part of a forearm, showing increased perfusion around the
site of a hot-water burn (Briers and Webster, 1996).27 See color plates.
Figure 8.11 Raw image of part of a rat cortex (left) and its processed version (right)
(source: D. Boas, Harvard Medical School).
Figure 8.12 Blood-flow changes during stroke: relative cerebral blood flow ten minutes
after occlusion of the middle cerebral artery in a rat, demonstrating the spatial gradient in the
blood flow deficit due to the ischemia (source: A. K. Dunn, University of Texas). See color plates.
examples are just a selection from the large amount of work that is going on; the list is not meant to be exhaustive. We are aware of more than forty groups
in at least eighteen countries who are working on or with the LASCA
technique.)
The intensive activity involving the LASCA technique in recent years has,
of course, been accompanied by improvements in the images produced. Two
examples are reproduced here. Figure 8.11 is from David Boas’ group at
Harvard Medical School, and Fig. 8.12 is from Andrew Dunn at the
University of Texas. In both cases the improvement in the quality of the
images is clear.
Figure 8.13 This image, produced by the Moor Instruments laser perfusion imager (FLPI),
shows the mesenteric vessels in a frog. The area shown is approximately one centimeter
square, and the gut runs around the right periphery of the image. In this investigation, the
user is assessing angiogenic growth in mesenteric vessels caused by a virus. This image
was taken before injection of the virus. (source: Moor Instruments Ltd: www.moor.co.uk)
See color plates.
Figure 8.14 A composite image produced by the Moor Instruments laser perfusion
imager (FLPI). The series of images of the hand shows the base-line blood flow, the
changes during a partial occlusion using a pressure cuff, and the flow following pressure
release. The images were taken at 3-s intervals. The single image window is from a video
sequence taken at 25 frames per second with a time constant of 0.3 s. The single point
measurement is taken from a finger tip. (source: Moor Instruments Ltd: www.moor.co.uk)
See color plates.
8.3.11 Conclusions
To summarize, the LASCA technique (also known as laser speckle contrast
imaging or laser speckle imaging) offers a full-field, real-time, noninvasive,
and non-contact method of mapping flow fields such as capillary blood flow.
In its basic form it uses readily available off-the-shelf equipment. Laser
Doppler, photon correlation spectroscopy, and time-varying speckle are
related techniques, but work by analyzing the intensity fluctuations in
scattered laser light. As they are essentially methods that operate at a single
point in the flow field, some form of scanning must be used if a full-field
velocity map of the flow area is required. Typical scanning laser Doppler
systems take some minutes to complete this scan. LASCA achieves this goal in
a single shot by utilizing the spatial statistics of time-integrated speckle. The
technique produces a false-color map of blood flow in a fraction of a second,
without the need to scan. Thus, LASCA is a truly real-time technique. In
addition, the LASCA system is much more cost effective and very simple to
operate. The main disadvantage of LASCA compared with the other
techniques is the loss of resolution caused by the need to average over a
block of pixels in order to produce the spatial statistics used in the analysis. In
principle, at least, the time-domain techniques can operate on a single pixel.
However, in practice, even this disadvantage is not always present, as many of
the temporal methods use sampling in order to speed up the processing. In any
case, the advantage of real-time operation without scanning probably
outweighs the problem of loss of resolution, as demonstrated by the current
global interest in the technique. Finally, a more complete account of the
history and development of LASCA can be found in some review papers.61,62
Recent advances in laser speckle contrast analysis for tissue functional
imaging and characterization are related to modifications of the speckle data
acquisition procedures and implementation of novel data processing
algorithms. These approaches will be discussed in Sec. 8.5.
Figure 8.15 Burn depth diagnostics by means of a localized probe light source; optical
properties (scattering coefficient, absorption coefficient, and mean cosine of the scattering
angle) for both layers are identical.
optical path, associated with the center line of the “banana” region, in the case
of the weak absorption limit r√(3μ_aμ_s(1 − g)) ≪ 1, can be expressed as

z_{\mathrm{mod}}(x) \approx \frac{1}{\sqrt{8}}\left[\sqrt{\big(x^{2}+(r-x)^{2}\big)^{2}+32\,x^{2}(r-x)^{2}}-x^{2}-(r-x)^{2}\right]^{1/2}.  (8.13)
Here r is the separation between the source and the observation point; the
x axis is directed along the surface, and the z axis is oriented normally to the
surface.
The central line of the “banana”-shaped region is shown as the dotted line. The maximum penetration depth of the most probable photon trajectory is reached at x = r/2 and is equal to z^{mod}_{max} ≈ √2·r/4. On the other hand, in the strong absorption limit r√(3μ_aμ_s(1 − g)) ≫ 1, the form of the most probable trajectory can be expressed as

z_{\mathrm{mod}}(x) \approx \sqrt{\frac{2x(r-x)}{\sqrt{3\mu_a\mu_s(1-g)}\;r}},  (8.14)

and the maximum penetration depth, reached at x = r/2, is equal to z^{mod}_{max} ≈ √(r/2k), where k = √(3μ_aμ_s(1 − g)).
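The following short sketch evaluates Eq. (8.13) numerically (weak-absorption limit, arbitrary length units); at x = r/2 it reproduces the quoted maximum depth of √2·r/4.

    import numpy as np

    def banana_depth_weak_absorption(x, r):
        """Most probable photon penetration depth z_mod(x) between a source at 0
        and an observation point at separation r, Eq. (8.13). Sketch only."""
        a = x**2 + (r - x)**2
        return np.sqrt(np.sqrt(a**2 + 32.0 * x**2 * (r - x)**2) - a) / np.sqrt(8.0)

    # e.g., banana_depth_weak_absorption(0.5, 1.0) gives ~0.354, i.e., sqrt(2)/4 for r = 1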
It is obvious that for the burn model used (Fig. 8.16), consisting of a layer
of motionless, stationary scatterers (upper layer, burned tissue with suppressed
blood microcirculation) and an underlying layer of normal tissue with normal
blood microcirculation, the scattered light at the observation point is a
mixture of Doppler-shifted components induced by the deeper penetration of
the probe light into the tissue layers with moving scatterers (erythrocytes), and
nonshifted components propagating nearer the surface. The fraction f of
Doppler-shifted photons in the total observed optical signal will determine the
ratio of the fluctuating part of the detected speckle intensity to its average
value (i.e., the modulation depth of the detected optical signal). In particular,
the relation between the fraction of the Doppler-shifted component of the
scattered light and the modulation depth of the detected signal has been
studied by Serov, Steenbergen, and De Mul63; it was found that the
modulation depth, estimated as the ratio of the variance of the fluctuating part of the detected signal to the square of its mean value, can be evaluated as f(2 − f)/N, where N is the number of speckles within the detector aperture.
A corresponding dependence obtained in experiments with phantom layered
scattering media consisting of an aqueous suspension of Intralipid (used as the
nonstationary scattering medium) with different concentrations, and layers of
clear tape as the static scatterer, is shown in Fig. 8.17.
The dependence of the penetration depth on the distance between the
localized light source and the observation point leads to a strong influence of
the observation point position on the fraction of Doppler-shifted photons and,
Figure 8.17 Value of modulation depth normalized by N against the fraction of the
Doppler-shifted component of the scattered optical field (reprinted from Ref. 63). Object
under study—aqueous suspension of Intralipid in container plus a few layers of clear tape.
Open symbols correspond to different concentrations of Intralipid in suspension; solid line:
theoretical curve. Probe used: commercial blood perfusion monitor (Periflux 4000, Perimed)
with a standard optical fiber probe (Probe 408, Perimed). Source-detector separation:
0.25 mm.
Figure 8.19 The experimental setup for recording time-integrated speckle patterns with a
varying ratio of the Doppler-shifted and non-shifted contributions induced by laser light
scattering in human skin. 1: localized light source (focused laser beam or optical fiber);
2: layered tissue with differing levels of blood perfusion for each level; 3: variable diaphragm;
4: backscattered light; 5: CCD camera used to record dynamic speckle pattern in the
diffraction plane.
Figure 8.20 Time-dependent contrast vs. exposure time for different diameters of the
exposed area of probed skin. Object: fingernail bed; patient: 23-year-old male.
Figure 8.21 “Residual” contrast as a function of the diaphragm diameter. Object: forearm;
patient: 44-year-old male.
Figure 8.22 Determination of the depth of burned tissue by measurement of the diameter
of the time-averaged image of the speckle pattern (reprinted from Ref. 64). Photons that
travel to depth x1 in the static layer produce a speckle pattern upon returning to the surface,
whereas photons that travel to depth x2 in the modulated layer produce a time-averaged
blurred pattern at the surface. The probability of a photon returning to the surface further
from the point of incidence increases with the depth that it travels. Integrating over depth
yields the speckle line profiles P(r), which change from speckle to a blurred pattern at some
value r. This value defines the speckle pattern radius that is dependent on the thickness of
the static layer. The transition point is found from an empirically defined threshold.
Figure 8.23 Scheme of a full-field speckle correlometer with a localized source of probing
radiation and spatial filtration of scattered radiation in the image plane of the optical
system:65 (1) laser; (2) lens; (3) probed sample; (4) objective; (5) CMOS or CCD camera.
time τ averaged over the ensemble of scatterers; and P(S) is the probability density function of the paths of the scattered field components S = s̄/l*, normalized to l*. For stationary and ergodic scattering systems, the normalized autocorrelation function of the scattered intensity fluctuations g₂(τ) = ⟨{I(t) − ⟨I⟩}{I(t + τ) − ⟨I⟩}⟩/⟨{I(t) − ⟨I⟩}²⟩ is related to |g₁(τ)| by the Siegert relation g₂(τ) = β|g₁(τ)|², where β is a constant factor determined by the
conditions of detection. The use of a localized source of probing radiation and
the spatial filtration of detected radiation make possible discrimination of the
partial components with respect to the path length s. Using Eq. (8.15), we can
eventually estimate the correlation time of the intensity fluctuations for a uniform translational motion of the medium in the selected zone as τ_c ≈ 1.22/(kv√S̄), where v is the velocity of the medium, and S̄ is the average weighted normalized path for the analyzed ring segment of the surface. In the case of probing light wavelengths λ₁ and λ₂, the ratio of the correlation times for identical detection conditions is estimated as τ_{c,λ1}/τ_{c,λ2} ≈ λ₁√(S̄_{λ2})/[λ₂√(S̄_{λ1})]. In the case of a ring zone with a radius obeying the condition R > l*, where r is the effective radius of the source of diffuse radiation generated in the volume of the medium, we can express the optical path in the diffusion regime of radiation propagation as s̄ = S̄l* ≈ K₁R²/l*, where K₁ is the normalization coefficient; accordingly, τ_{c,λ1}/τ_{c,λ2} ≈ λ₁l*_{λ1}/(λ₂l*_{λ2}). For the
optical scheme presented in Fig. 8.15, the correlation time of the fluctuations of subjective speckles in the image plane is related to the value of τ_c determined in the object plane (near the surface of the medium) by the transform τ′_c = K₂τ_c, where K₂ is a factor determined by the parameters of the optical system. It can be shown that K₂ = K₃F(M + 1)/(√2·πqM), where K₃ is the normalization coefficient, F is the focal distance, q is the pupil radius, and M is the magnification employed. In order to determine the absolute value of l* from the measured correlation time τ′_c, it is necessary to know S̄ for a given
Figure 8.24 (a) Plots of τ′_c·kv vs. R65 for (1) λ = 633 nm, v = 0.1 mm/s; (2) λ = 633 nm, v = 0.19 mm/s; (3) λ = 633 nm, v = 0.29 mm/s; (4) λ = 405 nm, v = 0.1 mm/s; (5) λ = 405 nm, v = 0.19 mm/s; (6) λ = 405 nm, v = 0.29 mm/s ((τ′_{c,633}·k_{633}v)/(τ′_{c,405}·k_{405}v) = l*_{633}/l*_{405} ≈ 2.87 at R = 1.875 mm). (b) Radial distributions I(R/r)/I(0): (1–4) simulations for r/l* = 4, 8, 16, and 32, respectively; (5, 6) experiment for λ = 633 and 405 nm, respectively. Best-fit parameters r_{633} ≈ 1120 ± 70 μm and r_{405} ≈ 1050 ± 60 μm yield l*_{633} ≈ 195 ± 10 μm and l*_{405} ≈ 68 ± 4 μm.
The values of μ′_s = 1/l* for PTFE obtained from the analysis of the full-field speckle-correlometric data and of the radial distributions of scattered intensity agree well with those obtained from diffuse transmission measurements using an integrating sphere. It should be noted that, in addition to determining μ′_s,
the proposed approach can be used for evaluating other parameters (in
particular, refractive index) from an analysis of the dependences of the
correlation time and average intensity of scattered radiation on the radius of
the ring filter. The characteristics of a probed medium that can be evaluated
using this approach also include the parameters of mobility of scatterers
(average velocity or coefficient of translation diffusion of particles in the
medium).
These techniques are similar to each other in their basic principles and differ only in the details of raw data processing. Thus, one should not expect any qualitative advantage of one modification over another; the most appropriate choice for a given case should be made by considering the particular measurement conditions.
One of the original approaches to full-field speckle contrast analysis (the so-called multi-exposure speckle imaging, MESI, Ref. 75) is based on the application of a pulsed illumination laser with variable pulse duration. In this case, the camera exposure time is kept constant, and the effective speckle integration time is determined by the pulse duration.
Further development of various modifications of the laser speckle contrast
analysis in spatial and/or temporal domain is related to the development of
more sophisticated data processing algorithms. A typical example is the adaptive computation of contrast for laser speckle contrast analysis (adLASCA) technique considered in Ref. 76. In the preprocessing stage of this technique, histogram equalization is implemented to increase the dynamic range of the input intensity. As an example, a raw speckle image of mouse skin vasculature and its preprocessed version are shown in Figs. 8.25(a) and (b).
Next, image registration is performed to overcome the inevitable internal disturbances due to respiration and heartbeat of the animal. The first frame is considered as the reference image, and subsequently all other frames
Figure 8.25 (a) Raw image of mouse skin vasculature, (b) histogram-equalized image, (c) zoomed time-series speckle images, before registration (top) and after registration (bottom).
Taken from Ref. 76.
where

r_{ff_i}(x_0, y_0) = \sum_{x,y} f(x,y)\,f_i^{*}(x - x_0,\, y - y_0) = \sum_{u,v} F(u,v)\,F_i^{*}(u,v)\exp\!\left[i2\pi\left(\frac{u x_0}{M} + \frac{v y_0}{N}\right)\right].

Here, r_{ff_i} is the cross-correlation between the first frame f(x, y) and each of the remaining frames f_i(x, y), i = 2, 3, …, n, in the time-series image data; (x₀, y₀) denotes the translation between each pair of images; M and N are the image dimensions; and uppercase letters represent the discrete Fourier transforms (DFTs) of their lowercase counterparts. The algorithm uses a matrix-multiplication implementation of the 2D DFT to refine the initial peak location.77 It allows an upsampled version of r_{ff_i}(x₀, y₀) to be computed without zero padding of F(u,v)F_i*(u,v). The upsampled cross-correlation (by a factor κ) is derived within a 1.5 × 1.5 pixel neighborhood by multiplying three matrices with dimensions (1.5κ, N), (N, M), and (M, 1.5κ).77 Rigid translational parameters (Δx_i, Δy_i) for each frame (i = 2, 3, …, n) are computed to align each of the histogram-equalized images.
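A minimal sketch of the registration step is given below; it estimates only the integer-pixel translation from the Fourier-domain cross-correlation, while the matrix-multiplication DFT upsampling of Ref. 77 used for subpixel refinement is omitted.

    import numpy as np

    def estimate_shift(ref, frame):
        """Integer-pixel estimate of the rigid translation between two frames via the
        cross-correlation theorem; subpixel refinement is not implemented here."""
        F_ref = np.fft.fft2(ref)
        F_frm = np.fft.fft2(frame)
        xcorr = np.fft.ifft2(F_ref * np.conj(F_frm))          # r_{ff_i}(x0, y0)
        peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
        size = np.array(ref.shape, dtype=float)
        shifts = np.array(peak, dtype=float)
        # wrap shifts larger than half the frame size to negative displacements
        shifts[shifts > size / 2] -= size[shifts > size / 2]
        return shifts                                          # (dy, dx) in pixels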
It should also be noted that simple contrast analysis is influenced by the presence of corrupting speckles. The effect of these unwanted speckles is very hard to reduce because the vascular information also appears in the form of speckles. The contrast analysis algorithm that reduces the effect of corrupting speckles while preserving the structural and functional integrity of the microvasculature is based on the hypothesis that the spatial locations of high-frequency corrupting speckle (coherent speckle noise) change in successive time frames without maintaining any statistical correlation. In contrast, changes in the dynamic speckle pattern for slowly moving scatterers bear a strong correlation within a local volume of pixels. For faster movements, the contrast is reduced and the corresponding areas become blurred in time-integrated speckle images. In a local volume of pixels, the statistical properties of such corrupting speckles differ from those of the dynamic speckle pattern. Adaptive computation of contrast helps to reduce such unwanted speckles based on the local speckle statistics in a spatiotemporal domain, and it helps to maintain the integrity of the dynamic speckle pattern in successive speckle images. In general, a low-pass
filter can reduce the energy of speckles. A modified Gaussian low-pass filter
function is given as follows:
where F(u,v) is the Fourier transform of f(x,y), and K₁ and K₂ are the filtering
weights, where K1 is kept at 1, and K2 is derived as the average of Kx,y,t over a
predefined number of time frames. Kx,y,t for the tth time frame is computed
adaptively as
K_{x,y,t} = \frac{1}{\sigma^{2}_{i,j,t}}\,\frac{\bar g^{2}_{i,j,t}-\sigma^{2}_{i,j,t}}{1+\sigma^{2}_{t}},  (8.19)
where σ²_{i,j,t} and σ²_t are the local noise variance in the moving window and the global noise variance for the t'th time frame, respectively, and ḡ_{i,j,t} is the average intensity in a 5 × 5 pixel moving window. The global noise variance in the t'th
time frame is computed from
\sigma^{2}_{t} = \sum_{i,j}\frac{\sigma^{2}_{i,j,t}}{\bar g^{2}_{i,j,t}}.  (8.20)

The temporal speckle contrast at pixel (i, j) is then computed as

C_{tLASCA}(i,j) = \frac{1}{9}\sum_{r=i-1}^{i+1}\sum_{c=j-1}^{j+1}\frac{\delta_{r,c,t}}{\langle I_{r,c,t}\rangle},  (8.21)
where δ_{i,j,t} and ⟨I_{i,j,t}⟩ are the standard deviation and mean intensity, respectively, of the pixel values at (i, j) over the n frames.
As a result, this approach allows the reconstruction of considerably less noisy speckle contrast images than other techniques provide (see Fig. 8.26).
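For comparison with the spatial-domain LASCA sketch given earlier, the temporal contrast of Eq. (8.21) can be computed along the following lines; this simplified sketch omits the adaptive filtering of Eqs. (8.19) and (8.20) and is not the adLASCA implementation of Ref. 76.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def tlasca_contrast(stack):
        """Temporal speckle contrast in the spirit of Eq. (8.21): per-pixel temporal
        standard deviation over mean for a stack of n frames (shape: n x rows x cols),
        followed by a 3x3 spatial average (the 1/9 double sum)."""
        mean_t = stack.mean(axis=0)                     # <I_{i,j,t}> over the n frames
        std_t = stack.std(axis=0)                       # delta_{i,j,t} over the n frames
        per_pixel = std_t / (mean_t + 1e-12)
        return uniform_filter(per_pixel, size=3)        # average over the 3x3 neighborhood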
Figure 8.26 Comparative study of different contrast analysis techniques: (a) classical LASCA, (b) LSPI, (c) t-LASCA, and (d) adLASCA for the retinal microvasculature of a normal mouse.77 It is clearly evident from the marked regions that the distinguishability of fine vessels is superior in the adLASCA image compared with the other contrast analysis techniques.
V = \left\{\int_0^{\infty}\!\!\int_0^{\infty} S(\lambda)\,S(\lambda')\,\left\langle \exp\!\left[2\pi j s\left(\frac{1}{\lambda}-\frac{1}{\lambda'}\right)\right]\right\rangle d\lambda\,d\lambda'\right\}^{1/2}\Big/\int_0^{\infty} S(\lambda)\,d\lambda,  (8.22)
where S(λ) is the spectral density of the light source used, and s is the value of the effective optical path of an individual component of the scattered field. The expected value of the phase term, or characteristic function ⟨exp[2πjs(1/λ − 1/λ′)]⟩, can be expressed by introducing the probability density function of the effective optical paths ρ(s) as

\left\langle \exp\!\left[2\pi j s\left(\frac{1}{\lambda}-\frac{1}{\lambda'}\right)\right]\right\rangle = \int_0^{\infty}\rho(s)\,\exp\!\left[2\pi j s\left(\frac{1}{\lambda}-\frac{1}{\lambda'}\right)\right]ds.  (8.23)
The path length statistics described by ρ(s) depend on the illumination and
detection conditions as well as on the geometry of the probed scattering
system. In order to analyze its influence on the contrast of partially coherent
speckles as a visualization parameter, a random walk approach can also be
used.78,79 In this case, one can interpret random interference patterns such as
speckle fields as the result of the superposition of noncorrelated contributions
caused by sequences of scattering events. In this case, the coordinate-
dependent part of the complex amplitude of the scattered field at an
arbitrarily chosen point can be expressed by using the scalar approximation as
E(\bar r) = \sum_k a_k \exp\{j\phi_k(\bar r)\},  (8.24)
where the summation over l is carried out from 1 to (N² − N)/2. In this notation, the statistical properties of the intensity fluctuations are controlled by the distribution of the path length differences Δs_l. The second term in Eq. (8.26) can be considered as the result of interference between the k'th and k′'th
components of the scattered field. Thus, if a partially coherent source is used
for the probing of the scattering medium and this source is characterized by
the normalized coherence function γ(Δs), then Eq. (8.26) should be replaced by the following equation:

I(\bar r) = N + 2\sum_l \cos(2\pi\Delta s_l/\lambda)\,|\gamma(\Delta s_l)|.  (8.27)

Squaring this expression gives

I^{2}(\bar r) = N^{2} + 4N\sum_l \cos(2\pi\Delta s_l/\lambda)\,|\gamma(\Delta s_l)| + 4\sum_l\sum_{l'} \cos(2\pi\Delta s_l/\lambda)\cos(2\pi\Delta s_{l'}/\lambda)\,|\gamma(\Delta s_l)|\,|\gamma(\Delta s_{l'})| + 4\sum_l \cos^{2}(2\pi\Delta s_l/\lambda)\,|\gamma(\Delta s_l)|^{2},

where l ≠ l′ in the double sum.
Taking the number of contributions as infinite, N → ∞, and considering the transition from the discrete model of a scattering medium that is characterized by a finite number of contributions to the “continuous” description, we can introduce the probability density function of the path length differences ρ(Δs) and write the second-order statistical moment of the scattered light intensity as
i.e., ρ(Δs) can be expressed as the autocorrelation of ρ(s). Thus, the contrast of the multiply scattered speckle pattern can be expressed as

V = \sqrt{\langle I^{2}(\bar r)\rangle/\langle I(\bar r)\rangle^{2} - 1} \approx \left[\int_0^{\infty}\{1+\cos(4\pi\Delta s/\lambda)\}\,|\gamma(\Delta s)|^{2}\,\rho(\Delta s)\,d(\Delta s)\right]^{1/2}.  (8.29)
For a fully coherent illumination source with |γ(Δs)| equal to 1 for all possible values of the path length differences, the contrast value introduced in Eq. (8.29) is equal to 1 independently of the path length statistics. On the other hand, for broadband light sources, V → 0. In the intermediate case, when the coherence length of the light source used is comparable to the spatial scales characterizing the ensemble of the path length differences (e.g., the root mean square or modal value), the speckle contrast varies between 1 and 0.
The statistics of the path length differences described by the probability
density function ρ(Δs) can be analyzed by using the diffusion approximation,
or, in the case of nondiffuse regimes of the probe light propagation in the
scattering object, when the diffusion approximation fails,11 by means of a
Monte Carlo simulation. In our case, the latter approach has been used to
study the influence of the scattering and detection conditions on the contrast
of the partially coherent speckles induced by the probe light being scattered by
macroscopically inhomogeneous media.
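As a simple numerical illustration of Eq. (8.29), the sketch below computes the contrast of partially coherent speckles for a given sampled distribution ρ(Δs) of path-length differences, assuming, purely for illustration, a Gaussian coherence function |γ(Δs)| = exp[−(Δs/l_c)²] with a hypothetical coherence length l_c.

    import numpy as np

    def partial_coherence_contrast(ds, rho_ds, coherence_length, wavelength):
        """Speckle contrast from Eq. (8.29); rho_ds is assumed normalized so that
        its integral over ds equals 1, and |gamma| is taken as Gaussian."""
        gamma = np.exp(-(ds / coherence_length) ** 2)
        integrand = (1.0 + np.cos(4.0 * np.pi * ds / wavelength)) * gamma**2 * rho_ds
        return np.sqrt(np.trapz(integrand, ds))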
One possible way of controlling the coherence length of a probe light source is to vary the pumping current of a laser diode below the lasing threshold.79 Another way is to apply a frequency-modulated laser diode above the lasing threshold (for example, based on the Littrow scheme with a piezoelectric actuator). If the speckle integration time is much larger than the period of the laser frequency modulation, the variations of the modulation depth provide an effect similar to that of a decrease of the coherence length of a continuous-wave (CW) source. This approach has been termed “blink speckle spectroscopy.”80
It should be noted that the question of the potential applications of partially coherent speckle analysis to medical imaging is still open. However, such an analysis plays an important role in other speckle techniques in biomedical applications, especially those based on measurements of the time-dependent contrast of speckles obtained with a light source of finite coherence length. Evaluation of the additional contrast decay due to the multiple scattering of the partially coherent probe light improves the accuracy of speckle-analysis-based diagnostic techniques (e.g., in the case of blood flow monitoring).
Here we discuss some peculiarities of the speckle contrast analysis with the
use of a bundle-based full-field instrument applied to monitor the thermal
modification of collagenous tissue (cartilage).86
Figure 8.27 displays the scheme of the considered full-field speckle
analyzer with a fiber optic bundle as a light-delivering unit and speckle
detection in the diffraction zone. A probed object is illuminated by a
collimated laser beam. The area of interest on the object surface is imaged by
lens L1 on the input tip of the bundle. The dynamic speckle pattern is captured
with a CCD camera without a lens, which is placed behind the output tip of
the bundle at a distance Z ≫ d²/λ (d is the diameter of a single fiber in the bundle, and λ is the wavelength of light). For these conditions, the transformation
of the scattered light in the optical system is the result of:
1. spatial filtration of the optical field in the object plane by the image-
transferring component L1,
2. spatial sampling and random-phase modulation of propagating light
by the fiber optic bundle, and
3. diffraction in free space between the output tip of the fiber and the
CCD sensor.
Such sequential transformation of the speckle field causes changes in the
correlation properties of detected light with respect to that of light leaving
the object. The problem of correlation transformation can be addressed by the
general problem of coherence transfer in optical systems.87 We consider the
effect of light transfer in the optical system (Fig. 8.27) on the spatial-temporal
correlations of the detected light in the framework of a scalar approach.
Despite the limitations of the scalar model in the case of depolarization of
incident radiation by the probed medium and the fiber optic bundle, it will
allow us to find basic relationships between the transfer properties of the
G_1(\bar X_1, \bar X_2, t_1, t_2) = \int\!\!\int \langle E(\bar x_1, t_1)\,E^{*}(\bar x_2, t_2)\rangle\,K(\bar X_1, \bar x_1)\,K^{*}(\bar X_2, \bar x_2)\,d\bar x_1\,d\bar x_2.  (8.30)
Here, G_1(\bar{X}_1, \bar{X}_2, t_1, t_2) is the correlation function of the field fluctuations in the image plane, E(\bar{x}, t) is the field amplitude in the object plane, and K(\bar{X}, \bar{x}) is the point-spread function of the image-transferring component under coherent illumination; we assume the statistical homogeneity and ergodicity of the spatial fluctuations of E(\bar{x}, t). In the case of Gaussian statistics of Re{E(\bar{X}, t)} and Im{E(\bar{X}, t)} and ⟨E(\bar{x}, t)⟩ = 0, the Siegert equation is valid, and the normalized temporal correlation functions of the intensity and field fluctuations at an arbitrarily chosen point \bar{X} are related to each other as90

g_2(\tau) = \frac{\langle I(\bar{X}, t)\, I(\bar{X}, t+\tau)\rangle}{\langle I(\bar{X}, t)\rangle^2} = 1 + |g_1(\tau)|^2, \qquad (8.31)

where g_1(\tau) = G_1(\tau)/G_1(0) = \langle E(\bar{X}, t)\, E^*(\bar{X}, t+\tau)\rangle / \langle |E(\bar{X}, t)|^2\rangle.
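The Siegert relation can be checked numerically. The following minimal sketch (not taken from the chapter) simulates a complex Gaussian field with an assumed exponential correlation, builds g_2(τ) from its intensity record per Eq. (8.31), and compares it with 1 + |g_1(τ)|²; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, tau_c = 100_000, 1e-4, 5e-3        # samples, time step [s], field correlation time [s]

# Complex Gaussian field with exponential correlation (an AR(1)/Ornstein-Uhlenbeck model)
rho = np.exp(-dt / tau_c)
e = np.zeros(n, dtype=complex)
e[0] = rng.normal() + 1j * rng.normal()
for k in range(1, n):
    e[k] = rho * e[k - 1] + np.sqrt(1 - rho**2) * (rng.normal() + 1j * rng.normal())

intensity = np.abs(e) ** 2

def g2(i, max_lag):
    """Normalized intensity autocorrelation <I(t) I(t + tau)> / <I>^2."""
    return np.array([np.mean(i[: len(i) - k] * i[k:]) / i.mean() ** 2 for k in range(1, max_lag)])

lags = np.arange(1, 100) * dt
g2_est = g2(intensity, 100)
g2_siegert = 1.0 + np.exp(-2.0 * lags / tau_c)   # Siegert prediction 1 + |g1(tau)|^2
print(np.abs(g2_est - g2_siegert).max())          # small residual for a long record
```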
In the case of a conventional full-field speckle technique, time averaging of the speckle patterns in the image plane is used, and the contrast of the averaged speckle pattern is introduced to characterize the speckle decorrelation caused by scatterer dynamics:

V_{\tilde{T}} = \left\{ \frac{1}{\tilde{T}} \int_0^{\tilde{T}} \bigl[\,g_2(t) - 1\,\bigr]\, dt \right\}^{0.5}, \qquad (8.32)
where T̃ is the exposure time used to capture the speckles. Usually, the fixed value of T̃ is set comparable to the speckle decorrelation time, and the local estimates of V_T̃ reflect the relative changes in scatterer dynamics across the analyzed region.
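The exposure-dependent contrast of Eq. (8.32) is easy to explore numerically. The following minimal sketch (not the authors' code) assumes a simple exponential model for g_2(t) − 1 with decorrelation time τ_c and evaluates V_T̃ for several exposure times; all values are illustrative.

```python
import numpy as np

def contrast(T, tau_c, n_steps=2000):
    """V_T = { (1/T) * integral_0^T [g2(t) - 1] dt }^0.5, Eq. (8.32)."""
    t = np.linspace(0.0, T, n_steps)
    g2_minus_1 = np.exp(-2.0 * t / tau_c)     # assumed model for g2(t) - 1
    return np.sqrt(g2_minus_1.mean())         # (1/T)*integral approximated by the sample mean

tau_c = 1e-3                                  # speckle decorrelation time [s] (illustrative)
for T in (0.1 * tau_c, tau_c, 10 * tau_c, 100 * tau_c):
    print(f"T/tau_c = {T / tau_c:6.1f}  ->  V_T = {contrast(T, tau_c):.3f}")
```

The printed values fall from near unity toward zero as the exposure time exceeds the decorrelation time, which is the behavior exploited in the contrast-based monitoring described below.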
Thus, analysis of Eqs. (8.30)–(8.32) for the given detection conditions allows us to characterize the influence of the transfer properties of the image-transferring component (L1) on the decorrelation rate of dynamic speckles in the image plane. It is obvious that for ideal conditions (K(\bar{X}, \bar{x}) ∝ δ(\bar{X} − \bar{x})) this influence is absent. Conversely, with a low-quality imaging component the role of correlation transformation becomes
significant. In the extreme case of δ-correlated speckles in the object plane (\bar{x}), the spatial-temporal speckle dynamics in the image plane is controlled by the transfer properties of the speckle-imaging system.88,89,91
An important feature is the strong effect of the type of speckle dynamics in the object plane on the speckle decorrelation rate in the image plane (\bar{X}) for the given transfer properties of L1. Considering the extreme cases of speckle dynamics in the object plane (pure translation or pure "boiling"), we obtain in the latter case a relatively weak dependence of the speckle decorrelation time τ_{c,i} in the image plane, determined by the 1/e decay of g_2(τ) − 1, on the full width at half maximum (FWHM) value ΔV of the squared modulus of the point-spread function, |K_{L1}(\bar{X}, \bar{x})|^2. For "boiling" speckles, the decrease in τ_{c,i} with respect to the decorrelation time in the object plane (τ_{c,o}) is controlled not only by the properties of the imaging system but also by the peculiarities of the decay of ⟨E(\bar{x}_1, t_1) E^*(\bar{x}_2, t_2)⟩ with increasing |\bar{x}_1 − \bar{x}_2| and |t_1 − t_2|. A rigorous and detailed consideration of the spatial-temporal correlation properties of multiply scattered optical fields in the object plane is beyond the framework of our consideration, but we can choose a reasonable model of the decay in G_1(\bar{x}_1, \bar{x}_2, t_1, t_2) for further analysis of the influence of correlation transfer in the considered full-field speckle instrument on the exposure-dependent speckle contrast in the image plane.
Figure 8.28 displays the calculated dependencies of V_{\tilde{T}} for partially blurred speckles in the image (\bar{X}) plane, imaged by L1 with a "soft" apodized aperture (a Gaussian form of |K_{L1}(\bar{X}, \bar{x})|^2), on the normalized exposure time \tilde{T}/\tau_{c,o}. Different curves correspond to different relationships between ΔV and the average speckle size \bar{r}_o in the object plane (the high-resolution case corresponds to ΔV ≪ \bar{r}_o).
Figure 8.28 Theoretical dependencies of V_{\tilde{T}} in the image plane on \tilde{T}/\tau_{c,o} for the case of a soft aperture of L1. Solid line: ΔV ≪ \bar{r}_0; dashed line: ΔV = \bar{r}_0; dotted line: ΔV ≫ \bar{r}_0. Inset: dependencies of the characteristic exposure time on ΔV/\bar{r}_0; solid line: uniform distribution of τ_d; dashed line: quadratic model; dotted line: linear model.
E(x_2, y_2, t) = \sum_{i=1}^{N} A_i \exp\{\,j\,[\varphi_i(t) + \Phi_i]\,\}, \qquad (8.34)
where φ_i(t) is the fluctuating component of the phase related to speckle fluctuations at the input tip, and Φ_i is the stationary random component determined by the transfer properties of the i-th fiber and the free space between the bundle and the detector. In the case of deep phase modulation (Φ_{i=1,...,N} ≫ 2π) and for large values of N, the outgoing light forms fully developed speckle patterns with circular Gaussian statistics and a zero mean value of the complex amplitude.
In this case, after some simple calculations, the temporal correlation function of the intensity fluctuations at an arbitrarily chosen detection point is obtained as

G_2(\tau) = \langle I(x_2, y_2, t)\, I(x_2, y_2, t+\tau)\rangle = \left\langle \sum_{i=1}^{N} \sum_{i'=1}^{N} I_i(t)\, I_{i'}(t+\tau) \right\rangle, \qquad (8.35)
where I_i(t) is the intensity related to the contribution from the i-th fiber to the detected signal. Ensemble averaging, carried out under the above assumptions about the statistics of the individual fiber contributions, leads to
G_2(\tau) \approx N\, G_{2,s}(\tau) + \sum_{k=1}^{k_{\max}=N-1} G_{2,k}(\tau)\, P(k), \qquad (8.36)
where G_{2,s}(\tau) = (1/N)\sum_{i=1}^{N} \langle I_i(t)\, I_i(t+\tau)\rangle, and the second, cross-correlation term is caused by temporal correlations between optical signals sampled from different regions of the same speckle in the input plane. G_{2,k}(\tau) are the cross-correlation functions of I_i(t) and I_{i'}(t) with k = |i − i'|, and P(k) are the weight factors satisfying the normalization condition \sum_{k=1}^{k_{\max}=N-1} P(k) = N(N-1). The influence of the cross-correlation term on
G_2(\tau) increases as the average speckle size in the input plane grows above the distance between neighboring sampling zones determined by the antenna properties of the fibers. Thus, the effect of light propagation in the bundle and in free space on the decorrelation of detected speckles is related to the averaging of τ_d in the input plane over the discrete groups of partially correlated optical signals sampled from the same coherence area in the input plane. Therefore, the two-stage transformation of "boiling" speckles (by the image-transferring element and by the bundle in combination with free space) will be accompanied by a sequential decrease in the speckle decorrelation time with respect to that in the object plane.
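For illustration only, the following sketch implements a toy version of the model behind Eqs. (8.34)–(8.36): the detected field is built as a sum of N fiber contributions with static random phases Φ_i and fluctuating phases φ_i(t) (here an AR(1) phase model with deep modulation, which is an assumption), and the intensity autocorrelation of the resulting record reproduces the fully developed speckle limits g_2(0) ≈ 2 and g_2(τ → ∞) ≈ 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fibers, n_steps, dt, tau_c = 64, 50_000, 1e-4, 5e-3

static_phase = rng.uniform(0, 2 * np.pi, n_fibers)        # Phi_i (deep modulation assumed)
rho = np.exp(-dt / tau_c)

# correlated Gaussian phase fluctuations phi_i(t) for each fiber (AR(1) model, an assumption)
phi = np.zeros((n_steps, n_fibers))
for k in range(1, n_steps):
    phi[k] = rho * phi[k - 1] + np.sqrt(1 - rho**2) * rng.normal(size=n_fibers) * 2 * np.pi

field = np.exp(1j * (phi + static_phase)).sum(axis=1)      # Eq. (8.34) with A_i = 1
intensity = np.abs(field) ** 2

lags = np.arange(100)
g2 = np.array([np.mean(intensity[: n_steps - k] * intensity[k:]) for k in lags]) / intensity.mean() ** 2
print(g2[0], g2[-1])   # close to 2 at zero lag and to 1 at long lags (fully developed speckle)
```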
In the experimental studies, the full-field speckle analyzer with the bundle-based light-delivery unit [Fig. 8.29(a)] was applied to monitor laser-mediated modification of cartilage. The laser beam emitted by a single-mode He-Ne laser (1) (632.8 nm, linear polarization, 5-mW output power) was expanded by a 50× telescopic system (2) and fell on the surface of a sample (3) of ex vivo bovine nasal septum cartilage. The samples under study were taken from freshly slaughtered animals and prepared as square pieces with 25 × 25 mm² lateral dimensions and 1.5-mm thickness. The diffusely reflected light was collected by the imaging lens 4 (focal length of 16 mm and diameter of 6 mm), and the speckle-modulated image of the area of interest on the sample surface was formed on the input tip of a fiber optic bundle (5). The bundle, with hexagonal packing of fibers (fiber diameter of 25 μm and separation between the fibers in a row of 3 μm), was 1.2 m in length. Light from the output tip was detected by a CCD sensor (6) (SONY ICX415 chip-based unit, 25-fps frame rate, 10-bit analog-to-digital conversion) coaxially placed at a distance of 300 mm from the output tip. In the experiments, sub-frames (450 × 450 pixels) were captured at a frame rate of 25 fps. This allowed us to set a constant exposure time of 20 ms with a CCD readout time of approximately 22 ms.
To characterize the optical system of the bundle-based speckle monitor,
we estimated the average speckle size in the image plane of lens 4 and in the
Figure 8.29 (a) Scheme of the experimental setup with the bundle-based full-field analyzer: 1 – He-Ne laser, 2 – telescopic system, 3 – object, 4 – image-transferring element, 5 – fiber-optic bundle, 6 – CCD detector, 7 – erbium laser, 8 – light-delivering fiber, 9 – thermograph. (b) Scheme of the experimental setup with the LASCA modality: 1 – He-Ne laser, 2 – telescopic system, 3 – object, 4 – CCD camera, 5 – erbium laser, 6 – light-delivering fiber, 7 – thermograph. Inset: temperature distributions across the surface of the treatment zone at various stages of modification: 1 – tissue heating, 2 – quasi-stationary phase of modification.
detector plane. A Teflon plate (10 mm thick) was placed in the object plane as a source of backscattered speckle-modulated light. The pixel size of the CCD array is 8.3 μm × 8.3 μm, and the average speckle size in the detection plane was evaluated as 65 μm. Thus, the average coherence area in the detection plane covered approximately 50 CCD pixels. Also, the quantification of the spatial correlations of the captured speckle patterns was carried out.
With zero time lag (τ = 0), the 1/e decay of g_2(Δx_2, Δy_2, 0) characterizes the average speckle size ⟨r_s⟩ ≈ 2\sqrt{(\Delta x_2)^2 + (\Delta y_2)^2}\,\big|_{g_2 = 1/e}. The presence of a noticeable translational component in the speckle dynamics should manifest itself in the appearance of a visible correlation peak for a given, sufficiently large value of τ (τ > τ_c). This peak should be shifted in the coordinate plane (Δx_2, Δy_2) to the position (v_{x2}τ, v_{y2}τ), where v_{x2} and v_{y2} are the x_2- and y_2-
components of the speckle translation velocity. Figure 8.30(a) displays the normalized temporal correlation functions g_2(τ) of the detected optical signal at various stages of modification; spatial correlations of the captured speckle patterns are illustrated in Fig. 8.30(b) for the case of the same frame [g_2(Δx_2, Δy_2, 0)] and in Fig. 8.30(c) for two frames [g_2(Δx_2, Δy_2, τ)] captured with a time delay τ slightly exceeding the speckle decorrelation time in the detector plane. Note the absence of any correlation peak in the latter case. This indicates the absence of any appreciable translational dynamics of speckles in the detector plane. Also, direct observation of the sequences of captured speckle patterns showed only "boiling" of speckles at the stages of tissue heating and modification.
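The test for a translational component described above can be sketched as follows (an assumed implementation, not the instrument software): the spatial cross-correlation of two frames separated by a delay is computed via FFTs, and the offset of its peak from zero lag is reported. A shifted peak indicates translation, whereas a peak that merely collapses indicates "boiling" speckle dynamics. The frames below are synthetic stand-ins.

```python
import numpy as np

def cross_correlation_peak(frame_a, frame_b):
    """Return the (dy, dx) offset of the peak of the spatial cross-correlation."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    corr = np.fft.fftshift(corr)                     # move zero lag to the array center
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center

rng = np.random.default_rng(2)
frame1 = rng.random((450, 450))                      # stand-ins for captured sub-frames
frame2 = np.roll(frame1, (0, 3), axis=(0, 1))        # pure translation by 3 pixels along x
print(cross_correlation_peak(frame2, frame1))        # -> [0 3]  (translation detected)
```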
The boiling type of speckle dynamics is obviously caused by the absence of macroscopic regular motions of scattering sites in the thermally treated tissue. Because of the backscattering geometry and the contributions of multiply scattered diffuse components, the detected light is characterized by significantly more broadband intensity fluctuations in comparison with the case of transillumination monitoring of thermally treated ex vivo cartilage.81 A remarkable blur of the speckle patterns occurs at the frame rate of 25 fps (with the above-mentioned exposure time of 20 ms). Consequently, the value of V_T̃ was estimated for each frame in the recorded sequence without any additional time averaging (as was carried out in Ref. 81). Figure 8.31 displays the dependencies of the temperature in the central region of the treatment zone (a) and the normalized speckle contrast V_T̃(t)/V_T̃(0) (b) on the elapsed time t in the course of the treatment procedure carried out with different values of the output power of the IR radiation.
The appearance of some artifacts on the experimental dependencies
plotted in Fig. 8.31(b) (in particular, the minimal value of contrast at 4.8 W
Figure 8.30 (a) Normalized temporal correlation functions of speckle intensity fluctuations in the detector plane: 1 – tissue heating (the temperature in the central region of the treatment zone is 50°C), 2 – the quasi-stationary phase of tissue modification (the temperature is 68°C). (b), (c) Normalized spatial correlation functions of dynamic speckle patterns in the detector plane: (b) the generalized spatial autocorrelation function; (c) the cross-correlation function for two frames with a time delay Δt approximately equal to 2.5τ_{c,detector}. The scale for (b) and (c) is the same.
Figure 8.31 (a) Dependencies of the tissue temperature in the central region of the treatment zone on the elapsed time; the inset displays the "loop-like" behavior of V_T̃(T) (the output power of the erbium laser is 4.5 W; I – tissue heating, II – modification, III – thermal relaxation). (b) Dependencies of V_T̃ on the elapsed time (bundle-based analyzer). (c) The same as in (b) for the LASCA modality (IR laser output power is 4.5 W; low-amplitude fluctuations related to nonergodic behavior of the speckle-modulated image are marked by arrows).
appears later than the minimal values of V_T̃(t)/V_T̃(0) at 4.2 W and 4.5 W) is related to the arbitrary shift of the origin for each data sequence because of the above-mentioned random delay in the start of laser treatment. As mentioned above, this circumstance is not essential for further data analysis. Similar to previously reported results on transillumination speckle monitoring of cartilage modification,81 the loop-like behavior of V_T̃(t)/V_T̃(0) occurs with the changes in tissue temperature. This peculiarity is related to the observed non-monotonic behavior of the contrast: the abrupt decrease in Ṽ at the stage of tissue heating from room temperature to 65°C changes to a slight increase at the stage of modification [Fig. 8.31(b)]. This results in a larger value of Ṽ at the final phase of modification (when the treatment is terminated) in comparison with the initial phase (when T approaches 65°C) and, correspondingly, in the "loop-like" form of the Ṽ(T) dependence. This "loop-like" feature is displayed in the inset of Fig. 8.31(a).
Fig. 8.31(c) shows the dependence V_T̃(t)/V_T̃(0) obtained with the LASCA modality. In the calculation of the speckle contrast from raw images obtained with the LASCA unit, the area of the analyzed zone of the speckle-modulated images was 0.7 × 0.7 mm², which corresponded to an analyzed area in the central part of the treatment zone with a size of 1.75 × 1.75 mm². An area of the same size was analyzed with the bundle-based modality.
Considering the contrast decrement ΔV_T̃(t) = V_T̃(t) − V_T̃(t + T̃), we can assume the first-order approximation of ΔV_T̃ for small values of T̃/τ_c: ΔV_T̃ ≈ \{[dg_2(\tilde{T})/d\tilde{T}] / [2\sqrt{g_2(\tilde{T}) - 1}]\}\,\tilde{T} ≈ K(\tilde{T}/\tau_c) and, respectively, τ_c ∝ 1/ΔV_T̃. The factor K is controlled by the asymptotic behavior of the
temporal correlation function of intensity in the object plane and the transfer
properties of the bundle-based analyzer as well. The static character of
captured speckle patterns at room temperature and the abrupt increase in the contrast decrement with increasing tissue temperature allow us to consider the decay in τ_c at the first stage of tissue modification as a thermally activated process with a certain value of the activation energy.
Figure 8.32 displays the values of ln(ΔV_T̃) estimated for pairs of sequential frames and plotted against the instantaneous inverse absolute temperature in the central part of the treatment zone. The plot contains data obtained with the conventional LASCA modality (with a single value of the output power of the IR laser) and with the bundle-based speckle analyzer. For the relatively narrow interval of T, the plotted data can be approximated with reasonable accuracy by linear dependencies of ln(ΔV_T̃) on T⁻¹, which are also shown in Fig. 8.32.
This allows us to assume a Boltzmann-like dependence of the inverse speckle decorrelation time on the absolute temperature of the modified tissue: τ_{c,o}^{-1} ≈ K_1 \exp(-E_a/RT), where the parameter K_1 depends on various factors such as the frequency factor of the underlying thermally activated process with activation energy E_a and the transfer properties of the speckle-imaging system. Note that both data sets are characterized by close values of
Figure 8.32 The values of ln(ΔV_T̃) obtained with the bundle-based analyzer (1–3) and the LASCA modality (4), plotted against the inverse absolute temperature in the central region of the treatment zone. The output power of the erbium laser is 4.5 W (1), 4.2 W (2), 4.8 W (3), and 4.5 W (4).
E_a [(61 ± 4.5) kJ/mol for the data obtained with the LASCA modality and (64 ± 6) kJ/mol for the bundle-based speckle analyzer, respectively], but differ significantly in the values of the normalization parameter (K_{1,LASCA}/K_{1,bundle} ≈ 2.9). This difference is obviously caused by the difference in the transfer properties of the conventional LASCA system and the bundle-based speckle analyzer. In the latter case, the influence of the transfer properties of the light-delivering channel is more pronounced because of the cascade transformation of speckle patterns and the lower quality of the image-transferring component in comparison with the examined LASCA modality.
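The Arrhenius-type analysis of Fig. 8.32 can be illustrated with a short sketch (synthetic data only, not the measured values): ln(ΔV_T̃) is regressed against 1/T, and the slope yields the activation energy via ln(ΔV_T̃) ≈ const − E_a/(RT).

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# hypothetical per-frame contrast decrements and tissue temperatures (K); not measured data
T = np.array([323.0, 328.0, 333.0, 338.0, 343.0])
E_a_true, K1 = 62.0e3, 3.0e8
dV = K1 * np.exp(-E_a_true / (R * T)) * (1 + 0.02 * np.random.default_rng(3).normal(size=T.size))

slope, intercept = np.polyfit(1.0 / T, np.log(dV), 1)     # linear fit of ln(dV) vs 1/T
print(f"estimated E_a = {-slope * R / 1e3:.1f} kJ/mol")    # close to the assumed 62 kJ/mol
```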
The above-considered linear and quadratic models of τ_d decay predict that, in the case of the sequential two-stage transformation of "boiling" speckle patterns, the extreme value of the ratio τ_{c,o}/τ_{c,detector} varies from 1.74 (the quadratic model) to 3.24 (the linear model). Thus, under the semi-qualitative assumptions used in the theoretical analysis and without knowledge of the correlation properties of the dynamic speckles in the object plane, the experimentally obtained ratio K_{1,LASCA}/K_{1,bundle} ≈ 2.9 agrees reasonably with the above-presented values.
Let us consider some probable mechanisms of the laser-induced cartilage modification that cause dynamic light scattering in the treated tissue. One mechanism can be related to the temperature-dependent changes in the rate of scatterer diffusion in the treated volume. For many condensed media, the dependence of the diffusion coefficient on the temperature is described as D = D_0 \exp(-E_0/RT), where E_0 is the activation energy of the transition of a scattering particle from one stable position to another. Among other factors, the contrast decrement is determined by the average displacement of scatterers ⟨L⟩ over the observation time T̃ (i.e., ΔV_T̃ ∝ ⟨L⟩ ∝ \sqrt{D\tilde{T}} ∝ \exp(-E_0/2RT)). Thus, this consideration leads to the Boltzmann dependence of the contrast decrement on the characteristic temperature of the tissue.
8.8 Summary
In this chapter, we have considered some typical examples of laser speckle
techniques applied to diagnostics and functional imaging in biology and
medicine. The common feature of almost all of the methods presented is the
use of correlation analysis of the intensity fluctuations of time-varying speckle
patterns to evaluate the dynamic parameters of a scattering system (such as ensembles of erythrocytes moving through the microcapillary network). Correlometry as an optical signal processing technique is closely related to another method of analysis of dynamic speckles, the spectral analysis widely used in laser Doppler flowmetry. In the latter case, values of the spectral moments of the time-dependent speckle intensity fluctuations are used to characterize the blood perfusion level in the diagnosed tissues and organs.
Acknowledgments
David J. Briers is very grateful to David Boas of the Harvard Medical School,
Andrew Dunn of the University of Texas, and Moor Instruments Ltd, for
their cooperation and permission to use some of their images.
Preparation of this chapter was possible due to the support from the
Russian Foundation for Basic Research (grant No. 13-02-00440), basic tasks of
the state higher education institutions of research, the Ministry of Education
and Science of the Russian Federation (grant № 14.Z56.15.7102-MK).
Valery V. Tuchin is grateful for support from the Russian Presidential
grant NSh-703.2014.2, the Government of the Russian Federation grant 14.
Z50.31.0004, the Tomsk National Research State University Academic D.I.
Mendeleev Fund Program, and RFBR 14-02-00526a.
References
1. G. Mueller, B. Chance, R. Alfano, S. Arridge, J. Beuthan, E. Gratton,
M. Kaschke, B. Masters, S. Svanberg, and P. Van der Zee, Medical
Optical Tomography: Functional Imaging and Monitoring, SPIE Press,
IS11, Bellingham, WA (1993).
2. V.V. Tuchin, Tissue Optics: Light Scattering Methods and Instruments for
Medical Diagnostics, 3rd ed. PM254, SPIE Press, Bellingham, WA (2015).
34. A.K. Dunn, H. Bolay, M.A. Moskowitz, and D.A. Boas, “Dynamic
imaging of cerebral blood flow using laser speckle,” J. Cereb. Blood Flow
Metab. 1, 195–201 (2001).
35. A.N. Obeid, D.M. Boggett, N.J. Barnett, G. Dougherty, and P. Rolfe,
“Depth discrimination in laser Doppler skin blood flow measurement
using different lasers,” Med. Biol. Eng. Comput. 26, 415–419 (1988).
36. L. Duteil, J.C. Bernengo, and W. Schalla, “A double-wavelength laser
Doppler system to investigate skin microcirculation,” IEEE Trans. Biomed. Eng. BME-32, 439–447 (1985).
37. R.J. Gush and T.A. King, “Discrimination of capillary and arterio-
venular blood flow in skin by laser Doppler flowmetry,” Med. Biol. Eng. Comput. 29, 387–392 (1991).
38. M.M. Gonik, A.B. Mishkin, and D.A. Zimnyakov, “Visualization of
blood microcirculation parameters in human tissues by time-integrated
dynamic speckles analysis,” Ann. NY Acad. Sci. 972, 325 (2002).
39. K. Forrester, C. Stewart, J. Tulip, C. Leonard, and R. Bray,
“Comparison of laser speckle and laser Doppler perfusion imaging:
Measurement in human skin and rabbit articular tissue,” Med. Biol. Eng. Comput. 40, 687–697 (2002).
40. B. Choi, N. Kang, and J. Nelson, “Laser speckle imaging for monitoring
blood flow dynamics in the in vivo rodent dorsal skin fold model,”
Microvasc. Res. 68, 143–146 (2004).
41. H. Cheng, Q. Luo, Q. Liu, Q. Lu, H. Gong, and S. Zeng, “Laser speckle
imaging of blood flow in microcirculation,” Phys. Med. Biol. 49, 1347–
1357 (2004).
42. S.K. Nadkarni et al., “Characterization of atherosclerotic plaques by
laser speckle imaging,” Circulation 112, 885–892 (2005).
43. K. Yaoeda et al., “Measurement of microcirculation in optic
nerve head by laser speckle flowgraphy in normal volunteers,” Am. J. Ophthalmol. 130, 606–610 (2000).
44. M. Nagahara, Y. Tamaki, M. Araie, and T. Umeyama, “The acute
effects of stellate ganglion block on circulation in human ocular fundus,”
Acta Ophthalmol. Scand. 79, 45–48 (2001).
45. J. Flammer et al., “The impact of ocular blood flow in glaucoma,” Prog.
Retin. Eye Res. 21, 359–393 (2002).
46. B. Weber, C. Burger, M. Wyss, G. von Schulthess, F. Scheffold, and
A. Buck, “Optical imaging of the spatiotemporal dynamics of cerebral
blood flow and oxidative metabolism in the rat barrel cortex,” Eur. J.
Neurosci. 20, 2664–2670 (2004).
47. T. Durduran et al., “Spatiotemporal quantification of cerebral blood
flow during functional activation in rat somatosensory cortex using laser
speckle flowmetry,” J. Cereb. Blood Flow Metab. 24, 518–525 (2004).
61. J.D. Briers, “Laser speckle contrast imaging for measuring blood flow,”
Optica Applicata 37, 139–152 (2007).
62. D. Briers, D.D. Duncan, E. Hirst, S.J. Kirkpatrick, M. Larsson,
W. Steenbergen, T. Stromberg, and O.B. Thompson, “Laser speckle
contrast imaging: theoretical and practical limitations,” J. Biomed. Opt. 18(6), 066018 (2013).
63. A. Serov, W. Steenbergen, and F. De Mul, “A method for estimation of
the fraction of Doppler-shifted photons in light scattered by mixture of
moving and stationary scatterers,” Proc. SPIE 4001, 178–189 (2000).
64. A. Sadhwani, K.T. Schomacker, G.J. Tearney, and N.S. Nishioka,
“Determination of Teflon thickness with laser speckle. I. Potential for
burn depth diagnosis,” Applied Optics 35(28), 5727–5735 (1996).
65. D.A. Zimnyakov, A.A. Isaeva, E.A. Isaeva, O.V. Ushakova, and
R.A. Zdrazhevskii, “Speckle Correlometry Method for Evaluating the
Transport Scattering Coefficient of a Randomly Inhomogeneous
Medium,” Technical Physics Letters 38(10), 935–937 (2012).
66. L.F. Rojas, M. Bina, G. Cerchiari, M.A. Escobedo-Sanchez, F. Ferri,
and F. Scheffold, “Photon path length distribution in random media
from spectral speckle intensity correlations,” Eur. Phys. J. Special Topics 199(1), 167–180 (2011).
67. M. Draijer, E. Hondebrink, T. van Leeuwen, and W. Steenbergen, “Review of laser speckle contrast techniques for visualizing tissue perfusion,” Lasers in Medical Science 24, 639–651 (2009).
68. H. Cheng, Q. Luo, S. Zeng, S. Chen, J. Cen, and H. Gong, “Modified
laser speckle imaging method with improved spatial resolution,”
J. Biomed. Opt. 8, 559–564 (2003).
69. P. Li, S. Ni, L. Zhang, S. Zeng, and Q. Luo, “Imaging cerebral blood flow through the intact rat skull with temporal laser speckle imaging,” Opt. Lett. 31, 1824–1826 (2006).
70. K.R. Forrester, C. Stewart, J. Tulip, C. Leonard, and R.C. Bray,
“Comparison of laser speckle and laser Doppler perfusion imaging:
measurement in human skin and rabbit articular tissue,” Med. Biol. Eng.
Comput. 40, 687–697 (2002).
71. K.R. Forrester, J. Tulip, C. Leonard, R.C. Bray, and C. Robert,
“A laser speckle imaging technique for measuring tissue perfusion,”
IEEE Trans. Biomed. Eng. 51, 2074–2084 (2004).
72. N. Konishi, Y. Tokimoto, K. Kohra, and H. Fujii, “New laser speckle flowgraphy system using CCD camera,” Opt. Rev. 9, 163–196 (2002).
73. Y.K. Tan, W.Z. Liu, Y.S. Yew, S.H. Ong, and J.S. Paul, “Speckle image
analysis of cortical blood flow and perfusion using temporally derived
contrasts,” In: International conference on image processing ICIP 2004.
Proc. IEEE 5, 3323–3326 (2004).
74. T.M. Le, J.S. Paul, H. Al-Nashash, A. Tan, A.R. Luft, F.S. Sheu, and S.H. Ong, “New insights into image processing of cortical blood flow monitors using laser speckle imaging,” IEEE Trans. Biomed. Eng. 26, 833–842 (2007).
75. A.B. Parthasarathy, W.J. Tom, A. Gopal, X. Zhang, and A.K. Dunn, “Robust flow measurement with multi-exposure speckle imaging,” Opt. Express 16, 1975–1989 (2008).
76. K. Basak, G. Dey, M. Mahadevappa, M. Mandal, and P.K. Dutta,
“In vivo laser speckle imaging by adaptive contrast computation for
microvasculature assessment,” Optics and Lasers in Engineering 62, 87–94 (2014).
77. M. Guizar-Sicairos, S.T. Thurman, and J.R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008).
78. C.A. Thompson, K.J. Webb, and A.M. Weiner, “Imaging in scattering
media by use of laser speckle,” JOSA A 14(9), 2269–2277 (1997).
79. D.A. Zimnyakov, Jung-Taek Oh, Yu.P. Sinichkin, V.A. Trifonov, and
E.V. Gurianov, “Polarization-sensitive speckle spectroscopy of scatter-
ing media beyond the diffusion limit,” J. Opt. Soc. Am. A 21, 59–70
(2004).
80. D.A. Zimnyakov and M.A. Vilensky, “Blink speckle spectroscopy of scattering media,” Optics Letters 31(4), 429–431 (2006).
81. D.A. Zimnyakov, D.N. Agafonov, A.P. Sviridov, A.I. Omel’chenko,
L.V. Kuznetsova, and V.N. Bagratashvili, “Speckle-contrast monitoring
of tissue thermal modification,” Applied Optics 41, 5989–5996 (2002).
82. V. Viasnoff, F. Lequeux, and D.J. Pine, “Multispeckle diffusing-wave
spectroscopy: A tool to study slow relaxation and time-dependent
dynamics,” Review of Scientific Instruments 73, 2336–2344 (2002).
83. S.E. Skipetrov and R. Maynard, “Dynamic multiple scattering of light in
multilayer turbid media,” Phys. Lett. A 217, 181–185 (1996).
84. S. Romer, F. Scheffold, and P. Schurtenberger, “Sol-gel transition of
concentrated colloidal suspensions,” Phys. Rev. Lett. 85, 4980–4983
(2000).
85. F. Scheffold, S. E. Skipetrov, S. Romer, and P. Schurtenberger,
“Diffusing-wave spectroscopy of non-ergodic media,” Phys. Rev. E 63,
061404 (2001).
86. D.A. Zimnyakov, A.P. Sviridov, L.V. Kuznetsova, S.A. Baranov, and
N. Yu. Ignatieva, “Monitoring of tissue thermal modification with a
bundle-based full-field speckle analyzer,” Applied Optics 45, 4480–4490
(2006).
87. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics,
Cambridge University Press, 1995.
88. T. Yoshimura, “Statistical properties of dynamic speckles,” J. Opt. Soc.
Am. A 3, 1032–1054 (1986).
Donald D. Duncan
Department of Electrical and Computer Engineering, Portland State University,
Portland, USA
Brendan F. Kennedy
Optical + Biomedical Engineering Laboratory, School of Electrical, Electronic, and
Computer Engineering, The University of Western Australia, Perth, Australia
David D. Sampson
Optical + Biomedical Engineering Laboratory, School of Electrical, Electronic,
and Computer Engineering, and Center for Microscopy, Characterization, and
Analysis, The University of Western Australia, Perth, Australia
9.1 Introduction
Optical elastography, as the mechanical characterization of biological tissues
using optical methods has come to be known, has developed substantially
since the original printing of this handbook in 2002. Advances in both
hardware and in data-processing algorithms have permitted these develop-
ments. Full-field 2-D and 3-D elastograms can now be generated in only a few
seconds. These developments have led to an increased focus on applying
elastography to clinical and biological applications, including in dermatology,
oncology, ophthalmology, and cardiology. This updated chapter discusses
several of the recent advances in optical elastography, including a discussion
of what is really measured, mechanically speaking, with optical elastography.
A discussion of some recent advances in OCT-based elastography, optical
coherence elastography (OCE), is also presented.
9.3.1 Dermatology
Skin imaging is among the most attractive application areas for optical
elastography, as skin is amenable both to optical imaging and mechanical
loading. A number of examples of in vivo skin imaging have been
demonstrated in optical elastography.15–19 Importantly, several diseases and
conditions alter the mechanical properties of skin. For example, subsurface
skin tumors present as objects with distinct mechanical properties relative to
the surrounding normal tissue. The displacement of fibrillar papillary dermis
by the cellular mass of a growing melanoma is one example of this.
Preliminary measurements using an acousto-optical elastography technique have demonstrated mechanical contrast in vivo on the micro-scale in both a mouse melanoma lesion and a human melanocytic nevus.15 Optical elastographic techniques may provide a means of probing these masses to determine their state of progression and thereby help to determine a proper means of disease management. Other skin conditions, such as psoriasis, scleroderma, and ichthyosis, also present as localized tissue areas with distinct mechanical
properties that can be delineated optically. Additionally, it is well established
that scarring significantly alters the mechanical properties of skin.20,21 Indeed,
pliability is one of the four parameters used in the Vancouver scar scale, an
assessment scale commonly used by clinicians. Preliminary measurements,
based on optical palpation, a derivative of optical coherence elastography
(OCE), have demonstrated mechanical contrast between scars and normal
tissue.22 An example of optical elastography on skin performed using an
acousto-optical approach is shown in Fig. 9.1.
9.3.2 Oncology
Optical elastography holds great promise for improving the treatment of
cancer. A number of preliminary results have demonstrated this using both
OCE23–25 and digital holography-based elastography.26 Ultrasound elasto-
graphy and, to a lesser extent, magnetic resonance elastography have been
widely used clinically in oncology, mainly for the detection of suspicious lesions
in breast cancer. The higher spatial resolution and limited penetration depth of
optical elastography suggest that it is better suited to other aspects of the treatment of cancer patients. For example, in breast cancer a main problem
Figure 9.1 Skin optical elastography. (a) White-light photograph; (b) elastogram of the region indicated by the black box in (a). The less-stiff nevus seen in the photograph presents as a region of relatively high strain in the elastogram. Reproduced with permission from Ref. 15. (See color plates.)
facing the surgeon is identifying during surgery whether all malignant tissue has been removed. Techniques such as frozen-section histology and imprint cytology are in use, but re-excision rates remain very high (20–60%).27 Optical
elastography could potentially be used to scan the perimeter of the excised
tumor mass sufficiently rapidly to provide feedback to the surgeon during the
initial surgery. Preliminary results from OCE have demonstrated visualization
of microstructure within the breast and have indicated the promise of optical
elastography in distinguishing malignant tissue from surrounding healthy
tissue.25 An example of OCE on human breast tissue is presented in Fig. 9.2.
Recently, optical elastography has also been shown to detect prostate cancer
in biopsy samples with sensitivity and specificity of 98% and 91%,
respectively.28 The next steps needed for optical elastography in oncology are
larger-scale studies to assess the diagnostic accuracy in detecting cancer.
9.3.3 Ophthalmology
As OCT is widely used in ophthalmic applications, it is a natural area in which
to develop optical elastography. Coupled with this, several diseases of the eye
are linked with changes to its mechanical properties, including glaucoma,
keratoconus, and presbyopia. Initial efforts in optical elastography have
focused on corneal biomechanics and, in particular, on keratoconus, a disease
that reduces corneal stiffness. A number of OCE techniques have been
proposed, based on both contact29,30 and noncontact31,32 mechanical loading
of the cornea. Noncontact mechanical loading is of particular importance in
this delicate tissue and has been achieved using both an air-puff port and
photothermal expansion via a pulsed laser. Figure 9.3 shows an example of
OCE performed on a mouse cornea. As well as measuring corneal stiffness,
several papers have proposed OCE as a technique to assess the treatment of
keratoconus with collagen cross-linking33 induced by UV light exposure.34
Figure 9.2 Optical elastography of malignant human breast tissue. (a) OCT image, (b) OCE elastogram, and (c) co-registered histology. A, adipose; D, duct; M, smooth muscle; T, region densely permeated with tumor; and V, blood vessel. In the micro-elastogram, the scale is in millistrain. The insets show a 2.5× magnification of the blue-dotted boxes. Reproduced with permission from Ref. 25. (See color plates.)
Figure 9.3 Ophthalmic optical elastography. (a) 3D-OCT image of a mouse eye. The white
dots indicate the regions subjected to mechanical loading. (b) Graph showing the variation in
phase velocity of surface waves propagating in the cornea as a function of age. Reproduced
with permission from Ref. 134.
9.3.4 Cardiology
In cardiovascular disease, the mechanical properties of atherosclerotic plaques
have been linked to plaque rupture, and have been investigated using
intravascular ultrasound elastography.36 However, the spatial resolution of
ultrasound elastography is relatively low and optical elastography may
provide higher resolution measurements on a scale more relevant to the
disease morphology. A number of preliminary studies have been reported,
using both OCE37–39 and laser speckle-based elastography.12,40 A preliminary
measurement performed on excised human coronary artery is presented in
Fig. 9.4. A key requirement for cardiology is the development of an
intravascular optical elastography probe to enable in vivo imaging. While this
presents technical challenges, not least the incorporation of mechanical
loading in a small footprint probe, optical elastography will likely take
advantage of advances in closely related intravascular OCT imaging probes.41
Figure 9.4 Optical elastography in cardiology. (a) 3D-OCT image of a human cadaver
coronary artery, (b) corresponding OCE image. The red region was confirmed by histology to
correspond to an atherosclerotic lesion. Scale bars, 1 mm. Reproduced with permission from Ref. 39. (See color plates.)
where the stress tensor, σ_ij, is linearly related to the strain tensor, e_kl. The variables C_ijkl are known as constitutive constants. In this case, the constitutive constants are simply the elastic moduli, and Eq. (9.1) is a simplified
expression of Hooke’s law. One of the main reasons for the simplicity of this
model is that, in these materials, the constitutive constants are assumed to
be independent of both the direction of the applied strain and the rate of
strain application. Neither of these assumptions is truly physically valid,
however, they are assumed to be valid for simplicity and based on evidence
from empirical data. Equation (9.1) also assumes linearity over a range of
strains. This is not a physically valid assumption because linear elasticity
assumes that the stress tensor is a linear function of the strain tensor only
for infinitesimal strains. However, empirical data again show that this
assumption is sufficient for describing the macroscopic behavior of these
materials within their elastic range. This simple discussion demonstrates the
synergistic relationship between theory and experiment in the development of
constitutive equations.
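As a minimal illustration of the linear constitutive law discussed above, the sketch below builds the isotropic stiffness tensor C_ijkl from assumed Lamé constants and contracts it with a small strain tensor; the numerical values of E and ν are placeholders, not tissue data from the chapter.

```python
import numpy as np

E, nu = 20e3, 0.45                     # illustrative Young's modulus [Pa] and Poisson ratio
lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))

delta = np.eye(3)
# C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)  (isotropic Hooke's law)
C = (lam * np.einsum("ij,kl->ijkl", delta, delta)
     + mu * (np.einsum("ik,jl->ijkl", delta, delta) + np.einsum("il,jk->ijkl", delta, delta)))

strain = np.zeros((3, 3))
strain[0, 0] = 1e-3                    # small uniaxial strain along x
stress = np.einsum("ijkl,kl->ij", C, strain)
print(stress.round(3))                 # sigma_xx plus lateral stresses from the lam term
```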
Biological tissues, however, are nonlinear, nonuniformly inhomogeneous, poroelastic, and anisotropic; they exhibit a distinct dependence upon time and strain rate (viscoelasticity) and are capable of undergoing very large strains. In the
very simple case of a tissue in a uniaxial stress state, Tanaka and Fung50
proposed an exponential form as a constitutive model of soft tissue in
tension:
s_{ij} = \frac{\partial W}{\partial e_{ij}} \quad (i, j = 1, 2, 3). \qquad (9.3)
where e_θ and e_z are the Green's strains in the circumferential and axial directions, respectively, and A, B, etc., are constitutive constants. By taking the partial derivatives of W with respect to the strains, one can evaluate the stresses in the θ and z directions.
Fung52,64 proposed a simpler expression for the elastic properties of
arterial tissue based on an exponential function that yields the relationship
between the average circumferential stresses and axial stresses and strains in
the same directions (denoted by the subscripts 1 and 2, respectively):
\rho_o W = C_0 \exp(a_1 e_1^2 + a_2 e_2^2 + 2 a_4 e_1 e_2). \qquad (9.5)
The constants C0 (with units of stress), a1, a2, and a4 (dimensionless) are
material constants. For soft tissues, Eq. (9.5) implies that the incremental
moduli increase with increasing stresses.
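A small sketch of Eq. (9.5) and the stress–strain relation of Eq. (9.3) is given below; the material constants are placeholders chosen only to show the stiffening (increasing incremental moduli) predicted by the exponential form.

```python
import numpy as np

C0, a1, a2, a4 = 20.0, 1.2, 0.8, 0.3      # illustrative constants (C0 in kPa), not fitted values

def fung_W(e1, e2):
    """rho_o * W = C0 * exp(a1*e1^2 + a2*e2^2 + 2*a4*e1*e2), Eq. (9.5)."""
    return C0 * np.exp(a1 * e1**2 + a2 * e2**2 + 2 * a4 * e1 * e2)

def fung_stresses(e1, e2):
    """Analytic partial derivatives of rho_o*W with respect to e1 and e2, per Eq. (9.3)."""
    W = fung_W(e1, e2)
    s1 = W * (2 * a1 * e1 + 2 * a4 * e2)   # circumferential stress
    s2 = W * (2 * a2 * e2 + 2 * a4 * e1)   # axial stress
    return s1, s2

for e in (0.05, 0.10, 0.20):
    s1, s2 = fung_stresses(e, e)
    print(f"e = {e:.2f}:  s1 = {s1:7.2f} kPa,  s2 = {s2:7.2f} kPa")  # stresses stiffen with strain
```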
Takamizawa and Hayashi65 proposed a logarithmic form of the function.
This form fits experimental data from pressure-diameter and axial force-
diameter experiments of carotid arteries at least as well as Fung’s exponential
equation, but with fewer constants. Both models outperform the polynomial
models.65 The general form of the logarithmic model is:
\rho_o W = -C \ln\!\left(1 - \tfrac{1}{2} a_{\theta\theta} e_\theta^2 - \tfrac{1}{2} a_{zz} e_z^2 - a_{\theta z} e_\theta e_z\right), \qquad (9.6)

where C, a_{θθ}, a_{zz}, and a_{θz} characterize the mechanical properties of the tissue.
A fourth equation, which is most appropriately applied in a strictly
uniaxial case, is a power law stress–strain model.66,67 This model has the
distinct advantage of being the simplest equation for soft tissue. The model
can relate uniaxial stress–strain data reasonably well in a variety of tissue
types, including both cardiac and smooth muscle, fascia, fibrocartilage, and
cortical bone. The power law model is generally given in terms of the stress
tensor, T, the strain or strain rate tensor, S, and material constants, K and q:
T = K_{ijkl}\, S_{ijkl}^{\,q}. \qquad (9.7)
The original implementation was for modeling the constitutive behavior of
blood and is known as the Walburn-Schneck power law.68
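For the uniaxial power-law model of Eq. (9.7), the constants can be recovered from stress–strain data by a linear fit in log–log coordinates, as in the following sketch with synthetic, purely illustrative data.

```python
import numpy as np

rng = np.random.default_rng(4)
strain = np.linspace(0.01, 0.2, 20)
K_true, q_true = 150.0, 1.6                                   # assumed "true" constants
stress = K_true * strain**q_true * (1 + 0.03 * rng.normal(size=strain.size))

q_fit, logK_fit = np.polyfit(np.log(strain), np.log(stress), 1)   # T = K*S^q is linear in log-log
print(f"K = {np.exp(logK_fit):.1f}, q = {q_fit:.2f}")              # recovers the assumed constants
```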
None of the above models, however, adequately fits experimental data
throughout the entire range of observable physiological stresses. At the lower
strain range, any of the models (linear, exponential, logarithmic, power law)
can be made to fit experimental data reasonably well, while none fit the
remainder of the strain range. Fung et al.69 proposed a model that combines
the exponential form with the polynomial form in an attempt to model the
entire range of physiological strains and beyond. Using this model, the strain
energy is expressed as
W = C\,(e^{Q} - Q - 1) + \frac{q}{2},
\frac{d\varepsilon}{dt} = \frac{1}{E}\,\frac{ds}{dt}, \qquad (9.9)
where E is the Young’s modulus of the tissue. If, however, we assume that
viscous flow of the tissue occurs while it is loaded, then the strain will tend to
change at a rate that depends upon its initial, loaded value:
\frac{d\varepsilon}{dt} = \frac{1}{E}\,\frac{ds}{dt} - \frac{\varepsilon}{t_0}, \qquad (9.10)
where t0 has the dimensions of time and is called the retardation time of the
material. By integrating Eq. (9.10) for ds/dt = 0, we can obtain the strain at time t:

\varepsilon(t) = \varepsilon_0 \exp(-t/t_0), \qquad (9.11)
and t0 is thus the time required for the strain to increase (or decrease) to 1/e of
its original value. Assuming that the strain is being subtracted from its
original value, we may write:
\frac{\varepsilon(t)}{s_o} = D(t) = \sum_n \frac{1}{E_n}\,\bigl[1 - \exp(-t/t_{0n})\bigr], \qquad (9.13)
where D(t) is the creep compliance. Equation (9.13) indicates that the
contribution of each individual element to the equilibrium compliance is 1/En.
Thus, each compliance is associated with a unique retardation time, resulting
in a spectrum of retardation times. The retardation times are often close
together and numerous in real biological tissues, so replacing the summation
sign of Eq. (9.13) with an integral, we see that
\frac{\varepsilon(t)}{s_o} = D(t) = \int \frac{1}{E}\,\bigl[1 - \exp(-t/t_0)\bigr]\, dt_0. \qquad (9.14)
Because of the long times frequently associated with some materials, it has become conventional to express D(t) as a function of ln t_0. Doing so results in

\frac{\varepsilon(t)}{s_o} = D(t) = \int \frac{t_0}{E}\,\bigl[1 - \exp(-t/t_0)\bigr]\, d(\ln t_0), \qquad (9.15)

where t_0/E is the contribution to the creep compliance of retardation times in the range d(ln t_0), and the integral covers all values of t_0. The quantity t_0/E is called the retardation spectrum.71
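A minimal sketch of Eq. (9.13) is shown below: the creep compliance D(t) of a generalized Kelvin–Voigt chain is evaluated as a sum over elements, each contributing 1/E_n with its own retardation time; the three-element spectrum is an arbitrary illustration, not data from the chapter.

```python
import numpy as np

E = np.array([5e3, 2e4, 8e4])          # element moduli [Pa] (assumed)
tau = np.array([0.1, 1.0, 10.0])       # retardation times [s] (assumed)

def creep_compliance(t):
    """D(t) = sum_n (1/E_n) * [1 - exp(-t/tau_n)], Eq. (9.13)."""
    t = np.atleast_1d(t)[:, None]
    return np.sum((1.0 / E) * (1.0 - np.exp(-t / tau)), axis=1)

for t in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"t = {t:6.2f} s   D(t) = {creep_compliance(t)[0]:.3e} 1/Pa")
```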
Implicit in the above argument is the idea that each element in the series
has its own frequency dependent behavior that results in a spectrum of
compliances for the tissue. Thus, the retardation spectrum completely defines
the behavior of a tissue and, in principle, can be determined through inversion
of the integral that defines ε(t). Through an understanding of the spectrum,
the molecular processes that determine the mechanical behavior may be
elucidated, which may be of use in the investigation of early neoplastic
changes in tissues. Many optical elastography techniques lend themselves
particularly well for investigating retardation spectra and, therefore, may be
valuable in such studies. However, this has not been investigated in any
detail.72 Of equal importance, data from different types of experiments can be
compared directly by transforming the results into a spectrum that can then be
transformed into the results of any other experiment conducted over a range
of frequencies (or temperatures).
It is apparent, then, that measures of the time rate of strain and the
frequency response of tissue are vital to understanding the mechanics of both
healthy and diseased tissue and may provide information on pathological
processes.
where p(I) is the probability that a speckle has brightness between I and (I + dI), and Ī is the average brightness. Equation (9.16) is plotted as curve 1 in Fig. 9.5, and it can be seen that the most probable speckle is dark. The brightness distribution dictated by Eq. (9.16) can be produced only by the interference of light that is all polarized in the same manner, resulting in a similarly polarized speckle pattern.77 Thus, to produce such a distribution, the scattering surface must not depolarize the scattered light, or a polarizer must be placed in front of the detector.
Figure 9.6 Measured intensity probability distribution function of the speckle field
generated by illuminating a sample of porcine skin with a He-Ne laser compared to that
predicted by an incoherent combination of two fields.
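The intensity statistics discussed above can be reproduced numerically. The sketch below (a standard random-phasor construction, not code from the chapter) generates a fully developed, polarized speckle pattern and compares its intensity histogram with the negative-exponential density p(I) = (1/Ī) exp(−I/Ī).

```python
import numpy as np

rng = np.random.default_rng(5)
n_points, n_phasors = 20_000, 100
phasors = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_points, n_phasors)))
intensity = np.abs(phasors.sum(axis=1)) ** 2 / n_phasors   # fully developed speckle intensities

i_mean = intensity.mean()
hist, edges = np.histogram(intensity, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
model = np.exp(-centers / i_mean) / i_mean                  # negative-exponential density
print(np.max(np.abs(hist - model)))   # small deviation: the most probable speckle is dark
```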
where λ_o is the wavelength of the incident light. If, however, a lens is used to image the speckle pattern (imaged or subjective speckle), then the spatial distribution of the speckle in the image plane is ultimately determined by the diffraction limit of the imaging system. Here, the mean speckle size, d'_{sp}, is approximated by

d'_{sp} \approx \frac{\lambda_o d_i}{D}, \qquad (9.19)
where d_i is the imaging distance, and D is the pupil diameter. Equation (9.19) can be generalized to account for image magnification, M:

d'_{sp} \approx (1 + M)\,\frac{\lambda_o f}{D}, \qquad (9.20)
where f is the focal length of the imaging system. In certain instances, such as in classical speckle interferometry and speckle photography, it is important to be able to approximate the size of the speckles on the scattering surface. In this case, the approximate speckle size referred to the object plane, d''_{sp}, is given by

d''_{sp} \approx \left(1 + \frac{1}{M}\right)\frac{\lambda_o f}{D}. \qquad (9.21)
Equations (9.19)–(9.21) assume that the lens actually images the scattering
surface. This is, however, not a requirement.77 It should be noted that while
the spatial distribution of the speckles differs between objective and subjective
speckle, the intensity distributions are the same for the two cases.
For speckle patterns that have a minimum size of a single pixel or larger,
the minimum speckle size in the pattern can be estimated by examining the
FWHM of the autocovariance function via the Wiener-Khintchine theorem
or, for speckle patterns that have a minimum speckle size of at least two
pixels, by examining the width of the power spectral density of the speckle
intensity.
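The autocovariance-based estimate of the minimum speckle size mentioned above can be sketched as follows (an assumed implementation): the power spectral density of the speckle image is inverted via the Wiener–Khintchine theorem, and the FWHM of the central autocovariance peak is taken as the speckle size; the band-limited test field is synthetic.

```python
import numpy as np

def speckle_size_pixels(image):
    """FWHM (in pixels) of the normalized intensity autocovariance along the x axis."""
    img = image - image.mean()
    psd = np.abs(np.fft.fft2(img)) ** 2                  # power spectral density
    acov = np.fft.fftshift(np.fft.ifft2(psd).real)       # Wiener-Khintchine: autocovariance
    acov /= acov.max()
    row = acov[acov.shape[0] // 2]                       # central row through the zero-lag peak
    above = np.where(row >= 0.5)[0]                      # samples at or above half maximum
    return above[-1] - above[0] + 1

rng = np.random.default_rng(6)
fx, fy = np.meshgrid(np.fft.fftfreq(256), np.fft.fftfreq(256))
pupil = np.hypot(fx, fy) < 0.1                           # band limit sets the speckle size
spec = np.fft.fft2(rng.normal(size=(256, 256)) + 1j * rng.normal(size=(256, 256))) * pupil
speckle = np.abs(np.fft.ifft2(spec)) ** 2
print(speckle_size_pixels(speckle))                      # a few pixels for this band limit
```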
It can be seen, then, that the in-plane strain term can readily be isolated. Had
the complementary configuration (2 cameras, 1 laser beam) been employed,
Eq. (9.23) would have taken the form
\delta A \equiv \delta x(+\theta_o, 0) - \delta x(-\theta_o, 0) = 2 L_o \varepsilon_{xx} \tan\theta_o - 2 a_z \sin\theta_o, \qquad (9.24)

and the desired strain term could only be isolated if the term containing a_z was made negligible compared to the strain term. This can be accomplished through a proper choice of L_o and θ_o.
Figure 9.8 Stacked speckle history of a sample undergoing a linear strain. Time is along
the y axis, and space is along the x axis.
rate of speckle pattern shift, δẋ, is given by the slope, m, of the bright line running through DC. The slopes are determined by fitting a weighted least squares regression to the transformed data, using the signal-to-noise ratio (SNR) of each point as the weight. Thus, by taking the time derivative of Eq. (9.24) and rearranging to isolate the desired strain term, we get a simple expression for directly estimating the time rate of in-plane strain, ε̇_xx (with confidence intervals):

\dot{\varepsilon}_{xx} = \frac{m_2 - m_1}{2 L_o \sin\theta_s} \pm \frac{\sqrt{s_2^2 + s_1^2}}{2 L_o \sin\theta_s}\; t(N-2, a), \qquad (9.25)
where the subscripts are associated with the positive and negative illumination angles, s_1 and s_2 are the standard deviations about each slope, respectively, and t is the critical value of the Student's t-distribution with N − 2 degrees of freedom at a probability level of a.
probability level of a. Absolute strains can be determined by an integration
over the time course of the experiment. Figure 9.10 summarizes the scheme.
In practice, only a subset of records is transformed into the frequency
domain. Typically, somewhere between 10 and 30 records are analyzed. The
stacked speckle histories are visually inspected and matching sets of records
from each history that display minimal decorrelation, or for some other
experiment-driven reason, are selected and transformed. While it is possible to
perform the transform with a 2-D FFT, other spectral estimators can also be
used. For example, good results have been obtained using a fast Fourier transform combined with an autoregressive spectral estimator (FFT-AR), where

f_s = \frac{s \cos\theta_s}{\lambda_o L_o}. \qquad (9.29)
This model has been applied to billets of stainless steel84 and to rectangular
samples of mandibular cortical bone,87 each with a round hole drilled
into them. Strain rate estimates were made in a direction parallel to the
direction of the load application, along a stripe immediately adjacent to the
circular holes. The results were roughly what one would expect based on
elasticity theory for the steel samples and viscoelasticity theory for the bone
samples. This is an area that requires much further development before
optical elastography implemented in this way will become a meaningful
diagnostic tool.
where the first term on the right side of the equation is a result of in-plane
translation, and the second term is a result of rotation about the axis
perpendicular to the measurement plane. The total speckle motions as
observed for each illumination angle are then
\delta x(0, \theta_s) = \left(\frac{m_1 - m_2}{2} + \frac{m_1 + m_2}{2}\right) T,
\qquad
\delta x(0, -\theta_s) = \left(\frac{m_1 - m_2}{2} - \frac{m_1 + m_2}{2}\right) T, \qquad (9.31)
where T is the total measurement time. The total speckle motion associated
with in-plane strain, dxε, is then
\varepsilon_{xx} \approx \frac{\delta n}{n}. \qquad (9.34)
Since the typical strains measured in these experiments are on the order of a
few to tens of microstrain, the maximum allowable change in n over the
course of the experiment is on the order of a few ppm or less.
where t is the time over which the data were analyzed (<1 s). Again, for the slowest strain rate recorded, the total speckle movement was approximately 1.15 μm. The camera pixel pitch was 10.5 μm, yielding a total movement of approximately 0.11 pixels. While these numbers by no means indicate the
Figure 9.11 (a and b) Stacked speckle histories from a cortical bone sample undergoing a
sawtooth waveform loading; (c and d) FFT-AR transform of data records #160–180 of (a)
and (b).
resolution of the algorithm, they do provide a feel for the high sensitivity of
the approach.
Using a similar approach, Kirkpatrick and Duncan42 simultaneously
recorded strains in cortical bone samples with contact strain gauges,
extensometers, and with the transform method. Even with the strain ranges
being at the very lower limit of the sensitivity of contact strain gauges, the
results generated by the methods were in good agreement with each other,
with the correlation between the extensometer strain data and the speckle
strain data being within 4% of each other. Comparisons between the
transform method and other mechanical methods have also been made using
copper wire,93 stainless steel billets,84 and stainless steel wires.92 All of these
studies reflect the accuracy of the transform method.
Experimental measures of vascular mechanics, specifically human tibial
arteries and rat vena cava, have also been performed using the transform
method.12 Strain rates and total strains on the order of tens of microstrain
were recorded from the tissues that were both superfused and perfused to
Taking the time derivative, as was done for the in-plane strain derivations,
rearranging, and assuming uniform dilational behavior, the total out-of-plane
motion (dilation) rate is given by
\dot{a}_{(z,-z)} = \frac{\dot{\delta x}(\theta_s, \theta_s) - \dot{\delta x}(\theta_s, -\theta_s)}{\sin\theta_s\,\bigl(1 - L_s/L_o\bigr)}. \qquad (9.37)

The factor of 2 arises because the dilation is in both the +z and −z directions. Integration of Eq. (9.37) over the time for which the data were analyzed will yield the total dilational movement along the z axis.
of the data the concept is the same; we, therefore, illustrate with data in one
dimension. Two sequential speckle records are modeled as g(x) and g(x − δx), where one record is shifted with respect to the other by the amount δx. The cross-correlation between these two records is given by
Figure 9.14 Illustration of a shifting frozen speckle pattern, showing g(x) and the shifted record g(x − βt).
where the subscript i denotes the pixel (spatial dimension), and the subscript j
represents the record (temporal dimension). We assume that the shift, δx, is small compared to a pixel so that Eq. (9.41) can be approximated as

g_j(x_i \pm \delta x) \approx g_j(x_i) \pm \delta x\, g'_j(x_i). \qquad (9.42)

This is simply the first two terms of the Taylor series expansion for g. To
introduce a degree of symmetry into the problem, we inspect the two speckle
records on either side of the record of interest (Fig. 9.15),
\varepsilon_j^2 = \sum_{i=1}^{N} \bigl[\,g_{j+1}(x_i + \delta x) - g_{j-1}(x_i - \delta x)\,\bigr]^2, \qquad (9.44)
Figure 9.15 Central differencing technique: records j − 1, j, and j + 1 shown versus space (x = ip) and time (t = jΔt).
where the summation is over all pixels in the array. We are thus seeking
the dx that brings these two records into registration. Equation (9.44) may
be solved numerically by making use of a gradient search algorithm.100 If,
however, we make use of the approximation in Eq. (9.42) (small speckle
motions), then differentiation with respect to dx and rearranging yields the
following equation:
\delta x_j = -\,\frac{\sum_{i=1}^{N} \bigl[g_{j+1}(x_i) - g_{j-1}(x_i)\bigr]\bigl[g'_{j+1}(x_i) + g'_{j-1}(x_i)\bigr]}{\sum_{i=1}^{N} \bigl[g'_{j+1}(x_i) + g'_{j-1}(x_i)\bigr]^2}. \qquad (9.45)
The term in the first square bracket in the numerator is simply the first central
difference approximation101 to the derivative
Note that the shift parameter, δx, is the time rate at which the speckle pattern shifts; its units are pixels/record.
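Equation (9.45) can be implemented in a few lines, as in the following sketch (an assumed implementation, not the authors' code); the 1-D test record is a smooth synthetic stand-in for a speckle record, translated by a known sub-pixel amount.

```python
import numpy as np

def speckle_shift(prev_record, next_record):
    """Least-squares estimate of the small shift (pixels/record), per Eq. (9.45)."""
    g_prev = np.asarray(prev_record, dtype=float)
    g_next = np.asarray(next_record, dtype=float)
    d_temporal = g_next - g_prev                              # [g_{j+1} - g_{j-1}]
    d_spatial = np.gradient(g_next) + np.gradient(g_prev)     # [g'_{j+1} + g'_{j-1}]
    return -np.sum(d_temporal * d_spatial) / np.sum(d_spatial ** 2)

# synthetic 1-D stand-in for a speckle record, translated by a known sub-pixel amount
x = np.arange(4000, dtype=float)                              # pixel coordinates
rng = np.random.default_rng(7)
phases = rng.uniform(0, 2 * np.pi, 8)

def record(shift):
    """Smooth test record translated by `shift` pixels."""
    return sum(np.cos(0.03 * k * (x - shift) + p) for k, p in zip(range(1, 9), phases))

true_shift = 0.02                                             # pixels per record
print(speckle_shift(record(-true_shift), record(+true_shift)))  # ~0.02
```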
Although the means by which we arrived at our estimate for dx was quite
specific, this estimation approach is more general than it would seem. For
instance, the term approximating the first central difference for the estimate of
the temporal derivative arose because we chose to inspect speckle records on
either side of the record of interest. We could just as easily have included
additional records, weighting their contributions appropriately to estimate
the temporal derivative. For example, instead of the three-point central difference, we could use102

\frac{1}{12}\,[\,1,\ -8,\ 0,\ 8,\ -1\,]. \qquad (9.49)
In this case, the formulation for the mean square error would be

\varepsilon_j^2 = \sum_{i=1}^{N} \bigl[\,g_{j+2}(x_i + 2\delta x) - 8 g_{j+1}(x_i + \delta x) + 8 g_{j-1}(x_i - \delta x) - g_{j-2}(x_i - 2\delta x)\,\bigr]^2, \qquad (9.50)
the temporal derivative term of Eq. (9.46) would be

\frac{\partial g_j(x_i)}{\partial t_j} \approx \frac{1}{12}\bigl[\,g_{j-2}(x_i) - 8 g_{j-1}(x_i) + 8 g_{j+1}(x_i) - g_{j+2}(x_i)\,\bigr], \qquad (9.51)

the corresponding spatial-derivative term would be

\bigl[\,2 g'_{j-2}(x_i) - 8 g'_{j-1}(x_i) - 8 g'_{j+1}(x_i) + 2 g'_{j+2}(x_i)\,\bigr], \qquad (9.52)

and the spatial derivative itself can be approximated as

\frac{\partial g_j(x_i)}{\partial x_i} \approx \frac{1}{12}\bigl[\,g_j(x_{i-2}) - 8 g_j(x_{i-1}) + 8 g_j(x_{i+1}) - g_j(x_{i+2})\,\bigr]. \qquad (9.54)
These higher-order approximations to the derivative have somewhat
better noise characteristics.101 This improvement, however, comes at the
expense of reduced temporal resolution. Nevertheless, by making use of these
higher-order approximations, the processing can be tailored to the demands of
the experiment.
Up to this point, we have made no specific assumptions about the
statistics of the speckle pattern or associated noise. The only a priori
knowledge that we have introduced is that the speckle shift is small with
respect to the pixel size. If we now make the assumption that the measured
d_{j+1}(x_i + \delta x) - d_{j-1}(x_i - \delta x) = g_{j+1}(x_i + \delta x) - g_{j-1}(x_i - \delta x) + n_{j+1}(x_i + \delta x) - n_{j-1}(x_i - \delta x). \qquad (9.56)
and wish to determine the time progression of the lateral shift of features in
this image. To accomplish this, we inspect this shift based on the pair of
images prior to and after the k-th image. For this image, we define a mean
square difference between a portion of the pair of adjacent images:
\|\varepsilon_k\|^2 = \sum_i \sum_j \bigl[\,g(x_i + f_x, z_j + f_z, k+1) - g(x_i - f_x, z_j - f_z, k-1)\,\bigr]^2. \qquad (9.59)
where

g_x = \frac{1}{2}\,\frac{\partial}{\partial x}\bigl[\,g(x_i, z_j, k-1) + g(x_i, z_j, k+1)\,\bigr], \quad
g_z = \frac{1}{2}\,\frac{\partial}{\partial z}\bigl[\,g(x_i, z_j, k-1) + g(x_i, z_j, k+1)\,\bigr], \quad
g_t = \frac{1}{2}\,\bigl[\,g(x_i, z_j, k+1) - g(x_i, z_j, k-1)\,\bigr]. \qquad (9.61)
In the above, gx and gz are, respectively, the average x and z gradients at the
k-th record, and gt is the central difference approximation to the temporal
gradient. Minimization of this error, by taking the partial derivatives with
respect to the velocity components, results in a pair of simultaneous
equations:
\begin{bmatrix} \langle g_x g_x\rangle & \langle g_x g_z\rangle \\ \langle g_z g_x\rangle & \langle g_z g_z\rangle \end{bmatrix}\begin{bmatrix} f_x \\ f_z \end{bmatrix} = -\begin{bmatrix} \langle g_x g_t\rangle \\ \langle g_z g_t\rangle \end{bmatrix}, \qquad (9.62)
where ⟨⋯⟩ represents the local averaging operation. These equations are easily inverted to yield estimates of the velocity, f̄. Note that each element in Eq. (9.62) is a full-size N × M array. Solution of these linear simultaneous equations produces velocity estimates for each pixel of each plane of the image cube.
where the window function, w, denotes the local averaging operation. In its
simplest form, this window function is a simple boxcar that evenly weights the
residuals within the region of interest.
An alternative derivation of this equation is based on the concept of
optical flow. This concept relies on the assumption that the brightness,
g(x, z, t), is conserved (only its spatial distribution changes with time) so that
the total time derivative, dg/dt, is zero, i.e.,
(\nabla g)^{T} \bar{f} + g_t = 0, \qquad (9.64)

where \bar{f} = [\,f_x \ \ f_z\,]^{T} is the optical flow. Note that this equation (commonly
known as the brightness change constraint equation, or BCCE) characterizes
a single point on the image. We presume that the image gradient, ∇g, and
the partial derivative with respect to time, gt, are known, i.e., measured.
We have a single equation [Eq. (9.64) represents a single constraint] and
two unknowns, the velocities fx and fz. This is referred to as the aperture
problem of motion estimation. Minimizing the residual within the region
specified by the function, w (the aperture), leads to the previous result,
Eq. (9.63).
Once the velocities (fx, fz) are estimated, they can be integrated over time
and mapped on a pixel-by-pixel basis to create an image whose gray values
encode displacements in either the x or z direction, or the vector sum via the
Pythagorean theorem. Normalizing these displacement maps by the initial
dimensions of the sample yields elastograms that encode strain in the limit of a
small deformation (i.e., infinitesimal strain).
Visualization of regions that display a different strain response is
enhanced through the use of a neighborhood operator in the form of a
discrete convolution filter. Pixel operations as described above are not
particularly suitable for discriminating regions of interest in images because
the gray value of each pixel is determined with no consideration of the
gray values of the neighboring pixels. In contrast, neighborhood opera-
tions, such as convolution filters, analyze the spatial relations of the gray
values. Thus, in effect, this operator converts the gray scale elastograms
into feature-based elastograms. The convolution kernel (mask) may be
adaptively generated directly from the gray values of a small background
region of the strain-encoded elastogram. The elastogram can then be
convolved with this kernel in two dimensions to emphasize the local
magnitude of the strains.
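A minimal sketch of this neighborhood operation is given below. It assumes that the adaptive kernel is simply a zero-mean, unit-energy copy of a small background patch of the strain-encoded elastogram, which is one plausible reading of the adaptive mask generation described above; the function and parameter names are ours.

```python
import numpy as np
from scipy.signal import convolve2d

def feature_elastogram(strain_map, bg_slice):
    """Convolve a strain-encoded elastogram with a kernel generated from a
    small background region, emphasizing regions whose local strain texture
    differs from the background (illustrative sketch)."""
    kernel = np.asarray(strain_map[bg_slice], dtype=float)
    kernel -= kernel.mean()                      # zero mean: flat regions map to ~0
    norm = np.linalg.norm(kernel)
    if norm > 0:
        kernel /= norm                           # unit energy
    return convolve2d(strain_map, kernel, mode="same", boundary="symm")

# Example: use a 15x15 patch from the upper-left corner as the background kernel
# features = feature_elastogram(strain_map, (slice(0, 15), slice(0, 15)))
```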
$$\varepsilon_{xx} + \varepsilon_{yy} = 0, \quad \text{where } \varepsilon_{xx} = \frac{\partial u}{\partial x} \text{ and } \varepsilon_{yy} = \frac{\partial v}{\partial y}. \qquad (9.67)$$
where the reference image is constantly updated to the image prior to the
image of interest. Equation (9.71) yields images (elastograms) of cumula-
tive displacement (over n images), whereas Eq. (9.72) provides images of
instantaneous displacements between subsequent images. Normalization of
these images by the initial or updated reference specimen dimensions yields
the desired components of the Cauchy infinitesimal strain tensor. Because
many of the cumulative tissue deformations of interest are large relative to the
restrictions of the small deformation approach, it is possible to implement
the updated Lagrangian model in the analysis. Using this approach, one can
describe the tissues using a hyperelastic material description. That is, the
stress-strain relationships in the tissues can be described via strain-energy
density functions (SEDFs). The exact forms of these functions will depend
upon the nature of the data. In real tissues, the stress–strain relations will
change as a function of time, as well as the mechanical, biochemical, and
structural history of the tissue, so the SED functional form may also change
with these parameters.
where
$$E_{ij} = \frac{1}{2}\left(\frac{\partial x_k}{\partial a_i}\frac{\partial x_k}{\partial a_j} - \delta_{ij}\right) = \frac{1}{2}\left(\frac{\partial u_i}{\partial a_j} + \frac{\partial u_j}{\partial a_i} + \frac{\partial u_k}{\partial a_i}\frac{\partial u_k}{\partial a_j}\right), \qquad (9.74)$$
$\delta_{ij}$ is the Kronecker delta, and the repeated subscripts are dummy (summation) variables. Note that
if the derivative ∂uk/∂ai is infinitesimally small, its square can be neglected and
the Green’s strain tensor [Eq. (9.73)] reduces to
$$E_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial a_j} + \frac{\partial u_j}{\partial a_i}\right) = \varepsilon_{ij}, \qquad (9.75)$$
where ρ0 is the tissue density, and W is the strain energy function (strain
potential or work). From this definition, it is seen that because the strain
energy function (SEF) is a function of strain, then the stresses can be obtained
by differentiation with respect to strain. It is difficult to justify the existence of
an SEF in biological tissues based solely upon thermodynamic considerations,
as this requires that stress be purely a function of strain and variables such as
strain rate do not influence the derived stress values. In order to justify
applying the SEF approach to biological tissues, we can take the usual
approach and assume that, after preconditioning, strain rate has minimal
influence on the SEF.
The issue then becomes what form does the strain energy density function
take? Historically, there have been essentially two schools of thought on the
SEDF form: either a polynomial form or an exponential form. Perhaps the
most commonly used 2D SEDF is a combination of the two, so that the strain
energy function may take the form (in the absence of shear, and thus no
E3 terms)
$$\rho_0 W = \frac{C}{2}\exp\!\left[a_1 E_{11}^2 + a_2 E_{22}^2 + 2a_4 E_{11}E_{22}\right]. \qquad (9.78)$$
This form has been applied successfully to a variety of tissue types including
vascular, heart, skin, and connective. Thus, it serves as a reasonable starting
place. It can be seen, then, that in elastography, we can progress from the
collection of speckle images to the generation of elastograms and the
formulation of SEDFs in a relatively straightforward manner.
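As a worked example of Eq. (9.78), the sketch below evaluates the exponential SEDF and the stresses obtained by differentiating it with respect to the Green strains E11 and E22; the material constants in the usage comment are arbitrary placeholders, not values from this chapter.

```python
import numpy as np

def sedf_and_stress(E11, E22, C, a1, a2, a4):
    """Evaluate the exponential SEDF of Eq. (9.78) and the stresses obtained
    by differentiating it with respect to the Green strains (sketch).
    C, a1, a2, a4 are material constants to be fit to experimental data."""
    Q = a1 * E11**2 + a2 * E22**2 + 2.0 * a4 * E11 * E22
    W = 0.5 * C * np.exp(Q)                        # rho_0 * W, Eq. (9.78)
    S11 = C * np.exp(Q) * (a1 * E11 + a4 * E22)    # d(rho_0 W)/dE11
    S22 = C * np.exp(Q) * (a2 * E22 + a4 * E11)    # d(rho_0 W)/dE22
    return W, S11, S22

# Example with arbitrary illustrative constants:
# W, S11, S22 = sedf_and_stress(E11=0.05, E22=-0.02, C=10.0, a1=1.0, a2=1.0, a4=0.5)
```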
Figure 9.17 Performance of correlation estimator for large speckle shifts.
Figure 9.18 Performance of closed-form max likelihood estimator for large speckle shifts.
Figure 9.19 Gradient search for max likelihood solution for large speckle shifts. Compare
with the results shown in Fig. 9.17.
The correlation between these two results is in excess of 0.99. Obviously, the
small shift requirement is violated by these data. These results simply show
that estimating speckle shifts for large motions is a relatively straightforward
problem.
Now, we consider an example displaying small speckle motions
(Fig. 9.20). Correlation and maximum likelihood processing produce the
results shown, respectively, in Figs. 9.21 and 9.22. Clearly, the speckle motion
in these data is below the resolution of the correlation processing algorithm (it
cannot perceive motions of less than a half pixel), whereas the maximum
likelihood estimator performs quite well.
[Plots of the estimated shift (pixels/record) versus record number for the small-motion example (Figs. 9.21 and 9.22).]
9.12 Generalizations
In the discussion so far, it has been assumed implicitly that we are dealing with
objective (nonimaged) speckle. In this case, a particular portion of the speckle
record cannot be identified with a particular spot on the object. Most of
these techniques, however, are appropriate for subjective speckle as well. The
bone example previously discussed employed objective speckle. However, the
Figure 9.23 Speckle motion estimates based on first and last 128 columns.
experiment was carefully contrived so that the specimen (a cortical bone sample)
was uniformly illuminated and uniformly strained. Success of this effort is
confirmed by the plots of speckle motion shown in Fig. 9.23. These two traces
represent estimates of speckle motion made with the max likelihood estimator
using the first and the last 128 columns of the speckle history. Although there
are some minor differences, which we attribute to the fact that these are two
distinct speckle realizations, the speckle motions are very similar (r = 0.97).
With this concept in mind, it is easy to see that these algorithms can be used for
a full 2-D sequence of images by separately processing a 2-D speckle history for
each of the image dimensions, as in Fig. 9.24.
We have discussed some illustrative parametric processing algorithms.
Using the basic approach, one could imagine a variety of others. For example,
the conventional processing approach searches for the lag for which the cross-
correlation is a maximum. Rather than using the FFT, one might search
directly for the shift that maximizes the cross-correlation. Specifically, in the
notation of this chapter, we could seek the speckle pattern shift (in one
dimension), dx, that maximizes the cross-correlation coefficient,
$$\rho_j^2 \propto \sum_{i=1}^{N} g_{j-1}(x_i - \delta x)\, g_{j+1}(x_i + \delta x). \qquad (9.79)$$
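A direct search of this kind can be sketched as follows; sub-pixel shifts are handled here by linear interpolation of the two records, and the search range and step size are arbitrary choices made for illustration.

```python
import numpy as np

def best_shift(g_prev, g_next, max_shift=2.0, step=0.05):
    """Search directly for the (sub-pixel) shift dx that maximizes the
    cross-correlation of Eq. (9.79) between records j-1 and j+1 (sketch)."""
    x = np.arange(g_prev.size, dtype=float)
    best_dx, best_rho = 0.0, -np.inf
    for dx in np.arange(-max_shift, max_shift + step, step):
        a = np.interp(x - dx, x, g_prev)   # g_{j-1}(x - dx)
        b = np.interp(x + dx, x, g_next)   # g_{j+1}(x + dx)
        a = a - a.mean()
        b = b - b.mean()
        rho = np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if rho > best_rho:
            best_rho, best_dx = rho, dx
    return best_dx, best_rho
```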
In terms of the two measured speckle shifts, $\delta x_1$ and $\delta x_2$, the strain is
$$\varepsilon_{xx} = \frac{\delta x_1 - \delta x_2}{2L_o \sin\theta_s}, \qquad (9.80)$$
where $L_o$ is the effective object distance, and $\theta_s$ is the illumination angle. For
objective speckle, the effective object distance is the physical distance between
the object and the detector focal plane. In the case of subjective speckle, it is
the misfocus distance. For the complementary configuration of one normally
incident laser beam and two cameras, the corresponding relationship is
$$\varepsilon_{xx} = \frac{\delta x_1 - \delta x_2}{2L_o \tan\theta_o}, \qquad (9.81)$$
where $\theta_o$ is the observation angle. Note that these are the same results as
Eqs. (9.23)–(9.25).
Note that Eq. (9.84) is a complex function in which the amplitude describes
the OCT structural image. The phase Φ(z) appears to be pseudo-random with
depth, z, and is temporally invariant, i.e., it is constant versus time for a given
position for (relatively) static scatterers. A translation of the scatterer at
depth z by a distance Dd(z) during the time interval Dt between two
successive A-scans will induce a change in the measured phase of the
reflected light given by113
$$\Delta\Phi(z) = 2nk\,\Delta d(z), \qquad (9.85)$$
where n is the refractive index of the sample, and k is the wavenumber of the
light source (k = 2π/λ = 2πν/c). Calculating this phase difference at each
depth z yields depth-resolved measurements of both the magnitude and
direction of the axial (parallel to the imaging beam) displacement of the tissue
at the time t,113
$$\Delta d(z,t) = \frac{\Delta\Phi(z,t)\,\lambda}{4\pi n}. \qquad (9.86)$$
The corresponding axial velocity of the scatterer is obtained by dividing this displacement by the A-scan interval Δt:
$$v(z,t) = \frac{\Delta\Phi(z,t)\,\lambda}{4\pi n\,\Delta t}. \qquad (9.87)$$
After the depth-resolved instantaneous displacement and velocity are
obtained, the strain rate map $\dot{\varepsilon}(z,t)$ representing the localized elastic properties
of tissue can be generated:
$$\dot{\varepsilon}(z,t) = \frac{v(z,t)}{z_0} = \frac{\Delta\Phi(z,t)\,\lambda}{4\pi n z_0\,\Delta t}, \qquad (9.88)$$
where $z_0$ is the initial depth in the sample before the tissue movement, i.e., before the compression, and the dot over the character denotes the time derivative. So far, we have derived the depth-resolved instantaneous displacement [Eq. (9.86)] and strain rate [Eq. (9.88)] of the sample at the time t. The total displacement and strain over a time period T can, therefore, be obtained by integration of the instantaneous displacement and strain rate over
the elapsed time, respectively. As a consequence, the depth-resolved displace-
ment d(z) and strain ε(z) maps of the sample over the time duration T can be
written as
$$d(z) = \int_0^T \Delta d(z,t)\,dt = \int_0^T \frac{\Delta\Phi(z,t)\,\lambda}{4\pi n}\,dt, \qquad (9.89)$$
$$\varepsilon(z) = \int_0^T \dot{\varepsilon}(z,t)\,dt = \int_0^T \frac{\Delta\Phi(z,t)\,\lambda}{4\pi n z_0\,\Delta t}\,dt. \qquad (9.90)$$
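The displacement, velocity, and strain-rate maps of Eqs. (9.86)–(9.90) follow from the measured phase differences by simple scaling and accumulation, as in the following Python sketch; the discrete sums stand in for the integrals, and the array names are assumptions made for illustration.

```python
import numpy as np

def phase_to_deformation(delta_phi, wavelength, n, dt, z0):
    """Depth-resolved displacement, velocity, and strain rate from the phase
    difference between successive A-scans [Eqs. (9.86)-(9.88)], with their
    accumulated values over the acquisition [cf. Eqs. (9.89)-(9.90)] (sketch).

    delta_phi : 2D array of phase differences [rad], shape (n_depths, n_times)
    wavelength, n, dt, z0 : source wavelength, refractive index, A-scan
        interval, and initial depth, all assumed known.
    """
    disp = delta_phi * wavelength / (4.0 * np.pi * n)     # Eq. (9.86)
    vel = disp / dt                                        # Eq. (9.87)
    strain_rate = vel / z0                                 # Eq. (9.88)
    total_disp = np.sum(disp, axis=1)                      # discrete analog of Eq. (9.89)
    total_strain = np.sum(strain_rate * dt, axis=1)        # discrete analog of Eq. (9.90)
    return disp, vel, strain_rate, total_disp, total_strain
```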
To ensure correlation between the phase measurements made at a given depth
in successive A-scans while simultaneously transversally scanning, the
transverse displacement of the imaging beam between A-scans must be small
enough relative to the probe beam size to avoid spatial speckle decorrelation.
This constraint can be met by densely sampling in the transverse direction.
Above, we describe phase-sensitive detection in the context of comparing
consecutive A-scans acquired within a B-scan. A similar approach was
outlined by Wang et al.114 in which the phase difference was calculated
between A-scans acquired in consecutive B-scans. Recently, a technique was
proposed in which the phase difference was calculated between A-scans
acquired in consecutive C-scans, enabling rapid acquisition of OCE volumes
in as little as 5 s.115
Phase-sensitive detection is the most prominent displacement-estimation
technique in contemporary OCE systems. However, speckle-tracking methods
are also being utilized.116 Advantages of phase-sensitive detection include
higher displacement sensitivity (in the picometer range) and a larger dynamic
range, particularly if phase-unwrapping techniques are used to extend the
maximum measurable displacement.117 However, it is anticipated that
speckle-tracking techniques will continue to be utilized. Speckle tracking has
the distinct advantage that it can readily be used to measure displacement in
both axial and transverse dimensions. This feature is needed to allow accurate
quantification of tissue stiffness in OCE. Phase-sensitive detection, on the
other hand, measures only the axial component of displacement. Extensions
of phase-sensitive detection, developed mainly for OCT flow imaging, have
extended the technique to enable the detection of transverse displacements;
however, they require either additional hardware to provide illumination of
the sample at multiple angles or they necessitate a degradation in spatial
resolution.118 Additionally, speckle tracking may prove to be more robust for
in vivo measurements as it is likely that it will be able to better tolerate motion
artifacts caused by patient and/or operator motion.
Several other related techniques have been proposed for measuring
deformation in OCE. In one technique, based on joint spectral- and time-
domain (STd) OCT, dynamic loading applied to a tissue sample results in
an amplitude spectrum of frequency tones described by Bessel functions.119
In this technique, in addition to the standard Fourier transform used to obtain depth-resolved information in Fourier-domain OCT, a second Fourier transform is performed in the time domain, i.e., across multiple A-scans.
The result is a Doppler spectrum containing frequency tones, which can
be used to extract the vibration amplitude of the tissue. It has been shown
that this technique is more resilient to noise than phase-sensitive detection.
Another method, closely related to speckle tracking, measures strain
following a compressive load directly from the speckle decorrelation.120
This is an attractive technique as it bypasses displacement estimation
inherent in other techniques. The principle of this technique is that for
softer tissue, higher speckle decorrelation results following compressive
loading than for stiffer tissue. Importantly, speckle decorrelation can result
from both displacement and strain. In order to isolate decorrelation caused
by strain, it is important to first translate the deformed speckle pattern
relative to the initial speckle pattern to maximize correlation between the
two speckle patterns, thus accounting for the decorrelation caused by
displacement.
The local axial strain is estimated from the change in displacement, Δd, over a depth increment Δz:
$$\varepsilon_l = \frac{\Delta d}{\Delta z}. \qquad (9.91)$$
In shear wave OCE, the shear wave speed, $c_s$, is estimated as
$$c_s = \frac{\omega\,\Delta r}{\Delta\varphi}, \qquad (9.92)$$
where ω is the acoustic angular frequency and Δφ is the phase shift of the shear wave over a distance Δr. Assuming that soft tissue is incompressible
(Poisson’s ratio of 0.5), this allows the Young’s modulus, E, to be estimated
from the shear wave speed as
$$E = 3\rho c_s^2. \qquad (9.93)$$
The speed of shear waves in soft tissue is typically in the range 1–10 m/s. It
has been evaluated from phase-sensitive OCT measurements, although in
principle it could also be calculated using speckle tracking. To measure the shear
wave phase shift, an M-B scanning acquisition method has been developed
whereby a number of A-scans are acquired at each lateral position.135
Surface wave OCE is similar to shear wave OCE, with the key distinction
being that the wave propagates largely along the surface rather than in the
bulk of the tissue. Assuming the surface wave propagates as a Rayleigh
wave, the Young’s modulus, E, can be calculated from the surface wave
velocity, csw, as136,137
$$c_{sw} = \frac{0.87 + 1.12\nu}{1+\nu}\sqrt{\frac{E}{2\rho(1+\nu)}}, \qquad (9.94)$$
where ν is the Poisson's ratio and ρ is the tissue density.
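Both elastic moduli follow from the measured wave speeds by simple algebra, as the following Python sketch shows. The default density and Poisson's ratio (1000 kg/m³ and 0.5) are the usual soft-tissue assumptions rather than values from this chapter, and the function names are ours.

```python
import numpy as np

def shear_wave_modulus(omega, dr, dphi, rho=1000.0):
    """Young's modulus from a shear wave phase measurement:
    c_s = omega*dr/dphi [Eq. (9.92)] and E = 3*rho*c_s**2 [Eq. (9.93)]."""
    cs = omega * dr / dphi
    return 3.0 * rho * cs**2

def surface_wave_modulus(csw, nu=0.5, rho=1000.0):
    """Young's modulus from the Rayleigh surface-wave speed by inverting
    Eq. (9.94): csw = (0.87 + 1.12*nu)/(1 + nu) * sqrt(E / (2*rho*(1 + nu)))."""
    factor = (0.87 + 1.12 * nu) / (1.0 + nu)
    return 2.0 * rho * (1.0 + nu) * (csw / factor) ** 2

# Example: a 2 m/s shear wave in tissue of density 1000 kg/m^3 gives E = 12 kPa.
# print(shear_wave_modulus(omega=2*np.pi*400, dr=1e-3, dphi=2*np.pi*400*1e-3/2.0))
```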
Figure 9.25 (a) Static speckle image of skin sample and (b) the same sample subject to a
30 Hz acoustic vibration.
Under vibration, softer tissues should thus present a more uniform gray image, whereas stiffer tissues should present a more strongly
speckled image. That is, stiffer tissues should have a higher speckle contrast
value.
Illustrating the concept is Fig. 9.25(a)–(b). Here, a skin sample was first
static [Fig. 9.25(a)] and then vibrated at 30 Hz [Fig. 9.25(b)], while
illuminated by an expanded He-Ne laser (633 nm) beam. A gradual
graying (i.e., loss of speckle contrast) is seen as the modulation frequency
is increased. The camera integration time was 0.0167 s. Concurrent with
this increase in grayness (decrease in speckle contrast) is a change in the
intensity probability distribution. As the image grays, the intensity
probability distribution takes on a more Gaussian shape, centered about
[Plot: speckle grayness, G, versus driving frequency (0–35 Hz) for the native and treated portions of the tendon.]
Figure 9.26 Grayness values of speckle contrast for chemically stiffened (treated) and native (nontreated) portions of a tendon.
$$G_i = 1 - \sigma_I/\bar{I}, \qquad (9.95)$$
where $\sigma_I$ and $\bar{I}$ are, respectively, the standard deviation and the mean of the measured intensity.
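Equation (9.95) is straightforward to evaluate over a sliding window, as in the sketch below; the window size is an arbitrary choice and the small constant guards against division by zero.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def grayness_map(image, win=7):
    """Local grayness G = 1 - sigma_I / mean_I [Eq. (9.95)] computed in a
    sliding window; stiffer (less blurred) regions give lower G (sketch)."""
    img = image.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    sigma = np.sqrt(np.clip(mean_sq - mean**2, 0.0, None))
    return 1.0 - sigma / (mean + 1e-12)
```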
9.15 Conclusions
The optical elastography methods presented here characterize the mechanical
properties of tissue on a length scale intermediate between cell-based methods
and medical imaging methods. Because these methods lend themselves to in
vivo applications, they hold the promise of assisting in the diagnosis of diseases,
with early results in the eye, skin, and breast pointing the way to future clinical
tools. It is also evident that these methods will make important future
contributions to biomechanics, biomaterials, and tissue engineering as well.
References
1. J. Ophir, I. Cespedes, H. Ponnekanti, Y. Yazdi, and X. Li,
“Elastography: A quantitative method for imaging the elasticity of
biological tissues,” Ultrason. Imaging 13, 111–134 (1991).
2. R. Jones and C. Wykes, Holographic and Speckle Interferometry.
A Discussion of the Theory, Practice and Applications of the Techniques,
Cambridge University Press, Cambridge (1983).
3. K. J. Gåsvik, Optical Metrology Second Edition, John Wiley & Sons,
Chichester (1995).
4. G. Cloud, Optical Methods of Engineering Analysis, Cambridge
University Press, Cambridge (1995).
5. P. K. Rastogi (ed.), Optical Measurement Techniques and Applications,
Artech House, Inc., Boston (1997).
6. A. Oulamara, G. Tribillon, and J. Duvernoy, “Biological activity
measurement on botanical specimen surfaces using a temporal decorr-
elation effect of laser speckle,” J. Mod. Opt. 36, 165–179 (1989).
7. L. Gao, K. J. Parker, R. M. Lerner, and S. F. Levinson, “Imaging of the
elastic properties of tissue – a review,” Ultrasound in Med. & Biol. 22(8),
959–977 (1996).
8. R. M. Lerner, S. R. Huang, and K. J. Parker, “Sonoelasticity images
derived from ultrasound signals in mechanically vibrated tissues,”
Ultrasound in Med. & Biol. 16(3), 231–239 (1990).
9. H. E. Talhami, L. S. Wilson, and M. L. Neale, “Spectral tissue strain:
a new technique for imaging tissue strain using intravascular ultra-
sound,” Ultrasound in Med. & Biol. 20(8), 759–772 (1994).
10. R. Muthupillai, P. J. Rossman, D. J. Lomas, J. F. Greenleaf, S. J.
Riederer, and R. L. Ehman, “Magnetic resonance imaging of transverse
acoustic waves,” Magn. Reson. Med. 36, 266–274 (1996).
11. E. Konofagou and J. Ophir, “A new elastographic method for
estimation and imaging of lateral displacements, lateral strains, corrected
axial strains and Poisson’s ratios in tissues,” Ultrasound in Med. & Biol.
24(8), 1183–1199 (1998).
I. Yu. Yanina
Saratov National Research State Medical University, Saratov, Russia
V. V. Tuchin
Saratov National Research State University, Saratov, Russia
Tomsk National Research State University, Tomsk, Russia
Institute of Precision Mechanics and Control, Russian Academy of Sciences,
Saratov, Russia
The scattering coefficient (μs) and the scattering anisotropy factor (g)
mainly depend upon the refractive index mismatch between the components
of the cells, such as cell membranes, mitochondria, nucleus, other organelles,
and cytoplasm and extracellular fluid. In fibrous tissues (stroma of eye sclera
and cornea, dermis, dura mater, connective tissue of vascular walls, fibrous
components of muscle tissue and mammary gland, cartilage, tendon, etc.)
the scattering is caused by the refractive index difference between the
interstitial fluid or cytoplasm and the extensive chains of scleroproteins
(collagen, elastin, and reticulin fibrils).5–7 The refractive index values for
nuclei and cytoplasm organelles of animal cells, containing nearly the same
amount of proteins and nucleic acids, lie within the relatively narrow interval
from 1.38 to 1.41.1,7 In particular, for the nucleus, the refractive index is
n_nc = 1.39,1,8 and for the cytoplasm n_0 = 1.35–1.37.1 The scattering particles
(organelles, protein fibrils, membranes, and globules) have a greater density of
proteins and lipids, and thus a higher refractive index (n_s = 1.39–1.47) in
comparison with the ground substance of the cytoplasm.1 The refractive
index magnitudes for the connective tissue fibrils lie in the range 1.41–1.53
and depend upon the degree of hydration of their major component, the
collagen.9 The refractive index of the interstitial fluid and the blood plasma
amounts to nearly 1.33–1.35, depending on the wavelength.1,10 The main
scatterers in the blood are the red blood cells (RBC) (erythrocytes), which are
acaryocytes, containing 70% water, 25% hemoglobin, and 5% lipids, sugars,
salts, enzymes, and proteins (see Chapter 2 of Vol. 1). The refractive index
values of dehydrated RBC at the wavelength of 550 nm fall within the
range of 1.61–1.66.11 The refractive index of the hemoglobin solution with a
concentration of 32 g/dL, which is a typical concentration of hemoglobin in
the erythrocyte, amounts to nearly 1.42.12,13 For human blood the refractive
index is 1.36–1.40, depending on the wavelength.1
Numerous methods have been developed to increase the tissue probing
depth.14 For example, for a long time the limit probing depth in multiphoton
microscopy did not exceed 100 μm;15 however, the combination of
multiphoton excitation of fluorescence with very high efficiency of light
collection increased the imaging depth in a scattering medium to 2 mm.16,17
The OCT allows one to study the internal structure of tissues to a depth up to
3 mm with a spatial resolution of 5–20 μm without disturbing the tissue
integrity.18 The probing depth up to a few centimeters is provided by the
multimodal imaging method, combining light absorption with acoustic
detection (referred to as photoacoustic tomography).19
One of the simple and efficient methods for solving the problem of
increasing the depth and quality of intratissular structure imaging, as well as
for increasing the precision of spectroscopic information from the deep
tissue layers and the blood, is the temporary reduction of tissue light scattering.
[Plot: number of publications versus year (1960–2010), compiled from various sources, Web of Science, and PubMed.]
Figure 10.1 Approximate evaluation of the number of publications related to the optical clearing of tissues from 1955 to May 2015. The dependences are plotted using the databases Web of Science, PubMed, and other available sources (see color plates).
fibers) caused by both pushing out the interstitial fluid from the affected
volume and the dehydration at the expense of the agent impact or water
evaporation.1,18,21,22,25,26,28–32,39 The particular mechanisms of optical clear-
ing depend on the tissue type and the method used.
In this chapter, we discuss the physical and molecular mechanisms of
optical clearing (OC) methods based on agent immersion, tissue compression,
and photodynamic/photothermal action. The OC efficiency is demonstrated
for a few typical fibrous and cellular tissues. The beneficial applications of OC
methods for increasing the probing depth and resolution of a number of
prospective optical diagnostic methods, such as OCT, projection and
photoacoustic tomography, fluorescence imaging, nonlinear and Raman
microscopy, and terahertz imaging, are shown.
10.2 Immersion OC
Following the development of immersion refractometry applied to cells, in
1955, Barer et al.40 first proposed the optical clearing (OC) of a cell suspension
by means of a protein solution having the same refractive index as the cell
cytoplasm. In the late 1980s, the method of immersion optical clearing was
first applied to the eye sclera and cornea.41
Afterwards, a few research groups started intense studies of the specific
features and mechanisms of the OC phenomenon and demonstrated the
capabilities of the method to increase the probing depth or image contrast of
optical inhomogeneities inside a scattering medium.1,6,18,21–27,31,32,34–36,39,42–87
The beneficial results of OC obtained for many optical imaging techniques,
such as laser speckle-contrast imaging,34,70,74,76,78 OCT,20,29,35,59–60,64,68,79,81
microscopy,23,24,82,83 ultra-microscopy,84–86 etc., demonstrated the high
potential of their mutual use not only for obtaining high-resolution structural
and functional tissue images in vitro,1,21,22,39,54,55,71,72,82–86 but also
in vivo.1,22,25,34–36,39,45,50,54,55,60,64,69,70,74–76,78–80 Table 10.1 shows the increase
of the light penetration depth in tissues caused by application of optical
clearing agents (OCAs) to some imaging and spectroscopy techniques.
The study of tissue OC kinetics at penetration of an OCA allows one to
evaluate the OCA diffusion and permeability coefficients.1,22,35,43,47,87–96
Using these techniques, based mostly on collimated light transmittance
measurements for in vitro studies and OCT for in vivo, the diffusion rates of
glucose, some drugs, and OCAs were determined in eye tissues,43,91–95
muscle,96 skin,45,97,98 dura mater,87 arterial99 and lung100 tissues. The
monitoring of OCA diffusion with high temporal and depth resolution allows
one to differentiate healthy from pathologically modified tissues.101–105
The immersion OC, as already mentioned above, is based on the
impregnation (immersion) of the tissue by a biocompatible agent, possessing a
sufficiently high refractive index to match the refractive indices of the
Table 10.1 Examples of the increase of light penetration depth at applications of OCAs to different imaging and spectroscopic techniques.

Technique | Tissue | OCA, refractive index | Spectral range | Light penetration depth increase | Reference
Confocal microscopy | Dehydrated murine adipose tissue, muscle tissue, myocardium, brain | Murray's Clear, 1.55 | 334–799.3 nm | ~35 fold | 250
Two-photon microscopy | Human skin ex vivo | Anhydrous glycerol, 1.47 | Excitation 700–1000 nm, detection 370–670 nm | 2 fold | 23
Confocal/two-photon microscopy | Cellulose membrane | FocusClear™, 1.46 | Excitation 820 nm, detection 390–465 nm | 2 fold | 251
3D second-harmonic microscopy (direct wave mode) | Muscle tissue of a mouse | 50%-glycerol solution, 1.395 | Excitation 890 nm, detection 445 nm | 2.5 fold | 24
Raman microspectroscopy | Porcine skin in vitro | 80%-glycerol solution, 1.46 | 785 nm | 4 fold | 297
THz spectroscopy | Bovine muscle tissue | Anhydrous glycerol, 1.47 | 0.25–1.6 THz | 45% | 305
OCT | Mouse embryo | 50%-glycerol solution, 1.395 | 1310 nm | 4.5 fold | 59
OCT | Human skin in vivo | Pre-polymer mixture of propylene glycol and polyethylene glycol, 1.47 | 1305 nm | 1.2 fold | 70
OCT | Whole blood, diluted two times with saline | 6.5%-glycerol solution, 1.34 | 820 nm | 2.4 fold | 128
Figure 10.2 Dehydration kinetics of human skin samples with intact (a) and perforated
(b) epidermis under the action of 88% glycerol solution (1) and in the course of interstitial
water evaporation in air (2). The symbols correspond to the averaged experimental data; the
curves show the result of approximation. The vertical lines present the standard deviation
(from the data of Ref. 56).
both water and the 88%-glycerol solution. Figure 10.2 shows that when the
samples with an intact and perforated horny layer are dried under identical
conditions, their final dehydration degree should be practically similar.
However, from the analysis of the dehydration characteristic time (for the perforated sample τ amounts to 71.2 ± 1.9 hrs) it follows that the dehydration process in this case occurs more than 20-fold faster than in the intact epidermis (τ = 1408.9 ± 36.3 hrs). The value of τ for the sample with a perforated epidermis under the action of the OCA was only 36.4 ± 0.7 hrs, which is almost 5.5-fold smaller than for the intact sample (τ = 193.4 ± 12.3 hrs).56
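Characteristic times such as these are typically obtained by fitting the measured dehydration degree with a saturating exponential. The Python sketch below shows one such fit; the functional form, the data values, and the initial guesses are illustrative assumptions, not the data of Ref. 56.

```python
import numpy as np
from scipy.optimize import curve_fit

def dehydration_model(t, A, tau):
    """Assumed saturating-exponential model D(t) = A*(1 - exp(-t/tau)),
    with saturation level A and characteristic time tau."""
    return A * (1.0 - np.exp(-t / tau))

# Hypothetical measurements: time in hours, dehydration degree (dimensionless)
t_hrs = np.array([0, 10, 25, 50, 100, 200, 300], dtype=float)
D_meas = np.array([0.0, 0.15, 0.30, 0.45, 0.58, 0.65, 0.67])

(A_fit, tau_fit), _ = curve_fit(dehydration_model, t_hrs, D_meas, p0=(0.7, 50.0))
print(f"fitted saturation level A = {A_fit:.2f}, characteristic time tau = {tau_fit:.1f} hrs")
```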
In Fig. 10.2(b), it is well seen that with the perforated epidermis the
maximal dehydration degree under the evaporation of water from the tissue is
essentially higher than that under osmotic action. At the same time, the rate of
skin dehydration due to evaporation is nearly two times slower than that
under osmotic action.
The study of the water evaporation process allowed for the separation of
the dehydration mechanism of OC from the other mechanisms (i.e., the
matching of refractive indices due to the penetration of OCA into the tissue
and the collagen structure modification). For glycerol action, the degree of
dehydration in the samples with perforated stratum corneum is smaller than
that in the intact samples. Glycerol permeates through perforated skin much
more easily. Due to its hygroscopic properties, penetrating glycerol retains
water inside skin. From the practical point of view, the topical delivery of
OCA into the skin through the perforated channels hampers the formation of
a strong concentration gradient between the interstitial fluid and the OCA
solution, which reduces tissue dehydration and can be considered as a positive
factor for OC of skin in vivo.
Mao et al.107 focused their attention on the efficiency of six polyhydric alcohols (1-butanol, 1,4-butanediol, 1,3-propanediol, PEG-200, PEG-400, and glycerol), with molecular weights from 74 to 420 Da and refractive index values from 1.40 to 1.47, in affecting the degree of skin OC. For better in vitro modeling of the topical OCA
application in vivo, the agents were applied to the pig skin epidermis topically,
while the dermis was moistened with saline. The relative transmission of the
samples, measured using the integrating sphere, was used to assess the
efficiency of OC.107 The authors found that glycerol, which is a trihydric alcohol, provides the maximal OC effect, while the monohydric alcohol 1-butanol provides the smallest effect, and the four dihydric alcohols provide an intermediate one. After analysis of the interrelation between the OC efficiency
and the refractive index or molecular weight of the OCA under study, the
authors concluded that the OC effect of alcohols should be related to
the number of hydroxyl groups rather than to the refractive index or the
molecular weight of the OCA.
OCAs that use sugars and polyhydric alcohols can induce a reversible
solubility of tissue collagen and lead to the additional reduction of tissue
scattering due to a decrease of the basic scatterer size.121 The collagen fibers
have a complex self-organization structure and are the main scattering centers
in many tissues, such as skin dermis and eye sclera.1,117 The hydrogen bond is
the main bonding force between the triple collagen helices. The OCAs with
multiple hydroxyl groups possess a greater negative charge that destabilizes
the highly-ordered collagen structure until its dissociation. Since the hydrogen
bonds in the triple collagen helices belong to noncovalent interactions, the
OCA-induced collagen dissociation is reversible. Hirshburg et al.109,110,117,121,122
observed the glucose-induced dissociation of collagen fibers in tissues in vitro
and the corresponding reduction of the scattering coefficient. The subsequent
replacement of glycerol by phosphate-buffer solution provided the restoration
of the collagen structure and scattering properties.
Hirshburg et al.110 performed mathematical modeling of molecular
dynamics in order to clarify the formation of hydrogen bonds between the
alcohol (glycerol, xylitol, and sorbitol) and collagen molecules. They
divided the hydrogen bond bridges into different types depending on the
position of hydroxyl groups involved in the interaction with respect to
carbon atoms. They associated the type with the number of carbon atoms
in the hydrogen bond bridge. It was found that the bridges with a large number of carbon atoms, built in between the collagen molecules in the helix, can break the
collagen–collagen and collagen–hydroxyl bonds more efficiently than the
bridges with smaller numbers. Thus, the alcohols having pairs of hydroxyl
groups with a longer carbon chain between them should be more efficient
for OC than the alcohols having adjacent hydroxyl groups. The results of
the modeling explain the fact that 1,3-propanediol demonstrated an OC
potential twice as great as that of 1,2-propanediol, although both possessed
similar molecular weights (76.10 Da), close values of the refractive index
(1.44 and 1.43), and osmolality (8.3 and 8.7 Osm/kg). The reason is that
1,2-propanediol can only form bridges of the type I, while 1,3-propanediol
forms bridges of the type II.110 Since the modeling of molecular dynamics
can clarify the specific features of interaction between the OCA and the
structural elements of the tissue at the microscopic (molecular) level, it can
be a powerful tool in the study of OC mechanisms, particularly for finding
new, highly efficient OCAs.
OC of hard tissues, such as bone, cartilage and tendon, is of great practical
interest. The reduction of scattering in these tissues offers the possibility to
develop minimally invasive methods of laser diagnostics and therapy of brain
and other deeply located organs. The possibility of optical monitoring of the
cerebral blood flow (CBF) through intact mouse skull at OC was
demonstrated by Wang et al.80 The experiments with laboratory mice
in vivo have shown that the minimal diameter of microvessels that could be
the stratum corneum of the skin epidermis is a natural barrier that impedes the
OCA penetration into the dermis.129 For in vivo applications of the OC
method, one has to use direct exposure of the skin dermis or intradermal OCA
injection. In addition, the low concentrations of OCAs do not provide
sufficient OC, whereas at high concentrations they can induce edema,
chemical burn, partial necrosis and scarring.45,64,108 To develop an efficient
and safe way of breaking the integrity of the epidermis stratum corneum and
accelerating the penetration of OCA into the dermis, various physical
methods,56,118,130–140 chemical enhancers of permeability,113,118,141–147 and
their combinations118,148,149 have been proposed.
OCA diffusion enhancement is implemented by using the agents that serve
as enhancers of tissue permeability in medicine and cosmetology.36,39 In
particular, it has been shown that Azone,141 oleic acid,142 DMSO,118,143–146
ethanol,150 propylene glycol,147,151 and Thiazone149 considerably enhance tissue
permeability for OCAs and increase the efficiency of OC.
The possible mechanism of diffusion enhancement using Azone on
application to skin could be an increase of the fluidity of hydrophobic regions
of the stratum corneum and the corresponding reduction of resistance to the
penetration of OCAs.141 For example, after 60-min exposure of in vitro skin to
the mixture of 40%-glycerol with Azone, the absorption of light at the wavelength
1276 nm increased by 41.1%, and the diffuse reflection at the wavelength
1066 nm decreased by 29.3%, which considerably exceeded the values of
these parameters under the application of a more-concentrated 80%-aqueous
solution of glycerol.141
Oleic acid is a monounsaturated fatty acid that is widely used as a safe
transdermal enhancer for drug delivery. In the context of skin optical clearing, the synergetic effect of oleic acid as a promoter of OCA penetration into the skin was
studied.142
DMSO is a well-known agent widely used for improving the
transdermal delivery of drugs. It is a polar aprotic solvent that acts effectively on lipids.143 DMSO enhances the permeability of the stratum
corneum for both hydrophilic and lipophilic agents.152,153 In addition, it
possesses a high refractive index and fast permeability, and thus may
serve as an OCA as well.144 It also interacts with the highly organized
structure of collagen fibers, changing the interfibrillar space at the sub-
micrometer scale,145 which is of great importance for the skin OC.
However, the results on the DMSO OC potential and safety are rather
contradictory.106,144,154,155 In Ref. 106, it is shown that the optical clearing
potential of DMSO is smaller than that of glycerol, propylene glycol,
ethylene glycol, and some other OCAs. The application of high-
concentration DMSO to the skin surface causes irritation, accompanied
by epidermal spongiosis.154 On the other hand, it was noticed that DMSO
is a high-efficiency OCA at topical application, and no side effects under
the 20-min exposure of the rat skin surface in vivo were reported.144 In
spite of the controversial evaluation of the DMSO efficiency and safety,
this substance is widely used for OC in mixtures with different
OCAs.63,118,143–146,152,153,156
Ethanol is also a solvent that modifies the skin barrier properties. Under
sufficiently high concentration (40%), ethanol facilitates the formation of
pores and essentially increases the transport of agents through them due to
increasing the size and/or the density of pores in the epidermal membrane.150
The mechanism of the stratum corneum permeability enhancement under
the action of propylene glycol is the solvation of keratin in the process of
water replacement in the binding hydrogen groups and the inclusion of
propylene glycol into the polar heads of the lipid bilayer.151 It was shown that
the mixtures of different OCAs with propylene glycol increase the efficiency of
the OC, however, the clearing effect induced by propylene glycol itself is
weaker than for these mixed OCAs.147
Thiazone is an innovative agent that increases the skin permeability nearly
three-fold more than Azone.36 It is also more efficient as compared to
propylene glycol.36,147
To increase the tissue permeability for OCAs, one can use combinations
of the abovementioned agents. For example, in Ref. 157 for OC of skin, a
combined OCA was used, including glycerol, PEG-300, ethanol and DMSO.
Beside the chemical agents, a number of physical methods were proposed
to overcome the skin barrier, including low- and high-intensity irradia-
tion,130,158 fractional lamp131,133 and laser132 microablation, mechanical
microperforation,137,138 ultrasonic (US) irradiation,134,135,159 electrophoresis,160
needleless injection,161 mechanical removal of the surface layer by means of
abrasive paper,136 epidermal tape stripping,162 and microdermabrasion.163
Different radiation sources (e.g., CO2 and Nd:YAG lasers, operating at
the wavelengths 532 and 1064 nm, respectively, broad-band sources of intense
pulsed light, operating in the ranges 650–1200, 525–1200, and 470–1400 nm)
have been used to irradiate the skin in vivo before OCA application with
different doses and exposures. The measurements of reflection spectra before
and after the exposure have shown that the radiation of an Nd:YAG laser in
the Q-switch and long pulse modes can improve a transepidermal OCA
penetration by 8–9 fold as compared to the intact skin.130 In another study, by
Stumpp et al.,158 a 980-nm diode laser was used for rodent skin irradiation
using absorbing substrates on the skin surface providing heating of the
stratum corneum and causing a failure of the protective barrier function. After
the laser action and removal of absorbing substrates, the skin surface was
subjected to glycerol application. The OCT study of skin in the exposed
regions has shown the increase of the light-penetration depth up to 42%.
A similar principle was used to create regions of microablation under the
action of a broadband flash lamp. The transparent mask with a set of
Figure 10.3 Images of tattooed skin surface: the tattooed sample before the glycerol
action (a); the tattooed sample after the surface microperforation and glycerol action during
24 hrs (b).133
absorbing carbon centers was applied to the skin surface in order to create
regions of epidermis microdefects under the light absorption.131,133
An increase of the skin transparency and image clearness of the tattoo,
located under the skin at a depth of 300–400 μm, was demonstrated by Genina
et al.133 and Bashkatov et al.164 Figure 10.3 presents the tattooed skin images
before and after fractional epidermis thermal ablation and the action of the
88%-glycerol solution.
Fractional laser microablation of the skin surface by means of the erbium
laser (2940 nm) with a pulse energy from 0.5 to 3.0 J is also an efficient tool of
overcoming the barrier for hydrophilic and hydrophobic OCAs.132
The integrity of the epidermis stratum corneum can be disturbed mech-
anically using a roller with multiple needles, often applied in cosmetology.
Yoon et al.137 used this device to create transdermal microchannels in
samples of porcine skin ex vivo with the aim of improving the glycerol
penetration into the depth of the skin. The combination of multiple needle
perforation with a low-frequency ultrasonic treatment allowed one to
increase a delivery rate of the 70%-glycerol solution into the skin by 2.3-
fold in comparison with using the multiple-needle roller alone.138
The low-frequency US (sonophoresis) is one of the noninvasive methods
for improvement of OC, and increases the depth and rate of OCA delivery
into tissues.134,135,165–168 The combined use of OCA and US provided
considerable increase of the depth and contrast of the OCT images of porcine
skin in vitro and human skin in vivo.166 Xu et al.134 used the 1 MHz-US to
enhance the penetration of 60%-glycerol and 60%-PEG-200 solutions into
the porcine skin in vitro. OCT study has shown an increase of the imaging
depth by 40% and 93%, respectively, as compared to using these OCAs
without US.
slide is applied to the skin surface with moderate pressure, kept there
nearly 3 min, and then left for 2 min more without compression. When
removing the slide, the outer layer of the skin epidermis was also removed.
The procedure was repeated 3–6 times until the skin became shiny. Then an
OCA was applied to the processed skin surface under slight pressure, which
also improved the agent penetration into the skin.
The combined use of chemical and physical OCA diffusion enhancers
facilitates further increase of the OC efficiency.118,148,149 Xu et al.148 showed
that the sodium lauryl sulphate, a surface-active agent often used as an
enhancer of penetration of pharmaceutic and cosmetic products into skin, in
combination with the US demonstrates a synergetic enhancement effect on the
penetration of 60%-glycerol solution into the skin. As a result, the optical
transmission and OCT probing depth increased and the time needed for
effective OC decreased in comparison with the US used alone.
The US processing was also used in combination with Thiazone to
improve the penetration of PEG-400 into the skin.149 It was shown that after
the complex processing with Thiazone, PEG-400, and US the diffuse
reflection coefficient decreased by 33.7-fold as compared to the control
measurements (without OC). Using only PEG-400 or PEG-400 with
Thiazone, the reduction of the reflection coefficient was 2.7- and 3.3-fold,
respectively. The probing depth increased by 41.3% in comparison with the
control samples.
The comparison of the total attenuation coefficients measured for the rat
skin in vivo at application of physical and chemical penetration enhancers was
carried out in Ref. 118. Figure 10.4 presents the result of calculating the value Δμ_t = [(μ_t(treated) − μ_t(control))/μ_t(control)] × 100% of the relative attenuation coefficient change as a
result of the multimodal skin processing with respect to the control
measurements. In this case, the control group consisted of animals not
subjected to the treatment. For this group Δμ_t = 0. The results allow for
assessment of the efficiency of different OCA diffusion enhancers (the OCA
was a mixture of equal amounts of glycerol and PEG-400). It is seen that
the OCA bar has a minimal height, which means the minimal efficiency of
the 20-min OCA action on the surface of intact skin (OC is very small). The
20-min application of OCA with 9% DMSO appeared to be less efficient in
OC than the 4-min sonophoresis. However, the maximal efficiency of
OC was observed under the combined application of US-DMSO-OCA
during 4 min.
The combined use of surface epidermis stripping with chemical enhancers
of diffusion, such as Thiazone, Azone, and propylene glycol, was studied for
the rat skin in vivo.170 The mixture of an enhancer with PEG-400 was applied
to the skin area that was preliminarily processed by tape stripping. As a result,
the skin diffuse reflection coefficient was reduced, the greatest change being
observed for Thiazone-OCA, then, in descending order, Azone-OCA,
Figure 10.4 The difference between the averaged coefficients of total light attenuation by
the intact rat skin in vivo, calculated from the OCT data. The bar “control” corresponds to the
data, obtained before the OCA action. “OCA” is a 20-min topical action of OCA (glycerol-
PEG-400 mixture in equal proportions); “DMSO-OCA” is a 20-min action of DMSO
(9%)-OCA mixture; “US-OCA” is a 4-min action of low-frequency ultrasound and OCA;
“US-DMSO-OCA” is a combined 4-min action of ultrasound and the DMSO (9%)-OCA (from
the data of Ref. 118).
propylene glycol-OCA, and pure OCA. These results differ from the data of
in vitro studies for porcine skin, for which the optimal combination of OCA
and enhancer was propylene glycol-OCA, and the minimal reduction of the
reflection coefficient was demonstrated by Azone-OCA.147 The observed differences between the in vitro and in vivo studies underline the importance of the physiological response of living tissue to OC.
The influence of microdermabrasion on the skin permeability for
hydrophilic and lipophilic agents was studied in Ref. 163. Microdermabrasion
is a partial ablation and homogenization of the stratum corneum under the
action of a high-pressure flow of microparticles. Depending on the pressure
and the time of exposure, the skin permeability for hydrophilic agents
increased by 8–24 fold, as compared to the intact skin.
10.3 Compression OC
The in vivo study of external mechanical compression on the optical properties
of tissues is interesting for several reasons. The specific features of light
propagation in a tissue vary depending on its morphologic, biochemical, and
physiological characteristics, therefore, at compression, the spectral reflectiv-
ity of a tissue that carries information on its morphologic and functional
condition can be changed (see Chapter 3).171,172 When the mechanical
compression is applied to a relatively large area of the tissue surface (of the
order of a few square centimeters), the change of its optical properties is due to
better packing density of the scatterers and leakage from the compression site
of absorbers, such as blood and water.
At local mechanical compression, produced, for example, by a fiber tip of
an optical probe with an area of a few square millimeters, a gradient of the RI
is induced at the compression site. This volume of the tissue plays the role of a
lens for the probing radiation, propagating through the tissue. At the point-
wise compression, besides tissue lensing, scatterer ordering and blood/water
displacement also take place.
Since the external mechanical compression can essentially and reversibly
improve optical properties of the tissue, this method can be considered as an
alternative to a widely used immersion OC. As discussed above, the main
mechanisms of the immersion OC are water transport from the tissue and
partial replacement of intercellular or intracellular water with an OCA. Better
scatterer packing and RI matching lead to OC as a result. Analogous
processes partially occur under the compression, since the removal of water
from the interstitial space and the increase of protein concentration in it as a
result of the external compression should also lead to the reduction of
scattering.
The effect of increasing the penetration depth of laser radiation into a
tissue under the application of the external local mechanical pressure was
demonstrated more than 30 years ago.173 Since that time, many papers related
to use of external mechanical compression to control tissues optical properties
(absorption and scattering) have been published.
The first studies were carried out with the tissue samples ex vivo.174,175 In
Ref. 174, the growth of the transmission and the reduction of the diffuse
reflectance in the spectral range from 400 to 1800 nm were found. It was
concluded that at compression, the tissue absorption and scattering coefficients
increase, and the possible mechanisms of the OC are the reduction of the
sample thickness and its dehydration. Similar results were also obtained by the
authors of Ref. 175. The compression of porcine skin samples has shown that it causes an increase of the light transmission through the sample, and that the effect is inertial.
The mechanical compression also changes the optical properties of tissues
in vivo, which is accompanied by the spectral changes both of the diffuse
reflection176–186 and of the fluorescence.185,187–189 The authors of Refs. 176
and 187 were the first to report the impact of the compression on the diffuse
reflection and autofluorescence spectra of the human skin in vivo. It was
reported that the pressure applied to the skin reduces the depth of the dip in
the green region of the reflectance spectrum, which indicates a lesser blood
content in the tissue. The reflection coefficient of skin in the green region
increases, while in the red and yellow regions it decreases. In the reflection
spectrum, an isosbestic point was found for which the reflection coefficient
amounted to 20% for epidermis, 70% for dermis, and 60% for subcutaneous
tissue (fat and muscle).203,204
Experimental diffuse reflection spectra were analyzed for two absorption
lines of water in the NIR region, 1160 nm and 1220 nm, attributed by the
authors to the absorption of free and bound water, respectively, based on the
comparative analysis of the reflection spectra of skin and pure water.
The measurement of water content in the skin in vitro has shown that
under compression, the intensity of the band at 1220 nm significantly grows,
while the intensity of the band at 1160 nm decreases. When the pressure is
increased to 376 kPa, the absorption peak at 1160 nm almost vanishes. This
indicates the fact that under the compression of skin, the free water leaves the
compressed region, while the bound water that forms complexes with proteins
stays. When the pressure reaches 400 kPa, the volume of free water decreases
to 30% of its initial value. This is the main cause of the tissue deformation.
Under long-time compression, the absorption peak of free water decreases
with time, and the peak of bound water increases. At the early stages of
the deformation due to the large stress in the tissue, the free water leaves the
compressed region with a sufficiently high rate. As the stress relaxes, the
deformation of the tissue and the rate of free water migration gradually
decrease. After nearly 6 min, the tissue deformation and the water transport
stop. The duration of this process is also related to the compression
magnitude.
Thus, for the analysis of the structural and optical changes occurring in
the skin in vivo under the conditions of external compression, the above model
of the tissue can be considered, according to which the skin consists of a solid
matrix formed by the collagen fibers and the interstitial fluid, the major part
of which is water. When the skin is compressed, the solid matrix is deformed
and the interstitial fluid leaves the region subject to compression with the rate
determined by the density of packing of the matrix fibers and the viscosity of
the fluid. This hypothesis is in good agreement with the experimental data.
The relation of skin deformation and relaxation time under the conditions of
compression depends upon the elastic properties of the fibrils and the density
of their packing, as well as upon the amount and viscosity of the fluid in the
tissue. Since the bound water is integrated in the tissue solid matrix,
the measurements of free and bound water content variations can reflect the
deformation of the solid matrix and the transport of free water in the tissue
under compression.
of a tissue change its optical properties, and this should be taken into account
in the calculation of the radiation dose in the process of treatment.3,22 The
method of controlling the optical properties of fat tissue, which is usually a
strongly scattering tissue, was proposed in Refs. 205 and 206. The method is
based on the combined photochemical and photothermal effects2,207 induced
in the fat tissue cells.
The selective photothermal action on the fat tissue can be implemented
by the choice of the laser radiation wavelength corresponding to the
absorption band of the endogenous tissue chromophores. The lipids
contained in the fat droplet of adipocytes exposed to light with a wavelength
of 1210 nm serve as tissue heaters.208 However, with exogenous chromo-
phores, e.g., the indocyanine green (ICG), the efficiency of light interaction
with tissue may be essentially higher and more selective.209–213 The intensity
and shift of ICG absorption bands depend upon the solvents used and
interaction with biological molecules.214–216 The solutions with complex
compositions, such as water-alcohol-glycerol, stabilize the ICG absorption
bands.215 Besides that, the alcohol component of the solution makes the cell
membrane much more permeable for the dye.217–219 The next step of the
interaction between the dye and the cell is related to the light action. The
light causes photochemical reactions; for ICG they simultaneously follow
two scenarios: 1) photodynamic reactions; 2) reactions yielding toxic
products.216 Depending on the intensity, the biological response of the cell
may lead to reversible or irreversible injury of the cell membrane. Reversible
membrane injury is associated with the creation of new pores or enlargement
of the already existing ones, which facilitates efficient exchange between the
cell content and the environment. In fat cells, the presence of pores increases
the lipolysis, the result of which is that the intercellular space becomes
filled up with the cell content and decay products (triglycerides, fatty acids,
water and glycerol).37,220–223 The appearance of such immersion fluid in
the intercellular space facilitates the process of optical clearing of the
adipose tissue.37
The probability of triglyceride leakage through the cell membrane pores
is rather small due to the relation between the pore diameter (0.1–2 nm) and
the triglyceride molecule size (nearly 1–2 nm).221–223 The hydrolysis of
triglycerides occurs in two stages. At the first stage, the hydrolysis of
external complex ether bonds occurs; this process is catalyzed by the lipase
enzyme. The hydrolysis of triglycerides is referred to as lipolysis. The
monoglyceride produced at the first stage of the triglyceride decay is further
hydrolyzed by the nonspecific esterase, producing glycerol and three
molecules of higher fatty acids.221–223 The hydrolysis products can easily
leave the cell through the pores, since their molecular size is small compared
to the pore size. In this process, the refractive index of the intercellular
fluid (n = 1.36)45 approaches the refractive index inside the adipose cell
Figure 10.5 Transmitted light images of a subcutaneous fat tissue sample stained with
brilliant green before (a) and after (b) – (e) light exposure of 15 min of a two-wavelength band
diode lamp (442 nm and 597 nm) with a total power density of 75 mW/cm2. The time interval
between the image recording and the end of light exposure is 8 min (b), 19 min (c), 39 min
(d), and 120 min (e). The sample temperature was 33°C.
itself (n = 1.44), the medium becomes more optically homogeneous and, as a
result, more transparent.22
The enhanced porosity of the adipose cell membrane at photodynamic
action (PDA) on cells in adipose tissue sensitized by brilliant green (BG) dye
was studied in vitro.37,220 As a result of induced cell porosity, their lipolysis
was observed. The hypothesis of PDA-induced pore formation is in good
agreement with the experimental results.37,206,220,224
Figure 10.5 presents the transmitted light images of the subcutaneous fat
tissue sample, stained with BG, before (a) and after (b)–(e) the irradiation with
a dental diode lamp Ultra Lume Led5 with wavelengths of 442 and 597 nm
and total power density of 75 mW/cm2 during 15 min at the constant
temperature 33°C. The cell lipolysis is seen to stimulate the leakage of a part
of the intracellular fluid into the intercellular space and the tissue gradually
undergoes immersion clearing.
the thickness of the region divided by the time during which the diffusion of
the agent through the chosen region occurs:232 P = z_region/t_region.
The OCTA approach can be used to calculate the permeability coefficient
at a definite depth in the tissue: P(z) = z_i/t_zi, where z_i is the measurement
depth and t_zi is the time of agent diffusion to this depth. The value of t_zi
should be determined as the interval from the moment of agent application to the
onset of the OCA-induced variation of the OCT signal amplitude.232
These approaches allowed the assessment of the following permeability
coefficients: rabbit cornea for mannitol, (8.99 ± 1.43) × 10⁻⁶ cm/s;95
rabbit sclera for mannitol and 20%-glucose solution, (6.18 ± 1.08) × 10⁻⁶ and
(8.64 ± 1.12) × 10⁻⁶ cm/s, respectively;95 human sclera for cortexin,
(2.40 ± 0.32) × 10⁻⁵ cm/s [Fig. 10.6(b)];93 porcine aorta for 20%-glucose
solution, (1.43 ± 0.24) × 10⁻⁵ cm/s;99 epidermis and dermis for 40%-glucose
solution, (6.01 ± 0.37) × 10⁻⁶ cm/s and (2.84 ± 0.68) × 10⁻⁵ cm/s,
respectively;97 and lung tissue for 30%-glucose solution, (1.35 ± 0.13) × 10⁻⁵
cm/s in the norm, as well as for different malignant neoplasms.100
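As a simple numerical illustration of the estimate P(z) = z_i/t_zi, the sketch below can be used; it is a minimal sketch only, in which the onset-detection threshold, array names, and example kinetics are hypothetical and do not reproduce the processing used in Refs. 93–100.

```python
import numpy as np

def permeability_at_depth(z_cm, t_s, signal_change, threshold=0.05):
    """Estimate P(z) = z_i / t_zi from the OCT-signal kinetics at depth z_cm.

    z_cm:          measurement depth z_i (cm)
    t_s:           time axis of the record (s), counted from agent application
    signal_change: relative change of the OCT amplitude at that depth over time
    threshold:     relative change taken as the onset of the OCA-induced
                   variation (an assumed criterion, not the published one)
    """
    onset_index = np.argmax(np.abs(signal_change) > threshold)
    t_zi = t_s[onset_index]  # time of agent diffusion to the given depth
    return z_cm / t_zi

# Hypothetical record: at a depth of 100 um (1e-2 cm) the OCT amplitude starts
# to change about 500 s after agent application.
t = np.arange(0.0, 1200.0, 10.0)
change = np.where(t < 500.0, 0.0, 0.1 * (1.0 - np.exp(-(t - 500.0) / 200.0)))
print(permeability_at_depth(1e-2, t, change))  # on the order of 1e-5 cm/s
```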
The depth-resolved analysis of the tissue optical properties using OCT
allows one to reconstruct the two-dimensional diffusion maps. The visual
representation of the molecular diffusion front was first demonstrated by
Ghosn et al.97 in the course of noninvasive determination of the penetration
rate of 20%-glucose solution into the rhesus macaque skin in vivo.
Using OCT methods, one can reveal the variations of the effective
refractive index of the adipose tissue caused by the photodynamic action of
light, which arise both directly after the exposure and in the course of the
subsequent biological response.38,224 The observed changes of the effective
refractive index can be interpreted as a reduction of the relative refractive
index of the scatterers, which can be related to immersion optical clearing.
Figure 10.7 OCT images of adipose tissue stained with BG before the irradiation (a),
immediately after 15-min irradiation (b), at 60 min (c), and at 120 min (d) after the
irradiation. The source was the dental diode lamp Ultra Lume Led5 (442 and 597 nm,
75 mW/cm2). The dye concentration was 6 mg/ml. The sample temperature was 37 °C.
The curves correspond to A-scans of the OCT image, averaged over the whole B-scan
region.224
Figure 10.7 shows the series of OCT images of adipose tissue stained with
BG before (a) and after (b)–(d) a 15-min irradiation with the diode lamp, with
the samples heated to physiological temperature. Figure 10.7(b) shows the image
of the adipose tissue immediately after the irradiation, while the OCT images
in Figs. 10.7(c) and (d) correspond to relatively long observation times, 60
and 120 min, respectively. The mean thickness of the samples was 237 ± 10 μm.
The visual comparison of the OCT tomograms of the adipose tissue shows
that after the irradiation the tissue structure varies considerably in time.
One can easily see the changes of the outer layer of the cellular structure,
which becomes more homogeneous; this can be associated with partial destruction
of the cells and the formation of an immersion layer consisting of intracellular
content and the products of hydrolysis of the adipose component. In
Figs. 10.7(b)–(d), this layer is 80 to 90 μm thick. The mean size of adipocytes
in the tissue amounts to 60 to 70 μm.
If the geometric thickness of the studied layer is known, the effective
refractive index n is calculated as n = z/l, where l is the true (geometric)
sample thickness and z is the layer thickness observed in the OCT image, i.e.,
the optical path length. The geometric thickness was measured using a
micrometer gauge. The optical path length was found from the OCT signal as the
difference between the depths of the two peaks corresponding to the sample
boundaries (see Fig. 10.7). For better determination of the boundaries, the
A-scans were averaged over a lateral region of 2 mm. This operation smooths out
the stochastic noise and the disordered cell structure of the tissue, while the
peaks corresponding to the sample boundaries become more distinct. The
calculated refractive index showed a monotonic decrease with increasing
observation time.
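A minimal sketch of this calculation is given below; the peak-finding criterion and all numbers are hypothetical and only illustrate the n = z/l estimate, not the authors' actual processing.

```python
import numpy as np

def effective_refractive_index(bscan, pixel_um, true_thickness_um):
    """Estimate n = z / l for a thin tissue slab imaged with OCT.

    bscan:             2D array (depth x lateral) of OCT signal amplitudes
    pixel_um:          axial pixel size in micrometers of optical path
    true_thickness_um: geometric thickness l measured with a micrometer gauge
    """
    # Average the A-scans over the lateral region to suppress stochastic noise
    # and the disordered cellular structure of the tissue.
    mean_ascan = bscan.mean(axis=1)

    # Take the strongest peak in each half of the averaged A-scan as the
    # front and back sample boundaries (a simplistic criterion).
    half = len(mean_ascan) // 2
    front = int(np.argmax(mean_ascan[:half]))
    back = half + int(np.argmax(mean_ascan[half:]))

    optical_path_um = (back - front) * pixel_um  # z, optical path between peaks
    return optical_path_um / true_thickness_um

# Synthetic example: boundary peaks 85 pixels apart at 4 um of optical path per
# pixel and a 237-um-thick sample give n = 340/237, i.e., about 1.43.
rng = np.random.default_rng(0)
bscan = 0.1 * rng.random((200, 50))
bscan[40, :] += 1.0   # front boundary
bscan[125, :] += 1.0  # back boundary
print(effective_refractive_index(bscan, pixel_um=4.0, true_thickness_um=237.0))
```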
The observed changes in the OCT images can be interpreted as the result
of lipolysis and cell destruction at the sample surface due to the
photochemical effect. The intracellular fluid leaks from the cells and fills
the intercellular space, which normally contains only intercellular fluid, thus
producing the cleared (more homogeneous) outer layer and giving rise to optical
clearing inside the tissue by matching the refractive indices of the cells and
the intercellular medium.
In Ref. 233, chemically induced lipolysis was assayed by measuring the
amount of glycerol released from the cells into the surrounding medium. In
addition, using frame-by-frame CARS (coherent anti-Stokes Raman scattering)
processing of a single living cell, the authors could monitor the morphological
changes of lipid droplets in the cells. They found that microscopic lipid
droplets appeared gradually in the course of the experiment, reaching a size of
nearly 1 μm 60 min after the treatment. All of these observations allow for the
assumption that analogous processes may occur in the case of light-induced
lipolysis.
The results of OCT measurements of the refractive index of adipose tissue
after light exposure can be explained by the formation of two layers over the
course of time: a cleared layer (the released intracellular fluid) and a
scattering layer (the remaining cells). The refractive index m of the
scattering medium can be expressed as128
$$m \equiv n_0 + \Delta m = n + \frac{n_s^2 - n^2}{n}\,Q(\lambda/l), \qquad (10.1)$$
where

$$n = n_0\left[1 + \left(\frac{n_s}{n_0} - 1\right)\frac{V_s}{V_0}\right], \qquad (10.2)$$
medium has its own thickness and refractive index. Denoting the time-dependent
thickness of the layer of adipose cells as H(t) and the thickness of the outer
layer of transparent cytoplasm as [L − H(t)], the mean refractive index of the
layer of thickness L, containing two sublayers with different refractive
indices, can be written as

$$m_T = \frac{L - H(t)}{L}\,n + \frac{H(t)}{L}\,m, \qquad (10.3)$$

where m and n are defined by Eqs. (10.1) and (10.2). Since m is always greater
than n because of scattering, and tissue degradation leads to a decrease of
H(t), the summary refractive index should decrease with time.
As the adipose tissue degrades and a greater amount of lipolysis products
is formed, the conditions of refractive index matching can be reached, leading
to a further reduction of light scattering and, hence, of the
scattering-dependent part of the refractive index m. Both mechanisms, tissue
degradation and refractive index matching, explain the experimentally observed
reduction of the mean refractive index of the sample layer, i.e., the reduction
of its scattering capability over time.
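As a numerical illustration of Eq. (10.3), the sketch below evaluates the mean refractive index of such a two-layer system for a shrinking scattering layer; all values are hypothetical and chosen only to show the trend, not to reproduce the measured data.

```python
import numpy as np

def mean_refractive_index(H_um, L_um, n_cleared, m_scattering):
    """Two-layer mixture of Eq. (10.3): m_T = [(L - H)/L]*n + (H/L)*m."""
    return (L_um - H_um) / L_um * n_cleared + H_um / L_um * m_scattering

L = 237.0            # total layer thickness, um
n_cleared = 1.36     # assumed index of the cleared layer (released fluid)
m_scattering = 1.44  # assumed effective index of the scattering cell layer

# As the scattering layer H(t) thins with time, m_T decreases monotonically,
# reproducing the trend observed in the OCT measurements.
for H in np.linspace(L, 150.0, 4):
    m_T = mean_refractive_index(H, L, n_cleared, m_scattering)
    print(f"H = {H:5.1f} um -> m_T = {m_T:.3f}")
```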
Figure 10.8 Single image from the OPT scan datasets with the selected regions of interest
(inset) (a) and volume reconstruction of the pancreas based on the background
autofluorescence (dark gray) and the signal from insulin-specific antibodies (white islets)
with the regions of interest (inset) (b).237
As an OCA, "Murray's clear" (1 part of benzyl alcohol and 2 parts of benzyl
benzoate, BABB) with a refractive index of 1.55 was used (see Fig. 10.8).
planar light beam. The fluorescence signal is, therefore, emitted only from a
thin optical section and is collected by the objective lenses. The scattered
excitation light is blocked by a filter, and the image is projected through a
cylindrical lens onto the sensor of a digital camera. Since the sample regions
below and above the light sheet are not illuminated, no emission is excited
outside the focal plane and, therefore, none has to be eliminated from the
useful signal. This approach is referred to as ultra-microscopy (UM). In
comparison with CM or two-photon microscopy, UM can rapidly produce optical
sections of macroscopic samples, since objectives with small focal power and
small numerical aperture provide a large field of view. However, the use of UM
is completely dependent upon the optical transparency of biological objects, so
the improvement of optical clearing efficiency, including that due to the
development of new OCAs, remains an urgent problem.85,86,258–262
In Ref. 258, it was shown that the use of optical clearing allowed for cellular
resolution in optical sectioning of the fixed mouse brain and for the detection
of individual neurons labelled with green fluorescent protein in the excised
murine hippocampus, where 3D images of dendrite structures and processes of
neuron populations were obtained. The applications of UM include the imaging of
cleared murine organs, whole embryo specimens, adult Drosophila, and other
fixed specimens a few millimeters in size.259
Ertürk et al.86 found that tetrahydrofurane (THF) in combination with
BABB can completely clear the spinal cord while preserving its fluorescence.
Recently, Ertürk et al.262 developed a new optical clearing protocol using
dibenzyl ether instead of BABB, in combination with THF, to obtain
three-dimensional images of neurons in the whole mouse brain with UM. This
method was called 3D imaging of solvent-cleared organs (3DISCO). In this study,
the clearing of the tissue took only three hours, and the imaging took 45 min.
Yang et al.263 presented techniques for tissue clearing in which whole
organs and bodies were rendered macromolecule-permeable and optically
transparent, thereby exposing their cellular structure with intact
connectivity. A passive clarity technique (PACT) and the corresponding
protocols for quicker passive lipid extraction and immunostaining of intact
organs were described. To image PACT-cleared thick tissue, a refractive index
matching solution (RIMS) with an outcome similar to FocusClear™ was designed.
For whole-body clearing and immunolabeling, a methodology based on perfusion
assisted agent release in situ (PARS) was proposed to facilitate fast
whole-brain and whole-body clearing, using the systemic or cerebrospinal
circulation to deliver the clarifying agents directly. These methods are
applicable for high-resolution, high-content mapping and phenotyping of normal
and pathological elements within intact organs and bodies. Figure 10.9
illustrates how PARS enables whole-brain mapping.
Figure 10.10 Improvement of PAFC detection sensitivity for B16F10 melanoma cells
demonstrated for in vitro phantom (0.8-mm mouse-skin layer over a blood vessel) at skin
optical clearing with glycerol. Typical PAFC traces for melanoma cells in flow before (a) and
after optical clearing (b). Dependence of peak rate of melanoma cell counting (cells/min) on
the laser pulse energy before and after optical clearing (c).280
Figure 10.11 Optical clearing of human skin: visual contrast of the vein (a); typical
changes in PA signal waveform before/after optical clearing (b); PAFC traces for a
vein in human hand before/after optical clearing (c); US-imaging of the selected
vein (d).280
After skin optical clearing with glycerol, the rate of B16F10 melanoma cell
counting in blood flow increased 1.7-fold (Fig. 10.10). Using optical clearing,
the feasibility of PA contrast improvement for human hand veins was also
demonstrated (Fig. 10.11).
Figure 10.12 The change in packing density of collagen fibers in the rat skin ex vivo under
the action of a 50%-aqueous solution of PEG-400: the initial state (a), immediately after the
OCA application (b), and 15 min after the application of the OCA (c). The images were
obtained by means of SHG microscopy. The incident light wavelength was 790 nm; the
detected wavelength was 395 nm. (See color plates.)
Yeh et al.121 and Wen et al.75 demonstrated that for collagen-based
connective tissues (samples of tendon and model tissues), the optical clearing
process using high-concentration glycerol (13 M) manifested itself in a
considerable reduction of the backscattered signal intensity and an increase of
the transmitted signal.
For deeper probing of murine skeletal muscle and tendon samples using
three-dimensional SHG microscopy in the transmission mode, the authors of
Ref. 24 used a 50%-glycerol solution and obtained a 2.5-fold increase of the
SHG imaging depth in the muscle tissue. The amplification of the signal was
also due to better packing of the fibrils. It was also shown that the axial
attenuation of the direct SHG signal is reduced with increasing glycerol
concentration (25, 50, and 75%).71 The backward SHG signal considerably
decreased in the process of optical clearing due to the scattering reduction
and the change of the local density of the dipoles producing the second
harmonic.71
Nonlinear Raman spectroscopy also finds application in studying the
mechanisms of optical clearing. For example, in Ref. 295 the coherent
anti-Stokes Raman scattering (CARS) signal served as a reference in the study
of the impact of DMSO on the collagen structure and, hence, on the loss of the
SHG signal and the reduction of light scattering by human skin. Hirshburg
et al.110 found a correlation between the rate of water loss from rodent skin
ex vivo under the action of different OCAs and the optical clearing potential,
which was defined as μ′s (before clearing)/μ′s (after clearing).
Raman spectroscopy is a potentially noninvasive method for studying bone
development and for the diagnostics of bone diseases.296 Substantial
amplification of the commonly undetectable Raman signals from internal tissues
can be achieved by means of optical clearing of the near-surface tissue layers
that shade the object of study.296,297 In this way, transcutaneous spectroscopy
of the rat tibial bone in vivo was performed under skin optical clearing with
glycerol.296 The process of porcine skin optical clearing was studied in vitro
by means of Raman microspectroscopy.297 The intensity of the Raman peaks at a
depth of 400 μm was increased 2–4 times. In addition, shifts of the peaks in
the Raman spectrum of skin at different concentrations of glycerol were
observed.297
The confocal Raman spectroscopy can also be used for in vivo monitoring
of the penetration of clearing agents and permeability enhancers (particularly
DMSO) into the skin stratum corneum.298
$$\frac{\partial C(x,t)}{\partial t} = D\,\frac{\partial^2 C(x,t)}{\partial x^2}, \qquad (10.4)$$

representing Fick's second law, where C(x,t) is the concentration of the
immersion agent in the tissue (g/ml), D is the diffusion coefficient (cm²/s),
t is the time during which the diffusion process occurs (s), and x is the
spatial coordinate along the sample thickness direction (cm). Since in the
experiments
the volume of the immersion agent, as a rule, significantly exceeds the volume
of the tissue sample, the appropriate boundary conditions have the form90
$$C(0,t) = C_0 \quad \text{and} \quad \frac{\partial C(l,t)}{\partial x} = 0, \qquad (10.5)$$
where C0 is the agent concentration in the solution (g/ml) and l is the tissue
sample thickness (cm). The second boundary condition reflects the fact that the
agent diffuses into the tissue slab from only one side of the sample. In the
case when the agent diffuses from both sides of the tissue slab, the boundary
conditions have the following form:

$$C(0,t) = C(l,t) = C_0. \qquad (10.6)$$
The initial condition expresses the absence of the agent in all internal points
of the sample before its incubation in the solution, i.e.,

$$C(x,0) = 0. \qquad (10.7)$$
The solution of the diffusion equation (10.4) with the initial (10.7) and
boundary (10.5) conditions taken into account has the following form:

$$C(x,t) = C_0\left[1 - \sum_{i=0}^{\infty} \frac{4}{\pi(2i+1)}\,\sin\!\left(\frac{(2i+1)\pi x}{2l}\right)\exp\!\left(-\frac{(2i+1)^2 \pi^2 D t}{4l^2}\right)\right].$$
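A truncated evaluation of this series is straightforward; the sketch below uses hypothetical parameter values (agent concentration, diffusion coefficient, and sample thickness) purely for illustration.

```python
import numpy as np

def agent_concentration(x, t, C0, D, l, n_terms=200):
    """C(x,t) for one-sided diffusion into a slab of thickness l: the series
    solution of Eq. (10.4) with conditions (10.5) and (10.7), truncated to
    n_terms terms.

    x (cm), t (s), C0 (g/ml), D (cm^2/s), l (cm)
    """
    k = 2 * np.arange(n_terms) + 1
    terms = (4.0 / (np.pi * k)
             * np.sin(k * np.pi * x / (2.0 * l))
             * np.exp(-k**2 * np.pi**2 * D * t / (4.0 * l**2)))
    return C0 * (1.0 - terms.sum())

# Hypothetical example: a 40% glucose bath (C0 = 0.4 g/ml), D = 1e-6 cm^2/s,
# a 1-mm-thick sample; concentration at mid-depth 10 minutes after application.
print(agent_concentration(x=0.05, t=600.0, C0=0.4, D=1e-6, l=0.1))
```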
where nH2O(λ) is the spectrally dependent refractive index of water
([λ] = nm), and Cgl is the glucose concentration in the solution (g/ml). The
spectral dependence of the water refractive index has the following form:322
$$f(D) = \sum_{i=1}^{N_t} \left[T_c(D, t_i) - T_c(t_i)\right]^2, \qquad (10.18)$$
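Assuming, as the flowchart of Fig. 10.13 suggests, that T_c(D, t_i) is the collimated transmittance calculated for a trial diffusion coefficient and the second term is the transmittance measured at the same times, the minimization of f(D) can be sketched as follows; the transmittance model, data, and bounds are hypothetical placeholders rather than the authors' program.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical measured collimated-transmittance kinetics (time in s).
t_meas = np.array([0.0, 120.0, 240.0, 360.0, 480.0, 600.0])
Tc_meas = np.array([0.010, 0.018, 0.027, 0.034, 0.040, 0.044])

def Tc_model(D, t, l=0.1):
    """Placeholder model linking the diffusion coefficient D (cm^2/s) of the
    agent in a slab of thickness l (cm) to the collimated transmittance; any
    physically grounded model can be substituted here."""
    tau = 4.0 * l**2 / (np.pi**2 * D)  # characteristic diffusion time
    return 0.010 + 0.040 * (1.0 - np.exp(-t / tau))

def f(D):
    """Least-squares target function of Eq. (10.18)."""
    return np.sum((Tc_model(D, t_meas) - Tc_meas) ** 2)

res = minimize_scalar(f, bounds=(1e-8, 1e-4), method="bounded")
print(f"Estimated diffusion coefficient: {res.x:.2e} cm^2/s")
```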
Figure 10.13 Flowchart of the program for calculating the diffusion coefficients of agents
based on the kinetics of collimated transmittance of tissue samples.
where η(T) is the viscosity of the medium in which the diffusion occurs (e.g.,
water) at the given temperature T, T1 = 20°C, and T2 = 37°C.
From Table 10.2 it is clearly seen that the temperature increase from room to
physiological temperature, i.e., by 17°C, increases the diffusion coefficient
by nearly 1.5 times.
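The temperature correction referred to here (the corresponding equation appears on a preceding page of the chapter) is consistent with the standard Stokes–Einstein scaling D ∝ T/η(T); assuming that form, the roughly 1.5-fold increase can be verified with tabulated water viscosities.

```python
def scale_diffusion_coefficient(D1, T1_C, T2_C, eta1, eta2):
    """Assumed Stokes-Einstein scaling: D2/D1 = (T2 * eta1) / (T1 * eta2),
    with absolute temperatures and dynamic viscosities eta(T1), eta(T2)."""
    T1, T2 = T1_C + 273.15, T2_C + 273.15
    return D1 * (T2 * eta1) / (T1 * eta2)

# Dynamic viscosity of water: ~1.002 mPa*s at 20 C and ~0.69 mPa*s at 37 C.
D_20 = 1.0e-6  # example diffusion coefficient at 20 C, cm^2/s
D_37 = scale_diffusion_coefficient(D_20, 20.0, 37.0, eta1=1.002, eta2=0.69)
print(D_37 / D_20)  # about 1.5, in agreement with the value quoted above
```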
10.7 Conclusion
We have reviewed the specific features and methods of optical clearing and
related interactions of light with tissues. The action of an OCA on a tissue
allows for efficient control of its optical properties, in particular the
reduction of the tissue scattering coefficient, which increases the efficiency
of optical imaging (optical biopsy) in medical applications. Both immersion and
compression techniques of optical clearing possess considerable potential for
many diagnostic, therapeutic, and surgical methods in which laser action on a
target site hidden within the tissue is used.
Acknowledgments
The work was supported by grant 14-15-00186 of the Russian Science
Foundation.
This chapter is based on the authors' recently published review paper:
E. A. Genina, A. N. Bashkatov, Yu. P. Sinichkin, I. Yu. Yanina, and
V. V. Tuchin, "Optical clearing of biological tissues: prospects of application
in medical diagnostics and phototherapy [Review]," J. of Biomedical
Photonics & Eng. 1(1), 22–58 (2015).
References
1. V. V. Tuchin, Tissue Optics: Light Scattering Methods and Instruments
for Medical Diagnosis, 3rd edition, PM 254, SPIE Press, Bellingham,
WA, (2015).
2. T. Vo-Dinh (Ed.), Biomedical Photonics Handbook, CRC Press, Boca
Raton, FL (2003); second edition (2014).
3. L. V. Wang and H.-I. Wu, Biomedical Optics: Principles and Imaging,
Wiley-Interscience, Hoboken, New Jersey (2007).
4. R. K. Wang and V. V. Tuchin, eds., Advanced Biophotonics: Tissue
Optical Sectioning, CRC Press, Taylor & Francis Group, London
(2013).
5. V. V. Tuchin, "Tissue optics and photonics: biological tissue structures
[Review]," J. of Biomedical Photonics & Eng. 1(1), 3–21 (2015).
6. V. V. Tuchin, L. V. Wang, and D. A. Zimnyakov, Optical Polarization
in Biomedical Applications, Springer-Verlag, NY (2006).
112. Y. Ding, J. Wang, Z. Fan, D. Wei, R. Shi, Q. Luo, D. Zhu, and X. Wei,
“Signal and depth enhancement for in vivo flow cytometer measurement
of ear skin by optical clearing agents,” Biomed. Opt. Exp. 4(11), 2518–
2526 (2013).
113. J. Jiang and R. K. Wang, “Comparing the synergistic effects of oleic acid
and dimethyl sulfoxide as vehicles for optical clearing of skin tissue in
vitro,” Phys. Med. Biol. 49, 5283–5294 (2004).
114. Y. Liu, X. Yang, D. Zhu, and Q. Luo, “Optical clearing agents improve
photoacoustic imaging in the optical diffusive regime,” Optics Letters
38(20), 4236–4239 (2013).
115. X. Xu and R. K. Wang, “Synergistic effect of hyperosmotic agents of
dimethyl sulfoxide and glycerol on optical clearing of gastric tissue
studied with near infrared spectroscopy,” Phys. Med. Biol. 49, 457–468
(2004).
116. J. Jiang, M. Boese, P. Turner, and R. K. Wang, “Penetration kinetics of
dimethyl sulphoxide and glycerol in dynamic optical clearing of porcine
skin tissue in vitro studied by Fourier transform infrared spectroscopic
imaging,” J. Biomed. Opt. 13(2), 021105 (2008).
117. A. T. Yeh and J. Hirshburg, “Molecular interactions of exogenous
chemical agents with collagen-implications for tissue optical clearing,”
J. Biomed. Opt. 11(1), 014003 (2006).
118. E. A. Genina, A. N. Bashkatov, E. A. Kolesnikova, M. V. Basco, G. S.
Terentyuk, and V. V. Tuchin, “Optical coherence tomography monitor-
ing of enhanced skin optical clearing in rats in vivo,” J. Biomed. Opt.
19(2), 021109 (2014).
119. A. N. Bashkatov, A. N. Korolevich, V. V. Tuchin, Y. P. Sinichkin, E. A.
Genina, M. M. Stolnitz, N. S. Dubina, S. I. Vecherinski, and M. S.
Belsley, “In vivo investigation of human skin optical clearing and blood
microcirculation under the action of glucose solution,” Asian J. Physics
15(1), 1–14 (2006).
120. E. A. Genina, A. N. Bashkatov, and V. V. Tuchin, “Glucose-induced
optical clearing effects in tissues and blood,” Chap. 21 in Handbook of
Optical Sensing of Glucose in Biological Fluids and Tissues, V. V.
Tuchin, Ed., Taylor & Francis Group LLC, CRC Press, 657–692
(2009).
121. A. T. Yeh, B. Choi, J. S. Nelson, and B. J. Tromberg, “Reversible disso-
ciation of collagen in tissues,” J. Invest. Dermatol. 121, 1332–1335 (2003).
122. J. Hirshburg, B. Choi, J. S. Nelson, and A. T. Yeh, “Collagen
solubility correlates with skin optical clearing,” J. Biomed. Opt. 11,
040501 (2006).
123. J. W. Wiechers, J. C. Dederen, and A. V. Rawlings, “Moisturization
mechanisms: internal occlusion by orthorhombic lipid phase stabilizers - a
novel mechanism of action of skin moisturization,” Chap. 9 in Skin
199. C. Li, J. Jiang, and K. Xu, “The variations of water in human tissue
under certain compression: studied with diffuse reflectance spectros-
copy,” J. Innov. Opt. Health Sci. 6(1), 1350005 (2013).
200. K. A. Martin, “Direct measurement of moisture in skin by NIR
spectroscopy,” J. Soc. Cosm. Chem. 44, 249–261 (1993).
201. C. W. J. Oomens, D. H. Vancampen, and H. J. Grootenboer,
“A mixture approach to the mechanics of skin,” J. Biomech. 20(9),
877–885 (1987).
202. A. Hidenobu and E. Mariko, “Non-contact skin moisture measurement
based on near-infrared spectroscopy,” Appl. Spectrosc. 58, 1439–1446
(2004).
203. A. N. Bashkatov, E. A. Genina, and V. V. Tuchin, “Optical properties of
skin, subcutaneous and muscle tissues, a review,” J. Innovative Opt.
Health Sci. 14(1), 9–38 (2011).
204. T. L. Troy and S. N. Thennadil, “Optical properties of human skin in the
NIR wavelength range of 1000–2200 nm,” J. Biomed. Opt. 6, 167–176
(2001).
205. I. Yu. Yanina, G. V. Simonenko, and V. V. Tuchin, “Destructive fat
tissue engineering using photodynamic and selective photothermal
effects,” Proc SPIE 7179, 71790C (2009).
206. V. A. Doubrovsky, I. Yu. Yanina, and V. V. Tuchin, “Inhomogeneity of
photo-induced fat cell lipolysis,” Proc. SPIE 7999, 79990M (2011).
207. B. C. Wilson, “Photodynamic therapy/diagnostics: Principles, practice and
advances,” in Handbook of Photonics for Biomedical Science, V. V. Tuchin,
Ed., CRC Press, Taylor & Francis Group, London, 649–686 (2010).
208. M. Wanner, M. Avram, D. Gagnon, M. C. Mihm, Jr., D. Zurakowski,
K. Watanabe, Z. Tannous, R. R. Anderson, and D. Manstein, “Effects
of noninvasive, 1210 nm laser exposure on adipose tissue: results of a
human pilot study,” Lasers Surg. Med. 41, 401–407 (2009).
209. W. R. Chen, R. L. Adams, S. Heaton, D. T. Dickey, K. E. Bartels, and
R. E. Nordquist, “Chromophore-enhanced laser-tumor tissue photo-
thermal interaction using an 808-nm diode laser,” Cancer Lett. 88, 15–19
(1995).
210. W. R. Chen, R. L. Adams, A. K. Higgins, K. E. Bartels, and R. E.
Nordquist, “Photothermal effects on murine mammary tumors using
indocyanine green and an 808-nm diode laser: an in vivo efficacy,”
Cancer Lett. 98, 169–173 (1996).
211. S. Fickweiler, R. M. Szeimies, W. Baumler, P. Steinbach, S. Karrer,
A. E. Goetz, C. Abels, F. Hofstadter, and M. Landthaler, “Indocyanine
green: intracellular uptake and phototherapeutic effects in vitro,”
J. Photochem. Photobiol. B 38, 178–183 (1997).
212. V. V. Tuchin, E. A. Genina, A. N. Bashkatov, G. V. Simonenko, O. D.
Odoevskaya, and G. B. Altshuler, “A pilot study of ICG laser therapy of
Index
E
elastin, 111
elastography, 505
electrophoresis, 585
endoscopic polarized scanning spectroscopy (EPSS), 86
energy transfer spectroscopy, 274
epidermal stripping, 161
epidermis, 104
epidermis–dermis junction, 594
epithelium, 45, 46
erythema, 112
erythema index, 126
erythema–melanin meter, 144
excitation-emission map (EEM), 108
extinction coefficient, 119
F
fibrous tissue, 577
Fick law, 611
finite difference time domain (FDTD) method, 58
flow cytometry (FC), 346
fluorescence, 248
fluorescence imaging, 259
fluorescence quantum yield, 121
Förster resonance energy transfer (FRET), 278
Fourier transform, 524
Fourier-domain OCT (FdOCT), 550
fractional microablation, 577
free diffusion, 158
frequency-domain measurements, 169
frequency-domain fluorometry, 263
H
hematocrit, 19
human skin, 103
hydroxyl groups, 579
hyperosmotic agent, 171
hyperosmotic properties, 576
I
indocyanine green (ICG), 353, 592
infrared (IR), 105
infrared spectroscopy, 197
in-plane strain, 523
intravital microscopy (IM), 343
J
Jablonski diagram, 249
K
keratin, 111, 222
keratinocyte, 160
L
laser Doppler anemometry (LDA), 333
laser heating, 173
laser speckle, 518
laser speckle contrast analysis (LASCA), 436, 558
laser tweezers (LT), 21, 22
laser-induced thermal effects, 358
lattice of islets of damage (LID), 164
light scattering, 51
light sheet fluorescence microscopy (LSFM), 280
light transport, 66
lipid bilayers, 585
LSS imaging, 85
lycopene, 234
lymph flow cytometry (LFC), 349
lymph flow profile, 353
lymphedema, 356
lymphography, 345
M
mechanical compression, 119, 588
melanin, 105
metastatic breast cancer, 366
microparticle image velocimetry, 318
Mie scattering, 590
minimum mean square error (MMSE), 533
minimum speckle size, 520
Monte Carlo modeling, 460, 601
mucosa, 44
mucosal tissues, 39
N
natural moisturizing factor (NMF), 221
near-infrared (NIR), 107
nicotine intoxication, 357
nitric oxide (NO), 358
numerical error, 527
O
OCE probes, 556
oleic acid, 162
oncology, 412, 507
ophthalmology, 404
optical biopsy, 616
optical clearing, 153, 346, 574
optical clearing agent (OCA), 576
optical clearing potential (OCP), 162
optical coherence elastography (OCE), 507, 550
optical coherence tomography (OCT), 78, 169, 391
optical density (OD), 119
optical projection tomography (OPT), 599
oximetry, 129
oxygen saturation, 130
oxyhemoglobin, 104
P
palpation, 506
passive clarity technique (PACT), 602
penetration depth, 211
perfusion assisted agent release in situ (PARS), 602
permeability coefficient, 595
photoacoustic flow cytometry (PAFC), 604
photoacoustic imaging, 603
photodynamic action (PDA), 593
photothermal action, 592
pigmentation index, 141
Piola–Kirchhoff stress tensor, 542
polarization, 142
polarization degree, 146
polarization-sensitive spectroscopy, 143
polyatomic alcohols, 578
polyethylene glycol (PEG), 576
propylene glycol (PG), 162
pseudo-elastic potential, 513
R
Raman nonlinear spectroscopy, 608
Raman spectroscopy, 197
Raman spectrum, 223
Rayleigh scattering, 53, 590
Rayleigh–Gans approximation, 54
reciprocal spatial difference (RSD), 324
red blood cell (RBC) aggregation, 5, 6, 10
T
terahertz spectroscopy, 609
tetrahydrofurane (THF), 602
Thiazone, 582
three-dimensional SHG microscopy, 608
time-domain OCT, 394
tissue engineering, 510
transdermal microchannels, 584
triglyceride, 592
tryptophan, 105
tyrosine, 105
U
ultra-microscopy (UM), 602
ultrasonic treatment, 584
ultraviolet A (UVA), 106