A GUIDEBOOK TO PARTICLE SIZE ANALYSIS
TABLE OF CONTENTS

PSA300 and CAMSIZER image analysis technique
   Static image analysis
   Dynamic image analysis
Dynamic range of the HORIBA particle characterization systems
Selecting a particle size analyzer
   When to choose laser diffraction
   When to choose dynamic light scattering
   When to choose acoustic spectroscopy
   When to choose image analysis
References

There are also industry/application specific reasons why controlling and measuring particle size is important. In the paint and pigment industries particle size influences appearance properties including gloss and tinctorial strength. Particle size of the cocoa powder used in chocolate affects color and flavor. The size and shape of the glass beads used in highway paint impacts reflectivity. Cement particle size influences hydration rate and strength. The size and shape distribution of the metal particles impacts powder behavior during die filling, compaction, and sintering, and therefore influences the physical properties of the parts created. In the pharmaceutical industry the size of active ingredients influences critical characteristics including content uniformity, dissolution and absorption rates. Other industries where particle size plays an important role include nanotechnology, proteins, cosmetics, polymers, soils, abrasives, fertilizers, and many more.
WHICH SIZE TO MEASURE?

figure 1 | SHAPE FACTOR: Vertical and horizontal projections of a non-spherical particle. Many techniques make the general assumption that every particle is a sphere and report the value of some equivalent diameter. Microscopy or automated image analysis are the only techniques that can describe particle size using multiple values for particles with larger aspect ratios.

Many techniques make the general assumption that every particle is a sphere and report the value of some equivalent spherical diameter. This is essentially taking the physical measured value (i.e. scattered light, acoustic attenuation, settling rate) and determining the size of the sphere that could produce the data. Although this approach is simplistic and not perfectly accurate, the shapes of particles generated by most industrial processes are such that the spherical assumption does not cause serious problems. Problems can arise, however, if the individual particles have a very large aspect ratio, such as fibers or needles.

Shape factor causes disagreements when particles are measured with different particle size analyzers. Each measurement technique detects size through the use of its own physical principle. For example, a sieve will tend to emphasize the second smallest dimension because of the way particles must orient themselves to pass through the mesh opening. A sedimentometer measures the rate of fall of the particle through a viscous medium, with the other particles and/or the container walls tending to slow their movement. Flaky or plate-like particles orient to maximize drag while sedimenting, shifting the reported particle size in the smaller direction. A light scattering device will average the various dimensions as the particles flow randomly through the light beam, producing a distribution of sizes from the smallest to the largest dimensions.

The only techniques that can describe particle size using multiple values are microscopy or automated image analysis. An image analysis system could describe the non-spherical particle seen in Figure 1 using the longest and shortest diameters, perimeter, projected area, or again by equivalent spherical diameter. When reporting a particle size distribution the most common format, even for image analysis systems, is equivalent spherical diameter on the x axis and percent on the y axis. It is only for elongated or fibrous particles that the x axis is typically displayed as length rather than equivalent spherical diameter.

Understanding and interpreting particle size distribution calculations

Performing a particle size analysis is the best way to answer the question: What size are those particles? Once the analysis is complete the user has a variety of approaches for reporting the result. Some people prefer a single number answer—what is the average size? More experienced particle scientists cringe when they hear this question, knowing that a single number cannot describe the distribution of the sample. A better approach is to report a central point of the distribution along with one or more values to describe the width of the distribution. Other approaches are also described in this document.

CENTRAL VALUES: MEAN, MEDIAN, MODE

For symmetric distributions such as the one shown in Figure 2 all central values are equivalent: mean = median = mode. But what do these values represent?

figure 2 | SYMMETRIC DISTRIBUTION WHERE MEAN = MEDIAN = MODE

MEAN

Mean is a calculated value similar to the concept of average. The various mean calculations are defined in several standard documents (ref. 1, 2). There are multiple definitions for mean because the mean value is associated with the basis of the distribution calculation (number, surface, volume); see (ref. 3) for an explanation of number, surface, and volume distributions. Laser diffraction results are reported on a volume basis, so the volume mean can be used to define the central point, although the median is more frequently used than the mean with this technique. The equation defining the volume mean is shown below. The best way to think about this calculation is to picture a histogram table showing the upper and lower limits of n size channels along with the percent within each channel. The Di value for each channel is the geometric mean of the channel limits, the square root of the upper diameter times the lower diameter. For the numerator take the geometric Di to the fourth power multiplied by the percent in that channel, summed over all channels. For the denominator take the geometric Di to the third power multiplied by the percent in that channel, summed over all channels.

D4,3 = Σ(Di^4 · qi) / Σ(Di^3 · qi), where Di = √(upper limit × lower limit) of channel i and qi is the percent of the distribution in channel i.
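To make the histogram calculation above concrete, here is a minimal Python sketch of the D4,3 volume mean. It is not taken from any HORIBA software; the channel limits and percentages are invented purely for illustration.

```python
import math

# Hypothetical size channels: (lower limit, upper limit, percent of volume in channel).
# These numbers are invented for illustration, not instrument output.
channels = [
    (1.0, 2.0, 10.0),
    (2.0, 4.0, 35.0),
    (4.0, 8.0, 40.0),
    (8.0, 16.0, 15.0),
]

def volume_mean_d43(channels):
    """D4,3 = sum(Di^4 * qi) / sum(Di^3 * qi), where Di is the geometric mean
    of the channel limits and qi is the percent of the distribution in channel i."""
    numerator = 0.0
    denominator = 0.0
    for lower, upper, percent in channels:
        d_i = math.sqrt(lower * upper)   # geometric mean diameter of the channel
        numerator += d_i**4 * percent
        denominator += d_i**3 * percent
    return numerator / denominator

print(f"D4,3 = {volume_mean_d43(channels):.2f} µm")
```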
The volume mean diameter has several names, including D4,3. In all HORIBA diffraction software it is simply called the "mean" whenever the result is displayed as a volume distribution. Conversely, when the result in HORIBA software is converted to a surface area distribution the mean value displayed is the surface mean, or D3,2. The equation for the surface mean is shown below; the description of this calculation is the same as for the D4,3, except that the Di values are raised to the exponents 3 and 2 instead of 4 and 3.

D3,2 = Σ(Di^3 · qi) / Σ(Di^2 · qi)

The generalized form of the equations for D4,3 and D3,2 is shown below (following the conventions from ref. 2, ASTM E 799):

Dpq = [ Σ Di^p / Σ Di^q ]^(1/(p−q)), with p > q

Where:
Dpq = the average diameter (the overbar used in ASTM E 799 designates an averaging process)
p, q = the algebraic powers of Di, with p > q
Di = the diameter of the ith particle
Σ = the summation of Di^p or Di^q over all particles in the sample

Some of the more common representative diameters are:
D10 = arithmetic or number mean
D32 = volume/surface mean (also called the Sauter mean)
D43 = the mean diameter over volume (also called the De Brouckere mean)

The example results shown in ASTM E 799 are based on a distribution of liquid droplets (particles) ranging from 240 – 6532 µm. For this distribution the following results were calculated:

D10 = 1460 µm
D32 = 2280 µm
D50 = 2540 µm
D43 = 2670 µm

These results are fairly typical in that the D43 is larger than the D50, the volume-basis median value.
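The generalized equation lends itself to a direct implementation. The sketch below is a minimal Python illustration of Dpq computed over individual particle diameters, following the ASTM E 799 convention described above; the diameters are invented and are not the ASTM droplet data set. Note that the D10 printed here is the arithmetic (number) mean of the ASTM notation, not the Dv10 percentile used elsewhere in this document.

```python
# Generalized mean diameter D(p,q) = (sum(Di^p) / sum(Di^q))^(1/(p-q)), p > q,
# summed over all particles in the sample. Diameters below are illustrative only.
diameters_um = [1.2, 1.8, 2.5, 3.1, 4.0, 4.6, 5.3, 6.8, 8.1, 9.7]

def d_pq(diameters, p, q):
    """Return the D(p,q) mean of a list of individual particle diameters."""
    num = sum(d**p for d in diameters)
    den = sum(d**q for d in diameters)
    return (num / den) ** (1.0 / (p - q))

print(f"D10 (arithmetic/number mean) = {d_pq(diameters_um, 1, 0):.2f} µm")
print(f"D32 (Sauter mean)            = {d_pq(diameters_um, 3, 2):.2f} µm")
print(f"D43 (De Brouckere mean)      = {d_pq(diameters_um, 4, 3):.2f} µm")
```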
MEDIAN

Median values are defined as the value where half of the population resides above this point, and half resides below this point. For particle size distributions the median is called the D50 (or x50 when following certain ISO guidelines). The D50 is the size that splits the distribution with half above and half below this diameter. The Dv50 (or Dv0.5) is the median for a volume distribution, Dn50 is used for number distributions, and Ds50 is used for surface distributions. Since the primary result from laser diffraction is a volume distribution, the default D50 cited is the volume median, and D50 typically refers to the Dv50 without including the v. This value is one of the easier statistics to understand and also one of the most meaningful for particle size distributions.

MODE

The mode is the peak of the frequency distribution; it may be easier to visualize it as the highest peak seen in the distribution. The mode represents the particle size (or size range) most commonly found in the distribution. Less care is taken to denote whether the value is based on volume, surface or number, so either run the risk of assuming a volume basis or check to confirm the basis of the distribution. The mode is not as commonly used, but it can be descriptive; in particular, if there is more than one peak in the distribution, the modes are helpful for describing the mid-points of the different peaks.

For non-symmetric distributions the mean, median and mode will be three different values, as shown in Figure 3.

figure 3 | A NON-SYMMETRIC DISTRIBUTION: Mean, median and mode will be three different values.

DISTRIBUTION WIDTHS

Most instruments are used to measure the particle size distribution, implying an interest in the width or breadth of the distribution. Experienced scientists typically shun a single number answer to the question "What size are those particles?" and prefer to include a way to define the width. The field of statistics provides several calculations to describe the width of distributions, and these calculations are sometimes used in the field of particle characterization. The most common calculations are standard deviation and variance. The standard deviation (St Dev.) is the preferred value in our field of study. As shown in Figure 4, 68.27% of the total population lies within +/- 1 St Dev, and 95.45% lies within +/- 2 St Dev.

figure 4 | A NORMAL DISTRIBUTION: The mean value is flanked by the 1 and 2 standard deviation points (68.27% of the population within +/- 1 St Dev, 95.45% within +/- 2 St Dev).

Although occasionally cited, the use of standard deviation declined when hardware and software advanced beyond assuming normal or Rosin-Rammler distributions. Once "model independent" algorithms were introduced many particle scientists began using different calculations to describe distribution width. One of the common values used for laser diffraction results is the span, with the strict definition shown in the equation below (ref. 2):

Span = (Dv0.9 − Dv0.1) / Dv0.5

In rare situations the span equation may be defined using other values such as Dv0.8 and Dv0.2. Laser diffraction instruments should allow users this flexibility.

An additional approach to describing distribution width is to normalize the standard deviation through division by the mean. This is the coefficient of variation (COV), although it may also be referred to as the relative standard deviation (RSD). Although included in HORIBA laser diffraction software, this value is not used as often as it should be. The COV is both used and encouraged as a calculation to express measurement reproducibility: ISO 13320 (ref. 4) encourages all users to measure any sample at least three times and calculate the mean, standard deviation, and COV (st dev/mean), and the standard sets pass/fail criteria based on the COV values.
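The span and COV described above are also one-line calculations. The following is a minimal Python sketch; the percentile values and the three repeat D50 results are invented for illustration, and the sketch simply applies the definitions given in this section.

```python
import statistics

# Span from the percentile values of a single measurement (invented values, µm).
dv10, dv50, dv90 = 45.0, 100.0, 180.0
span = (dv90 - dv10) / dv50
print(f"span = {span:.2f}")

# COV (relative standard deviation) of repeat measurements, in the spirit of the
# ISO 13320 recommendation to measure every sample at least three times.
repeat_d50 = [100.2, 98.7, 101.4]        # three invented repeat runs, µm
mean = statistics.mean(repeat_d50)
st_dev = statistics.stdev(repeat_d50)    # sample standard deviation
print(f"mean = {mean:.1f} µm, st dev = {st_dev:.2f} µm, COV = {100 * st_dev / mean:.1f}%")
```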
Another common approach to defining the distribution width is to cite three values on the x-axis: the D10, D50, and D90, as shown in Figure 5. The D50, the median, has been defined above as the diameter where half of the population lies below this value. Similarly, 90 percent of the distribution lies below the D90, and 10 percent of the population lies below the D10.

figure 5 | THREE X-AXIS VALUES: Dv0.1, Dv0.5 (the median), and Dv0.9, with 10%, 50%, and 90% of the distribution below each size respectively.
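In practice the D10, D50, and D90 are read off the cumulative undersize curve. The sketch below is a minimal Python illustration of that lookup using linear interpolation; the size channels and cumulative percentages are invented, and instrument software may interpolate on a logarithmic size axis instead.

```python
# Invented cumulative undersize data: (diameter in µm, cumulative % below that size).
cumulative = [(1, 2.0), (2, 8.0), (5, 22.0), (10, 48.0),
              (20, 71.0), (50, 90.0), (100, 98.0), (200, 100.0)]

def d_value(curve, target_percent):
    """Diameter at which the cumulative curve crosses target_percent,
    linearly interpolated between the two bracketing points."""
    for (d_lo, p_lo), (d_hi, p_hi) in zip(curve, curve[1:]):
        if p_lo <= target_percent <= p_hi:
            frac = (target_percent - p_lo) / (p_hi - p_lo)
            return d_lo + frac * (d_hi - d_lo)
    raise ValueError("target percent outside the measured range")

for pct in (10, 50, 90):
    print(f"D{pct} = {d_value(cumulative, pct):.1f} µm")
```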
TECHNIQUE DEPENDENCE

HORIBA Instruments, Inc. distributes particle characterization tools based on several principles, including laser diffraction, dynamic light scattering, acoustic attenuation, and image analysis. Each of these techniques generates results in both similar and unique ways. Most techniques can describe results using standard statistical calculations such as the mean and standard deviation, but commonly accepted practices for describing results have evolved for each technique.

LASER DIFFRACTION

All of the calculations described in this document are generated by the HORIBA laser diffraction software package. Results can be displayed on a volume, surface area, or number basis. Statistical calculations such as standard deviation and variance are available in either arithmetic or geometric forms. The most common approach for expressing laser diffraction results is to report the D10, D50, and D90 values based on a volume distribution. The span calculation is the most common format for expressing distribution width. That said, there is nothing wrong with using any of the available calculations, and indeed many customers include the D4,3 when reporting results.

A word of caution is given when considering converting a volume distribution into either a surface area or number basis. Although the conversion is supplied in the software, it is only provided for comparison to other techniques, such as microscopy, which inherently measure particles on different bases. The conversion is only valid for symmetric distributions and should not be used for any purpose other than comparison to another technique.

DYNAMIC LIGHT SCATTERING

Dynamic light scattering (DLS) is unique among the techniques described in this document. The primary result from DLS is typically the mean value from the intensity distribution (called the Z average) and the polydispersity index (PDI) to describe the distribution width. It is possible to convert from an intensity to a volume distribution if the refractive index of the sample is known. The HORIBA DLS software makes this conversion easy, and therefore many HORIBA customers report D10, D50, and D90 values from the volume distribution.

ACOUSTIC SPECTROSCOPY

IMAGE ANALYSIS

The primary results from image analysis are based on number distributions. These are often converted to a volume basis, and in this case it is an accepted and valid conversion. Image analysis provides far more data values and options than any of the other techniques described in this document. Measuring each particle allows the user unmatched flexibility for calculating and reporting particle size results.

Image analysis instruments may report distributions based on particle length as opposed to spherical equivalency, and they may build volume distributions based on shapes other than spheres. Dynamic image analysis tools such as the CAMSIZER allow users to choose a variety of length and width descriptors, such as the maximum Feret diameter and the minimum largest chord diameter, as described in ISO 13322-2 (ref. 5).

With the ability to measure particles in any number of ways comes the decision to report those measurements in any number of ways. Users are again cautioned against reporting a single value—the number mean being the worst choice of the possible options. Experienced particle scientists often report the D10, D50, and D90, or include standard deviation or span calculations, when using image analysis tools.

CONCLUSIONS

All particle size analysis instruments provide the ability to measure and report the particle size distribution of the sample. There are very few applications where a single value is appropriate and representative. The modern particle scientist often chooses to describe the entire size distribution as opposed to just a single point on it. (One exception might be extremely narrow distributions such as latex size standards where the width is negligible.) Almost all real world samples exist as a distribution of particle sizes, and it is recommended to report the width of the distribution for any sample analyzed. The most appropriate option for expressing width depends on the technique used. When in doubt, it is often wise to refer to industry accepted standards such as ISO or ASTM in order to conform to common practice.
Number vs. volume distributions

Interpreting results of a particle size measurement requires an understanding of which technique was used and the basis of the calculations. Each technique generates a different result since each measures different physical properties of the sample. Once the physical property is measured, a calculation of some type generates a representation of a particle size distribution. Some techniques report only a central point and spread of the distribution; others provide greater detail across the upper and lower particle sizes detected. The particle size distribution can be calculated based on several models: most often as a number or volume/mass distribution.

NUMBER VS. VOLUME DISTRIBUTION

The easiest way to understand a number distribution is to consider measuring particles using a microscope. The observer assigns a size value to each particle inspected. This approach builds a number distribution—each particle has equal weighting once the final distribution is calculated. As an example, consider the nine particles shown in Figure 6. Three particles are 1µm, three are 2µm, and three are 3µm in size (diameter). Building a number distribution for these particles generates the result shown in Figure 7, where each particle size accounts for one third of the total. If this same result were converted to a volume distribution, the result would appear as shown in Figure 8, where 75% of the total volume comes from the 3µm particles and less than 3% comes from the 1µm particles.

figure 6 | PARTICLES 1, 2 AND 3µm IN SIZE: Calculations show percent by volume and number for each size range. D = 1µm: volume = 0.52µm³, percent by volume = 0.52/18.8 = 2.8%. D = 2µm: volume = 4.2µm³, percent by volume = 4.2/18.8 = 22%. D = 3µm: volume = 14.1µm³, percent by volume = 14.1/18.8 = 75%. Total volume = 0.52 + 4.2 + 14.1 = 18.8µm³.

figure 7 | NUMBER DISTRIBUTION and figure 8 | VOLUME DISTRIBUTION: The same nine particles plotted on a number basis and on a volume basis.

When presented as a volume distribution it becomes more obvious that the majority of the total particle mass or volume comes from the 3µm particles. Nothing changes between the two graphs except the basis of the distribution calculation.
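The arithmetic behind Figures 6 through 8 takes only a few lines. The sketch below is a minimal Python illustration (not from any instrument software) that converts the nine-particle number distribution to a volume basis using the sphere volume πd³/6; it reproduces the roughly 2.8%, 22%, and 75% volume fractions quoted for Figure 6.

```python
import math
from collections import Counter

# Nine particles: three each of 1, 2, and 3 µm diameter (Figure 6).
diameters_um = [1, 1, 1, 2, 2, 2, 3, 3, 3]

counts = Counter(diameters_um)
total_count = sum(counts.values())
# Summed sphere volumes per size class, in µm³ (V = pi/6 * d^3 per particle).
volumes = {d: n * math.pi / 6 * d**3 for d, n in counts.items()}
total_volume = sum(volumes.values())

for d in sorted(counts):
    number_pct = 100.0 * counts[d] / total_count
    volume_pct = 100.0 * volumes[d] / total_volume
    print(f"{d} µm: {number_pct:5.1f}% by number, {volume_pct:5.1f}% by volume")
```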
Another way to visualize the difference between number and volume distributions was given to us by a customer who needed to explain the difference to her colleagues. In this case beans are used as the particle system. Figure 9 shows a population where there are 13 beans in each of three size classes, equal on a number basis. Figure 10 shows these beans placed in volumetric cylinders, where it becomes apparent that the larger beans represent a much larger total volume than the smaller ones. Figures 11 and 12 show the complementary case, where an equal volume of each of the three types of beans is measured into the cylinders.

figure 9 | 13 BEANS IN EACH OF THREE SIZE CLASSES
figure 10 | THE SAME 39 BEANS PLACED IN VOLUMETRIC CYLINDERS
figure 11 | EQUAL VOLUME OF EACH OF THE THREE TYPES OF BEANS
figure 12 | EQUAL VOLUMES IN VOLUMETRIC CYLINDERS

TRANSFORMING RESULTS

Results from number-based systems, such as microscopes or image analyzers, construct their beginning result as a number distribution. Results from laser diffraction or acoustic attenuation construct their beginning result as a volume distribution. The software for many of these systems includes the ability to transform the results from number to volume or vice versa. It is perfectly acceptable to transform image analysis results from a number to a volume basis; in fact, the pharmaceutical industry has concluded that it prefers results be reported on a volume basis for most applications (ref. 6). On the other hand, converting a volume result from laser diffraction to a number basis can lead to undefined errors and is only suggested when comparing to results generated by microscopy. Figure 13 shows an example where a laser diffraction result is transformed from volume to both a number and a surface area based distribution. Notice the large change in median, from 11.58µm to 0.30µm, when converted from volume to number.

figure 13 | VOLUME DISTRIBUTION CONVERTED TO AREA AND NUMBER: Conversion errors can result when deriving number or area values from a laser diffraction volume result.

                NUMBER DISTRIBUTION   VOLUME DISTRIBUTION
MEAN            0.38µm                12.65µm
MEDIAN          0.30µm                11.58µm
SA              13467 cm²/cm³         13467 cm²/cm³
STANDARD DEV    0.40                  8.29
Setting particle size specifications

The creation of a meaningful and product-appropriate particle size specification requires knowledge of its effect on product performance in addition to an understanding of how results should be interpreted for a given technique. This section provides guidelines for setting particle size specifications on particulate materials—primarily when using the laser diffraction technique, but also with information about dynamic light scattering (DLS), acoustic spectroscopy, and image analysis.

DISTRIBUTION BASIS

Different particle sizing techniques report primary results based on number, volume, weight, surface area, or intensity. As a general rule, specifications should be based on the format of the primary result for a given technique. Laser diffraction generates results based on volume distributions, so any specification should be volume based. Likewise, an intensity basis should be used for DLS specifications, volume for acoustic spectroscopy, and number for image analysis. Conversion to another basis such as number—although possible in the software—is inadvisable because significant error is introduced. The exception to this guideline is converting a number-based result from a technique such as image analysis into a volume basis (ref. 7); the error involved is generally very low in this scenario.

DISTRIBUTION POINTS

While it is tempting to use a single number to represent a particle size distribution (PSD), and thus the product specification, this is typically not a good idea. In nearly every case, a single data point cannot adequately describe a distribution of data points. This can easily lead to misunderstandings and provides no information about the width of the distribution. Less experienced users may believe that the "average particle size" can adequately describe a size distribution, but this implies expecting a response based on a calculated average (or mean). If forced to use a single calculated number to represent the mid-point of a particle size distribution, then the common practice is to report the median and not the mean. The median is the most stable calculation generated by laser diffraction and should be the value used for a single point specification in most cases.

Rather than use a single point in the distribution as a specification, it is suggested to include other size parameters in order to describe the width of the distribution. The span is a common calculation to quantify distribution width: (D90 − D10) / D50. However, it is rare to see span as part of a particle size specification. The more common practice is to include two points which describe the coarsest and finest parts of the distribution. These are typically the D90 and D10. Using the same convention as the D50, the D90 describes the diameter where ninety percent of the distribution has a smaller particle size and ten percent has a larger particle size. The D10 diameter has ten percent smaller and ninety percent larger. A three point specification featuring the D10, D50, and D90 will be considered complete and appropriate for most particulate materials.

How these points are expressed may vary. Some specifications use a format where the D10, D50, and D90 must not be more than (NMT) a stated size. For example:

D10 NMT 20µm
D50 NMT 80µm
D90 NMT 200µm

Although only one size is stated for each point, there is an implied range of acceptable sizes (i.e. the D50 passes if between 20 and 80µm). Alternatively, a range of values can be explicitly stated. For example:

D10 10 – 20µm
D50 70 – 80µm
D90 180 – 200µm

This approach better defines the acceptable size distribution, but may be perceived as overly complicated for many materials.
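Either format is straightforward to check against a measured result. The following is a minimal Python sketch; the measured values are invented, and the limits are simply the example numbers above.

```python
# Measured percentile values from a hypothetical lot (µm).
measured = {"D10": 14.0, "D50": 76.0, "D90": 190.0}

# "Not more than" (NMT) format: each point must not exceed the stated size.
nmt_spec = {"D10": 20.0, "D50": 80.0, "D90": 200.0}
nmt_pass = all(measured[k] <= limit for k, limit in nmt_spec.items())

# Explicit range format: each point must fall within its stated window.
range_spec = {"D10": (10.0, 20.0), "D50": (70.0, 80.0), "D90": (180.0, 200.0)}
range_pass = all(lo <= measured[k] <= hi for k, (lo, hi) in range_spec.items())

print(f"NMT specification:   {'PASS' if nmt_pass else 'FAIL'}")
print(f"Range specification: {'PASS' if range_pass else 'FAIL'}")
```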
It may also be tempting to include a requirement that 100% of the distribution is smaller than a given size. This implies calculating the D100, which is not recommended. The D100 result (and to a lesser degree the D0) is the least robust calculation from any experiment. Any slight disturbance during the measurement, such as an air bubble or a thermal fluctuation, can significantly influence the D100 value. Additionally, the statistics involved in calculating this value (and other "extreme" values such as the D99, D1, etc.) are not as robust because there may not be very many of the largest and smallest particles. Given the possible broad spread of D100 results, it is not recommended to create specifications stating that 100% of the particles are below a given size.

INCLUDING A MEAN VALUE

Ultimately, the sophistication of the specification should be driven by how particle size influences product performance. Given that some people ask about the "average size", it is not surprising that some specifications are based on a mean diameter. This approach is complicated by the fact that there are several mean values that can be calculated and reported in the result (ref. 8). The most common mean value noted when using laser diffraction is the volume mean, or D4,3. The D4,3 is very sensitive to the presence of large particles in the distribution, so it is a good idea to include the D4,3 in the specification if product performance is sensitive to the presence of large particles. The other mean value occasionally used is the D3,2, or surface mean. This value is typically used only when the product is an aerosol or spray.
X VS. Y AXIS

Other published specifications are based on the percent below a given particle size, such as: 50% below 20µm and 90% below 100µm. This type of specification is based on points along the y axis (which reports cumulative percent) as opposed to the x axis (which reports diameter) as in the previous examples. Although this approach has been used in many specifications, it is important to realize the difference between using the x (size) and y (percent) axes. All measurements include an error which should always be considered when setting a specification.

For the example shown in Figure 14, the D50 is 100µm with an error of +/- 5% on the x (size) axis. This error includes all sources, such as sampling and sample preparation. The same error becomes +/- 20% when translated to the y (percent) axis. Stating an error of +/- 5% is more attractive than +/- 20%, even when expressing the same actual error range. The degree to which the y axis error is exaggerated relative to the x axis depends upon the steepness of the distribution curve.

figure 14 | MEASUREMENT ERROR: Error appears exaggerated on the y axis because of the narrowness of the PSD. A size error of +/- 5% on the x axis corresponds to an undersize error of +/- 20% on the cumulative (% under) axis.

There are applications where the percent below a given particle size is an important result. Recently there has been interest in the presence of "nanoparticles" (at least one dimension smaller than 100nm) in products such as cosmetics. The software which calculates the PSD should be capable of easily reporting the percent under any chosen size—in this case the percent below 100nm (Figure 15). In the LA-950 software this is displayed as "Diameter on Cumulative %". In the example shown in Figure 15 the value for percent less than 100nm is reported as 9.155%.

figure 15 | REPORTING THE PSD PERCENTAGE SMALLER THAN A GIVEN SIZE: In this example, the percentage of the distribution below 100nm is reported as 9.155%.
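A percent-below-a-size value of this kind is the complementary lookup to the D10/D50/D90 calculation shown earlier: the cumulative curve is interpolated at a chosen size rather than at a chosen percent. The sketch below is a minimal Python illustration with invented sub-micron data; it does not reproduce the LA-950 result shown in Figure 15.

```python
# Invented cumulative undersize data for a sub-micron sample: (size in nm, cumulative %).
cumulative = [(40, 0.5), (60, 2.0), (80, 5.5), (100, 9.2),
              (150, 25.0), (300, 70.0), (600, 98.0), (1000, 100.0)]

def percent_below(curve, size):
    """Cumulative % at the requested size, linearly interpolated."""
    for (s_lo, p_lo), (s_hi, p_hi) in zip(curve, curve[1:]):
        if s_lo <= size <= s_hi:
            frac = (size - s_lo) / (s_hi - s_lo)
            return p_lo + frac * (p_hi - p_lo)
    raise ValueError("size outside the measured range")

print(f"percent below 100 nm = {percent_below(cumulative, 100):.2f}%")
```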
Several points are worth mentioning regarding setting a specification on the percent below 100nm, both for this example specifically and for sub-micron materials generally. The particle size distribution is dependent upon many factors, including the sample preparation method. The laser diffraction technique works best within a certain particulate concentration range, which sometimes requires that samples undergo dilution. In some cases this dilution may change the state of the particles and affect the apparent size distribution. Additionally, ultrasonic energy can be applied to improve the dispersion of agglomerates, which can significantly change the result.

TESTING REPRODUCIBILITY

There are currently two internationally accepted standards written on the use of laser diffraction: ISO 13320 (ref. 9) and USP<429> (ref. 10). Both standards state that samples should be measured at least three times and that reproducibility must fall within the acceptance criteria the standards define (expressed through the COV of the repeat measurements).

INCLUDING THE ERROR

The reproducibility errors discussed above should be investigated and minimized because they play an important role in the final setting of a specification. Once the specification based on product performance has been determined, the final specification must be narrowed by the error range (ref. 11). In the example shown in Figure 16 the specification for the D50 is 100µm +/- 20% (or 80–120µm) based on product performance. If the total measurement error is +/- 10% (using USP<429> guidelines for the D50 value), the specification must be tightened to ~90–110µm (rounded for simplicity) in order to assure the product is never out of the performance specification. For example, if the D50 is measured to be 110µm, we are certain the D50 is actually less than 120µm even with a maximum 10% error.

figure 16 | BUILDING A SIZE SPECIFICATION TO INCLUDE ERROR SOURCES: If the total measurement error is +/- 10%, then the specification must be tightened from the product performance specification (80–120µm) to a specification including error (~90–110µm) to assure the product stays within the performance specification.

This is why it is important to create robust standard operating procedures for any material for which we wish to set a published specification. Any combination of high measurement error (usually stemming from non-optimized method development) and tight specifications will make meeting that specification more difficult. Why make life harder than it need be?
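The narrowing step in Figure 16 is simple arithmetic. The sketch below is a minimal Python illustration of the worked example above; it treats the +/- 10% measurement error as +/- 10 µm about the 100 µm target, which is the reading implied by the rounded 90–110µm result.

```python
# Product performance specification for the D50: 100 µm +/- 20%.
target = 100.0
perf_low, perf_high = 0.8 * target, 1.2 * target        # 80-120 µm

# Total measurement error, taken here as +/- 10% of the target value.
error = 0.10 * target                                    # +/- 10 µm

# Tighten the released specification so a worst-case measurement error
# cannot leave the true D50 outside the performance window.
spec_low, spec_high = perf_low + error, perf_high - error
print(f"performance specification:     {perf_low:.0f}-{perf_high:.0f} µm")
print(f"specification including error: {spec_low:.0f}-{spec_high:.0f} µm")
```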
DYNAMIC LIGHT SCATTERING

The primary results from dynamic light scattering (DLS) systems are typically reported as an intensity distribution. Key values included in DLS-based specifications are the intensity-weighted average (often called the "z average") and the polydispersity index (PDI). The results can be transformed into a volume-based distribution, and D10, D50, and D90 results can also be used.

ACOUSTIC SPECTROSCOPY
IMAGE ANALYSIS
The primary result reported by image analysis is a number distribution, since the particles are inspected one at a time. Setting specifications based on the number distribution is acceptable, but this is the one example where conversion to another basis (i.e. volume) is both acceptable and often preferred. As long as a sufficient number of particles are inspected to fully define the distribution, the conversion from number to volume does not introduce unknown errors into the result. The pharmaceutical industry discussed the subject at a meeting organized by the AAPS (ref. 6) and concluded that results are preferably reported as volume distributions.

Particle size distribution specifications based on the image analysis technique often include the mean, D10, D50, and D90 values. Care should be taken to avoid basing specifications on the number-based mean, since this value may not track process changes such as milling or agglomeration (ref. 12). Conversion from number to volume distribution can be performed with high accuracy by specifying the typical particle shape (spherical, cylindrical, ellipsoidal, tetragonal, etc.), as sketched below.
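For the spherical case the conversion is a simple d³ weighting of each size class. A minimal sketch (hypothetical bin data, assuming spheres):

```python
import numpy as np

# Hypothetical number-weighted histogram: bin-center diameters (µm) and counts.
diameters = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
counts    = np.array([500, 300, 150,  40,   10])

number_pct = 100 * counts / counts.sum()

# Volume weighting for spheres: each particle contributes in proportion to d**3.
volume     = counts * diameters**3
volume_pct = 100 * volume / volume.sum()

for d, n, v in zip(diameters, number_pct, volume_pct):
    print(f"{d:5.1f} µm   number: {n:5.1f}%   volume: {v:5.1f}%")
```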
Particle shape parameters such as roundness, aspect ratio, and compactness are used to describe particle morphology. Specifications for shape parameters are typically reported using just the number-based mean value, so this is recommended for setting specifications.

CONCLUSIONS
The task of setting a particle size specification for a material requires knowledge of which technique will be used for the analysis and how size affects product performance. Sources of error must be investigated and incorporated into the final specification.

LA-950
LASER DIFFRACTION TECHNIQUE
RANGE: 10 nm - 3,000 µm (3 mm)
OPTIMAL APPLICATIONS: POWDERS, SUSPENSIONS, AND EMULSIONS
WEIGHT: 56 kg (123 lbs)
FOOTPRINT: WIDTH 705 mm (28"), DEPTH 565 mm (22"), HEIGHT 500 mm (20")

The LA-950 combines the most popular modern sizing technique with state of the art refinements to measure wet and dry samples from 10 nanometers to 3 millimeters. The central idea in laser diffraction is that a particle will scatter light at an angle determined by that particle's size. Larger particles scatter at small angles and smaller particles scatter at wide angles. A collection of particles will produce a pattern of scattered light, defined by intensity and angle, that can be transformed into a particle size distribution result.

INTRODUCTION
The knowledge that particles scatter light is not new. Rayleigh scattering of light from particles in the atmosphere is what gives the sky a blue color and makes sunsets yellow, orange, and red. Light interacts with particles in any of four ways: diffraction, reflection, absorption, and refraction. Figure 17 shows the idealized edge diffraction of an incident plane wave on a spherical particle. Scientists discovered more than a century ago that light scattered differently off of differently sized objects. Only the relatively recent past, however, has seen the science of particle size analysis embrace light scattering as not only a viable technique, but the backbone of modern sizing.

figure 17 | DIFFRACTION PATTERN OF A PLANE WAVE SCATTERING FROM A SPHEROID
Such an instrument consists of at least one source of high intensity, monochromatic light, a sample handling system to control the interaction of particles and incident light, and an array of high quality photodiodes to detect the scattered light over a wide range of angles. This last piece is the primary function of a laser diffraction instrument: to record angle and intensity of scattered light. This information is then input into an algorithm which, while complex, reduces to the following basic truth:

LARGE PARTICLES SCATTER INTENSELY AT NARROW ANGLES
SMALL PARTICLES SCATTER WEAKLY AT WIDE ANGLES

The algorithm, at its core, consists of an optical model with the mathematical transformations necessary to get particle size data from scattered light. However, not all optical models were created equally.
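The angle-size relationship can be illustrated with the classical Fraunhofer (Airy) pattern for an opaque sphere, I(θ) ∝ [2·J1(x·sinθ)/(x·sinθ)]² with size parameter x = πd/λ. The sketch below is an illustration of that relationship only (using a 650 nm wavelength as an assumption), not the instrument's algorithm; it shows that the first diffraction minimum moves to narrower angles as the particle grows:

```python
import numpy as np
from scipy.special import j1   # first-order Bessel function of the first kind

WAVELENGTH_UM = 0.650          # assumed 650 nm red source
FIRST_AIRY_ZERO = 3.8317       # first zero of J1(u)

def fraunhofer_intensity(theta_rad, diameter_um):
    """Normalized Fraunhofer (Airy) diffraction pattern of an opaque sphere."""
    x = np.pi * diameter_um / WAVELENGTH_UM     # dimensionless size parameter
    u = x * np.sin(theta_rad)
    u = np.where(u == 0, 1e-12, u)              # avoid 0/0 at the exact center
    return (2 * j1(u) / u) ** 2

# The angle of the first diffraction minimum shrinks as the particle grows,
# i.e. large particles confine their scattering to narrow angles.
for d_um in (5.0, 20.0, 100.0):
    x = np.pi * d_um / WAVELENGTH_UM
    theta_min = np.degrees(np.arcsin(FIRST_AIRY_ZERO / x))
    assert fraunhofer_intensity(np.radians(theta_min), d_um) < 1e-3  # pattern vanishes here
    print(f"{d_um:6.1f} µm particle: first diffraction minimum near {theta_min:.2f}°")
```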
In the beginning there was the Fraunhofer Approximation, and it was good. This model, which was popular in older laser diffraction instruments, makes certain assumptions (hence the approximation) to simplify the calculation. Particles are assumed…

to be spherical
to be opaque
to scatter equivalently at wide angles as at narrow angles
to interact with light in a different manner than the medium

Practically, these restrictions render the Fraunhofer Approximation a very poor choice for particle size analysis, as measurement accuracy below roughly 20 microns is compromised.

figure 18 | REPRESENTATIONS OF FRAUNHOFER (TOP) AND MIE SCATTERING MODELS: Angle, energy and size are used as parameters in these examples.
The Mie scattering theory overcomes these limitations. Gustav Mie developed a closed form solution (not an approximation) to Maxwell's electromagnetic equations for scattering from spheres. This solution exceeds Fraunhofer by including sensitivity to smaller sizes (wide angle scatter) and a wide range of opacity (i.e. light absorption), and the user need only provide the refractive index of the particle and the dispersing medium. Accounting for light that refracts through the particle (a.k.a. secondary scatter) allows for accurate measurement even in cases of significant transparency. The Mie theory likewise makes certain assumptions: that the particle…

is spherical
ensemble is homogeneous
refractive index of particle and surrounding medium is known

Figure 18 shows a graphical representation of the Fraunhofer and Mie models using scattering intensity, scattering angle, and particle size (ref. 13). The two models begin to diverge around 20 microns, and the differences become pronounced below 10 microns. Put simply, the Fraunhofer Approximation contributes a magnitude of error for micronized particles that is typically unacceptable to the user. A measurement of spherical glass beads is shown in Figure 19, calculated using the Mie (red) and Fraunhofer (blue) models. The Mie result meets the material specification, while the Fraunhofer result fails the specification and splits the peak. The over-reporting of small particles (where the Fraunhofer error is significant) is a typical comparison result.

figure 19 | MIE (RED) AND FRAUNHOFER (BLUE) RESULTS FOR SPHERICAL GLASS BEADS
BUILDING A STATE OF THE ART LASER DIFFRACTION ANALYZER
The basics of what needs to be measured and how it is transformed into particle size data are understood (ref. 14). What constitutes a basic particle size analyzer has also been discussed, but there is a wide gulf between bare minimum and state of the art. The latter is always the industry leader in accuracy, repeatability, usability, flexibility, and reliability. The current state of the art in laser diffraction is the Partica LA-950, featuring two high intensity light sources, a single continuous cast aluminum optical bench (Figure 20), a wide array of sample handling systems, and the expert refinements expected from the fifth revision in the 900 series.

figure 20 | SIMPLIFIED LAYOUT OF THE LA-950 OPTICAL BENCH: 1. Red wavelength laser diode for particles > 500nm; 2. Blue LED for particles < 500nm; 3. Low angle detectors for large particles; 4. Side and back angle detectors.

Using two light sources of different wavelengths is of critical importance because the measurement accuracy of small particles is wavelength dependent. Figure 21 shows the 360° light scattering patterns from 50nm and 70nm particles as generated with a 650nm red laser. The patterns are practically identical across all angles, and the algorithm will not be able to accurately calculate the different particle sizes. Figure 22 shows the same experiment using a 405nm blue LED. Distinct differences are now seen on the wide angle detectors, which allows accurate calculation of these materials. Integrating a second, shorter wavelength light source is the primary means of improving nano-scale performance beyond the bare minimum laser diffraction analyzer.

figure 21 | LIGHT SCATTERING PATTERNS FOR 50nm AND 70nm PARTICLES USING 650nm LASER

figure 22 | LIGHT SCATTERING PATTERNS FOR THE SAME SAMPLES USING 405nm LED
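The benefit of the shorter wavelength can be seen in the dimensionless size parameter x = πd/λ that governs the scattering pattern: moving from 650 nm to 405 nm raises x by roughly 60%, pushing sub-100 nm particles further out of the nearly featureless small-particle regime. A quick numerical illustration (numbers only, not the instrument's algorithm):

```python
import numpy as np

def size_parameter(diameter_nm, wavelength_nm):
    """Dimensionless size parameter x = pi * d / lambda."""
    return np.pi * diameter_nm / wavelength_nm

for wavelength in (650.0, 405.0):        # red laser diode vs. blue LED
    for diameter in (50.0, 70.0):        # the particles of Figures 21 and 22
        x = size_parameter(diameter, wavelength)
        print(f"lambda = {wavelength:5.0f} nm, d = {diameter:4.0f} nm  ->  x = {x:.3f}")
```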
CONCLUSIONS
The HORIBA LA-950 particle size analyzer uses the laser diffraction method to measure size distributions. This technique uses first principles to calculate size using light scattered off the particle (edge diffraction) and through the particle (secondary scattering refraction). The LA-950 incorporates the full Mie scattering theory to cover the widest size range currently available. Wide measurement ranges, fast analyses, exceptional precision, and reliability have made laser diffraction the most popular modern sizing technique in both industry and academia.

figure 23 | 30, 40, 50 AND 70 NANOMETER MATERIALS (LATEX STANDARDS) MEASURED INDEPENDENTLY ON THE LA-950 USING THE BLUE LED: frequency q (%) versus diameter (µm) from 0.010 to 1.000.
LB-550
DYNAMIC LIGHT SCATTERING TECHNIQUE
RANGE: 1 nm - 6 µm
OPTIMAL APPLICATIONS: SUSPENSIONS AND EMULSIONS UNDER 6 µm
WEIGHT: 26 kg (57 lbs)
FOOTPRINT: WIDTH 340 mm (13.4"), DEPTH 565 mm (22"), HEIGHT 305 mm (12")

The LB-550 Dynamic Light Scattering (DLS) system measures particle size from 1 nanometer to 6 microns at concentrations up to 40% w/v. All DLS systems measure light scattering effects arising from the Brownian motion of particles in suspension. Unlike systems using correlators, the LB-550 uses a fast Fourier transform to create a power spectrum from the light scattering fluctuations; the power spectrum is then used to create the particle size distribution. This technical note explains the underlying principles used by the LB-550 nanoparticle size analyzer.

figure 24 | PCS VS. POWER SPECTRUM ANALYZERS: A correlator-based (PCS) analyzer counts photons and builds an autocorrelation function R(ΔT) versus delay time, whereas a spectrum analyzer computes the power spectrum P(ω). Either route leads to the particle size distribution F(Dp), but the power spectrum system can measure both fast- and slow-moving particles.
INTRODUCTION
Particle size distribution analyzers based on measuring the phenomenon of Brownian motion can be broadly classified as being based either on autocorrelators or on power spectra. For the purpose of this document, systems using autocorrelators will be called Photon Correlation Spectroscopy (PCS) systems; power spectrum analyzers such as the HORIBA LB-550 are designed to examine the differences in frequency of light scattered off particles. In both cases a laser light source interacts with the sample in a cuvette, and a detector at some angle removed from the light source measures light scattered due to the Brownian motion of the particles. Either the correlation function or the power spectrum is then used to calculate the particle size distribution.

The LB-550 technique analyzes fluctuations in the intensity of the scattered light relative to the incident light; this method is also referred to as the frequency analysis method. PCS analyzers are based on a method in which the number of photons per unit time is counted, assuming that light consists of a series of photons.

As described above, the PCS instrument is designed to count moving particles in terms of the number of photons. It must therefore simultaneously measure particles which are moving at both fast and slow speeds: fast-moving particles must be determined at high speeds, and slow-moving ones over extended periods of time. In actual practice, however, it is very difficult to create an instrument that combines these functions with continuous data multiplication capability.
The power spectrum apparatus, on the other hand, treats light as a traveling wave and can thus obtain the frequency spectra of light scattered by both fast- and slow-moving particles. The acquired signals are converted into power spectrum data using the Fourier transform method. This form of data contains all frequency information, ranging from low frequencies, which represent slow-moving particles, to high frequencies, which represent the behavior of fast-moving particles. This permits analysis of every signal from each particle, ensuring that particle size distributions can be characterized with high precision.

THE POWER SPECTRUM AND FAST FOURIER TRANSFORM
For this technology the power spectrum is defined by the different frequency components in the scattered light signal and how many instances there are of each frequency. The power spectrum graph is a spectral representation in which frequency is depicted on the horizontal axis and intensity on the vertical axis; it shows the level of light intensity at each frequency. Power spectrum (frequency distribution) data is calculated using a mathematical technique called the fast Fourier transform.

The power spectrum (frequency/intensity distribution) provides information regarding the intensity of light as a function of its frequency. However, such a power spectrum cannot be obtained without transformation of all the input signals reaching the detector. A mixture of light waves at 1 Hz, 2 Hz and so on is incident upon the detector, and these signals are commingled (Figure 25) to an extent that does not by itself provide any helpful information concerning the frequency/intensity distribution. The Fourier transform technique is therefore applied to elicit pertinent information regarding the intensity of light at a frequency of, say, 1 Hz, from these unavoidably messy signals.

Fourier transform methods are available in a variety of calculation techniques, some characterized by high precision and others by short computation time. Included among these is the fast Fourier transform, which performs high-speed computation on samples whose size is a power of two. The LB-550 adopts this mathematical process.

figure 25 | LIGHT SCATTERING TO THE POWER SPECTRUM: The conversion from the light fluctuation signal (detector voltage versus time) via the fast Fourier transform into the power spectrum P(ω) versus frequency ω.
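The conversion just described can be sketched in a few lines: simulate a fluctuating detector signal, apply a fast Fourier transform, and read off the power spectrum. The sampling rate and fluctuation frequencies below are invented for illustration and are not LB-550 parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated detector output: a slow 50 Hz and a fast 400 Hz fluctuation plus noise,
# sampled at 2048 Hz for one second (2048 points, a power of two for the FFT).
fs = 2048.0
t = np.arange(2048) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
signal += 0.2 * rng.standard_normal(t.size)

# Power spectrum: squared magnitude of the FFT at each frequency.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The two strongest non-DC components recover the 50 Hz and 400 Hz fluctuations.
peaks = freqs[1:][np.argsort(spectrum[1:])[-2:]]
print("Dominant fluctuation frequencies (Hz):", [float(p) for p in sorted(peaks)])
```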
CALCULATING PARTICLE SIZE FROM THE POWER SPECTRUM
The calculation is based on determining f(a) from the measured frequency/intensity distribution S(ω) by solving the following general Fredholm integral equation of the first kind:

EQUATION 1   S(ω) = ∫ K(ω,a) f(a) da

where ω is the angular frequency and a is the particle size. Obtaining this solution means solving very difficult, non-linear inverse problems. To accomplish this, the LB-550 employs a uniquely optimized iterative method for particle sizing. K(ω,a) is an intermediate function referred to as the response function, which is calculated as follows.

Let k be the Boltzmann constant, T the absolute temperature, η the viscosity coefficient of the solvent, a the particle size, and D the diffusion coefficient. Then the diffusion coefficient D can be expressed from the Stokes-Einstein equation:

EQUATION 2   D = kT / (3πηa)

In addition, suppose that λ is the wavelength of the laser beam in vacuum, n is the refractive index of the solvent, and θ is the angle through which the laser beam is scattered. The magnitude of the scattering vector K can then be written as

EQUATION 3   K = (4πn / λ) · sin(θ/2)

Since it has been shown that for a spherical particle the frequency/intensity distribution agrees with a Lorentzian function, the calculated frequency/intensity distribution S0(ω) for each particle size is given by a Lorentzian whose half-width is set by DK².
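A small numerical sketch of the quantities just defined, using Equation 2 and the scattering vector; the temperature, viscosity, wavelength, refractive index, and 90° scattering angle below are illustrative assumptions, not LB-550 internals:

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K

def diffusion_coefficient(diameter_m, temp_k=298.15, viscosity_pa_s=0.89e-3):
    """Stokes-Einstein (Equation 2): D = kT / (3 * pi * eta * a), water at 25 C assumed."""
    return K_B * temp_k / (3 * np.pi * viscosity_pa_s * diameter_m)

def scattering_vector(wavelength_m=650e-9, n_medium=1.33, angle_deg=90.0):
    """Scattering vector magnitude (Equation 3): K = (4*pi*n/lambda) * sin(theta/2)."""
    return 4 * np.pi * n_medium / wavelength_m * np.sin(np.radians(angle_deg) / 2)

def lorentzian_spectrum(omega, diameter_m):
    """Lorentzian S0(omega), normalized, with half-width gamma = D * K**2."""
    gamma = diffusion_coefficient(diameter_m) * scattering_vector() ** 2
    return (gamma / np.pi) / (omega ** 2 + gamma ** 2)

# Larger particles diffuse more slowly, giving a narrower, taller Lorentzian.
for d_nm in (10, 100, 1000):
    gamma = diffusion_coefficient(d_nm * 1e-9) * scattering_vector() ** 2
    peak = lorentzian_spectrum(0.0, d_nm * 1e-9)
    print(f"{d_nm:5d} nm: half-width ~ {gamma:.3g} rad/s, S0(0) ~ {peak:.3g}")
```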
A group of calculated frequency/intensity distributions S0(ω), one for each particle size involved, is used to build the response function K(ω,a), which is required to characterize the particle size distribution f(a) by the iterative operation. Suppose that the particle size distribution f0(a) is an initial hypothetical distribution, for example one in which all particle sizes occur with the same frequency. The difference between the spectrum predicted from this distribution and the observed frequency/intensity distribution is determined, the hypothetical distribution is modified so as to decrease the difference, and the modified distribution is re-defined as f1(a). This loop is repeated. When Equation 1 is satisfied, that is, when the measured frequency/intensity distribution coincides with the distribution calculated from the hypothetical particle size distribution f(a) using the response function, the operation is complete and this f(a) is regarded as the true particle size distribution.
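The iterative refinement loop can be illustrated with a deliberately simplified multiplicative update on synthetic data (a generic Richardson-Lucy-style correction, not HORIBA's proprietary method; the kernel and distribution below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic response matrix K[i, j] = S0(omega_i) for size bin j, plus a "true"
# size distribution used only to manufacture a measured spectrum.
n_omega, n_sizes = 200, 40
omega = np.linspace(1.0, 2000.0, n_omega)
gamma = np.geomspace(5.0, 1000.0, n_sizes)               # one Lorentzian width per size bin
K = (gamma / np.pi) / (omega[:, None] ** 2 + gamma[None, :] ** 2)

f_true = np.exp(-0.5 * ((np.arange(n_sizes) - 25) / 4.0) ** 2)
s_measured = K @ f_true + 1e-6 * rng.standard_normal(n_omega)

# Start from a flat hypothetical distribution and repeatedly correct it so that
# the predicted spectrum K @ f approaches the measured spectrum (Equation 1).
f = np.ones(n_sizes)
for _ in range(500):
    predicted = K @ f
    f *= (K.T @ (s_measured / predicted)) / K.sum(axis=0)   # multiplicative update

print("recovered peak bin:", int(np.argmax(f)), " true peak bin:", int(np.argmax(f_true)))
```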
CONCLUSIONS
The HORIBA LB-550 nanoparticle size analyzer uses the dynamic light scattering method to measure the size distribution of particles undergoing Brownian motion. The power spectrum approach is employed to convert light scattering fluctuations into the particle size distribution, utilizing a fast Fourier transform and the approach described in this section. The advantages of the LB-550 include the ability to better measure broad particle size distributions and the ability to measure at high concentrations while maintaining a wide size measurement range.

DT-1201
ACOUSTIC SPECTROSCOPY TECHNIQUE
RANGE: 5 nm - 1,000 µm
OPTIMAL APPLICATIONS: SUSPENSIONS AND EMULSIONS AT HIGH CONCENTRATIONS
WEIGHT: 30 kg (66 lbs), excluding the computer
FOOTPRINT: WIDTH 406 mm (16"), DEPTH 254 mm (10"), HEIGHT 406 mm (16")

Both the particle size distribution and the zeta potential of particles in suspension can be measured using acoustics. The technique used for particle size distribution is typically called acoustic spectroscopy; the technique used to measure zeta potential is typically called electroacoustics. Acoustic spectroscopy applies pulses of sound to the test slurry, and the instrument measures the attenuation and propagation velocity of the sound over a wide range of ultrasonic frequencies, typically 1 to 100 MHz. Simply put: attenuation = sound in - sound out. A simplified block diagram (a transmitter and receiver separated by a gap L, with input intensity I_in and output intensity I_out) is shown in Figure 26. The attenuation coefficient, in dB/cm/MHz, is

α = 10 / (f [MHz] · L [cm]) · log(I_in / I_out)

figure 26 | ATTENUATION: The attenuation spectrum for small, monodispersed particles is a bell-shaped curve (attenuation in dB/cm/MHz versus frequency from 1 to 100 MHz, shown for 1 micron and 0.5 micron diameter particles) that shifts to the right with decreasing particle size.
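The attenuation coefficient defined above is straightforward to evaluate; the sketch below plugs hypothetical intensities into the expression (values invented for illustration, not DT-1201 data):

```python
import numpy as np

def attenuation_db_per_cm_per_mhz(i_in, i_out, freq_mhz, gap_cm):
    """Attenuation = 10 / (f[MHz] * L[cm]) * log10(I_in / I_out), in dB/cm/MHz."""
    return 10.0 / (freq_mhz * gap_cm) * np.log10(i_in / i_out)

# Hypothetical measurement: at 10 MHz across a 0.5 cm gap the received signal
# drops to 1/20 of the transmitted signal.
alpha = attenuation_db_per_cm_per_mhz(i_in=1.0, i_out=0.05, freq_mhz=10.0, gap_cm=0.5)
print(f"alpha = {alpha:.2f} dB/cm/MHz")   # ~2.60 dB/cm/MHz
```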
The gap between the transmitting and receiving transducers is computer controlled by means of a stepping motor. The signal level at the output transducer is measured for a set of discrete frequencies and gaps. The rate of change in the signal level with gap, expressed in dB/cm, corresponds to the attenuation due to losses in the sample. These losses arise from several mechanisms, including scattering, viscous, and thermal losses. A key part of the method is a predictive theory which allows the instrument to calculate the expected attenuation for a given size distribution, taking these loss mechanisms into account. The particle size distribution is computed by finding the distribution that minimizes the difference between the acoustic spectrum computed from theory and the experimental spectrum. The fitting error between theory and experiment provides a confidence factor for the final result. The software provides either log normal or bimodal distributions, as necessary to best match theory to experiment consistent with the experimental errors.
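Conceptually, the fit can be sketched as a search for the distribution parameters that minimize the residual between measured and predicted attenuation spectra. The predict_attenuation function below is a made-up placeholder (the real predictive theory accounts for scattering, viscous, and thermal losses), so the example only illustrates the fitting logic, not the physics:

```python
import numpy as np

freqs_mhz = np.geomspace(1, 100, 20)

def predict_attenuation(freqs, d50_um, sigma):
    """Placeholder attenuation model (NOT the instrument's predictive theory):
    a bell-shaped curve in log-frequency whose position depends on d50."""
    center = np.log10(10.0 / d50_um)          # smaller particles -> higher frequency
    return np.exp(-0.5 * ((np.log10(freqs) - center) / sigma) ** 2)

# Pretend this spectrum came from the instrument (a 0.7 µm "sample" plus noise).
measured = predict_attenuation(freqs_mhz, 0.7, 0.4)
measured += 0.02 * np.random.default_rng(2).standard_normal(freqs_mhz.size)

# Grid search for the lognormal parameters minimizing the sum of squared errors.
best = min(
    ((d50, s) for d50 in np.geomspace(0.1, 10, 50) for s in (0.3, 0.4, 0.5)),
    key=lambda p: np.sum((predict_attenuation(freqs_mhz, *p) - measured) ** 2),
)
print(f"best-fit d50 ~ {best[0]:.2f} µm, sigma ~ {best[1]}")
```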
Acoustic spectroscopy has the advantage of being able to measure suspensions (solid particles, soft particles, or emulsions) without dilution. This enables particle size analysis in a sample's native state, which has real benefits for the applicability of the final result to the product as it will be used. The technique works best for samples with a weight fraction between 1 and 40%, depending on the material. The user must know the concentration of the sample as well as the density of the dispersed phase.

The surface charge, or zeta potential (Figure 27), can also be measured using electroacoustic spectroscopy, either with the same instrument or with a stand-alone sensor. Ultrasound introduced into the sample using the CVI sensor shown in Figure 28 induces a motion of the particles relative to the liquid. This motion disturbs the double layer surrounding each particle, shifting the screening cloud of counter-ions. The displacement of the ionic cloud with respect to the particle surface creates a dipole moment, and the sum of these dipole moments over many particles creates an electric field which is measured by a two-electrode sensor. The magnitude of this field depends on the zeta potential value, which can be calculated through the application of appropriate theory. This calculation requires information about the density contrast between the particles and the surrounding liquid, the viscosity and dielectric permittivity of the liquid, and the weight fraction of particles. Accurate determination of the zeta potential may also require some knowledge of particle size if the particles are larger than roughly 300nm.

figure 27 | ZETA POTENTIAL: The zeta potential is the charge, in mV, measured at the slipping plane surrounding a negatively charged particle surface in the dispersion.
The powerful combination of measuring particle size and zeta potential with one instrument is extremely helpful for formulators optimizing the surface chemistry of new suspension and emulsion products. Various experiments may be conducted using such an instrument outfitted with an auto-titrator. The relationships between pH, temperature, salinity, conductivity, size, and zeta potential can all be characterized with acoustic and electroacoustic spectroscopy.

figure 28 | CVI SENSOR: The sensor measures the CVI, which is then used to calculate the zeta potential.

PSA300 | CAMSIZER
IMAGE ANALYSIS TECHNIQUE

PSA300 static image analysis
RANGE: 0.05 - 1,000 µm
OPTIMAL APPLICATIONS: POWDERS AND SUSPENSIONS
WEIGHT: 34 kg (75 lbs) without computer
FOOTPRINT: WIDTH 686 mm (27"), DEPTH 483 mm (19"), HEIGHT 446 mm (17.5")

CAMSIZER dynamic image analysis
RANGE: 30 µm - 30 mm
OPTIMAL APPLICATIONS: POWDERS
WEIGHT: 34 kg (75 lbs) without computer
FOOTPRINT: WIDTH 390 mm (15"), DEPTH 850 mm (33.5"), HEIGHT 220 mm (9")

The microscope has always been the referee technique in particle characterization, since it is accepted as the most direct measurement of particle size and morphology. Automating manual microscopy has been driven by the desire to replace a tedious, somewhat subjective measurement with a sophisticated technique for quantifying the size and shape of a sufficient number of particles to assure statistical confidence in the end result. Analysts performing manual microscopy tend to describe particle shape using language such as round, blocky, sharp, fibrous, etc. By assigning quantitative rather than qualitative values to various shape descriptors, image analysis systems provide numerical distributions of well defined shape parameters.

Two distinct development paths have emerged over time, differing in how the sample is introduced to the measurement zone: dynamic image analysis, where particles flow past one or more cameras, and static image analysis, where particles sit on a slide moved by an automated stage for inspection by camera and microscope.

Many basic functions operate the same with either approach (Figure 29): particles are presented to the measurement zone, images are captured with a digital (CCD) camera, the particles are distinguished from the background, various size and shape parameters are measured for each particle, and a result report is generated. Additional features built into modern image analysis software include the ability to automatically separate two touching particles, fill holes, smooth or remove small protuberances, separate overlapping acicular objects, and keep track of incomplete objects in a field in order to recombine them once all fields are analyzed.
STATIC IMAGE ANALYSIS
The samples measured by static image analysis typically rest on a slide that is moved by an automated stage. With the PSA300 static image analysis system, a microscope and digital camera collect images of the particles as the slide is scanned. Samples prepared on slides can include powders, suspensions, or creams. Aerosol delivery forms such as metered dose inhalers or dry powder inhalers can be inspected using static image analysis by actuating the device onto a slide for measurement. In addition, particles in suspension (such as parenterals) can be collected on a filter for characterization.

The majority of static image analysis measurements are made on powders, typically used for solid oral dosage forms. Most powders require a sample preparation step prior to analysis. Powder preparation devices (using either positive pressure to impact on a hard surface, or pulling and releasing a vacuum) break apart agglomerates and create an even dispersion on the slide. After the sample has been prepared and the automated stage has presented multiple fields to the optics and camera for capture, a series of image processing steps occur in the software. The first step is to separate the particles from the background by setting a parameter with some threshold value. Setting this threshold can be done manually or automatically, based on phases in the grayscale image or through a contrast threshold function based on the particle/background contrast.

After the threshold operation is completed, several functions may be applied to the image to improve the edge definition. The basic functions of erosion and dilation improve edge definition by performing the opposite tasks of removing or adding dark pixels at the particle edge. Advanced functions using combinations of erosion and dilation steps, such as delineation and convex hull, improve the edge definition of particles, leading to accurate area and perimeter determinations that are critical for shape factor calculations. Other software functions perform the task of separating touching particles, including crossed fibers, in order to quantify fiber length distributions and aspect ratios.

figure 29 | BASIC IMAGE ANALYSIS FUNCTIONS: Both static and dynamic image analysis involve these basic steps.
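These steps (thresholding, erosion/dilation, and per-particle measurement) map onto standard image-processing operations. The sketch below uses scikit-image on a synthetic image and is a generic illustration, not the PSA300 software:

```python
import numpy as np
from skimage import filters, measure, morphology

# Synthetic grayscale field: dark particles on a bright background.
rng = np.random.default_rng(3)
image = np.full((200, 200), 0.9) + 0.02 * rng.standard_normal((200, 200))
rr, cc = np.ogrid[:200, :200]
for (y, x, r) in [(60, 60, 20), (140, 120, 12), (80, 160, 8)]:
    image[(rr - y) ** 2 + (cc - x) ** 2 <= r ** 2] = 0.2   # three round "particles"

# 1. Threshold to separate particles from the background.
threshold = filters.threshold_otsu(image)
binary = image < threshold                      # True wherever a particle is present

# 2. Erosion followed by dilation (an opening) to clean up the particle edges.
cleaned = morphology.binary_opening(binary, morphology.disk(2))

# 3. Label each particle and measure size and shape parameters.
labels = measure.label(cleaned)
for region in measure.regionprops(labels):
    d_equiv = 2 * np.sqrt(region.area / np.pi)  # area-equivalent circle diameter, px
    print(f"particle {region.label}: area={int(region.area):4d} px, "
          f"equivalent diameter={d_equiv:5.1f} px, eccentricity={region.eccentricity:.2f}")
```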
DYNAMIC IMAGE ANALYSIS
Dynamic image analysis utilizes many of the same steps as static image analysis, with a few notable exceptions. Sample preparation is completely different, since the sample itself is moving during the measurement. Sample preparation steps could include an ionizer to mitigate static interactions between particles, thus improving flowability, or a sample director to specifically orient particles through the measurement zone. Many of the same image processing steps used for static image analysis are also used in dynamic systems, but it is less common that the operator actively selects the functions being utilized. A basic diagram of the CAMSIZER dynamic image analysis system is shown in Figure 30.

The sample is transported to the measurement zone via a vibratory feeder, where the particles drop between a backlight and two CCD cameras. The projected particle shadows are recorded at a rate of more than 60 images (frames) per second and analyzed. In this way each particle in the bulk material flow is recorded and evaluated, making it possible to measure a wide range of particles (30 microns to 30 millimeters) with extreme accuracy, without needing operator involvement to switch lenses or cameras as can be the case with other technologies. A great depth of sharpness, and therefore maximum precision across the entire measuring range, is obtained with the two-camera system. The zoom camera provides maximum resolution down to the fine range, while the basic camera also records larger particles and guarantees high statistical certainty in the results.

figure 30 | DYNAMIC IMAGE ANALYSIS: Particles fall in front of the zoom and basic cameras, which capture digital images.

Because of the size range measured by dynamic image analysis, this is a popular technique for applications historically using sieves. By choosing the appropriate size parameters the results can closely match sieve results, while providing the benefits of quick, easy analyses along with bonus information about particle shape. In those cases where matching historic sieve data is required, the CAMSIZER can easily be configured to “think like a sieve” to ensure the closest possible correlation. This is made possible by collecting shape information for each particle and calculating how that shape would pass through a square mesh of known size (see the sketch below). Such a function could be used to satisfy existing quality control specifications while simultaneously measuring the true, non-biased particle size and shape distributions for the first time ever.
STATIC IMAGE ANALYSIS

The samples measured by static image analysis typically rest on a slide that is moved by an automated stage. With the PSA300 static image analysis system a microscope and digital camera collect images of the particles as the slide is scanned. Samples prepared on slides can include powders, suspensions, or creams. Aerosol delivery forms such as metered dose inhalers or dry powder inhalers can be inspected using static image analysis by actuating the device onto a slide for measurement. In addition, particles in suspension (such as parenterals) can be collected on a filter for characterization.

The majority of static image analysis measurements are made on powders, typically used for solid oral dosage forms. Most powders require a sample preparation step prior to analysis. Powder preparation devices—using either positive pressure to impact on a hard surface or pulling and releasing a vacuum—break apart agglomerates and create an even dispersion on the slide. After the sample has been prepared and the automated stage has presented multiple fields to the optics and camera for capture, a series of image processing steps occur in the software. The first step is to separate the particles from the background by setting a parameter with some threshold value. Setting this threshold can be done manually or automatically, based on phases in the grayscale image or through a contrast threshold function based on the particle/background contrast.

After the threshold operation is completed, several functions may be applied to the image to improve the edge definition. The basic functions of erosion and dilation improve edge definition by performing the opposite tasks of removing or adding dark pixels at the particle edge. Advanced functions using combinations of erosion and dilation steps, such as delineation and convex hull, improve the edge definition of particles, leading to accurate area and perimeter determinations that are critical for shape factor calculations. Other software functions perform the task of separating touching particles, including crossed fibers, in order to quantify fiber length distributions and aspect ratios.

DYNAMIC IMAGE ANALYSIS

Dynamic image analysis utilizes many of the same steps as static image analysis, with a few notable exceptions. Sample preparation is completely different since the sample itself is moving during the measurement. Sample preparation steps could include an ionizer to mitigate static interactions between particles, thus improving flowability, or a sample director to specifically orient particles through the measurement zone. Many of the same image processing steps used for static image analysis are also used in dynamic systems, but it is less common that the operator actively selects the functions being utilized. A basic diagram of the CAMSIZER dynamic image analysis system is shown in Figure 30.

The sample is transported to the measurement zone via a vibratory feeder, where the particles drop between a backlight and two CCD cameras. The projected particle shadows are recorded at a rate of more than 60 images (frames) per second and analyzed. In this way each particle in the bulk material flow is recorded and evaluated, making it possible to measure a wide range of particles (30 microns to 30 millimeters) with extreme accuracy, without needing operator involvement to switch lenses or cameras as can be the case with other technologies. A great depth of sharpness, and therefore maximum precision across the entire measuring range, is obtained with the two-camera system. The zoom camera provides maximum resolution down to the fine range, while the basic camera also records larger particles and guarantees a high statistical certainty in the results.

Because of the size range measured by dynamic image analysis, this is a popular technique for applications historically using sieves. By choosing the appropriate size parameters the results can closely match sieve results, while providing the benefits of quick, easy analyses with the bonus information about particle shape. In those cases where matching historic sieve data is required, the CAMSIZER can be easily configured to “think like a sieve” to ensure the closest possible correlation. This is made possible by collecting shape information for each particle and calculating how that shape would pass through a square mesh of known size. Such a function can be used to satisfy existing quality control specifications while simultaneously measuring the true, non-biased particle size and shape distributions.
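The sieve-emulation idea can be illustrated with a short Python sketch. This is not the CAMSIZER algorithm; it simply assumes a particle passes a square mesh whenever its narrowest projected width is smaller than the aperture, and the particle data and sieve stack below are hypothetical.

import numpy as np

def sieve_emulation(width_um, volume, apertures_um):
    """Volume-weighted % per sieve class; index 0 is the pan (passes every sieve)."""
    apertures = np.sort(np.asarray(apertures_um, dtype=float))
    width = np.asarray(width_um, dtype=float)
    volume = np.asarray(volume, dtype=float)
    # Each particle is assigned to the largest aperture it cannot pass.
    cls = np.searchsorted(apertures, width)          # 0 = pan, i = retained on apertures[i-1]
    retained = np.bincount(cls, weights=volume, minlength=apertures.size + 1)
    return 100.0 * retained / volume.sum()

# Hypothetical data: narrowest projected width (um) and relative volume per particle.
width_um = [52.0, 140.0, 230.0, 260.0, 410.0]
volume = [1.0, 3.0, 5.0, 4.0, 2.0]
print(sieve_emulation(width_um, volume, apertures_um=[63.0, 125.0, 250.0, 500.0]))

Binning by a narrowest-width measure, rather than by an area-equivalent diameter, is one way an image analyzer can bring its results closer to sieve results for non-spherical particles.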
figure 29 | BASIC IMAGE ANALYSIS FUNCTIONS
Both static and dynamic image analysis involve these basic steps.
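The basic steps just described (threshold, erosion/dilation, then area and perimeter measurement for shape factors) can be sketched in a few lines of Python. The sketch below assumes scikit-image and a hypothetical image file; the PSA300 software implements these functions in its own, more elaborate form.

import numpy as np
from skimage import filters, io, measure, morphology

# Hypothetical input: a grayscale image of dark particles on a bright background.
image = io.imread("particles.png", as_gray=True)

# Step 1: threshold to separate the particles from the background.
binary = image < filters.threshold_otsu(image)

# Step 2: erosion/dilation (combined here as an opening) to clean up the particle edges.
binary = morphology.binary_opening(binary, morphology.disk(1))

# Step 3: label the particles and measure area/perimeter-based descriptors.
for region in measure.regionprops(measure.label(binary)):
    if region.perimeter == 0:
        continue                                   # skip single-pixel specks
    d_ca = 2.0 * np.sqrt(region.area / np.pi)      # circle-equivalent diameter, in pixels
    circularity = 4.0 * np.pi * region.area / region.perimeter ** 2
    print(f"particle {region.label}: d = {d_ca:.1f} px, circularity = {circularity:.2f}")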
figure 30 | DYNAMIC IMAGE ANALYSIS
Particles fall in front of the zoom and basic cameras that capture digital images.
DYNAMIC RANGE OF THE HORIBA PARTICLE CHARACTERIZATION SYSTEMS

LA-950     LASER DIFFRACTION            10 nm – 3 mm
LB-550     DYNAMIC LIGHT SCATTERING     1 nm – 6 µm
DT-1201    ACOUSTIC SPECTROSCOPY        5 nm – 1000 µm
PSA300     IMAGE ANALYSIS               0.5 µm – 1000 µm
CAMSIZER   IMAGE ANALYSIS               30 µm – 30 mm

Selecting a particle size analyzer.

Beginning the selection of a particle size analyzer should start with asking these basic questions:
Why am I making the measurement?
Must the new instrument match historic data?
Do I need only particle size distribution, or do I need additional information such as shape or surface charge?

The decision process may be different if the instrument is being purchased for a specific application as opposed to a general analytical technique for many possible samples. For a specific application it makes sense to search the industry literature to determine if a particular technique is favored over others. If, for example, the application is liposomes and 90% of all literature found in this field is DLS, then the decision is simple. On the other hand, if this is the first particle size analyzer bought by a company for general purpose use, then flexibility and a wide dynamic range should be important factors.

Sometimes the goal of buying a new instrument includes being able to correlate to existing data. Accomplishing this goal can range from easy to difficult. Just upgrading from an older to a newer model diffraction analyzer could cause a change in results. The changes originate from many sources, including differences in dynamic range, advances in algorithms, and mechanical improvements to samplers. Switching from an existing technique such as sieving to newer techniques like laser diffraction or dynamic image analysis could also lead to changes in results. Data from sieves are typically smaller than data from laser diffraction, depending on the shape of the particles: the less spherical the particle, the greater the difference will likely be. The CAMSIZER dynamic image analyzer has multiple approaches built into the software to facilitate data matching with sieves. As a general rule, data can be manipulated to approach existing results, but understanding this issue during the selection process can ease the implementation of a new technique.

Particle size distribution is sufficient information for the majority of particle characterization applications, but some techniques are higher resolution than others. Ensemble technologies such as light scattering and acoustic spectroscopy are powerful techniques that are “resolution limited” compared to high resolution techniques based on particle counting (such as electrozone counting or image analysis). If the goal of the measurement is finding small populations of particles larger or smaller than the main distribution, then an investigation of the sensitivity to second distributions should be part of the selection process.
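As a rough illustration of how the dynamic ranges listed above feed into instrument selection, the Python sketch below (not HORIBA software; the function name and the example size interval are illustrative) returns the systems whose range spans an expected particle size interval.

# Measuring ranges transcribed from the chart above, in metres.
RANGES = {
    "LA-950 (laser diffraction)":        (10e-9, 3e-3),
    "LB-550 (dynamic light scattering)": (1e-9, 6e-6),
    "DT-1201 (acoustic spectroscopy)":   (5e-9, 1000e-6),
    "PSA300 (static image analysis)":    (0.5e-6, 1000e-6),
    "CAMSIZER (dynamic image analysis)": (30e-6, 30e-3),
}

def candidates(d_min, d_max):
    """Systems whose dynamic range spans the entire expected size interval."""
    return [name for name, (lo, hi) in RANGES.items() if lo <= d_min and d_max <= hi]

# Example: a powder expected to run from about 1 um to 500 um.
print(candidates(1e-6, 500e-6))    # LA-950, DT-1201 and PSA300 cover the full interval

Dynamic range is only one criterion, of course; the questions above (purpose, historic data, shape or surface charge) usually decide between the remaining candidates.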
Surface charge or zeta potential of suspensions is important information for formulators. For these applications, techniques providing both particle size and zeta potential (along with others such as pH or conductivity) may be the best options. These options include DLS systems with integrated zeta potential and acoustic systems like the DT-1201 that can automatically measure many of the desired parameters.

One question worth asking is: will I need other capabilities in the future? If I might need zeta potential in the future, this removes laser diffraction from the list of possible techniques. If I might have particles > 1 µm in the future, this would eliminate DLS. Be forewarned that future requirements can be difficult to ascertain and additional capabilities always carry incremental cost.

WHEN TO CHOOSE LASER DIFFRACTION

Laser diffraction is the most popular particle size technique for reasons including speed, ease of use, and flexibility. The most basic laser diffraction system can measure solid particles in suspensions and emulsions. With the addition of a dry powder feeder the instrument can also measure dry powders in air. This is a low concentration technique, so dilution is often required. The complex refractive index of the sample and diluent must be known for optimum accuracy, but this information is easier to obtain than is often indicated (more often by competitors than informed scientists). The HORIBA LA-950 has a wide dynamic range, capable of measuring down to 30 nm and up to 3000 µm. This unique ability to measure particles < 100 nm as well as agglomerates as large as hundreds of microns makes it a credible choice even for nanotechnology applications. Since this is such a powerful, flexible technique, laser diffraction is often the best option for companies buying their first analyzer or hoping to satisfy multiple needs and applications.

WHEN TO CHOOSE DYNAMIC LIGHT SCATTERING

Dynamic light scattering (DLS) can measure suspensions and emulsions from roughly 1 nm to 1 µm. Both the lower and upper limits are sample dependent. The lower limit is influenced by concentration and by how strongly the particles scatter light: a low concentration sample of weakly scattering particles near 1 nm can be extremely difficult to measure, or at least difficult to reproduce. The upper size limit is determined mainly by the density of the particles. DLS algorithms assume that all particle movement comes from Brownian motion, so motion due to settling is not interpreted correctly by DLS systems. In addition, particles settled on the bottom of the sample cuvette cannot be inspected by the laser light source. Particles with a high density will settle more quickly than low density particles. The upper limit of DLS may be 6 µm for emulsion samples where the two phases have similar density, the upper limit for uranium particles may be as small as 300 nm, and the upper limit for particles with a density of 1.7 may be around 1 µm.

Using DLS does not require any knowledge of the sample refractive index (it would be required to convert from an intensity to a volume distribution) or of concentration. What is required is viscosity, especially for higher concentration samples. Although most modern DLS systems claim the ability to work at higher concentrations, this is again sample dependent. Serious DLS work could involve a dilution study to determine the nature of the particle-particle interactions and the presence of multiple scattering. Easy samples are simply a matter of pipetting the sample into a cuvette and clicking one button. More sophisticated DLS systems can also measure other sample characteristics including zeta potential, molecular weight, and second virial coefficient; generating this additional information may require a greater skill set of the operator.
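The density-dependent upper limit can be rationalized by comparing Brownian motion with Stokes settling. The Python sketch below uses an illustrative criterion (equal displacements over one second, in water at 25 °C) that is an assumption for this sketch rather than an instrument specification, yet it reproduces the order of magnitude of the limits quoted above.

import numpy as np

KB, T, G = 1.380649e-23, 298.15, 9.81        # Boltzmann constant (J/K), temperature (K), gravity (m/s^2)
ETA, RHO_F = 0.89e-3, 997.0                  # assumed dispersant: water at 25 C (Pa*s, kg/m^3)

def brownian_rms(d, t=1.0):
    """One-dimensional RMS Brownian displacement in time t (Stokes-Einstein diffusion)."""
    diffusivity = KB * T / (3.0 * np.pi * ETA * d)
    return np.sqrt(2.0 * diffusivity * t)

def settling_distance(d, rho_p, t=1.0):
    """Distance settled in time t at the Stokes terminal velocity."""
    return (rho_p - RHO_F) * G * d**2 / (18.0 * ETA) * t

d = np.logspace(-8, -5, 2000)                # diameters from 10 nm to 10 um
for rho_p in (1050.0, 1700.0, 19000.0):      # ~emulsion droplet, density 1.7, ~uranium
    still_brownian = brownian_rms(d) > settling_distance(d, rho_p)
    print(f"density {rho_p:>7.0f} kg/m3: settling rivals Brownian motion near "
          f"{d[still_brownian][-1] * 1e6:.1f} um")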
WHEN TO CHOOSE ACOUSTIC SPECTROSCOPY

Choosing acoustic spectroscopy is often driven by an interest in measuring at full concentration because of concerns about changes to the sample with dilution. Samples must typically be above 1 wt% for this technique and the concentration must be known. A library of known samples is included in the software, or the user is required to add some information about the particles (typically density) and the diluent.

This technique is more often used for research than QC measurements, and typically for focused applications rather than for a broad range of unknown samples. Excellent applications for acoustic spectroscopy include colloids, nanoparticle suspensions, ceramics, and formulation studies where the combination of particle size and zeta potential as a function of surface chemistry can be used for predicting dispersion stability.

WHEN TO CHOOSE IMAGE ANALYSIS

Many laboratories are now replacing manual microscopy with automated image analysis. While microscopy provides qualitative accuracy and shape information, it requires automated image analysis to inspect the number of particles required to obtain statistically valid quantitative results. Choosing image analysis is often driven by the desire to generate results that are accurate, sensitive to second populations, contain shape information, and include images of the particles. Dynamic image analysis is used in both research and QC laboratories for particles ranging from 30 µm to 30 mm. Static image analysis is typically a research tool for measuring particles in the 0.5 to 2000 µm range. Deciding between dynamic or static image analysis is seldom difficult, as the applications are typically better served by one technique or the other, as proven through application development studies.

REFERENCES

1 (PAGE 3) ISO 9276-2:2001, Representation of results of particle size analysis – Part 2: Calculation of average particle sizes/diameters and moments from particle size distributions
2 (PAGES 3, 4) TN154, Particle Size Result Interpretation: Number vs. Volume Distributions, available at www.horiba.com/us/particle
4 (PAGE 5) ISO 13320-1, Particle size analysis – Laser diffraction methods
5 (PAGE 7) ISO 13322-2, Particle size analysis – Image analysis methods – Part 2: Dynamic image analysis methods
6 (PAGES 8-9, 14) Burgess, J., Duffy, E., Etzler, F., Hickey, A., Particle Size Analysis: AAPS Workshop Report, Cosponsored by the Food and Drug Administration and the United States Pharmacopeia, AAPS Journal 2004; 6 (3) Article 20 (http://www.aapsi.org)
7 (PAGE 10) TN154, Particle Size Result Interpretation: Number vs. Volume Distributions, available at www.horiba.com/us/particle
8 (PAGE 11) TN156, Particle Size Result Interpretation: Understanding Particle Size Distribution Calculations, available at www.horiba.com/us/particle
9 (PAGE 12) ISO 13320-1, Particle size analysis – Laser diffraction methods
10 (PAGE 12) USP<429>, Light Diffraction Measurement of Particle Size
11 (PAGE 13) Wheeler, D., How to Establish Manufacturing Specifications, posted on spcpress.com at http://www.spcpress.com/pdf/Manufacturing_Specification.pdf
12 (PAGE 14) Neumann et al., “What does a mean size mean?” 2003 AIChE presentation, Session 39: Characterization of Engineered Particles, November 16–21, San Francisco, CA
13 (PAGE 16) ISO 13320, Particle size analysis – Laser diffraction methods – Part 1: General principles
14 (PAGE 17) “Understanding Calculation Level and Iterative Deconvolution,” www.horiba.com/us/particle
HORIBA Instruments, Inc.
34 Bunsen Drive
Irvine, CA 92618 USA
1-800-4-HORIBA
www.horiba.com/us/particle

For further information on this document or our products, please contact us.
Copyright 2010, HORIBA Instruments, Inc.