Image Reconstruction
February 2012
© 2012 IEEE

I. Introduction
madore and meral: reconstruction algorithm for improved ultrasound image quality

T0 s = Earc o,  (2)

D0 = Earc^+,  (3)

where the superscript + denotes a pseudo-inverse.
Alternatively, the operations involved in a digital delay-and-sum beamforming reconstruction (i.e., multiplying the raw signal in et space with an arc, summing over all locations in et space, and repeating these steps for a collection of different arcs to reconstruct a collection of different image points) can be performed by multiplying the RF signal with Earc^H, where the superscript H represents a Hermitian transpose. In other words, D0 in (1) is given by

D0 = Earc^H.  (4)

Delay-and-sum beamforming reconstructions therefore assume that assembling all et space arcs together in a matrix format yields an orthogonal matrix. This assumption is badly flawed, as we will show.
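This orthogonality assumption is easy to probe numerically. The sketch below is a toy stand-in, not the paper's actual matrix construction: it builds a few binary "arc" columns for neighboring image points and shows that their Gram matrix, which delay-and-sum beamforming implicitly treats as the identity, has large off-diagonal terms.

```python
import numpy as np

# Toy et space: Ne receiver elements x Nt time samples, flattened into rows.
# Each column of E_arc marks the arc (one delay per element) for one image point.
Ne, Nt = 8, 64
element_x = np.arange(Ne) - (Ne - 1) / 2.0

def arc_column(px, pz):
    """Binary arc for a point at lateral position px, depth pz (arbitrary units)."""
    col = np.zeros((Ne, Nt))
    delays = np.sqrt((element_x - px) ** 2 + pz ** 2)               # one-way distance
    t_idx = np.clip(np.round(4.0 * delays).astype(int), 0, Nt - 1)  # to sample index
    col[np.arange(Ne), t_idx] = 1.0
    return col.ravel()

# Assemble E_arc for a handful of neighboring image points.
points = [(-1.0, 8.0), (-0.5, 8.0), (0.0, 8.0), (0.5, 8.0), (1.0, 8.0)]
E_arc = np.stack([arc_column(px, pz) for px, pz in points], axis=1)

gram = E_arc.T @ E_arc             # what delay-and-sum implicitly treats as ~ I
gram_norm = gram / gram[0, 0]
off_diag = gram_norm - np.diag(np.diag(gram_norm))
print(np.round(gram_norm, 2))
print("largest off-diagonal term:", off_diag.max())  # far from 0 for nearby points
```

The nonzero off-diagonal terms are the correlations between neighboring arcs that delay-and-sum beamforming ignores.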
Fig. 2. (a) A point in the imaged object is typically assumed to give rise
to an arc in et space. (b) However, reconstructing the arc in (a) with
delay-and-sum beamforming does not yield a point in the image plane,
but rather a spatially broad distribution of signal. (c) At least in principle, when applied to artificial signals such as that in (a), the proposed
approach can reconstruct images that are vastly improved in terms of
spatial resolution compared with delay-and-sum beamforming.
greatly superior to those reconstructed with delay-and-sum beamforming and the associated R0 matrix [e.g., Fig. 2(b)] when dealing with artificial et space data such as those in Fig. 2(a), such improvements are typically not duplicated when using more realistic data. The reason for this discrepancy is explored in more detail in the next section.
C. Including the Shape of the Wavepacket in the Solution
Data sets acquired from a single point-like object do
not actually look like a simple arc in et space. In an actual data set, the arc from Fig. 2(a) would be convolved
with a wavepacket along t, whereby the shape of the wavepacket depends mostly on the voltage waveform used at
the transmit stage and on the frequency response of the
piezoelectric elements. A wavepacket has both positive
and negative lobes, whereas the arc in Fig. 2(a) was entirely positive. Even though the delay-and-sum beamforming assumption in (5) is very inaccurate, negative errors
stemming from negative lobes largely cancel positive errors from positive lobes. For this reason, delay-and-sum
beamforming tends to work reasonably well for real-life
signals, even though it may mostly fail for artificial signals
such as those in Fig. 2(a).
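The cancellation argument above can be illustrated with a simple zero-mean wavepacket. The shape below is only illustrative (a Gaussian-windowed sinusoid); as the text notes, the real wavepacket depends on the excitation voltage and on the elements' frequency response.

```python
import numpy as np

# A transmit wavepacket is roughly a windowed sinusoid: positive AND negative lobes.
t = np.linspace(-2e-6, 2e-6, 401)            # 4-us window
f0, sigma = 3e6, 0.5e-6                      # 3-MHz carrier, Gaussian envelope
wavepacket = np.exp(-t**2 / (2 * sigma**2)) * np.sin(2 * np.pi * f0 * t)

# When a mismatched arc sums over such a signal, contributions from the positive
# and negative lobes largely cancel; an all-positive arc signal, as in Fig. 2(a),
# enjoys no such cancellation.
net = abs(np.sum(wavepacket))                # lobes cancel -> near zero
gross = np.sum(np.abs(wavepacket))           # total signal magnitude
print(net / gross)                           # tiny ratio: cancellation is strong
```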
The reconstruction process from (6) avoids the approximation made by delay-and-sum beamforming as expressed
in (5), but it remains inadequate because it is based on
Earc, and thus assumes object points to give rise to arcs
in et space. Whereas Earc associates each point in the
object o with an arc in the raw signal s through (2),
an alternate encoding matrix Ewav associates each point
with a wavepacket function instead. Because Ewav features
several nonzero time points per receiver element, the reconstruction process truly becomes two-dimensional in
nature, as whole areas of et space may be used in the
reconstruction of any given pixel location, as opposed to
one-dimensional arc-shaped curves as in delay-and-sum
beamforming. Sample computer code to generate Ewav is
provided in the Appendix.
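The Appendix itself lies outside this excerpt. A minimal sketch of how such an Ewav might be assembled is given below; the array geometry, plane-wave transmit model, sampling parameters, and wavepacket shape are all hypothetical stand-ins, not the paper's actual Appendix code.

```python
import numpy as np
from scipy import sparse

# Sketch of building a sparse E_wav: each image point maps to a wavepacket placed
# at the round-trip delay of every receiver element, in flattened et space.
c, fs, f0 = 1540.0, 40e6, 3e6                      # sound speed, sampling, carrier
Ne, Nt = 8, 1200                                   # elements, time samples
elem_x = (np.arange(Ne) - (Ne - 1) / 2) * 0.3e-3   # element positions (m)

n = np.arange(-8, 9)                               # 17-sample wavepacket
wavepacket = np.exp(-(n / 4.0) ** 2) * np.sin(2 * np.pi * f0 * n / fs)

def ewav_column(px, pz):
    """et-space response of one image point (plane-wave transmit assumed)."""
    rows, vals = [], []
    for e in range(Ne):
        # transmit path pz (plane wave) + receive path back to element e
        delay = (pz + np.hypot(px - elem_x[e], pz)) / c
        t0 = int(round(delay * fs))
        for k, w in zip(n, wavepacket):
            t = t0 + k
            if 0 <= t < Nt:
                rows.append(e * Nt + t)            # flattened (element, time) index
                vals.append(w)
    return rows, vals

# Assemble columns for a small grid of image points.
pts = [(px, pz) for pz in (20e-3, 21e-3) for px in (-1e-3, 0.0, 1e-3)]
cols_r, cols_v, cols_c = [], [], []
for j, (px, pz) in enumerate(pts):
    r, v = ewav_column(px, pz)
    cols_r += r; cols_v += v; cols_c += [j] * len(r)
E_wav = sparse.csc_matrix((cols_v, (cols_r, cols_c)), shape=(Ne * Nt, len(pts)))
print(E_wav.shape, E_wav.nnz)
```

Note how each column occupies a two-dimensional patch of et space (several time points per element), rather than a one-sample-per-element arc.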
The solution presented in (6) can be rewritten using a more accurate model relying on Ewav rather than Earc:

D2 = (Ewav^H Ewav + Λ)^(-1) Ewav^H,  (7)

o = D2 T2 s = R2 s.  (8)

Based on (8), an image o can be generated in a single processing step by multiplying the raw RF signal s with a reconstruction matrix R2.

D. Geometric Analogy

There is a simple geometric analogy that may help provide a more intuitive understanding of the least-squares solution in (8). Imagine converting a 2-D vector, s = s1 i + s2 j, from the usual xy reference system defined by the unit vectors i and j to a different reference system defined by the unit vectors u1 = (i + j)/√2 and u2 = (i − j)/√2 instead. This can be done through projections, using a dot product: s = Σl (s · ul) ul. Projections are appropriate in this case because the basis vectors ul form an orthonormal set: ul · uk = δlk. In contrast, projections would not be appropriate when converting s to a non-orthonormal reference system, such as that defined by v1 = (i + j)/√2 and v2 = i. In such a case, the coefficients s′1 and s′2 in s = s′1 v1 + s′2 v2 should instead be obtained by solving the following system of linear equations:

[s1]   [1/√2  1] [s′1]
[s2] = [1/√2  0] [s′2],  (9)

which yields s′1 = √2 s2 and s′2 = s1 − s2.
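The analogy can be checked numerically. The sketch below, with an arbitrary example vector, contrasts projection onto the orthonormal basis u with solving the linear system for the non-orthonormal basis v:

```python
import numpy as np

# Bases from the analogy: u1, u2 orthonormal; v1, v2 not orthonormal.
i, j = np.array([1.0, 0.0]), np.array([0.0, 1.0])
u1, u2 = (i + j) / np.sqrt(2), (i - j) / np.sqrt(2)
v1, v2 = (i + j) / np.sqrt(2), i.copy()

s = np.array([0.3, -1.7])                       # an arbitrary 2-D vector

# Orthonormal case: projections recover s exactly.
s_u = (s @ u1) * u1 + (s @ u2) * u2
print(np.allclose(s_u, s))                      # True

# Non-orthonormal case: projections FAIL to recover s...
s_proj = (s @ v1) * v1 + (s @ v2) * v2
print(np.allclose(s_proj, s))                   # False

# ...but solving the linear system succeeds:
V = np.column_stack([v1, v2])
coef = np.linalg.solve(V, s)
print(np.allclose(coef[0] * v1 + coef[1] * v2, s))           # True
print(np.allclose(coef, [np.sqrt(2) * s[1], s[0] - s[1]]))   # True
```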
A better image reconstruction algorithm is obtained when taking these correlations into account, as is done in (8) and (9).
E. Generalization to Multi-Shot Imaging
As presented in Section II-C, (8) involved reconstructing a single et space data set s from a single transmit event into an image o. However, (8) can readily be generalized to multi-shot acquisitions, whereby transmit beamforming is employed and only part of the image is reconstructed from each transmit event. In such a case, data from all Nshot shots are concatenated into the column vector s, which would now feature Nshot × Nt × Ne elements. The number of columns in the reconstruction matrix R also increases to Nshot × Nt × Ne. In the simplest
scenario, in which any given image voxel would be reconstructed based on RF data from a single transmit event
(rather than through a weighted sum of multiple voxel
values reconstructed from multiple transmit events), the
number of nonzero elements in R would remain the same
as in the single-shot imaging case. As shown in Section IV,
the number of nonzero elements in R is the main factor
determining reconstruction time. Although the increased
size of s and R may cause some increase in reconstruction
time, the fact that the number of nonzero elements in the
sparse matrix R would remain unchanged suggests that
the increase in reconstruction time may prove to be modest.
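The bookkeeping described above can be sketched with sparse matrices. The example below is a toy stand-in (random matrix values, an arbitrary voxel-to-shot assignment): it shows that when each voxel is reconstructed from a single transmit event, the multi-shot R gains columns but keeps the same number of nonzero elements.

```python
import numpy as np
from scipy import sparse

n_vox, n_et = 500, 2000       # voxels, et-space samples per shot (toy sizes)
n_shot = 4

# Single-shot case: one sparse reconstruction matrix R (random stand-in here).
R_single = sparse.random(n_vox, n_et, density=0.01, random_state=0, format="csr")

# Multi-shot case: each voxel is reconstructed from exactly one transmit event,
# so R grows to n_shot * n_et columns but keeps the SAME nonzero entries, placed
# in the column block of the shot that covers that voxel.
blocks = []
for k in range(n_shot):
    keep = np.zeros(n_vox)
    keep[k::n_shot] = 1.0                    # toy assignment of voxels to shots
    blocks.append(sparse.diags(keep) @ R_single)
R_multi = sparse.hstack(blocks).tocsr()
R_multi.eliminate_zeros()                    # drop stored zeros from the masking

print(R_multi.shape)                         # n_shot times more columns
print(R_multi.nnz == R_single.nnz)           # True: nonzero count unchanged
```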
F. On Extending the Proposed Model
The present work offers a framework whereby information anywhere in the et space can be used, in principle
at least, to reconstruct any given image pixel. This more
flexible, two-dimensional approach may lend itself to the
modeling and correction of various effects and artifacts.
Two possible examples are considered.
1) Multiple Reflections: The number of columns in the
encoding matrix Ewav could be greatly increased to include not only the et space response associated with each
point in the reconstructed FOV, but also several extra
versions of these et space responses shifted along the t
axis, to account for the time delays caused by multiple
reflections. Such an increase in the size of Ewav could,
however, lead to a prohibitive increase in memory requirements and computing load.
2) Effect of Proximal Voxels on More Distal Voxels: Ultrasound waves are attenuated on their way to a distal
location, which may give rise to the well-known enhancement and shadowing artifacts, but they are also attenuated on their way back to the transducer, which may affect the et space function associated with distal points.
For example, whole segments of the et space signal might be missing if one or more proximal hyperintense objects were to cast a shadow over parts of the transducer face.
E_Nnz = ( Σvoxels | o_Nnz − o_ref | ) / ( Σvoxels | o_ref | ),  (10)
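In code, a relative-error metric of this form might be computed as follows (assuming the vertical bars lost in extraction denote absolute values, and with o_Nnz and o_ref as image arrays):

```python
import numpy as np

def thresholding_error(o_nnz, o_ref):
    """Relative error of a thresholded reconstruction: sum over voxels of
    |o_Nnz - o_ref|, normalized by the sum over voxels of |o_ref|."""
    return np.sum(np.abs(o_nnz - o_ref)) / np.sum(np.abs(o_ref))

# Toy check: a uniform 1% perturbation gives an error value of about 0.01.
o_ref = np.ones((4, 4))
print(thresholding_error(o_ref * 1.01, o_ref))
```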
1-D FOV that passes through the line of beads from Fig.
1(b). Of particular interest are the 3 results plotted with
the same black line in Fig. 4(a). These results are indistinguishable in the sense that differences between them were
much smaller than the thickness of the black line in Fig.
4(a). One was obtained using delay-and-sum beamforming and R0 from (1), the two others using R1 and (6),
including only one ray (i.e., one image column) at a time
into the encoding matrix. Results from (6) diverged from
delay-and-sum beamforming results only when many or
all image rays were included at once in the same encoding matrix [gray and dashed lines in Fig. 4(a)]. The main point is that differences between our approach and delay-and-sum beamforming reported here do not appear to come from one being better implemented than the other, but rather from our method resolving the correlation between adjacent voxels and rays, as it was designed to do.
2) Time Gain Compensation: Fig. 4(b) shows that with delay-and-sum beamforming and R0 in (1), a TGC term based on a 0.30 dB/cm/MHz attenuation seemed appropriate, as it would keep the amplitude of the various beads in Fig. 1(b) roughly constant with depth. On the other hand, when using R1, a correction based on a higher attenuation of 0.50 dB/cm/MHz proved more appropriate. Documentation on the CIRS phantom lists the true, physical attenuation as 0.50 ± 0.05 dB/cm/MHz, the same value used here with our proposed reconstruction method. It would appear that with the proposed reconstruction, TGC might become a more quantitative operation based on true signal attenuation. However, as shown in Fig. 4(b) (gray arrow), signals at shallow depths tend to be overcompensated when employing a value of 0.50 dB/cm/MHz. To prevent the near-field region from appearing too bright in the images presented here, further ad hoc TGC was applied over the shallower one-third of the FOV. Furthermore, an ad hoc value of 0.35 dB/cm/MHz (rather than 0.50 dB/cm/MHz) had to be used when reconstructing data from the higher-frequency Acuson array, so that homogeneous-looking images could be obtained. Overall, although the TGC operation does appear to become more quantitative in nature with the proposed approach, ad hoc adjustments could not be entirely avoided.
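As a concrete illustration of how such attenuation values translate into depth-dependent gain, the sketch below assumes a simple round-trip attenuation model proportional to depth and frequency; it is not the paper's actual T matrix, only a back-of-the-envelope version of the TGC operation discussed above.

```python
import numpy as np

def tgc_gain(depth_cm, f_mhz, alpha_db_per_cm_mhz):
    """Multiplicative TGC gain compensating alpha dB/cm/MHz over a round trip."""
    atten_db = alpha_db_per_cm_mhz * f_mhz * 2.0 * depth_cm   # round-trip loss
    return 10.0 ** (atten_db / 20.0)

depths = np.linspace(0.5, 10.0, 5)       # cm
g_030 = tgc_gain(depths, 3.0, 0.30)      # setting that suited delay-and-sum
g_050 = tgc_gain(depths, 3.0, 0.50)      # higher setting needed with R1
print(np.round(g_050 / g_030, 2))        # extra gain grows quickly with depth
```

The rapidly growing ratio shows why a 0.20 dB/cm/MHz difference in the assumed attenuation matters so much at depth, and why it can overcompensate shallow signals.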
3) Regularization: The 50-frame data set from Fig. 1 was reconstructed several times using different values for λ2, the regularization parameter. For each reconstruction, the standard deviation along the time-frame direction was computed and then spatially averaged over 5 ROIs located at different depths [shown in Fig. 1(b) as white rectangular boxes]. Fig. 5 gives the mean standard deviation associated with each of these ROIs, as a function of the regularization parameter λ2. For each curve in Fig. 5, an × indicates the amount of regularization that appears to be roughly the smallest λ2 value that can be used while still avoiding significant noise increases. Defining a normalized depth r = (√(x² + (z + dvf)²) − dvf)/wprobe, where dvf is the distance to the virtual focus behind the transducer
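The noise-versus-regularization tradeoff measured here can be reproduced in miniature with a Tikhonov-regularized least-squares solve, analogous in spirit to (7). Everything below is a toy stand-in (a random encoding matrix and noise level, not the paper's data): reconstructing many noisy "frames" and measuring the pixel-wise standard deviation shows noise falling as the regularization parameter grows.

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.normal(size=(60, 40))            # toy encoding matrix (stand-in for Ewav)
o_true = rng.normal(size=40)

def reconstruct(s, lam):
    """Tikhonov-regularized least squares: (E^T E + lam*I)^-1 E^T s."""
    A = E.T @ E + lam * np.eye(40)
    return np.linalg.solve(A, E.T @ s)

# Reconstruct many noisy frames and measure pixel noise vs. regularization.
frames = [E @ o_true + 0.5 * rng.normal(size=60) for _ in range(50)]
noise = {}
for lam in (1e-3, 1.0, 100.0):
    recs = np.stack([reconstruct(s, lam) for s in frames])
    noise[lam] = recs.std(axis=0).mean()
    print(lam, noise[lam])               # noise std decreases as lam grows
```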
Fig. 4. A single column from a phantom image, highlighted in Fig. 1(b), is plotted here for different reconstruction algorithms and settings. (a) When reconstructing one column at a time, our modified reconstruction from (6) gives results that are essentially identical to the delay-and-sum beamforming reconstruction from (1) (black solid line). As more columns are included in the reconstruction, our method diverges from delay-and-sum beamforming (gray solid and black dashed lines). (b) With all columns included in the reconstruction, the TGC must be changed from 0.30 to about 0.50 dB/cm/MHz to restore the magnitude at greater depths. The nominal attenuation value for this phantom is 0.50 ± 0.05 dB/cm/MHz, in good agreement with the TGC compensation required with our method. However, signal becomes overcompensated at shallow depths (gray arrow). The plots use a linear scale, normalized to the maximum signal from the curve in (a).
Fig. 5. A 50-frame data set was reconstructed several times, with different settings for the regularization parameter λ2. The standard deviation across all 50 frames was taken as a measure of noise, and averaged over the 5 ROIs shown in Fig. 1(b). With d = z/wprobe, the ROIs were located at depths of d = 1.0, 2.5, 3.5, 4.5, and 5.5. For each ROI, the standard deviation is plotted as a function of the regularization parameter λ2, and an × indicates the λ2 = d/20 setting selected here.
Fig. 6. (a) The D matrix in (8) tends to be very sparse. (b) The areas where nonzero signal is expected are covered using many overlapping smaller patches, greatly reducing the computing requirements compared with solving for the entire D matrix all at once. (c) The R matrix in (8) and/or the D matrix can be thresholded, so that only the Nnz largest values are allowed to remain nonzero. The smaller Nnz becomes, the faster the reconstruction can proceed; about 32.3 ms were needed for every 10^6 nonzero elements, using the IBM workstation described in the text. However, thresholding that is too aggressive leads to increased artifact content. A compromise was reached in this case for Nnz = 40 Nnz0 = 2.852 × 10^8 elements.
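The thresholding step described in the caption, keeping only the Nnz largest-magnitude entries of a sparse matrix, can be sketched as follows (toy matrix, not the paper's R):

```python
import numpy as np
from scipy import sparse

def threshold_to_nnz(M, n_keep):
    """Keep only the n_keep largest-magnitude entries of sparse matrix M."""
    A = M.tocoo()
    if A.nnz <= n_keep:
        return M.tocsr()
    # indices of the n_keep largest-magnitude entries
    order = np.argsort(np.abs(A.data))[::-1][:n_keep]
    return sparse.csr_matrix(
        (A.data[order], (A.row[order], A.col[order])), shape=M.shape)

R = sparse.random(200, 300, density=0.05, random_state=3)   # toy R matrix
R_small = threshold_to_nnz(R, 1000)
print(R.nnz, "->", R_small.nnz)                             # 3000 -> 1000
```

Since reconstruction time scales with the number of nonzero elements, shrinking Nnz this way trades artifact content directly for speed.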
TABLE I. Measurements of PSF Size and L1-Norm for Delay-and-Sum Beamforming and for the Proposed Approach, for Different Probes and FOV Settings.

Probe / FOV                  Point-object        PSF size,        PSF size,        Improvement    L1-norm,         L1-norm,         Improvement
                             x-z location (cm)   delay-and-sum    proposed         (%)            delay-and-sum    proposed         (%)
                                                                                                  (mm²)            (mm²)
ATL P4-2, rectangular FOV    (0.0, 9.2)          1.55             0.973 / 0.972    37.3 / 37.4    9.20             6.64 / 6.53      27.8 / 29.0
ATL P4-2, sector FOV         (0.0, 9.2)          1.51             1.26 / 1.23      16.6 / 18.5    11.3             8.28 / 8.11      26.7 / 28.3
Acuson, rectangular FOV      (0.3, 3.7)          0.324            0.322            0.64           6.13             3.81             37.8

Where a cell lists two values, they correspond to the two voxel-size settings of Table II (λ/4 and λ/8).
TABLE II. Matrix Sizes and Reconstruction Times for Our Proposed Approach and for Delay-and-Sum Beamforming, for Different Probes and FOV Settings.

Probe / FOV                  Raw data size    Image size    Npatch    Voxel size (with λ = c/f)    Nnz            Reconstruction time (stage 1 / stage 2)
ATL P4-2, rectangular FOV    64 × 2176        64 × 924      10        pitch/4                      3.03 × 10^8    13.6 h / 0 s
                             same             64 × 1850     1         pitch/8                      7.40 × 10^6    31.62 s / 0 s
ATL P4-2, sector FOV         64 × 2048        286 × 716     15        λ/4                          1.05 × 10^9    66.7 h / 0 s
                             same             286 × 1434    1         λ/8                          2.58 × 10^7    4.17 min / 0 s
Acuson, rectangular FOV      128 × 2560       213 × 985     40        λ/4                          8.58 × 10^8    52.2 h^b / 0 s
                             same             213 × 1971    1         λ/8                          4.75 × 10^7    16.0 min^b / 0 s
Fig. 7. Imaging results from a metal-wire phantom are interpreted here in terms of a point-spread function (PSF). (a) Prior knowledge about the shape of the wavepacket is used as part of the reconstruction. (b)-(d) Single-shot images reconstructed with delay-and-sum beamforming [R0 in (1)] and with the proposed approach [R in (8)] are shown side-by-side. (b) ATL probe, rectangular field of view (FOV); (c) ATL probe, sector-shaped FOV; (d) Acuson probe, rectangular FOV. All images are windowed such that black is zero or less, white is equal to the window width w or greater, and all possible shades of gray are linearly distributed in between. The ROIs indicated by white ellipses/circles were used for the calculations of the L1-norms listed in Table I [3 cm in diameter, 2 cm minor diameter for the ellipse in (a)]. Gray boxes show the area surrounding the point-object using a window width w that is 1/4 that used for the corresponding main images, to better show background signals. Profiles across the location of the point-object are also shown, along both the z- and x-directions, for delay-and-sum beamforming (gray curves) and for the proposed method (black curves). All plots use a linear scale normalized to the maximum response.
Fig. 8. (a) Imaging results were obtained from the phantom depicted here. (b)-(d) Single-shot images reconstructed with delay-and-sum beamforming [R0 in (1)] and with the proposed approach [R in (8)] are shown side-by-side. (b) ATL probe, rectangular field of view (FOV); (c) ATL probe, sector-shaped FOV; (d) Acuson probe, rectangular FOV.
A magnification of the region surrounding the axial-lateral resolution
targets is shown in (c) (the window width, w, was increased by 250% to
better show the individual objects). Overall, spatial resolution appears
to be improved in the images reconstructed with the proposed method
[i.e., with R in (8)]. Contrast was improved with the proposed method
in (d), as tested using the circular ROI covering the hyperechoic region
indicated with a white arrow and the ring-shaped region that surrounds
it. See the text for more detail.