EIT Reconstruction Algorithms: Pitfalls, Challenges and Recent Developments
1. Introduction
As the papers in this special issue and the 4th Conference on Biomedical Applications
of EIT show, reconstruction algorithms for electrical impedance tomography remain
an active and exciting area of research. In this article we aim to draw attention to the
best current practice as well as to pitfalls to avoid, and to highlight some active areas of
development and some promising methods that have not yet been implemented
in practical algorithms. We hope this paper will be a useful contribution for those new
to the field as well as thought provoking for those of us who have been working
in the area for some time. Like any review, this article has a personal slant on the
subject; it is neither a comprehensive review nor a history of the subject.
In some areas we have deliberately provided less detail where there are accessible articles
covering the material, in this volume or elsewhere.
By Electrical Impedance Tomography (EIT) we mean the process of estimating
internal admittivity (complex conductivity) changes from low frequency current and
voltage measurements through a system of electrodes at the surface. Although the
details of the instrumentation design vary between biomedical, geophysical and
industrial applications, the mathematics of the reconstruction problem is essentially
the same. The main differences are in the configuration of the electrodes used and
the a priori information available.
2. Three dimensionality
In most cases where electrical imaging is used the problem is genuinely three-
dimensional. An exception is industrial or non-destructive testing problems in which
the conductivity and the electrodes can be assumed independent of one coordinate.
Why then is there such a large body of published work in which data is collected on a
three dimensional body and the reconstruction performed assuming the body was two
dimensional? There are two main reasons: speed, and the fact that to some limited
extent it worked. The issues of speed include the speed of data collection: the belief
that electrodes arranged in a single plane would yield an adequate reconstruction of
the conductivity in that plane, and the consequent use of a small number of electrodes,
resulted in fast data collection systems. The number of electrodes is also a
significant factor when they have to be applied individually to the skin, which in itself
is a time consuming process. However it is known that data measured on a three
dimensional body cannot be fitted accurately to any two dimensional conductivity
distribution [41]. Moreover attempts to fit a two dimensional model result in errors
of position and shape of anomalies [29, 74]. Another factor in the choice of two
dimensional reconstruction methods was the cost of the fast processors and memory
required to perform three-dimensional forward modelling and reconstruction, which
in the 1980s and early 1990s was prohibitive.
Of these factors the one which remains is the inconvenience of applying a fully
three dimensional system (for example multiple planes) of electrodes to a human
subject, a technological problem which must be overcome if EIT is to be used
effectively as a medical imaging technique.
2.1. Is it tomography?
It is worth mentioning here that EIT, despite its now traditional name agreed at
the first Sheffield EIT meeting in 1986, is not tomographic in that it is not possible to
reconstruct an image slice-by-slice. (One can of course reconstruct a three dimensional
conductivity distribution and then display it on any desired slice.) The physical
explanation for this is that, unlike X-rays, low frequency electric current cannot be
confined to a plane even by a system of external electrodes, and that a change in
conductivity anywhere in the domain can affect all measurements not just those on a
ray path.
An obvious extension of the traditional array of equally spaced electrodes in a
plane to a three dimensional data collection system is to employ multiple planes of
electrodes. If electrodes are excited in pairs on a given plane it is necessary to make
measurements of voltage on the electrodes in other planes as well. If one is modifying
the traditional planar system for this purpose, and voltages are measured between
adjacent pairs of electrodes, one must also make measurements between planes to
obtain a complete set of transfer impedance data. By superposition, a single
measurement between one pair of electrodes on adjacent planes is sufficient. This
configuration is however only an expedient way to employ inflexible equipment and
is not likely to be an ideal data collection scheme. Optimal current drive patterns
have been described for a cylindrical tank with several planes of electrodes [31]. The
optimal arrangement of electrodes for imaging brain or lung function for example has
yet to be determined.
There is an interesting connection between EIT and tomographic imaging which has
yet to be exploited in a practical reconstruction algorithm. Suppose that one is able to
measure the complete transfer impedance on a plane intersecting a three dimensional
object. While there is no known way of reconstructing the conductivity on this plane,
Greenleaf and Uhlmann [33] show that the integral of the conductivity over that plane
is known.
is known. Suppose now that we use a large number of surface electrodes so that the
transfer impedance, and hence the integral of the conductivity, is known for a wide
class of planes. Reconstructing the conductivity then reduces to the problem of inverting
the three dimensional Radon transform.
The development of good three dimensional mesh generators is a major research topic in itself, and we have
yet to find an existing program, commercial or free, which is ideal for EIT. The main
requirements are for an efficient mesh of an object composed of smooth but irregular
surfaces, which respects interior boundaries, and electrodes on the surface. The mesh
density needs to be determined as a function of position so that high field strengths
(for example near electrodes) can be accurately represented without excessive density
in areas where the field varies slowly. Even if such a program were available one would
still need to measure the external shape of the body accurately, and in the case of the
human head, intricate internal structures such as the skull need to be segmented from
X-Ray CT or MRI scans [5].
The convergence of the finite element method approximation is well studied and there are a priori estimates [3] for the
error in terms of the mesh size (h convergence) and the order of the elements used (p
convergence). There are also a posteriori error estimates based on calculated solutions
[3] which have not yet been widely used in EIT. Although there is some work using
higher order elements [74], the best choice of element for EIT remains an open problem,
and the possibility of using vector elements [10] to calculate electric fields and current
densities accurately in the interior is largely unexplored for our problem. The use of
infinite elements to model unbounded regions, or at least regions which while bounded
have a substantial part where we have no surface data, is an interesting possibility [75].
Possible applications include limbs when the torso is being imaged, as well as problems
where the body is treated as an infinite half space such as in geophysical imaging or
the use of a small surface electrode array in medical EIT.
While numerous ad hoc reconstruction methods have been tried for EIT, the standard
approach is to use one of a family of regularized Newton-type methods. The approach
is to some extent the obvious one: the problem is non-linear so linearize, the linear
problem is ill-posed so regularize, the linear approximation cannot reconstruct large
contrasts or complex geometries so the process must be applied iteratively. There are
of course many variations on this basic approach and we will sketch some typical ones.
First let us assume that the conductivity σ has been represented by a finite number
of parameters s. In the simplest case, this is taken as a sum of basis functions such
as the characteristic functions of a set of regular or irregular voxels, or smooth basis
functions. Other choices would include a detailed model involving conductivity values
as well as parameters describing the shapes of internal regions [38, 39].
4.1. Regularization
Our forward operator F gives us V = F(s), the simulated voltages at the boundary. We
will leave aside the adaptive current approach [28, 12], where the current patterns used
depend on the estimated conductivity. As the goal is to fit the voltage measurements
V_meas, the simplest approach is to minimize the sum of squares error

||V_meas − F(s)||²    (1)

the so-called output least squares approach. Here || · || is the standard 2-norm on
vectors. In practice it is not usual to use the raw least squares approach, but at
least a weighted sum of squares which reflects the reliability of each voltage. Such
approaches are common both in optimization and in the statistical approach to inverse
problems.
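For concreteness, here is a minimal sketch of the weighted output least squares objective in Python/NumPy; the forward solver `forward` and the weight vector `w` are hypothetical placeholders for a user-supplied finite element model and per-channel reliability weights.

```python
import numpy as np

def weighted_ols(s, v_meas, forward, w):
    """Weighted sum-of-squares voltage misfit ||W (V_meas - F(s))||^2.
    `forward` computes the simulated voltages V = F(s)."""
    residual = v_meas - forward(s)
    return float(np.sum((w * residual) ** 2))
```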
Minimization of the voltage error (for simple parameterizations of σ) is doomed
to failure as the problem is ill-posed [68]. In practice the minimum lies in a long narrow
valley of the objective function [11]. For a unique solution one must include additional
information about the conductivity; an example is to include a penalty G(s) for highly
oscillatory conductivities, hence in our problem we seek to minimize

f(s) = ||V_meas − F(s)||² + G(s).    (2)
A typical simple choice [70] is

G(s) = α² ||L(s − s_ref)||²    (3)

where L is a matrix approximation to some partial differential operator and s_ref
a reference conductivity (for example including known anatomical features). The
minimisation of f represents a trade-off between fitting the data exactly and
not making the derivatives of σ too large, the trade-off being controlled by the
regularization parameter α. Other smooth choices of G include the inverse of a
Gaussian smoothing filter [9]. In these cases, where G is smooth and α is large
enough, the Hessian of f will be positive definite; we can then deduce that f is a convex
function [77, Ch 2], so that a critical point will be a strict local minimum, guaranteeing
the success of smooth optimization methods. Such regularization however will prevent
us from reconstructing conductivities with a sharp transition, such as an organ
boundary. Using the Total Variation, that is the integral of |∇σ|, as the penalty still rules
out wild fluctuations in conductivity while allowing step changes. The cost is that
the inclusion of an absolute value destroys the differentiability of f and means that
we must employ non-smooth optimization methods, which are both more difficult and
more computationally expensive than smooth ones. Early applications
to EIT [22] applied total variation regularization to the linearized EIT problem. A
detailed review of the literature on this subject, as well as a more efficient optimization
algorithm, can be found in the thesis [7]. See [78] for an analysis of Total Variation
regularization of EIT.
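To illustrate the contrast between the two penalty types, the following sketch implements the quadratic penalty (3) alongside a smoothed Total Variation penalty. The discrete gradient matrix `D` and the smoothing parameter `beta` are assumptions; smoothing the absolute value near zero is one common workaround for the non-differentiability discussed above, not the specific algorithms of [7] or [78].

```python
import numpy as np

def tikhonov_penalty(s, s_ref, L, alpha):
    """Quadratic penalty alpha^2 ||L (s - s_ref)||^2, as in equation (3)."""
    return alpha**2 * float(np.sum((L @ (s - s_ref)) ** 2))

def smoothed_tv_penalty(s, D, beta=1e-6):
    """Total variation sum |D s|, with |.| smoothed near zero so that the
    objective stays differentiable; beta controls the smoothing."""
    return float(np.sum(np.sqrt((D @ s) ** 2 + beta**2)))
```

The smoothed variant can be handed to a standard smooth optimizer, at the cost of slightly rounding off the sharp transitions that exact Total Variation would preserve.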
In the statistical approach [77, Ch 4] to regularization, the minimizer of f
maximizes the a posteriori probability (the MAP estimate), assuming independent
Gaussian error with mean zero and unit variance on the measurements and the a
priori information on σ represented by the probability distribution exp(−G(s)/2). To
many, the probabilistic approach to regularization provides a more rational framework
for the recovery of a finite number of parameters from a discrete set of data than an
approach to regularization based on functional analysis. Ideally we would incorporate
a probability model for the errors in the data and, using a prior probability density
for the unknown parameters, find not just the maximum of the posterior probability
density, but a more complete description of the probability of a range of conductivity
images. For an excellent review of the statistical regularization in EIT see [37]. Despite
the low spatial resolution of EIT the temporal resolution is high. In biomedical
applications to time varying conductivity distributions, caused for example by blood
or air flow, images are correlated temporally as well as spatially. Statistical time series
methods can be used to include both spatial and temporal regularization [72, 76].
Applications to industrial process modelling [61] are better developed than biomedical
applications, although this is a highly active area of research.
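As a sketch of how a linear time series method couples spatial and temporal information, the following implements one step of a standard Kalman filter under an assumed random-walk model for the conductivity and a linearized measurement model; this illustrates the general idea rather than the specific algorithms of [72, 76, 61].

```python
import numpy as np

def kalman_step(s, P, v_meas, J, Q, R):
    """One predict/update step of a linear Kalman filter.
    State model: s_{t+1} = s_t + w_t, w_t ~ N(0, Q)  (random walk).
    Measurements: V_t = J s_t + e_t, e_t ~ N(0, R)   (linearized EIT)."""
    P_pred = P + Q                               # predict: inflate covariance
    S = J @ P_pred @ J.T + R                     # innovation covariance
    K = P_pred @ J.T @ np.linalg.inv(S)          # Kalman gain
    s_new = s + K @ (v_meas - J @ s)             # update with the new frame
    P_new = (np.eye(len(s)) - K @ J) @ P_pred
    return s_new, P_new
```

The process covariance Q here encodes the temporal prior (how fast the conductivity may change between frames) while the spatial prior enters through the initial covariance P.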
While there are many other forms of regularization possible for a linear ill-conditioned
problem, this generalized Tikhonov (or Tikhonov-Phillips) regularization has the
benefit that the a priori information it incorporates is made explicit and that, under
Gaussian assumptions, it is the statistically defensible MAP estimate. If only a
linearised solution is to be used with a fixed initial estimate s_0, the Jacobian J and a
factorization of (J* J + α² L* L) can be precalculated off-line. The efficiency of this
calculation is then immaterial, and the regularized solution can be calculated using
the factorization with complexity O(N²) for N degrees of freedom in the conductivity
(which should be smaller than the number of independent measurements). Although
LU factorization would be one alternative, perhaps a better choice is to use the
Generalized Singular Value Decomposition GSVD [34], which allows the regularized
solution to be calculated efficiently for any value of α. The GSVD is now a standard
tool for understanding the effect of the choice of the regularization matrix L in a
linear ill-conditioned problem, and has been applied to linearised EIT[9]. The use of
a single linearized Tikhonov regularized solution is widespread in medical industrial
and geophysical EIT, the NOSER algorithm [17] being a well known example. Such
algorithms one step linear algorithms were the first three-dimensional algorithms to
be applied to experimental data from tanks [30], and the human thorax [50, 51]. It
must be emphasized that a linearized solution will only be accurate when the true
conductivity is close to the initial estimate.
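A minimal sketch of such a one-step linearized reconstruction with the factorization precomputed off-line (Python/SciPy; the Jacobian J, regularization matrix L and difference data are assumed to come from a forward model linearized at s_0):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def offline_factorization(J, L, alpha):
    """Factorize (J^T J + alpha^2 L^T L) once for a fixed linearization point."""
    return cho_factor(J.T @ J + alpha**2 * (L.T @ L))

def one_step_solution(factor, J, dv):
    """Apply the stored factorization to difference data dv = V_meas - F(s0).
    Each new data frame costs only a back-substitution, O(N^2)."""
    return cho_solve(factor, J.T @ dv)
```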
4.3. Backprojection
It is an interesting historical observation that in the medical and industrial applications
of EIT numerous authors have calculated J and then proceeded to use ad hoc
regularized inversion methods to calculate an approximate solution. Often these
are variations on standard iterative methods which, if continued, would for a well-posed
problem converge to the Moore-Penrose generalised solution. It is a standard
method in inverse problems to use an iterative method but stop short of convergence
(Morozov's stopping criterion tells us to stop when the output error first falls below
the measurement noise). Many linear iterative schemes can be represented as a filter
on the singular values [77, Ch 1]. However they have the weakness that the a priori
information included is not as explicit as in Tikhonov regularization. One extreme
example of the use of an ad hoc method is the method described by Kotre [40] in which
the normalized transpose of the Jacobian is applied to the voltage difference data. In
X-ray CT [53], the formal adjoint of the Radon transform is called the backprojection
operator. It produces at a point in the domain the sum of
all the values measured along rays through that point. Although not an inverse to the
Radon transform itself, a smooth image can be obtained by backprojecting smoothed
data, or equivalently by back-projecting and then smoothing the resulting image.
The Tikhonov regularization formula (5) can be interpreted in a loose way
as the backprojection operator J* followed by the application of the spatial filter
(J* J + α² L* L)^{-1}. Although this approach is quite different from the filtered back
projection along equipotential lines of Barber and Brown [4, 59], it is sometimes
confused with it in the literature. Kotre's back projection was until recently widely
used in the process tomography community for both resistivity (ERT) and permittivity
(ECT) imaging [80]. It is often supported by fallacious arguments, in particular that
it is fast (it is no faster than the application of any precomputed regularized inverse)
and that it is commonly used (only by those who know no better). In an interesting
development, the application of a normalised adjoint to the residual voltage error for
the linearised problem was suggested for ECT, and later recognised as yet another
reinvention of the well known Landweber iterative method [79]. Although there is
no good reason to use pure linear iteration schemes directly on problems with such
a small number of parameters, as the solution can be computed much faster using the SVD, an
interesting variation is to use such a slowly converging linear solution together with
projection onto a constraint set, a method which has been shown to work well in
ECT [15].
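To make the "filter on the singular values" view concrete, the following sketch computes a regularized linear solution from the SVD; the truncation level k and Tikhonov parameter alpha are assumed inputs chosen by, for example, a Morozov-type criterion.

```python
import numpy as np

def filtered_solution(J, dv, filt):
    """Regularized solution sum_i filt(sigma_i) (u_i . dv / sigma_i) v_i.
    filt = 1 recovers the Moore-Penrose solution; damping the small
    singular values regularizes."""
    U, sig, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt.T @ (filt(sig) * (U.T @ dv) / sig)

def truncation(k):
    """Keep only the k largest singular values (truncated SVD)."""
    return lambda sig: (np.arange(sig.size) < k).astype(float)

def tikhonov_filter(alpha):
    """The filter factor corresponding to Tikhonov regularization."""
    return lambda sig: sig**2 / (sig**2 + alpha**2)
```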
there is no such possibility and it may be that the mismatch between the measured data
and the predictions from the forward model is dominated by the errors in electrode
position, boundary shape and contact impedance rather than interior conductivity.
Until these problems are overcome it is unlikely, in the author’s opinion, to be worth
using iterative non-linear methods in vivo using individual surface electrodes. Note
however that such methods are in routine use in geophysical problems [82, 44, 45].
Computational complexity of both forward solution and inversion of the linearized
system meant that, although iterative nonlinear algorithms had been implemented
for simulated data on modest meshes earlier [43], it was only in the mid 1990s
that affordable computers had sufficient floating point speed and memory to handle
three-dimensional meshes dense enough to fit tank data adequately [71, 74].
The essence of non-linear solution methods is to repeat the process of calculating
the Jacobian and solving a regularised linear approximation. A common way
to explain this is to start with the problem of minimizing f, which for a well chosen G
will have a critical point which is the minimum. At this minimum ∇f(s) = 0, which is a
system of N equations in N unknowns that can be solved by the multi-variable Newton-
Raphson method. In practice, for noisy data there may not be an exact solution.
The Gauss-Newton approximation to this, which neglects terms involving second
derivatives of F, is a familiar Tikhonov formula updating the n-th approximation
to the conductivity parameters s_n:

s_{n+1} = s_n + (J_n* J_n + α² L* L)^{-1} (J_n* (V_meas − F(s_n)) + α² L* L (s_ref − s_n))    (6)

where J_n is the Jacobian evaluated at s_n, and care has to be taken with signs. Notice
that in this formula the Tikhonov parameter is held constant throughout the iterations,
by contrast the Levenberg-Marquardt [46] method applied to ∇f = 0 would add a diagonal
matrix λD in addition to the regularization term α² L* L, but would reduce λ to zero
as a solution was approached. For an interpretation of λ as a Lagrange multiplier
for an optimization constrained by a trust region see [77, Ch 3]. Another variation
on this family of methods is, given an update direction from the Tikhonov formula,
to do an approximate line search to minimize f in that direction. Both methods are
described in [77, Ch 3].
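A compact sketch of the iteration (6) follows; the `forward` and `jacobian` routines stand in for user-supplied finite element solvers, and neither a line search nor Levenberg-Marquardt damping is included.

```python
import numpy as np

def gauss_newton(s0, v_meas, forward, jacobian, L, s_ref, alpha, n_iter=10):
    """Regularized Gauss-Newton iteration, equation (6). The Tikhonov
    parameter alpha is held fixed across iterations."""
    s = np.array(s0, dtype=float)
    R = alpha**2 * (L.T @ L)                      # fixed regularization term
    for _ in range(n_iter):
        J = jacobian(s)                           # Jacobian at current estimate
        rhs = J.T @ (v_meas - forward(s)) + R @ (s_ref - s)
        s = s + np.linalg.solve(J.T @ J + R, rhs)
    return s
```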
The parameterization of the conductivity can be much more specific than voxel
values or coefficients of smooth basis functions. One example is to assume that the
conductivity is piecewise constant on smooth domains and reconstruct the shapes
parameterized by Fourier series [38, 39] or by level sets [25]. For this and other model
based approaches the same family of smooth optimization techniques can be used as for
simpler parameterizations, although the Jacobian calculation may be more involved.
5. Jacobian calculations
method. There are also Quasi-Newton methods in which the Jacobian is updated
approximately from the forward solutions that have been made. Indeed this has been
used in geophysics [45]. It is also worth pointing out that where the conductivity is
parameterized in a nonlinear way, for example using the shapes of an anatomical model,
the Jacobian with respect to the new parameters can be calculated using the chain
rule.
and ∂u/∂n = 0 away from electrodes. For the special case w = u we have the power
conservation formula,
∫_Ω σ|∇u|² dV = ∫_{∂Ω} uσ (∂u/∂n) dS = Σ_l ∫_{E_l} (V_l − z_l σ ∂u/∂n) σ (∂u/∂n) dS    (11)

hence

∫_Ω σ|∇u|² dV + Σ_l ∫_{E_l} z_l (σ ∂u/∂n)² dS = Σ_l V_l I_l    (12)
This simply states that the power input is dissipated either in the domain Ω or by the
contact impedance layer under the electrodes.
This gives only the total change in power; to get the change in voltage on a
particular electrode E_m when a current pattern is driven in some or all of the other
electrodes we simply solve for the special 'measurement current pattern' I_l^m = δ_lm.
To emphasize the dependence of the potential on a vector of electrode currents
I = (I_1, ..., I_L) we write u(I). The hypothetical measurement potential is u(I^m);
by contrast the potential for the d-th drive pattern is u(I^d). Applying the power
perturbation formula (13) to u(I^d) + u(I^m) and u(I^d) − u(I^m) and then subtracting
gives the familiar formula
δV_dm = − ∫_Ω δσ ∇u(I^d) · ∇u(I^m) dV    (14)
Standard arguments based on series expansions of operators [16, 11] can be used to
show that this is indeed the Fréchet derivative for δσ ∈ L^∞(Ω); considerable care is
needed to show that the voltage data is Fréchet differentiable in other normed spaces,
such as those needed to show that the total variation regularization scheme works [78].
For a finite dimensional subspace of L^∞(Ω) a proof of differentiability is given in [37].
In the special case of the Sheffield adjacent pair drive, adjacent pair measurement
protocol, commonly used in two dimensional EIT, we have potentials u_i for the i-th
drive pair and voltage measurement V_ij for a fixed current I_a across the j-th
measurement pair

δV_ij = − (1/I_a²) ∫_Ω δσ ∇u_i · ∇u_j dV    (15)
To calculate the Jacobian matrix one must choose a discretization of the conductivity.
The simplest case is to take the conductivity to be piecewise constant on polyhedral
domains such as voxels or tetrahedral elements. Taking δσ to be the characteristic
function of the k-th voxel we have, for a fixed current pattern,

∂V_dm/∂σ_k = − ∫_{voxel k} ∇u(I^d) · ∇u(I^m) dV    (16)

With the double indices dm renumbered as a single index, these derivatives form the
elements of the Jacobian matrix J. For the case of a complex admittance one must
repeat this calculation taking care to use the real component of the dissipated power V_l Ī_l.
Some EIT and capacitance tomography systems use a constant voltage source, and in
this case the change in power due to an increase in admittivity will have the opposite sign
to that in the constant current case.
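For first-order elements the field gradients are constant on each element, so equation (16) reduces to a dot product times an element volume, as in the following sketch; the array layouts are assumptions.

```python
import numpy as np

def assemble_jacobian(grad_drive, grad_meas, volumes):
    """Sensitivity matrix from equation (16) for conductivity piecewise
    constant on elements, assuming first-order elements.
    grad_drive: (n_drive, n_elem, dim) gradients of u(I^d) per element
    grad_meas:  (n_meas,  n_elem, dim) gradients of u(I^m) per element
    volumes:    (n_elem,) element volumes"""
    # Dot product of drive and measurement fields on every element.
    dots = np.einsum('dke,mke->dmk', grad_drive, grad_meas)
    J = -dots * volumes                  # constant gradient times volume
    return J.reshape(-1, volumes.size)   # flatten (d, m) into one row index
```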
Some iterative nonlinear reconstruction algorithms, such as nonlinear Landweber
or non-linear conjugate gradient [77, 67], require the evaluation of the transpose (or adjoint
in the complex case) of the Jacobian applied to a vector, J* z. For problems where
the Jacobian is very large it may be undesirable to store the Jacobian and then apply
its transpose to z. Instead the block of z corresponding to the i-th current drive is
written as a distributed source on the measurement electrodes. A forward solution is
performed with this as the boundary current pattern; the resulting measurement
field is combined with the field for the drive pattern as in (16), and the results accumulated
to give J* z. For details of this applied to diffuse optical tomography see [2], and for
a general theory of adjoint sources see [77]. For an example of application to EIT see
[56, Ch 4], and to electromagnetic imaging [24].
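A matrix-free sketch of this adjoint accumulation follows; `adjoint_solve`, which returns per-element gradients of the field driven by the weighted measurement pattern, is a hypothetical stand-in for one extra forward solve per drive.

```python
import numpy as np

def jacobian_transpose_apply(z_blocks, grad_drive, adjoint_solve, volumes):
    """Accumulate J^T z without storing J. For each drive pattern d, the
    block of z belonging to its measurements is injected as a boundary
    current pattern (the adjoint source); the resulting adjoint field is
    combined with the drive field element-wise as in (16)."""
    out = np.zeros(volumes.size)
    for z_d, grad_ud in zip(z_blocks, grad_drive):
        grad_w = adjoint_solve(z_d)        # adjoint field for this drive
        out -= volumes * np.einsum('ke,ke->k', grad_ud, grad_w)
    return out
```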
For fast calculation of the Jacobian using (16) one can precompute the integrals
of products of finite element basis functions over elements. If non-constant basis
functions are used on elements, or higher order elements are used, one can calculate the
product of gradients of FE basis functions at quadrature points in each element. As
this depends only on the geometry of the mesh and not on the conductivity, it can be
precomputed unless one is using an adaptive meshing strategy. The same data is used
in assembling the FE system matrix efficiently when the conductivity has changed but
not the geometry. It is these factors in particular which make current commercial FEM
software unsuitable for use in an efficient EIT solver. While there are some details
of efficient methods for Jacobian calculation in the literature [37], more specific
implementation details can often be found in theses, such as [81, 11, 70, 74, 56].
The ill-posed nature of inverse problems means that any reconstruction algorithm will
have limitations on what images it can accurately reconstruct and that the images
degrade with noise in the data. When developing a reconstruction algorithm it is usual
to test it initially on simulated data. Moreover the reconstruction algorithm typically
incorporates a forward solver. A natural first test is to use the same forward model to
generate simulated data with no simulated noise, and then to find to one's delight that
the simulated conductivity can be recovered fairly well, the only difficulties arising if it
violates the a priori assumptions built into the reconstruction or from the limitations of
floating point arithmetic. Failure of this basic test is used as a diagnostic procedure
for the program.
including variable contact impedance, motion artifact and variable surface geometry,
all of which produce correlated errors in the data.
Even simple simulation of discretization error requires some understanding of the
measurement system one has in mind. While the data is discretized into a binary
representation of the voltage with a fixed precision by an analogue to digital converter
(ADC), the input to the ADC has already been scaled by an amplifier. In a Sheffield-
type adjacent pair drive system [13] this scale factor is determined by the position of
the measurement pair relative to the drive pair to make best use of the range of the
ADC. Multiple drive systems employ different strategies [20, 83]. One should at least
add identically distributed noise to suitably scaled measurements. Whatever scheme
is chosen for simulating noise, it should be carefully described; often a phrase such
as "5% random noise was added" is used without saying whether this is 5% of the largest
voltage (which could completely destroy smaller measurements) or that percentage of
each measurement.
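The following sketch makes the distinction explicit; the noise level and the choice of convention are exactly the quantities that should be reported.

```python
import numpy as np

def add_noise(v, level=0.05, per_measurement=True, rng=None):
    """Add zero-mean Gaussian noise at a stated level, either as a fraction
    of each individual measurement or of the largest voltage. The two
    conventions behave very differently on the small measurements."""
    rng = np.random.default_rng() if rng is None else rng
    if per_measurement:
        scale = level * np.abs(v)                       # 5% of each channel
    else:
        scale = level * np.max(np.abs(v)) * np.ones_like(v)  # 5% of the max
    return v + scale * rng.standard_normal(v.shape)
```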
conference papers where reconstruction algorithms are shown to perform very well
on single circular anomalies, while reconstructions of complex objects with varying
contrasts are absent. As we know the problem is ill-posed, it is inevitable that there are
some conductivity distributions which cannot be reconstructed well by a particular
algorithm, in particular ones which violate the a priori assumptions. It is therefore
no dishonour to present the failures as well as the successes. To avoid the temptation
to tweak the algorithm to produce the best results given that the correct result is
known, the best procedure would be to conduct blind trials for both simulated and
experimental data.
The author is aware that what is suggested here as best practice is a high standard
to aim for, and that examples of these crimes and misdemeanours can be found in
his own work, but the intention is to elevate the standard of published work in
EIT reconstruction generally and to highlight these pitfalls. While in conference
presentations it is acceptable to describe work in progress, and to confess to any
inverse crimes and misdemeanours, it is the author’s opinion that they should be
avoided in journal publications.
In this brief review there is no space to describe in any detail many of the exciting
current developments in reconstruction algorithms. Fortunately many of them are
treated in other articles in this special issue. Before highlighting some of these
developments it is worth emphasising that for ill-posed problems a priori information
is essential for stable reconstruction algorithms, and it is better that this information
is incorporated in the algorithm in a systematic and transparent way. Another general
principle of inverse problems is to think carefully about what information is required by the end
user. Rather than attempting to produce an accurate image, what is often required
in medical (and indeed most other) applications is an estimate of a much smaller
number of parameters which can be used for diagnosis. For example, we may know
that a patient has two lungs as well as other anatomical features, but we might want to
estimate their water content to diagnose pulmonary oedema. A sensible strategy would
be to devise an anatomical model of the thorax and fit a few parameters of shape and
conductivity rather than pixel conductivity values. The disadvantage of this approach
is that each application of EIT gives rise to its own specialised reconstruction method,
which must be carefully designed for the purpose. In the author’s opinion the future
development of EIT systems, including electrode arrays and data acquisition systems
as well as reconstruction software, should focus increasingly on specific applications,
although of course such systems will share many common components.
the EIT problem by McCormick and Wade [48], and for the non-linear problem by
Borcea [6]. In the same vein adaptive meshing can be used for the inverse problem [52].
In both cases there is the interesting possibility of exploring the interaction between the
meshes used for forward and inverse solution.
At the extreme end of this spectrum we would like to describe the prior probability
distribution and, for a known distribution of measurement noise, calculate the entire
posterior distribution. Rather than giving one image, such as the MAP estimate, this
gives a complete description of the probability of any image. If the posterior probability is
bimodal for example, one could present the two local maximum probability images.
If one needed a diagnosis, say of a tumour, the posterior probability distribution
could be used to calculate the probability that a tumour-like feature was there.
The computational complexity of calculating the posterior distribution for all but
the simplest distributions is enormous; however, the posterior distribution can be
explored using the Markov Chain Monte Carlo method (MCMC) [37]. This was
applied to simulated EIT data [26], and more recently to tank data, including in this
special issue [49]. For this to be a viable technique for the 3D problem highly efficient
forward solution will be required, and an efficient and fast 3D MCMC algorithm for
EIT presents a serious but very worthwhile challenge.
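A minimal random-walk Metropolis sketch of this posterior exploration is shown below; the `forward` solver and `log_prior` are assumed user-supplied, and a practical 3D implementation would need far better proposals and a very fast forward solver, as noted above.

```python
import numpy as np

def metropolis_eit(s0, v_meas, forward, log_prior, noise_sd, n_samples,
                   step=0.01, rng=None):
    """Random-walk Metropolis sampling of the EIT posterior, assuming a
    Gaussian likelihood from independent measurement noise with standard
    deviation noise_sd and a prior supplied as log_prior(s)."""
    rng = np.random.default_rng() if rng is None else rng

    def log_post(s):
        r = v_meas - forward(s)
        return -0.5 * np.sum((r / noise_sd) ** 2) + log_prior(s)

    s = np.array(s0, dtype=float)
    lp = log_post(s)
    samples = []
    for _ in range(n_samples):
        s_prop = s + step * rng.standard_normal(s.shape)
        lp_prop = log_post(s_prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
            s, lp = s_prop, lp_prop
        samples.append(s.copy())
    return np.array(samples)
```

From the returned samples one can estimate not only a point image but also, for example, the posterior probability that a feature of interest is present in a region.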
8. Conclusions
In conclusion, until medical EIT data is reconstructed using the best available
methods, the results will be inconclusive. Many an experimental study has been spoilt
when carefully collected data has been attacked by a crude two-dimensional linear
reconstruction algorithm and the resulting 'blurry blobs' taken as evidence that EIT
is not suitable for the desired task. Careful consideration of a priori information,
measurement error and the model parameters required is needed, together with close
collaboration between mathematicians and experimentalists. Periodically a mood
arises at an EIT meeting that the technique will never find real application in medicine;
however, it is my contention that a particular application should not be dismissed as
impossible until both hardware and software specialists working together have given
it their ‘best shot’.
9. Acknowledgements
The author would like to thank the referees for their helpful comments, and the
colleagues, especially Nick Polydorides, who commented on earlier drafts.
10. References
[1] Abascal JFP, 2003, The Anisotropic Inverse Conductivity Problem, MSc Thesis, University of
Manchester, UK.
[2] Arridge S, 1999, Optical tomography in medical imaging, Inverse Problems, 15, R41-93.
[3] Babuska I, and Strouboulis T, 2001, The Finite Element Method and its Reliability, Oxford
University Press, Oxford
[4] Barber D and Brown B, 1986, Recent developments in applied potential tomography-APT,
Information Processing in Medical Imaging, ed S L Bacharach, Nijhoff, Amsterdam 106-121.
[5] Bayford RH, Gibson A, Tizzard A, Tidswell AT, and Holder DS, 2001, Solving the forward
problem for the human head using IDEAS (Integrated Design Engineering Analysis Software)
a finite element modelling tool, Physiol Meas, 22, 55-63.
[6] Borcea L, 2001, A nonlinear multigrid for imaging electrical conductivity and permittivity at
low frequency, Inverse Problems, 17, 329-359.
[7] Borsic A, 2002, Regularization Methods for Imaging from Electrical Measurements, PhD thesis,
Oxford Brookes University.
[8] Borsic A, McLeod CN and Lionheart WRB, 2001, Total variation regularisation in EIT
reconstruction 2nd World Congr. on Industrial Process Tomography (Hannover), 579 -587.
[38] Kolehmainen V, Arridge SR, Lionheart WRB, Vauhkonen M and Kaipio JP, 1999, Recovery
of region boundaries of piecewise constant coefficients of elliptic PDE from boundary data,
Inverse Problems 15, 1375-1391.
[39] Kolehmainen V, Vauhkonen M, Kaipio JP and Arridge SR, 2000, Recovery of piecewise constant
coefficients in optical diffusion tomography, Optics Express, 7, 468-480.
[40] Kotre CJ, 1989, A sensitivity coefficient method for the reconstruction of electrical impedance
tomograms Clin Phys Physiol Meas, 10, 275-281.
[41] Lionheart WRB, 1999, Uniqueness, shape, and dimension in EIT, Annals New York Acad Sci,
873, 466-471.
[42] Lionheart WRB, 1997, Conformal uniqueness results in anisotropic electrical impedance imaging.
Inverse Problems, 13, 125-134.
[43] Liu WP, Hua P and Webster JG, 1988, Three-dimensional reconstruction in electrical impedance
tomography, Clin Phys Physiol Meas, 9, 131-135
[44] Loke MH and Barker R, 1996, Rapid least-squares inversion of apparent resistivity
pseudosections by a quasi-Newton method, Geophysical Prospecting, 44, 131-152.
[45] Loke MH and Barker RD, 1996, Practical techniques for 3D resistivity surveys and data inversion,
Geophysical Prospecting, 44, 499-523.
[46] Marquardt D, 1963, An algorithm for least squares estimation of nonlinear parameters, SIAM J
Appl Math, 11, 431-441.
[47] Mayavi, 2003, The MayaVi Data Visualizer, http://mayavi.sourceforge.net
[48] McCormick SF and Wade JG, 1993, Multigrid solution of a linearized, regularized least-squares
problem in electrical impedance tomography, Inverse Problems, 9, 697-713.
[49] Meng S, West R, Ackroyd R, 2004, Markov Chain Monte Carlo techniques and spatial-temporal
modelling for medical EIT, this issue.
[50] Metherall P, Barber DC, Smallwood RH and Brown BH, 1996, Three Dimensional Electrical
Impedance Tomography, Nature 380, 509-512.
[51] Metherall P, 1998, Three Dimensional Electrical Impedance Tomography of the Human Thorax,
PhD thesis, University of Sheffield.
[52] Molinari M, 2003, High Fidelity Imaging in Electrical Impedance Tomography, PhD thesis,
University of Southampton.
[53] Natterer F, 1982, The Mathematics of Computerized Tomography, Wiley, Chichester.
[54] Patterson RP, Zhang J, 2003, Evaluation of an EIT reconstruction algorithm using finite
difference human thorax models as phantoms, Physiol Meas,24, 467-475.
[55] Polydorides N and Lionheart WRB, 2002, A Matlab toolkit for three-dimensional electrical
impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical
Reconstruction Software project, Meas Sci Technol, 13, 1871-1883.
[56] Polydorides, N, 2002, Image Reconstruction Algorithms for Soft Field Tomography, PhD Thesis,
UMIST.
[57] Polydorides N, Lionheart WRB, McCann H, 2002, Krylov subspace iterative techniques: on the
detection of brain activity with electrical impedance tomography, IEEE Trans Med Imaging,
21, 596-603.
[58] Saad Y and Schultz MH, 1986, GMRES: A generalized minimal residual algorithm for solving
nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7, 856-869.
[59] Santosa F, Vogelius M, 1991, A backprojection algorithm for electrical impedance imaging, SIAM
J. Appl. Math., 50, 216-243
[60] Schappel B, 2003, Electrical Impedance Tomography of the Half Space: Locating Obstacles
by Electrostatic Measurements on the boundary, Proceedings of the 3rd World Congress on
Industrial Process Tomography, Banff, Canada, September 2-5, 788-793.
[61] Seppanen A, Vauhkonen M, Vauhkonen PJ, Somersalo E and Kaipio JP, 2001, State estimation
with fluid dynamical evolution models in process tomography: an application to impedance
tomography, Inverse Problems, 17, 467-484.
[62] Siltanen S, Mueller J, Isaacson D, 2000, An implementation of the reconstruction algorithms of
Nachman for the 2D inverse conductivity problem, Inverse Problems, 16, 681-699.
[63] Somersalo E, Isaacson D and Cheney M, 1992, A linearized inverse boundary value problem for
Maxwell's equations, J. Comput. Appl. Math., 42, 123-136.
[64] Soleimani M, Powell C, 2004, Black-box Algebraic Multigrid for the 3D Forward Problem arising
in Electrical Resistance Tomography, this issue.
[65] Somersalo E, Cheney M and Isaacson D, 1992, Existence and uniqueness for electrode models
for electric current computed tomography, SIAM J Appl Math, 52, 1023-1040.
[66] Tamburrino A, Rubinacci G, 2002, A new non-iterative inversion method in electrical resistance
tomography, Inverse Problems, 18, 1809-1829