
Machine learning enhanced electrical impedance tomography for 2D materials


Published 27 June 2022 © 2022 The Author(s). Published by IOP Publishing Ltd
Citation: Adam Coxson et al 2022 Inverse Problems 38 085007. DOI: 10.1088/1361-6420/ac7743


Abstract

Electrical impedance tomography (EIT) is a non-invasive imaging technique that reconstructs the interior conductivity distribution of samples from a set of voltage measurements performed on the sample boundary. EIT reconstruction is a non-linear and ill-posed inverse problem. Consequently, the non-linearity results in a high computational cost of solution, while regularisation and the most informative measurements must be used to overcome ill-posedness. To build the foundation of future research into EIT applications for 2D materials, such as graphene, we designed and implemented a novel approach to measurement optimisation via a machine learning adaptive electrode selection algorithm (A-ESA). Furthermore, we modified the forward solver of a python-based EIT simulation software, pyEIT, to include the complete electrode model (CEM) and employed it on 2D square samples (Liu B et al 2018 SoftwareX 7 304–8; Somersalo E et al 1992 SIAM J. Appl. Math. 52 1023–40). In addition, the deep D-Bar U-Net convolutional neural network architecture was applied to post-process conductivity map reconstructions from the GREIT algorithm (Hamilton and Hauptmann 2018 IEEE Trans. Med. Imaging 37 2367–77; Adler et al 2009 Physiol. Meas. 30 S35). The A-ESA offered around 20% lower reconstruction losses in fewer measurements than the standard opposite–adjacent electrode selection algorithm, on both simulated data and when applied to a real graphene-based device. The CEM enhanced forward solver achieved a 3% lower loss compared to the original pyEIT forward model. Finally, an experimental evaluation was performed on a graphene laminate film. Overall, this work demonstrates how EIT could be applied to 2D materials and highlights the utility of machine learning in both the experimental and analytical aspects of EIT.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Electrical impedance tomography (EIT) is a visualization method used to reconstruct the conductivity distribution within an object by performing a series of four-terminal measurements along the sample boundary. EIT is a non-invasive imaging technique, pioneered in geophysics for subsurface imaging and in medical physics to probe variations in tissues and organs by monitoring conductivity changes [4–7]. The inverse problem in EIT image reconstruction is ill-posed, and consequently much effort has been devoted to improving the accuracy and quality of EIT since its inception. To date, many algorithms have been proposed which attempt to solve the inverse problem [7, 8], including those that use artificial neural networks (ANNs) [9, 10–13]. A recent study applied deep learning to the two-dimensional (2D) D-Bar reconstruction algorithm, where an ANN was trained and tested on numerically simulated data [9]. It successfully reconstructed the conductivities of artificial agar targets and demonstrated how neural networks could offer an improvement in EIT reconstruction accuracy [9, 14, 15]. Since EIT is an imaging technique, convolutional neural network (CNN) architectures specifically designed for image data are particularly useful. For example, in [16], a CNN was used to represent a linearized forward and inverse map for both 2D and 3D systems. Another CNN architecture in [17] was used to process EIT images to classify between two different types of stroke, since standard EIT images are not of sufficient quality due to their typical blurriness. For a comprehensive review on deep learning and CNNs for inverse imaging problems see [18–24]. For a more general list of artificial intelligence applications for EIT, see the reference tables in [25], which include 7 references on genetic algorithms, 17 references on particle swarm optimisation, and 26 references on neural network approaches.

Machine learning is not only useful for analysing the images that come from EIT, but can also be used to optimise the positions of electrodes around the sample, rather than simply spacing electrodes at regular intervals [26]. Currently, there is a large variety of commonly used current patterns, such as the adjacent drive pattern [27], opposite (polar) drive pattern [28], the cross method [29], and the trigonometric pattern [30]. A number of studies have reviewed these patterns, or provided a theoretical analysis of how to optimise electrode selection [31, 32]. Machine learned electrode selection models can replace the more common algorithmic protocols, such as the opposite or adjacent pattern; the adjacent pattern is still regularly used throughout the literature, despite having been shown to be particularly inaccurate [33, 34].

Recently, EIT has been used to probe the 2D conductivity distributions of thin films and graphene [35–37]. In the first application to graphene [36], the EIT reconstruction was compared to a conductivity map obtained using time-domain spectroscopy (TDS), which is a low resolution technique performed in a current-off state with relatively expensive equipment. Only a small discrepancy of approximately 4% was found between the TDS and EIT maps, which indicated the utility of EIT for characterisation of 2D materials. 2D EIT is well studied as it generally involves simpler approaches, but it is not representative of the use cases in traditional medical applications [38]. However, EIT applications for 2D materials are able to benefit from the wealth of literature focused on 2D EIT.

In our work, we intend to build the foundations of machine learning enabled EIT for use on 2D materials. A novel, machine learning adaptive electrode selection algorithm (A-ESA) was developed, and by combining this with a forward solver enhanced with the complete electrode model (CEM) [2], the GREIT inverse solver [3], and the deep D-Bar U-Net CNN for image post-processing [39], an approach to obtain conductivity reconstructions of 2D materials was formulated. The EIT measurements were simulated on a square sample geometry using the python-based pyEIT software. Originally, this software only used a simplified forward solver, but it was updated in this work to utilise the CEM [1]. The synthetic conductivity distributions were made using either single 2D Gaussian anomalies, or a combination of different geometrical shapes of varying conductivity, such as ellipses, triangles, and line segments. A small amount of random noise was also injected into each sample. This data generation and measurement simulation procedure was used for both the training and testing of the A-ESA and CNN. To test the software, we first compared the performance of the CEM-enhanced forward solver to the original pyEIT version of the forward solver. This was followed by an evaluation of the A-ESA prototype against standard selection methods. Next, the U-Net CNN post-processing of GREIT reconstructions was performed and analysed. Finally, an experimental test of our A-ESA was performed using a graphene laminate film device. All the developed code is available at https://github.com/AdamCoxson/ML-EIT-Graphene.

2. Theory

EIT is used to reconstruct the conductivity distribution on a continuous sample, where the continuum representation of Ohm's law, J = σE, links the current density J, the conductivity σ, and the electric field E. In general, the conductivity σ is a rank-2 tensor. However, the investigated samples are all isotropic, so the conductivity is a scalar field. Provided the frequency of the a.c. input current is sufficiently small, the effects of magnetic fields can be neglected. Consequently, the electric field can be expressed as E = −∇u, where u is the potential. Combining Ohm's law with conservation of charge in the steady state, $\nabla \cdot \mathbf{J} = 0$, then leads to Laplace's equation

$$\nabla \cdot (\sigma \nabla u) = 0 \quad \text{in } \Omega. \qquad (1)$$

2.1. The forward problem

In the forward problem, equation (1) is used with suitable boundary conditions to simulate the potential for a given conductivity distribution on the sample. The CEM solves equation (1) using a form of the Robin boundary condition applied to EIT:

$$u(\mathbf{x}) + z_n \sigma(\mathbf{x}) \frac{\partial u(\mathbf{x})}{\partial \mathbf{n}} = V_n, \quad \mathbf{x} \in e_n,\ n = 1, \ldots, N_{\mathrm{el}}, \qquad (2)$$

where $z_n$ is the contact impedance between the sample and the $n$th electrode and $V_n$ is the voltage reading on the electrode, while $e_n$ represents the position of the $n$th electrode. $\mathbf{x}$ is a spatial variable of the conductive surface, $\Omega$, and $\mathbf{n}$ is the normal to the boundary, $\partial\Omega$. The model also enforces the current density to be zero on the boundary except at the electrodes [40]:

$$\sigma(\mathbf{x}) \frac{\partial u(\mathbf{x})}{\partial \mathbf{n}} = 0, \quad \mathbf{x} \in \partial\Omega \setminus \bigcup_{n} e_n. \qquad (3)$$

The complexity of the second-order differential equation (equation (1)) that is solved in EIT requires numerical methods to approximate the solution. One of the methods for the solution of the forward problem is the finite element method (FEM) that is based on the Galerkin method [41]. Firstly, we consider the surface of interest, Ω, and its boundary, ∂Ω. Ω is discretised by creating a mesh of N triangles which form the elements of the model. The vertices formed by the mesh are known as nodes. Electrode contact points correspond to labelled nodes on the boundary of the mesh, where each contact is modelled using multiple nodes to account for the finite width of the contacts. An example of the square mesh geometry used in this work, with 5 electrodes on each side giving 20 in total, is shown in figure 1. To represent all of the elements of the mesh, an N × N admittance matrix, A, is assembled such that the electric field is the same at the nodes where different triangles are connected. This admittance matrix is then used to form the linear system of equations

$$A\,\mathbf{f} = \mathbf{b}, \qquad (4)$$

where f is a vector of the potential in each element and b is a vector that gives the total flux of the current at each node. For details of the CEM and a full derivation of the admittance matrix, see [42, 43] and [44]. Once the forward model is established, the Jacobian, J, may be calculated. Its elements are defined as

$$J_{(ij),k} = \frac{\partial V_{ij}}{\partial \sigma_k} = -\int_{\Omega_k} \nabla u_i \cdot \nabla u_j \, \mathrm{d}\Omega, \qquad (5)$$

where $V_{ij}$ is the voltage measured between electrodes $i$ and $j$, $\sigma_k$ is the conductivity of element $k$ in the forward model, and the integral is over the $k$th element's surface, $\Omega_k$. The Jacobian quantifies the sensitivity of the voltages measured along the sample boundary to changes in the conductivity distribution. See the appendix of reference [45] for a full derivation of the Jacobian. This matrix is essential for the inverse problem and the electrode selection algorithm (ESA) discussed in the following sections.
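As a schematic illustration of how equations (4) and (5) are used, the sketch below solves the FEM system with SciPy's sparse solver and approximates one column of the Jacobian by a finite difference. The routine assemble_admittance and the matrix measure (mapping nodal potentials to four-terminal voltage differences) are hypothetical placeholders rather than our actual implementation, and a production solver would typically compute the Jacobian via the adjoint method rather than finite differences.

```python
# Schematic forward solve and finite-difference Jacobian column; the helpers
# assemble_admittance and measure are hypothetical placeholders.
import numpy as np
import scipy.sparse.linalg as spla

def forward_voltages(mesh, sigma, b, measure):
    """Solve A f = b (equation (4)) and extract the simulated voltage readings."""
    A = assemble_admittance(mesh, sigma)   # sparse CEM admittance matrix (hypothetical)
    f = spla.spsolve(A.tocsc(), b)         # nodal potentials
    return measure @ f                     # simulated four-terminal voltages

def jacobian_column(mesh, sigma, b, measure, k, h=1e-6):
    """Approximate dV/dsigma_k (equation (5)) by a forward difference."""
    v0 = forward_voltages(mesh, sigma, b, measure)
    sigma_k = sigma.copy()
    sigma_k[k] += h                        # perturb the conductivity of element k
    return (forward_voltages(mesh, sigma_k, b, measure) - v0) / h
```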


Figure 1. An example of the finite element meshing used in this work. The positions of electrodes and their finite widths (10% the length of one side) are represented by red lines. There are 20 electrodes in total, with three nodes used per electrode. Note the finer meshing around the electrodes compared to the centre. The x and y axes are arbitrary reference coordinates.


2.2. The inverse problem

The inverse problem is non-linear due to the fact that current does not travel in straight lines. More current passes through regions of lower impedance, which in turn makes the current's trajectory non-uniform and non-linear. Accordingly, simple linear models such as back projection (as used in x-ray computed tomography) cannot capture the complexity of the current flow. In contrast to the forward problem, where equation (1) is solved for the potential, u, for an assumed conductivity distribution, the inverse problem serves to fit a conductivity distribution that matches the measured boundary voltages. Since physically measuring voltages only samples the value of the potential at discrete points along the boundary ∂Ω, the given input information is incomplete. Consequently, the inverse problem is by definition ill-posed and ill-conditioned. As such, the EIT inverse problem requires a form of regularisation to obtain stable solutions [46, 47]. One widely used linear numerical method is called GREIT (Graz consensus Reconstruction algorithm for EIT), which relies on Tikhonov regularisation [3]. GREIT and most of the other inverse solver algorithms utilise the Jacobian matrix (equation (5)), derived in [45, 48].

The GREIT algorithm is a difference EIT inverse solver, which reconstructs the vector of conductivity change, $\mathbf{x} = \boldsymbol{\sigma} - \boldsymbol{\sigma}_r$, where $\boldsymbol{\sigma}$ and $\boldsymbol{\sigma}_r$ are the sample and reference conductivities, respectively, and $\mathbf{x}$ is of length $m$. GREIT is a linear approximation, which makes use of a reconstruction matrix, $\mathbf{R}$, and the vector of voltage measurement differences, $\Delta\mathbf{V}$, to estimate $\mathbf{x}$,

$$\mathbf{x} = \mathbf{R}\,\Delta\mathbf{V} = \mathbf{R}\left(\mathbf{V} - \mathbf{V}_r\right), \qquad (6)$$

where $\mathbf{V}_r$ is the vector of reference voltages obtained from the forward model using $\boldsymbol{\sigma}_r$. GREIT acknowledges that the resolution of EIT is limited and therefore small features in x will be blurred. So, rather than attempting to reconstruct x exactly, GREIT attempts to construct a 'desired image' $\tilde{\mathbf{x}}$,

$$\tilde{\mathbf{x}} = \mathbf{D}\mathbf{x}, \qquad (7)$$

where D is a matrix which represents a blurring operation on x. This is a one-to-one mapping, where each $\tilde{\mathbf{x}}$ element is centred on its associated x element. However, each $\tilde{\mathbf{x}}$ is defined to have a larger spatial area than its corresponding x, which gradually decreases to zero at the outer boundary of the $\tilde{\mathbf{x}}$ area. This larger 'desired image' smooths out differences in the resolution between the boundary and the centre, giving a more uniform distribution overall by exploiting the blurry nature of EIT. The original GREIT paper considered circular anomaly targets, where the desired image was a circle with a slightly larger radius [3]. In this work, D was of size m by m and based on a uniform weighting matrix w, which utilised a sigmoid function as the blurring operator; this was done using the GREIT algorithm as it is implemented in pyEIT [1].
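GREIT itself computes R by training against the desired images of equation (7) over many simulated targets; as a minimal, hypothetical illustration of a one-step linear reconstruction of the form of equation (6), the sketch below instead builds R by simple Tikhonov regularisation of a Jacobian and applies it to a difference measurement.

```python
# Hypothetical one-step linear difference reconstruction of the form x = R dV.
import numpy as np

def tikhonov_reconstruction_matrix(J, lamb=0.01):
    """R = (J^T J + lamb I)^(-1) J^T, a standard Tikhonov-regularised inverse;
    GREIT instead trains R against the blurred desired images of equation (7)."""
    n_pix = J.shape[1]
    return np.linalg.solve(J.T @ J + lamb * np.eye(n_pix), J.T)

J = np.random.rand(120, 32 * 32)        # placeholder Jacobian (measurements x pixels)
v, v_ref = np.random.rand(120), np.random.rand(120)   # measured and reference voltages
R = tikhonov_reconstruction_matrix(J)
x = R @ (v - v_ref)                     # conductivity change vector, equation (6)
image = x.reshape(32, 32)               # pixel grid for display
```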

2.3. Electrode selection algorithms (ESAs)

2.3.1. Conventional algorithms

Each EIT measurement uses a pair of electrodes to drive a current and a pair of electrodes to measure the resulting voltage. As the minimum feature size distinguishable by EIT is limited by the distance between regularly spaced electrodes, it is desirable to increase the number of electrodes used. The total number of possible measurements that may be taken rises factorially with the number of electrodes, which results in a rapid increase of the time required to take each scan and process the data. Taking more, potentially redundant, measurements adds new information, enabling higher resolution images [49]. However, only a finite number of measurements are necessary to perform EIT as long as they form a complete set, which is usually achieved by application of ESAs.

One common type of electrode selection is known as the adjacent–adjacent current–voltage drive method, in which the current is passed through a pair of adjacent electrodes and voltage is sequentially measured on all remaining adjacent pairs [50, 51]. This and other similar methods, such as the opposite–adjacent method, scale quadratically with the number of electrodes [28, 31], which is a much slower rate than factorial scaling. Such ESAs are often used in EIT, with GREIT being specified for use with the adjacent–adjacent method [3]. However, it has been shown that the adjacent–adjacent method is in fact one of the poorest performing ESAs [33, 34]. The adjacent–adjacent pattern has good sensitivity around the sample edges, but is poor in the centre and is susceptible to boundary shape, electrode position, and measurement error. The opposite–adjacent pattern has good overall sensitivity across the sample, but is limited in sensitivity near the edge. Another choice of ESA is the trigonometric drive pattern, which has good performance across the entire sample and its edges, but requires multiple independent current drivers, and the effect of unknown contact impedance is more significant [52–54]. All of these methods have their advantages and disadvantages, and are all relatively straightforward, but they are not flexible; they always perform the same combinations of measurements. In light of this, we suggest using an adaptive ESA (A-ESA), which infers new optimal measurement positions from prior reconstruction knowledge as the scan progresses.
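To make the fixed patterns concrete, the sketch below enumerates a generic opposite–adjacent protocol for a ring of electrodes: current is driven between diametrically opposite contacts while voltages are read on the remaining adjacent pairs. This is an illustrative enumeration, not the exact one used in our measurement scripts.

```python
# Generic opposite-adjacent pattern for n_el electrodes indexed 0..n_el-1.
def opposite_adjacent_pattern(n_el=20):
    measurements = []                         # (source, sink, v+, v-) combinations
    for src in range(n_el):
        snk = (src + n_el // 2) % n_el        # current driven to the opposite electrode
        for v in range(n_el):
            vp, vm = v, (v + 1) % n_el        # adjacent voltage measurement pair
            if {vp, vm} & {src, snk}:         # skip pairs reusing a current electrode
                continue
            measurements.append((src, snk, vp, vm))
    return measurements

print(len(opposite_adjacent_pattern(20)))     # quadratic growth with electrode count
```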

2.3.2. Machine learning adaptive electrode selection algorithm (A-ESA)

In this work, a novel A-ESA was developed to use machine learning and representative training data to achieve a more efficient measurement selection procedure. The current implementation of the A-ESA relies solely on the GREIT reconstruction image, and subsequently does not require information on the sample geometry. However, it does make two assumptions about the nature of EIT and the data available:

  • (a)  
    Each new voltage measurement provides information gain, but may increase the total error associated with matrix R, which is the parameter to be minimized for reconstruction [3].
  • (b)  
    The GREIT algorithm is more likely to neglect anomalies which are close to a previously detected deviation, since the gradient of the conductivity is larger at the deviation.

The A-ESA aims to identify zones of interest which have a high probability of containing a deviation from the current estimate of the conductivity reconstruction image, $\boldsymbol{\sigma}$, which is stored as an $n_x \times n_y$ pixel image. The closer one gets to the boundary of a detected anomaly, the larger the conductivity gradient becomes. One can establish a deviation map, which is a combination of the gradient map and the logarithm of the conductivity map. The gradient map, G, for a 2D distribution is given by

$$G_{ij} = \sqrt{\left(\sigma_{i+1,j} - \sigma_{i,j}\right)^2 + \left(\sigma_{i,j+1} - \sigma_{i,j}\right)^2}, \qquad (8)$$

where i and j run over the x and y pixel elements of σij , and all contributions of second order and above have been neglected. The logarithmic map, L, is the logarithm of the conductivity

$$L_{ij} = \ln \sigma_{ij}, \qquad (9)$$

which enables the deviation map to be calculated using

$$\mathbf{D} = \mathbf{G} \odot \mathbf{L}^{\Diamond q_h}. \qquad (10)$$

⊙ represents a Hadamard product (element-wise), while the superscript ◊ represents an element-wise power. $q_h$ denotes a hyperparameter intrinsic to the A-ESA, which must be pre-determined by Bayesian optimization and training on simulated data [55].
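A minimal NumPy sketch of the deviation map of equations (8)–(10), assuming a first-order forward-difference gradient; the hyperparameter default is the optimised value from table 1, used here purely as an illustrative placeholder.

```python
import numpy as np

def deviation_map(sigma, q_h=-14):
    """Deviation map D = G (Hadamard) L^{q_h} (equations (8)-(10)), as a rough sketch.

    sigma : (nx, ny) GREIT conductivity reconstruction, assumed positive;
    numerical guards (e.g. clipping values where log(sigma) = 0) are omitted.
    """
    gx = np.diff(sigma, axis=0, append=sigma[-1:, :])   # first-order x difference
    gy = np.diff(sigma, axis=1, append=sigma[:, -1:])   # first-order y difference
    G = np.sqrt(gx**2 + gy**2)                          # gradient map, equation (8)
    L = np.log(sigma)                                   # logarithmic map, equation (9)
    return G * L**q_h                                   # Hadamard product, equation (10)
```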

Next, the influence map S is obtained by applying perturbations to all previously measured voltages, which are stored in a vector $\mathbf{V}$ of size $m$. A constant perturbing matrix, $\mathbf{P}$, is used to perturb all voltages at once:

$$\mathbf{P} = \mathbf{1} + p_h \mathbf{I}, \qquad (11)$$

$$\mathbf{V}_{\text{pert}} = \mathbf{P}\,\mathrm{diag}(\mathbf{V}), \qquad (12)$$

where $\mathbf{1}$ is an $m$ by $m$ matrix of ones, $\mathbf{I}$ is an $m$ by $m$ identity matrix, and $p_h$ is the perturbation constant hyperparameter. $\mathbf{V}_{\text{pert}}$ ($m \times m$) is a matrix of perturbed voltages, where each row is equal to the original voltage vector, $\mathbf{V}$, except for one perturbed value. That is, in the $k$th row, ${\mathbf{V}}_{\text{pert}}^{(k)}$, only the $k$th value is perturbed by a factor of $p_h$,

$${\mathbf{V}}_{\text{pert}}^{(k)} = \left(V_1, \ldots, V_{k-1}, (1 + p_h)V_k, V_{k+1}, \ldots, V_m\right). \qquad (13)$$

Vectors of the difference between the perturbed and reference voltages are found for each row of $\mathbf{V}_{\text{pert}}$,

$${\Delta}\mathbf{V}_{\text{pert}}^{(k)} = {\mathbf{V}}_{\text{pert}}^{(k)} - \mathbf{V}, \qquad (14)$$

where ${\Delta}{\boldsymbol{V}}_{\text{pert}}^{(k)}$ is the kth difference vector. Since there are m different ${\Delta}{\boldsymbol{V}}_{\text{pert}}^{(k)}$ vectors, one for each k-value, m conductivity deviation vectors are reconstructed using GREIT,

$${\Delta}\boldsymbol{\sigma}^{(k)} = \mathbf{R}\,{\Delta}\mathbf{V}_{\text{pert}}^{(k)}, \quad k = 1, \ldots, m, \qquad (15),\ (16)$$

where $\mathbf{R}$ is the reconstruction matrix from equation (6). After this stage, the influence map is created by summing all $m$ ${\Delta}\boldsymbol{\sigma}^{(k)}$ vectors over $k$. The resulting vector is then reshaped into an $n_x \times n_y$ matrix, where $n_x$ and $n_y$ are the horizontal and vertical spatial coordinates of the GREIT pixel grid:

$$\mathbf{s} = \sum_{k=1}^{m} {\Delta}\boldsymbol{\sigma}^{(k)}, \qquad (17)$$

$$\mathbf{S} = \operatorname{reshape}\left(\mathbf{s},\ n_x \times n_y\right). \qquad (18)$$

The Hadamard product of the deviation and influence maps is taken to produce the $n_x \times n_y$ total map,

$$\mathbf{T} = \mathbf{D} \odot \mathbf{S}^{\Diamond r_h}, \qquad (19),\ (20)$$

where $r_h$ is a third hyperparameter.

Therefore, the total map T is a measure of which pixels are the best for measurements in terms of gaining information about the overall conductivity distribution. T is constructed by combining the deviation map D (the variation of conductivity gradient across the pixels of the previous GREIT reconstruction) and the influence map S (sensitivity of the conductivity of certain pixels to deviations in measured voltage). The total map thus considers both the pixel sensitivity to voltage change and the gradient of pixel conductivity.
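Combining the two maps, a compact NumPy sketch of equations (11)–(20) might look as follows; it reuses the deviation_map sketch above, R denotes the GREIT reconstruction matrix, and the hyperparameter defaults are the optimised values from table 1, used purely as placeholders.

```python
import numpy as np

def total_map(v, R, sigma_img, p_h=0.156, q_h=-14, r_h=24):
    """Rough sketch of the total map T (equations (11)-(20)).

    v         : (m,) vector of previously measured voltages
    R         : (n_pixels, m) GREIT reconstruction matrix
    sigma_img : (nx, ny) current GREIT conductivity reconstruction
    """
    m = v.size
    nx, ny = sigma_img.shape
    # Each row of V_pert equals v with one entry perturbed by a factor p_h,
    # matching equations (11)-(13).
    V_pert = np.tile(v, (m, 1)) + p_h * np.diag(v)
    dV = V_pert - v                              # equation (14), row-wise
    d_sigma = dV @ R.T                           # equations (15)-(16): rows are R @ dV^(k)
    S = d_sigma.sum(axis=0).reshape(nx, ny)      # equations (17)-(18): influence map
    D = deviation_map(sigma_img, q_h)            # deviation map sketch from above
    return D * S**r_h                            # equations (19)-(20): total map
```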


Figure 2. Illustrations of the A-ESA current and voltage pair selection, where electrodes are shown by small red squares, unless stated otherwise. Note, the blank grid represents the pixelated format of the GREIT reconstruction map, which would be 64 × 64 or 128 × 128 pixels in actual use. (a) In current pair selection, a line is drawn between each pair of electrodes in turn, crossing over some pixels, which are shown in blue. The total map values for the blue pixels of each current pair are summed, and the subset of pairs with the highest sum are suggested for further measurements. (b) In voltage pair selection, pixels with a high enough total map value belong to a region of interest, as per equation (21), which are shown in green. The optimal current pair is shown in blue and the voltage pair being considered at this point is shown in purple. The purple pair is considered the most optimal if it has the highest sum of Jacobian elements associated with the green region of interest, thus offering the most information and improvement for subsequent reconstructions.


2.3.3. A-ESA current and voltage pair selection

After the total map has been calculated, the A-ESA uses it to identify the current pairs and corresponding voltage pairs that offer the most information gain on the conductivity distribution. The A-ESA first traverses all combinations of current pairs, then selects an optimal subset using the total map value sum for each pair. Then, for each optimal current pair, it traverses all voltage pairs to select an optimal subset of voltage pairs for that current pair. This is done by considering a straight line between every possible pair of current electrodes and evaluating the sum of the total map values for the pixels crossed by the line, as shown by the illustration of a pixel grid in figure 2(a). The current pairs are ranked according to their 'crossed pixel total map sum'; current pairs with the highest sum of total map values are the most optimal. Subsequently, the single best current pair, or a subset of the top pairs from the ranking, are the first to be considered. After an optimal current pair has been identified, all possible voltage pairs associated with the selected current pair are evaluated in turn to obtain a subset of optimal voltage pairs. This stage utilises the Jacobian matrix, which was calculated when solving the forward problem. The Jacobian is a measure of how sensitive each voltage pair is to the conductivity of each pixel, whereas the total map is the metric of how much information can be gained by measuring the conductivity over each pixel. Each voltage pair has a unique set of Jacobian elements, where each element corresponds to a particular conductivity pixel and associated total map value. As shown in figure 2(b), regions of interest are identified by considering pixels whose total map value is above a given percentage of the maximum total map value:

$$\mathrm{ROI} = \left\{(i, j) : T_{ij} \geqslant t_h \max_{i,j} T_{ij}\right\}, \qquad (21)$$

where $t_h$ is a hyperparameter bound between 0 and 1. For each voltage measurement, $V_a$, its corresponding Jacobian elements associated with the regions of interest are summed,

$$S_a = \sum_{b \in \mathrm{ROI}} \frac{\partial V_a}{\partial \sigma_b}, \qquad (22)$$

where $\sigma_b$ is the conductivity value corresponding to pixel $b$ of the pixels which make up the region of interest. Voltage pairs that are the most sensitive to the conductivity in the region of interest offer the most information and potential improvement in the next reconstruction. The subset of voltage pairs with the highest sum of Jacobian elements are considered the most sensitive, and hence the most effective measuring pairs. The described current and voltage pair selection offers an optimised set of four-terminal measurements to be performed on the sample, which will subsequently give a better GREIT reconstruction. The process repeats iteratively until some user-defined stopping condition is met, such as 10 iterations of 50 measurements each, or until a desired accuracy is achieved.
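The two ranking stages can be sketched as follows, assuming a helper pixels_on_line (the pixels crossed by the chord between two electrode positions, e.g. via Bresenham's algorithm) and a dictionary mapping each candidate voltage pair to its Jacobian row; both helpers are hypothetical.

```python
import numpy as np
from itertools import combinations

def rank_current_pairs(T, electrode_px, n_best=5):
    """Rank current pairs by the summed total map values of crossed pixels."""
    scores = []
    for a, b in combinations(range(len(electrode_px)), 2):
        crossed = pixels_on_line(electrode_px[a], electrode_px[b])  # hypothetical helper
        scores.append(((a, b), sum(T[i, j] for i, j in crossed)))
    scores.sort(key=lambda s: -s[1])                    # highest sum is most optimal
    return [pair for pair, _ in scores[:n_best]]

def rank_voltage_pairs(T, jacobian_rows, t_h=0.690, n_best=10):
    """Rank voltage pairs by summed Jacobian elements over the region of interest."""
    roi = T >= t_h * T.max()                            # equation (21)
    sens = {pair: row.reshape(T.shape)[roi].sum()       # equation (22)
            for pair, row in jacobian_rows.items()}
    return sorted(sens, key=sens.get, reverse=True)[:n_best]
```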

2.4. U-Net convolutional neural network (CNN)

One far-reaching application of machine learning is the use of CNNs for image post-processing. CNNs have proven to be effective at filtering images for feature extraction and noise suppression, resulting in higher clarity images [56], and have recently seen application in EIT [16, 18, 20, 24]. One such CNN commonly used for image segmentation is known as U-Net, which was designed for use in neuron imaging and cell tracking [39]. The deep D-Bar CNN architecture is shown in figure 3, which was adapted from [9] for this work to use GREIT instead of D-Bar reconstructions. CNNs utilise a series of convolutional layers that transform the input into a latent space, which achieves a more compact representation of features in the data. This is performed using element-wise multiplication between a 3 × 3 pixel convolutional filter and a region of the image of the same size. By scanning the filter sequentially across the image, the filters act to average regions of pixels together, which are then pooled and combined. A typical feature of this model is the skipping of multiple layers, which gives the architecture its floor-like structure [9]. U-Net applies discrete convolutions to the input image, followed by the common rectified linear unit (ReLU) activation function. For a full description of how the U-Net CNN works, see [9, 39].


Figure 3. U-Net CNN architecture based on the deep D-Bar CNN, modified to improve reconstructions from GREIT. The image shows how the reconstructed conductivity map, σ , is input as a 64 × 64 pixel image (can also be 128 × 128) and passes through several convolutional layers. After the signal has propagated through the network, it produces a processed image, σ pred. Reproduced from [9]. CC BY 4.0.


In this work, we used the quadratic loss function, $\mathcal{L}$, as the measure of error between the true conductivity map and the output produced by the deep learning model via the back propagation process:

$$\mathcal{L} = \sum_{i,j} \left(\sigma_{ij}^{\text{True}} - \sigma_{ij}^{\text{Pred}}\right)^2, \qquad (23)$$

where $\boldsymbol{\sigma}^{\text{True}}$ is the true conductivity map and $\boldsymbol{\sigma}^{\text{Pred}}$ is the prediction output of the CNN. We modified the standard U-Net architecture by introducing zero padding, achieved by adding a frame of zeros to the result of the convolution at each layer. This ensured the preservation of the size of the input image as it propagated through the network to the output. The neural network structure was written using the Keras library and TensorFlow back-end [57, 58]. Note that the same loss measure was also used to evaluate the forward solvers and ESAs, in which case $\boldsymbol{\sigma}^{\text{Pred}}$ was the raw reconstruction of the GREIT algorithm.
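A compact Keras sketch of a U-Net-style network with zero ('same') padding to preserve the 64 × 64 input size, compiled with the quadratic (MSE) loss of equation (23); the depth and filter counts here are illustrative placeholders, not the exact deep D-Bar configuration.

```python
# Minimal U-Net-style Keras model; 'same' padding implements the zero padding
# described above and preserves the image size through the network.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

inp = layers.Input((64, 64, 1))               # raw GREIT reconstruction as input
c1 = conv_block(inp, 32)
p1 = layers.MaxPooling2D(2)(c1)               # descend one 'floor'
c2 = conv_block(p1, 64)
u1 = layers.UpSampling2D(2)(c2)               # ascend again
m1 = layers.concatenate([u1, c1])             # skip connection across the floor
c3 = conv_block(m1, 32)
out = layers.Conv2D(1, 1)(c3)                 # processed conductivity map

model = Model(inp, out)
model.compile(optimizer='adam', loss='mse')   # quadratic loss, equation (23)
```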

3. Implementation and testing

The work described here was performed in three main parts. The first was the enhancement of pyEIT's simplified forward solver with the CEM, as well as its evaluation using the GREIT algorithm. The second was the development of the adaptive ESA and the optimisation of its four trainable hyperparameters. Finally, the U-Net CNN was trained, the CNN-processed reconstructions were compared to the unprocessed reconstructions, and sensitivity tests on Gaussian-shaped anomalies were performed.

3.1. Complete electrode model (CEM) enhanced forward solver

The newly implemented forward solver is the first pyEIT forward solver to utilise the CEM; it is written in Python using the external NumPy and CuPy (for GPU acceleration) libraries [59, 60]. The code is completely vectorised to reduce the execution time and uses the 2D FEM to assemble an admittance matrix from three sub-matrices [44]. The sub-matrices were calculated prior to assembly by using integrating matrices that deliver the corresponding integral result to each matrix element. This CEM-enhanced forward solver models the width of the electrodes on the sample and the contact impedance, which is the main difference between this solver and the simplified model used in the original pyEIT package [1]. To evaluate their relative performance, both solvers were used to generate GREIT reconstructions for the same test set of conductivity distributions.

The GREIT algorithm was evaluated on its ability to produce reconstructions which identify the main features of the conductivity distribution. Here, sample conductivity distributions were generated artificially, including both continuous and discontinuous anomalies, as well as their combinations. The continuous anomalies were Gaussian distributions, which imitate the presence of nearby point charges, while the discontinuous anomalies were triangles, ellipses and lines of uniform conductivity. Between one and five anomalies of random size and position were used per sample. The triangles and ellipses had normally distributed conductivity values with a mean at the reference conductivity, whereas the lines had a conductivity of 0.001% of the reference, to simulate an electrical discontinuity or cut through the sample. Synthetic Gaussian noise with a standard deviation of 3% was added to each simulated voltage measurement. Since the samples were simulated, their true conductivity distributions were known and were subsequently used to calculate the $\mathcal{L}$ losses of the corresponding GREIT reconstructions.
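A hedged sketch of this kind of synthetic sample generation, for a single 2D Gaussian anomaly with injected noise; the noise level is an illustrative placeholder, and the shape-based (triangle, ellipse, line) generators follow the same pattern but are omitted for brevity.

```python
import numpy as np

def gaussian_sample(nx=64, ny=64, rng=None):
    """One synthetic conductivity map: a random Gaussian anomaly plus noise."""
    rng = rng if rng is not None else np.random.default_rng()
    x, y = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
    amp = rng.uniform(-1.0, 1.0)               # amplitude relative to the reference
    sd = 2 * rng.uniform(0.05, 0.5)            # std as a fraction of the side (length 2)
    x0, y0 = rng.uniform(-0.9, 0.9, size=2)    # anomaly centre
    sigma = 1.0 + amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sd**2))
    return sigma + rng.normal(0, 0.01, sigma.shape)   # placeholder noise level
```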

3.2. A-ESA implementation and hyperparameter optimisation

The A-ESA was written in Python, in a fully vectorised form, using the NumPy library [59] and the pyEIT implementation of the GREIT algorithm [1, 3]. Before use, the A-ESA's four hyperparameters were optimised on simulated training data, where 1000 samples with different conductivity distributions were used. Each conductivity distribution was reconstructed using 15 iterations of 10 voltage measurements and their losses calculated. 150 measurements per sample for 1000 samples resulted in significant computation time for a single evaluation of the objective function. Bayesian optimisation is a technique commonly used to sample time-expensive objective functions [55]. As such, this method was applied to the A-ESA for 100 calls, where each evaluation used different values of the hyperparameters. The A-ESA can be represented in an objective function form by

$$f_i^{(k)} = n_i^{(k)}\, \mathcal{L}_i^{(k)}, \qquad (24)$$

where ${f}_{i}^{(k)}$ represents the ith component of the objective function vector for the kth training sample, which measures the speed of convergence towards the true conductivity map. ${n}_{i}^{(k)}$ is the number of performed voltage measurements before the ith loss value, ${\mathcal{L}}_{i}^{(k)}$, is calculated. The index i runs from 1 to $N_m/10$, where $N_m$ is the total number of measurements (150 in this case). The sample number k runs from 1 to 1000 and, to get the objective function value for a sample k, the elements of $\mathbf{f}^{(k)}$ are summed. Bayesian optimisation evaluates the error loss for different values of the hyperparameters, quickly finding an optimal set with a minimised loss. The final normalised loss value for the optimised A-ESA is given by

$$F = \frac{1}{1000} \sum_{k=1}^{1000} \sum_{i=1}^{N_m/10} f_i^{(k)}. \qquad (25)$$

The Bayesian search was performed on continuous conductivity samples. The bounds on the hyperparameters were chosen according to the mathematical limits of the parameters and some initial intuitions gained from small, trial optimisation runs. After several Bayesian searches over 1000 different samples, a final set of optimal hyperparameters was identified, as shown in table 1.

Table 1. The initial bounds and results of the A-ESA hyperparameter optimisation. For real number bound types the hyperparameter could take on any real continuous value between the defined limits.

Hyperparameter optimisation results

Hyperparameter name          | Symbol | Initial bounds | Bound type | Optimal value
Perturbation constant        | $p_h$  | [0.001, 1]     | Real       | 0.156
Conductivity logarithm power | $q_h$  | [−40, 100]     | Integer    | −14
Influence matrix power       | $r_h$  | [−40, 100]     | Integer    | 24
Total map cut-off fraction   | $t_h$  | [0.001, 1]     | Real       | 0.690

After the training process, these optimised hyperparameters were used in the A-ESA for its evaluation against two standard measuring techniques: the opposite–adjacent ESA (opposite electrode current driver with adjacent electrodes as the voltage measurement pairs) and the adjacent–adjacent ESA.
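The hyperparameter search can be expressed with a generic Bayesian optimiser such as scikit-optimize's gp_minimize; the objective below is a hypothetical placeholder (run_aesa_training) standing in for evaluating equation (24) over the training samples.

```python
# Hypothetical sketch of the hyperparameter search with scikit-optimize.
from skopt import gp_minimize
from skopt.space import Real, Integer

space = [Real(0.001, 1.0, name='p_h'),        # perturbation constant
         Integer(-40, 100, name='q_h'),       # conductivity logarithm power
         Integer(-40, 100, name='r_h'),       # influence matrix power
         Real(0.001, 1.0, name='t_h')]        # total map cut-off fraction

def objective(params):
    p_h, q_h, r_h, t_h = params
    return run_aesa_training(p_h, q_h, r_h, t_h)   # hypothetical training run

result = gp_minimize(objective, space, n_calls=100, random_state=0)
print(result.x)                               # compare with the values in table 1
```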

3.3. Neural network training and sensitivity tests

One of the motivations of this research was to determine whether EIT is suitable to detect charges close to the surface of an examined sample. As such, point charges were modelled using Gaussian-distributed spots of conductivity with added uncorrelated noise. The U-Net CNN was used for post-processing of the GREIT conductivity reconstructions, to see whether the neural network was able to improve upon the reconstruction accuracy. To enable this, the CNN was trained and tested on a total of 130 000 samples, whose losses were minimised during the training process using the Adam optimiser [61]. To train, sample reconstructions were input into the network, processed, and the network output evaluated against the true conductivity map of the sample using equation (23).

The training samples were generated in the same way as outlined in section 3.1. GREIT reconstructions were made for each of the samples using an ordered electrode selection. First, the number of source-sink pairs was set to 15. The 15 sources were taken from a list of the electrodes, ordered by the remainder of their division by 5 (i.e. 0, 5, 10, 15, 1, 6, 11, 16, 2, 7, 12, 17, 3, 8, 13, 18, 4, 9, 14, 19). The 0th electrode was at the leftmost edge of one of the square sample sides and the rest were ordered clockwise. Next, a random fixed electrode distance between the source-sink pairs was used to select the corresponding sink electrode. This process was used to spread the source-sink pairs around the sample, enabling a more consistent reconstruction. A limited number of voltage pairs were selected randomly, using a random distance between the voltage pair electrodes for each sample. Any measurement lines where sources or sinks crossed over were discarded. Subsequently, for each sample, an array of current and voltage pairs without repeated lines was created. This selection algorithm allowed for randomisation in the selection of current–voltage pairs, while retaining consistency of reconstructions by spreading measurements around the training sample. In order to improve the similarity to real EIT data, white Gaussian noise with a mean of zero was added to the simulated voltage readings. 60 000 samples of mixed geometric and Gaussian type anomalies were produced for training, and 10 000 more were used for testing of the CNN. A further 40 000 reconstructions of samples with only a single Gaussian anomaly were also used to train the network.
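The source ordering described above can be reproduced with a one-line sort, as in the short sketch below; the random sink selection is shown under the same assumptions.

```python
import numpy as np

rng = np.random.default_rng()
n_el = 20
# Sources ordered by the remainder of division by 5: 0, 5, 10, 15, 1, 6, 11, 16, ...
sources = sorted(range(n_el), key=lambda e: (e % 5, e))[:15]
dist = int(rng.integers(1, n_el))             # fixed random source-sink distance
current_pairs = [(s, (s + dist) % n_el) for s in sources]
```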

An initial test was performed only on the geometric type anomalies. The CNN processing was applied to qualitatively evaluate how well the CNN could reproduce the sharp, discontinuous edges of these anomalies. As EIT is inherently blurry, there is some utility in post-processing procedures that can reduce this. Next, to quantify the performance of the CNN, sensitivity tests were devised, which featured uncorrelated Gaussian anomalies of 20 different amplitudes and 10 different standard deviations, giving 200 different types of anomalies in total. The amplitudes were uniformly distributed from −1 to 1, at intervals of 0.1, while the standard deviations varied between 5% to 50% of the side of the square sample, at intervals of 5%, where all values were fractional deviations from the reference conductivity. Each type of Gaussian anomaly was simulated with their means at differing spatial positions within a 10 × 10 grid format, where the xy coordinates varied between ±0.9, as shown in figure 4(c).


Figure 4. (a) and (b) show the true images of a simulated conductivity distribution. They each contain a Gaussian anomaly of the same amplitude (0.25 above the reference) and the same position (0.1, 0.1), but differing standard deviations of 0.1 for (a) and 0.5 for (b). (c) is the regular grid positioning that each Gaussian anomaly type was averaged over.


This resulted in a total of 20 000 conductivity distributions for the CNN's testing set. For each anomaly of particular amplitude and standard deviation, their reconstruction losses were averaged over the 100 different mean locations to obtain an average $\mathcal{L}$ loss. The deviation in position between the Gaussian anomaly mean for the true, raw and processed maps was also calculated, which gave a direct measure of how similar the raw and processed maps were to the true data. Once the raw and processed reconstructions were made for each anomaly type, their averaged losses and errors in mean position were plotted against the Gaussian's amplitude, with different plots for each of the 10 different standard deviations. At the end of simulated data generation there was a total of 130 000 true maps and GREIT reconstructions that were used for training and testing of the CNN, which included geometrically shaped and Gaussian type anomalies. This training data allowed evaluation of the CNN on its ability to process the GREIT reconstructions and obtain an image closer to the true conductivity map.

4. Simulation results

The training, optimisation, and testing of all the different algorithms and software was performed on square samples with 20 electrodes. Each contact width was 10% the length of the side, as shown in figure 1. The GREIT regularisation parameter, λ, was fixed to the default (λ = 0.01) used by the original pyEIT package. We found fixing λ was sufficient for the simulated conductivity distributions, as the visual clarity of reconstructions did not vary much for λ ∈ [0.001, 0.01], which was most likely due to the simplicity of the synthetic anomaly generation.

4.1. Forward solver evaluation

The comparison between the new, CEM-enhanced FEM solver and the original simplified version found in the pyEIT package involved evaluation of the losses $(\mathcal{L})$ of the conductivity reconstructions for 10 000 different samples. A visual comparison of the reconstructions between the new and the old solvers for three different samples is shown in figure 5. From left to right, the first three images show the true conductivity map (a), the old forward solver reconstruction (b), and the new CEM-enhanced forward solver reconstruction (c). Panel (a) shows that the sample was composed of a triangular, elliptical, and linear anomaly. Visually comparing (b) and (c) shows that both the old and new solvers performed to the same degree. However, the CEM-solver was able to better capture the conductivity with more consistency than the old solver. In images (d)–(f) the new solver performed better than the old solver, resolving more of the true triangle anomaly shape. In images (g)–(i), both solvers distinguished the two anomalies, capturing their approximate shapes and the surrounding conductivity variation. However, in this case the old solver performed slightly better at capturing the shape of the elliptical and triangular anomalies than the new solver. Overall, the old and new solver reconstructions only show minor differences under visual comparison. However, the calculated loss of the new solver when averaged over 10 000 samples was 3% lower than that of the old simplified forward solver.


Figure 5. Comparison between the original pyEIT solver and CEM-enhanced forward solver. The left column (a), (d) and (g) shows maps of true conductivity. The middle (b), (e) and (h) and right (c), (f) and (i) columns show corresponding reconstructions using the original and enhanced solvers, respectively. Three cases are studied: (a)–(c) a high conductivity triangle and intermediate conductivity line against a portion of a low conductivity ellipse; (d)–(f) a short, high conductivity line and larger, low conductivity triangle on an intermediate background; (g)–(i) a high conductivity triangle and a half cut-off low conductivity ellipse against an intermediate background. The jagged edges along the shape boundaries on the true conductivity maps are due to the injection of noise. Note that all conductivity values are normalised so that the reference background conductivity is centered on the colour bar scale at 0, and true zero conductivity is at −1.


4.2. A-ESA evaluation

To evaluate the A-ESA, its reconstruction error loss was compared to the losses achieved by two commonly-used ESAs: the opposite–adjacent and adjacent–adjacent measurement protocols. Initially a direct comparison between all three methods was performed on two different samples, where their losses were recorded over 120 measurements in steps of 10, as shown in figure 6. Panels (a) and (b) show the reconstruction loss comparison for the two samples, where the losses are normalised by the lowest error achieved by the A-ESA. Both plots show that the A-ESA achieved a consistently lower loss than the opposite–adjacent method before both methods converged beyond 100 measurements. The adjacent–adjacent method clearly performed very poorly, with a final loss of approximately three times that of the A-ESA in either case.


Figure 6. Reconstruction loss $(\mathcal{L})$ comparison between the different ESAs, where the A-ESA loss data is shown by red circles, the Opp–Adj has green squares and the Adj–Adj has blue triangles. Panels (a) and (b) are for different samples, and both show how the A-ESA outperforms the standard measurement techniques, achieving lower loss values in fewer measurements. All losses are normalised by the lowest loss reached by A-ESA.


Next, the different ESAs were evaluated over 2000 different samples. This was repeated over three runs using the same samples but re-initialising the forward solver mesh each time. For each run the 2000 sample losses were summed and normalised, after which the three data sets were averaged and plotted, as shown in figure 7. For all 120 measurements the averaged $\mathcal{L}$ losses of the A-ESA reconstructions were lower than the losses of the opposite–adjacent protocol and significantly lower than those of the adjacent–adjacent method. This difference in performance was even more apparent in the region of 40 to 60 voltage measurements, where the loss of the A-ESA was approximately 75% of the loss of the opposite–adjacent method. The decrease in the loss was steeper at fewer measurements for both the A-ESA and Opp–Adj method. However, the A-ESA loss gradient was slightly larger up until the 80th measurement, after which both methods became comparable and the final loss of the Opp–Adj was only 12% higher than that of the A-ESA. The Adj–Adj method performed poorly, never achieving anything lower than 3.5 times the final loss of the A-ESA. Despite the eventual loss convergence of the Opp–Adj and A-ESA methods, by the 60th measurement the A-ESA loss was within 20% of its final loss, whereas the Opp–Adj was only within 60%. The goal of the A-ESA development was not to get a lower loss overall, but to achieve some desired loss level faster than conventional ESAs. Figure 7 clearly shows this goal was achieved, with the extra benefit of lower overall losses as well.


Figure 7. Reconstruction loss $(\mathcal{L})$ comparison between the different ESAs, where the A-ESA loss data is shown by red circles, the Opp–Adj has green squares and the Adj–Adj has blue triangles. The data shown is the summed average of three repeated runs of 2000 samples, where the forward mesh was re-initialised for each run. The A-ESA performed the best with the fastest convergence to low losses, followed by the Opp–Adj method, whereas the Adj–Adj method had significantly higher losses overall. The standard deviation of the A-ESA was also less than the other methods, as indicated by the error bars, some of which are too small to be seen. All losses are normalised by the lowest loss reached by A-ESA.


4.3. U-Net CNN post-processing and sensitivity tests

The CNN was applied to a variety of GREIT reconstructions, examples of which are shown in figure 8. In general, the CNN post-processing was able to offer significant improvement over raw GREIT reconstructions. The extremely high clarity and accuracy may partially be the result of some over-fitting of the training data. Nevertheless, the analysis demonstrates the utility of machine learning neural networks for post-processing of the reconstruction images: whether for improving image clarity, as in this work; or for classification of specific anomalies that may occur. A focused and thorough analysis would enable a more rigorous training and testing procedure to be developed, along with a more developed CNN for use in EIT applications.


Figure 8. Comparison between raw and CNN post-processed reconstructions from true conductivity maps: (a)–(c) insulating line crossing regions with different conductivity, (d)–(f) insulating triangle on a homogeneous conductive background. The CNN post-processing significantly improved upon the clarity of the raw GREIT reconstruction. Note that due to extensive training, there may be some over-fitting present.


The sensitivity of the GREIT algorithm was tested on samples with continuous conductivity distributions by simulating Gaussian anomalies at 100 different positions on the sample, and averaging the losses of their reconstructions. The losses of the reconstruction and the errors in the estimation of the mean position were plotted against the amplitudes of the anomalies, as a function of deviation from the reference conductivity. The graphs from before and after CNN post-processing for a Gaussian anomaly with a standard deviation of 0.5 (25% of the square side length) are illustrated in figure 9. An example of the true image, reconstruction and processed reconstruction are shown in (a)–(c) for the positive amplitude of 0.25, with a mean at the slightly off-centre position of (0.1, 0.1). These show that the raw reconstruction reduced the true Gaussian anomaly peak to a much smaller region of high intensity, failing to retain the broad profile of the true conductivity deviation. The CNN improved upon the raw reconstruction significantly, managing to reinstate the broad nature of the true Gaussian, but skewing it to the right slightly. Three images depicting an anomaly of the same amplitude but centered on (0.7, −0.7) are shown in (d)–(f). Again, the CNN processing retrieved the smooth conductivity gradient, although not without incorrectly drawing more intensity into the bottom right corner at (1.0, −1.0).


Figure 9. Improvement on GREIT reconstruction by CNN processing. (a)–(c) True conductivity, raw, and CNN processed reconstructions for the conductivity of a Gaussian anomaly with amplitude 0.25, standard deviation 0.5, and a slightly off-centre mean position at (x = 0.1, y = 0.1); (d)–(f) plots for the same Gaussian, but with a mean position at (0.7, −0.7). (g) Squared loss and (h) variation of the error in predicted mean position versus the true Gaussian anomaly mean for the raw and CNN-processed maps.


The $\mathcal{L}$ losses of the raw and post-processed reconstructions shown in (g) are comparable at small amplitudes. However, for large amplitudes the raw reconstructions had a significant error loss. The CNN processing improved significantly upon this, reducing the loss by a factor of almost five in the case of the extreme +0.5 amplitude point. As shown in (h), the errors in the Gaussian mean positions of the raw and processed reconstructions are almost identical, although the CNN data points are slightly lower, by up to 2%. Despite the differences between the reconstructions and the true image, knowledge of the mean position was retained to the same degree between the raw GREIT maps and the CNN processed maps.

5. Application to graphene laminate film

For experimental evaluation, we chose a graphene laminate film, which was prepared using a liquid-phase exfoliated graphene suspension; details can be found elsewhere [62]. A 1 cm by 1 cm, 35 μm thick graphene sample was fabricated with 64 contacts; the high number was chosen to test the selection ability of the A-ESA. The contact widths were set to 0.02 cm (2% the length of the sample side) and the distance from each square corner to the outermost electrodes was 0.04 cm (4% the length of the edge). The FEM mesh used in the previous simulated experiments with 20 contacts was too coarse. As such, the density of nodes within the FEM mesh was adjusted to accommodate the higher number of contacts and their width. The mesh creation built into the CEM-enhanced pyEIT code enabled the mesh to vary in density, becoming finer near the electrodes and coarser towards the sample centre. The relevant variables, such as the initial internodal distance and other parameters within the software, were evaluated using Bayesian optimisation. This analysis resulted in the new mesh shown in figure 10.


Figure 10. The FEM meshing for 64 electrodes spaced around the perimeter of the square sample. Each triangular element is made up of three nodes, with finer meshing nearer the electrodes than in the centre. Red lines represent the electrodes and their width, which is 2% of the length of the square side.


The A-ESA and the opposite–adjacent selection methods were used for comparison. The four-terminal measurements were automated using the pyVisa package, which interfaced directly with our modified pyEIT software [63]. A cut was made near the centre of the sample through its full thickness and a spot of conductive silver paint was put onto the surface nearby. The setup and sample are shown in figure 11. The setup consisted of a desktop computer interfaced with a Stanford Research Systems SR860 lock-in amplifier via an Ethernet connection, and with a CYTEC VX/256 switch box via a General Purpose Interface Bus (GPIB) port. The switch box was configured into a 64 by 4 relay-module format, to match the 64 contacts on the sample perimeter. The sinusoidal 1 V lock-in signal generator was connected to a 10 kΩ resistor to provide a 100 μA current source. All measurements were performed at an a.c. frequency of 1330 Hz. The lock-in also interfaced with the switch box to perform voltage measurements across pairs of the electrodes when the corresponding switches were opened, as shown in figure 11. The desktop enabled automation of the lock-in and switch box through commands from pyVisa's application programming interface.
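A hedged sketch of this pyVisa automation follows; the resource addresses, relay command strings, and lock-in query are hypothetical placeholders, not our actual instrument configuration.

```python
# Hypothetical pyVisa automation sketch; addresses and command strings are
# placeholders, not the actual configuration used in this work.
import pyvisa

rm = pyvisa.ResourceManager()
lockin = rm.open_resource('TCPIP0::192.168.1.10::inst0::INSTR')  # SR860 via Ethernet
switch = rm.open_resource('GPIB0::7::INSTR')                     # CYTEC VX/256

def four_terminal(src, snk, vp, vm):
    """Set the relays for one four-terminal measurement and read the lock-in."""
    for cmd in (f'L{src} 0', f'L{snk} 1', f'L{vp} 2', f'L{vm} 3'):
        switch.write(cmd)                     # placeholder relay commands
    voltage = float(lockin.query('OUTP? 0'))  # placeholder lock-in voltage query
    switch.write('C')                         # placeholder: reset all relays
    return voltage
```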


Figure 11. (a) Scanning electron micrographs (SEM) of graphene laminate film device. I, contacts of the sample. II, zoomed-in area marked with the red box in I with 54° tilt. The actual thickness is obtained from the apparent thickness: 20 μm/cos 54° = 35 μm. III, cross-section of the sample processed with Zeiss crossbeam focused ion beam milling. IV, zoom-in of the gap in III reveals individual graphite sheet thicknesses. (b) 1 cm by 1 cm, 35 μm thick graphene sample after modification with a silver spot and central cut through the full sample thickness. The sample is oriented the same way as the reconstruction plots, where the first contact (numbered 0), is indicated by the red arrow in the top left. (c) is a diagram of the experimental setup, which interfaced with desktop analysis and control software via the pyVisa python commands. The lock-in was controlled via an Ethernet port, while GPIB was used to open and close the relevant relays of the switch box. The I± connection output a 1 V (rms) sinusoidal signal across a 10 kΩ resistor to produce a 100 μA current. For clarity, not all of the contacts and switch connections have been drawn.


EIT reconstructions using the opposite–adjacent ESA and the A-ESA were attempted, as shown in figure 12. The regularisation was set to the default value λ = 0.01, as λ had no significant effect on the reconstruction. Figure 12(a) shows a high conductivity in the upper-right region (yellow), which is approximately where the silver conductive spot was (red dashed circle). In the centre-left, the conductivity is lower than in the upper-right, as shown by the green–blue colour, which corresponds to the location of the cut (red dashed line). In figure 12(b), the upper-right is a pale blue, so is of higher conductivity compared to the dark blue centre-left region. Both reconstructions show lower conductivity in the centre-left of the plots and higher conductivity in the upper-right, which corresponds to reduced conductivity around the cut. The regions of higher conductivity in the top right were most likely the result of EIT reacting to the lower conductivity around the cut region, and also correspond to the region of the silver paint. However, the general trends are weak and there is some anomalous variation around the boundaries that affects the clarity.


Figure 12. Experimental GREIT reconstructions for the graphene sample. (a) shows the opposite–adjacent reconstruction, (b) shows the reconstruction from the A-ESA. As shown by the red dashed lines, the cut line runs from approximately (−0.1, −0.1) to (−0.4, 0.3), while the position of the silver spot is approximately centred on (0.3, 0.4) with a radius of 0.35. The general location and direction of the cut may be inferred by considering the change from low conductivity on the centre-left to higher conductivity on upper-right regions of each plot. Both reconstructions were of similar accuracy and showed the variation between the upper-right and centre-left to the same degree. However, the A-ESA was slightly better overall as it achieved comparable performance, but in fewer measurements than the opposite–adjacent method.


Qualitatively, both reconstructions were of similar accuracy, as they both exhibited lower conductivity around the cut. Comparing the ESAs, the A-ESA performed 1500 measurements (10 sets of 150), whereas the opposite–adjacent method required 3840. The average time for a single voltage measurement was (3.352 ± 0.005) s for the opposite–adjacent method, and (3.532 ± 0.009) s for the A-ESA, approximately 0.2 s longer per measurement. The slightly longer time for the A-ESA was caused by the analysis of perturbed GREIT reconstructions in the current and voltage pair selection. However, the reduction in the total number of measurements required makes up for any extra computation time. Overall, the A-ESA outperformed the opposite–adjacent ESA by achieving comparable accuracy in fewer measurements, which is consistent with conclusions drawn from the simulation studies.

6. Discussion

6.1. CEM-enhanced forward solver

The new, CEM-enhanced forward solver performed better than the original pyEIT forward solver, achieving a 3% lower averaged squared loss. The loss decrease with the new forward solver could be due to the addition of electrode width to the model, as the electrodes in the new solver encompassed a greater proportion of the sample boundary, thus capturing more information about the conductivity of the interior elements. GPU acceleration with the CuPy package was implemented as part of the new forward solver enhancement [60], which increased computation speed by up to five times relative to the old solver, despite the increased model complexity.
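The CuPy acceleration follows the library's NumPy-compatible design, so array allocation and linear algebra can be moved to the GPU with minimal code changes; a trivial sketch under that assumption:

```python
# Drop-in GPU acceleration with CuPy: the array API mirrors NumPy.
import cupy as cp

A = cp.random.rand(2000, 2000)                # dense example system on the GPU
b = cp.random.rand(2000)
f = cp.linalg.solve(A, b)                     # solved on the GPU
f_host = cp.asnumpy(f)                        # copy the result back to host memory
```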

6.2. The A-ESA

The Bayesian optimisation successfully found an optimal set of hyperparameters that enabled the A-ESA to outperform the opposite–adjacent and adjacent–adjacent electrode selection methods. The simulation studies showed that the A-ESA achieved approximately 20% lower overall loss than the Opp–Adj method, and faster convergence to lower losses.

The A-ESA requires time to analyse the total map between iterations of measurements. For example, for 10 sets of 100 measurements, the A-ESA will reconstruct the GREIT image 10 times. If a set of electrical measurements can be performed significantly faster than the A-ESA analysis, the latter becomes a computational bottleneck. The time taken for a single iteration of the A-ESA scales on the order of $\mathcal{O}({n}^{2})$ in the number of pixels used in the GREIT image. Generally, a greater number of contacts makes a greater number of potential measurements available. Standard ESAs then require more measurements but offer no flexibility in which ones are taken, whereas the A-ESA also grows in total number of measurements but is not restricted to any particular pattern. In the A-ESA, the user can define the number of initial measurements to take before activating the A-ESA, the number of iterations ($N_{\mathrm{iter}}$), and the number of measurements to perform each iteration ($N_{m}$). An appropriate choice of $N_{\mathrm{iter}}$ and $N_{m}$ is situation- and application-specific; a rule of thumb applied in this work was to perform no more than half of the measurements that would otherwise be taken by the opposite–adjacent technique. The time taken for a single iteration of the A-ESA against contact number and number of pixels per row is shown in figure 13. Note that this is only the time for the total map analysis and pair selection; the time for the electrical measurements themselves is omitted. The overall complexity of the A-ESA can thus be taken as the pixel-scaling term plus the product of $N_{\mathrm{iter}}$ and $N_{m}$, i.e. $\mathcal{O}({n}^{2}) + N_{\mathrm{iter}}N_{m}$, where any extra scaling with contact number is accounted for when deciding the total number of measurements to perform.
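To make the roles of these parameters concrete, the following sketch shows the overall control flow of an adaptive selection loop of this kind. It is an illustrative outline only: initial_measurements, measure, greit_reconstruct and select_next_pairs are hypothetical stand-ins for the corresponding steps in this work, not functions from pyEIT.

def adaptive_eit(n_initial: int, n_iter: int, n_m: int):
    """Outline of an adaptive electrode selection loop (illustrative only).

    n_initial: measurements taken before the adaptive stage starts.
    n_iter:    number of adaptive iterations (N_iter).
    n_m:       measurements performed per iteration (N_m).
    """
    pairs = initial_measurements(n_initial)  # e.g. an opposite-adjacent seed set
    voltages = measure(pairs)                # hardware measurement (the slow step)

    for _ in range(n_iter):
        image = greit_reconstruct(pairs, voltages)  # O(n^2) in the GREIT pixels
        new_pairs = select_next_pairs(image, n_m)   # pick the most informative pairs
        voltages += measure(new_pairs)
        pairs += new_pairs

    return greit_reconstruct(pairs, voltages)

The total cost then splits cleanly into a measurement budget of $n_{\mathrm{initial}} + N_{\mathrm{iter}}N_{m}$ and a per-iteration analysis cost that scales as $\mathcal{O}({n}^{2})$.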


Figure 13. A-ESA complexity scaling. The time taken per iteration of the A-ESA is plotted against (a) contact number and (b) number of pixels in the GREIT image (size of the total map). The data are averaged over three sets of 40 anomalies with five iterations of the A-ESA per anomaly. Both graphs show scaling on the order of $\mathcal{O}({n}^{2})$. In (a), the number of pixels was fixed to 64 and the A-ESA performed half the number of measurements of the opposite–adjacent method, which is why it has a similar scaling rate. However, the number of A-ESA iterations and the number of measurements performed each iteration are user-defined and application-specific, so the scaling of the A-ESA with contact number is not straightforward. In (b), the number of electrodes used was 32.


As this is the first attempt to apply ML to the selection of electrodes for EIT measurements, we foresee potential improvements that could be implemented in the future. (1) For samples with little variation, or where the conductivity deviations are comparable to the noise floor, the A-ESA is unlikely to select the same set of measurements each run. This may result in different reconstructions, limiting reproducibility. It could be addressed with more varied samples or different types of meshes in the training data. (2) Training and optimisation of the A-ESA hyperparameters is computationally expensive. There are many parameters within EIT, such as the lambda regularisation, contact number and contact width, which can influence the reconstruction. On top of these, the A-ESA has four hyperparameters that set certain variable thresholds and raise matrices to powers element-wise. Introducing new variables, more rigorous training and better tuning of the hyperparameters would enable higher performance of the A-ESA. (3) The underlying deviation and total map theory was developed specifically for application to GREIT reconstructions. The relative simplicity of the operations underlying the A-ESA makes its improved performance over the opposite–adjacent method all the more notable. However, additional signal processing steps or the use of static inverse solvers, such as the Gauss–Newton method, could be applied to obtain more consistent solutions.

For most clinical or industrial applications, the number of electrodes used is small, hence the current speed of EIT is sufficient. In this work, we are extending the application of EIT to the domain of microfabricated devices based on 2D materials such as graphene. Using conventional microfabrication techniques, devices with hundreds of electrodes can be routinely made, which could enable novel types of microscopic charge or current imaging sensors. For these applications in nanotechnology and biomedicine, the time required for the electrical measurements themselves is the limiting factor. This is particularly important for the time-consuming electrical measurements of small signals in nano- or microscale devices, where the measurement time dominates over typical EIT computation times. Our work offers a solution to this problem: we keep the electrode count high (to allow for flexibility in measurements) but reduce the overall measurement time by measuring only a small subset (selected by the A-ESA) of all available electrode configurations. Therefore, by adopting the A-ESA method, reconstruction performance comparable to or better than that of conventional methods can be achieved at a fraction of the measurement time.

6.3. U-Net CNN post-processing

The CNN post-processing of both the Gaussian and geometric anomalies showed an appreciable improvement over the raw reconstructions. The comparison of geometric anomalies in figure 8 shows that the CNN achieved an impressive degree of improvement over the raw GREIT reconstruction, demonstrating the benefits of machine learning post-processing. However, this drastic improvement may be partly due to the similarity between the training and testing data, despite the introduction of synthetic noise and randomised anomaly placement. In the Gaussian anomaly sensitivity tests, the CNN did not achieve such a radical improvement over the raw reconstructions. However, it remained more consistent as the Gaussian amplitudes increased, whereas the raw reconstruction deviated significantly. This is apparent in figures 9(a)–(f), where the CNN processing smoothed out some of the anomalous features introduced by EIT.
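Because this post-processing is applied after the GREIT inversion, it reduces, at inference time, to a single forward pass of the trained network over the reconstructed image. The sketch below illustrates that step under stated assumptions: unet is presumed to be an already-trained PyTorch module with single-channel input and output, and the normalisation is a generic placeholder rather than the scheme used in this work.

import numpy as np
import torch

def postprocess(greit_image: np.ndarray, unet: torch.nn.Module) -> np.ndarray:
    """Run a trained U-Net over a raw GREIT reconstruction (illustrative).

    greit_image: 2D conductivity map, e.g. of shape (64, 64).
    unet:        trained network mapping a blurry map to a sharpened one.
    """
    # Placeholder normalisation; a real pipeline would match the training setup.
    x = (greit_image - greit_image.mean()) / (greit_image.std() + 1e-8)
    x = torch.from_numpy(x.astype(np.float32))[None, None]  # shape (1, 1, H, W)

    unet.eval()                 # disable dropout/batch-norm updates
    with torch.no_grad():       # inference only, no gradients
        y = unet(x)

    return y.squeeze().numpy()  # back to a plain 2D conductivity map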

7. Conclusions

By considering the electrode width, the CEM-enhanced forward solver offered an improvement over the simplified solver from the original pyEIT software. More complex modelling resulted in better reconstruction accuracy, while GPU acceleration reduced the computation time five-fold. Such features are important for future application to 2D materials, where the finite width of the contacts becomes more significant. In addition, the machine learning A-ESA proved useful, consistently achieving lower reconstruction losses than the standard opposite–adjacent and adjacent–adjacent methods. The application of the U-Net CNN for reconstruction post-processing also showed promising initial results and highlighted the utility of deep learning, which is becoming more widely employed across many fields, including EIT. This work demonstrates the potential of EIT for 2D materials characterisation and shows that the integration of machine learning techniques can improve both the experimental and analytical aspects of such work.

A natural next step would be to test rectangular samples, as the code already has this functionality: the mesh generation, GREIT pixel images and total map matrix can all be of shape $n_{x} \times n_{y}$ (see the sketch below). Future work could also investigate differently shaped geometries, such as an ellipse or an irregular shape. As in the study in [26], machine learning could also be used to optimise the spatial positions of the electrodes around the sample, rather than placing them at regular intervals. One can even envisage an iterative robotic system that combines adaptive electrode selection with adaptive in situ electrode placement, where a set of measurements is taken, the electrodes are moved to more optimal positions, and another set of measurements is performed at the new contact locations.
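For concreteness, a rectangular reconstruction grid of shape $n_{x} \times n_{y}$ could be laid out as below. This is a generic illustration of the data layout, not code from the repository, and the grid extents are arbitrary assumptions.

import numpy as np

# Hypothetical rectangular reconstruction grid: n_x columns by n_y rows.
n_x, n_y = 96, 64
xs = np.linspace(-1.0, 1.0, n_x)   # sample spans [-1, 1] in x
ys = np.linspace(-2/3, 2/3, n_y)   # shorter extent in y (3:2 aspect ratio)
X, Y = np.meshgrid(xs, ys)         # pixel-centre coordinates, shape (n_y, n_x)

# The GREIT pixel image and total map would then share this shape.
conductivity = np.zeros((n_y, n_x))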

Acknowledgments

This research was supported by the EPSRC Early Career Fellowship (EP/N007131/1), the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 865590), and the Royal Society International Exchanges 2019 Cost Share Program (IEC\R2\192001). SS acknowledges the support from the European Graphene Flagship Core 3 Project. QY acknowledges the support from a Leverhulme Early Career Fellowship (ECF-2019-612).

Data availability statement

The data that support the findings of this study will be openly available following an embargo at the following URL/DOI: https://github.com/AdamCoxson/ML-EIT-Graphene. Data will be available from 31 January 2022.
