3D Coherent X-Ray Imaging Via Deep Convolutional Neural Networks
3Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, Menlo Park,
Abstract
As a critical component of coherent X-ray diffraction imaging (CDI), phase retrieval has been
extensively applied in X-ray structural science to recover the 3D morphological information inside
measured particles. Despite meeting all the oversampling requirements of Sayre and Shannon,
current phase retrieval approaches still have trouble achieving a unique inversion of experimental
data in the presence of noise. Here, we propose to overcome this limitation by incorporating a 3D
Machine Learning (ML) model combining (optional) supervised training with unsupervised
refinement. The trained ML model can rapidly provide an immediate result with high accuracy,
which will benefit real-time experiments. More significantly, the Neural Network model can be
used without any prior training to learn the missing phases of an image through minimization of a self-defined loss function, and we demonstrate its advantages over traditional iterative phase retrieval algorithms on experimental Bragg CDI data.
Introduction
Coherent X-ray diffraction imaging (CDI) has been widely utilized to characterize the internal
three-dimensional (3D) structure of single particles1-4. Particularly, Bragg CDI has emerged as a
promising technique for 3D strain imaging of crystalline particles5-10. As modern X-ray sources,
such as diffraction-limited storage rings and fourth-generation X-ray free-electron lasers, are
developing worldwide to provide higher coherent flux densities, time-resolved and in-situ CDI
experiments for single-particle imaging are becoming increasingly capable of exploring small particles' dynamical phenomena such as driven melting, thermal fluctuation, driven phase transitions, catalysis, and high-pressure phenomena11-15. Because the phase information is lost in
measured coherent X-ray diffraction signals, it is necessary to use phase retrieval16-19 as a key
component of CDI, to reconstruct the real-space 3D images with morphological details. These
phase retrieval methods are applied directly to the measured reciprocal-space coherent X-ray
diffraction patterns.
Until now, the most successful approach for CDI phase retrieval is to use iterative methods, such
as the hybrid input-output (HIO) method by Fienup17, the difference map (DM) by Elser20, and the
relaxed averaged alternating reflection (RAAR) method by Luke21. In general, these iterative phase
retrieval methods can be expressed as successive projections20,22, using both forward and inverse
discrete Fourier Transforms iterated in both directions. Theoretically, for a finite object, when the
modulus of its Fourier Transform is oversampled (i.e., sampled finer than the Nyquist frequency),
a unique result is guaranteed for these iterative methods17,18. However, for experimental data with
inherent noise, these projection-based iterative methods are found to struggle with local minima,
which leads to an ambiguous, rather than unique, solution23. Thus, when inverting coherent X-ray
diffraction patterns, conventional iterative phase retrieval methods typically need thousands of
iterations and switch algorithms to confidently converge to a reproducible solution and require
tuning of many algorithmic parameters and expert strategies22,24,25. Because these iterative
methods are based on projections, the calculated error (for example, the χ² between calculated and measured data) is usually only used to monitor the convergence and rarely used as feedback to
adjust the related algorithmic parameters, which makes these methods sensitive to their
initialization conditions.
Recently, machine learning (ML) has been introduced as an alternative approach to phase retrieval. Because the ML approach is data-driven, it is empowered by big data, which allows a deep network to be trained
for superior phase retrieval. There has been rapid progress for 2-dimensional (2D) phase retrieval
using convolutional neural networks (CNN) recently26,28. So far, most of the proposed neural
networks employ a supervised training strategy, matching input diffraction patterns in reciprocal
space to output particle morphological information in real space. Training a deep neural network
to map the input data to the output data not only solves the data-fitting problem but also incorporates prior information in a data-driven manner. However, such training usually needs a large dataset to optimize the weight and bias parameters of the neural network so that it can represent a universal approximation function. When a deep neural network
is trained with limited data, its ability to generalize as a universal function is reduced, as seen in
the accuracy of the reconstructed results, and a subsequent refinement procedure is needed to
follow the supervised learning26,27. Furthermore, in practice, it is difficult to obtain enough ground-
truth experimental coherent X-ray diffraction data for training. When applied to experimental data,
an ML model trained with less data than ideal may also generalize poorly and may need very lengthy experiment-specific retraining.
Here, we demonstrate a comprehensive 3D CNN based approach to reconstruct the interior
complex morphological information of a measured particle from its coherent diffraction pattern.
When trained in a supervised learning approach, this CNN model can be applied to real-time
3D single-particle imaging experiments, for example using an X-ray Free-electron Laser (XFEL)29.
Further, to improve the accuracy of the result, we find that this CNN model can be significantly improved by a subsequent unsupervised refinement, applied to both simulated CDI data and experimentally measured data. Additionally, when recovering the 3D structure
of a particle with the new unsupervised learning approach, we find no distinguishable difference
between the quality of the obtained results whether training is used or not, except for their
convergence speed. This is important in situations where the acquisition of training data is
challenging. In the unsupervised ML-based approach, the flexibility of the self-defined “loss” (or
error) function in the CNN model makes this method more robust to noisy coherent X-ray diffraction data.

Results
Fig. 1 Overall scheme of 3D deep neural network for single-particle coherent diffraction imaging inversion. The
proposed 3D neural network comprises an encoder network and two decoder networks. In the network, the
amplitude of a 3D coherent X-ray diffraction pattern in reciprocal space is used as input, and the output is the complex
structure information (i.e., amplitude and phase) of the particle in real space.
3D CNN architecture. Generally, for coherent X-ray diffraction imaging experiments, either
forward-scattering CDI or Bragg CDI30,31, the measured X-ray diffraction intensity I(Q) is

  I(Q) = |∫ ρ(r) e^{iQ·r} dr|²,  (1)

where the wavevector transfer Q = k_f − k_i − h is defined by the incident and diffracted X-ray wavevectors k_i and k_f. Here, h equals zero for a forward CDI experiment, and h is a reciprocal lattice vector of the crystal in a Bragg CDI experiment. In equation (1), the complex field ρ(r) is related to the local complex refractive index of a particle in a forward CDI experiment, while in Bragg diffraction geometry it mainly represents the local crystal lattice strain
inside a particle7,12. In all cases, this complex-valued structure information inside the particle could
also be expressed as ρ(r) = s(r)e^{iφ(r)}, where s(r) and φ(r) are the corresponding amplitude and phase distributions, respectively.
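As a concrete illustration of equation (1), the forward model can be simulated in a few lines of NumPy; the particle shape and phase used below are arbitrary stand-ins for illustration only, not the distributions used in this work.

```python
import numpy as np

# Sketch of the forward model of equation (1): the measured intensity is
# the squared modulus of the Fourier transform of the complex density
# rho(r) = s(r) * exp(i*phi(r)); the phases themselves are lost.
N = 64                                           # 64^3 arrays, as in the paper
grid = np.indices((N, N, N)) - N // 2
support = (np.abs(grid) < N // 8).all(axis=0)    # a cube-shaped toy particle
phase = 0.5 * np.sin(grid[0] / 4.0) * support    # illustrative internal phase
rho = support * np.exp(1j * phase)

# The particle occupies at most half of each dimension, so the modulus of
# its Fourier transform is oversampled (sampled finer than Nyquist).
F = np.fft.fftshift(np.fft.fftn(rho))
intensity = np.abs(F) ** 2                       # what a detector records
amplitude = np.sqrt(intensity)                   # the network's input
```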
The goal of a CDI experiment is to numerically obtain this complex particle density function
uniquely in real space, whose modulus squared of the Fourier Transform best matches the
experimental coherent X-ray diffraction intensity distribution of the measured particle18. As shown
in Fig.1, our developed deep neural network for 3D coherent X-ray diffraction imaging adopts the
typical "encoder-decoder" architecture. It takes the amplitude of the 3D coherent X-ray diffraction
pattern in reciprocal space as input and outputs the real-space amplitude and phase images. As
presented in Fig. 1, the proposed model is implemented using an architecture composed entirely
of 3D convolutional, max-pooling, and upsampling layers. The model has three main parts. The
first part is a 3D convolutional autoencoder. Then, the encoded result is separated equally into two
decoder parts to generate the amplitude and phase information of the measured particles. In this
3D CNN, the leaky rectified linear unit (LRLU) is used for all activation functions except for the final 3D convolutional layer, where the rectified linear unit (RLU) activation function is used. The modules used in Fig. 1 to connect the input from the previous layer to the next layer's output are convolution blocks (3×3×3 convolution + LRLU + BN, where BN refers to batch normalization) and upsampling blocks (upsampling + 3×3×3 convolution + LRLU + BN). It should also be mentioned that the array size of the output particle image arrays (i.e.,
amplitude and phase) in each dimension is half of the size of the input diffraction data to keep the
problem overdetermined.
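The layout described above can be sketched in PyTorch. This is an illustrative skeleton, not the authors' exact architecture: the channel widths, number of stages, and LRLU slope are placeholders; only the encoder/two-decoder structure, the block composition, and the halved output size follow the text.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # "convolution block": 3x3x3 convolution + LRLU + batch normalization
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1),
        nn.LeakyReLU(0.1),
        nn.BatchNorm3d(c_out),
    )

class CDINet(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder: 64^3 -> 16^3 via two max-pooling stages
        self.encoder = nn.Sequential(
            conv_block(1, 8), nn.MaxPool3d(2),
            conv_block(8, 16), nn.MaxPool3d(2),
        )
        # each decoder upsamples back to 32^3: half the input size in each
        # dimension, keeping the problem overdetermined
        def decoder():
            return nn.Sequential(
                nn.Upsample(scale_factor=2),
                conv_block(16, 8),
                nn.Conv3d(8, 1, 3, padding=1),
                nn.ReLU(),                  # final layer uses RLU
            )
        self.amp_decoder = decoder()
        self.phase_decoder = decoder()

    def forward(self, x):
        z = self.encoder(x)
        return self.amp_decoder(z), self.phase_decoder(z)

net = CDINet()
amp, phase = net(torch.rand(1, 1, 64, 64, 64))  # diffraction amplitude in
# amp and phase each have shape (1, 1, 32, 32, 32)
```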
Supervised learning approach. In the deep neural network supervised learning method, the
quantity and diversity of the training dataset directly affect the network’s performance when new
data are presented. In the real world, the complex structure ρ(r) = s(r)e^{iφ(r)} of a particle varies
a lot from particle to particle. For demonstration purposes, a shape known as a superellipsoid is
used to describe the particle shape s(r), and a 3D Gaussian-correlated profile is used to describe the corresponding phase φ(r) distribution (see Method for details). Then, after the generated
particle is randomly rotated in real space, a 3D coherent diffraction pattern is obtained by Fourier
Transformation. Only the amplitude information of the diffraction pattern is kept as the network input, with the generating particle serving as the ground truth.
Fig. 2 Performance of 3D CNN model in testing data. a 3D isointensities of test input coherent X-ray diffraction
patterns, which were not used for training. Here, the colors correspond to the radial distance from the origin of the
reciprocal space. b Isointensity of the ground truth for the corresponding particles. c The complex-valued image
predicted by the CNN model. Here, the isosurface plots in c and b are obtained from the amplitudes of the particles, with surface colors encoding the phase.
By applying this method with a wide range of random parameters, we simulated 30,000 3D
diffraction patterns and used them to train the CNN model. With the corresponding particles
known a priori, the proposed 3D CNN model was trained in a supervised learning approach, by
solving

  argmin_Θ l_s(ρ_p, ρ_g),  (2)

where Θ denotes the network parameters, ρ_p is the output from the CNN model, and ρ_g is the corresponding ground truth for the complex particle. For this loss function l_s, which was minimized during the training, we used a
combination of the relative root mean square error χ² and the modified Pearson correlation coefficient r (see Method for details) to measure the agreement between the output amplitude and
phase images of the predicted particles with their ground truth both in real and reciprocal space.
This is appropriate for diffraction data with a large dynamic range, since the χ² is dominated by the
strong central part of the diffraction pattern while the weaker parts, having many more pixels,
affect the r more. When training the 3D CNN model, the prepared training data were divided into two disjoint sets: 95% of the data were used to train the model, and the rest were used for validation.
While the 3D CNN model was being trained by the supervised learning approach, Supplementary
Fig. 1 shows the training and validation loss as a function of the training epochs. It can be seen
that the validation loss decreases continually. After 100 training epochs, the validation loss reaches 0.031, which illustrates that the proposed 3D CNN model can be trained effectively. To demonstrate the performance of this trained CNN model, Fig. 2 shows four representative predictions from test
diffraction patterns, not used for training the CNN model. The predicted amplitude and phase of
the particles show excellent agreement with their ground truth. This CNN model is a ML method
of phase retrieval, which provides a very fast inversion of a diffraction pattern. Unlike an iterative
phase-retrieval method, this could be very useful in a real-time 3D CDI experiment, for example at an XFEL29.
When the CNN model learns to match input coherent diffraction data to output particle data, it
does not only learn to solve the data fitting problem but also incorporates comprehensive prior
information in a data-driven manner. Perhaps the greatest strength of the machine learning-based
phase retrieval method is that the model can learn far more complex prior information than the
traditional iterative methods. The ability to ultimately learn both the best possible inverse solver
and the specific prior information makes the model very powerful. Additionally, the self-defined
loss function used as feedback to optimize the bias and weight parameters of the model makes the approach flexible. Nevertheless, sometimes the predicted results might miss subtle features in the data which were not captured by the training. To improve the quality of the reconstruction obtained, we have developed a subsequent unsupervised learning approach, which refines the reconstruction of a single diffraction pattern at a time. The problem of phase retrieval for coherent
X-ray diffraction data can then be expressed as

  argmin_Θ l_u(ρ_p(Θ), I_m),  (3)

where l_u is an unsupervised "loss" (error) function, describing the difference between the numerically obtained particle ρ_p(r) and the measured coherent X-ray diffraction intensity I_m(Q).
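A minimal sketch of this single-pattern refinement loop is given below; the tiny stand-in network, random "measured" data, and χ²-style loss are illustrative assumptions, not the configuration used in this work.

```python
import torch

# Unsupervised refinement sketch: the network weights are optimized so that
# the modulus squared of the Fourier transform of the predicted particle
# matches the measured intensity. A one-layer stand-in keeps this runnable.
net = torch.nn.Conv3d(1, 2, 3, padding=1)        # outputs amplitude + phase
measured = torch.rand(1, 1, 16, 16, 16)          # measured amplitude (stand-in)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(5):                           # a few epochs for illustration
    out = net(measured)
    amp, phase = out[:, :1], out[:, 1:]
    rho = amp * torch.exp(1j * phase)            # predicted complex particle
    calc = torch.fft.fftn(rho, dim=(-3, -2, -1)).abs() ** 2
    # chi^2-style error between calculated and "measured" intensities
    loss = ((calc - measured ** 2) ** 2).sum() / (measured ** 4).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```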
In Fig. 3, we demonstrate that the proposed unsupervised learning approach can further improve
the reconstruction quality and reach a high accuracy, comparable with the best iterative algorithms.
We tested this approach with the 3D diffraction pattern and corresponding real-space particle
shown in Supplementary Fig. 2. While the particle shape is symmetrical, the broken symmetry of
the diffraction pattern results from the internal asymmetry of the real-space 3D phase
distribution, as is common in Bragg CDI from particles with strain distributions. The trained 3D
CNN model yields the reconstructed amplitude and phase images shown in Supplementary Fig.
2c, with a corresponding estimated error of 0.13. Compared with its ground truth in Supplementary
Fig. 2b, the trained CNN model gives a relatively poor prediction, indicating that the refinement
is necessary. The trained CNN model was then used in an unsupervised approach to further refine
the reconstructed result using the loss function of equation (3). Figure 3b shows the result of this
unsupervised learning approach, and Figure 3e shows the trend of the corresponding loss function
(or error metric) with training epoch. After this refinement approach on a noise-free test particle,
significant improvement was achieved, where the error decreased from 0.13 to 2×10⁻⁵.
Fig. 3 Representative result for 3D CNN model using the unsupervised approach. a Calculated coherent X-ray
diffraction pattern from b, the predicted particle by using the trained CNN model followed by unsupervised refinement.
c, d Same using an untrained CNN model with unsupervised refinement. e Loss (or error) as a function of the training
epochs for the CNN model during refinement. f Phase Retrieval Transfer Function (PRTF) and spectral weights plotted as functions of spatial frequency.
To explore the importance of the training in this CNN model in the unsupervised learning mode,
we tested the model with the same data by using randomly initialized bias and weight parameters
for the CNN model. Comparing Figs. 3b and 3d, it can be seen that there is no obvious difference
between the final reconstructed results. Figure 3e shows the convergence of the untrained model
is slower, but it reaches the same final error. This is a significant discovery: the ability of the Neural Network to retrieve phases derives from its structure alone, and not from training, as is commonly assumed. Even though the method is “solving” a Fourier transform, there is no explicit Fourier transform appearing anywhere except in the definition of the loss function. We note that the structure of the coding side of the Neural Network, designed to replicate the structure of a brain, transforms the data into bigger and bigger connected groups, somewhat reminiscent of the layered structure of the brain.
Figure 3e shows that the loss from the trained CNN model converges faster than the untrained
version; however, there is no significant difference between the final results. To further quantify
the effect on the final results, we also calculate the phase retrieval transfer function (PRTF)34,35 of
these two reconstructions (see Method for details), shown in Fig. 3f. The PRTF is close to unity
everywhere, which indicates that the refined particle agrees quite well with the reconstruction from the trained model across all spatial frequencies. Since the internal structure of a crystalline particle is usually unknown in CDI experiments, it is vital that our proposed ML
approach gives a credible reconstruction result for phase retrieval in the presence of unavoidable
noise. With CDI experiments there is little prior knowledge of the structure available for building
a training dataset. Fig. 4a-d shows isosurface renditions of four very different experimental Bragg
coherent X-ray diffraction patterns of individual SrTiO3, BaTiO3, Pd, and Au nanocrystals, which
were measured at beamline 34-ID-C of the Advanced Photon Source using methods reported by
Robinson & Harder7. As seen in Fig. 4a-d, these four 3D Bragg coherent X-ray diffraction patterns have
different diffraction fringe spacings and directions, indicating their distinct sizes and facets in real-
space.
Fig. 4 Performance of 3D CNN model on experimental coherent X-ray diffraction data. a-d 3D plots of the
isointensity for the measured 3D coherent X-ray diffraction patterns of SrTiO3, BaTiO3, Pd, and Au nanocrystals. e-h
Corresponding isointensity plots of the 3D diffraction patterns of the predicted particles from the CNN model in the
reciprocal space. The colors in a-h correspond to the radial distance from the origin of the reciprocal space. i-l The
corresponding reconstructed particle structures from the model in real space. In i-l, the surface colors encode the
complex phase value on the surfaces of these particles. The scale bars are all 150 nm.
By using our trained 3D CNN model with the proposed unsupervised learning refinement approach,
the corresponding predicted results are shown in Fig. 4e-l. Fig. 4e-h shows the corresponding
calculated X-ray diffraction intensities, obtained as the modulus squared of the Fourier Transform
of the predicted CNN model structures shown in Fig. 4i-l. There is excellent agreement between
the experimental and calculated X-ray diffraction patterns, validating that the CNN model can reach a
high reconstruction accuracy. This result also extends to the untrained, unsupervised CNN model
with experimental data: Supplementary Fig. 3 shows the corresponding reconstruction results with
the four sets of experimental Bragg CDI data. Comparing Fig. 4 with Supplementary Fig. 3, it can
be seen that there is no obvious difference between the final reconstructed particles, which
endorses the capability of the untrained CNN model in the presence of experimental noise.
Fig. 5. Comparison of error metrics for two different methods. a Histogram of the observed χ² for the
reconstructions from the conventional iterative method and the untrained CNN model starting from random weight
and bias parameters. b The corresponding histogram of the Pearson correlation coefficient for both methods.
Given the tendency of the conventional iterative algorithms to show imperfect convergence and
non-degenerate solutions with real (noisy) experimental data, as mentioned in the introduction, we
investigated the reproducibility of the solutions obtained with our new unsupervised learning
approach for the untrained CNN model by using different random numbers seeding the calculation.
Using the coherent X-ray diffraction pattern in Fig. 4a, we repeated the reconstruction 100 times
with two different methods: the untrained CNN model with random parameters in the unsupervised
learning approach and the conventional iterative method with random initializations. As shown in Fig.
5, the corresponding error statistics revealed multiple solutions in both cases, with roughly the same χ² = 0.0241 ± 0.0005 (standard deviation). The r value of 0.9922 for ML was slightly better than 0.9915 for the iterative method. This difference is probably because the calculated loss (or error) explicitly optimized the r and χ² together. It is reassuring that the conventional iterative method
performs so well since it is based on projection, although it shows a slightly wider distribution of
solutions. We also noticed qualitatively that the solutions from the ML method appeared sharper
and had flatter, better-distinguished facets than those from the iterative algorithms, as can be seen in the Supplementary Information.
Based on the above results, it can be concluded that our 3D CNN approach has great potential to be applied to asymmetric data, previously untested owing to the need to solve for a complex density function, whether or not there is a large enough training dataset to obtain a well-optimized ML model. When there are enough suitable data to train the CNN model, it can be used in a real-time
experiment, such as in a single-shot XFEL setup, to provide a rapid estimation of the reconstruction.
When needed, a subsequent unsupervised learning refinement can then make the final predicted result highly accurate.
Furthermore, the unsupervised learning approach makes it possible to use an untrained ML model
for ab-initio phase retrieval of the structure of sub-micron-sized particles in 3D. This will be
valuable when building a related training dataset for the training of ML model is challenging.
Additionally, in the unsupervised mode, the self-defined loss function, used for feedback to
optimize the prediction, makes the ML model more powerful than the conventional iterative
methods where the calculated error during iteration is only used to monitor the convergence. In
this work, we used a combination of the correlation coefficient and the χ² error to define the loss function for the unsupervised mode, which extracted more sensitivity to the weaker features in the data than the χ² error alone. However, one is not limited to this form and could further extend it
for different phase-retrieval problems, for example using a likelihood function to account for the
statistical error.
Conclusion
In conclusion, we have developed a 3D CNN model that reconstructs single-particle structures in real space from their experimental coherent X-ray diffraction intensities in reciprocal
space. The trained CNN model can provide immediate high accuracy results, which will benefit a
real-time CDI experiment. More importantly, we found that an unsupervised ML-approach was
able to reach a high reconstruction accuracy, comparable with traditional methods, either starting
from a trained model or just a purely random configuration. The flexibility of the self-defined loss
function in the ML model should make the CNN model more robust to experimental coherent
diffraction data when used in the unsupervised mode. The quality of the images obtained in the
four examples shown here is better than can be achieved with state-of-the-art iterative algorithms in use today. We believe our results will find very broad applications in coherent
imaging and related research fields. This will open a new paradigm of neural network design,
where the concept of supervised training and unsupervised refinement can be generalized to solve a broad class of inverse problems.
Method
3D training dataset
The 3D diffraction intensities were generated by taking the Fourier Transform of the simulated
complex-valued 3D particles ρ(r) = s(r)e^{iφ(r)}, created from the particle's amplitudes s(r) and
phases φ(r). The simulated particles ρ(r) were then randomly rotated in 3D. Only the amplitude
of the computed 3D diffraction intensities was retained for both training and testing of the CNN
model. We used a superellipsoid shape for the amplitude s(r), whose implicit form is
  (|x/a|^{2/e} + |y/b|^{2/e})^{e/n} + |z/c|^{2/n} = 1,  (4)

where the exponent parameters n and e control the roundedness of the particle, and a, b, and c are the
upper bounds of the particle size along the 𝑥, 𝑦, and 𝑧 directions, respectively. All of these values
were selected from random distributions to create a diverse set of shapes. For the phase distribution φ(r), a 3D Gaussian-correlated profile was generated as

  φ(r) ∝ ∭ e^{−[(x−x′)²/2σx² + (y−y′)²/2σy² + (z−z′)²/2σz²]} z(x′, y′, z′) dx′ dy′ dz′,  (5)

where z(x, y, z) is a random field and σx, σy, and σz are the transverse correlation lengths along the x, y, and z directions, respectively. During the simulation
of the 3D diffraction patterns, the phase of the simulated particle was scaled and shifted to (0, 1),
and outside the particle, the phase was set to zero. The generated training dataset contains a wide variety of particle shapes and internal phase distributions.
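The generation procedure above can be sketched as follows. The semi-axes, exponents, and correlation lengths are fixed here for illustration, whereas the real dataset draws them from random distributions; the Gaussian-correlated phase is implemented as a Fourier-space convolution of a random field with a Gaussian kernel.

```python
import numpy as np

# Superellipsoid amplitude (equation (4)) plus Gaussian-correlated phase
# (equation (5)), scaled to (0, 1) inside the particle and 0 outside.
rng = np.random.default_rng(2)
N = 64
x, y, z = np.meshgrid(*([np.arange(N) - N // 2] * 3), indexing='ij')

a, b, c = 10.0, 14.0, 8.0          # semi-axes (randomized in the real dataset)
n, e = 1.2, 0.8                    # roundedness exponents
f = (np.abs(x / a) ** (2 / e) + np.abs(y / b) ** (2 / e)) ** (e / n) \
    + np.abs(z / c) ** (2 / n)
s = (f <= 1).astype(float)         # amplitude: 1 inside the superellipsoid

sig = np.array([5.0, 3.0, 4.0])    # correlation lengths along x, y, z
kern = np.exp(-(x**2 / (2 * sig[0]**2) + y**2 / (2 * sig[1]**2)
                + z**2 / (2 * sig[2]**2)))
phi = np.real(np.fft.ifftn(np.fft.fftn(rng.standard_normal((N, N, N)))
                           * np.fft.fftn(np.fft.ifftshift(kern))))
phi = (phi - phi.min()) / (phi.max() - phi.min()) * s   # (0,1) inside, 0 outside

rho = s * np.exp(1j * phi)
diff_amp = np.abs(np.fft.fftshift(np.fft.fftn(rho)))    # network input
```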
The 3D CNN model was trained in a supervised approach on pairs of real-space objects and their
reciprocal-space diffraction patterns. We used a loss function l_s to constrain both the real- and reciprocal-space quantities,

  l_s = α₁L₁(A_p, A_g) + α₂L₂(φ_p, φ_g) + α₃L₃(I_p, I_g),  (6)

where L₁(x_p, x_g) = L₂(x_p, x_g) = Σ(x_p − x_g)² / Σx_g², and L₃(x_p, x_g) = 1 − Σ(x_p − x̄_p)(x_g − x̄_g) / [√Σ(x_p − x̄_p)² · √Σ(x_g − x̄_g)²].
In equation (6), L₁ and L₂ are the loss functions for the amplitude and phase of the particle in real space, respectively. L₃ is the loss function for the X-ray diffraction intensity in reciprocal space,
which is used to constrain the relation between the predicted amplitude and phase from the ML
model in reciprocal space. Here, the subscript p denotes the predicted result from ML model, and
the subscript g denotes the corresponding ground truth. L₃ is based on the Pearson correlation coefficient. For the training, we used α₁ = 1, α₂ = 1, and α₃ = 1. The proposed CNN model was implemented on the PyTorch platform using Python37. When training the CNN model, we adopted two optimizers, Adaptive Moment Estimation (ADAM) and Stochastic Gradient Descent
(SGD) to optimize the weights and biases of the CNN model38,39. During the training, the two
optimizers were switched every 25 epochs for a total of 100 epochs. The starting learning rate for both optimizers was 0.01, and after every 25 epochs, the learning rate was reduced by a factor of 0.95.
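The alternating-optimizer schedule just described can be sketched as follows; the model and data here are tiny stand-ins for the actual 3D CNN and training set.

```python
import torch

# Alternate between ADAM and SGD every 25 epochs, decaying the learning
# rate by a factor of 0.95 at each switch point, as described above.
model = torch.nn.Linear(4, 4)
x = torch.rand(8, 4)

lr = 0.01
optimizers = [torch.optim.Adam(model.parameters(), lr=lr),
              torch.optim.SGD(model.parameters(), lr=lr)]

for epoch in range(100):
    if epoch > 0 and epoch % 25 == 0:
        lr *= 0.95                              # decay every 25 epochs
        for opt in optimizers:
            for group in opt.param_groups:
                group['lr'] = lr
    opt = optimizers[(epoch // 25) % 2]         # switch optimizer every 25 epochs
    loss = ((model(x) - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```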
In our study, the size of the input 3D coherent X-ray diffraction pattern was 64×64×64 pixels.
The training was completed on a computer with 256 GB of RAM and two NVIDIA Quadro V100
GPUs.
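The loss terms of equation (6) can be sketched numerically as follows; the normalizations below are assumptions consistent with the definitions given above, and the random test arrays are placeholders.

```python
import numpy as np

def chi2(xp, xg):
    # relative chi^2-type error (L1 and L2 in equation (6))
    return ((xp - xg) ** 2).sum() / (xg ** 2).sum()

def pearson_loss(xp, xg):
    # 1 - Pearson correlation coefficient (L3 in equation (6))
    dp, dg = xp - xp.mean(), xg - xg.mean()
    return 1 - (dp * dg).sum() / np.sqrt((dp ** 2).sum() * (dg ** 2).sum())

def supervised_loss(amp_p, phase_p, amp_g, phase_g):
    # alpha1 = alpha2 = alpha3 = 1; amplitude and phase are compared in
    # real space, and the diffraction intensities in reciprocal space
    I_p = np.abs(np.fft.fftn(amp_p * np.exp(1j * phase_p))) ** 2
    I_g = np.abs(np.fft.fftn(amp_g * np.exp(1j * phase_g))) ** 2
    return chi2(amp_p, amp_g) + chi2(phase_p, phase_g) + pearson_loss(I_p, I_g)

rng = np.random.default_rng(3)
a, p = rng.random((8, 8, 8)), rng.random((8, 8, 8))
loss = supervised_loss(a, p, a, p)   # identical inputs: loss is numerically zero
```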
When the 3D CNN model was used in an unsupervised learning approach, only the 3D coherent
X-ray diffraction pattern was available as input. During the optimization, the loss function l_u was
defined as:
  l_u = β₁L₄(I_p, I_m) + β₂L₃(I_p, I_m),  (7)

where I_p is the 3D diffraction intensity calculated from the complex density predicted by the CNN model, I_m is the measured (or testing) 3D coherent X-ray diffraction intensity, and L₄ is the conventional χ² error function, defined as L₄(x_p, x_m) = Σ(x_p − x_m)² / Σx_m². We used two
different ways to initiate the CNN model's weight and bias parameters, either from our trained
CNN model or by using random numbers. In equation (7), the weighting coefficient β₁ followed

  β₁ = a₁e^{−k n^λ} + a₂,  (8)

where k = 1 and λ = 0.5, n is the training epoch, and a₁ and a₂ are scale factors chosen to let β₁ gradually change from 10⁴ to 1 during the unsupervised learning (see Supplementary Fig. 6 for details), while
β₂ remained equal to 1. Two optimizers, ADAM and SGD, were utilized to optimize the results, switching every 200 epochs. The learning rate for both optimizers started at 0.006 and was reduced periodically thereafter.
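Assuming a decaying-exponential form for the weighting coefficient consistent with the stated parameters (k = 1, λ = 0.5, and a range from 10⁴ down to 1), the schedule can be sketched as:

```python
import math

# Hypothetical sketch of the epoch-dependent weighting of equation (8),
# under the assumed form beta1(n) = a1*exp(-k*n**lam) + a2.
k, lam = 1.0, 0.5
a2 = 1.0            # asymptotic value as n grows large
a1 = 1e4 - a2       # sets beta1(0) = 1e4

def beta1(n):
    return a1 * math.exp(-k * n ** lam) + a2

# beta1 decays monotonically from 1e4 at epoch 0 toward 1
weights = [beta1(n) for n in range(0, 2001, 200)]
```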
The Bragg CDI experiments were performed at 34-ID-C at Advanced Photon Source (APS),
Argonne National Laboratory, USA. A front-end horizontal slit of 100 μm was used to improve the
source coherence, and a double crystal monochromator was used to set the energy of the incident
X-ray beam to 9 keV. A coherent beam of 30 × 70 μm² was selected and focused to ~630 × 470 nm²
by Kirkpatrick-Baez (KB) mirrors before impinging on the samples. The four samples were
chemically synthesized in nanocrystal format by different methods and attached to silicon wafer
substrates for handling. The corresponding 3D coherent diffraction intensities were obtained by a
rocking curve of the target Bragg peak of the samples, {101} for BaTiO3 and SrTiO3 and {111}
for Au and Pd, as a series of 2D coherent diffraction patterns were recorded by a Medipix detector
with 55 × 55 μm² pixels. All the experimental Bragg coherent X-ray diffraction patterns, as well as the corresponding reconstructions, are shown in Fig. 4 and Supplementary Fig. 3.
For the conventional iterative phase retrieval, the 3D diffraction pattern was used as input to an
iterative phase-retrieval scheme described by Robinson & Harder7, where the algorithm was
switched between error reduction and hybrid input-output after every 50 iterations for a total of
2000 iterations. After 100 iterations, the shrink-wrap method41 was applied in real space to update the object support.
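The ER/HIO switching scheme can be sketched in a 2D toy example; for simplicity the support is assumed known here rather than updated by shrink-wrap, and the 400-iteration budget is an illustrative choice.

```python
import numpy as np

# Iterative phase retrieval alternating hybrid input-output (HIO) and
# error reduction (ER) every 50 iterations, with a fixed known support.
rng = np.random.default_rng(1)
n = 32
x = np.arange(n) - n // 2
support = (np.abs(x)[:, None] < 6) & (np.abs(x)[None, :] < 6)
truth = support.astype(complex)                  # simple binary test object
measured = np.abs(np.fft.fft2(truth))            # magnitudes only (phases lost)

g = rng.random((n, n)) * support                 # random start inside support
beta = 0.9
for it in range(400):
    G = np.fft.fft2(g)
    Gp = measured * np.exp(1j * np.angle(G))     # impose measured modulus
    gp = np.fft.ifft2(Gp)
    if (it // 50) % 2 == 0:
        g = np.where(support, gp, g - beta * gp)  # HIO feedback outside support
    else:
        g = np.where(support, gp, 0)              # ER: zero outside support

err = np.linalg.norm(np.abs(np.fft.fft2(g)) - measured) / np.linalg.norm(measured)
```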
Data availability
The data supporting the plots and figures within this paper are available from the corresponding author upon reasonable request.
Code availability
The Python scripts for the CNN model and generation of the training and testing data used in this work are available from the corresponding author upon reasonable request.
References
1 Chapman, H. N. & Nugent, K. A. Coherent lensless X-ray imaging. Nature Photonics 4, 833-
839, doi:10.1038/nphoton.2010.240 (2010).
2 Xiong, G., Moutanabbir, O., Reiche, M., Harder, R. & Robinson, I. Coherent X-ray diffraction
imaging and characterization of strain in silicon-on-insulator nanostructures. Adv. Mater. 26,
7747-7763, doi:10.1002/adma.201304511 (2014).
3 Walmsley, I. A. Quantum optics: science and technology in a new light. Science 348, 525-530,
doi:10.1126/science.aab0097 (2015).
4 Bielecki, J., Maia, F. & Mancuso, A. P. Perspectives on single particle imaging with x rays at
the advent of high repetition rate x-ray free electron laser sources. Struct Dyn 7, 040901,
doi:10.1063/4.0000024 (2020).
5 Williams, G. J., Pfeifer, M. A., Vartanyants, I. A. & Robinson, I. K. Three-dimensional
imaging of microstructure in Au nanocrystals. Phys. Rev. Lett. 90, 175501,
doi:10.1103/PhysRevLett.90.175501 (2003).
6 Pfeifer, M. A., Williams, G. J., Vartanyants, I. A., Harder, R. & Robinson, I. K. Three-
dimensional mapping of a deformation field inside a nanocrystal. Nature 442, 63-66,
doi:10.1038/nature04867 (2006).
7 Robinson, I. & Harder, R. Coherent X-ray diffraction imaging of strain at the nanoscale. Nat.
Mater. 8, 291-298, doi:10.1038/nmat2400 (2009).
8 Yau, A., Cha, W., Kanan, M. W., Stephenson, G. B. & Ulvestad, A. Bragg coherent diffractive
imaging of single-grain defect dynamics in polycrystalline films. Science 356, 739-742,
doi:10.1126/science.aam6168 (2017).
9 Wen, H., Cherukara, M. J. & Holt, M. V. Time-Resolved X-Ray Microscopy for Materials
Science. Annual Review of Materials Research 49, 389-415, doi:10.1146/annurev-matsci-
070616-124014 (2019).
10 Decking, W. et al. A MHz-repetition-rate hard X-ray free-electron laser driven by a
superconducting linear accelerator. Nature Photonics 14, 391-397, doi:10.1038/s41566-020-
0607-z (2020).
11 Clark, J. N. et al. Ultrafast three-dimensional imaging of lattice dynamics in individual gold
nanocrystals. Science 341, 56-59, doi:10.1126/science.1236034 (2013).
12 Newton, M. C., Leake, S. J., Harder, R. & Robinson, I. K. Three-dimensional imaging of strain
in a single ZnO nanorod. Nat. Mater. 9, 120-124, doi:10.1038/nmat2607 (2010).
13 Ulvestad, A. et al. Three-dimensional imaging of dislocation dynamics during the hydriding
phase transformation. Nat. Mater. 16, 565-571, doi:10.1038/nmat4842 (2017).
14 Rose, M. et al. Single-particle imaging without symmetry constraints at an X-ray free-electron
laser. IUCrJ 5, 727-736, doi:10.1107/S205225251801120X (2018).
15 Passos, A. R. et al. Three-dimensional strain dynamics govern the hysteresis in heterogeneous
catalysis. Nat. Commun. 11, 4733, doi:10.1038/s41467-020-18622-2 (2020).
16 Gerchberg, R. W. A practical algorithm for the determination of phase from image and
diffraction plane pictures. Optik 35, 237-246 (1972).
17 Fienup, J. R. Phase retrieval algorithms: a comparison. Appl. Opt. 21, 2758-2769,
doi:10.1364/AO.21.002758 (1982).
18 Bates, R. H. T. Uniqueness of solutions to two-dimensional fourier phase problems for
localized and positive images. Computer Vision, Graphics, and Image Processing 25, 205-217,
doi:10.1016/0734-189x(84)90103-8 (1984).
19 Robinson, I. K., Vartanyants, I. A., Williams, G. J., Pfeifer, M. A. & Pitney, J. A.
Reconstruction of the shapes of gold nanocrystals using coherent x-ray diffraction. Phys. Rev.
Lett. 87, 195505, doi:10.1103/PhysRevLett.87.195505 (2001).
20 Elser, V. Phase retrieval by iterated projections. J. Opt. Soc. Am. A 20, 40-55,
doi:10.1364/josaa.20.000040 (2003).
21 Luke, D. R. Relaxed averaged alternating reflections for diffraction imaging. Inverse Problems
21, 37-50, doi:10.1088/0266-5611/21/1/004 (2005).
22 Marchesini, S. Invited article: A unified evaluation of iterative projection algorithms
for phase retrieval. Rev. Sci. Instrum. 78, 011301, doi:10.1063/1.2403783 (2007).
23 Thibault, P. & Guizar-Sicairos, M. Maximum-likelihood refinement for coherent diffractive
imaging. New J. Phys. 14, 063004, doi:10.1088/1367-2630/14/6/063004 (2012).
24 Gao, Y., Huang, X., Yan, H. & Williams, G. J. Bragg coherent diffraction imaging by
simultaneous reconstruction of multiple diffraction peaks. Phys. Rev. B 103, 014102,
doi:10.1103/PhysRevB.103.014102 (2021).
25 Shechtman, Y. et al. Phase Retrieval with Application to Optical Imaging: A contemporary
overview. IEEE Signal Processing Magazine 32, 87-109, doi:10.1109/msp.2014.2352673
(2015).
26 Wu, L., Juhas, P., Yoo, S. & Robinson, I. Complex imaging of phase domains by deep neural
networks. IUCrJ 8, 12-21, doi:10.1107/s2052252520013780 (2021).
27 Scheinker, A. & Pokharel, R. Adaptive 3D convolutional neural network-based reconstruction
method for 3D coherent diffraction imaging. J. Appl. Phys. 128, doi:10.1063/5.0014725 (2020).
28 Cherukara, M. J., Nashed, Y. S. G. & Harder, R. J. Real-time coherent diffraction inversion
using deep generative networks. Sci. Rep. 8, 16520, doi:10.1038/s41598-018-34525-1 (2018).
29 Harder, R. Deep neural networks in real-time coherent diffraction imaging. IUCrJ 8, 1-3,
doi:10.1107/S2052252520016590 (2021).
30 Chapman, H. N. et al. Femtosecond diffractive imaging with a soft-X-ray free-electron laser.
Nat. Phys. 2, 839-843, doi:10.1038/nphys461 (2006).
31 Harder, R., Pfeifer, M. A., Williams, G. J., Vartaniants, I. A. & Robinson, I. K. Orientation
variation of surface strain. Phys. Rev. B 76, 115425, doi:10.1103/PhysRevB.76.115425 (2007).
32 Bauschke, H. H., Combettes, P. L. & Luke, D. R. Phase retrieval, error reduction algorithm,
and Fienup variants: a view from convex optimization. J. Opt. Soc. Am. A 19, 1334-1345,
doi:10.1364/josaa.19.001334 (2002).
33 Press, W. H., Teukolsky, S. A., Vetterling, W. T. & Flannery, B. P. Numerical Recipes 3rd
Edition: The Art of Scientific Computing. (Cambridge University Press, 2007).
34 Chapman, H. N. et al. High-resolution ab initio three-dimensional x-ray diffraction microscopy.
J. Opt. Soc. Am. A 23, 1179-1200, doi:10.1364/josaa.23.001179 (2006).
35 Shapiro, D. et al. Biological imaging by soft x-ray diffraction microscopy. Proc. Natl. Acad.
Sci. U.S.A. 102, 15343-15346, doi:10.1073/pnas.0503305102 (2005).
36 Garcia, N. & Stoll, E. Monte Carlo Calculation for Electromagnetic-Wave Scattering from
Random Rough Surfaces. Phys. Rev. Lett. 52, 1798-1801, doi:10.1103/PhysRevLett.52.1798
(1984).
37 Paszke, A. et al. Pytorch: An imperative style, high-performance deep learning library. arXiv
preprint arXiv:1912.01703 (2019).
38 Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980 (2014).
39 Saad, D. On-line learning in neural networks. (Cambridge University Press, 2009).
40 Pfeifer, M. A. Structural studies of lead nanocrystals using coherent X-ray diffraction.
(University of Illinois at Urbana-Champaign, 2005).
41 Marchesini, S. et al. X-ray image reconstruction from a diffraction pattern alone. Phys. Rev. B
68, 140101, doi:10.1103/PhysRevB.68.140101 (2003).
Acknowledgements
Work at Brookhaven National Laboratory was supported by the U.S. Department of Energy, Office
of Science, Office of Basic Energy Sciences, under Contract No. DE-SC0012704. Measurements
were carried out at the Advanced Photon Source (APS) beamline 34-ID-C, which was supported
by the U. S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under
Contract No. DE-AC02-06CH11357. The beamline 34-ID-C was built with U.S. National Science
Foundation funding.
Author contributions
L.W. developed the convolutional neural networks. I.K.R., L.W., A.F.S., T.A.A., J.C., R.H., and
W.C. carried out the BCDI experiments. L.W. and I.K.R. wrote the manuscript, and all the authors
contributed to its final version.
Competing interests