Medical imaging systems are commonly assessed and optimized by use of objective measures of image quality (IQ) that quantify the performance of an observer at specific tasks. Variation in the objects to be imaged is an important source of variability that can significantly limit observer performance. This object variability can be described by stochastic object models (SOMs). In order to establish SOMs that can accurately model realistic object variability, it is desirable to use experimental data. To achieve this, an augmented generative adversarial network (GAN) architecture called AmbientGAN has been developed and investigated. However, AmbientGANs cannot be immediately trained by use of advanced GAN training methods such as the progressive growing of GANs (ProGANs), which limits their ability to establish realistic object models. To circumvent this, a progressively growing AmbientGAN (ProAmGAN) has been proposed. However, ProAmGANs are designed for generating two-dimensional (2D) images, whereas medical imaging modalities are commonly employed to image three-dimensional (3D) objects. Moreover, ProAmGANs that employ traditional generator architectures lack the ability to control specific image features, such as fine-scale textures, that are frequently considered when optimizing imaging systems. In this study, we address these limitations by proposing two advanced AmbientGAN architectures: 3D ProAmGANs and Style-AmbientGANs (StyAmGANs). Stylized numerical studies involving magnetic resonance (MR) imaging systems are conducted. The ability of 3D ProAmGANs to learn 3D SOMs from imaging measurements and the ability of StyAmGANs to control the fine-scale texture of synthesized objects are demonstrated.
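To make the AmbientGAN idea referenced above concrete, the following is a minimal sketch of a discriminator update in a PyTorch-style setup. It is not the paper's implementation: the network architectures, the `measure` forward model (a simple down-sampling with additive Gaussian noise standing in for an MR-style measurement operator), and all names are illustrative placeholders. The key point it shows is that the discriminator compares real measurements against simulated measurements of generated objects, rather than comparing objects directly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical measurement operator H plus noise: a surrogate for an
# imaging forward model applied to a synthesized object.
def measure(obj, noise_std=0.05):
    meas = F.avg_pool2d(obj, kernel_size=2)          # stand-in for H(f)
    return meas + noise_std * torch.randn_like(meas)  # additive noise

class Generator(nn.Module):
    """Maps a latent vector z to a synthesized object f (64x64 here)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 64 * 64), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 64, 64)

class Discriminator(nn.Module):
    """Scores measurement data g (32x32 after the surrogate operator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(32 * 32, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )

    def forward(self, g):
        return self.net(g.flatten(1))

def d_step(G, D, real_meas, opt_D, latent_dim=64):
    """One AmbientGAN-style discriminator step: real measurements vs.
    measurements simulated from generated objects."""
    z = torch.randn(real_meas.size(0), latent_dim)
    fake_meas = measure(G(z)).detach()
    loss = (F.softplus(-D(real_meas)).mean()   # real measurements scored high
            + F.softplus(D(fake_meas)).mean())  # simulated measurements scored low
    opt_D.zero_grad()
    loss.backward()
    opt_D.step()
    return loss.item()
```

The generator update (not shown) would backpropagate through `measure` so that the distribution of simulated measurements matches that of the experimental measurements, which is what allows an object-level SOM to be learned from measurement data alone.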