Intriguing property and counterfactual explanation of GAN for remote sensing image generation
Abstract
Generative adversarial networks (GANs) have achieved remarkable progress in the natural image field. However, when applying GANs to the remote sensing (RS) image generation task, an extraordinary phenomenon is observed: the GAN model is more sensitive to the amount of training data for RS image generation than for natural image generation (Fig. 1). In other words, the generation quality of RS images changes significantly with the number of training categories or samples per category. In this paper, we first analyze this phenomenon through two kinds of toy experiments and conclude that the amount of feature information contained in the GAN model decreases with reduced training data (Fig. 2). Then we establish a structural causal model (SCM) of the data generation process and interpret the generated data as counterfactuals. Based on this SCM, we theoretically prove that the quality of generated images is positively correlated with the amount of feature information. This provides insights for enriching the feature information learned by the GAN model during training. Consequently, we propose two innovative adjustment schemes, namely uniformity regularization and entropy regularization, to increase the information learned by the GAN model at the distributional and sample levels, respectively. Extensive experiments on eight RS datasets and three natural datasets show the effectiveness and versatility of our methods. The source code is available at https://github.com/rootSue/Causal-RSGAN.
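The abstract names the two regularizers but does not give their formulas. The sketch below is a minimal, assumed instantiation of what a distribution-level uniformity term and a sample-level entropy term could look like in PyTorch: the Gaussian-potential uniformity loss (borrowed from the contrastive-learning literature), the per-sample softmax-entropy term, and the weights `lambda_u` / `lambda_e` are all hypothetical choices, not the authors' released implementation (see the linked repository for that).

```python
# Illustrative sketch only: the exact loss forms are assumptions chosen to match
# the abstract's stated intent (spread features at the batch level, keep each
# sample's prediction high-entropy), not the authors' Causal-RSGAN code.
import torch
import torch.nn.functional as F


def uniformity_loss(features: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Distribution-level term: encourage the batch's L2-normalized features to
    spread uniformly over the unit hypersphere (lower value = more uniform)."""
    z = F.normalize(features, dim=1)
    sq_dists = torch.pdist(z, p=2).pow(2)        # pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()   # log of mean Gaussian potential


def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Sample-level term: negative mean entropy of each sample's softmax
    distribution, so minimizing this term maximizes per-sample entropy."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return (p * log_p).sum(dim=1).mean()


# Hypothetical use inside a generator update, with assumed weights:
#   g_loss = adv_loss + lambda_u * uniformity_loss(feat) + lambda_e * entropy_loss(logits)
```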