A Transformer-Unet Generative Adversarial Network for the Super-Resolution Reconstruction of DEMs
Abstract
1. Introduction
2. Background
3. Model Structure
3.1. Generator Network Architecture
3.2. Discriminator Network Architecture
3.3. Loss Function
3.4. Data Normalization
4. Experimental Results and Analysis
4.1. Data Descriptions
4.2. Results for the Four Test Regions
4.3. Comparison with Other SR Methods
4.4. Terrain Attribute Analysis for DEM
4.4.1. Slope Reconstruction Analysis
4.4.2. Aspect Reconstruction Analysis
5. Future Work
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- de Almeida, G.A.; Bates, P.; Ozdemir, H. Modelling urban floods at submetre resolution: Challenges or opportunities for flood risk management? J. Flood Risk Manag. 2018, 11, S855–S865.
- Cook, A.; Merwade, V. Effect of topographic data, geometric configuration and modeling approach on flood inundation mapping. J. Hydrol. 2009, 377, 131–142.
- Baier, G.; Rossi, C.; Lachaise, M.; Zhu, X.X.; Bamler, R. A nonlocal InSAR filter for high-resolution DEM generation from TanDEM-X interferograms. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6469–6483.
- Muthusamy, M.; Casado, M.R.; Butler, D.; Leinster, P. Understanding the effects of Digital Elevation Model resolution in urban fluvial flood modelling. J. Hydrol. 2021, 596, 126088.
- Liu, X. Airborne LiDAR for DEM generation: Some critical issues. Prog. Phys. Geogr. 2008, 32, 31–49.
- Shan, J.; Aparajithan, S. Urban DEM generation from raw LiDAR data. Photogramm. Eng. Remote Sens. 2005, 71, 217–226.
- Zhou, A.; Chen, Y.; Wilson, J.P.; Su, H.; Xiong, Z.; Cheng, Q. An enhanced double-filter deep residual neural network for generating super resolution DEMs. Remote Sens. 2021, 13, 3089.
- Chen, Z.; Wang, X.; Xu, Z.; Hou, W. Convolutional neural network based DEM super resolution. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 247–250.
- Demiray, B.Z.; Sit, M.; Demir, I. D-SRGAN: DEM super-resolution with generative adversarial networks. SN Comput. Sci. 2021, 2, 48.
- Wang, Y.; Jin, S.; Yang, Z.; Guan, H.; Ren, Y.; Cheng, K.; Zhao, X.; Liu, X.; Chen, M.; Liu, Y.; et al. TTSR: A Transformer-based Topography Neural Network for Digital Elevation Model Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4403719.
- Shepard, D. A two-dimensional interpolation function for irregularly-spaced data. In Proceedings of the 23rd ACM National Conference, Princeton, NJ, USA, 27–29 August 1968; pp. 517–524.
- Chaplot, V.; Darboux, F.; Bourennane, H.; Leguédois, S. Accuracy of interpolation techniques for the derivation of digital elevation models in relation to landform types and data density. Geomorphology 2006, 77, 126–141.
- Sibson, R. A brief description of natural neighbour interpolation. In Interpreting Multivariate Data; John Wiley & Sons: New York, NY, USA, 1981; pp. 21–36.
- Wang, B.; Shi, W.; Liu, E. Robust methods for assessing the accuracy of linear interpolated DEM. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 198–206.
- Aguilar, F.J.; Agüera, F.; Aguilar, M.A.; Carvajal, F. Effects of terrain morphology, sampling density, and interpolation methods on grid DEM accuracy. Photogramm. Eng. Remote Sens. 2005, 71, 805–816.
- Yang, W.; Zhang, X.; Tian, Y.; Wang, W.; Xue, J.H.; Liao, Q. Deep Learning for Single Image Super-Resolution: A Brief Review. IEEE Trans. Multimed. 2019, 21, 3106–3121.
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image restoration using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 1833–1844.
- Lu, L.; Li, W.; Tao, X.; Lu, J.; Jia, J. MASA-SR: Matching acceleration and spatial adaptation for reference-based image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 6368–6377.
- Yang, F.; Yang, H.; Fu, J.; Lu, H.; Guo, B. Learning texture transformer network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5791–5800.
- Zheng, H.; Ji, M.; Wang, H.; Liu, Y.; Fang, L. CrossNet: An end-to-end reference-based super resolution network using cross-scale warping. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 88–104.
- Yue, H.; Sun, X.; Yang, J.; Wu, F. Landmark image super-resolution by retrieving web images. IEEE Trans. Image Process. 2013, 22, 4865–4878.
- Zheng, X.; Bao, Z.; Yin, Q. Terrain Self-Similarity-Based Transformer for Generating Super Resolution DEMs. Remote Sens. 2023, 15, 1954.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
- Schonfeld, E.; Schiele, B.; Khoreva, A. A U-Net based discriminator for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8207–8216.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 June 2015; pp. 234–241.
- Guo, C.; Li, C.; Guo, J.; Cong, R.; Fu, H.; Han, P. Hierarchical features driven residual learning for depth map super-resolution. IEEE Trans. Image Process. 2018, 28, 2545–2557.
- Gao, F.; Xu, X.; Yu, J.; Shang, M.; Li, X.; Tao, D. Complementary, heterogeneous and adversarial networks for image-to-image translation. IEEE Trans. Image Process. 2021, 30, 3487–3498.
- Miyato, T.; Kataoka, T.; Koyama, M.; Yoshida, Y. Spectral normalization for generative adversarial networks. arXiv 2018, arXiv:1802.05957.
- Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of Wasserstein GANs. Adv. Neural Inf. Process. Syst. 2017, 30.
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 214–223.
- Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101.
- Ruangsang, W.; Aramvith, S. Efficient super-resolution algorithm using overlapping bicubic interpolation. In Proceedings of the 2017 IEEE 6th Global Conference on Consumer Electronics (GCCE), Nara, Japan, 10–13 October 2017; pp. 1–2.
| Regions | Maximum Elevation (m) | Minimum Elevation (m) | Maximum Elevation Difference (m) |
|---|---|---|---|
| Region 1 | 2206 | 1260 | 946 |
| Region 2 | 2528 | 190 | 2338 |
| Region 3 | 1109 | 906 | 203 |
| Region 4 | 129 | 5 | 124 |
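The elevation statistics above span very different ranges across the four test regions, which is why the data are normalized before training (Section 3.4). The snippet below is a minimal sketch of per-tile min-max normalization with an inverse transform back to metres; the [0, 1] scaling is a common choice assumed here, not necessarily the exact scheme used in the paper.

```python
import numpy as np

def normalize_dem(dem: np.ndarray, eps: float = 1e-8):
    """Scale a DEM tile to [0, 1] using its own min/max (assumed per-tile min-max scheme)."""
    dem_min, dem_max = float(dem.min()), float(dem.max())
    norm = (dem - dem_min) / (dem_max - dem_min + eps)
    return norm, (dem_min, dem_max)

def denormalize_dem(norm: np.ndarray, stats):
    """Map a normalized tile back to elevations in metres using the stored min/max."""
    dem_min, dem_max = stats
    return norm * (dem_max - dem_min) + dem_min
```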
| Regions | MAE (m) | RMSE (m) | PSNR (dB) | SSIM (%) |
|---|---|---|---|---|
| Region 1 | 3.08 | 4.24 | 35.57 | 99.21 |
| Region 2 | 10.32 | 13.52 | 25.51 | 99.32 |
| Region 3 | 1.29 | 1.69 | 43.57 | 97.21 |
| Region 4 | 1.25 | 1.64 | 43.84 | 95.99 |
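The accuracy figures above are MAE and RMSE in metres, PSNR in dB, and SSIM in percent. The sketch below shows one plausible way to compute these metrics for a reconstructed DEM against its high-resolution reference; using the reference elevation span as the data range for PSNR and SSIM is an assumption, and the paper may use a different convention.

```python
import numpy as np
from skimage.metrics import structural_similarity

def dem_metrics(sr: np.ndarray, hr: np.ndarray) -> dict:
    """MAE/RMSE (m), PSNR (dB), SSIM (%) for a super-resolved DEM vs. its reference."""
    sr = sr.astype(np.float64)
    hr = hr.astype(np.float64)
    err = sr - hr
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    data_range = float(hr.max() - hr.min())      # assumed dynamic range for PSNR/SSIM
    psnr = 20.0 * np.log10(data_range / rmse)
    ssim = structural_similarity(hr, sr, data_range=data_range) * 100.0
    return {"MAE (m)": mae, "RMSE (m)": rmse, "PSNR (dB)": psnr, "SSIM (%)": ssim}
```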
| Regions | Methods | MAE (m) | RMSE (m) | PSNR (dB) | SSIM (%) |
|---|---|---|---|---|---|
| Region 1 | Bicubic | 6.12 | 7.30 | 30.47 | 97.31 |
| | SRGAN | 6.17 | 8.15 | 29.90 | 96.09 |
| | SRCNN | 5.02 | 6.26 | 31.91 | 98.38 |
| | EDSR | 3.80 | 5.06 | 35.31 | 98.60 |
| | SSTrans | 4.44 | 5.65 | 33.06 | 98.93 |
| | TUGAN | 3.08 | 4.24 | 35.57 | 99.21 |
| Region 2 | Bicubic | 15.24 | 19.28 | 21.39 | 98.21 |
| | SRGAN | 17.79 | 23.10 | 20.86 | 97.54 |
| | SRCNN | 14.86 | 18.20 | 22.02 | 98.85 |
| | EDSR | 11.40 | 15.34 | 27.38 | 99.32 |
| | SSTrans | 12.96 | 16.52 | 23.77 | 99.04 |
| | TUGAN | 10.32 | 13.52 | 25.51 | 99.32 |
| Region 3 | Bicubic | 2.46 | 3.18 | 38.08 | 86.32 |
| | SRGAN | 2.10 | 2.78 | 39.22 | 87.71 |
| | SRCNN | 2.22 | 2.87 | 38.99 | 89.37 |
| | EDSR | 1.98 | 3.24 | 39.91 | 89.80 |
| | SSTrans | 1.55 | 2.03 | 41.99 | 96.13 |
| | TUGAN | 1.29 | 1.69 | 43.57 | 97.21 |
| Region 4 | Bicubic | 2.53 | 3.32 | 37.07 | 74.76 |
| | SRGAN | 2.48 | 3.29 | 37.79 | 77.08 |
| | SRCNN | 2.40 | 3.17 | 38.10 | 78.35 |
| | EDSR | 2.31 | 3.08 | 38.36 | 79.22 |
| | SSTrans | 1.63 | 2.14 | 41.51 | 94.11 |
| | TUGAN | 1.25 | 1.64 | 43.84 | 95.99 |
| Regions | Bicubic | SRGAN | SRCNN | EDSR | SSTrans | TUGAN |
|---|---|---|---|---|---|---|
| Region 1 | 3.30 | 4.07 | 3.05 | 3.02 | 1.96 | 1.85 |
| Region 2 | 5.28 | 7.66 | 5.17 | 4.66 | 4.04 | 3.61 |
| Region 3 | 2.50 | 2.13 | 2.11 | 2.19 | 1.12 | 1.01 |
| Region 4 | 2.93 | 2.42 | 2.52 | 2.48 | 1.64 | 0.92 |
| Regions | Bicubic | SRGAN | SRCNN | EDSR | SSTrans | TUGAN |
|---|---|---|---|---|---|---|
| Region 1 | 68.11 | 75.39 | 63.97 | 62.58 | 39.78 | 37.51 |
| Region 2 | 29.60 | 42.39 | 28.71 | 25.37 | 22.17 | 20.36 |
| Region 3 | 84.41 | 86.05 | 79.25 | 78.13 | 33.29 | 33.15 |
| Region 4 | 86.99 | 87.07 | 83.74 | 81.83 | 33.75 | 30.87 |
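The two tables above belong to the terrain attribute analysis of Sections 4.4.1 and 4.4.2, which compares slope and aspect derived from the reconstructed DEMs against those of the reference. For orientation, the sketch below derives slope and aspect from a DEM grid with central finite differences; the cell size, the row-ordering assumption, and the aspect convention are illustrative choices, and GIS implementations (e.g., Horn's method in gdaldem) may differ slightly.

```python
import numpy as np

def slope_aspect(dem: np.ndarray, cellsize: float = 30.0):
    """Slope and aspect in degrees from a DEM grid via central finite differences.

    cellsize: grid spacing in metres (assumed value; set it to the DEM resolution).
    Assumes columns increase eastward and rows increase northward; flip the sign of
    dz_dy for a north-up raster stored with row 0 at the top.
    """
    dz_dy, dz_dx = np.gradient(dem.astype(np.float64), cellsize)  # axis 0 -> y, axis 1 -> x
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Compass bearing of the steepest downslope direction, clockwise from north.
    aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360.0
    return slope, aspect
```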