Reconstruction of OFDM Signals Using a Dual Discriminator CGAN with BiLSTM and Transformer
Abstract
1. Introduction
- We propose a novel algorithm for reconstructing OFDM signal timing sequence data, integrating the BiLSTM (Bi-directional Long Short-Term Memory) and Transformer models within a dual-discriminator CGAN architecture. This approach effectively extracts complex temporal information and accurately captures correlations among IQ signal components, achieving high-precision OFDM signal reconstruction;
- Utilizing an enhanced CGAN framework, we trained the network on data from all modulation format categories to learn comprehensive OFDM signal characteristics. This enables the generation of OFDM signals with different modulation categories by adjusting input parameters;
- Experimental results demonstrate that under common OFDM modulation schemes such as BPSK (Binary Phase Shift Keying), QPSK (Quadrature Phase Shift Keying), and 16QAM (16-Quadrature Amplitude Modulation), the reconstructed signals closely resemble the original OFDM signals in time-domain waveforms, constellation diagrams, and spectrograms. Furthermore, compared with other reconstruction algorithms, the proposed algorithm improves reconstruction quality at comparable computational complexity (a minimal training-step sketch follows this list).
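To make the overall scheme concrete, the following is a minimal sketch, in PyTorch, of one training step of a dual-discriminator conditional GAN. The module names (`Generator`, `TimeDiscriminator`, `FreqDiscriminator`), the `with_ap` helper, and the plain BCE adversarial loss are illustrative assumptions rather than the paper's exact formulation; the actual networks are detailed in Section 4.

```python
# Minimal dual-discriminator CGAN training step (sketch, not the paper's exact losses).
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def with_ap(iq):
    """Append amplitude and phase channels to a (B, 2, 256) I/Q tensor -> (B, 4, 256)."""
    amp = torch.sqrt(iq[:, 0] ** 2 + iq[:, 1] ** 2).unsqueeze(1)
    ph = torch.atan2(iq[:, 1], iq[:, 0]).unsqueeze(1)
    return torch.cat([iq, amp, ph], dim=1)

def train_step(G, D_time, D_freq, opt_g, opt_d, iq, label, latent_dim=128):
    B = iq.size(0)
    real, fake = torch.ones(B, 1), torch.zeros(B, 1)
    gen = G(torch.randn(B, latent_dim), label)      # (B, 2, 256) reconstructed I/Q

    # Both discriminators judge real vs. generated signals, conditioned on the label.
    opt_d.zero_grad()
    d_loss = (bce(D_time(with_ap(iq), label), real) +
              bce(D_time(with_ap(gen.detach()), label), fake) +
              bce(D_freq(with_ap(iq), label), real) +
              bce(D_freq(with_ap(gen.detach()), label), fake))
    d_loss.backward(); opt_d.step()

    # The generator tries to fool both discriminators simultaneously.
    opt_g.zero_grad()
    g_loss = bce(D_time(with_ap(gen), label), real) + bce(D_freq(with_ap(gen), label), real)
    g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```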
2. Background
2.1. Conditional Generative Adversarial Network
2.2. Transformer
3. Establishment of an OFDM Signal Dataset
3.1. OFDM System Simulation
3.2. Dataset Details
4. LSTM and Transformer-Based Dual-Discriminator CGAN Model Architecture
4.1. OFDM Signal Reconstruction Model
4.2. Conditional GAN Model
4.3. Signal Reconstruction Network Structure
4.3.1. Discriminator Network
4.3.2. Generator Network
5. Experiment and Analysis
5.1. Training Details and Parameters
5.2. Experimental Results
5.2.1. Comparison and Analysis of Signal Reconstruction Results
5.2.2. Comparison and Analysis of the Original OFDM Sequences
5.2.3. Model Comparison and Similarity Evaluation
6. Conclusions
7. Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Zhao, F.; Jin, H. Communication jamming waveform generation technology based on GAN. Syst. Eng. Electron. 2021, 43, 1080–1088.
2. Zhang, J.Y.; Li, C.; Yang, Y. A study on key techniques in cognitive communication countermeasures. Radio Eng. 2020, 50, 619–623.
3. Kumar, A.; Majhi, S.; Gui, G.; Wu, H.-C.; Yuen, C. A Survey of Blind Modulation Classification Techniques for OFDM Signals. Sensors 2022, 22, 1020.
4. Gu, Y.; Xu, S.; Zhou, J. Automatic Modulation Format Classification of USRP Transmitted Signals Based on SVM. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 3712–3717.
5. Chai, X. Research on Key Technology of Blind Demodulation for OFDM Signals. Master's Thesis, Nanchang University, Nanchang, China, 2023.
6. Li, H.; Bar-Ness, Y.; Abdi, A.; Somekh, O.S.; Su, W. OFDM modulation classification and parameters extraction. In Proceedings of the 1st International Conference on Cognitive Radio Oriented Wireless Networks and Communications, Mykonos, Greece, 8–10 June 2006; pp. 1–6.
7. Xu, Y.; Liu, J.; Liu, S.; Zeng, X.; Lu, J. A Novel Timing Synchronization Algorithm for CO-OFDM Systems. In Proceedings of the 2018 Asia Communications and Photonics Conference (ACP), Hangzhou, China, 26–29 October 2018; pp. 1–3.
8. Mühlhaus, M.S.; Öner, M.; Dobre, O.A.; Jäkel, H.U.; Jondral, F.K. Automatic modulation classification for MIMO systems using fourth-order cumulants. In Proceedings of the 2012 IEEE Vehicular Technology Conference (VTC Fall), Quebec City, QC, Canada, 3–6 September 2012; pp. 1–5.
9. Jagannath, A.; Jagannath, J.; Melodia, T. Redefining Wireless Communication for 6G: Signal Processing Meets Deep Learning With Deep Unfolding. IEEE Trans. Artif. Intell. 2021, 2, 528–536.
10. Karanov, B.; Chagnon, M.; Aref, V.; Ferreira, F.; Lavery, D.; Bayvel, P.; Schmalen, L. Experimental Investigation of Deep Learning for Digital Signal Processing in Short Reach Optical Fiber Communications. In Proceedings of the 2020 IEEE Workshop on Signal Processing Systems (SiPS), Coimbra, Portugal, 20–22 October 2020; pp. 1–6.
11. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
12. Yang, H.J.; Chen, L.; Zhang, J.Y. Research on digital signal generation technology based on generative adversarial network. Electron. Meas. Technol. 2020, 43, 127–132.
13. Feng, Q.; Zhang, J.; Chen, L.; Liu, F. Waveform generation technology of communication signal based on DRAGAN. Hebei J. Ind. Sci. Technol. 2022, 39, 2–8.
14. Lin, J.H. PAPR Reduction in OFDM System Based on Machine Learning. Master's Thesis, Xidian University, Xi'an, China, 2020.
15. Li, Y.; Fan, Y.; Hou, S.; Xu, Z.; Wang, H.; Fang, S. TOR-GAN: A Transformer-Based OFDM Signals Reconstruction GAN. Electronics 2024, 13, 750.
16. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784.
17. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
18. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; Houlsby, N. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
19. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. arXiv 2021, arXiv:2103.14030.
20. Fan, H.; Xiong, B.; Mangalam, K.; Li, Y.; Yan, Z.; Malik, J.; Feichtenhofer, C. Multiscale Vision Transformers. arXiv 2021, arXiv:2104.11227.
21. Li, Y.; Mao, H.; Girshick, R.; He, K. Exploring Plain Vision Transformer Backbones for Object Detection. arXiv 2022, arXiv:2203.16527.
22. Jiang, Y.; Chang, S.; Wang, Z. TransGAN: Two pure transformers can make one strong GAN, and that can scale up. Adv. Neural Inf. Process. Syst. 2021, 34, 14745–14758.
23. Lee, K.; Chang, H.; Jiang, L.; Zhang, H.; Tu, Z.; Liu, C. ViTGAN: Training GANs with vision transformers. In Proceedings of the International Conference on Learning Representations, Virtual, 25–29 April 2022.
24. Wang, S.; Gao, Z.; Liu, D. Swin-GAN: Generative adversarial network based on shifted windows transformer architecture for image generation. Vis. Comput. 2022, 39, 1–11.
25. Cao, K.; Zhang, T.; Huang, J. Advanced hybrid LSTM-transformer architecture for real-time multi-task prediction in engineering systems. Sci. Rep. 2024, 14, 4890.
26. Kim, S.; Lee, S.-P. A BiLSTM–Transformer and 2D CNN Architecture for Emotion Recognition from Speech. Electronics 2023, 12, 4034.
27. Li, X.; Metsis, V.; Wang, H.; Ngu, A.H.H. TTS-GAN: A Transformer-Based Time-Series Generative Adversarial Network. In Artificial Intelligence in Medicine; Michalowski, M., Abidi, S.S.R., Abidi, S., Eds.; AIME 2022; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2022; Volume 13263.
28. Li, X.; Ngu, A.H.H.; Metsis, V. TTS-CGAN: A Transformer Time-Series Conditional GAN for Biosignal Data Augmentation. arXiv 2022, arXiv:2206.13676.
29. Chang, R.W. Synthesis of band-limited orthogonal signals for multichannel data transmission. Bell Syst. Tech. J. 1966, 45, 1775–1796.
30. Choi, Y.; Choi, M.; Kim, M.; Ha, J.-W.; Kim, S.; Choo, J. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8789–8797.
| Dataset Detail | Value |
| --- | --- |
| Modulation mode | 16QAM, QPSK, BPSK |
| Sample length | 256 |
| SNR range | 10 dB, 15 dB, 20 dB |
| Signal sample size | 18,000 |
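For readers reproducing such a dataset, the NumPy sketch below generates one noisy baseband OFDM frame. The subcarrier count (64), cyclic-prefix length (16), and QPSK-only mapping are illustrative assumptions; only the parameters in the table above come from the paper. BPSK and 16QAM mappings are analogous.

```python
# Generate a single (2, 256) I/Q OFDM frame at a given SNR (sketch).
import numpy as np

def qpsk_symbols(n, rng):
    # Map random bit pairs to unit-power QPSK constellation points.
    bits = rng.integers(0, 2, size=(n, 2))
    return ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

def ofdm_frame(n_sc=64, cp=16, n_sym=4, snr_db=10, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    syms = qpsk_symbols(n_sc * n_sym, rng).reshape(n_sym, n_sc)
    time = np.fft.ifft(syms, axis=1) * np.sqrt(n_sc)   # OFDM modulation (IFFT per symbol)
    time = np.hstack([time[:, -cp:], time])            # prepend cyclic prefix
    x = time.ravel()                                   # n_sym * (n_sc + cp) = 320 samples
    p_sig = np.mean(np.abs(x) ** 2)                    # add complex AWGN at the target SNR
    p_noise = p_sig / 10 ** (snr_db / 10)
    x = x + np.sqrt(p_noise / 2) * (rng.standard_normal(x.shape)
                                    + 1j * rng.standard_normal(x.shape))
    return np.stack([x.real, x.imag])[:, :256].astype(np.float32)  # truncate to 256 samples
```

Generating 18,000 such frames across the three modulation formats and three SNRs would reproduce the dataset summarized above.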
| Type | Parameters | Output Size |
| --- | --- | --- |
| IQ + AP | - | 4 × 256 |
| BiLSTM | Num_layers = 1 | 256 × Embed_size |
| Positional_Encoding | - | 256 × Embed_size |
| Type | Kernel Size/Stride | Output Size |
| --- | --- | --- |
| 2D Convolution | 2 × 2 / 2 | 128 × Embed_size |
| Positional_Encoding | - | 128 × Embed_size |
| Type | Parameters | Output Size |
| --- | --- | --- |
| BatchNorm 1D | Embed_size | 256/128 × Embed_size |
| Multi-Head Attention | Num_heads = 16 | 256/128 × Embed_size |
| Dropout | Probability = 0.3 | 256/128 × Embed_size |
| Layer Normalization | Embed_size | 256/128 × Embed_size |
| Linear | Expansion = 4 | 256/128 × 4 × Embed_size |
| GELU | - | 256/128 × 4 × Embed_size |
| Linear | Expansion = 4 | 256/128 × Embed_size |
| Dropout | Probability = 0.3 | 256/128 × Embed_size |
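The encoder-block table maps naturally onto a PyTorch module applied to either branch (256 or 128 tokens). One possible reading is sketched below; the residual connections are an assumption, since the table lists only layer order, parameters, and sizes.

```python
# Transformer encoder block matching the table above (residuals assumed).
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, embed_size=160, num_heads=16, expansion=4, p=0.3):
        super().__init__()
        self.bn = nn.BatchNorm1d(embed_size)
        self.attn = nn.MultiheadAttention(embed_size, num_heads)
        self.drop = nn.Dropout(p)
        self.ln = nn.LayerNorm(embed_size)
        self.ff = nn.Sequential(                       # feed-forward with expansion = 4
            nn.Linear(embed_size, expansion * embed_size),
            nn.GELU(),
            nn.Linear(expansion * embed_size, embed_size),
            nn.Dropout(p),
        )

    def forward(self, x):                              # x: (batch, seq, embed)
        h = self.bn(x.transpose(1, 2)).transpose(1, 2) # BatchNorm1d over the embed channel
        h = h.transpose(0, 1)                          # MultiheadAttention expects (seq, batch, embed)
        h, _ = self.attn(h, h, h)
        x = x + self.drop(h.transpose(0, 1))           # residual around attention
        return x + self.ff(self.ln(x))                 # residual around feed-forward
```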
| Type | Parameters | Output Size |
| --- | --- | --- |
| Reduce | Reduction = mean | Embed_size |
| Layer Normalization | Embed_size | Embed_size |
| Linear | - | 1 |
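The embedding, encoder-block, and output-head tables can be assembled into one discriminator branch as sketched below, reusing `EncoderBlock` from the previous listing. The block depth, the learned positional encoding, and additive label conditioning are assumptions; the paper's second branch (2D-convolution embedding, 128 tokens) is analogous. The BiLSTM here uses hidden size Embed_size/2 so the bidirectional output is exactly Embed_size wide; the paper's Hidden_dim = 320 would imply an extra projection, omitted for brevity.

```python
# Time-domain discriminator branch: BiLSTM embedding -> encoder blocks -> logit head.
class TimeDiscriminator(nn.Module):
    def __init__(self, in_ch=4, embed_size=160, n_classes=3, depth=3):
        super().__init__()
        self.lstm = nn.LSTM(in_ch, embed_size // 2, num_layers=1,
                            batch_first=True, bidirectional=True)
        self.pos = nn.Parameter(torch.zeros(1, 256, embed_size))   # learned positional encoding
        self.label_emb = nn.Embedding(n_classes, embed_size)       # modulation-class condition
        self.blocks = nn.Sequential(*[EncoderBlock(embed_size) for _ in range(depth)])
        self.head = nn.Sequential(nn.LayerNorm(embed_size), nn.Linear(embed_size, 1))

    def forward(self, x, label):               # x: (batch, 4, 256) I/Q + amplitude/phase
        h, _ = self.lstm(x.transpose(1, 2))    # -> (batch, 256, embed_size)
        h = h + self.pos + self.label_emb(label).unsqueeze(1)  # condition every token
        h = self.blocks(h)
        return self.head(h.mean(dim=1))        # mean reduce -> LayerNorm -> real/fake logit
```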
| Type | Parameters | Output Size |
| --- | --- | --- |
| Linear | Latent_dim | 2 × 256 × Embed_size |
| Positional_Encoding | - | 2 × 256 × Embed_size |
| Type | Parameters | Output Size |
| --- | --- | --- |
| BatchNorm 1D | Embed_size | 512 × Embed_size |
| Prob Attention | Num_heads = 16 | 512 × Embed_size |
| Layer Normalization | Embed_size | 512 × Embed_size |
| Dropout | Probability = 0.3 | 512 × Embed_size |
| Linear | Expansion = 4 | 512 × 4 × Embed_size |
| GELU | - | 512 × 4 × Embed_size |
| Linear | Expansion = 4 | 512 × Embed_size |
| Dropout | Probability = 0.3 | 512 × Embed_size |
| 2D Convolution | Kernel_size = 1 | 2 × 256 |
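A generator following the two tables above is sketched next. Standard multi-head attention (via `EncoderBlock`) stands in for the "Prob" (ProbSparse) attention listed in the table, and conditioning by concatenating a label embedding to the latent vector is an assumption.

```python
# Generator: latent + label -> 512-token Transformer -> 1x1 conv -> (2, 256) I/Q frame.
class Generator(nn.Module):
    def __init__(self, latent_dim=128, n_classes=3, embed_size=160, depth=3):
        super().__init__()
        self.embed_size = embed_size
        self.label_emb = nn.Embedding(n_classes, latent_dim)
        self.fc = nn.Linear(2 * latent_dim, 2 * 256 * embed_size)  # -> 2 x 256 x Embed_size
        self.pos = nn.Parameter(torch.zeros(1, 512, embed_size))   # positional encoding
        self.blocks = nn.Sequential(*[EncoderBlock(embed_size) for _ in range(depth)])
        self.out = nn.Conv2d(embed_size, 1, kernel_size=1)         # 1x1 conv back to waveform

    def forward(self, z, label):                    # z: (batch, latent_dim), label: (batch,)
        h = torch.cat([z, self.label_emb(label)], dim=1)
        h = self.fc(h).view(-1, 512, self.embed_size)   # 512 tokens = 2 channels x 256 samples
        h = self.blocks(h + self.pos)
        h = h.view(-1, 2, 256, self.embed_size).permute(0, 3, 1, 2)
        return self.out(h).squeeze(1)               # (batch, 2, 256) reconstructed I/Q
```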
| Model Hyperparameter | Value |
| --- | --- |
| Generator Embed_dim | 160 |
| Discriminator Embed_dim | 160 |
| LSTM Hidden_dim | 320 |
| Batch size | 64 |
| Epochs | 75 |
| Learning rate | 0.001 |
| β1, β2 | 0.9, 0.999 |
| Optimizer | Adam |
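Wired into PyTorch, these hyperparameters look as follows, reusing `train_step`, `Generator`, and `TimeDiscriminator` from the earlier sketches. The random `TensorDataset` is a placeholder for the 18,000-sample OFDM dataset of Section 3, and using a second `TimeDiscriminator` instance as the frequency branch is a simplification.

```python
# Training configuration matching the hyperparameter table above (sketch).
import itertools
import torch

G = Generator()
D_time, D_freq = TimeDiscriminator(), TimeDiscriminator()   # stand-in for the two branches
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3, betas=(0.9, 0.999))
opt_d = torch.optim.Adam(itertools.chain(D_time.parameters(), D_freq.parameters()),
                         lr=1e-3, betas=(0.9, 0.999))

# Placeholder data; in practice, load the OFDM dataset of Section 3.
data = torch.utils.data.TensorDataset(torch.randn(1024, 2, 256),
                                      torch.randint(0, 3, (1024,)))
loader = torch.utils.data.DataLoader(data, batch_size=64, shuffle=True)

for epoch in range(75):
    for iq, label in loader:
        train_step(G, D_time, D_freq, opt_g, opt_d, iq, label)
```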
| Environment | Technical Parameters |
| --- | --- |
| OS | Windows 10 |
| CPU | Intel Xeon Silver 4212R |
| GPU | NVIDIA GeForce RTX 4090 |
| Memory | 128 GB |
| Python | 3.8.8 |
| PyTorch | 1.8.1 |
| Model Name | Generator Parameters/B | Generator Time Complexity/FLOPs | Discriminator Parameters/B | Discriminator Time Complexity/FLOPs |
| --- | --- | --- | --- | --- |
| LSTM & Transformer-Based CGAN | 1.65 × 10⁷ | 3.42 × 10⁸ | 4.85 × 10⁶ | 1.97 × 10⁸ |
| TOR-GAN [15] | 1.69 × 10⁷ | 3.45 × 10⁸ | 5.37 × 10⁶ | 1.82 × 10⁸ |
| Pattern–Constellation dual GAN [14] | 1.65 × 10⁷ | 1.78 × 10⁸ | 2.04 × 10⁷ | 2.34 × 10⁸ |
| DRAGAN [13] | 1.43 × 10⁶ | 8.53 × 10⁷ | 6.99 × 10⁷ | 1.93 × 10⁹ |
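The parameter counts in this table can be reproduced with a one-liner over the sketched networks; FLOPs require a profiler (e.g., ptflops or fvcore) and are not shown.

```python
# Count trainable parameters of the sketched networks.
def count_params(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f"G: {count_params(G):.2e} params, D: {count_params(D_time):.2e} params")
```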
| Model Name | Similarity Analysis | BPSK 10 dB | BPSK 15 dB | BPSK 20 dB | QPSK 10 dB | QPSK 15 dB | QPSK 20 dB | 16QAM 10 dB | 16QAM 15 dB | 16QAM 20 dB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSTM & Transformer-Based CGAN | MSE | 0.3715 | 0.3129 | 0.1303 | 0.3008 | 0.2960 | 0.1520 | 0.3903 | 0.3224 | 0.2140 |
| | MAE | 0.3606 | 0.2731 | 0.1096 | 0.2975 | 0.2608 | 0.1403 | 0.4120 | 0.3055 | 0.1948 |
| | EVM | 1.1057 | 0.9635 | 0.7202 | 1.0359 | 0.9986 | 0.8134 | 1.3538 | 1.1489 | 1.0673 |
| TOR-GAN | MSE | 0.3970 | 0.3083 | 0.1460 | 0.4307 | 0.3086 | 0.1981 | 0.4624 | 0.3531 | 0.2529 |
| | MAE | 0.3413 | 0.2849 | 0.1284 | 0.4142 | 0.2981 | 0.1660 | 0.4546 | 0.2164 | 0.2502 |
| | EVM | 1.1395 | 0.9588 | 0.7911 | 1.2415 | 0.9935 | 0.8303 | 1.4145 | 0.9303 | 0.9137 |
| DRAGAN | MSE | 0.7051 | 0.5319 | 0.4197 | 0.6424 | 0.5740 | 0.4200 | 0.7135 | 0.6501 | 0.5265 |
| | MAE | 0.6376 | 0.4945 | 0.4086 | 0.5359 | 0.5542 | 0.3447 | 0.6995 | 0.5914 | 0.5209 |
| | EVM | 2.7872 | 1.5969 | 1.2761 | 2.4858 | 1.3715 | 1.1097 | 2.5895 | 1.4049 | 1.2388 |
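The three similarity metrics are sketched in NumPy below. The EVM normalization used here (RMS error over RMS reference magnitude) is a common convention and is assumed rather than taken from the paper; averaging these metrics over the test set at each SNR reproduces the layout of the table above.

```python
# Similarity metrics between an original and a reconstructed (2, 256) I/Q frame.
import numpy as np

def mse(ref, rec):
    return np.mean((ref - rec) ** 2)

def mae(ref, rec):
    return np.mean(np.abs(ref - rec))

def evm(ref_iq, rec_iq):
    ref = ref_iq[0] + 1j * ref_iq[1]           # I/Q channels -> complex baseband
    rec = rec_iq[0] + 1j * rec_iq[1]
    return np.sqrt(np.mean(np.abs(rec - ref) ** 2) / np.mean(np.abs(ref) ** 2))
```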