State Transition Graph-Based Spatial–Temporal Attention Network for Cell-Level Mobile Traffic Prediction
Abstract
1. Introduction
- We propose a novel STG-STAN for predicting mobile traffic at the cell level, which leverages the dynamic evolution of underlying state transitions to achieve precise prediction.
- In the STG-STAN, we construct state transition graphs to capture the spatial–temporal correlations present in mobile traffic data. This enables the STG-STAN to effectively model the complex dependencies and interactions between different traffic states over time. A parallel LSTM branch in the STG-STAN further enhances the ability to capture both small-scale and large-scale temporal characteristics of mobile traffic.
- Extensive experiments demonstrate that the STG-STAN can better exploit the spatial–temporal information and achieve superior performance gain compared with typical baselines.
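The contributions above rest on modeling traffic evolution as transitions between discrete traffic states. As a minimal illustrative sketch (not the paper's exact state definition — the number of states, the quantile binning, and the function names here are assumptions), a traffic series can be quantized into a few levels and the transitions between consecutive levels counted to form a weighted adjacency matrix:

```python
import numpy as np

def build_transition_graph(traffic, n_states=5):
    """Quantize a traffic series into discrete states and count
    state-to-state transitions (illustrative sketch; the paper may
    define states and edge weights differently)."""
    # Equal-frequency binning into n_states traffic levels.
    edges = np.quantile(traffic, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(traffic, edges)          # state index per time step
    A = np.zeros((n_states, n_states))
    for i, j in zip(states[:-1], states[1:]):
        A[i, j] += 1                              # count transition i -> j
    return states, A

rng = np.random.default_rng(0)
traffic = rng.gamma(2.0, 10.0, size=500)          # synthetic traffic volumes
states, A = build_transition_graph(traffic)
assert A.sum() == len(traffic) - 1                # one transition per pair
```

Entry `A[i, j]` then corresponds to the "number of times when traffic transits from state i to state j" listed in the notation table.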
2. Related Work
2.1. Cellular Traffic Prediction
2.2. Dynamic Graph Neural Networks
3. Preliminary Observation and Analysis
4. STG-STAN Framework for Mobile Traffic Prediction
4.1. Overview of the STG-STAN
- (1)
- In step 1, we model traffic changes over time as state transitions and construct dynamic graphs to represent the spatial–temporal correlations of traffic states.
- (2)
- In step 2, the node embeddings of dynamic graphs are fed into the spatial attention module. In this module, GCNs and node attention layers are employed to extract local and global spatial correlations between nodes. Specifically, local structural correlation is based on the aggregation of information between adjacent nodes, while global spatial information is captured through the state transition similarity on the whole graph. The node vectors containing spatial information are then fed into the temporal information evolution module to capture the dynamic temporal patterns of each node.
- (3)
- In step 3, considering the short-term fluctuation characteristics of mobile traffic, we adopt an LSTM branch to extract small-scale temporal patterns of traffic data.
- (4)
- In step 4, the prediction is obtained by fusing the node vector output with the parallel LSTM output. By jointly extracting the spatial–temporal state correlations and the short-term fluctuation characteristics, richer mobile traffic patterns can be captured.
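The flow of steps 2 and 4 can be sketched in a few lines of numpy. This is a simplified stand-in, not the authors' implementation: the dimensions and random weights are placeholders, the LSTM branch is replaced by a dummy vector, and the attention is a plain scaled dot-product over node embeddings.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
N, d = 5, 8                        # traffic states (nodes), embedding size
A = rng.random((N, N))             # transition weights (step 1 output)
X = rng.standard_normal((N, d))    # node embeddings

# Step 2a: GCN-style local aggregation over the normalized adjacency.
A_hat = A + np.eye(N)                          # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
W_g = rng.standard_normal((d, d))
H = np.tanh(D_inv @ A_hat @ X @ W_g)           # local spatial features

# Step 2b: node attention for global correlations on the whole graph.
scores = softmax(H @ H.T / np.sqrt(d))         # pairwise state similarity
H_att = scores @ H                             # globally re-weighted nodes

# Step 4: fuse the graph-branch output with the parallel LSTM branch
# output (a placeholder vector here) to produce the prediction.
lstm_out = rng.standard_normal(d)
fused = np.concatenate([H_att.mean(axis=0), lstm_out])
w_out = rng.standard_normal(fused.shape[0])
prediction = float(fused @ w_out)              # scalar next-step traffic
```

The temporal evolution module of step 2 (which propagates node vectors over time) is omitted here for brevity.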
4.2. Graph Representation for State Transition
4.2.1. Segmentation of the Traffic Time Series
4.2.2. Constructing State Transition Graphs
4.3. Spatial–Temporal Graph Attention Network
4.3.1. Spatial Attention Module
4.3.2. Temporal Evolution Module
4.4. Independent LSTM Branch and Fusion Output
5. Experiments
5.1. Experimental Configuration
5.1.1. Parameter Settings
5.1.2. Baselines
- SVR: SVR is a supervised learning algorithm based on the principles of support vector machines. Its goal is to find a regression function that fits the data within a predefined error tolerance while remaining as flat as possible.
- RF: RF is a machine learning algorithm that builds multiple decision trees and combines their predictions to reduce overfitting. Each tree is trained on a random subset of the data and the features, which helps to decorrelate the trees and make the model more robust.
- LSTM: long short-term memory (LSTM) can handle the vanishing gradient problem by using a gating mechanism, which has been widely applied in mobile traffic prediction.
- CNN-LSTM [36]: CNN-LSTM is a hybrid neural network architecture that combines a CNN and an LSTM to effectively extract spatial–temporal features from mobile traffic.
- DenseNet [11]: DenseNet (densely connected CNN) is a DL-based model that utilizes densely connected convolutional layers to capture both the spatial and temporal dependencies of mobile traffic.
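Baselines such as SVR and RF operate on fixed-length feature vectors, so the traffic series must first be converted into a supervised sliding-window dataset. A small numpy sketch (window length and function name are illustrative, not the paper's settings):

```python
import numpy as np

def sliding_windows(series, window=6):
    """Turn a univariate traffic series into (features, target) pairs:
    each row holds `window` past values; the target is the next value.
    The window length here is illustrative only."""
    X = np.lib.stride_tricks.sliding_window_view(series, window)[:-1]
    y = series[window:]
    return X, y

series = np.arange(20, dtype=float)
X, y = sliding_windows(series)
# X[0] = [0, 1, 2, 3, 4, 5] predicts y[0] = 6; models such as SVR, RF,
# or an LSTM can then be fit on (X, y).
```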
5.1.3. Performance Metrics
5.2. Experimental Results
5.2.1. Performance Comparison of Models
5.2.2. Model Efficiency
5.2.3. Ablation Studies
- The STG-STAN outperforms all variants, which demonstrates that our proposed method can capture effective information through spatial–temporal modeling. Despite lacking real-world geo-location information, the STG-STAN can achieve accurate predictions by mining potential state transition information from mobile traffic sequences.
- The performance degradation of the STG-STAN without the node attention mechanism indicates that capturing the global correlation of states is critical in the graph structure. After integrating information from neighboring nodes in a GCN, the node attention mechanism is capable of exploiting spatial correlations among nodes in the global graph structure.
- The poor performance of the STG-STAN without the parallel LSTM branch indicates the importance of mining short-term local temporal characteristics. For highly dynamic mobile traffic, integrating such small-scale information greatly enhances the prediction performance of the model.
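The ablation comparisons above are reported in RMSE, MAE, and R2. For reference, these three metrics can be computed as follows (a standard formulation, assumed rather than taken from the paper's code):

```python
import numpy as np

def rmse(y, p):
    return float(np.sqrt(np.mean((y - p) ** 2)))

def mae(y, p):
    return float(np.mean(np.abs(y - p)))

def r2(y, p):
    ss_res = np.sum((y - p) ** 2)           # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
    return float(1.0 - ss_res / ss_tot)

y = np.array([1.0, 2.0, 3.0, 4.0])
assert rmse(y, y) == 0.0 and mae(y, y) == 0.0 and r2(y, y) == 1.0
```

Note that R2 can be negative when a model fits worse than the mean predictor, as seen for DenseNet on the Internet traffic in the results tables.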
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Description | Symbol
---|---
Set of historical mobile traffic | X
Window size |
The k-th state transition graph |
Number of traffic states | N
Set of vertices of the k-th graph |
Set of edges of the k-th graph |
Adjacency matrix of the k-th graph |
Number of times traffic transits from state i to state j |
Length of segment k |
Embeddings of node i |
Trainable weights |
Attention matrix |
References
- Fotouhi, A.; Qiang, H.; Ding, M.; Hassan, M.; Giordano, L.G.; Garcia-Rodriguez, A.; Yuan, J. Survey on UAV Cellular Communications: Practical Aspects, Standardization Advancements, Regulation, and Security Challenges. IEEE Commun. Surv. Tut. 2019, 21, 3417–3442.
- Gu, T.; Fang, Z.; Abhishek, A.; Fu, H.; Hu, P.; Mohapatra, P. IoTGaze: IoT Security Enforcement via Wireless Context Analysis. In Proceedings of the IEEE INFOCOM 2020—IEEE Conference on Computer Communications, Toronto, ON, Canada, 6–9 July 2020; pp. 884–893.
- Lv, G.; Wu, Q.; Wang, W.; Li, Z.; Xie, G. Lumos: Towards Better Video Streaming QoE through Accurate Throughput Prediction. In Proceedings of the IEEE INFOCOM 2022—IEEE Conference on Computer Communications, London, UK, 2–5 May 2022; pp. 650–659.
- Lai, C.; Lu, R.; Zheng, D.; Shen, X. Security and Privacy Challenges in 5G-Enabled Vehicular Networks. IEEE Netw. 2020, 34, 37–45.
- Zhang, C.; Patras, P. Long-Term Mobile Traffic Forecasting Using Deep Spatio-Temporal Neural Networks. In Proceedings of the Eighteenth ACM International Symposium on Mobile Ad Hoc Networking and Computing, Los Angeles, CA, USA, 26–29 June 2018; pp. 231–240.
- Wu, Q.; Chen, X.; Zhou, Z.; Chen, L.; Zhang, J. Deep Reinforcement Learning with Spatio-Temporal Traffic Forecasting for Data-Driven Base Station Sleep Control. IEEE/ACM Trans. Netw. 2021, 29, 935–948.
- Xu, F.; Li, Y.; Wang, H.; Zhang, P.; Jin, D. Understanding Mobile Traffic Patterns of Large Scale Cellular Towers in Urban Environment. IEEE/ACM Trans. Netw. 2017, 25, 1147–1161.
- Yao, Y.; Gu, B.; Su, Z.; Guizani, M. MVSTGN: A Multi-View Spatial-Temporal Graph Network for Cellular Traffic Prediction. IEEE Trans. Mob. Comput. 2021, 22, 2837–2849.
- Zhao, N.; Wu, A.; Pei, Y.; Liang, Y.C.; Niyato, D. Spatial-Temporal Aggregation Graph Convolution Network for Efficient Mobile Cellular Traffic Prediction. IEEE Commun. Lett. 2022, 26, 587–591.
- Wang, X.; Zhou, Z.; Xiao, F.; Xing, K.; Yang, Z.; Liu, Y.; Peng, C. Spatio-Temporal Analysis and Prediction of Cellular Traffic in Metropolis. IEEE Trans. Mob. Comput. 2019, 18, 2190–2202.
- Zhang, C.; Zhang, H.; Yuan, D.; Zhang, M. Citywide Cellular Traffic Prediction Based on Densely Connected Convolutional Neural Networks. IEEE Commun. Lett. 2018, 22, 1656–1659.
- Zhang, C.; Zhang, H.; Yuan, D.; Zhang, M. Time-Wise Attention Aided Convolutional Neural Network for Data-Driven Cellular Traffic Prediction. IEEE Commun. Lett. 2021, 22, 1747–1751.
- He, K.; Chen, X.; Wu, Q.; Yu, S.; Zhou, Z. Graph Attention Spatial-Temporal Network with Collaborative Global-Local Learning for Citywide Mobile Traffic Prediction. IEEE Trans. Mob. Comput. 2022, 21, 1244–1256.
- Wu, Q.; He, K.; Chen, X.; Yu, S.; Zhang, J. Deep Transfer Learning Across Cities for Mobile Traffic Prediction. IEEE/ACM Trans. Netw. 2022, 30, 1255–1267.
- Wang, Z.; Hu, J.; Min, G.; Zhao, Z.; Chang, Z.; Wang, Z. Spatial-Temporal Cellular Traffic Prediction for 5G and Beyond: A Graph Neural Networks-Based Approach. IEEE Trans. Industr. Inform. 2022, 19, 5722–5731.
- Rao, Z.; Xu, Y.; Pan, S.; Guo, J.; Yan, Y.; Wang, Z. Cellular Traffic Prediction: A Deep Learning Method Considering Dynamic Nonlocal Spatial Correlation, Self-Attention, and Correlation of Spatiotemporal Feature Fusion. IEEE Trans. Netw. Serv. Manag. 2023, 20, 426–440.
- Zhang, C.; Zhang, H.; Qiao, J.; Yuan, D.; Zhang, M. Deep Transfer Learning for Intelligent Cellular Traffic Prediction Based on Cross-Domain Big Data. IEEE J. Sel. Areas Commun. 2019, 37, 1389–1401.
- Yu, L.; Li, M.; Jin, W.; Guo, Y.; Wang, Q.; Yan, F.; Li, P. STEP: A Spatio-Temporal Fine-Granular User Traffic Prediction System for Cellular Networks. IEEE Trans. Mob. Comput. 2021, 20, 3453–3466.
- Zhao, N.; Ye, Z.; Pei, Y.; Liang, Y.C.; Niyato, D. Spatial-Temporal Attention-Convolution Network for Citywide Cellular Traffic Prediction. IEEE Commun. Lett. 2020, 24, 2532–2536.
- Zhang, C.; Yu, J.J.Q.; Liu, Y. Spatial-Temporal Graph Attention Networks: A Deep Learning Approach for Traffic Forecasting. IEEE Access 2019, 7, 166246–166256.
- Zhang, Z.; Cai, J.; Zhang, Y.; Wang, J. Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction. Proc. AAAI 2020, 34, 3065–3072.
- Zhao, T.; Zhang, X.; Wang, S. GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural Networks. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, Jerusalem, Israel, 8–12 March 2021; pp. 833–841.
- Gu, J.; Zhao, H.; Lin, Z.; Li, S.; Cai, J.; Ling, M. Scene Graph Generation with External Knowledge and Image Reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1969–1978.
- Gao, C.; Wang, X.; He, X.; Li, Y. Graph Neural Networks for Recommender System. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, Tempe, AZ, USA, 21–25 February 2022; pp. 1623–1625.
- Taheri, A.; Gimpel, K.; Berger-Wolf, T. Learning to Represent the Evolution of Dynamic Graphs with Recurrent Models. In Proceedings of the 2019 World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 301–307.
- Ma, Y.; Guo, Z.; Ren, Z.; Tang, J.; Yin, D. Streaming Graph Neural Networks. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi’an, China, 25–30 July 2020; pp. 719–728.
- Pareja, A.; Domeniconi, G.; Chen, J.; Ma, T.; Suzumura, T.; Kanezashi, H.; Kaler, T.; Schardl, T.; Leiserson, C. EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs. Proc. AAAI 2020, 34, 5363–5370.
- Sankar, A.; Wu, Y.; Gou, L.; Zhang, W.; Yang, H. DySAT: Deep Neural Representation Learning on Dynamic Graphs via Self-Attention Networks. In Proceedings of the 13th International Conference on Web Search and Data Mining, Houston, TX, USA, 3–7 February 2020; pp. 519–527.
- Hu, W.; Yang, Y.; Cheng, Z.; Yang, C.; Ren, X. Time-Series Event Prediction with Evolutionary State Graph. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, Jerusalem, Israel, 8–12 March 2021; pp. 580–588.
- Zhao, L.; Song, Y.; Zhang, C.; Liu, Y.; Wang, P.; Lin, T.; Deng, M.; Li, H. T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction. IEEE Trans. Intell. Transp. Syst. 2020, 21, 3848–3858.
- Hu, J.; Lin, X.; Wang, C. DSTGCN: Dynamic Spatial-Temporal Graph Convolutional Network for Traffic Prediction. IEEE Sens. J. 2022, 22, 13116–13124.
- Yu, J.J.Q. Graph Construction for Traffic Prediction: A Data-Driven Approach. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15015–15027.
- Barlacchi, G.; De Nadai, M.; Larcher, R.; Casella, A.; Chitic, C.; Torrisi, G.; Antonelli, F.; Vespignani, A.; Pentland, A.; Lepri, B. A Multi-Source Dataset of Urban Life in the City of Milan and the Province of Trentino. Sci. Data 2015, 2, 150055.
- Truong, C.; Oudre, L.; Vayatis, N. Selective Review of Offline Change Point Detection Methods. Signal Process. 2020, 167, 107299.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010.
- Huang, C.W.; Chiang, C.T.; Li, Q. A Study of Deep Learning Networks on Mobile Traffic Forecasting. In Proceedings of the International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal, QC, Canada, 8–13 October 2017; pp. 1–6.
Dataset | Traffic | Model | RMSE (n = 1) | MAE (n = 1) | R2 (n = 1) | RMSE (n = 20) | MAE (n = 20) | R2 (n = 20)
---|---|---|---|---|---|---|---|---
Milan | SMS | SVR | 27.8938 | 10.8571 | 0.7896 | 30.4487 | 16.2678 | 0.8409
 | | RF | 25.4097 | 9.4293 | 0.8253 | 27.4147 | 11.2501 | 0.868
 | | LSTM | 27.4289 | 10.9540 | 0.7943 | 27.1170 | 11.2600 | 0.8706
 | | CNN-LSTM | - | - | - | 32.2569 | 14.8794 | 0.8128
 | | DenseNet | - | - | - | 31.7915 | 18.1345 | 0.8196
 | | STG-STAN | 26.6621 | 10.6130 | 0.8056 | 26.8549 | 11.0842 | 0.8725
 | Call | SVR | 15.4176 | 6.3774 | 0.9185 | 24.6290 | 13.2826 | 0.9241
 | | RF | 16.3987 | 5.8604 | 0.9077 | 24.7684 | 9.5313 | 0.9241
 | | LSTM | 11.8367 | 8.4304 | 0.7405 | 23.5098 | 9.6391 | 0.9317
 | | CNN-LSTM | - | - | - | 24.2783 | 10.7070 | 0.9287
 | | DenseNet | - | - | - | 29.7551 | 15.8865 | 0.8820
 | | STG-STAN | 10.9830 | 7.3755 | 0.7670 | 22.0636 | 9.0541 | 0.9395
 | Internet | SVR | 174.0304 | 59.2985 | 0.7617 | 79.4623 | 58.0813 | 0.9558
 | | RF | 110.8407 | 44.1866 | 0.9034 | 68.4560 | 35.8034 | 0.9685
 | | LSTM | 135.5389 | 54.2510 | 0.8555 | 60.9615 | 30.9476 | 0.9756
 | | CNN-LSTM | - | - | - | 114.6127 | 66.4044 | 0.9142
 | | DenseNet | - | - | - | 276.5915 | 110.6223 | −0.1705
 | | STG-STAN | 152.2309 | 53.8450 | 0.8178 | 60.3678 | 31.1022 | 0.9761
Trentino | SMS | SVR | 23.3452 | 7.8508 | 0.9093 | 15.1005 | 6.1116 | 0.6828
 | | RF | 19.8042 | 5.9814 | 0.9347 | 11.3088 | 5.2303 | 0.8400
 | | LSTM | 24.8605 | 7.4796 | 0.8943 | 10.1354 | 4.5860 | 0.8640
 | | CNN-LSTM | - | - | - | 16.1642 | 7.6263 | 0.6406
 | | DenseNet | - | - | - | 14.2120 | 7.5922 | 0.7342
 | | STG-STAN | 23.2465 | 7.2252 | 0.9072 | 9.6616 | 4.5008 | 0.8763
 | Call | SVR | 12.9342 | 3.5366 | 0.9461 | 5.9257 | 2.4553 | 0.8266
 | | RF | 13.3254 | 2.9403 | 0.9427 | 4.7544 | 1.9579 | 0.9091
 | | LSTM | 16.7034 | 4.4159 | 0.8850 | 4.4325 | 2.0017 | 0.9202
 | | CNN-LSTM | - | - | - | 6.9517 | 3.2601 | 0.8260
 | | DenseNet | - | - | - | 7.5137 | 3.7362 | 0.7853
 | | STG-STAN | 15.5026 | 3.9873 | 0.9010 | 4.1654 | 1.9170 | 0.9318
 | Internet | SVR | 93.5274 | 27.1957 | 0.9080 | 58.0123 | 17.8620 | 0.6548
 | | RF | 64.6846 | 19.2679 | 0.9560 | 44.9334 | 17.3566 | 0.7921
 | | LSTM | 63.2925 | 22.9900 | 0.8546 | 38.4114 | 13.2221 | 0.8359
 | | CNN-LSTM | - | - | - | 45.2172 | 21.0177 | 0.7800
 | | DenseNet | - | - | - | 147.6278 | 79.1948 | −0.7530
 | | STG-STAN | 77.2071 | 24.5499 | 0.9347 | 38.0165 | 13.1691 | 0.8386
Dataset | N | RMSE (SMS) | MAE (SMS) | R2 (SMS) | RMSE (Call) | MAE (Call) | R2 (Call) | RMSE (Internet) | MAE (Internet) | R2 (Internet)
---|---|---|---|---|---|---|---|---|---|---
Milan | 5 | 26.7188 | 11.1977 | 0.8745 | 24.5285 | 10.0807 | 0.9261 | 65.2909 | 33.2255 | 0.9721
 | 10 | 26.9428 | 11.1375 | 0.8718 | 25.5865 | 10.0202 | 0.9202 | 60.3678 | 31.1022 | 0.9761
 | 15 | 26.8549 | 11.0842 | 0.8725 | 22.0636 | 9.0541 | 0.9395 | 62.8848 | 32.8871 | 0.9724
 | 20 | 27.1671 | 11.3420 | 0.8696 | 25.4098 | 9.8879 | 0.9210 | 63.6375 | 33.0716 | 0.9736
 | 30 | 27.4558 | 11.2727 | 0.8669 | 25.3670 | 10.3641 | 0.9213 | 64.3278 | 33.6158 | 0.9728
 | 40 | 27.4626 | 11.5221 | 0.8667 | 25.0931 | 9.9517 | 0.9233 | 64.9026 | 30.8612 | 0.9723
 | 50 | 27.3804 | 11.4718 | 0.8675 | 25.8404 | 10.3079 | 0.9186 | 64.8200 | 34.2479 | 0.9726
 | 60 | 27.0784 | 11.4278 | 0.8711 | 25.7499 | 10.1071 | 0.9190 | 64.5283 | 33.9147 | 0.9728
 | 70 | 27.2537 | 11.5924 | 0.8693 | 26.0037 | 10.1500 | 0.9174 | 63.9375 | 33.8695 | 0.9732
 | 80 | 27.3340 | 11.4508 | 0.8688 | 25.3610 | 10.2044 | 0.9216 | 64.7184 | 34.5756 | 0.9726
 | 90 | 27.3176 | 11.7105 | 0.8685 | 26.1878 | 10.7755 | 0.9165 | 63.8565 | 33.5565 | 0.9735
 | 100 | 27.2328 | 11.5327 | 0.8690 | 25.2173 | 9.9323 | 0.9224 | 65.2322 | 34.4176 | 0.9722
Trentino | 5 | 10.0422 | 4.5140 | 0.8670 | 4.3159 | 1.9284 | 0.9267 | 38.0165 | 13.1691 | 0.8386
 | 10 | 10.0360 | 4.4825 | 0.8682 | 4.2012 | 1.9425 | 0.9312 | 39.5458 | 13.5004 | 0.8231
 | 15 | 10.4851 | 4.5731 | 0.8589 | 4.1654 | 1.9170 | 0.9318 | 41.6017 | 13.7929 | 0.8005
 | 20 | 10.5031 | 4.7588 | 0.8628 | 4.3731 | 1.9480 | 0.9220 | 39.4425 | 13.2419 | 0.8323
 | 30 | 10.2293 | 4.5512 | 0.8617 | 4.2240 | 1.9434 | 0.9288 | 40.5948 | 13.2685 | 0.8317
 | 40 | 10.3206 | 4.5226 | 0.8574 | 4.3486 | 1.9785 | 0.9228 | 40.6837 | 13.5051 | 0.8116
 | 50 | 10.1893 | 4.5529 | 0.8628 | 4.3337 | 1.9690 | 0.9252 | 42.0312 | 13.7087 | 0.8079
 | 60 | 10.0265 | 4.5322 | 0.8684 | 4.3698 | 1.9884 | 0.9247 | 40.1723 | 13.4775 | 0.8182
 | 70 | 9.8508 | 4.4974 | 0.8723 | 4.3199 | 1.9535 | 0.9266 | 40.3572 | 13.4586 | 0.8180
 | 80 | 10.0287 | 4.5460 | 0.8677 | 4.2664 | 1.9801 | 0.9288 | 40.8583 | 13.5146 | 0.8140
 | 90 | 10.2194 | 4.5794 | 0.8625 | 4.2759 | 1.9577 | 0.9299 | 40.3226 | 13.4432 | 0.8162
 | 100 | 9.6616 | 4.5008 | 0.8763 | 4.2954 | 1.9798 | 0.9288 | 40.6897 | 13.5521 | 0.8154
Model | Parameters | Training Time | Inference Time
---|---|---|---
SVR | - | 40.98 s | 1.97 s
RF | - | 28.57 s | 0.02 s
LSTM | 21,377 | 4.17 s/epoch | 0.89 s
CNN-LSTM | 388,692 | 0.43 s/epoch | 0.02 s
DenseNet | 83,272 | 0.92 s/epoch | 0.05 s
STG-STAN | 39,360 | 13.39 s/epoch | 1.85 s
Dataset | Traffic | Model | RMSE | MAE | R2
---|---|---|---|---|---
Milan | SMS | STG-STAN | 26.8549 | 11.0842 | 0.8725
 | | w/o Att | 26.9117 | 11.3635 | 0.8723
 | | w/o PLSTM | 36.0434 | 22.7099 | 0.7599
 | | w/o Att-PLSTM | 36.6125 | 22.4961 | 0.7511
 | Call | STG-STAN | 22.0636 | 9.0541 | 0.9395
 | | w/o Att | 23.9859 | 9.7559 | 0.9286
 | | w/o PLSTM | 30.8017 | 14.8696 | 0.8809
 | | w/o Att-PLSTM | 32.1083 | 15.6051 | 0.8701
 | Internet | STG-STAN | 60.3678 | 31.1022 | 0.9761
 | | w/o Att | 62.4799 | 33.5622 | 0.9742
 | | w/o PLSTM | 147.2356 | 109.5837 | 0.8424
 | | w/o Att-PLSTM | 151.5408 | 113.4919 | 0.8359
Trentino | SMS | STG-STAN | 9.6616 | 4.5008 | 0.8763
 | | w/o Att | 10.2407 | 4.6374 | 0.8622
 | | w/o PLSTM | 12.8424 | 5.6754 | 0.7802
 | | w/o Att-PLSTM | 11.8233 | 5.4809 | 0.8185
 | Call | STG-STAN | 4.1654 | 1.9170 | 0.9318
 | | w/o Att | 4.4178 | 1.9967 | 0.9234
 | | w/o PLSTM | 6.7825 | 3.2252 | 0.8021
 | | w/o Att-PLSTM | 6.2948 | 3.2013 | 0.8452
 | Internet | STG-STAN | 38.0165 | 13.1691 | 0.8386
 | | w/o Att | 38.9355 | 17.6448 | 0.8314
 | | w/o PLSTM | 63.8827 | 41.5215 | 0.6228
 | | w/o Att-PLSTM | 63.9516 | 41.7513 | 0.6038
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shi, J.; Cui, L.; Gu, B.; Lyu, B.; Gong, S. State Transition Graph-Based Spatial–Temporal Attention Network for Cell-Level Mobile Traffic Prediction. Sensors 2023, 23, 9308. https://doi.org/10.3390/s23239308