Resource Allocation With Edge-Cloud Collaborative Traffic Prediction in Integrated Radio and Optical Networks
ABSTRACT By integrating communications in different domains, integrated radio and optical networks can serve a wider range of applications and services. Integrated radio and optical network scenarios involve more network nodes with weak computation ability, such as small-cell base stations. To pursue efficient integrated radio and optical networks, more efficient ways of conducting transmission under the demands of edge-cloud collaboration are required. A lack of forward-looking resource allocation can easily lead to a waste of network resources without the expected return. Therefore, an efficient resource allocation scheme needs to consider three issues: 1) a comprehensive perspective on traffic prediction; 2) a release of the pressure on the transmission pipeline during the prediction process; and 3) a reduction of the computation loss at edge nodes. In this paper, benefiting from machine learning, we propose resource allocation with edge-cloud collaborative traffic prediction (TP-ECC) in integrated radio and optical networks, where an efficient resource allocation scheme (ERAS) is designed based on the prediction results of a gated recurrent unit model. We maximize the utilization of limited resources to improve the awareness of network status. We present three evaluation indicators and build a network architecture to evaluate our resource allocation scheme. Through edge-cloud collaboration, our proposal improves traffic prediction accuracy by 9.5% compared with single-point traffic prediction, and resource utilization is also improved by edge-cloud collaborative traffic prediction.
INDEX TERMS Integrated radio and optical networks, resource allocation, edge-cloud collaboration, traffic
prediction.
I. INTRODUCTION

Integrated radio and optical networks can serve diversified applications and services by introducing Internet of Things (IoT) supporting technology, which provides seamless interconnection among heterogeneous devices [1]. With the access of a large number of network devices, a high volume of data would be stored or processed at the edge of weak-computation-ability nodes in integrated radio and optical networks. The architecture of mobile edge computing (MEC), with a cloud platform and edge nodes, has become a new and attractive computing paradigm that integrates the computing power of the cloud platform with the flexible tasks of edge nodes. It can support various computationally complex, delay-sensitive applications, such as face recognition, natural language processing, and interactive games [2], [3], [4]. Therefore, the MEC architecture has become a typical networking mode for integrated radio and optical networks. However, the available resources in a single edge node (such as a small-cell base station) are very limited, which remains an important issue in this scenario [5]. Although some works upload computing tasks that exceed the capacity of an edge node to the remote cloud, the total resource consumption in the system may be high due to the bandwidth occupation on the transmission pipeline [6], [7]. In other words, the limited resource of a single edge node severely degrades the performance of integrated radio and optical networks.

The associate editor coordinating the review of this manuscript and approving it for publication was Shadi Alawneh.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
VOLUME 11, 2023
B. Bao et al.: Resource Allocation With TP-ECC in Integrated Radio and Optical Networks

A reasonable resource allocation algorithm can achieve load balancing among edge nodes, which enables the resource-constrained edge nodes to help each other and realize more flexible sharing of workloads and resources [8]. For computing-intensive tasks, it can meet the heterogeneous needs of access terminals [9]. For resource allocation, traffic prediction is a key first-hand operation that should not be skipped. High-precision traffic prediction can guide not only flexible switch adjustments but also the formation of backup paths under burst traffic [10], which may further break the limitations of integrated radio and optical networks, so as to improve resource utilization and reduce the blocking rate as well as the average queue delay.

Machine learning has been applied to traffic prediction, including the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Hierarchical Temporal Memory (HTM) [11], [12], [13], which can iteratively predict the next traffic flow of the network in a time series. Although this direction has received great attention, most existing solutions make the simplifying assumption that the perception of network state is based only on the commanding perspective of the cloud platform, without taking the perspectives of edge nodes into account. Motivated by this, we consider and forecast traffic from the perspective of both cloud and edge nodes. Compared with the case where the neural network is executed in a single location, the proposed edge-cloud collaborative traffic prediction (TP-ECC) is more promising in obtaining higher prediction accuracy. Besides, considering the limited computing capacity of edge nodes, a simple traffic prediction model is required to support the function, which also needs to support rapid processing of large-scale datasets in cloud nodes. Thus, a gated recurrent unit (GRU) is adopted to forecast the traffic in the TP-ECC model, since it has a simpler gate structure than the LSTM traffic prediction model [14]. With the designed edge-cloud collaboration system architecture, the TP-ECC module can improve the accuracy of network state perception and give an accurate forecast of edge-node traffic.

Based on the proposed TP-ECC, we further propose a new efficient resource allocation scheme (ERAS) in integrated radio and optical networks. On the basis of node traffic prediction results, ERAS uses load balancing theory to allocate resources among the edge nodes in order to carry end-to-end services. The proposed ERAS algorithm, together with the traffic prediction, can make continuous strategic adjustments based on the real-time network status, formulate an optimal strategy that meets the needs of users, and send the results to the devices for fast network configuration. This optimized resource allocation method provides a new solution for service-oriented intelligent network optimization and configuration. We also design a control experiment in the simulation environment to demonstrate the effectiveness of the whole system architecture and strategy through data analysis. The performance evaluation with the simulation of ERAS shows that the TP-ECC module performs well in terms of accuracy; under its guidance, ERAS generally outperforms previous works.

The main contributions of this paper can be summarized as follows.

1) A system architecture is set up to support edge-cloud collaboration in integrated radio and optical networks, which is composed of several modules arranged on the cloud platform and edge nodes. We discuss the function and workflow of each module in detail.

2) An edge-cloud collaboration based traffic prediction mechanism is proposed with the adoption of GRU, which can effectively improve the accuracy of traffic prediction, thereby contributing to the resource allocation scheme.

3) Based on the traffic prediction, an efficient resource allocation method is further proposed. The service reconfiguration and migration process are triggered by the load balancing technology, and the transmission path with the minimum resource consumption is selected to achieve the purpose of resource saving in integrated radio and optical network scenarios.

The rest of the paper is organized as follows. A summary of the literature is provided in Section II. In Section III, we present the edge-cloud architecture model considered in this paper and define the relations among the involved nodes and neural networks. In Section IV, we explain how the edge-cloud collaborative traffic prediction takes place. Section V designs the derived resource allocation algorithm. Further, in Section VI, we present the dataset, define the metrics used for the evaluation, and evaluate the performance of our algorithm. Finally, Section VII concludes this manuscript.

II. RELATED WORKS

With the improvements in network infrastructure and the high demands of network users, the network scale continues expanding, and multiple service providers and network operators coexist, leading to the emergence of tidal flow effects. Although the tide peaks can be predicted to some extent, they may still bring uncertainty to network operation and maintenance [15]. To solve this problem, reconfigurable networks supported by software-defined networking (SDN) have recently attracted much attention, as they can adapt to the service demand flow as much as possible [16], [17]. The general SDN-based reconfiguration framework consists of two parts: modeling and forecasting the traffic demand flow, and using the prediction for proactive (offline) network optimization between predefined (reconfiguration) time points [18]. The overall goal is to find a resource allocation strategy that best suits the future traffic demand of the network [19].

Generally, previous works have mainly focused on optimizing the efficiency of the resource allocation scheme, without considering the uniformity and sustainability of occupied resources [17], [18], [21]. Some of them chained several recursive short-term prediction processes (5 minutes each) to approximate long-term traffic prediction, which inevitably led to cumulative errors and low reliability. In this case, although the resource allocation strategy can be solved in a very short time, it easily leads to highly unbalanced allocation, and especially after network reconfiguration, each edge node does not pay enough attention to the quality of service.

Although there have been various resource allocation methods using the results of traffic prediction, load balancing algorithms and their related service reconfiguration mechanisms have been the focus of network research in recent years. Specifically, load-balancing resource allocation has been studied in wireless networks, SDN IP networks, and transmission control protocol (TCP) networks [13], [15], [16], [27], [28], [29]. However, to the best of our knowledge, few load balancing schemes consider traffic prediction results based on edge-cloud collaboration in integrated radio and optical networks.

Motivated by the drawbacks above, the traffic prediction of edge-cloud collaboration with GRU is explicitly considered in this paper. The purpose is to break the boundary of resource sharing between cloud and edge nodes, so as to provide a more responsive and accurate resource allocation strategy.
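As context for the choice of GRU over LSTM, the following sketch (our own illustration, not the authors' implementation) shows a single GRU cell forward step in NumPy: it needs only an update gate z and a reset gate r, with no separate cell state, which is what makes it lighter than an LSTM on weak edge nodes. Weights are random and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state h~."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        k = input_size + hidden_size
        # One weight matrix per gate, acting on the concatenation [x_t, h_{t-1}].
        self.Wz = rng.normal(0, 0.1, (k, hidden_size))
        self.Wr = rng.normal(0, 0.1, (k, hidden_size))
        self.Wh = rng.normal(0, 0.1, (k, hidden_size))
        self.hidden_size = hidden_size

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(xh @ self.Wz)            # update gate
        r = sigmoid(xh @ self.Wr)            # reset gate
        xrh = np.concatenate([x, r * h])
        h_tilde = np.tanh(xrh @ self.Wh)     # candidate hidden state
        return (1 - z) * h + z * h_tilde     # new hidden state

# Iteratively consume a short window of normalized traffic samples:
cell = GRUCell(input_size=1, hidden_size=8)
h = np.zeros(8)
for x_t in [0.3, 0.5, 0.4, 0.6]:
    h = cell.step(np.array([x_t]), h)
print(h.shape)  # (8,)
```

In practice the final hidden state would feed a small output layer to produce the next-moment traffic forecast; training (e.g., truncated backpropagation through time, as discussed later for edge nodes) is out of scope for this sketch.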
B. STRUCTURE OF EDGE-CLOUD COLLABORATION

As shown in Fig. 1, an access node may include a data collecting module, a data clustering module, an access traffic prediction model, and an information uploading module.

Data Collecting Module collects traffic data of the access node through NetFlow technology. The traffic data of the current access node, after being collected, is stored in a database in the format of a tuple, i.e., <date, time, duration, size of traffic, source node IP, destination node IP, source node port, destination node port, collection node ID>.

Data Clustering Module clusters the traffic data collected by the data collecting module according to the path information. Specifically, it determines whether the traffic data is access traffic data or network traffic data based on the source node IP and the destination node IP in the path information of the traffic data. In addition, if traffic data is generated between the access node and a transmission network node, or between access nodes, its path information needs to be recorded. However, if it is generated between a terminal device and the access node, the path information is directly marked as access traffic. For example, it can be recorded as NaN, without recording the IP address and port information of the source/destination node of the traffic, in order to save storage cost of edge nodes.

For easy understanding, the definitions that are frequently mentioned in the following are denoted as symbols, which are summarized in Table 1.

TABLE 1. Mathematical definitions.

Access Traffic Prediction Model takes the access traffic data as input and outputs a prediction result r_{t+1}^{access,p} of access traffic at the next moment. It adds the access traffic data to a training set and performs real-time training to update the access traffic prediction neural network.

Information Uploading Module uploads the prediction result r_{t+1}^{access,p} output by the access traffic prediction model, as well as the network traffic data, to the cloud platform. The volume of data to be forwarded is measured first by comparing r_{t+1}^{access,p} with the traffic threshold h_t^p. When r_{t+1}^{access,p} < h_t^p, the information uploading module uploads r_{t+1}^{access,p} as well as the network traffic data to the cloud platform. When r_{t+1}^{access,p} ≥ h_t^p, the information uploading module backs up the network traffic data locally at first, then uploads r_{t+1}^{access,p} and the size s_t^p of the network traffic data to the cloud platform. It should be noted that the traffic threshold h_t^p is determined and issued to each access node by the cloud platform according to (1), where M_p denotes the physical traffic limit for each access node and r_{t-1}^{access,p} denotes the prediction result of access traffic at the last moment t-1.

h_t^p = M_p - r_{t-1}^{access,p}    (1)

Furthermore, the information uploading module also uploads the network traffic data backed up locally to the cloud platform in chronological order, according to the size of the local data required to be uploaded at the next moment, as issued by the cloud platform. Such an operation makes the best of the storage resources of the cloud platform by transferring the storage task of the access node without the heavy burden of forwarding data to the cloud platform. In this way, the limited edge computing resources of the access node can be spared, and the network operation and maintenance costs can be reduced.

The access node may further include a data cleaning module between the data collecting module and the data clustering module, and a data conversion module between the data clustering module and the access traffic prediction model. The former cleans incomplete and repeated data records, as well as records roaming to the local node, in the traffic data collected by the data collecting module. In this solution, the traffic data is sample-edited before being uploaded to the cloud platform, so that the burden on the transmission channel can be greatly reduced while effective information is reserved. The latter converts the access traffic data into the data format of a training set for the access traffic prediction model.

As shown in Fig. 1, a cloud platform may include a data-receiving module, a network traffic prediction model, and a traffic prediction module.

The data-receiving module receives, through the transmission network, the network traffic data reported by each access node and the prediction result of access traffic at the next moment.

The network traffic prediction model takes the network traffic reported by each access node as input, and outputs the prediction result of traffic to be forwarded by node p on port n, which can be denoted as r_{t+1}^{network,p,n}. Obviously, we have (2).

r_{t+1}^{network,p} = Σ_n r_{t+1}^{network,p,n}    (2)

Further, the network traffic prediction model also regularly adds the network traffic data to the training set and performs real-time training in order to update the network traffic prediction model.

The traffic prediction module determines and outputs the prediction result of traffic at the next moment for each access node, respectively, according to the prediction result r_{t+1}^{access,p} reported by each access node, the prediction result r_{t+1}^{network,p} calculated by the cloud, and the data to be uploaded to the cloud platform at the next moment. The prediction result of traffic at the next moment for access node p may be determined through the following equation.

r_{t+1}^p = r_{t+1}^{network,p} + r_{t+1}^{access,p}    (3)

The cloud platform may further include a traffic threshold determination module and a parameter issuing module. The former determines the traffic threshold for each access node corresponding to the next moment according to the prediction result of network traffic for each access node, respectively.

The neural networks at edge nodes cannot afford excessive storage for historical data. Therefore, we appropriately shorten the time window of these neural networks; in other words, we truncate backpropagation through time, and hand over the tasks of storage as well as learning long-term traffic characteristics to cloud nodes.

The neural network deployed at the cloud platform learns the traffic pattern of the entire network topology from a macro perspective. The pre-processed traffic information regularly uploaded by each edge node is used as a training set. The neural network iteratively acquires multiple input features besides traffic data, including the connection relationship between nodes, network macro events, and so on. This is a significant improvement over the weakness of the edge nodes with their insufficient computing power and limited vision.
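The threshold rule in (1) and the uploading decision it drives can be sketched as follows. This is a minimal illustration with made-up numbers; the function and key names are ours, not the paper's.

```python
def traffic_threshold(M_p, r_access_prev):
    """Eq. (1): threshold issued by the cloud, h_t^p = M_p - r_{t-1}^{access,p}."""
    return M_p - r_access_prev

def uploading_decision(r_access_next, h_t, network_data_size):
    """Information Uploading Module: if the predicted access traffic is below
    the threshold, upload the prediction and the network traffic data; otherwise
    back the network traffic data up locally and report only its size s_t^p."""
    if r_access_next < h_t:
        return {"upload": ["r_access_next", "network_traffic_data"]}
    return {"upload": ["r_access_next", "network_data_size"],
            "backup_locally": True,
            "s_t": network_data_size}

h_t = traffic_threshold(M_p=100.0, r_access_prev=30.0)  # 70.0
print(uploading_decision(r_access_next=80.0, h_t=h_t, network_data_size=12.5))
```

Here the predicted access traffic (80.0) exceeds the threshold (70.0), so the node backs up the raw network traffic data and uploads only the prediction and the data size, relieving pressure on the transmission pipeline as described above.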
Step 4: each access node uploads the prediction result of access traffic at the next moment r_{t+1}^{access,p} and the network traffic data to the cloud platform, respectively.

Firstly, for a port n at an access node, the traffic threshold corresponding to the next moment is calculated according to the prediction result r_{t+1}^{network,p,n} of the traffic to be forwarded at the next moment, through the following equation.

h_{t+1}^{p,n} = m_0 - Σ_{i∈I} r_{t+1}^{network,p,i}    (8)

Step 7: the cloud platform determines and outputs the prediction result of traffic for each access node according to the prediction result of access traffic r_{t+1}^{access,p}, the prediction result of network traffic r_{t+1}^{network,p}, and the size of the data required to be uploaded to the cloud platform k_{t+1}^p at the next moment.

The prediction result of traffic at the next moment for the access node p may be determined through the following equation.
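While the exact combining equation of Step 7 is not reproduced here, the per-port threshold of (8) and the basic aggregation of (2)-(3) can be sketched as follows (illustrative numbers; the function names are our assumptions, not the paper's):

```python
def port_threshold(m0, port_forecasts):
    """Eq. (8): h_{t+1}^{p,n} = m0 - sum_i r_{t+1}^{network,p,i}, i in I."""
    return m0 - sum(port_forecasts)

def node_forecast(access_forecast, port_forecasts):
    """Eq. (2)+(3): sum the per-port network forecasts for node p,
    then add the access-traffic forecast to obtain r_{t+1}^p."""
    r_network = sum(port_forecasts)          # eq. (2)
    return r_network + access_forecast       # eq. (3)

ports = [10.0, 5.0, 7.5]                     # r_{t+1}^{network,p,i} per port i
print(port_threshold(m0=50.0, port_forecasts=ports))               # 27.5
print(node_forecast(access_forecast=12.0, port_forecasts=ports))   # 34.5
```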
FIGURE 3. (a) Schematic diagram of business request setting; (b) schematic diagram of the optical network of the data center; (c) the auxiliary diagram of the proposed Efficient Resource Allocation Scheme (ERAS).
i,j
shown in Fig. 3 (a). The yellow circle represents the virtual where Breq denotes the bandwidth requirement of access
i,j
access node, which is the edge node in edge-cloud collabora- services with i and j as source and destination nodes. Bres
tion. The orange circle represents the virtual cloud platform, denotes the remaining bandwidth of the optical link with i
the red hexagon denotes the switching resources required and j as the source and destination nodes. Rireq denotes the
by the virtual edge node, the red octagon is the application number of end-to-end service requests accessed from node i.
resources required by the virtual cloud platform for the per- Rires denotes the remaining bandwidth resources of node i that
ception of network status, and the blue quad indicates the can be used for forwarding services after TP-ECC prediction.
bandwidth resources required by the access service request. It can be seen that the request resource is proportional to
On the basis of the above network topology and service the weight, and the residual resource is inversely proportional
request definition, Fig. 3 (b) shows the schematic diagram of to the weight. This setting follows the idea that the more
the integrated radio and optical networks with the edge-cloud sufficient the residual resource is, the smaller the weight will
architecture, in which the yellow circle represents the physi- be, which is more advantageous in the routing process.
cal access (edge) node, the orange circle is the cloud platform, When a service can be mapped to a physical network with
the red hollow hexagon represents the remaining switching multiple routing options, the equilibrium factor of each link
resources of the physical edge node, the red hollow octagon is calculated one by one, and the path with the least sum
indicates the remaining application resources of the cloud of weights is selected, and the result is associated with the
platform, and the purple triangle represents the number of virtual node pair. In the case of severe resource constraints,
remaining ports of the physical edge node or cloud platform. the priority of traffic distribution to the node with less heavy
The blue quadrilateral represents the remaining bandwidth load to route can maximize the reduction of network blocking
resource of the optical link. rate and traffic delay.
Fig. 3 (c) shows the auxiliary diagram of ERAS process. Since the evaluation of cloud platform has been completed
First, judge whether the remaining application resources of in the front stage, the weight value of double solid line is
the cloud platform corresponding to the optimal and subop- taken as 1 to reduce the impact on the calculation of the
timal virtual cloud platform meet the access service require- shortest path as much as possible. On the basis of the previous
ments in turn. If they meet the requirements, use double solid step, the auxiliary graph topology after updating the weight
lines to connect them. If both cloud platforms are lack of is rerouted, and the shortest path is selected for mapping. The
resources, the mapping fails, and then traverse all the physical whole ERAS algorithm above can be abstractly summarized
edge nodes. If the remaining switching resources meet the in Algorithm 1.
needs of the virtual edge node, use the dotted line to connect In the stage of creating auxiliary graph, the time complex-
them. Otherwise, the link between the node and other nodes ity is O(m2 )+O(l), where m is the number of virtual cloud
is regarded as open circuit. platforms and l is the number of links in physical topology.
After drawing the auxiliary diagram, ERAS supplements In the stage of routing calculation, the shortest path is calcu-
the weight value for each link. The weight of the dotted line lated based on the topology with updated weight, and the time
is set to a maximum integer value, which is far more than complexity is O(n3 ), so the comprehensive time complexity
the total weight of all links in the topology. The weight value of the algorithm is O(m2 )+O(l)+O(n3 ).
of the real line comprehensively considers the demand and
residual relationship of nodes and links, as the equilibrium
factor 1, which can be calculated by (12). VI. PERFORMANCE EVALUATION
A. EXPERIMENTAL SETUP
v Experiments in this section are designed to demonstrate the
i,j
s u j traffic prediction accuracy of TP-ECC as well as the overall
Breq Rireq 1 u Rreq 1
1= i,j
+ · i t · performance of ERAS.
Bres Rires RNumres + 1 Rjres RjNumres + 1 Python is used to generate the underlying physical topol-
(12) ogy [34], in which 200 nodes are evenly distributed in four
Algorithm 1 Efficient Resource Allocation Scheme of service queue in a single simulation is set to 1000, and
Input: Virtualization business request RPi (S, D, d, Cp ); the average value of multiple simulation data is taken as the
Output: Virtualization resource allocation mapping results, simulation result.
Success / False;
1: for each working cycle B. ACCURACY OF TRAFFIC PREDICTION
2: Initialize virtualization mapping results Rov as False. Figure 4(a) shows the prediction results of TP-ECC and the
3: for virtual cloud platform d v in set D do ground traffic series. The API of neural networks used in the
4: if the remaining application resources of the TP-ECC algorithm are based on TensorFlow 1.2.1 in Python
corresponding cloud platform meet the demand, 2.6. It can be seen that the prediction is generally accurate,
p
that is, Ar di > A(d v ) then and some special phenomena can be further explained. First,
5: Use double solid lines to connect the the prediction results are not disturbed by any instantaneous
corresponding cloud platform. traffic reductions, so the future results are still reasonable.
6: end if This is because the cloud’s grasp of the overall trend pre-
7: end for vents edge nodes from being over-sensitive to those abnormal
8: for virtual edge node d v in set D do reduction data. Second, early predictions are made precisely
9: if the remaining exchange resources of for the accidental traffic spikes. The reason is that when the
corresponding nodes meet the demand, that storage task is transferred to the cloud, the edge node has
p
is, Cr vi > C(vv ) then more resource to ensure the local prediction is refined and
10: Use dotted lines to connect the efficient.
corresponding nodes.
11: else
12: Remove the physical edge node.
13: end if
14: end for
15: Add weight value for all links.
16: The shortest path of virtual network
mapping is
calculated on the basis of Gp = P nj , x∗nk
after the link weight is supplemented.
17: Set virtualization mapping results Rov as True.
end for
18: return P nj , x∗nk , Rov
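To make the routing step of ERAS concrete, the sketch below (our own illustration with made-up numbers and helper names, not the paper's code) shows how an equilibrium factor in the spirit of (12) can weight the solid links of the auxiliary graph, while Dijkstra's algorithm selects the least-weight mapping path; double solid lines to the evaluated cloud carry weight 1, and infeasible dotted links would simply get a huge integer weight as described above.

```python
import heapq
from math import sqrt

def equilibrium_factor(B_req, B_res, Rreq_i, Rres_i, Rnum_i,
                       Rreq_j, Rres_j, Rnum_j):
    """Link weight in the spirit of eq. (12): higher demand raises the
    weight, more spare capacity lowers it."""
    return (B_req / B_res
            + (Rreq_i / Rres_i) * (1.0 / (Rnum_i + 1))
            * sqrt((Rreq_j / Rres_j) * (1.0 / (Rnum_j + 1))))

def shortest_path(graph, src, dst):
    """Dijkstra over the weighted auxiliary graph; graph[u] = {v: weight}."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

# Toy auxiliary graph: weight 1.0 on double solid lines to the cloud,
# an eq.-(12)-style weight on the solid edge-to-edge link.
g = {"edge1": {"cloud": 1.0, "edge2": 1.5},
     "edge2": {"cloud": 1.0, "edge1": 1.5},
     "cloud": {"edge1": 1.0, "edge2": 1.0}}
print(shortest_path(g, "edge1", "edge2"))  # (['edge1', 'edge2'], 1.5)
```

With these toy weights, the direct edge-to-edge link (weight 1.5) beats the two-hop path through the cloud (weight 2.0), matching the intent that lightly loaded, well-provisioned links attract the mapping.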
VII. CONCLUSION

In this paper, we have first proposed a new deployment architecture to achieve edge-cloud collaboration in integrated radio and optical networks. Then, using the functional entities of the architecture, we have proposed the accurate traffic prediction algorithm TP-ECC, which is based on edge-cloud collaboration with the GRU model. We have further proposed a resource allocation scheme, ERAS, based on load balancing theory. Their performance has been demonstrated by experiments in an integrated radio and optical networks testbed. We have also evaluated the performance of the proposed algorithm for end-to-end services under heavy traffic load and compared it with other conventional resource allocation schemes. Numerical results have shown that with large terminal sets, TP-ECC is fully capable of improving the accuracy of traffic prediction by up to 9.5% compared with the methods without edge-cloud collaboration. Furthermore, ERAS can improve the resource utilization of the whole network while reducing the average queue delay and blocking probability.

In the future, we would like to study more complex resource allocation optimization technology and edge-cloud collaboration architecture, which can jointly mobilize the resources of cloud and edge nodes and further enhance the security, reliability, and accuracy of the fast-growing integrated radio and optical networks.

REFERENCES
[1] J. Yu, X. Liu, Y. Gao, and X. Shen, "3D channel tracking for UAV-satellite communications in space-air-ground integrated networks," IEEE J. Sel. Areas Commun., vol. 38, no. 12, pp. 2810–2823, Dec. 2020.
[2] X. Chen, Y. Bi, X. Chen, H. Zhao, N. Cheng, F. Li, and W. Cheng, "Dynamic service migration and request routing for microservice in multicell mobile-edge computing," IEEE Internet Things J., vol. 9, no. 15, pp. 13126–13143, Aug. 2022.
[3] H. Yang, X. Zhao, Q. Yao, A. Yu, J. Zhang, and Y. Ji, "Accurate fault location using deep neural evolution network in cloud data center interconnection," IEEE Trans. Cloud Comput., vol. 10, no. 2, pp. 1402–1412, Apr. 2022.
[4] M. Zhao, J. Yu, W. Li, D. Liu, S. Yao, W. Feng, C. She, and T. Quek, "Energy-aware task offloading and resource allocation for time-sensitive services in mobile edge computing systems," IEEE Trans. Veh. Technol., vol. 70, no. 10, pp. 10925–10940, Oct. 2021.
[5] H. Yang, J. Yuan, C. Li, G. Zhao, Z. Sun, Q. Yao, B. Bao, A. V. Vasilakos, and J. Zhang, "BrainIoT: Brain-like productive services provisioning with federated learning in industrial IoT," IEEE Internet Things J., vol. 9, no. 3, pp. 2014–2024, Feb. 2022.
[6] C. Ding, A. Zhou, X. Ma, N. Zhang, C.-H. Hsu, and S. Wang, "Towards diversified IoT services in mobile edge computing," IEEE Trans. Cloud Comput., early access, Sep. 3, 2021, doi: 10.1109/TCC.2021.3109385.
[7] A. Naouri, H. Wu, N. A. Nouri, S. Dhelim, and H. Ning, "A novel framework for mobile-edge computing by optimizing task offloading," IEEE Internet Things J., vol. 8, no. 16, pp. 13065–13076, Aug. 2021.
[8] C. Li, H. Yang, Q. Yao, Z. Sun, and J. Zhang, "High-precision edge-cloud collaboration with federated learning in edge optical network," in Proc. Opt. Fiber Commun. Conf. (OFC), 2021, pp. 1–3.
[9] H. Ke, H. Wang, W. Sun, and H. Sun, "Adaptive computation offloading policy for multi-access edge computing in heterogeneous wireless networks," IEEE Trans. Netw. Service Manage., vol. 19, no. 1, pp. 289–305, Mar. 2022.
[11] H. Yang, K. Zhan, B. Bao, Q. Yao, J. Zhang, and M. Cheriet, "Automatic guarantee scheme for intent-driven network slicing and reconfiguration," J. Netw. Comput. Appl., vol. 190, Sep. 2021, Art. no. 103163.
[12] R.-K. Shiu, Y.-W. Chen, P.-C. Peng, J. Chiu, Q. Zhou, T.-L. Chang, S. Shen, J.-W. Li, and G.-K. Chang, "Performance enhancement of optical comb based microwave photonic filter by machine learning technique," J. Lightw. Technol., vol. 38, no. 19, pp. 5302–5310, Oct. 1, 2020.
[13] X. Fu and C. Zhou, "Predicted affinity based virtual machine placement in cloud computing environments," IEEE Trans. Cloud Comput., vol. 8, no. 1, pp. 246–255, Jan. 2020.
[14] B. Bao, H. Yang, Y. Wan, Q. Yao, A. Yu, J. Zhang, B. C. Chatterjee, and E. Oki, "Node-oriented traffic prediction and scheduling based on graph convolutional network in metro optical networks," in Proc. Opt. Fiber Commun. Conf. (OFC), 2021, pp. 1–3.
[15] G. O. Perez, A. Ebrahimzadeh, M. Maier, J. A. Hernandez, D. L. Lopez, and M. F. Veiga, "Decentralized coordination of converged tactile internet and MEC services in H-CRAN fiber wireless networks," J. Lightw. Technol., vol. 38, no. 18, pp. 4935–4947, Sep. 15, 2020.
[16] Z. Ni, H. Chen, Z. Li, X. Wang, N. Yan, W. Liu, and F. Xia, "MSCET: A multi-scenario offloading schedule for biomedical data processing and analysis in cloud-edge-terminal collaborative vehicular networks," IEEE/ACM Trans. Comput. Biol. Bioinf., early access, Nov. 30, 2021, doi: 10.1109/TCBB.2021.3131177.
[17] S. Yin, Y. Chu, C. Yang, Z. Zhang, and S. Huang, "Load-adaptive energy-saving strategy based on matching game in edge-enhanced metro FiWi," Opt. Fiber Technol., vol. 68, Jan. 2022, Art. no. 102762.
[18] E. Ganesan, I.-S. Hwang, A. T. Liem, and M. S. Ab-Rahman, "SDN-enabled FiWi-IoT smart environment network traffic classification using supervised ML models," Photonics, vol. 8, no. 6, p. 201, Jun. 2021.
[19] T. Wu, P. Zhou, B. Wang, A. Li, X. Tang, Z. Xu, K. Chen, and X. Ding, "Joint traffic control and multi-channel reassignment for core backbone network in SDN-IoT: A multi-agent deep reinforcement learning approach," IEEE Trans. Netw. Sci. Eng., vol. 8, no. 1, pp. 231–245, Jan. 2021.
[20] C. Li, H. Yang, Z. Sun, Q. Yao, B. Bao, J. Zhang, and A. V. Vasilakos, "Federated hierarchical trust-based interaction scheme for cross-domain industrial IoT," IEEE Internet Things J., vol. 10, no. 1, pp. 447–457, Jan. 2023.
[21] Y. Li, X. Sun, H. Zhang, Z. Li, L. Qin, C. Sun, and Z. Ji, "Cellular traffic prediction via a deep multi-reservoir regression learning network for multi-access edge computing," IEEE Wireless Commun., vol. 28, no. 5, pp. 13–19, Oct. 2021.
[22] J. Bi, S. Li, H. Yuan, and M. Zhou, "Integrated deep learning method for workload and resource prediction in cloud systems," Neurocomputing, vol. 424, pp. 35–48, Feb. 2021.
[23] X. Song, Y. Guo, N. Li, and L. Zhang, "Online traffic flow prediction for edge computing-enhanced autonomous and connected vehicles," IEEE Trans. Veh. Technol., vol. 70, no. 3, pp. 2101–2111, Mar. 2021.
[24] H. Yang, A. Yu, J. Zhang, J. Nan, B. Bao, Q. Yao, and M. Cheriet, "Data-driven network slicing from core to RAN for 5G broadcasting services," IEEE Trans. Broadcast., vol. 67, no. 1, pp. 23–32, Mar. 2021.
[25] J. Chen and Y. Wang, "An adaptive short-term prediction algorithm for resource demands in cloud computing," IEEE Access, vol. 8, pp. 53915–53930, 2020.
[26] X. Yin, G. Wu, J. Wei, Y. Shen, H. Qi, and B. Yin, "Deep learning on traffic prediction: Methods, analysis and future directions," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 6, pp. 4927–4943, Jun. 2022, doi: 10.1109/TITS.2021.3054840.
[27] M. Chen, S. Huang, X. Fu, X. Liu, and J. He, "Statistical model checking-based evaluation and optimization for cloud workflow resource allocation," IEEE Trans. Cloud Comput., vol. 8, no. 2, pp. 443–458, Apr. 2020.
[28] W. Wei, H. Gu, K. Wang, J. Li, X. Zhang, and N. Wang, "Multi-dimensional resource allocation in distributed data centers using deep reinforcement learning," IEEE Trans. Netw. Service Manage., early access, Oct. 11, 2022, doi: 10.1109/TNSM.2022.3213575.
[29] R. Kumar and V. Tiwari, "Opt-ACM: An optimized load balancing based admission control mechanism for software defined hybrid wireless based IoT (SDHW-IoT) network," Comput. Netw., vol. 188, Apr. 2021, Art. no. 107888.
[30] Q. Yao, H. Yang, B. Bao, A. Yu, J. Zhang, and M. Cheriet, "Core and
[10] D. P. Isravel, S. Silas, and E. B. Rajsingh, ‘‘Centrality based congestion spectrum allocation based on association rules mining in spectrally and
detection using reinforcement learning approach for traffic engineering in spatially elastic optical networks,’’ IEEE Trans. Commun., vol. 69, no. 8,
hybrid SDN,’’ J. Netw. Syst. Manage., vol. 30, no. 1, pp. 1–22, Jan. 2022. pp. 5299–5311, Aug. 2021.
QIUYAN YAO (Member, IEEE) received the M.S. degree in computer science and technology from the Hebei University of Engineering, Handan, China, in 2015, and the Ph.D. degree from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2020. She is currently working at BUPT. Her research interests include AI-driven routing, spectrum assignment strategies in elastic optical networks, and space division multiplexing networks. She received the Best Paper Award at OECC/PSC '19.

LIN GUAN received the M.S. degree from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2021. Her research interests include artificial intelligence models, edge-cloud collaboration, and resource allocation.

HUI YANG (Senior Member, IEEE) is currently the Vice Dean and a Professor at the Beijing University of Posts and Telecommunications (BUPT). His research interests include SDN, AI, elastic optical networks, and fixed-mobile access networks. He has authored or coauthored 100 papers in prestigious journals and conferences, and he is the first author of more than 50 of them. He received the Best Paper Award at IWCMC '19/NCCA '15 and the Young Scientist Award at IEEE ICOCN '17. He was the General Chair of ISAI '16. He has served as a Guest Editor for the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS and an Associate Technical Editor for IEEE Communications Magazine.

MOHAMED CHERIET (Senior Member, IEEE) is currently a Full Professor with the Department of System Engineering, École de Technologie Supérieure, Montreal, QC, Canada. He has authored or coauthored more than 300 technical papers in renowned international journals and conferences, and has delivered more than 50 invited talks. He was a recipient of the 2016 IEEE Canada J. M. Ham Outstanding Engineering Educator Award, the 2013 ETS Research Excellence Prize, and the 2012 Queen Elizabeth Diamond Jubilee Medal. He is a fellow of IAPR, CAE, EIC, and EC.