Resource Allocation With Edge-Cloud Collaborative Traffic Prediction in Integrated Radio and Optical Networks


Received 1 December 2022, accepted 12 January 2023, date of publication 16 January 2023, date of current version 24 January 2023.

Digital Object Identifier 10.1109/ACCESS.2023.3237257

BOWEN BAO1, HUI YANG1 (Senior Member, IEEE), QIUYAN YAO1 (Member, IEEE),
LIN GUAN1, JIE ZHANG1 (Member, IEEE), AND MOHAMED CHERIET2 (Senior Member, IEEE)
1 State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
2 Department of System Engineering, École de technologie supérieure (ÉTS), University of Quebec, Montreal, QC H3C 1K3, Canada
Corresponding author: Hui Yang ([email protected])
This work was supported in part by NSFC Project under Grant 62271075 and Grant 62201088, and in part by the Fund of State Key
Laboratory (SKL) of Information Photonics and Optical Communications (IPOC) [Beijing University of Posts and Telecommunications
(BUPT)] under Grant IPOC2021ZT04 and Grant IPOC2020A004.

ABSTRACT By integrating communications in different domains, integrated radio and optical networks
can serve a wider range of applications and services. Integrated radio and optical network scenarios will
involve more weak-computation-ability network nodes, such as small-cell base stations. To pursue efficient
integrated radio and optical networks, more efficient ways to conduct transmission under the demand of edge
and cloud collaboration are required. The lack of forward-looking resource allocation may easily lead to a
waste of network resources without an expected return. Therefore, an efficient resource allocation scheme
needs to consider certain issues: 1) a comprehensive perspective of traffic prediction; 2) a release of pressure
on the transmission pipeline during the prediction process; and 3) a reduction of loss of edge nodes due to the
computation. In this paper, benefiting from machine learning, we propose a resource allocation scheme with edge-cloud collaborative traffic prediction (TP-ECC) in integrated radio and optical networks, where an efficient
resource allocation scheme (ERAS) is designed based on the prediction results with the gated recurrent
unit model. We maximize the utilization of limited resources to improve the awareness of network status.
We present three evaluation indicators and build a network architecture to evaluate our resource allocation
scheme. Through edge-cloud collaboration, our proposal can improve traffic prediction accuracy by 9.5%
compared with single-point traffic prediction, and resource utilization is also improved by edge-cloud
collaborative traffic prediction.

INDEX TERMS Integrated radio and optical networks, resource allocation, edge-cloud collaboration, traffic
prediction.

The associate editor coordinating the review of this manuscript and approving it for publication was Shadi Alawneh.

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

VOLUME 11, 2023

I. INTRODUCTION
The integrated radio and optical networks can serve diversified applications and services by introducing the Internet of Things (IoT) supporting technology, which can provide seamless interconnection among heterogeneous devices [1]. With the access of a large number of network devices, a high volume of data would be stored or processed at the edge of weak-computation-ability nodes in integrated radio and optical networks. The architecture of mobile edge computing (MEC), with a cloud platform and edge nodes, has become a new and attractive computing paradigm, which integrates the computing power of the cloud platform with the flexible tasks of edge nodes. It can support various computationally complex, delay-sensitive service applications, such as face recognition, natural language processing, and interactive games [2], [3], [4]. Therefore, the MEC architecture has become a typical networking mode for integrated radio and optical networks. However, the available resources in a single edge node (such as a small-cell base station) are very limited, which is still an important issue in this scenario [5]. Although there have been some works to upload computing tasks that exceed the

capacity of the edge node to the remote cloud, the total resource consumption in the system may be high due to the bandwidth occupation on the transmission pipeline [6], [7]. In other words, the limited resource of a single edge node severely degrades the performance of integrated radio and optical networks.

A reasonable resource allocation algorithm can achieve load balancing among edge nodes, which enables the resource-constrained edge nodes to help each other to realize more flexible sharing of workloads and resources [8]. In computing-intensive tasks, it can meet the heterogeneous needs of access terminals [9]. For resource allocation, traffic prediction can be regarded as a key first-hand operation that should not be skipped. Highly precise traffic prediction can guide operators not only to make flexible switch adjustments but also to form backup paths for burst traffic [10], which may further enable them to break the limitations of integrated radio and optical networks, so as to improve resource utilization and reduce the blocking rate as well as the average queue delay.

Machine learning has been applied to traffic prediction, including the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Hierarchical Temporal Memory (HTM) [11], [12], [13], which can iteratively predict the next traffic flow of the network in a time series. Although the topic has received great attention, most existing solutions adopt the simplifying assumption that the perception of network state is based only on the commanding perspective of the cloud platform, without taking into account those of edge nodes. Motivated by this, we consider and forecast traffic from the perspective of both cloud and edge nodes. Compared with the case where the neural network is executed in a single location, we propose an edge-cloud collaborative traffic prediction (TP-ECC), which is more promising in obtaining higher prediction accuracy. Besides, considering the limited computing capacity of edge nodes, a simple traffic prediction model is required to support the function, which also needs to support the rapid processing of large-scale datasets in cloud nodes. Thus, a gated recurrent unit (GRU) is adopted to forecast the traffic in the TP-ECC model, which has a simpler gate structure than the LSTM traffic prediction model [14]. With the designed edge-cloud collaboration system architecture, the TP-ECC module may improve the accuracy of network state perception and give an accurate forecast of edge node traffic.

Based on the proposed TP-ECC, we further propose a new efficient resource allocation scheme (ERAS) in integrated radio and optical networks. On the basis of node traffic prediction results, ERAS uses load balancing theory to allocate the resources among the edge nodes in order to carry end-to-end services. The proposed ERAS algorithm, together with the traffic prediction, can make continuous strategic adjustments based on the real-time network status, formulate an optimal strategy that meets the needs of users, and send the results to quickly execute the network configuration in the device. This optimized resource allocation method provides a new solution for service-oriented intelligent network optimization and configuration. We also design a control experiment in the simulation environment to demonstrate the effectiveness of the whole system architecture and strategy through data analysis. The performance evaluation with the simulation of ERAS shows that the TP-ECC module performs well in accuracy. Under its guidance, the performance of ERAS is generally better than that of previous works. The main contributions of this paper can be summarized in the following aspects.

1) A system architecture is set up to support edge-cloud collaboration in integrated radio and optical networks, which is composed of several modules arranged on the cloud platform and edge nodes. We discuss the function and workflow of each module in detail.

2) An edge-cloud collaboration based traffic prediction mechanism is proposed with the adoption of GRU, which can effectively improve the accuracy of traffic prediction, thereby contributing to the resource allocation scheme.

3) Based on the traffic prediction, an efficient resource allocation method is further proposed. The service reconfiguration and migration processes are triggered by the load balancing technology, and the transmission path with the minimum resource consumption is selected to achieve the purpose of resource saving in integrated radio and optical network scenarios.

The rest of the paper is organized as follows. A summary of the literature is provided in Section II. In Section III, we present the model with the edge and cloud architecture considered in this paper and define the relations among the involved nodes and neural networks. In Section IV, we explain how the edge-cloud collaboration traffic prediction takes place. Section V designs the derived resource allocation algorithm. Further, in Section VI, we present the dataset, define the metrics used for the evaluation, and evaluate the performance of our algorithm. Finally, Section VII concludes this manuscript.

II. RELATED WORKS
With the improvements in network infrastructure and the high demands of network users, the network scale continues expanding, and multiple service providers and network operators coexist, leading to the emergence of tidal flow effects. Although the tide peaks can be predicted as well as possible, they still may bring uncertainty to network operation and maintenance [15]. In order to solve this problem, a reconfigurable network supporting a software-defined network (SDN) has recently attracted much attention, as it can adapt to the service demand flow as much as possible [16], [17]. The general reconfiguration framework based on SDN technology consists of two parts: modeling and forecasting traffic demand flow, and using the prediction for proactive (offline) network optimization between predefined (reconfiguration) time points [18]. The overall goal is to find a resource allocation strategy that is most suitable for the future traffic demand of the network [19].


Generally, the previous works have mainly focused on optimizing the efficiency of the resource allocation scheme, without considering the uniformity and sustainability of the occupied resources [17], [18], [21]. In this case, although the resource allocation strategy can be solved in a very short time, it easily leads to a highly unbalanced allocation, and, especially after network reconfiguration, each edge node does not pay enough attention to the quality of service.

Although there have been various resource allocation methods using the results of traffic prediction, load balancing algorithms and their related service reconfiguration mechanisms have been the focus of network research in recent years. Specifically, load balancing resource allocation schemes have been studied in wireless networks, SDN IP networks, and transmission control protocol (TCP) networks [13], [15], [16], [27], [28], [29]. However, to the best of our knowledge, few load balancing schemes consider traffic prediction results based on edge-cloud collaboration in integrated radio and optical networks.

In addition, the existing machine learning-assisted traffic prediction is usually based on a unilateral perspective, either cloud or edge, without considering the complementary perspectives of the edge and the cloud [22], [23]. Prediction based on a unilateral perspective may lead to inappropriate resource allocation decisions. Specifically, a unified analysis in the cloud via uploading massive raw data not only wastes considerable transmission resources but also fails to provide a timely response for terminals. Conversely, the storage ability of edge nodes is insufficient to further improve the accuracy of prediction, which relies heavily on historical data [24].

Participation of both edge and cloud in traffic prediction has been considered in [25]. This work analyzed the characteristics of resource demand and load, and proposed an adaptive selection strategy and an error adjustment factor to select a better prediction algorithm based on a dynamic threshold. Additionally, a short-term forecast of resource demand on the cloud platform was developed. However, the prerequisite is that all access nodes need to regularly report a large amount of traffic data to the cloud platform for analysis. This step additionally occupies a large amount of transmission pipeline resources, which affects the regular performance of the network. Moreover, the authors of [26] disclosed a traffic forecasting approach for optical backbone network traffic scheduling optimization, which integrates RNN and GRU at the access node for long-term (1 hour in advance) traffic prediction. However, this proposal used several recursive short-term prediction processes (5 minutes each) to obtain the alternative effect of long-term traffic prediction, which inevitably led to cumulative errors and low reliability.

Motivated by the drawbacks above, the traffic prediction of edge-cloud collaboration with GRU is explicitly considered in this paper. The purpose is to break the boundary of resource sharing between cloud and edge nodes, so as to provide a more responsive and accurate resource allocation strategy.

III. SYSTEM MODEL

A. NETWORK ARCHITECTURE

FIGURE 1. The proposed system architecture in integrated radio and optical networks.

As shown in Fig. 1, the system of integrated radio and optical networks considered in this paper includes a cloud platform, at least one access (edge) node, and at least one terminal device, where an optical transmission network supports the transmission channel.

Cloud Platform refers to an abstraction of the underlying infrastructure of the integrated radio and optical networks, the resources and services of which may be obtained over a transmission network by components of the integrated radio and optical networks through network virtualization techniques.

Access (Edge) Node is a network node that provides mutual access between a wireless workstation and a wired local area network. The cloud platform and the access node are connected through the transmission network. The transmission network provides transparent transmission channels for various services [30]. The switching devices in the transmission network are referred to as transmission network nodes. Their function is to exchange data streams, including distributing and receiving data traffic, over the transmission channels through Ethernet ports. Therefore, an access node is connected to at least one of the transmission network nodes. The cloud platform is also connected to at least one transmission network node in the transmission network.

Terminal Device is a part of the physical layer of the integrated radio and optical network architecture. In an IoT system, for example, it may include a temperature and humidity sensor, a QR code tag, a radio frequency identification (RFID) tag, a reader-writer, a camera, a global positioning system (GPS) and other perception terminals. The terminal device may also be connected to at least one access node through a wireless channel [31].


B. STRUCTURE OF EDGE-CLOUD COLLABORATION
As shown in Fig. 1, an access node may include a data collecting module, a data clustering module, an access traffic prediction model, and an information uploading module.

Data Collecting Module collects the traffic data of the access node through NetFlow technology. The traffic data of the current access node, after being collected, is stored in a database in the format of a tuple, i.e., <date, time, duration, size of traffic, source node IP, destination node IP, source node port, destination node port, collection node ID>.

Data Clustering Module clusters the traffic data collected by the data collecting module according to the path information. Specifically, it determines whether the traffic data is access traffic data or network traffic data based on the source node IP and the destination node IP in the path information of the traffic data. In addition, if traffic data is generated between the access node and a transmission network node, or between access nodes, its path information needs to be recorded. However, if it is generated between a terminal device and the access node, the path information is directly marked as access traffic. For example, it can be recorded as NaN, without recording the IP address and port information of the source/destination node of the traffic, in order to save storage cost on edge nodes.

For easy understanding, we denote some frequently used definitions as symbols, which are summarized in Table 1.

TABLE 1. Mathematical definitions.

Access Traffic Prediction Model takes the access traffic data as input and outputs a prediction result r_{t+1}^{access,p} of the access traffic at the next moment. It adds the access traffic data to a training set and performs real-time training to update the access traffic prediction neural network.

Information Uploading Module uploads the prediction result r_{t+1}^{access,p} output by the access traffic prediction model, as well as the network traffic data, to the cloud platform. The volume of data to be forwarded is measured first, by comparing r_{t+1}^{access,p} with the traffic threshold h_t^p. When r_{t+1}^{access,p} < h_t^p, the information uploading module uploads r_{t+1}^{access,p} as well as the network traffic data to the cloud platform. When r_{t+1}^{access,p} >= h_t^p, the information uploading module first backs up the network traffic data locally, then uploads r_{t+1}^{access,p} and the size of the network traffic data s_t^p to the cloud platform. It should be noted that the traffic threshold h_t^p is determined and issued to each access node by the cloud platform according to (1), where M_p denotes the physical traffic limit for each access node and r_{t-1}^{access,p} denotes the prediction result of access traffic at the last moment t-1.

h_t^p = M_p - r_{t-1}^{access,p}    (1)

Furthermore, the information uploading module also uploads the network traffic data backed up locally to the cloud platform in chronological order, according to the size of the local data required to be uploaded to the cloud platform at the next moment, as issued by the cloud platform. Such an operation makes the best of the storage resources of the cloud platform by transferring the storage task of the access node without the heavy burden of forwarding data to the cloud platform. In this way, the limited edge computing resources of the access node can be spared, and the network operation and maintenance costs can be reduced.

The access node may further include a data cleaning module between the data collecting module and the data clustering module, and a data conversion module between the data clustering module and the access traffic prediction model. The former cleans incomplete and repeated data records, as well as records roaming to the local node, from the traffic data collected by the data collecting module. In this solution, the traffic data is sample-edited before being uploaded to the cloud platform, so that the burden on the transmission channel can be greatly reduced while effective information is preserved. The latter converts the access traffic data into the data format of a training set for the access traffic prediction model.

As shown in Fig. 1, a cloud platform may include a data-receiving module, a network traffic prediction model, and a traffic prediction module.

The data-receiving module receives the network traffic data reported by each access node, together with the prediction result of access traffic at the next moment, through the transmission network.

The network traffic prediction model takes the network traffic reported by each access node as input, and outputs the prediction result of the traffic to be forwarded by node p on port n, which can be denoted as r_{t+1}^{network,p,n}. Obviously, we have (2).

r_{t+1}^{network,p} = sum_n r_{t+1}^{network,p,n}    (2)

Further, the network traffic prediction model also regularly adds the network traffic data to the training set and performs real-time training in order to update itself.

The traffic prediction module determines and outputs the prediction result of traffic at the next moment for each access node, respectively, according to the prediction result r_{t+1}^{access,p} reported by each access node, the prediction

result r_{t+1}^{network,p} calculated by the cloud, and the data to be uploaded to the cloud platform at the next moment.

In the present design, the prediction result of traffic at the next moment for the access node p may be determined through the following equation.

r_{t+1}^p = r_{t+1}^{network,p} + r_{t+1}^{access,p}    (3)

The cloud platform may further include a traffic threshold determination module and a parameter issuing module. The former determines the traffic threshold for each access node corresponding to the next moment according to the prediction result of network traffic for each access node, respectively.

IV. TRAFFIC PREDICTION BASED ON EDGE-CLOUD COLLABORATION

A. PRINCIPLE OF NEURAL NETWORKS IN EDGE-CLOUD COLLABORATION
In this subsection, we illustrate how, in the integrated radio and optical networks with the edge-cloud architecture considered in this paper, the neural networks are deployed and enabled on cloud and edge nodes, respectively.

The access traffic collected by each edge node is denoted as a time series. For the neural network deployed at each edge node, the primary task is to take the access traffic series of its host node as a training set, and to predict the possible access traffic value at the next moment by time iterations.

Obviously, the prediction accuracy is supported by the amount of data, but this also requires excessive storage for historical data. Therefore, we appropriately shorten the time window of these neural networks. In other words, we truncate backpropagation through time, and hand over the tasks of storage, as well as of learning long-term traffic characteristics, to cloud nodes.

The neural network deployed at the cloud platform learns the traffic pattern of the entire network topology from a macro perspective. The pre-processed traffic information regularly uploaded by each edge node is used as a training set. The neural network iteratively acquires multiple other input features besides the traffic data, including the connection relationships between nodes, network macro events, and so on. This is a significant improvement over the weakness of the edge nodes, with their insufficient computing power and limited vision.

B. TRAFFIC PREDICTION BASED ON EDGE-CLOUD COLLABORATION WITH GRU MODEL
We present a traffic prediction strategy based on edge-cloud collaboration (TP-ECC) with the GRU model. To capture the features of traffic, we first construct the GRU model, which can be expressed as follows.

u_t = sigma(W_u [x_t, h_{t-1}] + b_u)    (4)
r_t = sigma(W_r [x_t, h_{t-1}] + b_r)    (5)
c_t = tanh(W_c [x_t, (r_t * h_{t-1})] + b_c)    (6)
h_t = u_t * h_{t-1} + (1 - u_t) * c_t    (7)

where u_t represents the update gate, which is used to control the degree to which the status information at the previous time is brought into the current status; r_t represents the reset gate, which is used to control the degree of ignoring the status information at the previous moment; c_t represents the memory content stored at time t; and h_{t-1} represents the historical state at time t-1. x_t and h_t represent the input and the output state at time t. W and b are the weights and biases in the GRU training process.

FIGURE 2. Procedure of TP-ECC.

The entire procedure is shown in Fig. 2. We elaborate on the traffic prediction process with the help of mathematical formulas, which can be broken down into the following steps.

Step 1, each access node collects traffic data, respectively. During the process of traffic prediction, the data collecting module of each edge node detects and records the raw data of the traffic flow sequence of access terminals in real time. Each access node may collect time series data as the traffic data through NetFlow technology.

Step 2, each access node clusters the collected traffic data according to its path information and classifies the collected traffic data into access traffic data and network traffic data.

Step 3, each access node inputs the access traffic data into the access traffic prediction model configured thereon, respectively, to obtain the prediction result of access traffic at the next moment r_{t+1}^{access,p} output by the traffic prediction model.


Step 4, each access node uploads the prediction result of access traffic at the next moment r_{t+1}^{access,p} and the network traffic data to the cloud platform, respectively.

Firstly, for a port n at an access node, the traffic threshold corresponding to the next moment is calculated according to the prediction result of the traffic r_{t+1}^{network,p,n} to be forwarded at the next moment, through the following equation.

h_{t+1}^{p,n} = m_0 - sum_i r_{t+1}^{network,p,i},  i in I    (8)

where m_0 is the physical maximum bearing traffic limit for the access node, and I is the set of ports having path dependencies with port n.

Secondly, the traffic threshold determination module calculates the traffic threshold for the access node p corresponding to the next moment by summing the thresholds of its ports, through the following equation, where N is the set of all ports of the access node p.

h_{t+1}^p = sum_n h_{t+1}^{p,n},  n in N    (9)

Step 5, the cloud platform receives the network traffic data and the prediction result of access traffic at the next moment reported by each access node through the transmission network. A macro traffic prediction that considers many more features is then performed in the cloud.

Meanwhile, the size of the local data required to be uploaded to the cloud platform at the next moment by each access node is determined by the parameter issuing module through the following method.

When h_{t+1}^p > 0, the size of the local data required to be uploaded to the cloud platform by the access node at the next moment may be set as (10).

k_{t+1}^p = s_t^p - s_{t-1}^p    (10)

When h_{t+1}^p <= 0, the size of the local data required to be uploaded to the cloud platform by the access node at the next moment may be set as k_{t+1}^p = 0. In this case, the parameter issuing module may further send alarm information to the access node p and establish a standby link, so as to divert the traffic accessing node p to other access nodes as much as possible.

Step 6, the cloud platform inputs the network traffic data into the network traffic prediction model to obtain the prediction result of network traffic for each access node at the next moment output by the network traffic prediction model. After that, the cloud platform adds the network traffic data to the training set of the network traffic prediction model and performs real-time training to update the network traffic prediction model.

At the same time, the size of the local data required to be uploaded to the cloud platform at the next moment by each access node and the traffic threshold for each access node corresponding to the next moment are issued to the corresponding access node, respectively.

Step 7, the cloud platform determines and outputs the prediction result of traffic for each access node according to the prediction result of access traffic r_{t+1}^{access,p}, the prediction result of network traffic r_{t+1}^{network,p}, and the size of the data required to be uploaded to the cloud platform at the next moment k_{t+1}^p.

The prediction result of traffic at the next moment for the access node p may be determined through the following equation.

r_{t+1}^p = r_{t+1}^{network,p} + r_{t+1}^{access,p} + k_{t+1}^p    (11)

As can be seen, TP-ECC addresses two important pain points of edge nodes, which are crucial to traffic prediction. First, macro network events such as link or node failures can set off network-wide flow fluctuations. Second, in a fixed topology, the influence of the connection relationships between nodes cannot be ignored when extracting traffic characteristics. The above problems have to, and can only, be considered with the perspective and capabilities of the cloud.

In TP-ECC, the traffic prediction method can predict a relatively long-term network traffic trend by utilizing the cloud platform, and a relatively short-term access traffic change by utilizing the access nodes. Further, by combining the relatively long-term network traffic trend with the relatively short-term access traffic change, the one-sidedness and limitation caused by a single position of a prediction module in the network and a single time granularity configuration can be avoided. Thus, the accuracy of traffic prediction for the system can be greatly improved. Furthermore, the access traffic prediction model and the network traffic prediction model have good cycle stability, with no significant change in performance after numerous tests and experiments.

V. EFFICIENT RESOURCE ALLOCATION SCHEME
In practical scenarios, the network transmission ability is restricted by its own hardware capacity, and the iteration and expansion of infrastructure require high economic costs. Considering the input-output ratio of hardware, many network operators and service providers are deterred. However, the limitation of infrastructure resources is often the main cause of service congestion. Therefore, how to reasonably allocate resources and provide bandwidth for carrying services under the premise of limited resources is the main goal of this resource allocation algorithm [32].

In this section, we propose an efficient resource allocation strategy that makes full use of the prediction results of TP-ECC. Based on the idea of shortest-path routing, ERAS introduces auxiliary-graph routing, which combines the routing process with real-time resources and simplifies the virtual network mapping process in integrated radio and optical networks. In order to describe the algorithm conveniently, we first establish a simplified schematic network topology and a set of service requests as an example, as shown in Fig. 3 (a).

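The threshold and upload-size rules in Steps 4-7 reduce to a few lines of arithmetic. The sketch below expresses Eqs. (8)-(11) directly; the data structures (a dict of per-port predictions, plain floats for data sizes) are illustrative assumptions, not the paper's implementation.

```python
def port_threshold(m0, r_net_ports, dependent_ports):
    """Eq. (8): h_{t+1}^{p,n} = m0 - sum over i in I of r_{t+1}^{network,p,i},
    where I is the set of ports with path dependencies on port n."""
    return m0 - sum(r_net_ports[i] for i in dependent_ports)

def node_threshold(port_thresholds):
    """Eq. (9): h_{t+1}^p = sum over all ports n of h_{t+1}^{p,n}."""
    return sum(port_thresholds)

def upload_size(h_next, s_t, s_prev):
    """Eq. (10): k_{t+1}^p = s_t^p - s_{t-1}^p when h_{t+1}^p > 0; otherwise 0
    (the parameter issuing module then raises an alarm and sets up a standby link)."""
    return s_t - s_prev if h_next > 0 else 0

def combined_prediction(r_network, r_access, k_next):
    """Eq. (11): r_{t+1}^p = r_{t+1}^{network,p} + r_{t+1}^{access,p} + k_{t+1}^p."""
    return r_network + r_access + k_next
```

For example, with per-port network predictions {0: 10, 1: 5, 2: 20} and m0 = 100, the threshold for a port depending on ports 0 and 2 is 70, and the final node prediction simply sums the three terms of Eq. (11).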

FIGURE 3. (a) Schematic diagram of business request setting (b) schematic diagram of the optical network of the data center (c) the auxiliary diagram of
proposed Efficient Resource Allocation Scheme (ERAS).

i,j
shown in Fig. 3(a). The yellow circle represents the virtual access node, which is the edge node in edge-cloud collaboration. The orange circle represents the virtual cloud platform, the red hexagon denotes the switching resources required by the virtual edge node, the red octagon is the application resources required by the virtual cloud platform for the perception of network status, and the blue quadrilateral indicates the bandwidth resources required by the access service request.

On the basis of the above network topology and service request definition, Fig. 3(b) shows the schematic diagram of the integrated radio and optical networks with the edge-cloud architecture, in which the yellow circle represents the physical access (edge) node, the orange circle is the cloud platform, the red hollow hexagon represents the remaining switching resources of the physical edge node, the red hollow octagon indicates the remaining application resources of the cloud platform, and the purple triangle represents the number of remaining ports of the physical edge node or cloud platform. The blue quadrilateral represents the remaining bandwidth resource of the optical link.

Fig. 3(c) shows the auxiliary diagram of the ERAS process. First, we judge in turn whether the remaining application resources of the cloud platforms corresponding to the optimal and suboptimal virtual cloud platforms meet the access service requirements. If they do, a double solid line connects them. If both cloud platforms lack resources, the mapping fails; otherwise, all physical edge nodes are traversed. If the remaining switching resources meet the needs of the virtual edge node, a dotted line connects them; otherwise, the links between that node and other nodes are regarded as open circuits.

After drawing the auxiliary diagram, ERAS supplements the weight value for each link. The weight of a dotted line is set to a maximum integer value, which is far larger than the total weight of all links in the topology. The weight of a solid line comprehensively considers the demand and residual relationship of nodes and links, as the equilibrium factor $\Delta$, which can be calculated by (12):

$$\Delta = \frac{B_{req}^{i,j}}{B_{res}^{i,j}} + \frac{R_{req}^{i}}{R_{res}^{i}} \cdot \frac{1}{R_{Numres}^{i}+1} \cdot \sqrt{\frac{R_{req}^{j}}{R_{res}^{j}} \cdot \frac{1}{R_{Numres}^{j}+1}} \qquad (12)$$

where $B_{req}^{i,j}$ denotes the bandwidth requirement of access services with i and j as source and destination nodes, $B_{res}^{i,j}$ denotes the remaining bandwidth of the optical link with i and j as the source and destination nodes, $R_{req}^{i}$ denotes the number of end-to-end service requests accessed from node i, and $R_{res}^{i}$ denotes the remaining bandwidth resources of node i that can be used for forwarding services after TP-ECC prediction.

It can be seen that the requested resource is proportional to the weight, and the residual resource is inversely proportional to it. This setting follows the idea that the more sufficient the residual resource is, the smaller the weight will be, which is advantageous in the routing process.

When a service can be mapped to the physical network with multiple routing options, the equilibrium factor of each link is calculated one by one, the path with the least sum of weights is selected, and the result is associated with the virtual node pair. In the case of severe resource constraints, preferentially routing traffic to less heavily loaded nodes maximizes the reduction of the network blocking rate and traffic delay.

Since the evaluation of the cloud platform has been completed in the preceding stage, the weight of a double solid line is set to 1 to minimize its impact on the shortest-path calculation. On the basis of the previous step, the auxiliary graph topology with updated weights is rerouted, and the shortest path is selected for mapping. The whole ERAS procedure is abstractly summarized in Algorithm 1.

In the stage of creating the auxiliary graph, the time complexity is O(m^2)+O(l), where m is the number of virtual cloud platforms and l is the number of links in the physical topology. In the stage of routing calculation, the shortest path is computed on the topology with updated weights, and the time complexity is O(n^3), so the overall time complexity of the algorithm is O(m^2)+O(l)+O(n^3).

VI. PERFORMANCE EVALUATION
A. EXPERIMENTAL SETUP
Experiments in this section are designed to demonstrate the traffic prediction accuracy of TP-ECC as well as the overall performance of ERAS.

Python is used to generate the underlying physical topology [34], in which 200 nodes are evenly distributed in four


domains, each domain contains a cloud platform, and the total content in the cloud platform is set to 800. The other switching nodes are divided into advanced switching nodes and middle- and low-end switching nodes; in the simulation, the difference between them is mainly reflected in the size of the switching capacity. The former's switching capacity is set to 1800, and the latter's is set to 400. In addition, the number of ports of the former is 128, and that of the latter is 10. The resources of the links connecting the above nodes are represented by free spectrum slots, and the total amount of link resources is set to 358 [35]. We generated traffic data with 4510 optical nodes in the State Key Laboratory in July 2019, comprising over 27,180,000 traffic flows and 3,000 queries, around 3.5 GB. The services in the network are composed of end-to-end services and networking services. The numbers of switching resources and application resources requested by nodes are randomly generated, the link resources requested by virtual links are randomly selected, the content of each service request is randomly selected, and the number of service nodes is set as a random integer between 5 and 10. The length of the service queue in a single simulation is set to 1000, and the average value of multiple simulation runs is taken as the simulation result.

Algorithm 1 Efficient Resource Allocation Scheme
Input: Virtualization service request RP_i(S, D, d, C_p);
Output: Virtualization resource allocation mapping result, Success / False;
1: for each working cycle do
2:   Initialize the virtualization mapping result R_ov as False.
3:   for each virtual cloud platform d_v in set D do
4:     if the remaining application resources of the corresponding cloud platform meet the demand, that is, Ar_{d_i}^p > A(d_v) then
5:       Use a double solid line to connect the corresponding cloud platform.
6:     end if
7:   end for
8:   for each virtual edge node d_v in set D do
9:     if the remaining switching resources of the corresponding node meet the demand, that is, Cr_{v_i}^p > C(v_v) then
10:      Use a dotted line to connect the corresponding nodes.
11:    else
12:      Remove the physical edge node.
13:    end if
14:   end for
15:   Add weight values for all links.
16:   Calculate the shortest path of the virtual network mapping on the basis of G_p = P(n_j, x*_{nk}) after the link weights are supplemented.
17:   Set the virtualization mapping result R_ov as True.
18: end for
19: return P(n_j, x*_{nk}), R_ov

B. ACCURACY OF TRAFFIC PREDICTION
Figure 4(a) shows the prediction results of TP-ECC and the ground-truth traffic series. The neural network APIs used in the TP-ECC algorithm are based on TensorFlow 1.2.1 in Python 2.6. It can be seen that the prediction is generally accurate, and some special phenomena can be further explained. First, the prediction results are not disturbed by instantaneous traffic reductions, so the future results remain reasonable. This is because the cloud's grasp of the overall trend prevents edge nodes from being over-sensitive to such abnormal reduction data. Second, early predictions are made precisely for accidental traffic spikes. The reason is that when the storage task is transferred to the cloud, the edge node has more resources to ensure the local prediction is refined and efficient.

FIGURE 4. Results on traffic prediction: (a) prediction of TP-ECC (b) comparison on SMAPE.

In Fig. 4(b), by analyzing the Symmetric Mean Absolute Percentage Error (SMAPE) of prediction, we compare the


performance of TP-ECC with the methods in which neural networks are deployed in different locations, including on the cloud or edge alone, or on both the cloud and edge but not coordinated. Obviously, the SMAPE of TP-ECC is the lowest in all test groups, which indicates that it has the best accuracy. Quantitatively, it is 8.1%, 9.5%, and 8.8% higher than the average accuracy of the other three methods. This numerically proves that the complementary characteristics of edge nodes and the cloud can improve each other through TP-ECC, which helps the optical network establish countermeasures against the traffic fluctuation introduced by large-scale access terminal sets in integrated radio and optical networks.

C. NETWORK PERFORMANCE OF ERAS SCHEME
In this section, we analyze the overall performance of ERAS by comparing it with three conventional resource allocation algorithms: the shortest routing algorithm, the scale constraint algorithm, and the high bandwidth virtualization algorithm. The evaluation is conducted from three aspects: traffic blocking rate, average queue delay, and resource utilization.

Figure 5(a) shows the relationship between the traffic blocking rate and the running time under the four algorithms. It can be seen that all four algorithms experience a very low blocking rate in the initial stage; the blocking rate then surges with the passage of time, and finally the growth gradually levels off, although at this stage different algorithms show different blocking rate performance. This is because at the beginning, network resources are sufficient, and the controller has more room to select the required resources for services. When more services flow in, some resources are occupied but not released, such as currently hot application resources or key switching nodes, resulting in the failure of some subsequent service mappings. In the later period, the occupation and release of network resources tend to be stable, and the blocking rate presents a corresponding trend. In the stable stage, for end-to-end traffic, the load balancing virtualization algorithm has a lower blocking rate, which is 6.91% lower than that of the shortest routing algorithm. The analysis shows that it can carry more traffic at the same time because of the optimal path selected after fully considering the resource occupation when calculating the routing.

FIGURE 5. Network performance: (a) blocking rate (b) average queue delay (c) resource utilization.

Then, the simulation data of average queue delay shown in Fig. 5(b) is analyzed. As a whole, no matter which algorithm is used, the average queue delay increases gradually with the increase of traffic intensity. This is because when resources become tight, the algorithm can only select the optimal path that meets the conditions as far as possible under the current resource occupation. However, in terms of stability, for end-to-end services, the shortest route virtualization algorithm selects the shortest path, which has more advantages in the routing calculation stage and in the number of information transmission nodes; its average delay is 15.61% lower than that of the ERAS algorithm, which increases with the increase of traffic intensity.

Finally, we observe the statistics of resource utilization in the network. As shown in Fig. 5(c), with the increase of service intensity, resource utilization first increases and then gradually stabilizes. In the stable stage, the load balancing virtualization algorithm presents a higher resource utilization, which is 8.64% higher than that of the shortest routing algorithm. To summarize, for end-to-end access services, the ERAS algorithm achieves a lower traffic blocking rate and higher resource utilization at the cost of delay.
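For reference, the SMAPE metric analyzed in Fig. 4(b) can be computed with the common textbook definition. The sketch below is an illustrative implementation with invented series values, not the authors' evaluation code:

```python
def smape(actual, predicted):
    """Symmetric Mean Absolute Percentage Error, in percent.

    Common definition: (100/n) * sum(2*|F_t - A_t| / (|A_t| + |F_t|)),
    bounded in [0, 200]; 0/0 terms are counted as 0.
    """
    total = 0.0
    for a, f in zip(actual, predicted):
        denom = abs(a) + abs(f)
        if denom:  # skip 0/0 terms
            total += 2.0 * abs(f - a) / denom
    return 100.0 * total / len(actual)

# Example: a short traffic series (e.g., Mb/s per slot) and its prediction.
actual = [10.0, 12.0, 9.0, 11.0]
predicted = [11.0, 12.0, 8.0, 11.0]
print(round(smape(actual, predicted), 2))  # -> 5.32
```

A lower SMAPE means a closer fit between the predicted and measured traffic series, which is why TP-ECC's curve being lowest in Fig. 4(b) indicates the best accuracy.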

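To make the equilibrium-factor weighting of (12) and the weighted shortest-path selection in Algorithm 1 concrete, the following minimal sketch builds a toy auxiliary graph and runs Dijkstra over equilibrium-factor link weights. All node and link resource values (node_res, edges) are hypothetical, and link_weight follows the reconstructed form of (12) rather than the authors' exact implementation:

```python
import heapq
import math

# Hypothetical per-node resources: (R_req requests, R_res residual, R_Numres free ports).
node_res = {
    "A": (4, 40, 8), "B": (6, 20, 4), "C": (2, 50, 10), "D": (5, 30, 6),
}

def link_weight(i, j, b_req, b_res):
    """Equilibrium factor of link (i, j), following the form of Eq. (12)."""
    ri_req, ri_res, ri_num = node_res[i]
    rj_req, rj_res, rj_num = node_res[j]
    return (b_req / b_res
            + (ri_req / ri_res) * (1.0 / (ri_num + 1))
            * math.sqrt((rj_req / rj_res) * (1.0 / (rj_num + 1))))

# Toy auxiliary graph: per edge, (neighbor, requested bandwidth, residual bandwidth).
edges = {
    "A": [("B", 10, 100), ("C", 10, 80)],
    "B": [("D", 10, 60)],
    "C": [("D", 10, 90)],
    "D": [],
}

def shortest_path(src, dst):
    """Dijkstra over equilibrium-factor weights, as in step 16 of Algorithm 1."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, b_req, b_res in edges[u]:
            nd = d + link_weight(u, v, b_req, b_res)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

path, cost = shortest_path("A", "D")
print(path)  # -> ['A', 'C', 'D']
```

In this toy instance, the path through the lightly loaded node C accumulates a smaller weight sum than the path through the heavily loaded node B, illustrating how the equilibrium factor steers traffic away from congested nodes.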

VII. CONCLUSION
In this paper, we have first proposed a new deployment architecture to achieve edge-cloud collaboration in integrated radio and optical networks. Then, using the functional entities of the architecture, we have proposed the accurate traffic prediction algorithm TP-ECC, which is based on edge-cloud collaboration with a GRU model. We have further proposed a resource allocation scheme, ERAS, based on load balancing theory. Their performance has been demonstrated by experiments on an integrated radio and optical network testbed. We have also evaluated the performance of the proposed algorithm for end-to-end services under heavy traffic load and compared it with other conventional resource allocation schemes. Numerical results have shown that with large terminal sets, TP-ECC is fully capable of improving the accuracy of traffic prediction by up to 9.5% compared with the methods without edge-cloud collaboration. Furthermore, ERAS can improve the resource utilization of the whole network while reducing the average queue delay and blocking probability.

In the future, we would like to study more complex resource allocation optimization technology and edge-cloud collaboration architecture, which can jointly mobilize the resources of cloud and edge nodes and further enhance the security, reliability, and accuracy of the fast-growing integrated radio and optical networks.

REFERENCES
[1] J. Yu, X. Liu, Y. Gao, and X. Shen, "3D channel tracking for UAV-satellite communications in space-air-ground integrated networks," IEEE J. Sel. Areas Commun., vol. 38, no. 12, pp. 2810–2823, Dec. 2020.
[2] X. Chen, Y. Bi, X. Chen, H. Zhao, N. Cheng, F. Li, and W. Cheng, "Dynamic service migration and request routing for microservice in multicell mobile-edge computing," IEEE Internet Things J., vol. 9, no. 15, pp. 13126–13143, Aug. 2022.
[3] H. Yang, X. Zhao, Q. Yao, A. Yu, J. Zhang, and Y. Ji, "Accurate fault location using deep neural evolution network in cloud data center interconnection," IEEE Trans. Cloud Comput., vol. 10, no. 2, pp. 1402–1412, Apr. 2022.
[4] M. Zhao, J. Yu, W. Li, D. Liu, S. Yao, W. Feng, C. She, and T. Quek, "Energy-aware task offloading and resource allocation for time-sensitive services in mobile edge computing systems," IEEE Trans. Veh. Technol., vol. 70, no. 10, pp. 10925–10940, Oct. 2021.
[5] H. Yang, J. Yuan, C. Li, G. Zhao, Z. Sun, Q. Yao, B. Bao, A. V. Vasilakos, and J. Zhang, "BrainIoT: Brain-like productive services provisioning with federated learning in industrial IoT," IEEE Internet Things J., vol. 9, no. 3, pp. 2014–2024, Feb. 2022.
[6] C. Ding, A. Zhou, X. Ma, N. Zhang, C.-H. Hsu, and S. Wang, "Towards diversified IoT services in mobile edge computing," IEEE Trans. Cloud Comput., early access, Sep. 3, 2021, doi: 10.1109/TCC.2021.3109385.
[7] A. Naouri, H. Wu, N. A. Nouri, S. Dhelim, and H. Ning, "A novel framework for mobile-edge computing by optimizing task offloading," IEEE Internet Things J., vol. 8, no. 16, pp. 13065–13076, Aug. 2021.
[8] C. Li, H. Yang, Q. Yao, Z. Sun, and J. Zhang, "High-precision edge-cloud collaboration with federated learning in edge optical network," in Proc. Opt. Fiber Commun. Conf. (OFC), 2021, pp. 1–3.
[9] H. Ke, H. Wang, W. Sun, and H. Sun, "Adaptive computation offloading policy for multi-access edge computing in heterogeneous wireless networks," IEEE Trans. Netw. Service Manage., vol. 19, no. 1, pp. 289–305, Mar. 2022.
[10] D. P. Isravel, S. Silas, and E. B. Rajsingh, "Centrality based congestion detection using reinforcement learning approach for traffic engineering in hybrid SDN," J. Netw. Syst. Manage., vol. 30, no. 1, pp. 1–22, Jan. 2022.
[11] H. Yang, K. Zhan, B. Bao, Q. Yao, J. Zhang, and M. Cheriet, "Automatic guarantee scheme for intent-driven network slicing and reconfiguration," J. Netw. Comput. Appl., vol. 190, Sep. 2021, Art. no. 103163.
[12] R.-K. Shiu, Y.-W. Chen, P.-C. Peng, J. Chiu, Q. Zhou, T.-L. Chang, S. Shen, J.-W. Li, and G.-K. Chang, "Performance enhancement of optical comb based microwave photonic filter by machine learning technique," J. Lightw. Technol., vol. 38, no. 19, pp. 5302–5310, Oct. 1, 2020.
[13] X. Fu and C. Zhou, "Predicted affinity based virtual machine placement in cloud computing environments," IEEE Trans. Cloud Comput., vol. 8, no. 1, pp. 246–255, Jan. 2020.
[14] B. Bao, H. Yang, Y. Wan, Q. Yao, A. Yu, J. Zhang, B. C. Chatterjee, and E. Oki, "Node-oriented traffic prediction and scheduling based on graph convolutional network in metro optical networks," in Proc. Opt. Fiber Commun. Conf. (OFC), 2021, pp. 1–3.
[15] G. O. Perez, A. Ebrahimzadeh, M. Maier, J. A. Hernandez, D. L. Lopez, and M. F. Veiga, "Decentralized coordination of converged tactile internet and MEC services in H-CRAN fiber wireless networks," J. Lightw. Technol., vol. 38, no. 18, pp. 4935–4947, Sep. 15, 2020.
[16] Z. Ni, H. Chen, Z. Li, X. Wang, N. Yan, W. Liu, and F. Xia, "MSCET: A multi-scenario offloading schedule for biomedical data processing and analysis in cloud-edge-terminal collaborative vehicular networks," IEEE/ACM Trans. Comput. Biol. Bioinf., early access, Nov. 30, 2021, doi: 10.1109/TCBB.2021.3131177.
[17] S. Yin, Y. Chu, C. Yang, Z. Zhang, and S. Huang, "Load-adaptive energy-saving strategy based on matching game in edge-enhanced metro FiWi," Opt. Fiber Technol., vol. 68, Jan. 2022, Art. no. 102762.
[18] E. Ganesan, I.-S. Hwang, A. T. Liem, and M. S. Ab-Rahman, "SDN-enabled FiWi-IoT smart environment network traffic classification using supervised ML models," Photonics, vol. 8, no. 6, p. 201, Jun. 2021.
[19] T. Wu, P. Zhou, B. Wang, A. Li, X. Tang, Z. Xu, K. Chen, and X. Ding, "Joint traffic control and multi-channel reassignment for core backbone network in SDN-IoT: A multi-agent deep reinforcement learning approach," IEEE Trans. Netw. Sci. Eng., vol. 8, no. 1, pp. 231–245, Jan. 2021.
[20] C. Li, H. Yang, Z. Sun, Q. Yao, B. Bao, J. Zhang, and A. V. Vasilakos, "Federated hierarchical trust-based interaction scheme for cross-domain industrial IoT," IEEE Internet Things J., vol. 10, no. 1, pp. 447–457, Jan. 2023.
[21] Y. Li, X. Sun, H. Zhang, Z. Li, L. Qin, C. Sun, and Z. Ji, "Cellular traffic prediction via a deep multi-reservoir regression learning network for multi-access edge computing," IEEE Wireless Commun., vol. 28, no. 5, pp. 13–19, Oct. 2021.
[22] J. Bi, S. Li, H. Yuan, and M. Zhou, "Integrated deep learning method for workload and resource prediction in cloud systems," Neurocomputing, vol. 424, pp. 35–48, Feb. 2021.
[23] X. Song, Y. Guo, N. Li, and L. Zhang, "Online traffic flow prediction for edge computing-enhanced autonomous and connected vehicles," IEEE Trans. Veh. Technol., vol. 70, no. 3, pp. 2101–2111, Mar. 2021.
[24] H. Yang, A. Yu, J. Zhang, J. Nan, B. Bao, Q. Yao, and M. Cheriet, "Data-driven network slicing from core to RAN for 5G broadcasting services," IEEE Trans. Broadcast., vol. 67, no. 1, pp. 23–32, Mar. 2021.
[25] J. Chen and Y. Wang, "An adaptive short-term prediction algorithm for resource demands in cloud computing," IEEE Access, vol. 8, pp. 53915–53930, 2020.
[26] X. Yin, G. Wu, J. Wei, Y. Shen, H. Qi, and B. Yin, "Deep learning on traffic prediction: Methods, analysis and future directions," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 6, pp. 4927–4943, Jun. 2022, doi: 10.1109/TITS.2021.3054840.
[27] M. Chen, S. Huang, X. Fu, X. Liu, and J. He, "Statistical model checking-based evaluation and optimization for cloud workflow resource allocation," IEEE Trans. Cloud Comput., vol. 8, no. 2, pp. 443–458, Apr. 2020.
[28] W. Wei, H. Gu, K. Wang, J. Li, X. Zhang, and N. Wang, "Multi-dimensional resource allocation in distributed data centers using deep reinforcement learning," IEEE Trans. Netw. Service Manage., early access, Oct. 11, 2022, doi: 10.1109/TNSM.2022.3213575.
[29] R. Kumar and V. Tiwari, "Opt-ACM: An optimized load balancing based admission control mechanism for software defined hybrid wireless based IoT (SDHW-IoT) network," Comput. Netw., vol. 188, Apr. 2021, Art. no. 107888.
[30] Q. Yao, H. Yang, B. Bao, A. Yu, J. Zhang, and M. Cheriet, "Core and spectrum allocation based on association rules mining in spectrally and spatially elastic optical networks," IEEE Trans. Commun., vol. 69, no. 8, pp. 5299–5311, Aug. 2021.


[31] H. Yang, B. Bao, C. Li, Q. Yao, A. Yu, J. Zhang, and Y. Ji, "Blockchain-enabled tripartite anonymous identification trusted service provisioning in industrial IoT," IEEE Internet Things J., vol. 9, no. 3, pp. 2419–2431, Feb. 2022.
[32] B. Bao, H. Yang, Q. Yao, A. Yu, B. C. Chatterjee, E. Oki, and J. Zhang, "SDFA: A service-driven fragmentation-aware resource allocation in elastic optical networks," IEEE Trans. Netw. Service Manage., vol. 19, no. 1, pp. 353–365, Mar. 2022.
[33] X. Long, J. Wu, and L. Chen, "Energy-efficient offloading in mobile edge computing with edge-cloud collaboration," in Proc. Int. Conf. Algorithms Archit. Parallel Process. Cham, Switzerland: Springer, 2018, pp. 460–475.
[34] Z. Sun, H. Yang, C. Li, Q. Yao, D. Wang, J. Zhang, and A. V. Vasilakos, "Cloud-edge collaboration in industrial Internet of Things: A joint offloading scheme based on resource prediction," IEEE Internet Things J., vol. 9, no. 18, pp. 17014–17025, Sep. 2022.
[35] H. Yang, Q. Yao, B. Bao, A. Yu, J. Zhang, and A. V. Vasilakos, "Multi-associated parameters aggregation-based routing and resources allocation in multi-core elastic optical networks," IEEE/ACM Trans. Netw., vol. 30, no. 5, pp. 2145–2157, Oct. 2022.

BOWEN BAO received the M.S. degree in computer science and technology from the Hebei University of Engineering, Handan, China, in 2019. He is currently pursuing the Ph.D. degree in information and communication engineering with the Beijing University of Posts and Telecommunications (BUPT), Beijing, China. His research interests include elastic optical networks, spectrum assignment and routing, fragmentation, distance-adaptive transmission, and physical layer impairments.

HUI YANG (Senior Member, IEEE) is currently the Vice Dean and a Professor at the Beijing University of Posts and Telecommunications (BUPT). His research interests include SDN, AI, elastic optical networks, and fixed-mobile access networks. He has authored or coauthored 100 papers in prestigious journals and conferences, and he is the first author of more than 50 of them. He received the Best Paper Award at IWCMC' 19/NCCA' 15 and the Young Scientist Award at IEEE ICOCN' 17. He was the General Chair of ISAI' 16. He has served as the Guest Editor for IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS and an Associate Technical Editor for IEEE Communications Magazine.

QIUYAN YAO (Member, IEEE) received the M.S. degree in computer science and technology from the Hebei University of Engineering, Handan, China, in 2015, and the Ph.D. degree from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2020. She is currently working at BUPT. Her research interests include AI driven routing, spectrum assignment strategy in elastic optical networks, and space division multiplexing networks. She received the Best Paper Award at OECC/PSC' 19.

LIN GUAN received the M.S. degree from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2021. Her research interests include the artificial intelligence model, edge-cloud collaboration, and resource allocation.

JIE ZHANG (Member, IEEE) is currently a Professor and the Dean of the Institute of Information Photonics and Optical Communications, BUPT. He is sponsored by more than ten projects of the Chinese government. He has published eight books and more than 100 articles. Seven patents have also been granted. He has served as a TPC Member for ACP 2009, PS 2009, and ONDM 2010. His research interests include optical transport networks and packet transport networks.

MOHAMED CHERIET (Senior Member, IEEE) is currently a Full Professor with the Department of System Engineering, École de technologie supérieure, Montreal, QC, Canada. He has authored or coauthored more than 300 technical papers in renowned international journals and conferences, and has delivered more than 50 invited talks. He was a recipient of the 2016 IEEE Canada J. M. Ham Outstanding Engineering Educator Award, the 2013 ETS Research Excellence Prize, and the 2012 Queen Elizabeth Diamond Jubilee Medal. He is a fellow of IAPR, CAE, EIC, and EC.
