A Survey On Mobile Edge Computing: The Communication Perspective


A Survey on Mobile Edge Computing:

The Communication Perspective

Prepared by: Majeed Othman
Supervisor: Dr. Sadoon Azizi
INTRODUCTION
A. Mobile Computing for 5G: From Clouds to Edges
B. Mobile Edge Computing Versus Mobile Cloud Computing
MEC COMPUTATION AND COMMUNICATION MODELS
A. Computation Task Models
B. Communication Models
C. Computation Models of Mobile Devices
D. Computation Models of MEC Servers
RESOURCE MANAGEMENT IN MEC SYSTEMS
A. Single-User MEC Systems

B. Multiuser MEC Systems

C. MEC Systems With Heterogeneous Servers

D. Challenges

ISSUES, CHALLENGES, AND FUTURE RESEARCH DIRECTIONS


A. Deployment of MEC Systems
B. Cache-Enabled MEC
C. Mobility Management for MEC
D. Green MEC
E. Security and Privacy Issues in MEC
INTRODUCTION

Cloud Computing has emerged as a new computing paradigm. Its vision is the centralization of computing, storage,
and network management in the Clouds, referring to data centers, backbone IP networks, and cellular core
networks.

A. Mobile Computing for 5G: From Clouds to Edges

The high-rate and highly reliable air interface makes it possible to run computing services of mobile devices at
remote cloud data centers, giving rise to the research area called Mobile Cloud Computing (MCC).

However, MCC has an inherent limitation, namely, the long propagation distance from the end user to
the remote cloud center, which results in excessively long latency for mobile applications.

MCC is thus not adequate for a wide range of emerging mobile applications that are latency-critical.
New network architectures are therefore being designed to better integrate the concept of Cloud Computing into
mobile networks. In 5G wireless systems, ultra-dense edge devices, including small-cell base stations (BSs),
wireless access points (APs), laptops, tablets, and smartphones, will be deployed, each having a computation
capacity comparable with that of a computer server a decade ago.
The evolution from 1G to 4G has been driven by the pursuit of increasingly higher wireless speeds to support the
transition from voice-centric to multimedia-centric traffic. The mission of 5G is different and much more complex,
namely, to support the explosive evolution of ICT and the Internet.

In terms of functions, 5G systems will support communications, computing, control and content delivery (4C).
In terms of applications, a wide range of new applications and services for 5G are emerging,
such as real-time online gaming, virtual reality (VR), and ultra-high-definition (UHD) video streaming, which
require unprecedentedly high access speed and low latency.

The past decade also saw the take-off of different visions of the next-generation Internet, including IoT, the Tactile
Internet (with millisecond latency), the Internet-of-Me, and social networks. In particular, Cisco predicted that about
50 billion IoT devices will need to be connected and served.
MEC
The concept of MEC was first proposed by the European Telecommunications Standards Institute
(ETSI) in 2014 and was defined as a new platform that "provides IT and cloud computing capabilities
within the Radio Access Network (RAN) in close proximity to mobile subscribers."

MEC refers to the use of BSs for offloading computation tasks from mobile devices

MEC is implemented based on a virtualized platform that leverages recent advancements in:
• network functions virtualization (NFV)
• information-centric networks (ICN)
• software-defined networks (SDN)

NFV enables a single edge device to provide computing services to multiple mobile devices by creating multiple
virtual machines (VMs) that simultaneously perform different tasks or operate different network functions.

ICN provides an alternative end-to-end service recognition paradigm for MEC, shifting from a host-centric to an
information-centric one for implementing context-aware computing
SDN allows MEC network administrators to manage services via function abstraction, achieving scalable and dynamic
computing
We will provide two examples to illustrate the basic principles of MEC.

The first example is a typical augmented-reality (AR) application. Among its components, the video source
and renderer should be executed locally, while the most computation-intensive components,
i.e., the tracker, mapper, and object recognizer, can be offloaded for cloud execution.

In the second example, while the image-acquisition component needs to be executed at the mobile device to
support the user interface, the other components, which involve complex computation such as signal-processing
and machine-learning (ML) algorithms, can be offloaded for cloud processing.
B. Mobile Edge Computing Versus Mobile Cloud Computing
Compared with MCC, MEC has the advantages of achieving lower latency, saving energy for mobile devices,
supporting context-aware computing, and enhancing privacy and security for mobile applications.

Low Latency: The latency for a mobile service is the aggregation of three components: propagation, computation, and
communication latency, depending on the propagation distance, computation capacity, and data rate, respectively.

First, the information-propagation distances for MEC are typically tens of meters for the cases of dense small-cell
networks or device-to-device (D2D) transmissions, and typically no longer than 1km for general cases. In contrast,
Cloud Computing requires transmissions from end users to nodes in core networks or data centers with distances
ranging from tens of kilometers to that across continents. This results in much shorter propagation delay for MEC than
that for MCC

Second, MCC requires the information to pass through several networks including the radio access network, backhaul
network and Internet, where traffic control, routing and other network-management operations can contribute to
excessive delay. With the communication constrained at the network edges, MEC is free from these issues

For instance, the edge cloud at a BS can have 10^2-10^4 times higher computation capability than the minimum
requirement (e.g., a CPU over 3.3 GHz, 8 GB RAM, and 70 GB of storage) for running Call of Duty 13. In general,
experiments have shown that the total latency for MCC is in the range of 30-100 ms.
Mobile Energy Savings: MEC stands out as a promising solution for prolonging the battery lives of IoT devices.
Specifically, computation-intensive tasks can be offloaded from IoT devices to edge devices so as to reduce their
energy consumption. Significant energy savings by computation offloading have been demonstrated in
experiments, e.g., the completion of up to 44 times more computation load for a multimedia application, or
an increase of battery life by 30-50% for different AR applications.

Context-Awareness: Another key feature that differentiates MEC from MCC is the ability of an MEC server to
leverage the proximity of edge devices to end users in order to track their real-time information, such as behaviors,
locations, and environments. Inference based on such information allows the delivery of context-aware services to
end users.

Privacy/Security Enhancement: The capability of enhancing the privacy and security of mobile applications is
another attractive benefit brought by MEC compared to MCC.
MEC COMPUTATION AND COMMUNICATION MODELS
A. Computation Task Models
There are various parameters that play critical roles in modeling the computation tasks, including latency, bandwidth
utilization, context awareness, generality, and scalability

In this subsection, we introduce two computation-task models popularly used in existing literature on MCC and
MEC, corresponding to binary and partial computation offloading, respectively.

1) Task Model for Binary Offloading: A highly integrated or relatively simple task cannot be partitioned and has to
be executed as a whole, either locally at the mobile device or offloaded to the MEC server; this is called binary offloading.

Such a task can be represented by the three-field notation A(L, τd, X), which contains the
task input-data size L (in bits), the completion deadline τd (in seconds),
and the computation workload/intensity X (in CPU cycles per bit).

The task A(L,τd,X) is required to be completed before a hard deadline τd. This model can also be generalized to
handle the soft deadline requirement which allows a small portion of tasks to be completed after τd [58]. In this
case, the number of CPU cycles needed to execute 1-bit of task input data is modeled as a random variable X
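
As a concrete illustration (a minimal sketch, not part of the survey), the three-field task notation and the hard-deadline check can be encoded as follows; the example numbers are made up:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Binary-offloading task A(L, tau_d, X)."""
    L: float      # input-data size in bits
    tau_d: float  # completion deadline in seconds
    X: float      # workload intensity in CPU cycles per bit

    def cycles(self) -> float:
        """Total CPU cycles needed to execute the whole task."""
        return self.L * self.X

    def meets_deadline(self, completion_time: float) -> bool:
        """Hard-deadline check: the task must finish no later than tau_d."""
        return completion_time <= self.tau_d

# Example: a 1-Mbit task, 50 ms deadline, 1000 cycles/bit
task = Task(L=1e6, tau_d=0.05, X=1000)
print(task.cycles())              # 1e9 CPU cycles
print(task.meets_deadline(0.03))  # True
```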
2) Task Models for Partial Offloading
A program can be partitioned into two parts, with one part executed at the mobile device and the other offloaded
for edge execution.

The simplest task model for partial offloading is the data-partition model, where the task-input bits are bit-wise
independent and can be arbitrarily divided into different groups and executed by different entities in MEC systems,
e.g., parallel execution at the mobiles and MEC server.

• First, the execution order of functions or routines cannot be arbitrarily chosen because the outputs of some
components are the inputs of others.

• Second, due to either software or hardware constraints, some functions or routines can be offloaded to the
server for remote execution, while others can only be executed locally, such as the image-display function.
One such model is called the task call graph. The graph is typically a directed acyclic graph (DAG), which is a
finite directed graph with no directed cycles. We shall denote it as G(V,E), where the set of vertices V
represents different procedures in the application and the set of edges E specifies their call dependencies.
There are three typical dependency models of sub-tasks (i.e., task components such as functions or routines),
namely sequential, parallel, and general dependency
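
As an illustration (the sub-task names and graphs below are hypothetical, not from the survey), a task call graph and a dependency-respecting execution order can be encoded as plain adjacency lists:

```python
# Task call graphs G(V, E) as adjacency lists: an edge u -> v means
# sub-task v needs the output of sub-task u.

# Sequential dependency: A -> B -> C
sequential = {"A": ["B"], "B": ["C"], "C": []}

# Parallel dependency: A fans out to B and C, which merge into D
parallel = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def topological_order(graph):
    """Return an execution order that respects all call dependencies."""
    indegree = {v: 0 for v in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    ready = [v for v, d in indegree.items() if d == 0]
    order = []
    while ready:
        u = ready.pop()
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
    return order

print(topological_order(parallel))  # e.g., ['A', 'C', 'B', 'D']
```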
B. Communication Models

In MCC, communication channels between the mobile devices and cloud servers are typically abstracted as bit pipes
with either constant rates or random rates with given distributions.

The scenario is different for MEC systems. Given small-scale edge clouds and targeting latency-critical
applications, reducing communication latency by designing a highly efficient air interface is the main design focus

The bit-pipe models mentioned above are insufficient, as they overlook some fundamental properties of wireless
propagation and are too simplified to allow the implementation of advanced communication techniques.

Wireless channels differ from their wired counterparts in the following key aspects:
1) Due to atmospheric ducting, as well as reflection and refraction from scattering objects in the environment (e.g.,
buildings, walls, and trees), there exists the well-known multipath fading in wireless channels.

2) The broadcast nature of wireless transmissions results in a signal being interfered by other signals occupying
the same spectrum

3) Spectrum shortage has been the main foe for very high-rate radio access
In MEC systems, communications are typically between APs and mobile devices with the possibility of direct D2D
communications.
The MEC servers are small-scale data centers deployed by the Cloud Computing/telecom operators, which can be
co-located with the wireless APs.
the wireless APs not only provide the wireless interface for the MEC servers, but also enable the access to the remote data
center through backhaul links

There exist different types of commercialized technologies for mobile communications, including near-field
communication (NFC), radio-frequency identification (RFID), Bluetooth, WiFi, and cellular technologies such as
long-term evolution (LTE). Besides, the 5G network will be realized by the evolution of LTE in combination with
new radio-access technologies.
C. Computation Models of Mobile Devices

The CPU of a mobile device is the primary engine for local computation. The CPU performance is controlled by
the CPU-cycle frequency fm (also known as the CPU clock speed). State-of-the-art mobile CPU architectures
adopt the advanced dynamic voltage and frequency scaling (DVFS) technique.

In practice, the value of fm is bounded by a maximum value, fCPU,max (the maximum CPU-cycle frequency).

Thus, the energy consumption of a CPU cycle is given by κ·fm^2, where κ is a constant related to the
hardware architecture.
For the computation task A(L, τd, X) executed at CPU clock speed fm, the local-execution energy consumption and latency can then be derived, as sketched below.
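
A minimal sketch of that derivation, assuming only the model stated above (total cycles = L·X, energy per cycle = κ·fm^2); the numbers are illustrative, not from the survey:

```python
def local_execution(L, X, f_m, kappa):
    """Local-execution latency and energy for task A(L, tau_d, X).

    L:     input-data size in bits
    X:     CPU cycles per bit
    f_m:   mobile CPU-cycle frequency (cycles/s)
    kappa: hardware-dependent coefficient (energy per cycle = kappa * f_m**2)
    """
    cycles = L * X
    latency = cycles / f_m                 # seconds
    energy = kappa * (f_m ** 2) * cycles   # joules
    return latency, energy

# Illustrative numbers only:
t, e = local_execution(L=1e6, X=1000, f_m=1e9, kappa=1e-28)
print(f"latency = {t:.3f} s, energy = {e:.3f} J")  # latency = 1.000 s, energy = 0.100 J
```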
D. Computation Models of MEC Servers

Two possible models are considered in the literature, corresponding to the deterministic and stochastic server-
computation latency

The deterministic model considers the exact server-computation latency for latency-sensitive applications and is
implemented using techniques such as VMs and DVFS. In one case, the MEC server allocates different VMs to
different mobile devices, allowing independent parallel computation. In another case, parallel computing via
virtualization is not feasible, and the server needs to process the computation workloads sequentially. Without loss
of generality, denote by k the processing order of a mobile device and call it mobile k. The total server-computation
latency for device k, including the queuing delay, denoted by Ts,k, can then be written as sketched below.
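
The expression itself did not survive the slide conversion. A plausible reconstruction, assuming a single server CPU of speed fs that processes the queued workloads w1, ..., wk (in CPU cycles) in order, is

T_{s,k} = \sum_{i=1}^{k} \frac{w_i}{f_s},

i.e., device k waits for the workloads of all devices scheduled before it (the queuing delay) plus its own execution time.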

For latency-tolerant applications, the average server computation time can be derived based on stochastic models
The energy consumption of an MEC server is jointly determined by the usage of the CPU, storage, memory, and network
interface

Two tractable models are widely used for the energy consumption of MEC servers.

One model is based on the DVFS technique described as follows.


Consider an MEC server that handles K computation tasks, where the k-th task is allocated wk CPU cycles executed
at CPU-cycle frequency fs,k. Hence, the total energy consumed by the CPU at the MEC server, denoted by Es, can be
expressed as sketched below.
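
The expression was likewise lost in conversion. A hedged reconstruction, consistent with the per-cycle DVFS energy model used for the mobile CPU (energy per cycle proportional to the square of the clock frequency), is

E_s = \sum_{k=1}^{K} \kappa_s \, w_k \, f_{s,k}^2,

where κs is the hardware-dependent energy coefficient of the server CPU.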
RESOURCE MANAGEMENT IN MEC SYSTEMS

The joint radio-and-computational resource management plays a pivotal role in realizing energy-efficient and low latency MEC.

A. Single-User MEC Systems


The existing works can be divided according to their task models, namely:
1. deterministic task model with binary offloading,
2. deterministic task model with partial offloading,
3. stochastic task model.

1) Deterministic Task Model With Binary Offloading:

General guidelines were developed for determining the offloading decision with the purpose of minimizing the
mobile-energy consumption and the computation latency.

Denote w as the amount of computation (in CPU cycles) for a task, fm as the CPU speed of the mobile device, d as
the input-data size, and fs as the CPU speed at the cloud server. The communication link was typically assumed to
have a fixed rate B. Offloading reduces the execution latency when applications require heavy computation and
have a small amount of input data, or when the cloud server is fast and the transmission rate is sufficiently high.

Further denote pm as the CPU power consumption at the mobile device, pt as the transmission power, and pi as the
power consumption at the device while the task is running at the server. Offloading the task helps save mobile
energy when the local-computation energy exceeds the sum of the transmission energy and the idling energy, i.e.,
applications with heavy computation and light communication should be offloaded (see the sketch below).
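
The two inequalities referenced above were lost in the slide conversion; the sketch below encodes the classical criteria that the text paraphrases, under the fixed-rate assumption, using the notation introduced above (the numbers are illustrative only):

```python
def offloading_reduces_latency(w, d, f_m, f_s, B):
    """Offloading shortens execution time when local computing time
    exceeds transmission time plus remote computing time."""
    return w / f_m > d / B + w / f_s

def offloading_saves_energy(w, d, f_m, f_s, B, p_m, p_t, p_i):
    """Offloading saves mobile energy when local-computation energy
    exceeds the transmission energy plus the energy spent while
    the device waits for the server to finish."""
    local_energy = p_m * (w / f_m)
    offload_energy = p_t * (d / B) + p_i * (w / f_s)
    return local_energy > offload_energy

# Heavy computation, light communication (illustrative numbers only):
print(offloading_reduces_latency(w=5e9, d=1e5, f_m=1e9, f_s=1e10, B=1e7))
print(offloading_saves_energy(w=5e9, d=1e5, f_m=1e9, f_s=1e10, B=1e7,
                              p_m=1.0, p_t=0.5, p_i=0.1))
```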

In practice, however, the data rates of wireless communications are not constant: they change with the time-varying
channel gains and also depend on the transmission power.

In addition, as the CPU power consumption increases super-linearly with the CPU-cycle frequency, the computation
energy consumption for mobile execution can be minimized using DVFS techniques
2) Deterministic Task Model With Partial Offloading:
The running of a relatively sophisticated mobile application can be decomposed into a set of smaller sub-tasks.
Program partitioning was considered, where the task-input data can be arbitrarily divided for local and remote
execution. Joint optimization of the offloading ratio, transmission power, and CPU-cycle frequency was performed
to minimize the mobile-energy consumption (or latency) subject to a latency (or energy-consumption) constraint.

Both the energy- and latency-minimization problems are non-convex, in contrast to the ones for binary offloading.
The former can be solved optimally with a variable-substitution technique, while a sub-optimal algorithm was
proposed for the latter. By leveraging the concept of load balancing between the mobile device and the server, a
heuristic program-partitioning algorithm was developed to minimize the execution latency, and a polynomial-time
approximate solution with guaranteed performance was proposed to maximize the energy savings achieved by
computation offloading. A minimal sketch of the data-partition formulation is given below.
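
This sketch is an illustrative formulation under the simplest assumptions (fixed rate B, fixed CPU speeds, parallel local and remote execution); the grid search merely stands in for the convex/variable-substitution techniques mentioned above, and all numbers are made up:

```python
def best_split(L, X, tau_d, f_m, f_s, B, kappa, p_t, steps=100):
    """Grid search over the offloaded fraction alpha in [0, 1].

    alpha*L bits are sent to the MEC server; (1-alpha)*L bits are computed
    locally in parallel.  Idle power during remote execution is ignored for
    simplicity.  Returns (alpha, mobile_energy) of the most energy-efficient
    split that meets the deadline tau_d, or None if no split is feasible.
    """
    best = None
    for i in range(steps + 1):
        alpha = i / steps
        t_local = (1 - alpha) * L * X / f_m
        t_offload = alpha * L / B + alpha * L * X / f_s  # transmit + remote compute
        latency = max(t_local, t_offload)                # parallel execution
        if latency > tau_d:
            continue
        energy = kappa * f_m**2 * (1 - alpha) * L * X + p_t * alpha * L / B
        if best is None or energy < best[1]:
            best = (alpha, energy)
    return best

# Illustrative numbers only:
print(best_split(L=1e6, X=1000, tau_d=0.5, f_m=1e9, f_s=1e10,
                 B=1e7, kappa=1e-28, p_t=0.5))
```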
3) Stochastic Task Model: Resource management policies have also been developed for MEC systems with stochastic
task models characterized by random task arrivals, where the arrived but not yet executed tasks join the queues in
task buffers

In [101], in order to minimize the mobile-energy consumption while keeping the proportion of executions violating
the deadline requirement below a threshold, a dynamic offloading algorithm based on the Lyapunov optimization
technique was proposed to determine the software components to offload from an application running at a mobile user.

A similar problem was studied in [102] based on the theory of Markov decision processes (MDP), where the states
of the local processing unit, the transmission unit, and the task-buffer queue length are controlled based on the channel state.

In [105], the energy-latency tradeoff in MEC systems with heterogeneous types of applications was investigated,
including non-offloadable workload, cloud-offloadable workload, and network traffic. A Lyapunov optimization-based
algorithm was proposed to jointly decide the offloading policy, task allocation, CPU clock speed, and selected network interface.
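
For intuition (a toy sketch, not the algorithms of [101]-[105]), a stochastic task buffer can be modeled as a queue, and a Lyapunov-style drift-plus-penalty rule trades energy against queue backlog through a tunable weight V; every parameter below is an illustrative assumption:

```python
import random

def drift_plus_penalty_offloading(T=1000, V=10.0, arrival_rate=0.5,
                                  local_rate=0.4, offload_rate=0.8,
                                  local_energy=1.0, offload_energy=0.6):
    """Toy Lyapunov-style controller for a single task buffer.

    Each slot, pick the action minimizing V * energy - Q * service, i.e.,
    spend energy only when the backlog Q has grown large enough.
    """
    Q, total_energy = 0.0, 0.0
    for _ in range(T):
        # Candidate actions: (service per slot, energy per slot)
        actions = [(0.0, 0.0),                      # stay idle
                   (local_rate, local_energy),      # compute locally
                   (offload_rate, offload_energy)]  # offload to the edge
        service, energy = min(actions, key=lambda a: V * a[1] - Q * a[0])
        Q = max(Q - service, 0.0) + random.random() * arrival_rate
        total_energy += energy
    return Q, total_energy / T

random.seed(0)
print(drift_plus_penalty_offloading())  # (final backlog, average power)
```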
B. Multiuser MEC Systems

The research on multiuser MEC systems covers three main topics:

1. joint radio-and-computational resource allocation,
2. MEC server scheduling, and
3. multiuser cooperative edge computing.

1) Joint Radio-and-Computational Resource Allocation:


Compared with the central cloud, the MEC servers have much less computational resources. Therefore, one key issue in
designing a multiuser MEC system is how to allocate the finite radio-and-computational resources to multiple mobiles
for achieving a system-level objective, e.g., the minimum sum mobile-energy consumption. Both centralized and
distributed resource allocation schemes have been considered.

In centralized resource allocation, the MEC server obtains all the mobile information, including the CSI and
computation requests, makes the resource-allocation decisions, and informs the mobile devices of these decisions.
In [82], mobile users time-share a single edge
server and have different computation workloads and local-computation capacities. A convex optimization problem was
formulated to minimize the sum mobile-energy consumption.
The key finding is that the optimal policy for controlling offloading data size and time allocation has a simple
threshold-based structure. Specifically, an offloading priority function was firstly derived according to mobile users’
channel conditions and local computing energy consumption. Then, the users with priorities above and below a given
threshold will perform full and minimum offloading (so as to meet a given computation deadline), respectively. This
result was also extended to the OFDMA-based MEC systems for designing a close-to-optimal computation offloading
policy.
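
The exact priority function in [82] depends on the detailed system model; the sketch below only illustrates the threshold structure described above, with a made-up priority score that combines channel quality and the cost of local computing:

```python
def threshold_offloading(users, threshold):
    """Threshold-structured offloading policy (illustrative only).

    users: dict of per-user dicts with a channel gain and a local energy
    cost per bit.  Users whose priority exceeds the threshold offload
    fully; the rest offload only the minimum needed to meet the deadline.
    """
    decisions = {}
    for name, u in users.items():
        # Hypothetical priority: good channels and expensive local
        # computing both push a user towards offloading.
        priority = u["channel_gain"] * u["local_energy_per_bit"]
        decisions[name] = ("full offloading" if priority >= threshold
                           else "minimum offloading")
    return decisions

users = {
    "user1": {"channel_gain": 0.9, "local_energy_per_bit": 2.0},  # offloads
    "user2": {"channel_gain": 0.2, "local_energy_per_bit": 0.5},  # stays local
}
print(threshold_offloading(users, threshold=1.0))
```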

Another line of work jointly allocated the radio and computational resources in order to reduce the sum mobile-energy
consumption. The optimal solution shows that there exists an optimal one-to-one mapping between the transmission
power and the number of allocated CPU cycles for each mobile device. This work was further extended in subsequent studies.

[110] considered the multiuser video compression offloading in MEC and minimized the latency in local compression,
edge cloud compression and partial compression offloading scenarios. Besides, in order to minimize the energy and
delay cost for multiuser MEC systems where each user has multiple tasks, Chen et al. jointly optimized the offloading
decisions and the allocation of communication resource via a separable semidefinite relaxation approach

A related problem was studied in [107] under constraints of quality-of-service (QoS) requirements for all mobile
devices. The assumed fixed resource usage of each user results in a semi-MDP problem, which was transformed into
a linear programming (LP) model and solved efficiently.
In [108], assuming a stochastic task-arrival model, the energy-latency tradeoff in multiuser MEC systems was
investigated via a Lyapunov optimization-based online algorithm, which jointly manages the available radio and
computational resources. Centralized resource management for multiuser MEC systems based on the cloud radio
access network (C-RAN) has also been considered.

Another thrust of research targets distributed resource allocation for multiuser MEC systems which were
designed using game theory and decomposition techniques

The computation tasks were assumed to be either locally executed or fully offloaded via single and multiple interference
channels, respectively. With fixed mobile-transmission power, an integer optimization problem was formulated to
minimize the total energy consumption and offloading latency, which was proved to be NP-hard. Instead of designing a
centralized solution, the game-theoretic techniques were applied to develop a distributed algorithm that is able to
achieve a Nash equilibrium

Another setting is one where each mobile has multiple tasks and can offload computation to multiple APs connected
to a common edge server. For the offloading process, in addition to the transmission energy, this work also accounts
for the scanning energy of the APs and the fixed circuit power, and a distributed offloading policy was proposed.
2) MEC Server Scheduling

First, the arrival times of tasks from different users are in general asynchronous, so it is desirable for the
edge server, with its finite computational resources, to buffer and compute the tasks sequentially, which incurs
queuing delay.

Second, even for synchronized


task arrivals, the latency requirements can differ significantly over users running different
types of applications ranging from latency-sensitive to latency-tolerant applications. This fact
calls for the server scheduling to assign users different levels of priorities based on their
latency requirements.

Last, some computation tasks each consist of several dependent sub-tasks, such that the scheduling of these modules
must satisfy the task-dependency requirements.
As a result, a heuristic algorithm was proposed to solve the formulated mixed-integer problem. Specifically, it
first optimizes the computation partition for each user. Under these partitions, it will search the time intervals
violating the resource constraint and adjust them accordingly.

To address this challenge, a measure of ready time was defined for each sub-task as the earliest time when all of its
predecessors have been computed. Then, the offloading decision, mobile CPU-cycle frequency, and mobile-transmission
power were jointly optimized via a proposed distributed algorithm to reduce the sum mobile-energy consumption and
the computation latency (see the sketch below).
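
A minimal sketch of the ready-time notion only (not the joint optimization itself): given a task-call graph and the finish times of already-scheduled sub-tasks, the ready time of a sub-task is the latest finish time among its predecessors. The graph and numbers are hypothetical.

```python
def ready_times(graph, finish_time):
    """Earliest start times for the sub-tasks of a task-call graph.

    graph: adjacency list, edge u -> v means v depends on u's output.
    finish_time: finish time of every sub-task once scheduled.
    A sub-task is ready only after all of its predecessors have finished.
    """
    predecessors = {v: [] for v in graph}
    for u in graph:
        for v in graph[u]:
            predecessors[v].append(u)
    return {v: max((finish_time[u] for u in preds), default=0.0)
            for v, preds in predecessors.items()}

# Hypothetical 4-sub-task application: A feeds B and C, which feed D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
finish = {"A": 1.0, "B": 3.0, "C": 2.5, "D": 5.0}
print(ready_times(graph, finish))  # {'A': 0.0, 'B': 1.0, 'C': 1.0, 'D': 3.0}
```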

3) Multiuser Cooperative Edge Computing:

First, MEC servers with limited computational resources may be overloaded when they have to serve a large number
of offloading mobile users. In such cases, the burdens on the servers can be lightened via peer-to-peer mobile
cooperative computing.

Second, sharing the computational resources among the users can balance the uneven distribution of the
computation workloads and computation capabilities over users
C. MEC Systems With Heterogeneous Servers

To enable ubiquitous edge computing, heterogeneous MEC (Het-MEC) systems were proposed, comprising one
central cloud and multiple edge servers.
D. Challenges
The authors identify three critical research challenges for resource management in MEC that remain to be solved.

1) Two-Timescale Resource Management:

In prior works, wireless channels were assumed to remain static during the whole task-execution process for
simplicity. Nevertheless, this assumption may be unreasonable when the channel coherence time is much shorter
than the latency requirement.

For instance, at a carrier frequency of 2 GHz, the channel coherence time can be as small as 2.5 ms when the user
speed is 100 km/h. For some mobile applications, such as the MMORPG game PlaneShift, the acceptable response
time is 440 ms and the excellent latency is 120 ms.

In such scenarios, the task offloading process may be across multiple channel blocks, necessitating the
two-timescale resource management for MEC. This problem is very challenging even for a single-user
MEC system with deterministic task arrivals [79].
2) Online Task Partitioning:

To develop online task-partitioning policies, one should incorporate the channel statistics into the formulated
task-partitioning problem, which may easily be NP-hard even under a static channel.

In [97] and [140], approximate online task-partitioning algorithms were derived for applications with serial and
tree-topology task-call graphs, respectively, while solutions for general task models remain unexplored.
3) Large-Scale Optimization: The collaboration of multiple MEC servers allows their
resources to be jointly managed for serving a large number of mobile devices
simultaneously.

However, the increase of the network size renders the resource management a large-scale optimization problem
with respect to a large number of offloading decisions as well as radio-and-computational resource-allocation
variables.

Conventional centralized joint radio-and-computational resource management algorithms


require a huge amount of information and computation when applied to large-scale MEC
systems, which will inevitably incur a significant execution delay and may whittle away
the potential performance improvement

Although the techniques in [141] provide powerful tools for radio resource management, they cannot be directly
applied to optimize the computation-offloading decisions due to their combinatorial and non-convex nature, which
calls for new algorithmic techniques.
ISSUES, CHALLENGES, AND FUTURE RESEARCH DIRECTIONS
A. Deployment of MEC Systems

However, there is no formal definition of what an MEC server should be, and the server
locations in the system are not specified.

These invoke the site selection problems for MEC servers, which are significantly
different from the conventional BS site selection problems, as the optimal placement
of edge servers is coupled with the computational resource provisioning

This section discusses MEC deployment, including the site selection for MEC servers, the MEC network
architecture, and server-density planning.
1) Site Selection for MEC Servers: Selecting the sites for MEC infrastructures,
especially MEC servers,
is the first step towards building up the MEC system. To make the cost-effective server-
site selection, the system planners and administrators should account for two important
factors: site rentals and computation demands

Besides, the site selection for MEC servers depends on the computational resource-allocation strategy, and the
optimal computational resource allocation should take into account both site rentals and computation demands.
For example, an MEC server at a site with a high rental should be allocated large computational resources so that
it can serve a large number of users and achieve high revenue. Hence, a joint site-selection and
computational-resource-provisioning problem needs to be solved before deploying MEC systems.
2) MEC Network Architecture:
Future mobile computing networks are envisioned to consist of three layers, as shown in Fig. 8
(a 3-tier heterogeneous MEC system).


Tier-1 servers are located in close proximity to the
end users, such as at WiFi routers and small-cell BSs,
which are of relatively small computation capabilities.

Tier-2 servers are deployed at LTE BSs with moderate


computation capabilities.

Tier-3 servers are the existing Cloud Computing


infrastructures, such as data centers.
The MEC servers in different tiers have distinct computation and communication capabilities. Such hierarchical
MEC system structures not only preserve the advantage of efficient transmission offered by HetNets, but also
possess a strong ability to handle peak computation workloads by distributing them across different tiers.
However, the computation-capacity provisioning problem is highly challenging and remains unsolved, as it
should account for many different factors, such as the workload intensity, the communication cost between
different tiers, and the workload-distribution strategies.
Another thrust of research efforts focuses on exploiting the potential of the service-subscriber layer and utilizing
undedicated computational resources, e.g., laptops, smartphones, and vehicles, overlaid with dedicated edge nodes.
This paradigm is termed the ad-hoc mobile cloud in the literature.

However, it also brings difficulties in resource management and security issues due to its ad-hoc and self-organized
nature.
3) Server Density Planning:

The MEC infrastructure may be a combination of different types of edge servers, which provide various levels of
computation experience and incur different deployment costs. Hence, it is critical to determine the number of edge
nodes as well as the optimal combination of different types of MEC servers, given a deployment budget and
computation-demand statistics.

Conventionally, this problem can only be addressed by numerical simulations, which is time-consuming and has poor
scalability. Fortunately, owing to the recent development of stochastic geometry theory and its successful applications
in performance analysis for wireless networks [149]–[152], as well as the similarity between Het-MEC systems and
HetNets, it is feasible to conduct performance analysis for MEC systems using techniques from stochastic geometry
theory
Such analysis of MEC systems should address the following challenges

1) The timescales of computation and of the wireless channel coherence time may be different, which makes
existing results for wireless networks not readily applicable to MEC systems. One possible solution is to combine
Markov chain and stochastic geometry theories to capture the steady-state behavior of computations.

2) The computation offloading policy will affect the radio resource management
policy, which should be taken into consideration
3) The computation demands are normally non-uniformly distributed and clustered (see Fig. 9), prohibiting the use
of the homogeneous Poisson point process (HPPP) model for edge servers and service subscribers.

It thus calls for the investigation of more advanced point processes, e.g., the Ginibre α-determinantal point process
(DPP), to capture the clustering behaviors of edge nodes.
B. Cache-Enabled MEC

This technology has attracted extensive attention from both academia and industry due to its striking advantages
in reducing content-acquisition latency, as well as relieving the heavy overhead burden on the network backhaul.
1) Service Caching for MEC Resource Allocation:

Such a mismatch between resource and demand introduces a key challenge: how to allocate heterogeneous
resources for service caching. Note that similar problems, termed VM placement, have been investigated in
conventional Cloud Computing systems [160]-[163], where an efficient algorithm was proposed to minimize both
the computation latency and the service-placement transition cost. These works, however, cannot be directly
applied to design efficient service-caching policies for MEC systems.
To this end, two possible approaches are described as follows.

The first one is spatial popularity-driven service caching, referring to caching different combinations and
amounts of services in different MEC servers according to their specific locations and surrounding users’
common interests.

An alternative approach is temporal popularity-driven service caching. The main idea is similar to that
of the spatial counterpart, but it exploits the popularity information in the temporal domain, since the
computation requests also depend on the time period
2) Data Caching for MEC Data Analytics:

Since certain game-rendered videos, e.g., gaming scenes, can be reused by other players, caching these computation
results would not only significantly reduce the computation latency of the players with the same computation
request, but also ease the computation burden for edge servers. A similar idea was proposed in [165], which
investigated collaborative multi-bitrate video caching and processing in MEC.
C. Mobility Management for MEC
