Edge Computing
A SEMINAR REPORT ON
“EDGE COMPUTING”
In Partial Fulfillment of the Requirement for the Award of the Degree of
Bachelor of Engineering in Computer Science and Engineering
SUBMITTED BY
JANGLE SAGAR SURESH
2019-2020
SHREEYASH COLLEGE OF ENGINEERING & TECHNOLOGY,
AURANGABAD
DEPARTMENT OF COMPUTER SCIENCE &
ENGINEERING
CERTIFICATE
This is to certify that the seminar “Edge Computing” submitted by
is a bona fide work completed under the supervision and guidance of Prof. Swati Shinde,
and is submitted towards the partial fulfillment of the requirements for the award of the Bachelor of Engineering
(Computer Science & Engineering) Degree of Dr. Babasaheb Ambedkar Marathwada
University, Aurangabad.
"Edge Computing"
Submitted by
is approved for the degree of Bachelor of Engineering (Computer Science & Engineering)
Degree of Dr. Babasaheb Ambedkar Marathwada University, Aurangabad.
Examiner:
Place: SYCET,
Aurangabad
Date :
DECLARATION
I hereby declare that I have completed and written the dissertation entitled “Edge
Computing”. It has not previously been submitted as the basis for the award of any degree
or diploma or similar title by any other examining body or university.
Place: Aurangabad
Sagar Jangle (4607)
Date:
i
ACKNOWLEDGEMENT
Before I turn towards the seminar, I would like to add a few heartfelt words for the
people who have been part of this seminar by supporting and encouraging me.
In particular, I would like to take this opportunity to express my honor, respect, deep
gratitude and genuine regards to my Guide Prof. P. D. SATHYA, Professor, Computer
Science and Engineering Department for giving me all guidance required for my topic,
apart from being a constant source of inspiration and motivation.
I also extend my sincere thanks to all the faculty members of the Computer Science and
Engineering Department for their support and encouragement.
I owe special thanks to my parents for their moral support and warm wishes, and finally
I would express my appreciation to all friends for their support which helped me to
complete my seminar successfully.
SAGAR JANGLE
4607
ii
ABSTRACT
The proliferation of Internet of Things (IoT) and the success of rich cloud services have
pushed the horizon of a new computing paradigm, edge computing, which calls for
processing the data at the edge of the network.
Edge computing has the potential to address the concerns of response time
requirement, battery life constraint, bandwidth cost saving, as well as data safety and
privacy. In this paper, we introduce the definition of edge computing, followed by several
case studies, ranging from cloud offloading to smart home and city, as well as collaborative
edge to materialize the concept of edge computing.
iii
TABLE OF CONTENTS
Declaration i
Acknowledgement ii
Abstract iii
Chapter No. Contents Page No.
1. Introduction 1
2. Literature Survey 4
3. Scope and Objective 7
3.1 Scope 7
3.2 Objective 7
4. Design and Implementation
5. Conclusions 19
6. References 20
LIST OF FIGURES
Figure No. Description Page No
1.1 Cloud computing paradigm 2
ABBREVIATION
CHAPTER 1
INTRODUCTION
The Internet of Things (IoT) was first introduced to the community in 1999 for supply chain
management, and the concept of “making a computer sense information without the
aid of human intervention” was then widely adopted in other fields such as healthcare, homes,
the environment, and transport.
Now, with IoT, we are arriving in the post-cloud era, where a large
quantity of data will be generated by things that are immersed in our daily life, and a lot of
applications will also be deployed at the edge to consume these data. By 2019, data
produced by people, machines, and things was estimated by the Cisco
Global Cloud Index to reach 500 zettabytes, while global data centre IP traffic would only
reach 10.4 zettabytes by that time. By 2019, 45% of IoT-created data would be stored, processed,
analysed, and acted upon close to, or at, the edge of the network, and there will be 50 billion
things connected to the Internet by 2020, as predicted by the Cisco Internet Business Solutions Group.
Some IoT applications might require very short response time, some might involve
private data, and some might produce a large quantity of data which could be a heavy load for
networks. Cloud computing is not efficient enough to support these applications. With the push
from cloud services and pull from IoT, we envision that the edge of the network is changing
from data consumer to data producer as well as data consumer. In this paper, we attempt to
contribute the concept of edge computing. We start from the analysis of why we need edge
computing, then we give our definition and vision of edge computing. Several case studies like
cloud offloading, smart home and city as well as collaborative edge are introduced to further
explain edge computing in a detailed manner, followed by some challenges and opportunities
in programmability, naming, data abstraction, service management, privacy and security, as
well as optimization metrics that are worth future research.
In this section, we list some reasons why edge computing is more
efficient than cloud computing for some computing services, and then we give our definition
and understanding of edge computing.
Edge computing is expected to act as a strategic brain behind IoT. Identifying the role of edge
computing in IoT is the main research issue at present. Edge computing is utilized to reduce
the amount of data sent to the cloud and to decrease service access latency. The figure illustrates
the complementary role of edge and cloud computing in the IoT environment.
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING Page 2
EDGE COMPUTING
CHAPTER 2
LITERATURE SURVEY
2. Secure Edge Computing in IoT Systems: Review and Case Studies (2018)
The architectures for efficient and secure network system designs, such as Internet of
Things (IoT) and big data analytics, are growing at a faster pace than ever before. Edge
computing for an IoT system is data processing that is done at or near the collectors of
data in an IoT system. In this paper, we aim to briefly review the concepts, features,
security, applications of IoT empowered edge computing as well as its security aspects
in our data-driven world. We focus on clarifying different aspects that should be taken
into consideration while creating a scalable, reliable, secure and distributed edge
computing system. We also summarize the basic ideas regarding security risk
mitigation techniques. Then, we explore the presented challenges and opportunities in
the field of edge computing. Finally, we review two case studies, smart parking and
content delivery network (CDN), and analyse different methods in which IoT systems
can be used to carry out daily tasks.
Many early IoT devices could only collect and send data for analysis. However, the
increasing computing capacity of today’s devices allows them to perform complex
computations on-site, resulting in edge computing. Edge computing extends cloud
computing capabilities by bringing services close to the edge of a network and thus
supports a new variety of services and applications. In this work, we investigate,
highlight, and report on recent advances in edge computing technologies with respect
to measuring their impact on IoT. We establish a taxonomy of edge computing by
classifying and categorizing existing literature, and by doing so, we reveal the salient
and supportive features of different edge computing paradigms for IoT. Moreover, we
present the key requirements for the successful deployment of edge computing in IoT
and discuss a few indispensable scenarios of edge computing in IoT. Several open
research challenges are also outlined.
horizontal integration among IoT services. Finally, we present detailed service use-
cases to illustrate how the different protocols presented in the paper fit together to
deliver desired IoT services.
This becomes even more evident in light of the forthcoming 5G networks, which
are envisioned to support an amalgam of diverse applications and services with
heterogeneous performance requirements, including mission-critical IoT
communication, massive machine-type communication, and gigabit mobile
connectivity. Emergency service operators face an enormous challenge in order to
synchronize their model of operation with the 5G paradigm. This article studies the
challenges that next generation emergency services need to overcome in order to
fulfill the requirements for rich-content, real-time, location-specific
communications. The concept for next generation emergency communications as
described in the project EMYNOS is presented, along with a vision of how this
concept can fulfill the 5G requirements for ultra-reliable and ultra-low-latency
emergency communications.
CHAPTER 3
SCOPE AND OBJECTIVE
3.1 Scope
Push from Cloud Services: Putting all the computing tasks on the cloud has proved
to be an efficient way of data processing, since the computing power of the cloud outclasses
the capability of the things at the edge. However, compared with the fast-developing data
processing speed, the bandwidth of the network has come to a standstill. With the growing
quantity of data generated at the edge, the speed of data transportation is becoming the
bottleneck for the cloud-based computing paradigm.
Pull From IoT: Almost all kinds of electrical devices will become part of IoT, and they
will play the role of data producers as well as consumers, such as air quality sensors, LED
bars, streetlights and even an Internet-connected microwave oven. It is safe to infer that the
number of things at the edge of the network will grow into the billions within a few
years. Thus, raw data produced by them will be enormous, making conventional cloud
computing not efficient enough to handle all these data. This means most of the data
produced by IoT will never be transmitted to the cloud, instead it will be consumed at the
edge of the network.
Change from Data Consumer to Producer: In the cloud computing paradigm, the end
devices at the edge usually act as data consumers, for example, when watching a YouTube video
on a smartphone. However, people are nowadays also producing data from their mobile
devices. This change from data consumer to data producer/consumer requires more function
placement at the edge.
3.2 Objective
The different objectives for edge computing in the context of IoT are as follows:
Latency Minimization: High latency has become a crucial problem for IoT-based smart
applications. An alternative platform, such as edge computing, that can guarantee timely
delivery of service is required to fulfil the quality of service (QoS) requirements of delay-
sensitive IoT applications (e.g., smart transportation and online gaming).
Cost Optimization: The use of an adequate platform for enabling edge computing
necessitates extensive infrastructure deployment that involves substantial upfront
investment and operational expenses. Most of these expenses are related to network node
placement, which requires deliberate planning and optimization to minimize the overall cost.
Deployment of an optimal number of nodes at appropriate positions can significantly reduce
capital, and optimal arrangement of edge nodes can minimize operational costs.
Data Management: The large number of IoT devices at present are expected to generate
large amounts of data that need to be managed in a timely manner. Efficient and effective
data management mechanisms are desirable in edge computing. Transmission and
aggregation of IoT-generated data are important concerns in data management.
CHAPTER 4
DESIGN AND IMPLEMENTATION
This chapter presents a taxonomy of IoT-based edge computing that considers particular
features, such as wireless network technologies, computing nodes, computing paradigms,
service level objectives, major enablers, data types, applications, and attributes.
Network Technologies
IoT devices send collected data to a locally available edge server
for processing. These devices communicate with edge computing platforms through either
wireless networking technologies, such as WiFi and cellular networking (e.g., 3G, 4G, and
5G), or wired technologies, such as Ethernet. These network technologies vary in terms of
data rate, transmission range, and number of supported devices. Wireless networks provide
flexibility and mobility to users who execute their applications on the edge server.
However, wireless network technologies are not as reliable as wired technologies.
Computing Nodes
IoT devices have limited processing capabilities, which make them
unsuitable for computation-intensive tasks. However, resource-constrained IoT devices can
augment their capabilities by leveraging the resources of edge servers. The edge computing
paradigm relies on different computational devices to provide services to IoT users. These
computational devices are the core element of IoT-based edge computing. Computing nodes
include servers, base stations (BS), routers, and vehicles that can provide resources and various
services to IoT devices. The use of these devices is specific to the computing paradigm.
Computing Paradigms
Various computing paradigms are used in IoT to provide different
services depending on diverse application requirements. These paradigms can be categorized
into cloud computing, edge computing (i.e., MEC, fog, and cloudlet), mobile ad hoc cloud
(MAC), and hybrid platforms. Cloud computing is a centralized computing infrastructure that
aims to provide interruption-free access to powerful cloud servers. These servers can rapidly
process large amounts of data upon receipt from remote IoT devices and send back the results.
However, real-time delay-sensitive applications cannot afford long delays induced by a wide
area network. Continuous transmission of voluminous
raw data through unreliable wireless links may also be ineffective. By contrast, edge computing
is a decentralized computing platform that brings cloud computing capabilities near IoT
devices, that is, the network edge. An important type of edge computing platform is MEC,
which brings cloud computing capabilities to the edge of a cellular network [10]. Computational
and storage services in MEC are provided at the BS. Unlike MEC, fog computing employs
local fog nodes (i.e., local network devices such as a router or switch) available within a limited
geographic region to provide computational services. Fog computing is considered a premier
technology following the success of IoT. Cloudlet is another form of edge computing, in which
delay-sensitive and computation-intensive tasks from IoT devices are performed on a server
deployed in the local area network. Unlike cloud and edge computing platforms that rely on
infrastructure deployment, MAC capitalizes the shared resources of available mobile devices
within local proximity to process computation-intensive tasks. Cloud and edge computing are
used together in hybrid computing. Such infrastructure is usually adopted when we require the
large computing resources of cloud computing but cannot tolerate the latency of the cloud.
Variants of edge computing can be employed in such applications to overcome the latency
problems of cloud computing.
Edge computing is a distributed architecture, simply defined as the processing of data where
it is collected. It has emerged to minimize both bandwidth usage and response time in an
IoT system. An edge computing technique is required when latency must be optimized to
avoid network saturation, as well as when the data processing burden is high at a
centralized infrastructure. An extended version of edge computing is fog computing, an
architecture that makes use of edge devices to accomplish a considerable amount of
computation, storage, and communication locally, and which has input and output from the
real world, referred to as transduction. Fog nodes determine whether to process data from
several sources locally or send it out to the cloud.
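The local-versus-cloud decision a fog node makes can be sketched as a simple rule. The field names, capacity figure, and latency threshold below are illustrative assumptions, not part of any real framework:

```python
# Hypothetical sketch: a fog node deciding whether to process a reading
# locally or forward it to the cloud.

def handle_reading(reading, local_capacity_ops, cloud_latency_ms):
    """Process locally when the task fits the node's capacity and the
    application is delay-sensitive; otherwise forward to the cloud."""
    fits_locally = reading["ops_required"] <= local_capacity_ops
    delay_sensitive = reading["deadline_ms"] < cloud_latency_ms
    return "local" if fits_locally and delay_sensitive else "cloud"

decisions = [
    # small, delay-sensitive task: handled on the fog node
    handle_reading({"ops_required": 10, "deadline_ms": 20},
                   local_capacity_ops=100, cloud_latency_ms=80),
    # heavy, delay-tolerant task: sent to the cloud
    handle_reading({"ops_required": 500, "deadline_ms": 5000},
                   local_capacity_ops=100, cloud_latency_ms=80),
]
print(decisions)  # ['local', 'cloud']
```

A real deployment would replace the two-field rule with measured load and link conditions, but the decision structure is the same.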
Edge computing tasks resemble those people carry out daily. There are three
basic elements, namely input, processing, and output, as summarized below.
• Data sources: As the input, any endpoint which records and collects data from clients or
its environments is described as a data source.
• Artificial intelligence: As the processing function, it is the main facet after data are collected:
uncovering practical observations, locating patterns and trends, producing individualized
recommendations, and improving performance based on machine learning or data
analytics models.
• Actionable insights: The results from the previous stage succeed only when an individual
can act on them and make informed selections. Thus, within this stage, the insights appear in a
transparent manner in the form of control panels, visualizations, alerts and so on, which
facilitates communication between machines and humans, therefore generating a beneficial
feedback loop.
An organization should oversee and ensure privacy and security of their IoT framework.
Multiple terminologies used in privacy-preserving management are enumerated in the
following:
• Pseudonymity: a pseudonym is used as an ID to ensure that an individual can
utilize a resource without revealing their real identity. However,
the user can still be held accountable for usage.
• Unobservability: assuring that an individual can utilize a resource or service without
third parties being able to observe that the resource or service is being
used.
• Unlinkability: ensuring that a third party (e.g., an attacker) cannot identify whether two
objects are linked to each other or not.
• Anonymity: an individual may make use of a resource without revealing his identity.
• Confidentiality: assuring that only the data proprietor and the authorized individual can access
personal information in edge computing. It protects the data against access by unapproved
parties when the individual’s data is transferred or collected in the edge or core network
framework, as well as when the data is kept or handled in edge or cloud nodes.
• Integrity: assuring the proper and steady transmission of data to the accredited individual
without unauthorized modification of the data. Privacy of individuals can be impacted due
to the lack of integrity measures.
• Availability: ensuring the accredited party manages to access the edge services in any
region based on the individual’s needs. This also implies that an individual’s data held in edge
or cloud nodes, including in ciphertext format, can be handled under various practical
needs.
• Access control and authentication: access control constitutes a linking point for all privacy
and security demands through the access control technique. Authentication ensures that the
identity of an individual is accredited.
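The pseudonymity property above can be illustrated with a keyed hash: a third party cannot link the pseudonym to the real identity without the key, while the operator can still hold a user accountable by recomputing the mapping. The key and ID formats here are illustrative assumptions:

```python
import hmac
import hashlib

def pseudonym(real_id: str, secret_key: bytes) -> str:
    # HMAC-SHA256 keyed by the operator's secret: stable per user,
    # unlinkable to the real ID by anyone who lacks the key.
    return hmac.new(secret_key, real_id.encode(), hashlib.sha256).hexdigest()[:16]

key = b"operator-secret"          # assumed operator-held key
p1 = pseudonym("user-42", key)
p2 = pseudonym("user-42", key)
assert p1 == p2                   # same user -> same pseudonym (accountability)
assert p1 != pseudonym("user-43", key)  # distinct users -> distinct pseudonyms
print(p1)
```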
Two case studies are presented in this section to illustrate the edge computing vision
comprehensively. First, we analyse a smart parking system that lessens traffic when an
individual is navigating for a car parking space. Second, we explore utilizing the CDN to
minimize the latency of transmitted data as well as enhance Internet content availability.
The smart parking system is usually powered via RFID, ultrasonic detectors, and infrared
sensing units.
1) Flow of Execution: A common flow of execution for a smart parking system works as follows.
1) Log in to the smartphone parking application.
2) Choose the parking area near the customer’s location.
3) Browse between randomly available parking slots, then select a preferable slot.
4) Select the desired timeframe to park the vehicle.
5) Pay off the parking fee for a chosen timeframe.
6) When a customer parks the car via navigation and confirms his parking, the time
countdown starts.
7) On departure, the customer can pay any additional charge if he exceeds the allowed time.
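The booking steps above can be sketched in a few lines. The slot data, rate, and class layout are illustrative assumptions, not the design of any real parking application:

```python
from dataclasses import dataclass, field

@dataclass
class ParkingArea:
    slots: dict = field(default_factory=dict)  # slot id -> already booked?
    rate_per_hour: float = 2.0                 # assumed flat hourly rate

    def available(self):
        # Step 3: browse the currently free slots
        return [s for s, booked in self.slots.items() if not booked]

    def book(self, slot_id: str, hours: int) -> float:
        # Steps 4-5: reserve a slot for a timeframe and pay the fee upfront
        if self.slots.get(slot_id):
            raise ValueError("slot already booked")
        self.slots[slot_id] = True
        return hours * self.rate_per_hour

area = ParkingArea(slots={"A1": False, "A2": True})
print(area.available())        # ['A1']
fee = area.book("A1", hours=3)
print(fee)                     # 6.0
```

Navigation, countdown, and overstay charges (steps 6 and 7) would sit on top of the same booking state.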
2) Benefits: Smart parking may minimize the traffic caused by automobiles navigating for a
slot, can be useful for many people, and decreases vehicle emissions, making for an even more
environmentally friendly city. It can also boost accessibility for businesses and grocery
stores through enhanced optimization of available parking slots.
3) Future Scope:
• The system may be adjusted to integrate future self-driving automobiles and assure real-
time communication between vehicles such that an individual bears no
interactive burden with the system.
• More efficient parking algorithms could be established for the optimal consumption of
resources, such as the availability of slots and parking durations. For example, a deep
learning model can be trained for real-time space allocation.
Within a CDN, point-of-presence (PoP) servers play a significant role because the caching
system relies on the performance of these servers. The CDN architecture enhances the
reliability of the entire system: if one PoP server is down, traffic will be re-routed to other
PoPs. Individuals can be delivered services from their nearest PoP server.
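The nearest-PoP selection with failover described above can be sketched as follows. The PoP names and round-trip latencies are made-up values for illustration:

```python
# Round-trip latency (ms) from a client to each PoP; assumed measurements.
pops = {"mumbai": 12, "singapore": 48, "frankfurt": 110}
down = {"mumbai"}  # PoPs currently unreachable

def select_pop(pops, down):
    """Route the client to the lowest-latency PoP that is still up."""
    alive = {name: rtt for name, rtt in pops.items() if name not in down}
    if not alive:
        raise RuntimeError("no PoP reachable")
    return min(alive, key=alive.get)

# The nearest PoP (mumbai) is down, so traffic re-routes to the next nearest.
print(select_pop(pops, down))  # singapore
```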
2) Advantages: There are many advantages for consumers under the CDN architecture.
• Website security improvement: a CDN with the help of distributed denial of service
(DDoS) mitigation can enhance and maintain the website security from DDoS attacks that
can severely interrupt and degrade the service accessibility.
• Faster website page loading: a CDN can be utilized to provide static web content, which
decreases the webpage load time.
• Botnet and spam defence: a CDN can be set up with firewall policies which obstruct
unwanted spamming and botnet probing against the system.
• Enhancing global content availability: a CDN can manage massive traffic and hold up
against failure as compared with central services.
• Handling website traffic spikes: a CDN provides better load balancing between servers
and offers fast horizontal scaling.
We have described potential applications of edge computing in the preceding sections. To
realize the vision of edge computing, we argue that the systems and network communities
need to work together. In this section, we further summarize these challenges in detail
and bring forward some potential solutions and opportunities worth further research,
including programmability, naming, data abstraction, service management, privacy and
security, and optimization metrics.
Programmability
In cloud computing, users program their code and deploy it on the cloud. The cloud provider
is in charge of deciding where the computing is conducted in the cloud. Users have zero or partial
knowledge of how the application runs. This is one of the benefits of cloud computing: the
infrastructure is transparent to the user. Usually, the program is written in one programming
language and compiled for a certain target platform, since the program only runs in the cloud.
However, in edge computing, computation is offloaded from the cloud, and the edge nodes
are most likely heterogeneous platforms. In this case, the runtimes of these nodes differ from
each other, and the programmer faces huge difficulties in writing an application that may be
deployed in the edge computing paradigm. To address the programmability of edge computing,
we propose the concept of the computing stream, defined as a series of functions/computations
applied to the data along the data propagation path. The functions/computations could be entire
or partial functionalities of an application, and the computing can occur anywhere on the path
as long as the application defines where the computing should be conducted. The computing
stream is a software-defined computing flow such that data can be processed in a distributed
and efficient fashion on data-generating devices, edge nodes, and the cloud environment. As
defined in edge computing, a lot of computing can be done at the edge instead of the central
cloud. In this case, the computing stream can help the user determine what
functions/computations should be done and how the data is propagated after the computing
happens at the edge. The function/computing distribution metric could be latency-driven,
energy cost, TCO, or hardware/software-specified limitations; a detailed cost model is
discussed under Optimization Metrics below. By deploying a computing stream, we expect
that data is computed as close as possible to the data source and that the data transmission
cost can be reduced.
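A computing stream can be pictured as a chain of functions applied along the data path. The placement labels in the comments below are illustrative assumptions; a real system would decide placement using latency, energy, or cost metrics:

```python
def denoise(samples):
    # Could run on the sensing device itself: drop out-of-range readings.
    return [s for s in samples if 0 <= s <= 100]

def aggregate(samples):
    # Could run on an edge node near the source: reduce volume to one value.
    return sum(samples) / len(samples)

def archive(value, store):
    # Could run in the cloud: keep the already-reduced result.
    store.append(value)
    return store

# The portion of the stream placed before the cloud.
stream = [denoise, aggregate]

data = [21.5, 22.0, -999, 21.8]   # raw temperature samples, one is noise
for fn in stream:
    data = fn(data)               # data shrinks as it moves along the path

store = archive(data, store=[])
print(store)                      # one averaged value reaches the cloud
```

Because `denoise` and `aggregate` execute near the source, only a single number crosses the wide-area link instead of the full sample buffer, which is exactly the transmission saving the text argues for.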
Naming
In edge computing, one important assumption is that the number of things is tremendously
large. At the top of the edge nodes, there are a lot of applications running, and each application
has its own structure about how the service is provided. Similar to all computer
systems, the naming scheme in edge computing is very important for programming, addressing,
identification of things, and data communication. However, an efficient naming mechanism for
the edge computing paradigm has not yet been built and standardized. Edge practitioners
usually need to learn various communication and network protocols in order to communicate
with the heterogeneous things in their systems. The naming scheme for edge computing needs
to handle the mobility of things, highly dynamic network topologies, privacy and security
protection, as well as scalability for the tremendously large number of unreliable
things. Traditional naming mechanisms such as DNS and uniform resource identifiers serve
most current networks very well. However, they are not flexible enough to serve the
dynamic edge network, since most of the things at the edge can be highly mobile and
resource constrained. Moreover, for some resource-constrained things at the edge of the
network, an IP-based naming scheme could be too heavy to support, considering its complexity
and overhead. New naming mechanisms such as named data networking (NDN) [27] and
MobilityFirst [28] could also be applied to edge computing. NDN provides a hierarchically
structured name for content/data-centric networks; it is human friendly for service
management and provides good scalability for the edge. However, it would need an extra
proxy in order to fit into other communication protocols such as Bluetooth or ZigBee. Another
issue associated with NDN is security, since it is very hard to isolate things’ hardware
information from service providers. MobilityFirst separates names from network addresses in
order to provide better mobility support, and it would be very efficient if applied to edge
services where things are highly mobile. Nevertheless, a globally unique identifier (GUID)
needs to be used for naming in MobilityFirst, and this is not required in relatively fixed
information aggregation services at the edge of the network, such as a home environment.
Another disadvantage of MobilityFirst for the edge is the difficulty of service management,
since GUIDs are not human friendly.
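An NDN-style hierarchical name for a smart-home thing can be sketched as below; the hierarchy levels (home/room/device/attribute) are an assumption chosen for illustration, not a standardized scheme:

```python
def make_name(home: str, room: str, device: str, attribute: str) -> str:
    # Hierarchical, human-friendly name in the NDN style:
    # /<home>/<room>/<device>/<attribute>
    return "/".join(["", home, room, device, attribute])

name = make_name("home1", "kitchen", "thermometer2", "temperature")
print(name)  # /home1/kitchen/thermometer2/temperature

# Prefix matching lets a gateway route or filter requests for a whole room
# without maintaining a flat GUID table.
assert name.startswith("/home1/kitchen")
```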
Data Abstraction
Various applications can run on the edgeOS, consuming data or providing services by
communicating through the application programming interfaces (APIs) of the service
management layer. Data abstraction has been well discussed and researched in the wireless
sensor network and cloud computing paradigms. However, in edge computing, this issue
becomes more challenging.
With IoT, there would be a huge number of data generators in the network, and here we
take a smart home environment as an
example. In a smart home, almost all of the things will report data to the edgeOS, not to
mention the large number of things deployed all around the home. However, most of the
things at the edge of the network only periodically report sensed data to the gateway. For
example, a thermometer could report the temperature every minute, but this data will
most likely be consumed by the real user only a few times a day. Another example could
be a security camera in the home, which might keep recording and sending video to the
gateway; the data will just be stored in the database for a certain time with nobody
actually consuming it, and then be flushed by the latest video.
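One common way to abstract such rarely consumed periodic data at the gateway is report-on-change filtering: forward a reading only when it differs meaningfully from the last forwarded one. The threshold value here is an assumed parameter:

```python
def filter_reports(readings, threshold=0.5):
    """Forward a reading only when it moves at least `threshold` away
    from the last forwarded value; drop the redundant periodic reports."""
    forwarded, last = [], None
    for r in readings:
        if last is None or abs(r - last) >= threshold:
            forwarded.append(r)
            last = r
    return forwarded

# A thermometer reporting every minute; only the meaningful changes survive.
minute_readings = [21.0, 21.1, 21.2, 23.0, 23.1, 21.0]
print(filter_reports(minute_readings))  # [21.0, 23.0, 21.0]
```

Six per-minute reports collapse to three forwarded values, which is the kind of reduction that makes the thermometer example above tractable at scale.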
Service Management
In terms of service management at the edge of the network, we argue that the following four
fundamental features should be supported to guarantee a reliable system:
differentiation, extensibility, isolation, and reliability. Differentiation: With the fast growth of
IoT deployment, we expect multiple services to be deployed at the edge of the network,
such as in the smart home. These services will have different priorities. For example, critical
services such as things diagnosis and failure alarms should be processed earlier than ordinary
services. Health-related services, for example fall detection or heart failure detection, should
also have a higher priority than services such as entertainment.
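Differentiation of this kind is naturally expressed with a priority queue at the edge node. The service names and priority values below are assumptions for illustration (lower number means more critical):

```python
import heapq

# Assumed priority classes: failure alarms before health events,
# health events before entertainment.
PRIORITY = {"failure_alarm": 0, "fall_detection": 1, "entertainment": 5}

queue = []
# Requests arrive in this (unfavourable) order; the sequence number keeps
# the sort stable for equal priorities.
for seq, service in enumerate(["entertainment", "fall_detection", "failure_alarm"]):
    heapq.heappush(queue, (PRIORITY[service], seq, service))

# The edge node serves the most critical request first regardless of arrival order.
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # ['failure_alarm', 'fall_detection', 'entertainment']
```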
Optimization Metrics
In edge computing, we have multiple layers with different computation capabilities, so workload
allocation becomes a big issue. We need to decide which layer should handle the workload, or
how many tasks to assign to each part. There are multiple allocation strategies for completing a
workload; for instance, evenly distributing the workload over each layer, or completing as much
as possible at each layer. The extreme cases are fully operating on the endpoint or fully operating
in the cloud. To choose an optimal allocation strategy, we discuss several optimization metrics
in this section, including latency, bandwidth, energy, and cost.
Latency: Latency is one of the most important metrics for evaluating performance,
especially in interactive applications/services.
Bandwidth: From latency’s point of view, high bandwidth can reduce transmission time,
especially for large data.
Energy: Battery is the most precious resource for things at the edge of the network. For the
endpoint layer, offloading workload to the edge can be treated as an energy-saving method.
Cost: From the service providers’ perspective, e.g., YouTube, Amazon, etc., edge
computing offers them lower latency and energy consumption, potentially increased
throughput, and improved user experience.
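Choosing where to run a task can be sketched as minimizing a weighted cost over the metrics above. All latency/energy numbers and the weights are illustrative assumptions, not measured values:

```python
layers = {
    # layer name -> (latency_ms, energy_mj) to complete the task there
    "endpoint": (5, 900),   # no transmission, but the slow local CPU drains the battery
    "edge":     (20, 120),  # short network hop to a capable nearby server
    "cloud":    (150, 80),  # powerful servers, but a long wide-area round trip
}

def best_layer(layers, w_latency=1.0, w_energy=0.1):
    """Pick the layer with the lowest weighted latency + energy cost."""
    cost = {name: w_latency * lat + w_energy * en
            for name, (lat, en) in layers.items()}
    return min(cost, key=cost.get)

# edge: 20 + 12 = 32 beats endpoint (5 + 90 = 95) and cloud (150 + 8 = 158)
print(best_layer(layers))  # edge
```

Shifting the weights reproduces the two extreme strategies in the text: a very large `w_energy` pushes everything to the cloud, while a very large `w_latency` keeps delay-critical work on the endpoint or edge.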
CHAPTER 5
CONCLUSION
Nowadays, more and more services are being pushed from the cloud to the edge of the network,
because processing data at the edge can ensure shorter response times and better reliability.
Moreover, bandwidth could also be saved if a larger portion of data were handled at the
edge rather than uploaded to the cloud. The burgeoning of IoT and the ubiquity of mobile
devices have changed the role of the edge in the computing paradigm from data consumer to
data producer/consumer. It would be more efficient to process or massage data at the edge of
the network.
In this paper, we came up with our understanding of edge computing, with the
rationale that computing should happen at the proximity of data sources. In this article, we
investigated, highlighted, and reported recent premier advances in edge computing
technologies (e.g., fog computing, MEC, and cloudlets) with respect to measuring their
effect on IoT. Then, we categorized edge computing literature by devising a taxonomy,
which was used to uncover the premium features of edge computing that can be beneficial
to the IoT paradigm. We outlined a few key requirements for the deployment of edge
computing in IoT and discussed indispensable scenarios of edge computing in IoT.
Furthermore, several open research challenges to the successful deployment of edge
computing in IoT are identified and discussed.
We conclude that although the deployment of edge computing in IoT provides
numerous benefits, the convergence of these two computing paradigms brings about new
issues that should be resolved in the future.
CHAPTER 6
REFERENCES
[1] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge Computing: Vision and Challenges,”
IEEE, 2016.
[2] M. Alrowaily and Z. Lu, “Secure Edge Computing in IoT Systems: Review and Case
Studies,” Department of Electrical Engineering, University of South Florida, IEEE, 2018.
[3] N. Hassan, S. Gillani, E. Ahmed, I. Yaqoob, and M. Imran, “The Role of Edge Computing
in Internet of Things,” IEEE, 2018.
[4] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, IEEE, 2015.
[6] T. Taleb et al., “Mobile Edge Computing Potential in Making Cities Smarter,” IEEE
Commun. Mag., vol. 55, no. 3, Mar. 2017.
[7] M. Satyanarayanan et al., “Edge Analytics in the Internet of Things,” IEEE Pervasive
Computing, vol. 14, no. 2, 2015.