
Feasibility of Fog Computing

Blesson Varghese, Nan Wang, Dimitrios S. Nikolopoulos, Rajkumar Buyya


School of Electronics, Electrical Engg. and Computer Science, Queen’s University Belfast, UK
Email: {varghese, nwang03, d.nikolopoulos}@qub.ac.uk

Dept. of Computing and Information Systems, University of Melbourne, Australia
Email: [email protected]

arXiv:1701.05451v1 [cs.DC] 19 Jan 2017

Abstract—As billions of devices get connected to the Internet, it will not be sustainable to use the cloud as a centralised server. The way forward is to decentralise computation away from the cloud towards the edge of the network, closer to the user. This reduces the latency of communication between a user device and the cloud, and is the premise of ‘fog computing’ defined in this paper. The aim of this paper is to highlight the feasibility of fog computing and its benefits in improving Quality-of-Service and Quality-of-Experience. For an online game use-case, we found that the average response time for a user is improved by 20% when using the edge of the network in comparison to using a cloud-only model. It was also observed that the volume of traffic between the edge and the cloud server is reduced by over 90% for the use-case. The preliminary results highlight the potential of fog computing in achieving a sustainable computing model and the benefits of integrating the edge of the network into the computing ecosystem.

Fig. 1. A global view of executing applications in the current cloud paradigm where user devices are connected to the cloud. Blue dots show sample locations of cloud data centers and the yellow dots show user devices that make use of the cloud as a centralised server.

I. AN OVERVIEW

The landscape of parallel and distributed computing has significantly evolved over the last sixty years [1], [2], [3].
The 1950s saw the advent of mainframes, after which the
vector era dawned in the 1970s. The 1990s saw the rise of
the distributed computing or massively parallel processing era.
More recently, the many-core era has come to light. These
have led to different computing paradigms, supporting full-blown supercomputers, grid computing, cluster computing,
accelerator-based computing and cloud computing. Despite
this growth, there continues to be a significant need for more
computational capabilities to meet future challenges.
It is forecast that between 20-50 billion devices will be added to the internet by 2020, creating an economy of over $3 trillion^1,2. Consequently, 43 trillion gigabytes of data will be generated and will need to be processed in cloud data centers. Applications generating data on user devices, such as smartphones, tablets and wearables, currently use the cloud as a centralised server (as shown in Figure 1), but this will soon become an untenable computing model. This is simply because the frequency and latency of communication between user devices and geographically distant data centers will increase beyond that which can be handled by existing communication and computing infrastructure [4]. This will adversely affect Quality-of-Service (QoS) and Quality-of-Experience (QoE) [5].

1 http://www.gartner.com/newsroom/id/3165317
2 http://spectrum.ieee.org/tech-talk/telecom/internet/popular-internet-of-things-forecast-of-50-billion-devices-by-2020-is-outdated

Fig. 2. A global view of executing applications at the edge of the network in the fog computing model where user devices are connected to the cloud indirectly. The user devices are serviced by the edge nodes. Blue dots show sample locations of cloud data centers and the yellow dots show user devices that make use of the cloud through a variety of edge nodes indicated in purple.

Applications will need to process data closer to its source to reduce network traffic and efficiently deal with the data explosion. However, this may not be possible on user devices, since they have relatively restricted hardware resources. Hence, there is strong motivation to look beyond the cloud towards the edge of the network to harness computational capabilities that are currently untapped [6], [7]. For example, consider routers, mobile base stations and switches that route network traffic. The computational resources available on such nodes, referred to as ‘Edge Nodes’, that are situated closer to the user device than the data center can be employed.

We define the concept of distributed computing on the edge of the network in conjunction with the cloud, referred to as ‘Fog Computing’ [8], [9], [10]. This computing model is based on the premise that computational workloads can be executed on edge nodes situated in between the cloud and a host of user devices to reduce communication latencies and offer better QoS and QoE, as shown in Figure 2. In this paper, we refer to edge nodes as the nodes located at the edge of the network whose computational capabilities are harnessed. This model co-exists with cloud computing to complement the benefits offered by the cloud, but at the same time makes computing more feasible as the number of devices increases.

We differentiate this from ‘edge computing’ [4], [5], [11], in which the edge of the network, for example, nodes that are one hop away from a user device, is employed only for complementing the computing requirements of user devices. In fog computing, on the other hand, computational capabilities across the entire path taken by data may be harnessed, including the edge of the network. Both computing models use the edge node; fog computing integrates it with both the cloud and user devices, whereas edge computing incorporates it only for user devices.

In this paper, we provide a definition of fog computing and articulate its distinguishing characteristics. Further, we provide a view of the computing ecosystem that takes the computing nodes, execution models, workload deployment techniques and the marketplace into account. A location-aware online game use-case is presented to highlight the feasibility of fog computing. The average response time for a user is improved by 20% when compared to a cloud-only model. Further, we observed a 90% reduction in data traffic between the edge of the network and the cloud. The key result is that the fog computing model is validated.

The remainder of this paper is organised as follows. Section II defines fog computing and presents the characteristics that are considered in the fog computing model. Section III presents the computing ecosystem, including the nodes, workload execution, workload deployment and the fog marketplace. Section IV highlights experimental results obtained from comparing the cloud computing and fog computing models. Section V concludes this paper.

II. DEFINITION AND CHARACTERISTICS OF FOG COMPUTING

A commonly accepted definition for cloud computing was provided by the National Institute for Standards and Technology (NIST) in 2011, which was “... a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” This definition is complemented by definitions provided by IBM^3 and Gartner^4. The key concepts that are in view are on-demand services for users, rapid elasticity of resources and measurable services for transparency and billing [12], [13], [14].

3 https://www.ibm.com/cloud-computing/what-is-cloud-computing
4 http://www.gartner.com/it-glossary/cloud-computing/

A. Definition

We define fog computing as a model to complement the cloud for decentralising the concentration of computing resources (for example, servers, storage, applications and services) in data centers towards users for improving the quality of service and their experience.

In the fog computing model, computing resources already available on weak user devices or on nodes that are currently not used for general purpose computing may be used. Alternatively, additional computational resources may be added onto nodes one or a few hops away in the network to facilitate computing closer to the user device. This impacts latency, performance and quality of the service positively [15], [16]. This model in no way can replace the benefits of using the cloud, but optimises the performance of applications that are user-driven and communication intensive.

Consider, for example, the location-aware online game use-case that will be presented in Section IV. Typically, such a game would be hosted on a cloud server and the players connect to the server through devices, such as smartphones and tablets. Since the game is location-aware, the GPS coordinates will need to be constantly updated based on the players’ movement. This is communication intensive. The QoS may be affected given that the latency between a user device and a distant cloud server will be high. However, if the game server can be brought closer to the user, then latency and communication frequency can be reduced. This will improve the QoS and QoE. The fog computing model can also incorporate a wide variety of sensors into the network without the requirement of communicating with distant resources, thereby allowing low latency actuation efficiently [17], [18]. For example, sensor networks in smart cities generating large volumes of data can be processed closer to the source without transferring large amounts of data across the internet.

Another computing model that is sometimes synonymously used in the literature is edge computing [4], [5], [11]. We distinguish fog computing and edge computing in this paper. In edge computing, the edge of the network (for example, nodes that are one hop away from a user device) is employed for only facilitating the computing of user devices. In contrast, the aim in fog computing is to harness computing across the entire path taken by data, which may include the edge of the network closer to a user. The computational needs of user devices and edge nodes can be complemented by cloud-like resources that may be closer to the user, or alternatively workloads can be offloaded from cloud servers to the edge of the network. Both the edge and fog computing models complement each other and, given the infancy of both computing models, the distinctions are not obvious in the literature.
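The latency argument above can be made concrete with a toy model. The round-trip times and the fraction of requests an edge node can serve locally are illustrative, hypothetical numbers, not measurements from this paper:

```python
# Illustrative comparison of average response time in a cloud-only
# versus a fog model, where an edge node terminates a fraction of
# requests locally. All round-trip times (RTTs) are hypothetical.

def average_response_time(rtt_edge_ms, rtt_cloud_ms, edge_hit_ratio):
    """Expected response time when a share of requests is served at the
    edge and the remainder continues from the edge to the cloud."""
    return (edge_hit_ratio * rtt_edge_ms
            + (1 - edge_hit_ratio) * (rtt_edge_ms + rtt_cloud_ms))

cloud_only = average_response_time(0, 120, 0.0)  # every request reaches the cloud
fog = average_response_time(10, 120, 0.9)        # 90% of requests end at the edge

print(f"cloud-only: {cloud_only:.1f} ms, fog: {fog:.1f} ms")
```

Under these assumptions, the fog path wins whenever the extra hop to the edge costs less than the cloud round-trips it avoids.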


B. Characteristics

Cloud concepts, such as on-demand services for users, rapid elasticity of resources and measurable services for transparency, will need to be achieved in fog computing. The following characteristics specific to fog computing will need to be considered in addition:

1) Vertical Scaling: Cloud data centers enable on-demand resource scaling horizontally. Multiple Virtual Machines (VMs), for example, could be employed to meet the increasing requests made by user devices to a web server during peak hours (horizontal scaling is facilitated by the cloud). However, the ecosystem of fog computing will offer resource scaling vertically, whereby multiple hierarchical levels of computation offered by different edge nodes could be introduced to not only reduce the amount of traffic from user devices that reaches the cloud, but also reduce the latency of communication. Vertical scaling is more challenging since resources may not be as tightly coupled as servers in a data center and may not necessarily be under the same ownership.

2) Heterogeneity: On the cloud, virtual resources are usually made available across homogeneous physical machines. For example, a specific VM type provided by Amazon is mapped on to the same physical server. On the other hand, the fog computing ecosystem comprises heterogeneous nodes ranging from sensors to user devices to routers, mobile base stations and switches to large machines situated in data centers. These devices and nodes have CPUs with varying specifications and performance capabilities, including Digital Signal Processors (DSPs) or other accelerators, such as Graphics Processing Units (GPUs). Facilitating general purpose computing on such a variety of resources at both the horizontal and vertical scale is the vision of fog computing.

3) Visibility and Accessibility: Resources in the cloud are made publicly accessible and are hence visible to a remote user through a marketplace. The cloud marketplace is competitive and makes a wide range of offerings to users. In fog computing, a significantly larger number of nodes in the network that would not otherwise be visible to a user will need to become publicly accessible. Developing a marketplace given the heterogeneity of resources and different ownership will be challenging. Moreover, building consumer confidence in using fog enabled devices and nodes will require addressing a number of challenges, such as security and privacy, developing standards and benchmarks and articulating risks.

4) Volume: There is an increasing number of resources that are added to a cloud data center to offer services. With vertical scaling and heterogeneity as in fog computing, the number of resources that will be added and that will become visible in the network will be large. As previously indicated, billions of devices are expected to be included in the network. In addition to a vertical scale out, a horizontal scale out is inevitable.

Fig. 3. The fog computing ecosystem considered in Section III showing the user device layer, edge node layer and cloud layer. The user device layer comprises user devices that would traditionally communicate with the cloud. The edge node layer comprises multiple hierarchical levels of edge nodes. However, in the fog computing model, nodes close to the user are of particular interest since the aim is to bring computing near user devices where data is generated. The different nodes include traffic routing nodes (such as base stations, routers, switches and gateways), capability added nodes (such as traffic routing nodes with additional computational capabilities, or dedicated computational resources) and peer nodes (such as a collection of volunteered user devices as a dynamic cloud). Workloads are executed in an offloading (both from user device to the edge and from the cloud to the edge), aggregating and sharing models (or a hybrid combining the above, which is not shown) on edge nodes closer to the user.

III. THE FOG COMPUTING ECOSYSTEM

In the cloud-only computing model, the user devices at the edge of the network, such as smartphones, tablets and wearables, communicate with cloud servers via the internet as shown in Figure 1. Data from the devices are stored in the cloud. All communication is facilitated through the cloud, as if the devices were talking to a centralised server. Computing and storage resources are concentrated in the cloud data centers and user devices simply access these services. For example, consider a web application that is hosted on a server in a data center or multiple data centers. Users from around the world access the web application service using the internet. The cloud resource usage costs are borne by the company offering the web application and they are likely to generate revenue from users through subscription fees or by advertising.

However, in the fog computing model as shown in Figure 3, computing is not only concentrated in cloud data centers. Computation and even storage is brought closer to the user, thus reducing latencies due to communication overheads with remote cloud servers [19], [20], [21]. This model aims to achieve geographically distributed computing by integrating multiple heterogeneous nodes at the edge of the network that would traditionally not be employed for computing.

A. Computing Nodes

Typically, CPU-based servers are integrated to host VMs in the cloud. Public clouds, such as the Amazon Elastic Compute Cloud (EC2)^5 or the Google Compute Engine^6, offer VMs through dedicated servers that are located in data centers. Hence, multiple users can share the same physical machine. Private clouds, such as those owned by individual organisations, offer similar infrastructure but are likely to only execute workloads of users from within the organisation.

To deliver the fog computing vision, the following nodes will need to be integrated in the computing ecosystem:

1) Traffic routing nodes: through which the traffic of user devices is routed (those that would not have been traditionally employed for general purpose computing), such as routers, base stations and switches.

2) Capability added nodes: by extending the existing capabilities of traffic routing nodes with additional computational and storage hardware or by using dedicated compute nodes.

3) Peer nodes: which may be user devices or nodes that have spare computational cycles and are made available in the network as volunteers or in a marketplace on-demand.

5 https://aws.amazon.com/ec2/
6 https://cloud.google.com/compute/

Current research aims to deliver fog computing using private clouds. The obvious advantage in limiting the visibility of edge nodes and using proprietary architectures is bypassing the development of a public marketplace and its consequences. Our vision is that in the future, the fog computing ecosystem will incorporate both public and private clouds. This requires significant research and development to deliver a marketplace that makes edge nodes publicly visible, similar to public cloud VMs. Additionally, technological challenges in managing resources and enhancing security will need to be accounted for.

B. Workload Execution Models

Given a workload, the following execution models can be adopted on the fog ecosystem for maximising performance.

1) Offloading model: Workloads can be offloaded in the following two ways. Firstly, from user devices onto edge nodes to complement the computing capabilities of the device. For example, consider a face or object recognition application that may be running on a user device. This application may execute a parallel algorithm and may require a large number of computing cores to provide a quick response to the user. In such cases, the application may offload the workload from the device onto an edge node, for example a capability added node that comprises hardware accelerators or many cores.

Secondly, from cloud servers onto edge nodes so that computations can be performed closer to the users. Consider, for example, a location-aware online game to which users are connected from different geographic locations. If the game server is hosted in an Amazon data center, for example in N. Virginia, USA, then the response time for European players may be poor. The component of the game server that services players can be offloaded onto edge nodes located closer to the players to improve QoS and QoE for European players.

2) Aggregating model: Data streams from multiple devices in a given geographic area are routed through an edge node that performs computation, to either respond to the users or route the processed data to the cloud server for further processing. For example, consider a large network of sensors that track the level of air pollution in a smart city. The sensors may generate large volumes of data that do not need to be shifted to the cloud. Instead, edge nodes may aggregate the data from different sensors, either to filter or pre-process data, and then forward them further on to a more distant server.

3) Sharing model: Workloads relevant to user devices or edge nodes are shared between peers in the same or different hierarchical levels of the computing ecosystem. For example, consider a patient tracking use-case in a hospital ward. The patients may be supplied wearables or alternate trackers that communicate with a pilot device, such as a chief nurse’s smartphone used at work. Alternatively, the data from the trackers could be streamed in an aggregating model. Another example includes using compute intensive applications in a bus or train. Devices that have volunteered to share their resources could share the workload of a compute intensive application.

4) Hybrid model: Different components of complex workloads may be executed using a combination of the above strategies to optimise execution. Consider, for example, air pollution sensors in a city, which may have computing cores on them. When the level of pollutants in a specific area of the city is rising, the monitoring frequency may increase, resulting in larger volumes of data. This data could be filtered or pre-processed on peer nodes in the sharing model to keep up with the intensity at which data is generated by sensors in the pollution high areas. However, the overall sensor network may still follow the aggregating model considered above.

C. Workload Deployment Technologies

While conventional Operating Systems (OS) will work on large CPU nodes, micro OSs that are lightweight and portable may be suitable on edge nodes. Similar to the cloud, abstraction is key to the deployment of workloads on edge nodes [22]. Technologies that can provide abstraction are:

1) Containers: The need for lightweight abstraction that offers reduced boot up times and isolation is met by containers. Examples of containers include Linux containers [23] at the OS level and Docker [24] at the application level.

2) Virtual Machines (VMs): On larger and dedicated edge nodes that may have substantial computational resources, VMs provided in cloud data centers can be employed.

These technologies have been employed on cloud platforms and work best with homogeneous resources. The heterogeneity aspect of fog computing will need to be considered to accommodate a wider range of edge nodes.

D. The Marketplace

The public cloud marketplace has become highly competitive and offers computing as a utility by taking a variety of CPU, storage and communication metrics into account [25], [26]. For example, Amazon’s pricing of a VM is based on the number of virtual CPUs and memory allocated to the VM. To realise fog computing as a utility, a similar yet more complex marketplace will need to be developed. The economics of this marketplace will be based on:

1) Ownership: Typically, public cloud data centers are owned by large businesses. If traffic routing nodes were to be used as edge nodes, then their owners are likely to be telecommunication companies or governmental organisations that may have a global reach or are regional players (specific to the geographic location, for example, a local telecom operator). Distributed ownership will make it more challenging to obtain a unified marketplace operating on the same standards.

2) Pricing Models: On the edge there are three possible levels of communication, which are between the user devices and the edge node, one edge node and another edge node, and an edge node and a cloud server, which will need to be considered in a pricing model. In addition, ‘who pays what’ towards the bill has to be articulated, and a sustainable and transparent economic model will need to be derived. Moreover, the priority of applications executing on these nodes will have to be accounted for.

3) Customers: Given that there are multiple levels of communication when using an edge node, there are potentially two customers. The first is an application owner running the service on the cloud who wants to improve the quality of service for the application user. For example, in the online game use-case considered previously, the company owning the game can improve the QoS for customers in specific locations (such as Oxford Circus in London and Times Square in New York, which are often crowded) by hosting the game server on multiple edge node locations. This will significantly reduce the application latency and may satisfy a large customer base.

The second is the application user who could make use of an edge node to improve the QoE of a cloud service via fog computing. Consider, for example, the basic services of an application on the cloud that are currently offered for free. A user may choose to access the fog computing based service of the application for a subscription or on a pay-as-you-go basis to improve the user experience, which is achieved by improving the application latency.

For both the above, in addition to existing service agreements, there will be requirements to create agreements between the application owner, the edge node and the user, which can be transparently monitored within the marketplace.

E. Other Concepts to Consider

While there are a number of similarities with the cloud, fog computing will open a number of avenues that will make it different from the cloud. The following four concepts at the heart of fog computing will need to be approached differently than current implementations on the cloud:

1) Priority-based Multi-tenancy: In the cloud, multiple VMs owned by different users are co-located on the same physical server [27], [28]. These servers, unlike many edge nodes, are reserved for general purpose computing. Edge nodes, such as a mobile base station, for example, are used for receiving and transmitting mobile signals. The computing cores available on such nodes are designed and developed for the primary task of routing traffic. However, if these nodes are used in fog computing and if there is a risk of compromising the QoS of the primary service, then a priority needs to be assigned to the primary service when co-located with additional computational tasks. Such priorities are usually not required on dedicated cloud servers.

2) Complex Management: Managing a cloud computing environment requires the fulfilment of agreements between the provider and the user in the form of Service Level Agreements (SLAs) [29], [30]. This becomes complex in a multi-cloud environment [31], [32]. However, management in fog computing will be more complex given that edge nodes will need to be accessible through a marketplace. If a task were to be offloaded from a cloud server onto an edge node, for example, a mobile base station owned by a telecommunications company, then the cloud SLAs will need to take into account agreements with a third party. Moreover, the implications to the user will need to be articulated. The legalities of SLAs binding both the provider and the user in cloud computing are continuing to be articulated. Nevertheless, the inclusion of a third party offering services and the risk of computing on a third party node will need to be articulated. Moreover, if computations span across multiple edge nodes, then monitoring becomes a more challenging task.

3) Enhanced Security and Privacy: The key to computing remotely is security that needs to be guaranteed by a provider [33], [34]. In the cloud context, there is significant security risk related to data storage and hosting multiple users. Robust mechanisms are currently offered on the cloud to guarantee user and user data isolation. This becomes more complex in the fog computing ecosystem, given that not only are the above risks of concern, but also the security concerns around the traffic routed through nodes, such as routers [35], [36]. For example, a hacker could deploy malicious applications on an edge node, which in turn may exploit a vulnerability that may degrade the QoS of the router. Such threats may have a significant negative impact. Moreover, if user specific data needs to be temporarily stored on multiple edge locations to facilitate computing on the edge, then privacy issues along with security challenges will need to be addressed. Vulnerability studies that can affect the security and privacy of a user on both the vertical and horizontal scale will need to be freshly considered in light of facilitating computing on traffic routing nodes.

4) Lighter Benchmarking and Monitoring: Performance is measured on the cloud using a variety of techniques, such as benchmarking to facilitate the selection of resources that maximise the performance of an application, and periodic monitoring of the resources to ensure that user-defined service level objectives are achieved [37], [38], [39]. Existing techniques are suitable in the cloud context since they monitor nodes that are solely used for executing the workloads [40], [41], [42]. On edge nodes, however, monitoring will be more challenging, given the limited hardware availability. Secondly, benchmarking and monitoring will need to take into account the primary service, such as routing traffic, that cannot be compromised. Thirdly, communication between the edge node and user devices, the edge node and the cloud, and potential communication between different edge nodes will need to be considered. Fourthly, vertical scaling along multiple hierarchical levels and heterogeneous devices will need to be considered. These are not important considerations on the cloud, but become significant in the context of fog computing.
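As a concrete illustration of such lighter monitoring, the following sketch keeps a constant-memory summary of samples on the edge node and forwards only a compact digest to the cloud, instead of streaming every raw measurement. The class and field names are our own illustration, not part of any real monitoring API:

```python
# Minimal 'lighter monitoring' sketch: aggregate raw samples locally
# on the edge node and ship only a small summary per reporting interval.

class EdgeMetricSummary:
    """Constant-memory summary of response-time samples on an edge node."""

    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.minimum = float("inf")
        self.maximum = float("-inf")

    def record(self, sample_ms):
        # O(1) work per sample; no raw samples are retained.
        self.count += 1
        self.total += sample_ms
        self.minimum = min(self.minimum, sample_ms)
        self.maximum = max(self.maximum, sample_ms)

    def flush(self):
        """Return the digest destined for the cloud and reset local state."""
        summary = {
            "count": self.count,
            "mean_ms": self.total / self.count if self.count else 0.0,
            "min_ms": self.minimum,
            "max_ms": self.maximum,
        }
        self.__init__()  # reset for the next reporting interval
        return summary

monitor = EdgeMetricSummary()
for sample in [12.0, 15.5, 11.0, 30.2]:
    monitor.record(sample)
print(monitor.flush())  # one small message instead of four raw samples
```

The digest-per-interval design keeps the monitoring overhead on the edge node small enough not to compete with its primary service.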

IV. PRELIMINARY RESULTS


In this section, we present preliminary results indicating that fog computing is feasible and that using the edge of the network in conjunction with the cloud has potential benefits that can improve QoS and QoE. The use-case employed is an open-sourced version of a location-aware online game similar to Pokémon Go, named iPokeMon^7. The game features a virtual reality environment that can be played on a variety of devices, such as smartphones and tablets. The user locates, captures, battles and trains virtual reality creatures, named Pokémons, through the GPS capability of the device. The Pokémons are geographically distributed and a user aims to build a high value profile among their peers. The users may choose to walk or jog through a city to collect Pokémons.
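A minimal sketch of the location-aware lookup such a game performs, finding creatures within a radius of a player's GPS position. The equirectangular distance approximation and all names and coordinates are illustrative; this is not code from iPokeMon:

```python
# Location-aware lookup sketch: which creatures are near a player?
import math

EARTH_RADIUS_M = 6_371_000

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance between nearby points
    (equirectangular projection; adequate at game-map scales)."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return EARTH_RADIUS_M * math.hypot(x, y)

def nearby(player, creatures, radius_m=200):
    """Names of creatures within radius_m of the player's position."""
    lat, lon = player
    return [name for name, (clat, clon) in creatures.items()
            if distance_m(lat, lon, clat, clon) <= radius_m]

creatures = {"bulba": (54.5810, -5.9370), "far-away": (54.6500, -5.8000)}
print(nearby((54.5800, -5.9360), creatures))  # → ['bulba']
```

Each player movement changes the result of this query, which is why the game is communication intensive when the lookup runs on a distant cloud server.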
The current execution model, which is a cloud-only model,
is such that the game server is hosted on the public cloud and
Fig. 4. The experimental testbed used for implementing the fog computing-based iPokeMon game. The cloud server was hosted in the Amazon Dublin data center on a t2.micro virtual machine. The server on the edge of the network was hosted on the Odroid board, which was located in Belfast. Multiple game clients that were in close proximity to the edge node established connections with the edge server to play the game.

the users connect to the server. The server updates the user positions, and a global view of each user and the Pokémons is maintained by the server. For example, if the Amazon EC2 servers are employed, then the game may be hosted in the EC2 N. Virginia data center while a user in Belfast (over 3,500 miles away) communicates with the game server. This may be optimised by the application owner by hosting the server closer to Belfast, in the Dublin data center (which is nearly 100 miles away from the user). The original game server is known to have crashed multiple times during its launch due to heavy activity that was not catered for8,9.

We implemented a fog computing model for executing the iPokeMon game. The data packets sent from a smartphone to the game server pass through a traffic routing node, such as a mobile base station. We assumed that a mobile base station (the edge node) was in proximity of less than a kilometre to a set of iPokeMon users. Modern base stations have on-chip CPUs, for example the Cavium Octeon Fusion processors10. Such processors have between 2 and 6 CPU cores and 1 to 2 GB of RAM to support between 200 and 300 users. To represent such a base station we used an ODROID-XU+E board11, which has computing resources similar to those of a modern base station. The board has one ARM big.LITTLE architecture Exynos 5 Octa processor and 2 GB of DRAM memory, and runs Ubuntu 14.04 LTS.

We partitioned the game server to be hosted on both the Amazon EC2 Dublin data center12 in the Republic of Ireland and our edge node located in the Computer Science Building of Queen’s University Belfast13 in Northern Ireland. The cloud server was hosted on a t2.micro instance offered by Amazon, and the server on the edge node was hosted using Linux containers. Partitioning was performed such that the cloud server maintained a global view of the Pokémons, whereas the edge node server had a local view of the users that were connected to it. The edge node periodically updated the global view held by the cloud server. Resource management tasks in fog computing, such as the provisioning of edge nodes and the auto-scaling of the resources allocated to them, also need to be taken into account. The details of the fog computing-based implementation are beyond the scope of this paper, which presents preliminary results, and will be reported elsewhere.

7 https://github.com/Kjuly/iPokeMon
8 http://www.forbes.com/sites/davidthier/2016/07/07/pokemon-go-servers-seem-to-be-struggling/#588a88b64958
9 https://www.theguardian.com/technology/2016/jul/12/pokemon-go-australian-users-report-server-problems-due-to-high-demand
10 http://www.cavium.com/OCTEON-Fusion.html
11 http://www.hardkernel.com/
12 https://aws.amazon.com/about-aws/global-infrastructure/
13 http://www.qub.ac.uk
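The partitioning described above, where the edge node serves users locally and only periodically updates the cloud, can be illustrated with a minimal Python sketch. This is a hypothetical reconstruction, not the authors' implementation (which is unpublished): the class names `CloudServer` and `EdgeServer` and the parameter `sync_interval` are illustrative assumptions.

```python
import threading

# Hypothetical sketch of the edge/cloud partitioning: the cloud server holds
# the global game state, while an edge server holds a local view of only the
# users connected to it and pushes that view to the cloud periodically,
# instead of forwarding every position update.

class CloudServer:
    """Maintains the global view of all users and Pokémons."""
    def __init__(self):
        self._lock = threading.Lock()
        self.global_view = {}  # user_id -> last known position

    def merge(self, local_view):
        # Merge one edge node's local view into the global view.
        with self._lock:
            self.global_view.update(local_view)

class EdgeServer:
    """Maintains a local view of the users connected to this edge node."""
    def __init__(self, cloud, sync_interval=5.0):
        self.cloud = cloud
        self.sync_interval = sync_interval
        self._lock = threading.Lock()
        self.local_view = {}  # user_id -> position

    def update_position(self, user_id, position):
        # Served locally: no round trip to the cloud on every move.
        with self._lock:
            self.local_view[user_id] = position

    def sync_once(self):
        # One periodic push of the local view into the cloud's global view.
        with self._lock:
            snapshot = dict(self.local_view)
        self.cloud.merge(snapshot)

    def run_sync_loop(self, stop_event):
        # Background loop: sync every sync_interval seconds until stopped.
        while not stop_event.is_set():
            self.sync_once()
            stop_event.wait(self.sync_interval)

cloud = CloudServer()
edge = EdgeServer(cloud, sync_interval=1.0)
edge.update_position("user-1", (54.58, -5.93))  # a client near the edge node
edge.sync_once()                                # one periodic update to the cloud
print(cloud.global_view["user-1"])
```

Because most position updates stay on the edge node, only the periodic `sync_once` traffic crosses to the cloud, which is the mechanism behind the traffic reduction reported in the evaluation.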
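The two metrics used in the evaluation, round-trip response time and the percentage reduction in edge-to-cloud traffic, can be sketched arithmetically. The helper names (`timed_request`, `percentage_reduction`) and all sample numbers below are made-up illustrations, not the authors' measurement harness; only the formulas reflect what the paper reports.

```python
import time

def timed_request(send_fn, payload):
    """Round-trip latency: from request generation until the reply arrives,
    which includes the computation time on the server."""
    t0 = time.perf_counter()
    reply = send_fn(payload)
    return reply, time.perf_counter() - t0

def percentage_reduction(cloud_only, fog):
    """How much lower the fog-model value is, relative to cloud-only."""
    return (cloud_only - fog) / cloud_only * 100.0

# A stand-in 'server' so the sketch is runnable.
echo_server = lambda payload: payload

_, rtt = timed_request(echo_server, b"update")
print(f"sampled round trip: {rtt * 1000:.3f} ms")

# Illustrative averages over a five-minute window (hypothetical values
# chosen to mirror the reported 'over 20%' and 'over 90%' figures).
print(f"response time improved by {percentage_reduction(0.250, 0.195):.1f}%")
print(f"traffic reduced by {percentage_reduction(12_000_000, 900_000):.1f}%")
```

In practice, `send_fn` would be the network call to either the cloud server or the edge server, and the averages would be taken over all requests issued during the measurement window.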
Figure 5 shows the average response time from the perspective of a user, which is measured as the round trip latency from when the user device generates a request while playing the game until that request is serviced by the server (this includes the computation time on the server). The response time is noted over a five minute period for a varying number of users. It is noted that, in the fog computing model, the average response time for a user playing the game is reduced by over 20%.

Fig. 5. Comparing the average response time of iPokeMon game users when using a server located on the cloud and on an edge node. In the fog computing model, an improvement of over 20% is noted when the server is located on the edge node.

Figure 6 presents the amount of data that is transferred during the five minute period used to measure the average response time. As expected, the data transferred increases with an increasing number of users. However, we observe that in the fog computing model the data transferred between the edge node and the cloud is significantly reduced, yielding an average reduction of over 90%.

Fig. 6. Percentage reduction in the data traffic between edge nodes and the cloud to highlight the benefit of using the fog computing model. The data transferred between the edge node and the cloud is reduced by 90%.

The preliminary results for the given online game use-case highlight the potential of fog computing in reducing the communication frequency between a user device and a remote cloud server, thereby improving the QoS and QoE.

V. CONCLUSIONS

The fog computing model can reduce the latency and frequency of communication between a user and an edge node. This model is possible when concentrated computing resources located in the cloud are decentralised towards the edge of the network to process workloads closer to user devices. In this paper, we have defined fog computing and contrasted it with the cloud. An online game use-case was employed to test the feasibility of the fog computing model. The key result is that the latency of communication decreases for a user, thereby improving the QoS when compared to a cloud-only model. Moreover, it is observed that the amount of data that is transferred towards the cloud is reduced.

Fog computing can improve the overall efficiency and performance of applications. These benefits are currently demonstrated on research use-cases, and there are no commercial fog computing services that integrate the edge and the cloud models. There are a number of challenges that will need to be addressed before this integration can be achieved and fog computing can be delivered as a utility [43]. First of all, a marketplace will need to be developed that makes edge nodes visible and accessible in the fog computing model. This is not an easy task, given that the security and privacy concerns in using edge nodes will need to be addressed. Moreover, potential edge node owners and cloud service providers will need to come to an agreement on how edge nodes can be transparently monitored and billed in the fog computing model. To this end, standards and benchmarks will need to be developed, pricing models will need to take multi-party service level agreements and objectives into account, and the risk for the user will need to be articulated. Not only are these socio-economic factors going to play an important role in the integration of the edge and the cloud in fog computing, but from the technology perspective, workload deployment models and associated programming languages and tool-kits will need to be developed.

REFERENCES

[1] E. Strohmaier, J. J. Dongarra, H. W. Meuer, and H. D. Simon, “The Marketplace of High-Performance Computing,” Parallel Computing, vol. 25, no. 13–14, pp. 1517–1544, 1999.
[2] ——, “Recent Trends in the Marketplace of High Performance Computing,” Parallel Computing, vol. 31, no. 3–4, pp. 261–273, 2005.
[3] K. Asanović, R. Bodik, B. C. Catanzaro, J. J. Gebis, P. Husbands, K. Keutzer, D. A. Patterson, W. L. Plishker, J. Shalf, S. W. Williams, and K. A. Yelick, “The Landscape of Parallel Computing Research: A View from Berkeley,” EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2006-183, Dec 2006. [Online]. Available: http://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html
[4] P. Garcia Lopez, A. Montresor, D. Epema, A. Datta, T. Higashino, A. Iamnitchi, M. Barcellos, P. Felber, and E. Riviere, “Edge-centric Computing: Vision and Challenges,” SIGCOMM Computer Communication Review, vol. 45, no. 5, pp. 37–42, Sep. 2015.
[5] M. Satyanarayanan, P. Simoens, Y. Xiao, P. Pillai, Z. Chen, K. Ha, W. Hu, and B. Amos, “Edge Analytics in the Internet of Things,” IEEE Pervasive Computing, vol. 14, no. 2, pp. 24–31, Apr 2015.
[6] S. Agarwal, M. Philipose, and P. Bahl, “Vision: The Case for Cellular Small Cells for Cloudlets,” in Proceedings of the International Workshop on Mobile Cloud Computing & Services, 2014, pp. 1–5.
[7] C. Meurisch, A. Seeliger, B. Schmidt, I. Schweizer, F. Kaup, and M. Mühlhäuser, “Upgrading Wireless Home Routers for Enabling Large-scale Deployment of Cloudlets,” in Mobile Computing, Applications, and Services, 2015, pp. 12–29.
[8] A. V. Dastjerdi and R. Buyya, “Fog Computing: Helping the Internet of Things Realize Its Potential,” Computer, vol. 49, no. 8, 2016.
[9] T. H. Luan, L. Gao, Z. Li, Y. Xiang, and L. Sun, “Fog Computing: Focusing on Mobile Users at the Edge,” CoRR, vol. abs/1502.01815, 2015. [Online]. Available: http://arxiv.org/abs/1502.01815
[10] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog Computing and Its Role in the Internet of Things,” in Proceedings of the Workshop on Mobile Cloud Computing, 2012, pp. 13–16.
[11] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, “The Case for VM-Based Cloudlets in Mobile Computing,” IEEE Pervasive Computing, vol. 8, no. 4, pp. 14–23, 2009.
[12] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “A View of Cloud Computing,” Communications of the ACM, vol. 53, no. 4, pp. 50–58, 2010.
[13] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility,” Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009.
[14] A. Barker, B. Varghese, J. S. Ward, and I. Sommerville, “Academic Cloud Computing Research: Five Pitfalls and Five Opportunities,” in Proceedings of the USENIX Conference on Hot Topics in Cloud Computing, 2014.
[15] A. Mukherjee, D. De, and D. G. Roy, “A Power and Latency Aware Cloudlet Selection Strategy for Multi-Cloudlet Environment,” IEEE Transactions on Cloud Computing, 2016.
[16] P. Hari, K. Ko, E. Koukoumidis, U. Kremer, M. Martonosi, D. Ottoni, L.-S. Peh, and P. Zhang, “SARANA: Language, Compiler and Run-time System Support for Spatially Aware and Resource-aware Mobile Computing,” Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 366, no. 1881, pp. 3699–3708, 2008.
[17] M. N. Rahman and P. Sruthi, “Real Time Compressed Sensory Data Processing Framework to Integrate Wireless Sensory Networks with Mobile Cloud,” in Online International Conference on Green Engineering and Technologies (IC-GET), 2015, pp. 1–4.
[18] H. Hromic, D. Le Phuoc, M. Serrano, A. Antonic, I. P. Zarko, C. Hayes, and S. Decker, “Real Time Analysis of Sensor Data for the Internet of Things by Means of Clustering and Event Processing,” in Proceedings of the IEEE International Conference on Communications, 2015, pp. 685–691.
[19] B. Zhou, A. V. Dastjerdi, R. Calheiros, S. Srirama, and R. Buyya, “mCloud: A Context-aware Offloading Framework for Heterogeneous Mobile Cloud,” IEEE Transactions on Services Computing, 2016.
[20] D. G. Roy, D. De, A. Mukherjee, and R. Buyya, “Application-aware Cloudlet Selection for Computation Offloading in Multi-cloudlet Environment,” The Journal of Supercomputing, pp. 1–19, 2016.
[21] B. Li, Y. Pei, H. Wu, and B. Shen, “Heuristics to Allocate High-performance Cloudlets for Computation Offloading in Mobile Ad Hoc Clouds,” Journal of Supercomputing, vol. 71, no. 8, pp. 3009–3036, 2015.
[22] L. Xu, Z. Wang, and W. Chen, “The Study and Evaluation of ARM-based Mobile Virtualization,” International Journal of Distributed Sensor Networks, Jan. 2015.
[23] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, “An Updated Performance Comparison of Virtual Machines and Linux Containers,” in IEEE International Symposium on Performance Analysis of Systems and Software, 2015, pp. 171–172.
[24] D. Bernstein, “Containers and Cloud: From LXC to Docker to Kubernetes,” IEEE Cloud Computing, vol. 1, no. 3, pp. 81–84, 2014.
[25] B. Sharma, R. K. Thulasiram, P. Thulasiraman, S. K. Garg, and R. Buyya, “Pricing Cloud Compute Commodities: A Novel Financial Economic Model,” in Proceedings of the 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, 2012, pp. 451–457.
[26] H. Xu and B. Li, “A Study of Pricing for Cloud Resources,” SIGMETRICS Perform. Eval. Rev., vol. 40, no. 4, pp. 3–12, 2013.
[27] Z. Shen, S. Subbiah, X. Gu, and J. Wilkes, “CloudScale: Elastic Resource Scaling for Multi-tenant Cloud Systems,” in Proceedings of the 2nd ACM Symposium on Cloud Computing, 2011, pp. 5:1–5:14.
[28] H. AlJahdali, A. Albatli, P. Garraghan, P. Townend, L. Lau, and J. Xu, “Multi-tenancy in Cloud Computing,” in Proceedings of the 2014 IEEE 8th International Symposium on Service Oriented System Engineering, 2014, pp. 344–351.
[29] S. A. Baset, “Cloud SLAs: Present and Future,” ACM SIGOPS Operating Systems Review, vol. 46, no. 2, pp. 57–66, 2012.
[30] R. Buyya, S. K. Garg, and R. N. Calheiros, “SLA-oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions,” in Proceedings of the International Conference on Cloud and Service Computing, 2011, pp. 1–10.
[31] N. Grozev and R. Buyya, “Inter-Cloud Architectures and Application Brokering: Taxonomy and Survey,” Software: Practice and Experience, vol. 44, no. 3, pp. 369–390, 2014.
[32] A. J. Ferrer, F. Hernández, J. Tordsson, E. Elmroth, A. Ali-Eldin, C. Zsigri, R. Sirvent, J. Guitart, R. M. Badia, K. Djemame, W. Ziegler, T. Dimitrakos, S. K. Nair, G. Kousiouris, K. Konstanteli, T. Varvarigou, B. Hudzia, A. Kipp, S. Wesner, M. Corrales, N. Forgó, T. Sharif, and C. Sheridan, “OPTIMIS: A Holistic Approach to Cloud Service Provisioning,” Future Generation Computer Systems, vol. 28, no. 1, pp. 66–77, 2012.
[33] K. Hashizume, D. G. Rosado, E. Fernández-Medina, and E. B. Fernandez, “An Analysis of Security Issues for Cloud Computing,” Journal of Internet Services and Applications, vol. 4, no. 1, 2013.
[34] N. Gonzalez, C. Miers, F. Redígolo, M. Simplício, T. Carvalho, M. Näslund, and M. Pourzandi, “A Quantitative Analysis of Current Security Concerns and Solutions for Cloud Computing,” Journal of Cloud Computing: Advances, Systems and Applications, vol. 1, no. 1, p. 11, 2012.
[35] I. Stojmenovic, S. Wen, X. Huang, and H. Luan, “An Overview of Fog Computing and its Security Issues,” Concurrency and Computation: Practice and Experience, vol. 28, no. 10, pp. 2991–3005, 2016.
[36] Y. Wang, T. Uehara, and R. Sasaki, “Fog Computing: Issues and Challenges in Security and Forensics,” in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, July 2015, pp. 53–59.
[37] B. Varghese, O. Akgun, I. Miguel, L. Thai, and A. Barker, “Cloud Benchmarking for Performance,” in Proceedings of the IEEE International Conference on Cloud Computing Technology and Science, 2014, pp. 535–540.
[38] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, “Benchmarking Cloud Serving Systems with YCSB,” in Proceedings of the ACM Symposium on Cloud Computing, 2010, pp. 143–154.
[39] B. Varghese, O. Akgun, I. Miguel, L. Thai, and A. Barker, “Cloud Benchmarking for Maximising Performance of Scientific Applications,” IEEE Transactions on Cloud Computing, 2016.
[40] J. Povedano-Molina, J. M. Lopez-Vega, J. M. Lopez-Soler, A. Corradi, and L. Foschini, “DARGOS: A Highly Adaptable and Scalable Monitoring Architecture for Multi-tenant Clouds,” Future Generation Computer Systems, vol. 29, no. 8, pp. 2041–2056, 2013.
[41] S. A. D. Chaves, R. B. Uriarte, and C. B. Westphall, “Toward an Architecture for Monitoring Private Clouds,” IEEE Communications Magazine, vol. 49, no. 12, pp. 130–137, 2011.
[42] J. Montes, A. Sánchez, B. Memishi, M. S. Pérez, and G. Antoniu, “GMonE: A Complete Approach to Cloud Monitoring,” Future Generation Computer Systems, vol. 29, no. 8, pp. 2026–2040, 2013.
[43] B. Varghese, N. Wang, S. Barbhuiya, P. Kilpatrick, and D. S. Nikolopoulos, “Challenges and Opportunities in Edge Computing,” in IEEE International Conference on Smart Cloud, 2016.