Managing Distributed
Cloud Applications
and Infrastructure
A Self-Optimising Approach
Edited by
Theo Lynn · John G. Mooney
Jörg Domaschka · Keith A. Ellis
Palgrave Studies in Digital Business & Enabling
Technologies
Series Editors
Theo Lynn
Irish Institute of Digital Business
DCU Business School
Dublin, Ireland
John G. Mooney
Graziadio Business School
Pepperdine University
Malibu, CA, USA
This multi-disciplinary series will provide a comprehensive and coherent
account of cloud computing, social media, mobile, big data, and other
enabling technologies that are transforming how society operates and how
people interact with each other. Each publication in the series will focus on
a discrete but critical topic within business and computer science, covering
existing research alongside cutting edge ideas. Volumes will be written by
field experts on topics such as cloud migration, measuring the business
value of the cloud, trust and data protection, fintech, and the Internet of
Things. Each book has global reach and is relevant to faculty, researchers
and students in digital business and computer science with an interest in
the decisions and enabling technologies shaping society.
Managing Distributed
Cloud Applications
and Infrastructure
A Self-Optimising Approach
Editors
Theo Lynn
Irish Institute of Digital Business
DCU Business School
Dublin, Ireland

John G. Mooney
Graziadio Business School
Pepperdine University
Malibu, CA, USA
© The Editor(s) (if applicable) and The Author(s) 2020. This book is an open access
publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution
4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits
use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons licence and indicate if changes were made.
The images or other third party material in this book are included in the book’s Creative
Commons licence, unless indicated otherwise in a credit line to the material. If material is not
included in the book’s Creative Commons licence and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are
exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information
in this book are believed to be true and accurate at the date of publication. Neither the
publisher nor the authors or the editors give a warranty, expressed or implied, with respect to
the material contained herein or for any errors or omissions that may have been made. The
publisher remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
This Palgrave Macmillan imprint is published by the registered company Springer Nature
Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This is the third book in the series, “Palgrave Studies in Digital Business
& Enabling Technologies”, which aims to contribute to multi-disciplinary
research on digital business and enabling technologies, such as cloud com-
puting, social media, Big Data analytics, mobile technologies, and the
Internet of Things, in Europe.
Previous volumes sought to consider and extend conventional thinking
on disrupting finance and on cloud computing architectures to better
support heterogeneity, and specifically high-performance computing
workloads. This third volume focuses more specifically on distributed
compute environments that encompass resources, services, and applica-
tions from the cloud to the edge. The emergence of the Internet of Things
(IoT), combined with greater heterogeneity not only in cloud computing
architectures but across the cloud-to-edge continuum, is introducing
new challenges for managing applications and infrastructure across
this continuum. The scale and complexity are now so great that it is
no longer realistic for IT teams to manually foresee potential issues
and manage the dynamism and dependencies across an increasingly
interdependent chain of service provision. This book explores these challenges
and offers a solution for the intelligent and reliable management of physi-
cal infrastructure and the optimal placement of applications for the provi-
sion of services.
The content of the book is based on contributions from researchers on
the RECAP project, a European Union project funded under Horizon
2020 (recap-project.eu). RECAP commenced in 2017 and brought
together ten project partners from six countries across Europe to create a
This book was funded by the European Union’s Horizon 2020 Research
and Innovation Programme through the RECAP project (https://recap-
project.eu) under Grant Agreement Number 732667.
Notes on Contributors
Theo Lynn is the Principal Investigator (PI) of the Irish Centre for Cloud Computing
and Commerce, an Enterprise Ireland/IDA-funded Cloud Computing
Technology Centre. Lynn specialises in the role of digital technologies in
transforming business processes with a specific focus on cloud computing,
social media, and data science.
John G. Mooney is Associate Professor of Information Systems and
Technology Management and Academic Director of the Executive
Doctorate in Business Administration at the Pepperdine Graziadio
Business School. Mooney previously served as Executive Director of the
Institute for Entertainment, Media and Culture from 2015 to 2018. He
was named Fellow of the Association for Information Systems in December
2018. His current research interests include management of digital inno-
vation (i.e. IT-enabled business innovation) and business executive respon-
sibilities for managing digital platforms and information resources.
Linus Närvä is a Software Engineer at Tieto Sweden Support Services
AB. His domain experience includes radio networks, radio base station
software, and cloud computing platforms.
Manuel Noya is a Co-founder and CEO of Linknovate. He holds an
MSc in Materials Science and Technology, a BSc in Chemical Engineering,
and a BSc in Materials Engineering. He is an International Fellow at
SRI International (Menlo Park, CA). His research interests include
materials science, and software technologies in the area of text and
data mining applied to business intelligence.
Per-Olov Östberg is a Research Scientist with a PhD in Computing
Science from Umeå University and more than half a decade of postgradu-
ate experience from both academic research and industry. He has held
Researcher and Visiting Researcher positions at five universities: Umeå
University, Uppsala University, and Karolinska Institutet in Sweden; Ulm
University in Germany; and the Lawrence Berkeley National
Laboratory (LBNL) at the University of California, Berkeley, in the
USA. He specialises in distributed computing resource management
and has worked in the Swedish government’s strategic eScience
research initiative eSSENCE, research and innovation projects funded
by the EU under the FP7 and H2020 programmes, and projects
funded by the Swedish national research council VR.
CHAPTER 1
1.1 Introduction
The objective of this book is to introduce readers to RECAP, an architec-
tural innovation in cloud, fog, and edge computing based on the concepts
of separation of concerns, model-centricism, modular design, and machine
learning and artificial intelligence (AI) for IT operations to support reli-
able capacity provisioning for distributed clouds. The remainder of this
chapter provides a brief overview of computing across the cloud-to-edge
(C2E) continuum and the challenges of distributing and managing appli-
cations across geo-distributed infrastructure. This chapter also introduces
some of the major design concepts informing the RECAP architectural
design and provides an overview of the RECAP architecture and
components.
P. Casari
IMDEA Networks Institute, Madrid, Spain
e-mail: [email protected]
F. Fowley
Irish Institute of Digital Business, Dublin City University, Dublin, Ireland
e-mail: [email protected]
T. Lynn
Irish Institute of Digital Business, DCU Business School, Dublin, Ireland
e-mail: [email protected]
1 TOWARDS AN ARCHITECTURE FOR RELIABLE CAPACITY PROVISIONING… 3
A world where physical objects are seamlessly integrated into the informa-
tion network, and where the physical objects can become active participants
in business processes. Services are available to interact with these “smart
objects” over the Internet, query their state and any information associated
with them, taking into account security and privacy issues. (Haller et al.
2009, p. 15)
the edge and which should not be. This is of critical importance due to the
potentially limited compute resources available at each edge location. To
add to the complexity, some of these questions must be answered in
advance with incomplete data on user demand while others require near
real-time decision making to meet unpredictable and fluctuating user
demands.
Incorrect placement decisions may result in inflexible, unreliable,
expensive networks and services. Such errors become more likely as the decision
space grows so complex that it is no longer realistic for IT teams to cost-effectively
foresee and manually manage all possible configurations, component
interactions, and end-user operations on a detailed level. As such, mecha-
nisms are needed for the automated and intelligent placement and scaling
of dynamic applications and for the management of the physical resources
that underpin such applications. RECAP—an architectural innovation in
cloud and edge computing to support reliable capacity provisioning for
distributed clouds—is posited as such a mechanism.
1.3.1 Separation of Concerns
Separation of concerns is a concept that implements a “what-how”
approach to cloud architectures separating application lifecycle manage-
ment and resource management where the end user or enterprise cus-
tomer focuses its efforts on what needs to be done and the cloud service
provider or cloud carrier focuses on how it should be done (Lynn 2018).
At its core, the end user or enterprise customer focuses on specifying the
business functionality, constraints, quality of service (QoS), and quality of
experience (QoE) (together KPIs) they require, with minimal interference
with the underlying infrastructure (Papazoglou 2012). To support a
separation of concerns, a detailed understanding is required not only of the
KPIs but also of the relationship between the performance of the applications
and underlying infrastructure and the achievement of these KPIs.
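The "what-how" split described above can be illustrated with a short sketch. This is a hypothetical example, not RECAP's actual interface: the customer declares KPIs (QoS and QoE targets) as data, and the provider checks observed behaviour against them without the customer ever touching the infrastructure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceKpis:
    """Declarative 'what': the KPIs an enterprise customer specifies,
    leaving the 'how' (placement, scaling) to the provider."""
    max_latency_ms: float      # QoS: response-time bound
    min_availability: float    # QoS: fraction of successful requests
    min_qoe_score: float       # QoE: e.g. a mean-opinion-score target

def meets_kpis(observed_latency_ms, observed_availability, observed_qoe, kpis):
    """Provider-side check: does the current deployment satisfy the
    customer's declared KPIs?"""
    return (observed_latency_ms <= kpis.max_latency_ms
            and observed_availability >= kpis.min_availability
            and observed_qoe >= kpis.min_qoe_score)

# Illustrative figures for a latency-sensitive service such as vCDN.
vcdn = ServiceKpis(max_latency_ms=50.0, min_availability=0.999, min_qoe_score=4.0)
print(meets_kpis(42.0, 0.9995, 4.2, vcdn))  # True
```

The point of the sketch is that the KPI object carries no infrastructure detail at all; how the provider achieves the bound is invisible to the customer.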
In multi-tenant environments, for example clouds and networks, the
separation of concerns is complicated because the actors will, most likely,
6 J. DOMASCHKA ET AL.
1.3.2 Model-Centricism
Due to the complexity, heterogeneity, and dynamic nature of (i) the busi-
ness domains in which enterprises, cloud service providers, and cloud car-
riers operate; (ii) the application landscape (including legacy and next
generation applications); and (iii) the infrastructure in and upon which
these applications operate and are consumed, a flexible software architec-
ture is required that can evolve in line with business, application, and infra-
structure requirements. Model-centricism is a design principle that uses
machine-readable, highly abstract models developed independently of the
implementation technology and stored in standardised repositories
(Kleppe et al. 2003). This provides a separation of concerns by design and
thus supports greater flexibility when architecting and evolving
enterprise-scale and hyperscale systems. Brown (2004, pp. 319–320) enu-
merates the advantages of using models including:
1.3.3 Modular Design
A modular architecture is an architecture where at least some components
are optional and there exists the ability to add or remove modules or com-
ponents according to the needs of a given use case (Aissaouii et al. 2013).
The benefits of modular design are well known, not least that it supports
separation of concerns and provides greater implementation flexibility, thus
reducing costs and risk. A discrete module or component can be implemented
without having to implement the entire system. Enterprises, cloud
service providers, and cloud carriers (to a lesser extent) come in all sizes
and with their own constraints. A modular design provides these firms
with greater choice and flexibility.
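The notion of optional components can be sketched as a minimal module registry. The component names below are hypothetical stand-ins, not RECAP's actual module catalogue: a deployment assembles only the modules its use case needs.

```python
# Hypothetical module registry illustrating optional components:
# a deployment instantiates only the modules a given use case needs.
class ModuleRegistry:
    def __init__(self):
        self._modules = {}

    def register(self, name, factory):
        self._modules[name] = factory

    def assemble(self, names):
        """Instantiate only the requested modules."""
        return {n: self._modules[n]() for n in names}

registry = ModuleRegistry()
registry.register("monitoring", lambda: "MonitoringCollector")
registry.register("app_optimiser", lambda: "ApplicationOptimiser")
registry.register("infra_optimiser", lambda: "InfrastructureOptimiser")
registry.register("simulator", lambda: "SimulationFramework")

# A small provider might deploy without the simulator:
deployment = registry.assemble(["monitoring", "infra_optimiser"])
print(sorted(deployment))  # ['infra_optimiser', 'monitoring']
```

Because each module is reached only through the registry, adding or removing one does not require changes to the rest of the system, which is the flexibility the text attributes to modular design.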
As use cases increasingly require edge deployments, e.g. vCDN, the ability of
cloud service providers and cloud carriers to respond quickly to demands
on infrastructure, service incidents, and improve on key metrics decreases
(Masood and Hashmi 2019). Increasingly, enterprises are looking to AI
for IT Operations (or AIOps).
AI for IT Operations (AIOps) seeks to use algorithms and machine
learning to dramatically improve the monitoring, operation, and mainte-
nance of distributed systems (Cardoso 2019). Although at a nascent stage
of development, AIOps has the potential of ensuring QoS and customer
satisfaction, boosting engineering productivity, and reducing operational
costs (Prasad and Rich 2018; Dang et al. 2019). This is achieved by:
data needs to be available that can either come from a live system or be
derived from a simulation environment. The Data Analytics Mode pro-
vides the necessary tooling and guidelines to process those data and gener-
ate models from it. The analytics itself is a manual or semi-automated
process that applies approaches from statistics and machine learning in
order to create the models. It consists of pre-processing and data analysis
(or model training respectively). When complete, there is a newly gener-
ated insight in the form of a mathematical formula, a statistical relation-
ship, some other model, or a trained neural network. These insights form
the baseline of the models that are used by other modes and underlying
components.
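The analytics step described above can be sketched as follows. This is an illustrative example only (simple least-squares trend fitting, not RECAP's actual tooling): monitoring samples are pre-processed, and a small statistical model, here a linear formula, is derived for use by the other modes.

```python
# Illustrative analytics pipeline: pre-process raw monitoring samples,
# then fit a simple linear trend (ordinary least squares) as the derived model.
def preprocess(samples):
    """Drop obviously invalid readings (e.g. negative utilisation)."""
    return [(t, u) for t, u in samples if u >= 0]

def fit_linear_trend(samples):
    """Least squares for u = a*t + b over (time, utilisation) pairs."""
    n = len(samples)
    st = sum(t for t, _ in samples)
    su = sum(u for _, u in samples)
    stt = sum(t * t for t, _ in samples)
    stu = sum(t * u for t, u in samples)
    a = (n * stu - st * su) / (n * stt - st * st)
    b = (su - a * st) / n
    return a, b  # the "insight": a mathematical formula usable by other modes

raw = [(0, 10.0), (1, -1.0), (2, 14.0), (4, 18.0)]  # -1.0 is a bad reading
a, b = fit_linear_trend(preprocess(raw))
print(round(a, 2), round(b, 2))  # 2.0 10.0
```

In practice the derived model might equally be a statistical relationship or a trained neural network, as the text notes; the pipeline shape (pre-process, then analyse or train) is the same.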
Run-time Operation Mode: The Run-time Operation Mode uses
online optimisation to continuously update geo-distributed infrastructure
based on the models and the current deployment scenario (deployed
applications, available infrastructure, and user behaviour). Data on the
actual usage of the hardware and software requirements are collected during
run-time. These data are used by optimisers in the system to weigh
the current placement and usage against other options and come up with
new and better configurations. These are output in the form of an optimi-
sation plan that can then be enacted. This changes the configuration of the
actual system. The decisions made in order to improve the system are
based on mathematical, stochastic, or programmatic models of the system
itself, e.g. the capabilities of the hardware, the needs of the application,
current and predicted workload in the system, and the movement of users
in the real world.
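A toy illustration of this run-time loop follows. The cost model is a hypothetical assumption, not RECAP's optimisers: candidate configurations are scored against the current one, and the cheapest becomes the optimisation plan to be enacted.

```python
# Toy run-time optimisation: score candidate placements with a
# hypothetical cost model and emit the cheapest one as the "plan".
def placement_cost(placement, predicted_load):
    """Lower is better: penalise overload plus the cost of provisioned capacity."""
    cost = 0.0
    for node, capacity in placement.items():
        load = predicted_load.get(node, 0.0)
        cost += max(0.0, load - capacity)   # overload penalty
        cost += 0.1 * capacity              # cost of keeping capacity provisioned
    return cost

def choose_plan(current, candidates, predicted_load):
    """Return the placement (current or candidate) with the lowest cost."""
    best, best_cost = current, placement_cost(current, predicted_load)
    for cand in candidates:
        c = placement_cost(cand, predicted_load)
        if c < best_cost:
            best, best_cost = cand, c
    return best

current = {"edge-1": 4.0, "core-1": 8.0}
candidates = [{"edge-1": 8.0, "core-1": 4.0}, {"edge-1": 2.0, "core-1": 10.0}]
load = {"edge-1": 6.0, "core-1": 2.0}   # predicted demand per site
plan = choose_plan(current, candidates, load)
print(plan)
```

Here predicted demand at the edge exceeds its current capacity, so the plan shifts capacity toward the edge site; enacting the plan would then change the configuration of the actual system, as described above.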
Simulation and Planning Mode: The Simulation and Planning Mode
is capable of performing the same steps as the run-time in what-if scenar-
ios and, hence, evaluates the use and acquisition of new, updated, or re-
allocated hardware. This mode supports scenario (what-if) analyses such
as “what if I bought more or different hardware at existing sites”, “what if
I added a new network site in the topology”, and “how much longer can
the available hardware handle my workload, if it keeps growing as pre-
dicted”. Hence, simulation helps operators to take strategic decisions
about their infrastructure. What is more, using simulation, different place-
ment scenarios are explored and weighed against each other to serve as
calibration and constraints for optimisation algorithms.
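One of the what-if questions above ("how much longer can the available hardware handle my workload, if it keeps growing as predicted") can be sketched as a very simple simulation. The growth rate and capacity figures are illustrative assumptions, not project data.

```python
# Illustrative what-if simulation: given predicted compound workload growth,
# how many periods until demand exceeds the available capacity?
def periods_until_saturation(current_load, growth_rate, capacity, horizon=1000):
    """Count periods before load first exceeds capacity under compound growth.
    Returns None if capacity is not exhausted within the horizon."""
    load = current_load
    for period in range(horizon):
        if load > capacity:
            return period
        load *= 1.0 + growth_rate
    return None

# Assumed figures: 60% of capacity used today, 10% growth per period.
print(periods_until_saturation(current_load=60.0, growth_rate=0.10, capacity=100.0))
```

An operator could re-run the same simulation with a larger `capacity` value to answer "what if I bought more hardware at existing sites", which is exactly the strategic use of simulation the text describes.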
1.6.2 Optimisation
Optimisation goals in a multi-tenant distributed cloud-edge environment
vary depending on the respective perspective. On the one hand, infra-
structure optimisation has the goal to enforce a scheduling strategy that
best reflects the intention of the infrastructure provider, e.g. to improve
the utilisation of the available hardware or to save energy. On the other
hand, application optimisation strategies try to find the best-possible
configuration for an application deployment. For example, the latter will
increase the available compute capacity when high workload is expected.
This, however, will only be effective when the scheduling at the infra-
structure level does not apply strategies that counteract these goals.
Consequently, RECAP’s optimisation subsystem realises a cooperative
two-level optimisation framework, in which the optimisers at the two lev-
els (application and infrastructure) interact in order to avoid conflicting
scheduling decisions. Besides infrastructure-level and application-level
optimisers, the subsystem further contains an optimisation orchestrator
that mediates between the two levels. All entities in that subsystem con-
sume monitoring data, application load data, and infrastructure data. The
outputs of the optimisation algorithms in turn are optimisation steps that
are then processed by the Enactor.
Figure 1.3 illustrates the dependencies between the major components
of the optimisation subsystem. While there is just one Infrastructure
Optimiser in a given installation, there may be multiple Application
Optimisers, one per deployed application. Each of these is equipped with
its own application-specific optimisation strategy and optimisation rules.
The Infrastructure Optimiser in turn is equipped with provider-specific
optimisation policies.
The Application Optimisers constantly receive the current status information
from the Infrastructure and Modelling subsystems and, based on
this information, estimate the coming workload. Based on the current
and predicted workload, each Application Optimiser suggests
optimisation steps for its particular application. These suggestions are fed
to the optimisation orchestrator, which, based on the input received, triggers
the infrastructure optimiser, which then decides whether these operations
are feasible and determines the mapping between application components
(bundled in virtual machines or containers) and physical resources.
Application Optimisation and Infrastructure Optimisation are presented
in detail in Chaps. 3 and 4 respectively.
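The cooperative two-level interaction can be sketched as follows. The interfaces are hypothetical simplifications (RECAP's actual components are considerably richer): each Application Optimiser proposes a scaling step, and the orchestrator lets the Infrastructure Optimiser accept only those steps that are feasible, avoiding conflicting scheduling decisions.

```python
# Sketch of cooperative two-level optimisation: application optimisers
# propose scaling steps; the orchestrator forwards them to the
# infrastructure optimiser, which vetoes infeasible ones.
class ApplicationOptimiser:
    def __init__(self, app, extra_cores_needed):
        self.app = app
        self.extra = extra_cores_needed

    def suggest(self):
        """Application-specific suggestion based on predicted workload."""
        return {"app": self.app, "extra_cores": self.extra}

class InfrastructureOptimiser:
    def __init__(self, free_cores):
        self.free_cores = free_cores

    def feasible(self, step):
        return step["extra_cores"] <= self.free_cores

    def apply(self, step):
        self.free_cores -= step["extra_cores"]

def orchestrate(app_optimisers, infra_optimiser):
    """Mediate between levels: accept only suggestions the infrastructure can honour."""
    accepted = []
    for ao in app_optimisers:
        step = ao.suggest()
        if infra_optimiser.feasible(step):
            infra_optimiser.apply(step)
            accepted.append(step)
    return accepted

infra = InfrastructureOptimiser(free_cores=8)
apps = [ApplicationOptimiser("vCDN", 6), ApplicationOptimiser("analytics", 4)]
plan = orchestrate(apps, infra)
print([s["app"] for s in plan])  # ['vCDN'] -- the analytics step was infeasible
```

Mediating every suggestion through one orchestrator is what prevents two applications from independently claiming the same physical resources, which is the conflict the two-level design exists to avoid.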
Only as he spoke did he scrutinise the young woman. That she was a noble
lady was shown not only by her riding habit and the splendour of her
horse's trappings, but also by her aristocratic and haughty bearing and
the proud expression of her beautiful face.
But only now, when he felt he had discovered the reason why he always
rode with his visor closed, did he himself fervently wish to hide his
face from the sight of other people; not from fear, for fear he did not
know, but from the compulsion of some inner impulse that he made no
attempt to understand.
CHAPTER EIGHT
"Indeed so," said the priest, "I saw him that day. He had an encounter
with several knights of Peter of Colfax's castle, and he rescued from
their hands a maiden whom, judging by the trappings of her mount, I
suspected to belong to the house of Montfort. Together they rode off
toward the north, but my son did not say where he was going or for what
purpose. His only remark, as he donned his armour, was that I should now
see the hawk tending the dove. Has he not returned?"
"And there he would perhaps be more at home than here," said the
priest calmly.