PALGRAVE STUDIES IN DIGITAL BUSINESS
AND ENABLING TECHNOLOGIES
SERIES EDITORS: THEO LYNN · JOHN G. MOONEY

Managing Distributed
Cloud Applications
and Infrastructure
A Self-Optimising Approach
Edited by
Theo Lynn · John G. Mooney
Jörg Domaschka · Keith A. Ellis
Palgrave Studies in Digital Business & Enabling
Technologies

Series Editors
Theo Lynn
Irish Institute of Digital Business
DCU Business School
Dublin, Ireland

John G. Mooney
Graziadio Business School
Pepperdine University
Malibu, CA, USA
This multi-disciplinary series will provide a comprehensive and coherent
account of cloud computing, social media, mobile, big data, and other
enabling technologies that are transforming how society operates and how
people interact with each other. Each publication in the series will focus on
a discrete but critical topic within business and computer science, covering
existing research alongside cutting edge ideas. Volumes will be written by
field experts on topics such as cloud migration, measuring the business
value of the cloud, trust and data protection, fintech, and the Internet of
Things. Each book has global reach and is relevant to faculty, researchers
and students in digital business and computer science with an interest in
the decisions and enabling technologies shaping society.

More information about this series at


http://www.palgrave.com/gp/series/16004
Theo Lynn • John G. Mooney
Jörg Domaschka • Keith A. Ellis
Editors

Managing Distributed
Cloud Applications
and Infrastructure
A Self-Optimising Approach
Editors
Theo Lynn
Irish Institute of Digital Business
DCU Business School
Dublin, Ireland

John G. Mooney
Graziadio Business School
Pepperdine University
Malibu, CA, USA

Jörg Domaschka
Institute of Information Resource Management
Ulm University
Ulm, Germany

Keith A. Ellis
Intel Labs Europe
Dublin, Ireland

ISSN 2662-1282     ISSN 2662-1290 (electronic)


Palgrave Studies in Digital Business & Enabling Technologies
ISBN 978-3-030-39862-0    ISBN 978-3-030-39863-7 (eBook)
https://doi.org/10.1007/978-3-030-39863-7

© The Editor(s) (if applicable) and The Author(s) 2020. This book is an open access
publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution
4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits
use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons licence and indicate if changes were made.
The images or other third party material in this book are included in the book’s Creative
Commons licence, unless indicated otherwise in a credit line to the material. If material is not
included in the book’s Creative Commons licence and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are
exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information
in this book are believed to be true and accurate at the date of publication. Neither the
publisher nor the authors or the editors give a warranty, expressed or implied, with respect to
the material contained herein or for any errors or omissions that may have been made. The
publisher remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.

This Palgrave Macmillan imprint is published by the registered company Springer Nature
Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

This is the third book in the series, “Palgrave Studies in Digital Business
& Enabling Technologies”, which aims to contribute to multi-disciplinary
research on digital business and enabling technologies, such as cloud com-
puting, social media, Big Data analytics, mobile technologies, and the
Internet of Things, in Europe.
Previous volumes sought to consider and extend conventional thinking
on disrupting finance and on cloud computing architectures to better
support heterogeneity, and specifically high-performance computing
workloads. This third volume focuses more specifically on distributed
compute environments that encompass resources, services, and applica-
tions from the cloud to the edge. The emergence of the Internet of Things
(IoT) combined with greater heterogeneity, not only online in cloud com-
puting architectures but across the cloud-to-edge continuum, is introduc-
ing new challenges for managing applications and infrastructure across
this continuum. The scale and complexity are such that it is
no longer realistic for IT teams to manually foresee the potential issues
and manage the dynamism and dependencies across an increasingly
interdependent chain of service provision. This book explores these challenges
and offers a solution for the intelligent and reliable management of physi-
cal infrastructure and the optimal placement of applications for the provi-
sion of services.
The content of the book is based on contributions from researchers on
the RECAP project, a European Union project funded under Horizon
2020 (recap-project.eu). RECAP commenced in 2017 and brought
together ten project partners from six countries across Europe to create a
new way to optimally provision distributed heterogeneous resources to
deliver services. The RECAP goal was to investigate and demonstrate an
intelligent means to optimally place and scale dynamic applications and to
manage the physical resources that underpin such applications, while tar-
geting lower costs and better quality of service (QoS). RECAP is a model-
based methodology, encompassing a reference architecture, models, and
proof-of-concept implementations. This book is an output of this joint
research.
The book is organised around key research contributions from RECAP.
Chapter 1 introduces the context driving the need for more reliable
capacity provisioning of applications and infrastructure in distributed
clouds. While IoT offers the potential of tremendous value for the private
sector, public sector, and society as whole, it introduces complexity of
several orders of magnitude greater in an already complex feature space.
Chapter 1 introduces RECAP, an architectural innovation to support reli-
able capacity provisioning for distributed clouds. It presents a high-level
conceptual overview of RECAP and some of the major design concepts
informing its design, namely separation of concerns, model-centricism,
modular design, and support for the use of machine learning and artificial
intelligence for IT operations. The remainder of this book is loosely organ-
ised around the four functional building blocks of RECAP followed by a
series of case studies to illustrate how RECAP can be implemented
modularly.
Chapter 2 defines and discusses RECAP’s Data Analytics and Machine
Learning subsystem. This chapter describes the infrastructure for the
acquisition and processing of data from applications and systems, and
explains the methodology used to derive statistical and machine learning
models from this data. These models are central to the operation of
RECAP and are an input to all other subsystems, informing run-time
planning, decision making, and optimisation support at both the infra-
structure and the application levels.
Chapter 3 introduces RECAP’s Application Optimisation subsystem.
Optimising distributed cloud applications is a complex problem that
requires understanding a myriad of factors. This chapter outlines the
RECAP approach to application optimisation and presents its framework
for joint modelling of applications, workloads, and the propagation of
these workloads in applications and networks.
Chapter 4 introduces the RECAP Infrastructure Optimiser tasked with
optimal application placement and infrastructure optimisation. This
chapter details the methodology, models, and algorithmic approach taken
to augment RECAP Application Optimiser output in producing a more
holistic optimisation, cognisant of both application and infrastructure pro-
vider interests.
Chapter 5 focuses on Simulation and Planning in RECAP. The size and
complexity of modern infrastructures make real-time testing and experi-
mentation difficult, time-consuming, and costly. The RECAP Simulation
Framework offers cloud and communication service providers an alterna-
tive solution while retaining accuracy and verisimilitude. It comprises two
simulation approaches, Discrete Event Simulation (DES) and Discrete
Time Simulation (DTS), that provide enterprises with information about
optimal virtual cache placements, resource handling and remediation of
the system, optimal request servicing, and finally, optimal distribution of
requests and resource adjustment. This information can inform better
infrastructure capacity planning while taking into account performance
requirements and constraints such as cost and energy consumption.
Chapter 6 presents four case studies each illustrating an implementa-
tion of one or more RECAP subsystems. The first case study focuses on
infrastructure optimisation for a 5G network use case. The
second case study explores application optimisation for virtual content dis-
tribution networks on a large Tier 1 network operator. The third case
study looks at how RECAP components can be embedded in an IoT plat-
form to reduce costs and increase quality of service. The final case study
presents how data analytics and simulation components, within RECAP,
can be used by a small-to-medium-sized enterprise (SME) for cloud capac-
ity planning.

Dublin, Ireland Theo Lynn


Malibu, CA, USA  John G. Mooney
Ulm, Germany  Jörg Domaschka
Dublin, Ireland  Keith A. Ellis
Acknowledgement

This book was funded by the European Union’s Horizon 2020 Research
and Innovation Programme through the RECAP project (https://recap-
project.eu) under Grant Agreement Number 732667.

Contents

1 Towards an Architecture for Reliable Capacity Provisioning for Distributed Clouds
Jörg Domaschka, Frank Griesinger, Mark Leznik, Per-Olov Östberg, Keith A. Ellis, Paolo Casari, Frank Fowley, and Theo Lynn

2 RECAP Data Acquisition and Analytics Methodology
Paolo Casari, Jörg Domaschka, Rafael García Leiva, Thang Le Duc, Mark Leznik, and Linus Närvä

3 Application Optimisation: Workload Prediction and Autonomous Autoscaling of Distributed Cloud Applications
Per-Olov Östberg, Thang Le Duc, Paolo Casari, Rafael García Leiva, Antonio Fernández Anta, and Jörg Domaschka

4 Application Placement and Infrastructure Optimisation
Radhika Loomba and Keith A. Ellis

5 Simulating Across the Cloud-to-Edge Continuum
Minas Spanopoulos-Karalexidis, Christos K. Filelis Papadopoulos, Konstantinos M. Giannoutakis, George A. Gravvanis, Dimitrios Tzovaras, Malika Bendechache, Sergej Svorobej, Patricia Takako Endo, and Theo Lynn

6 Case Studies in Application Placement and Infrastructure Optimisation
Miguel Angel López-Peña, Hector Humanes, Johan Forsman, Thang Le Duc, Peter Willis, and Manuel Noya

Index
Notes on Contributors

Malika Bendechache is a Postdoctoral Researcher at the Irish Institute
of Digital Business at Dublin City University. She received her PhD in
Computer Science at University College Dublin (UCD) in the area of
parallel and distributed data mining. Bendechache was previously a
researcher at the Insight Centre for Data Analytics at UCD. Her research
interests span across distributed systems, Big Data analytics, and simula-
tion of large-scale distributed cloud, fog, and edge computing environ-
ments and associated applications.
Paolo Casari is a Research Assistant Professor at the IMDEA Networks
Institute, Madrid, Spain. His research interests include many aspects of
networked communications, such as channel modelling, network protocol
design, localisation, simulation, and experimental evaluations. He has (co)
authored over 200 scientific publications, is a Senior Member of IEEE,
and is an Associate Editor for the IEEE Transactions on Mobile Computing
and for the IEEE Transactions on Wireless Communications.
Jörg Domaschka is a Senior Researcher and Group Manager at the
Institute of Information Resource Management at Ulm University. He
holds a Diploma in Computer Science from FAU, Erlangen Nuremberg,
and a PhD in Computer Science from Ulm University. His research inter-
ests include distributed systems, fault-tolerance, middleware platforms,
and NoSQL databases. Current focus of his work lies on middleware and
run-time systems for geo-distributed infrastructure and applications.


Keith A. Ellis is a Senior Research Scientist and Manager of Intel Labs
Europe, Ireland. His research focus is orchestration and control in Cyber
Physical Systems. Ellis has led and been actively involved with
national, international, and European part-funded research—
REVISITE, COOPERATE, IMR, WISDOM, RealValue, and EL4L,
targeting various domains—built environment, industrial, water
management, smart grid, and agricultural. He is the holder of mul-
tiple patents and author of several journal articles, book chapters,
and conference papers. He holds an MSc in Tech and Innovation
Management and a BSc (Hons) in Tech.
Patricia Takako Endo is a Postdoctoral Research Fellow at Irish Institute
of Digital Business, Dublin City University, Ireland, and a Professor at
Universidade de Pernambuco, Brazil. Her research interests include cloud
computing, fog computing, Internet of Things, system availability, and
data analytics. Her articles have appeared in over 110 publications in the
above research areas.
Antonio Fernández Anta is a Research Professor at IMDEA Networks
Institute. Previously he was a Full Professor at the Universidad Rey Juan
Carlos (URJC) and was on the Faculty of the Universidad Politécnica de
Madrid (UPM). He spent sabbatical years at Bell Labs Murray Hill and
MIT Media Lab. He has more than 25 years of research experience and
more than 200 scientific publications. He was the Chair of the Steering
Committee of DISC and has served in the TPC of numerous confer-
ences and workshops. He is a Senior Member of ACM and IEEE.
Christos K. Filelis Papadopoulos received his Diploma in Engineering
degree from the Electrical and Computer Engineering Department of the
Democritus University of Thrace, Greece, in 2010 and his PhD in
Numerical Analysis and High Performance Scientific Computations from
the same university in 2014. His research interests include preconditioned
iterative methods, multigrid and multilevel methods, and parallel
computing.
Johan Forsman received an MS degree in Computer Science from Luleå
University of Technology, Sweden. He is a Product Manager and Principal
Solution Architect at Tieto Product Development Services. Forsman has
over 20 years of experience in development of mobile telecommunication
systems and is currently involved in business opportunities in the emerg-
ing telecoms landscape, introducing NFV, 5G, and IoT technologies. His
domain of expertise includes mobile networks and specifically radio access
networks and virtualisation technology.
Frank Fowley is a Senior Research Engineer in the Irish Institute for
Digital Business (IIDB) and previously held the same position at the Irish
Centre for Cloud Computing and Commerce (IC4) in Dublin City
University (DCU). His main research revolves around cloud architec-
ture and migration. Prior to joining DCU, Fowley held a number of
senior positions in telecom and ICT companies in Ireland and abroad.
He holds an MSc in Security and Forensic Computing and a BSc in
Engineering.
Rafael García Leiva is a Research Engineer at the IMDEA Networks
Institute, Madrid, Spain. Before this appointment, he was a Research
Assistant at the University of Córdoba, R&D Coordinator at Andago
Ingeniería, and a Principal at Entropy Computational Services. His
research interests lie in the areas of Big Data and machine learning.
Konstantinos M. Giannoutakis is a Postdoctoral Research Fellow at the
Information Technologies Institute of Centre for Research and Technology
Hellas. His research interests include high-performance and scientific
computing, parallel systems, grid/cloud computing, service-oriented
architectures, and software engineering techniques. His articles have
appeared in over 80 publications in the above research areas.
George A. Gravvanis is a Professor in the Department of Electrical and
Computer Engineering of Democritus University of Thrace. His research
interests include computational methods, mathematical modelling and
applications, and parallel computations. He has published over 200
papers and is a member of the editorial board of international
journals.
Frank Griesinger is a Researcher and Software Engineer at the Institute
of Information Resource Management at Ulm University. He holds an
MSc in Computer Science. The focus of his research interest is on the
modelling, tracing, and self-adaptability of highly connected and dynamic
applications as well as description languages and execution environ-
ments for cloud native applications.
Hector Humanes received his degree in Software Engineering and mas-
ter’s in Embedded and Distributed Systems Software from Universidad
Politécnica de Madrid, Spain. Previously, he worked for the System and
Software Technology Group, a research group of the Universidad
Politécnica of Madrid. Since 2018, he has been the Technical Leader of
the Innovation Department in Sistemas Avanzados de Tecnología, S.A
(SATEC), a Spanish ICT company.
Thang Le Duc is a Senior Researcher at Tieto Product Development
Services with more than 10 years of R&D experience in both academia
and industry. He received his PhD in Computer Engineering from
Sungkyunkwan University (SKKU) and previously worked as a
Postdoctoral Researcher at SKKU and Umeå University. Prior to
that, he had held multiple academic positions and worked as a senior
engineer in different industrial projects. His research interests include
data analytics, system/workload modelling, cloud/edge computing,
and SDN/NFV.
Mark Leznik is a Researcher and PhD Candidate at the Institute for
Organisation and Management of Information systems at Ulm University.
He holds an MSc in Computer Science from Ulm University, with the
focus on computer vision, computer graphics, and machine learning.
His current research interests include time series analysis, data syn-
thesis, and anomaly detection using neural networks.
Radhika Loomba is a Research Scientist with Intel Labs Europe. She
holds a PhD and BTech (Hons) degree in Computer Science and
Engineering. Her PhD thesis focused on collaborative mobile sensing and
mobile cloud computing technologies. Her current research focus is on
orchestration, analytics, and optimisation for Cyber-Physical Systems from
a mathematical modelling perspective, and her research interests include
cloud computing, SDN, fog and edge computing, distributed collabora-
tive systems, control theory, orchestration, planning, and scheduling.
Miguel Angel López-Peña holds a BS degree in Computer Science from
Universidad Carlos III de Madrid, Spain, and a master’s from the Spanish
Ministerio de Educación (EQF level 7). He is currently pursuing a PhD in
Science and Computer Technologies for Smart Cities at the Universidad
Politécnica de Madrid. Since 2005, he has been an Innovation and
Development Manager with the Sistemas Avanzados de Tecnología,
S.A. (SATEC), a Spanish ICT company.
Theo Lynn is Full Professor of Digital Business at Dublin City University
and is Director of the Irish Institute of Digital Business. He was formerly
the Principal Investigator (PI) of the Irish Centre for Cloud Computing
and Commerce, an Enterprise Ireland/IDA-funded Cloud Computing
Technology Centre. Lynn specialises in the role of digital technologies in
transforming business processes with a specific focus on cloud computing,
social media, and data science.
John G. Mooney is Associate Professor of Information Systems and
Technology Management and Academic Director of the Executive
Doctorate in Business Administration at the Pepperdine Graziadio
Business School. Mooney previously served as Executive Director of the
Institute for Entertainment, Media and Culture from 2015 to 2018. He
was named Fellow of the Association for Information Systems in December
2018. His current research interests include management of digital inno-
vation (i.e. IT-enabled business innovation) and business executive respon-
sibilities for managing digital platforms and information resources.
Linus Närvä is a Software Engineer at Tieto Sweden Support Services
AB. His domain experience includes radio networks, radio base station
software, and cloud computing platforms.
Manuel Noya is a Co-founder and CEO of Linknovate. He holds an
MSc in Materials Science and Technology, a BSc in Chemical Engineering,
and a BSc in Materials Engineering. He is an International Fellow at
SRI International (Menlo Park, CA). His research interests include
materials science, and software technologies in the area of text and
data mining applied to business intelligence.
Per-Olov Östberg is a Research Scientist with a PhD in Computing
Science from Umeå University and more than half a decade of postgradu-
ate experience from both academic research and industry. He has held
Researcher and Visiting Researcher positions at five universities: Umeå
University, Uppsala University, and Karolinska Institutet in Sweden; Ulm
University in Germany; and the Lawrence Berkeley National
Laboratory (LBNL) at the University of California, Berkeley, in the
USA. He specialises in distributed computing resource management
and has worked in the Swedish government’s strategic eScience
research initiative eSSENCE, research and innovation projects funded
by the EU under the FP7 and H2020 programmes, and projects
funded by the Swedish national research council VR.
Minas Spanopoulos-Karalexidis is a Research Assistant at the
Information Technologies Institute of Centre for Research and Technology
Hellas. His research interests include high-performance scientific comput-
ing, simulation methods, sparse matrix technologies, iterative methods,
parallel and distributed systems, and static timing analysis.
Sergej Svorobej is a Postdoctoral Researcher in the Irish Institute of
Digital Business at Dublin City University. Svorobej’s research focus is on
complex systems, modelling and simulation with specific emphasis on
cloud computing applications and infrastructure. Prior to working on the
Horizon 2020 RECAP project, Svorobej was a Researcher at the
Irish Centre for Cloud Computing and Commerce and on the FP7
CACTOS project. Previously, he held roles in SAP Ireland and SAP
UK. He holds a PhD from Dublin City University and a BSc in
Information Systems and Information Technology from Dublin
Institute of Technology.
Dimitrios Tzovaras is the Director (and Senior Researcher Grade ‘A’) of
the Information Technologies Institute. He received a Diploma in
Electrical Engineering and a PhD in 2D and 3D Image Compression from
the Aristotle University of Thessaloniki, Greece in 1992 and 1997, respec-
tively. Prior to his current position, he was a Senior Researcher on the
Information Processing Laboratory at the Electrical and Computer
Engineering Department of the Aristotle University of Thessaloniki. His
main research interests include network and visual analytics for network
security, computer security, data fusion, biometric security, virtual reality,
machine learning, and artificial intelligence.
Peter Willis manages the Software Based Networks team in BT Applied
Research. He has been researching and developing Network Functions
Virtualisation since 2011. He published the first carrier NFV testing
results in June 2012 and is co-inventor of the term “NFV”. Willis is
currently leading BT’s research to improve NFV and SDN technol-
ogy and its management. Willis previously worked on the develop-
ment of PBB-TE, BT’s 21st Century Network Architecture, and BT’s
Internet service.
List of Figures

Fig. 1.1 Interdependencies between models
Fig. 1.2 RECAP conceptual reference model
Fig. 1.3 Component-oriented overview of the RECAP optimisation subsystem
Fig. 1.4 High-level overview on RECAP simulation framework
Fig. 1.5 The RECAP approach to retrieve data, analyse it, and export the resulting models to other RECAP components
Fig. 1.6 Run-time loop of RECAP
Fig. 1.7 High-level overview on simulation interaction
Fig. 1.8 High-level overview on data analytics subsystems
Fig. 2.1 Conceptual overview of data handling in RECAP
Fig. 2.2 RECAP monitoring layers
Fig. 2.3 RECAP's distributed monitoring architecture
Fig. 2.4 A summary of the main steps of the methodology for exploratory data analysis of new datasets
Fig. 2.5 Decomposition of received traffic at a cache
Fig. 2.6 Statistical distributions fitted to records of data sizes of pulled cache content
Fig. 2.7 Simulated workload for a search engine
Fig. 2.8 An exemplary presentation of serving time of requests in a workload data set. (a) Histogram of serving time of user requests. (b) Box plot of serving time of user requests
Fig. 2.9 An exemplary presentation of a correlation of features in a workload data set
Fig. 2.10 Snapshot of the dashboard for the testbed at UULM
Fig. 2.11 Snapshot of the dashboard for the testbed at Tieto
Fig. 3.1 A stratified approach to application optimisation iteratively building on three optimisation building blocks—(1) classic optimisation on static data, (2) application adaptation to variations in workloads and resource availability, (3) joint autoscaling and optimisation in multi-tenancy scenarios using machine learning (adapted from Le Duc et al. (2019))
Fig. 3.2 A platform for the integration of predictors and modellers
Fig. 4.1 Offline infrastructure optimisation modelling process
Fig. 4.2 Online application placement and infrastructure optimiser
Fig. 4.3 Mapping a service request graph and a contextualised landscape graph
Fig. 4.4 Overview of Umeå network topology (site routers only)
Fig. 4.5 Representation of a subgraph of contextualised network landscape
Fig. 4.6 The load translation methodology in full
Fig. 4.7 Average compute utilisation by VNF placement profile for compute-1 compute-3
Fig. 4.8 Provider-centric vs customer-centric attributes (Loomba et al. 2017)
Fig. 4.9 Representation of the evolutionary algorithm
Fig. 5.1 High-level conceptual overview of the RECAP simulation framework
Fig. 5.2 DES simulation model data format (inputs)
Fig. 5.3 DES simulation results format (outputs)
Fig. 5.4 Application simulation model example
Fig. 5.5 DTS architecture
Fig. 5.6 BT hierarchical level of sites
Fig. 5.7 A site architecture of DTS
Fig. 6.1 Categories of communication services and example of 5G use cases
Fig. 6.2 A forwarding graph of a network service in an LTE network
Fig. 6.3 Logical view of the testbed
Fig. 6.4 Maximum provider and customer utility of each distribution
Fig. 6.5 Provider utility vs. customer utility for different distributions
Fig. 6.6 Total utility for normal day, event, and 24% growth scenarios
Fig. 6.7 Abstract representation of BT UK network topology
Fig. 6.8 BT network locations in UK
Fig. 6.9 Customer utility vs. number of vCDN sites
Fig. 6.10 Provider utility vs. number of vCDN nodes
Fig. 6.11 Example of IoT hybrid network for mobile devices
Fig. 6.12 Smart city structure
Fig. 6.13 SAT-IoT platform architectural model
Fig. 6.14 SAT IoT platform high-level conceptual architecture
Fig. 6.15 Optimisation results
Fig. 6.16 Number of records transferred for SAT-IoT running route planning and city traffic monitoring application using cloud-based processing and RECAP-optimised processing
Fig. 6.17 (a), (b), and (c): Histograms of the distribution of the responses, response time, and scatter plot of the response size and time for the LKN search engine
Fig. 6.18 Time series of the LKN's search engine workload (data aggregated over windows of 30 minutes)
Fig. 6.19 Workload predictor dashboard
Fig. 6.20 LKN conceptual ElasticSearch (ES) architecture
Fig. 6.21 ElasticSearch (ES) workload flow
Fig. 6.22 Comparison of actual and simulation query response times
Fig. 6.23 LKN system performance under different traffic scenarios
List of Tables

Table 3.1 Summary of diffusion algorithms
Table 4.1 Steps in the RECAP infrastructure optimisation process
Table 5.1 Average values of user requests
Table 6.1 Characteristics and requirements of communication services
Table 6.2 Use case requirements and corresponding RECAP solutions
Table 6.3 RECAP deliverables to address validation scenarios
Table 6.4 Initial placement plans of VNFs
Table 6.5 vCDN use case requirements and corresponding RECAP components
Table 6.6 vCDN technical challenge and RECAP solution
Table 6.7 LKN platform components
Table 6.8 Statistics of the response codes returned by the LKN search engine
Table 6.9 The top 10 IP addresses directing the largest number of requests to the LKN search engine
CHAPTER 1

Towards an Architecture for Reliable
Capacity Provisioning for Distributed Clouds

Jörg Domaschka, Frank Griesinger, Mark Leznik,
Per-Olov Östberg, Keith A. Ellis, Paolo Casari,
Frank Fowley, and Theo Lynn

Abstract The complexity of computing along the cloud-to-edge continuum
presents significant challenges to ICT operations and in particular
reliable capacity planning and resource provisioning to meet unpredictable,
fluctuating, and mobile demand. This chapter presents a high-level
conceptual overview of RECAP—an architectural innovation to support
reliable capacity provisioning for distributed clouds—and its operational
modes and functional building blocks. In addition, the major design
concepts informing its design—namely separation of concerns,
model-centricism, modular design, and machine learning and artificial
intelligence for IT operations—are also discussed.

J. Domaschka (*) • F. Griesinger • M. Leznik
Institute of Information Resource Management, Ulm University, Ulm, Germany
e-mail: [email protected]; [email protected]; [email protected]

P.-O. Östberg
Umeå University, Umeå, Sweden
e-mail: [email protected]

K. A. Ellis
Intel Labs Europe, Dublin, Ireland
e-mail: [email protected]

© The Author(s) 2020
T. Lynn et al. (eds.), Managing Distributed Cloud Applications and Infrastructure,
Palgrave Studies in Digital Business & Enabling Technologies,
https://doi.org/10.1007/978-3-030-39863-7_1

Keywords Capacity provisioning • Distributed cloud computing •
Edge computing • Infrastructure optimisation • Application
optimisation

1.1   Introduction
The objective of this book is to introduce readers to RECAP, an architec-
tural innovation in cloud, fog, and edge computing based on the concepts
of separation of concerns, model-centricism, modular design, and machine
learning and artificial intelligence (AI) for IT operations to support reli-
able capacity provisioning for distributed clouds. The remainder of this
chapter provides a brief overview of computing across the cloud-to-edge
(C2E) continuum and the challenges of distributing and managing appli-
cations across geo-distributed infrastructure. This chapter also introduces
some of the major design concepts informing the RECAP architectural
design and provides an overview of the RECAP architecture and
components.

P. Casari
IMDEA Networks Institute, Madrid, Spain
e-mail: [email protected]
F. Fowley
Irish Institute of Digital Business, Dublin City University, Dublin, Ireland
e-mail: [email protected]
T. Lynn
Irish Institute of Digital Business, DCU Business School, Dublin, Ireland
e-mail: [email protected]

1.2   From the Cloud to the Edge and Back Again


The convergence and increasing ubiquity of wireless internet access, cloud
computing, Big Data analytics, social and mobile technologies presage the
possibilities of billions of people and things connected through mobile
devices and smart objects in the cloud. This phenomenon is heralded as
the coming of the fourth industrial revolution, the networked society, the
Internet of Things (IoT), indeed the Internet of Everything. Connecting
but a fraction of the 1.4 trillion “things” worldwide today is predicted to
create US$14.4 trillion and US$4.6 trillion in private and public sector
value, respectively, through accelerated innovation and improved asset
utilisation, employee productivity, supply chain, logistics, and customer
experience (Cisco 2013a, b).
Today, while we are moving towards a society whose social structures
and activities, to a greater or lesser extent, are organised around digital
information networks that connect people, processes, things, data, and
social networks, the reality is still some distance away (Lynn et al. 2018).
The dawn, if not the day, of the Internet of Things is here. Haller et al.
(2009) define IoT as:

A world where physical objects are seamlessly integrated into the informa-
tion network, and where the physical objects can become active participants
in business processes. Services are available to interact with these “smart
objects” over the Internet, query their state and any information associated
with them, taking into account security and privacy issues. (Haller et al.
2009, p. 15)

This definition largely assumes that smart objects (end-devices), rang-
ing from the simple to the complex in terms of compute, storage, and
networking capabilities, will interact with each other and the cloud to
provide and consume services and data, but not necessarily at all times.
Furthermore, these smart end-devices, e.g. smart phones or transport sen-
sors, may move to different geographic areas where, for economic, geo-
graphic, or technological reasons, they cannot always be connected, yet
will be expected to carry on functioning regardless. IoT embodies many of
the drivers that see an increased move from cloud-centric deployments to
distributed application deployments in the cloud or on the edge
infrastructure.
Within the traditional cloud computing paradigm, processing and stor-
age typically take place within the boundaries of a cloud and its underlying
infrastructure, and are often optimised for specific types of applications
and workloads with predictable patterns. Neither the cloud nor the net-
works connecting these objects to the cloud were designed to cater for the
flood of geographically dispersed, heterogeneous end points in the IoT
and the volume, variety, and velocity of data that they generate.
Fog computing and edge computing are two relatively new paradigms
of computing that have been proposed to address these challenges. Fog
computing is a horizontal, physical, or virtual resource paradigm that
resides between smart end-devices and traditional cloud data centres. It is
designed to support vertically isolated, latency-sensitive applications by
providing ubiquitous, scalable, layered, federated, and distributed com-
puting, storage, and network connectivity (Iorga et al. 2018). In contrast,
edge computing is local computing at the edge of the network layer
encompassing the smart end-devices and their users (Iorga et al. 2018). If
one imagines a cloud-to-edge (C2E) continuum, data processing and
storage may be local to an end-device at the edge of a network, located in
the cloud, or somewhere in between, in “the fog”.
As discussed, while fog computing and edge computing offer solutions
for delivering IoT to industry and the masses, they introduce new and
significant challenges to cloud service providers, network operators and
enterprises using this infrastructure. These environments face a high
degree of dynamism as an immediate consequence of user behaviour.
Overall, this setting creates a set of challenges regarding how to distribute
and run applications in such unpredictable geo-distributed environments.
Similar demands are seen at the network edge given the growth of rela-
tively nascent services, e.g. Content Delivery Networks. Spreading infra-
structure out over large geographic areas increases the complexity and cost
of planning, managing, and operating that physical infrastructure. Firstly,
it raises the question of how much infrastructure of what type to place
where in the network—a decision that must be made in advance of any
service being offered. Secondly, applications deployed over large geo-
graphically distributed areas require a detailed understanding of the tech-
nical requirements of each application and the impact on the application
when communication between an application’s components suffers due to
increased latency and/or reduced bandwidth. Thirdly, for a service pro-
vider along the C2E continuum, the question arises about which (parts)
of the various applications in a multi-tenant setting should be operated at
the edge and which should not be. This is of critical importance due to the
potentially limited compute resources available at each edge location. To
add to the complexity, some of these questions must be answered in
advance with incomplete data on user demand while others require near
real-time decision making to meet unpredictable and fluctuating user
demands.
Incorrect placement decisions may result in inflexible, unreliable,
expensive networks and services. This is more likely as the decision space
becomes so complex that it is no longer realistic for IT teams to cost-effectively
foresee and manually manage all possible configurations, component
interactions, and end-user operations on a detailed level. As such, mecha-
nisms are needed for the automated and intelligent placement and scaling
of dynamic applications and for the management of the physical resources
that underpin such applications. RECAP—an architectural innovation in
cloud and edge computing to support reliable capacity provisioning for
distributed clouds—is posited as such a mechanism.

1.3   Design Principles


This section outlines some of the major design concepts informing the
RECAP architectural design, namely separation of concerns, model-
centricism, modular design, and machine learning and AI for IT operations.

1.3.1  Separation of Concerns
Separation of concerns is a concept that implements a “what-how”
approach to cloud architectures separating application lifecycle manage-
ment and resource management where the end user or enterprise cus-
tomer focuses its efforts on what needs to be done and the cloud service
provider or cloud carrier focuses on how it should be done (Lynn 2018).
At its core, the end user or enterprise customer focuses on specifying the
business functionality, constraints, quality of service (QoS), and quality of
experience (QoE) (together KPIs) they require, with minimal interference
with the underlying infrastructure (Papazoglou 2012). To support a sepa-
ration of concerns, a detailed understanding is required not only of the KPIs
but also of the relationship between the performance of the applications and
underlying infrastructure and the achievement of these KPIs.
In multi-tenant environments, for example clouds and networks, the
separation of concerns is complicated because the actors will, most likely,
belong to different organisations (including competitors), have very dif-
ferent KPIs, different load patterns, different network topologies, and
more critically, different priorities. Any architecture for reliable capacity
provisioning, whether from an application or infrastructure perspective,
across the C2E continuum must have mechanisms to support separation
of concerns in an agile way.
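
As a simple illustration of this "what-how" split, consider an enterprise customer that declares only the KPIs it requires and leaves placement decisions entirely to the provider. The following sketch (in Python; the service name, metric names, and threshold values are illustrative examples rather than RECAP definitions) shows such a declarative specification and a check of whether the provider is currently meeting it.

# Illustrative "what" specification declared by the enterprise customer.
# All names and values are hypothetical examples, not RECAP definitions.
customer_kpis = {
    "service": "video-analytics",
    "qos": {"p95_latency_ms": 80, "min_throughput_rps": 500},
    "qoe": {"startup_delay_s": 2.0},
    "constraints": {"data_residency": "EU", "monthly_budget_eur": 4000},
}

def violated_kpis(measured, spec):
    """Return the QoS targets the provider is currently missing.
    How a violation is remedied (the "how") remains the provider's concern."""
    violations = []
    if measured["p95_latency_ms"] > spec["qos"]["p95_latency_ms"]:
        violations.append("p95_latency_ms")
    if measured["throughput_rps"] < spec["qos"]["min_throughput_rps"]:
        violations.append("min_throughput_rps")
    return violations

# Example: latency target missed, throughput target met.
print(violated_kpis({"p95_latency_ms": 95, "throughput_rps": 620}, customer_kpis))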

1.3.2  Model-Centricism
Due to the complexity, heterogeneity, and dynamic nature of (i) the busi-
ness domains in which enterprises, cloud service providers, and cloud car-
riers operate; (ii) the application landscape (including legacy and next
generation applications); and (iii) the infrastructure in and upon which
these applications operate and are consumed, a flexible software architec-
ture is required that can evolve in line with business, application, and infra-
structure requirements. Model-centricism is a design principle that uses
machine-readable, highly abstract models developed independently of the
implementation technology and stored in standardised repositories
(Kleppe et al. 2003). This provides a separation of concerns by design, and
thus supports greater flexibility when architecting and evolving
enterprise-scale and hyperscale systems. Brown (2004, pp. 319–320) enu-
merates the advantages of using models including:

• Models help people understand and communicate complex ideas.
• Many different kinds of elements can be modelled depending on the
context offering different views of the world.
• There is commonality at all levels of these models in both the prob-
lems being analysed, and in the proposed solutions.
• Applying the ideas of different kinds of models and transforming
them between representations provide a well-defined style of devel-
opment, enabling the identification and reuse of common approaches.
• Existing model-driven and model-centric conceptual frameworks
exist to express models, model relationships, and model-to-model
transformations.
• Tools and technologies can help to realise this approach, and make it
practical and efficient to apply.

To meet the needs of infrastructure providers as well as application
operators, an understanding is needed of how load and load
changes on the application layer influence the application's resource
demands at the infrastructure layer and further, how competing resource
demands from multiple applications, and indeed multiple application pro-
viders, impact the infrastructure layer.
From a high-level perspective, users impose a certain load on the appli-
cations; that load will change over time. At the same time, users have
performance requirements for a given application. For instance, a lack of
responsiveness from a website may make them switch while otherwise they
would have stayed. The operators of that application want to ensure that
some level of performance is guaranteed in order to keep their customers.
Hence, it is their task to adapt the performance of the application to the
amount of workload imposed by the users. How and whether this can be
done depends on the architecture and implementation of the application.
For distributed applications (that constitute a huge portion of today’s
applications), horizontal scaling increases the computational capacity.
This, in turn, reduces queuing and keeps latency constant despite increas-
ing workload. Moreover, for applications composed of multiple different
components, it is important to understand how load imposed at the
customer-facing components ripples through the application graph and
impacts the loads on each and every component. Finally, to understand
how much performance a component running on a dedicated hardware
unit (e.g. processor type, RAM type, and disk type) can deliver under a
specific configuration (e.g. available RAM and available cores), a mapping
needs to be available that translates load metrics on the application level
such as arrival rate of requests of a specific type to load metrics on hard-
ware such as CPU used, RAM used, disk usage, as well as the performance
achieved from it. In multi-tenant environments such as virtualised cloud
and cloud/edge systems, the mutual impact of multiple, concurrently
running components from different owners on the same physical hard-
ware is critical.
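
To make the notion of such a mapping concrete, the sketch below shows, in Python, how application-level load metrics such as per-request-type arrival rates might be translated into estimated hardware demand. The component names, request types, and per-request cost coefficients are hypothetical placeholders rather than RECAP values; in practice such coefficients would be derived from benchmarking or statistical modelling.

# Illustrative sketch only: component names, request types, and per-request
# cost coefficients are hypothetical, not taken from RECAP.
COST_PER_REQUEST = {
    ("web-frontend", "search"): {"cpu_ms": 3.0, "ram_mb": 0.5, "disk_iops": 0.1},
    ("web-frontend", "browse"): {"cpu_ms": 1.2, "ram_mb": 0.2, "disk_iops": 0.0},
    ("db-backend", "search"):   {"cpu_ms": 8.5, "ram_mb": 2.0, "disk_iops": 1.5},
}

def translate_load(component, arrival_rates):
    """Translate application-level arrival rates (requests per second, per
    request type) into estimated hardware demand for one component instance."""
    demand = {"cpu_ms_per_s": 0.0, "ram_mb": 0.0, "disk_iops": 0.0}
    for request_type, rate in arrival_rates.items():
        cost = COST_PER_REQUEST.get((component, request_type))
        if cost is None:
            continue  # no model for this (component, request type) pair
        demand["cpu_ms_per_s"] += rate * cost["cpu_ms"]
        demand["ram_mb"] += rate * cost["ram_mb"]
        demand["disk_iops"] += rate * cost["disk_iops"]
    return demand

# Example: 40 search and 200 browse requests per second hitting the frontend.
print(translate_load("web-frontend", {"search": 40, "browse": 200}))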
A model-centric approach for capacity provisioning for distributed
clouds requires at least six models—(1) user models, (2) workload models,
(3) application models, (4) infrastructure models, (5) load translation
models, and (6) Quality-of-Service (QoS) models (Fig. 1.1).
User models describe the behaviour of users with respect to the usage
of individual network-based services. That is, they capture different types
of users and their usage patterns over time. What is more, they also describe
their movement over geographical regions such that it becomes possible
to understand which edge parts of the network will have dedicated
demands for specific services. This is of special interest to edge computing
systems as user mobility impacts network load and application access
patterns.

Fig. 1.1 Interdependencies between models
Workload models describe the workload issued on a system from users
and external systems. While the user model captures the location and type
of users, the workload model describes what actions these users execute
and how this translates into interaction with which parts of an application.
Application models fulfil multiple purposes. First and foremost, they
describe which components compose a distributed application and how
these components are linked with each other (static application topology).
This part of the application model also captures how to technically install
the application in the infrastructure and how to update a running deploy-
ment. Deploying an application creates a run-time application topology
that describes how many instances of each application component are cur-
rently available at which location and how they communicate with each
other on a per-instance basis. The (work)load transition models, as a
sub-model of the application model, describe how
incoming workload propagates through the application's components and
the impact this has on the outgoing links of each component.
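
The following sketch illustrates, under simplified assumptions, how such a load transition model could propagate an incoming request rate through a static application topology. The topology, component names, and propagation factors are hypothetical, and the traversal assumes a tree-shaped component graph in which each component is reached along a single path.

# Hypothetical static topology with per-link propagation factors: each unit
# of load arriving at a component forwards the given fraction of that load
# along the outgoing link. Assumes a tree-shaped topology.
PROPAGATION = {
    ("load-balancer", "web-frontend"): 1.0,   # every request is forwarded
    ("web-frontend", "cache"): 1.0,           # every request checks the cache
    ("cache", "db-backend"): 0.2,             # 20% of lookups miss and hit the DB
}

def propagate(entry_component, entry_rate):
    """Propagate workload from the customer-facing component through the
    application graph; return the load arriving at every component."""
    load = {entry_component: entry_rate}
    frontier = [entry_component]
    while frontier:
        source = frontier.pop()
        for (src, dst), factor in PROPAGATION.items():
            if src == source:
                load[dst] = load.get(dst, 0.0) + load[source] * factor
                frontier.append(dst)
    return load

# 100 requests/s arriving at the load balancer.
print(propagate("load-balancer", 100.0))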
Application models alone are not capable of determining whether or not a
given application topology (or scaling factor) is capable of servicing a cer-
tain amount of load, as they have an understanding neither of the available
hardware and its capabilities nor of how the application load translates
into load on the physical layers.
Infrastructure models capture the layout of the physical and virtual
infrastructure and represent key components such as compute, storage,
and network capabilities, as well as their grouping in racks, data centres,
and similar. Furthermore, they describe capabilities of the hardware
including hardware architecture, virtualisation platform (e.g. type of
hypervisor), and virtual machines (containers) running on the host.
Load translation models enhance the infrastructure models and pro-
vide a mapping from workload on application components to resource
demands on the physical infrastructure. They are crucial for understanding
whether enough physical resources are available to handle workload on
application level. In addition, they describe the impact of congestion
caused by components with similar hardware demands concurrently run-
ning on the same hardware.
Finally, Quality-of-Service (QoS) models provide a means to express
QoS demands towards an application and monitor the fulfilment of these
QoS requirements. In addition, they are able to represent the interdepen-
dencies between QoS aspects on different levels, e.g. what QoS require-
ments at the infrastructure level follow from QoS requirements on the
application level. QoS models may be taken as constraints for the optimi-
sation problems solved when rearranging applications and infrastructure.

1.3.3  Modular Design
A modular architecture is an architecture where at least some components
are optional and there exists the ability to add or remove modules or com-
ration of concerns and provides greater implementation flexibility thus
reducing costs and risk. A discrete module or component can be imple-
mented without having to implement the entire system. Enterprises, cloud
service providers, and cloud carriers (to a lesser extent) come in all sizes
and with their own constraints. A modular design provides these firms
with greater choice and flexibility.

1.3.4  Machine Learning and AI for IT Operations


As discussed above, the complexity and scale of distributed cloud infra-
structure increasingly require an automated approach. As the deluge of
data generated by IoT continues to increase, and as demands from new
use cases increasingly require edge deployments, e.g. vCDN, the ability of
cloud service providers and cloud carriers to respond quickly to demands
on infrastructure, service incidents, and improve on key metrics decreases
(Masood and Hashmi 2019). Increasingly, enterprises are looking to AI
for IT Operations (or AIOps).
AI for IT Operations (AIOps) seeks to use algorithms and machine
learning to dramatically improve the monitoring, operation, and mainte-
nance of distributed systems (Cardoso 2019). Although at a nascent stage
of development, AIOps has the potential of ensuring QoS and customer
satisfaction, boosting engineering productivity, and reducing operational
costs (Prasad and Rich 2018; Dang et al. 2019). This is achieved by:

1. automating and enhancing routine IT operations so that expensive
and scarce IT staff have more time to focus on high value tasks,
2. predicting and recognising anomalies, serious issues, and outages
more quickly and with greater accuracy than humanly possible
thereby reducing mean time to detect (MTTD) and increasing mean
time to failure (MTTF), and
3. suggesting intelligent remediation that reduces mean time to repair
(MTTR) (IBM 2019; Masood and Hashmi 2019).

Predictions suggest that by 2024, 60% of enterprises will have adopted
AIOps, implying that novel solutions to capacity provisioning must
accommodate this shift in enterprise IT operations (Gillen et al. 2018).

1.4   Operational Modes


A model-centric approach assumes cloud-edge applications, and the envi-
ronments that they run in, can be described by a set of models and that,
based on these models, it is possible to optimise both cloud-edge infra-
structures and their applications at run-time. As such, an optimisation
(control) system and mechanism for creating, validating, and extrapolat-
ing these models to large-scale environments are required. This requires a
variety of interoperating components, which we refer to here as modes.
Data Analytics Mode: The creation of high-quality models requires an
in-depth understanding of many aspects ranging from users to application
to infrastructure. For deriving this understanding, a sufficient amount of
data needs to be available that can either come from a live system or be
derived from a simulation environment. The Data Analytics Mode pro-
vides the necessary tooling and guidelines to process those data and gener-
ate models from it. The analytics itself is a manual or semi-automated
process that applies approaches from statistics and machine learning in
order to create the models. It consists of pre-processing and data analysis
(or model training respectively). When complete, there is a newly gener-
ated insight in the form of a mathematical formula, a statistical relation-
ship, some other model, or a trained neural network. These insights form
the baseline of the models that are used by other modes and underlying
components.
Run-time Operation Mode: The Run-time Operation Mode uses
online optimisation to continuously update geo-distributed infrastructure
based on the models and the current deployment scenario (deployed
applications, available infrastructure, and user behaviour). Data on the
actual usage of the hardware and software requirements are collected dur-
ing run-time. These data are used by optimisers in the system to weigh
the current placement and usage against other options and come up with
new and better configurations. These are output in the form of an optimi-
sation plan that can then be enacted. This changes the configuration of the
actual system. The decisions made in order to improve the system are
based on mathematical, stochastic, or programmatic models of the system
itself, e.g. the capabilities of the hardware, the needs of the application,
current and predicted workload in the system, and the movement of users
in the real world.
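
In simplified terms, the Run-time Operation Mode behaves like a continuous monitoring-and-optimisation loop. The sketch below is only an illustration of that loop in Python; the monitor, landscaper, optimiser, and orchestrator objects and their method names are placeholders standing in for the RECAP subsystems described in later chapters, not actual RECAP interfaces.

import time

def runtime_loop(monitor, landscaper, optimiser, orchestrator, interval_s=60):
    """Simplified control loop: observe the system, let the optimiser weigh
    the current deployment against alternatives, and enact the chosen plan.
    The parameters are hypothetical stand-ins for RECAP subsystems."""
    while True:
        telemetry = monitor.collect()              # telemetry from physical and virtual resources
        landscape = landscaper.current_graph()     # current infrastructure and deployment state
        plan = optimiser.propose_plan(telemetry, landscape)
        if plan is not None:                       # act only if a better configuration was found
            orchestrator.enact(plan)               # scale, migrate, or re-place components
        time.sleep(interval_s)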
Simulation and Planning Mode: The Simulation and Planning Mode
is capable of performing the same steps as the run-time in what-if scenar-
ios and, hence, evaluates the use and acquisition of new, updated, or re-
allocated hardware. This mode supports scenario (what-if) analyses such
as “what if I bought more or different hardware at existing sites”, “what if
I added a new network site in the topology”, and “how much longer can
the available hardware handle my workload, if it keeps growing as pre-
dicted”. Hence, simulation helps operators to take strategic decisions
about their infrastructure. What is more, using simulation, different place-
ment scenarios are explored and weighed against each other to serve as
calibration and constraints for optimisation algorithms.
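For example, the last of these what-if questions reduces to a simple projection. The following sketch, with invented figures and a single aggregate capacity number, illustrates the idea.

    import math

    def months_until_exhausted(current_load, capacity, monthly_growth):
        """Whole months before predicted growth exceeds the installed capacity."""
        if current_load >= capacity:
            return 0
        # load(t) = current_load * (1 + monthly_growth) ** t, solved for t
        return math.floor(math.log(capacity / current_load)
                          / math.log(1 + monthly_growth))

    # What-if: 12 000 req/s today, 20 000 req/s installed, 5% growth per month.
    print(months_until_exhausted(12_000, 20_000, 0.05))   # -> 10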

1.5   RECAP Conceptual Reference Model


Figure 1.2 presents an overview of the RECAP conceptual reference
model which identifies the main components in RECAP and how they
interoperate. The diagram depicts a generic high-level architecture and is
intended to facilitate the understanding of how RECAP operates.
The diagram below outlines the components in the RECAP architec-
ture and shows the process flow loops in the optimisation framework. The
Landscaper Component (1) acquires information on the state and con-
figuration of the physical and virtual infrastructure resources from dispa-
rate sources and presents these as a graph. The Monitoring Component
(2) uses probes to collect telemetry metrics needed for the modelling and
optimisation tasks, including CPU consumption, disk I/O, memory
loads, network loads, and packet statistics—both from virtual and physical
resources. These are input to the optimisers and the output is used to
orchestrate and enact resource changes in the cloud network.

Fig. 1.2 RECAP conceptual reference model

The Application Optimiser (3) is used to optimally autoscale the
applications and resources. Application scaling refers to horizontal scaling,
namely adding additional application components into the system dynam-
ically, while infrastructure scaling relates to vertical scaling, whereby vir-
tual resources are increased for a component. Applications can be scaled
locally or globally and may be in response to run-time traffic limits or
resource levels being reached or may be controlled by data analytic work-
load predictive systems. The application to be deployed is composed of
multiple connected service components in the form of service function
chains (SFC), which need to be placed together. To avoid a highly sub-optimal
application deployment onto a distributed virtual cloud infrastructure, it is
necessary to introduce sufficient functional gran-
ularity into the application structure to allow separate components to be
provisioned and scaled independently. Application optimisation is essen-
tially a mapping of a graph of application components and dependencies
to the network of computing resources that delivers an optimal overall
KPI target such as maximum latency or minimum throughput or maxi-
mum usage cost. The mapping is done subject to application-specific rules
or constraints relating the individual resource requirements for compo-
nents (Minimum/Maximum instance constraints) and their mutual co-­
hosting needs (Affinity/Anti-Affinity constraints).
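A hypothetical sketch of checking a single candidate mapping against such constraints is shown below; the component names, sites, and constraint values are invented for illustration and are not taken from the RECAP optimisers.

    def feasible(placement, instance_bounds, affinity, anti_affinity):
        """Check one component-to-site mapping against the constraint types above."""
        counts, sites = {}, {}
        for component, site in placement:
            counts[component] = counts.get(component, 0) + 1
            sites.setdefault(component, set()).add(site)
        for component, (lo, hi) in instance_bounds.items():
            if not lo <= counts.get(component, 0) <= hi:
                return False                   # Minimum/Maximum instance constraint
        for a, b in affinity:                  # must share at least one site
            if not sites.get(a, set()) & sites.get(b, set()):
                return False
        for a, b in anti_affinity:             # must never share a site
            if sites.get(a, set()) & sites.get(b, set()):
                return False
        return True

    placement = [("frontend", "edge-1"), ("frontend", "edge-2"),
                 ("cache", "edge-1"), ("db", "cloud-1")]
    print(feasible(placement,
                   instance_bounds={"frontend": (1, 4), "cache": (1, 1), "db": (1, 1)},
                   affinity=[("frontend", "cache")],
                   anti_affinity=[("cache", "db")]))   # -> True
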
The outputs of the application optimiser are treated as requests or rec-
ommendations for application scaling and placement, to be subsequently
evaluated by the Infrastructure Optimiser (4) which augments the ini-
tial placement decision by taking into account the additional knowledge of
the available physical infrastructures, the infrastructure policies of the
infrastructure provider and specific committed Service Level Agreement
(SLA) targets. This allows the infrastructure optimiser to retain full con-
trol of the infrastructure resources and to ultimately decide what applica-
tion requests are enacted and how applications are orchestrated. The
Infrastructure Optimiser (4) includes (1) Application Placement which
optimally maps application components to virtual infrastructure resources
to deliver an optimal overall target such as maximum power consumption,
maximum operational cost, or specific committed Service Level Agreement
(SLA) targets; (2) Infrastructure Optimisation to optimally utilise the
physical infrastructure; and (3) Capacity Planning to perform what-if sce-
narios for additional physical infrastructure.
The Infrastructure Optimiser and Simulator use Infrastructure
Models (landscapes) (6). These models/landscapes present the physical
and virtual structure, configuration, and topology of the known resources.
The telemetry utilisation and performance statistics and the application
KPI information are also needed for the Infrastructure Optimiser. Together
these inputs form a consolidated infrastructure model that has the appro-
priate granularity tailored for the given use case thus making optimisation
practicably achievable.

Application and Workload Models (7 and 9) describe the application
components and their behaviours and dependencies and map the applica-
tion components with their virtual resource requirements. The Workload
Models describe the traffic flows through the application components.
Both models are used by the workload predictor and application optimiser
to forecast workloads and application components and recommend how
these components should be placed on the network topology based on
optimising the overall application KPIs. The application models describe
applications as graphs of components with interdependencies and con-
straints in the form of graph links. The workload models describe the
relationships between control and data plane traffic, between end-to-end
latency and traffic, and between traffic and resource usage. They have
been built based on the data analysis of historical trace and synthetic work-
load data using statistical and machine learning techniques.
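As an illustration of how a workload model can describe traffic flows through connected components, the following sketch propagates an external arrival rate along invented per-link call ratios; the topology and figures are hypothetical.

    from collections import defaultdict

    CALL_GRAPH = {                    # caller -> [(callee, calls per caller request)]
        "gateway": [("auth", 1.0), ("catalog", 0.8)],
        "auth":    [("db", 0.2)],
        "catalog": [("db", 2.0), ("cache", 3.0)],
    }

    def propagate(entry_rate, order=("gateway", "auth", "catalog", "db", "cache")):
        """Push an external arrival rate through the components in topological order."""
        rate = defaultdict(float)
        rate[order[0]] = entry_rate
        for component in order:
            for callee, ratio in CALL_GRAPH.get(component, []):
                rate[callee] += rate[component] * ratio
        return dict(rate)

    print(propagate(100.0))
    # -> gateway 100, auth 100, catalog 80, db 180, cache 240 requests/s
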
In the Application Optimiser (3), the traffic workloads are mapped to
the application sub-components, and the propagation of workloads is
modelled to account for the migratory capability of the components and
the mobile nature of users. The Optimisers use Load Distribution
Models (6) to account for this mobility of application components and
the impact of component migration on application performance. They
effectively model the traffic flows in the system and can predict the effect
on workloads if application components are changed. They are based on
the results of load balancing after a component migrates and on user mod-
els which drive component migration. These models are used by the opti-
misers to calculate the cost of component migration when selecting an
optimisation option.
Load Translation Models (7) are used by the Infrastructure
Optimiser (4) to map application configuration to physical infrastructure
capacity. The optimiser correlates the virtual resources (VMs/Containers)
to physical resources, and the physical resource utilisation with the appli-
cation component KPIs (throughput, response time, availability, speed of
service creation, and speed of service remediation). The translation pro-
vides a mapping of actual (point-in-time) telemetry metrics of physical
resource consumption (utilisation metrics) to application component
workloads (i.e. the utilisation of resources by the components that are
running on those physical machines). Effectively, this maps the application
placement with the performance of components so placed.
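A minimal sketch of such a translation step is shown below; the apportioning rule and all figures are invented for illustration and are not the actual RECAP translation models.

    def translate(host_cpu_percent, vcpus_on_host):
        """Apportion a host's measured utilisation across the components on it."""
        total = sum(vcpus_on_host.values())
        return {component: host_cpu_percent * vcpus / total
                for component, vcpus in vcpus_on_host.items()}

    # A physical host measured at 64% CPU, shared by two component instances.
    print(translate(64.0, {"transcoder": 6, "cache": 2}))
    # -> {'transcoder': 48.0, 'cache': 16.0}
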
The User Models (9) are based on an agent-based modelling of users,
e.g. citizens navigating through a city and utilising mobile services.

It is possible to create models based on historical trace data and simu-
lated synthetic data. In this case, Simulators (5) are a valuable tool for
generating the user mobile behaviour and demand for application services
as well as the corresponding traffic from the related cloud services.
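The following sketch hints at what such an agent-based user model can look like; the grid, movement rule, and per-step request rate are invented for illustration only.

    import random

    def simulate_users(n_users=50, steps=100, grid=10, seed=42):
        """Random-walk agents on a city grid, each step issuing one request to
        the edge site responsible for the quadrant the agent is currently in."""
        random.seed(seed)
        positions = [(random.randrange(grid), random.randrange(grid))
                     for _ in range(n_users)]
        requests_per_site = {}
        for _ in range(steps):
            for i, (x, y) in enumerate(positions):
                dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)])
                x = min(max(x + dx, 0), grid - 1)
                y = min(max(y + dy, 0), grid - 1)
                positions[i] = (x, y)
                site = (x // 5, y // 5)        # four quadrants, one site each
                requests_per_site[site] = requests_per_site.get(site, 0) + 1
        return requests_per_site

    print(simulate_users())   # demand generated per (hypothetical) edge site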

1.5.1  Optimisation Process Flows


Process A: The Application Optimiser (3) is fed with appropriate out-
put from the Landscaper Component (1) and Monitoring Component
(2), which represents the current resource capacity and utilisation, as well
as the Application Models, which represent the application workload and
performance targets. The Application Optimiser’s (3) prediction engine
produces a recommended deployment of components and outputs this to
the Infrastructure Optimiser (4) for evaluation, and then to the
Orchestrator (11) for orchestration. The Application Optimiser (3)
can be subsequently triggered dynamically to handle variations in applica-
tion workloads and user behaviours so that placement and autoscaling can
take place. In its most proactive mode, the optimiser can create virtual
resources, placing and autoscaling based on machine-learning models that
are run against workload and user metrics in real-time.
Process B: The Infrastructure Optimiser (4) uses the output of the
Landscaper Component (1) and Monitoring Component (2), which
represents the current resource capacity and utilisation, as well as the
Workload and Infrastructure Models to optimise the utilisation of the
physical hardware resources based on required Service Level targets and
policies. The Infrastructure Optimiser (4) optimises the use of the phys-
ical resources taking energy, equipment, and operational costs into account
as well as the plans and policies around physical resource utilisation. This
is based on a logical model of the infrastructure, virtual and physical
resources, and their utilisation mappings. The Infrastructure Optimiser
(4) also needs to represent the mobile nature of workloads and the ability
of application component migration to properly optimise the deployment.
The Infrastructure Optimiser uses the Simulator (5) in a Human-in-the-­
Loop fashion, using the simulator to formulate deployment mapping
selections and calibrating the optimiser’s algorithmic process. The
Simulator (5) validates the results of the optimisation and provides
“what-if” scenario planning.

1.6   RECAP Building Blocks


While the previous section presents RECAP as a loosely integrated con-
ceptual architecture, this section focuses on four high-level functional
building blocks (subsystems) that encapsulate RECAP logic and provide
the necessary functionality to realise the three operational modes discussed
in Sect. 1.4. The respective building blocks are loosely coupled and form a
frame for the RECAP architecture. The building blocks are themselves
distributed so that the entire RECAP system represents a distributed
architecture. The major functional building blocks (subsystems) are
Infrastructure Modelling and Monitoring, Optimisation, Simulation and
Planning, and Data Analytics and Machine Learning. Each of the blocks is
discussed in-depth in the remaining chapters of the book.

1.6.1  Infrastructure Modelling and Monitoring


The old adage “garbage in, garbage out” particularly applies to making
sound optimisation decisions. Thus, within RECAP’s Run-time Operation
Mode, having an accurate understanding of the current state of applica-
tions and the underpinning infrastructure is of paramount importance.
Furthermore, the long-term collection of accurate data is a key require-
ment for being able to apply meaningful data analytics and machine learn-
ing strategies (see Data Analytics Mode). Here, the current state of
application and infrastructure is represented by two complementary data
sets, the infrastructure landscape and the infrastructure monitoring (telem-
etry) provided through the Landscaper Component and the Monitoring
Component respectively. As discussed earlier, the Landscaper Component
is tasked with providing physical and virtual infrastructure data as “a land-
scape” consisting of nodes and edges. In that landscape, nodes represent
for instance physical servers, virtual machines, or application instances. In
contrast, edges either represent mappings from applications to virtual
resources and further to physical resources, or (network) connections
between instances on the same abstraction layer. In short, the Landscaper
Component identifies what type of infrastructure is available and where,
while the Monitoring Component provides live data from that infrastruc-
ture. Both are essential for modelling and optimisation and are encom-
passed in a requisite distributed design.
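A minimal, hypothetical sketch of such a landscape graph and a cross-layer lookup is given below; the node names and layer tags are invented and do not reflect the Landscaper Component's actual data model.

    NODES = {
        "server-a":  {"layer": "physical"},
        "server-b":  {"layer": "physical"},
        "vm-1":      {"layer": "virtual"},
        "app-cache": {"layer": "application"},
    }
    EDGES = [
        ("vm-1", "server-a", "hosted-on"),         # cross-layer mapping
        ("app-cache", "vm-1", "hosted-on"),
        ("server-a", "server-b", "network-link"),  # same-layer connection
    ]

    def physical_host(node):
        """Follow hosted-on edges downwards until a physical node is reached."""
        while NODES[node]["layer"] != "physical":
            node = next(dst for src, dst, kind in EDGES
                        if src == node and kind == "hosted-on")
        return node

    print(physical_host("app-cache"))   # -> server-a
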
As discussed in Sect. 1.5, the RECAP Monitoring Component collects
telemetry-like data from physical infrastructure, virtual infrastructure, and
applications; stores this data in a unified format; and ultimately provides
the data in a consumer-specific format to other components in the wider
RECAP system. Both the Landscaper Component and the Monitoring
Component have been designed to operate on a per-location (data centre)
basis. This helps in respecting administrative domains and, in the case of
monitoring, reduces overall network traffic.

1.6.2  Optimisation
Optimisation goals in a multi-tenant distributed cloud-edge environment
vary depending on the respective perspective. On the one hand, infra-
structure optimisation has the goal to enforce a scheduling strategy that
best reflects the intention of the infrastructure provider, e.g. to improve
the utilisation of the available hardware or to save energy. On the other
hand, application optimisation strategies try to find the best-possible con-
figuration for an application deployment. Hence, the latter will increase
the available compute capacity when high workload is expected. This,
however, will only succeed when the scheduling at the infra-
structure level does not apply strategies that counteract these goals.
Consequently, RECAP’s optimisation subsystem realises a cooperative
two-level optimisation framework, in which the optimisers at the two lev-
els (application and infrastructure) interact in order to avoid conflicting
scheduling decisions. Besides infrastructure-level and application-level
optimisers, the subsystem further contains an optimisation orchestrator
that mediates between the two levels. All entities in that subsystem con-
sume monitoring data, application load data, and infrastructure data. The
outputs of the optimisation algorithms in turn are optimisation steps that
are then processed by the Enactor.
Figure 1.3 illustrates the dependencies between the major components
of the optimisation subsystem. While there is just one Infrastructure
Optimiser in a given installation, there may be multiple Application
Optimisers, one per deployed application. Each of these is equipped with
its own application-specific optimisation strategy and optimisation rules.
The Infrastructure Optimiser in turn is equipped with provider-specific
optimisation policies.

Fig. 1.3 Component-oriented overview of the RECAP optimisation subsystem

The Application Optimisers constantly receive the current status infor-
mation from the Infrastructure and Modelling subsystems and, based on
this information, estimate the upcoming workload. Based on the cur-
rent and predicted workload, each Application Optimiser suggests
optimisation steps for its particular application. These suggestions are fed
to the optimisation orchestrator, which, based on the input received, trig-
gers the infrastructure optimiser, which then decides whether these oper-
ations are feasible and determines the mapping between application components
(bundled in virtual machines or containers) and physical resources.
Application Optimisation and Infrastructure Optimisation are presented
in detail in Chaps. 3 and 4 respectively.
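The following sketch illustrates this cooperative flow in highly simplified form; the scaling rule, core counts, and policy limit are invented and do not reflect the actual optimisers described in those chapters.

    import math

    def application_optimiser(app, predicted_load, per_instance_capacity=100):
        """Suggest a horizontal-scaling step for one application."""
        instances = math.ceil(predicted_load / per_instance_capacity)
        return {"app": app, "scale_to": instances, "cores_per_instance": 2}

    def infrastructure_optimiser(suggestions, free_cores, max_cores_per_app=16):
        """Accept only the suggestions that fit remaining capacity and policy."""
        accepted = []
        for s in suggestions:
            needed = s["scale_to"] * s["cores_per_instance"]
            if needed <= min(free_cores, max_cores_per_app):
                accepted.append(s)
                free_cores -= needed
        return accepted

    suggestions = [application_optimiser("vod-streaming", predicted_load=620),
                   application_optimiser("analytics", predicted_load=1900)]
    print(infrastructure_optimiser(suggestions, free_cores=40))
    # -> only the vod-streaming request fits; the analytics request is deferred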

1.6.3  Simulation and Planning


Figure 1.4 illustrates the core architecture of the RECAP Simulation
Framework. It consists of an API Component, a Simulation Manager, and
Simulation Engines. The API component serves as an entry point for
users, be they human or other RECAP components, or external parties.
The API Component offers an interface for controlling simulation runs.
In particular, it is used for submitting experiments and retrieving simula-
tion results from these runs. From the API Component, the experiment
data is forwarded to the Simulation Manager, which, in turn, checks model
validity and submits models to an appropriate Simulation Engine. The
RECAP Simulation Framework currently supports two simulation engines
that address different use case requirements. First, the discrete event simu-
lator (DES), based on CloudSim, is targeted towards the simulation of
large-scale cloud-computing infrastructures, data centres, virtual machines,
and virtual machine components. It is tailored for fine-grained and detailed
simulations. On the other hand, the discrete time simulator (DTS), based
on the CloudLightning Simulator, is well suited for large-scale simulations
that need to run at speed and whose execution time is bounded.

Fig. 1.4 High-level overview on RECAP simulation framework

The primary input to a RECAP simulation is a simulation experiment
comprising instances of the application model, the infrastructure model,
the workload model, and in addition, an experiment configuration. All of
these models are represented in the very same way for both simulation
engines. Once the input has been validated by the Simulation Manager, it
has to be transformed to the simulation engine-specific format. This is
done by the Model Mapper components shown in Fig. 1.4.
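A hypothetical sketch of this dispatch path is given below; apart from the DES and DTS engine names taken from the text, the validation rule, routing criterion, and mapping are invented for illustration.

    def validate(experiment):
        """The Simulation Manager's basic completeness check."""
        required = ("application", "infrastructure", "workload", "config")
        return all(key in experiment for key in required)

    def map_for_engine(experiment, engine):
        """Placeholder model mapper: repackage the shared models for one engine."""
        models = {k: experiment[k]
                  for k in ("application", "infrastructure", "workload")}
        return {"engine": engine, "models": models}

    def submit(experiment):
        if not validate(experiment):
            raise ValueError("incomplete experiment description")
        # Assumed routing: bounded-runtime experiments go to the DTS,
        # detailed fine-grained ones to the DES.
        engine = "DTS" if experiment["config"].get("bounded_runtime") else "DES"
        return map_for_engine(experiment, engine)

    experiment = {"application": {}, "infrastructure": {}, "workload": {},
                  "config": {"bounded_runtime": True}}
    print(submit(experiment)["engine"])   # -> DTS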

1.6.4  Data Analytics and Machine Learning


The Data Analytics and Machine Learning subsystems make use of the
data collected by the Landscaper Component and the Monitoring Component.
The primary goal of this functional block is to distil statistical properties
and patterns from load traces. Previously, this activity would be under-
taken within an engineering team; however, due to the massive volume of
data involved, this can no longer be easily undertaken by humans. As such,
the Data Analytics and Machine Learning subsystem operates in a separate
processing pipeline that is decoupled from the Optimisation and the