
Preface

Artificial Intelligence: Foundations of Computational Agents, 2nd Edition

David L. Poole and Alan K. Mackworth

This modern AI textbook is published by Cambridge University Press. The complete text is
available here with permission of Cambridge University Press. Please consider buying the book. This
online version is free to view and download for personal use only. The text is not for re-distribution, re-
sale or use in derivative works. ©David L. Poole and Alan K. Mackworth 2017. Please create links to this
site rather than redistributing parts.

There are many online resources, including AIspace, with interactive tools for many of the algorithms; AIPython.org, with Python implementations of many of the algorithms; slides for teaching in class; and
online learning resources.

Preface
1 Artificial Intelligence and Agents
2 Agent Architectures and Hierarchical Control
3 Searching for Solutions
4 Reasoning with Constraints
5 Propositions and Inference
6 Planning with Certainty
7 Supervised Machine Learning
8 Reasoning with Uncertainty
9 Planning with Uncertainty
10 Learning with Uncertainty
11 Multiagent Systems
12 Learning to Act
13 Individuals and Relations
14 Ontologies and Knowledge-Based Systems
15 Relational Planning, Learning, and Probabilistic Reasoning
16 Retrospect and Prospect
A Mathematical Preliminaries and Notation
Bibliography
Index


Preface

Artificial Intelligence: Foundations of Computational Agents is a book about the science of artificial
intelligence (AI). AI is the study of the design of intelligent computational agents. The book is structured
as a textbook, but it is designed to be accessible to a wide audience.

We wrote this book because we are excited about the emergence of AI as an integrated science. As
with any science being developed, AI has a coherent, formal theory and a rambunctious experimental
wing. Here we balance theory and experiment and show how to link them together intimately. We
develop the science of AI together with its engineering applications. We believe the adage, “There is
nothing so practical as a good theory.” The spirit of our approach is captured by the dictum, “Everything
should be made as simple as possible, but not simpler.” We must build the science on solid foundations;
we present the foundations, but only sketch, and give some examples of, the complexity required to
build useful intelligent systems. Although the resulting systems will be complex, the foundations and the
building blocks should be simple.

This second edition results from extensive revision throughout the text. We have restructured the
material based on feedback from instructors who have used the book in classes. We have brought it up to
date to reflect the current state of the art, made parts that were difficult for students more
straightforward, added more intuitive explanations, and coordinated the pseudocode algorithms with new
open-source implementations of the algorithms in Python and Prolog. We have resisted the temptation to
just keep adding more material. AI research is expanding so rapidly now that the volume of potential new
text material is vast. However, research teaches us not only what works but also what does not work so
well, allowing us to be highly selective. We have included more material on machine learning techniques
that have proven successful. However, research also has trends and fashions. We have removed
techniques that have been shown to be less promising, but we distinguish them from the techniques that
are merely out of fashion. We include some currently unfashionable material if the problems attacked still
remain and the techniques have the potential to form the basis for future research and development. We
have further developed the concept of a single design space for intelligent agents, showing how many
bewilderingly diverse techniques can be seen in a simple, uniform framework. This allows us to
emphasize the principles underlying the foundations of computational agents, making those ideas more
accessible to students.

The book can be used as an introductory text on artificial intelligence for advanced undergraduate or
graduate students in computer science or related disciplines such as computer engineering, philosophy,
cognitive science, or psychology. It will appeal more to the technically minded; parts are technically
challenging, focusing on learning by doing: designing, building, and implementing systems. Any curious
scientifically oriented reader will benefit from studying the book. Previous experience with computational
systems is desirable, but prior study of the foundations upon which we build, including logic, probability,
calculus, and control theory, is not necessary, because we develop the concepts as required.

The serious student will gain valuable skills at several levels ranging from expertise in the
specification and design of intelligent agents to skills for implementing, testing, and improving real
software systems for several challenging application domains. The thrill of participating in the emergence
of a new science of intelligent agents is one of the attractions of this approach. The practical skills of
dealing with a world of ubiquitous, intelligent, embedded agents are now in great demand in the
marketplace.
The focus is on an intelligent agent acting in an environment. We start with simple agents acting in
simple, static environments and gradually increase the power of the agents to cope with more
challenging worlds. We explore ten dimensions of complexity that allow us to introduce, gradually and
with modularity, what makes building intelligent agents challenging. We have tried to structure the book
so that the reader can understand each of the dimensions separately and we make this concrete by
repeatedly illustrating the ideas with four different agent tasks: a delivery robot, a diagnostic assistant, a
tutoring system, and a trading agent.

The agent we want the student to envision is a hierarchically designed agent that acts intelligently in
a stochastic environment that it can only partially observe – one that reasons online about individuals
and relationships among them, has complex preferences, learns while acting, takes into account other
agents, and acts appropriately given its own computational limitations. Of course, we cannot start with
such an agent; it is still a research question to build such agents. So we introduce the simplest agents
and then show how to add each of these complexities in a modular way.

We have made a number of design choices which distinguish this book from competing books,
including our earlier book.

• We have tried to give a coherent framework in which to understand AI. We have chosen not to
present disconnected topics that do not fit together. For example, we do not present disconnected
logical and probabilistic views of AI, but we have presented a multidimensional design space in
which the students can understand the big picture, in which probabilistic and logical reasoning
coexist.
• We decided that it is better to clearly explain the foundations upon which more sophisticated
techniques can be built, rather than present these more sophisticated techniques. This means that
a larger gap may exist between what is covered in this book and the frontier of science. But it also
means that the student will have a better foundation to understand current and future research.
• One of the more difficult decisions we made was how to linearize the design space. Our previous
book [Poole et al., 1998] presented a relational language early and built the foundations in terms
of this language. This approach made it difficult for the students to appreciate work that was not
relational, for example, in reinforcement learning that is developed in terms of states. In this book,
we have chosen a relations-late approach. This approach probably reflects better the research
over the past few decades where there has been much progress in reasoning and learning for
feature-based representations. It also allows the student to understand that probabilistic and
logical reasoning are complementary. The book, however, is structured so that an instructor could
present relations earlier.

We provide open-source Python implementations of the algorithms (http://www.aipython.org); these are designed to be useful and to highlight the main ideas without extra frills that would interfere with those ideas. This book uses examples from AIspace.org (http://www.aispace.org), a collection of pedagogical
applets that we have been involved in designing. To gain further experience in building intelligent
systems, a student should also experiment with a high-level symbol-manipulation language, such as
Haskell, Lisp or Prolog. We also provide implementations in AILog, a clean logic programming language
related to Prolog, designed to demonstrate many of the issues in this book. These tools are intended to
be helpful, but not essential to an understanding or use of the ideas in this book.

Our approach, through the development of the power of the agent’s capabilities and representation
language, is both simpler and more powerful than the traditional approach of surveying and cataloging
various applications of AI. However, as a consequence, some applications such as the details of
computational vision or computational linguistics are not covered in this book.

We have chosen not to present an encyclopedic view of AI. Not every major idea that has been
investigated is presented here. We have chosen some basic ideas upon which other, more sophisticated,
techniques are based and have tried to explain the basic ideas in detail, sketching how these can be
expanded.
Figure 1: Overview of chapters and dependencies

Figure 1 shows the topics covered in the book. The solid lines depict prerequisites. Often the
prerequisite structure does not include all sub-topics. Given the medium of a book, we have had to
linearize the topics. However, the book is designed so the topics are teachable in any order satisfying the
prerequisite structure.

The references given at the end of each chapter are not meant to be comprehensive; we have
referenced works that we have directly used and works that we think provide good overviews of the
literature, by referencing both classic works and more recent surveys. We hope that no researchers feel
slighted by their omission, and we are happy to have feedback where someone feels that an idea has
been misattributed. Remember that this book is not a survey of AI research.

We invite you to join us in an intellectual adventure: building a science of intelligent agents.

David Poole

Alan Mackworth

Acknowledgments

Thanks to Randy Goebel for valuable input on this book. We also gratefully acknowledge the helpful
comments on the first edition and earlier drafts of the second edition received from Guy van den Broeck,
David Buchman, Giuseppe Carenini, Cristina Conati, Mark Crowley, Matthew Dirks, Bahare Fatemi,
Pooyan Fazli, Robert Holte, Holger Hoos, Manfred Jaeger, Mehran Kazemi, Mohammad Reza Khojasteh,
Jacek Kisyński, Richard Korf, Bob Kowalski, Kevin Leyton-Brown, Josje Lodder, Marian Mackworth, Gabriel
Murray, Sriraam Natarajan, Alex Poole, Alessandro Provetti, Mark Schmidt, Marco Valtorta, and the
anonymous reviewers. Thanks to the students who pointed out many errors in the earlier drafts. Thanks
to Jen Fernquist for the website design. David would like to thank Thomas Lukasiewicz and The
Leverhulme Trust for sponsoring his sabbatical in Oxford, where much of this second edition was written.
We are grateful to James Falen for permission to quote his poem on constraints.

The quote at the beginning of Chapter 9 is reprinted with permission of Simon & Schuster, Inc. from
THE CREATIVE HABIT: Learn it and Use It by Twyla Tharp with Mark Reiter. Copyright 2003 by W.A.T. Ltd.
All Rights Reserved.

Thanks to our editor Lauren Cowles and the staff at Cambridge University Press for all their support,
encouragement and help. All the mistakes remaining are ours.


Chapter 1
Artificial Intelligence and Agents

The history of AI is a history of fantasies, possibilities, demonstrations, and promise. Ever since Homer wrote of mechanical “tripods” waiting on the gods at dinner, imagined mechanical assistants have been a part of our culture. However, only in the last half century have we, the AI community, been able to build experimental machines that test hypotheses about the mechanisms of thought and intelligent behavior and thereby demonstrate mechanisms that formerly existed only as theoretical possibilities.

– Bruce Buchanan [2005]

This book is about artificial intelligence, a field built on centuries of thought, which has been a recognized
discipline for over 60 years. As Buchanan points out in the quote above, we now have the tools to test
hypotheses about the nature of thought itself, as well as to solve practical tasks. Deep scientific and
engineering problems have already been solved and many more are waiting to be solved. Many practical
applications are currently deployed and the potential exists for an almost unlimited number of future
applications. In this book, we present the principles that underlie intelligent computational agents. These
principles can help you understand current and future work in AI and equip you to contribute to the
discipline yourself.

1.1 What is Artificial Intelligence?


1.2 A Brief History of Artificial Intelligence
1.3 Agents Situated in Environments
1.4 Designing Agents
1.5 Agent Design Space
1.6 Prototypical Applications
1.7 Overview of the Book
1.8 Review
1.9 References and Further Reading
1.10 Exercises


1.1 What is Artificial Intelligence?

Artificial intelligence, or AI, is the field that studies the synthesis and analysis of computational agents
that act intelligently. Let us examine each part of this definition.

An agent is something that acts in an environment; it does something. Agents include worms, dogs,
thermostats, airplanes, robots, humans, companies, and countries.

We are interested in what an agent does; that is, how it acts. We judge an agent by its actions.

An agent acts intelligently when

• what it does is appropriate for its circumstances and its goals, taking into account the short-term
and long-term consequences of its actions
• it is flexible to changing environments and changing goals
• it learns from experience
• it makes appropriate choices given its perceptual and computational limitations

A computational agent is an agent whose decisions about its actions can be explained in terms of
computation. That is, the decision can be broken down into primitive operations that can be implemented
in a physical device. This computation can take many forms. In humans this computation is carried out in
“wetware”; in computers it is carried out in “hardware.” Although there are some agents that are
arguably not computational, such as the wind and rain eroding a landscape, it is an open question
whether all intelligent agents are computational.

All agents are limited. No agents are omniscient or omnipotent. Agents can only observe everything
about the world in very specialized domains, where “the world” is very constrained. Agents have finite
memory. Agents in the real world do not have unlimited time to act.

The central scientific goal of AI is to understand the principles that make intelligent behavior
possible in natural or artificial systems. This is done by

• the analysis of natural and artificial agents


• formulating and testing hypotheses about what it takes to construct intelligent agents and
• designing, building, and experimenting with computational systems that perform tasks
commonly viewed as requiring intelligence.

As part of science, researchers build empirical systems to test hypotheses or to explore the space of
possible designs. These are quite distinct from applications that are built to be useful for an application
domain.

The definition is not for intelligent thought alone. We are only interested in thinking intelligently
insofar as it leads to more intelligent behavior. The role of thought is to affect action.

The central engineering goal of AI is the design and synthesis of useful, intelligent artifacts. We
actually want to build agents that act intelligently. Such agents are useful in many applications.


1.1.1 Artificial and Natural Intelligence

Artificial intelligence (AI) is the established name for the field, but the term “artificial intelligence” is a
source of much confusion because artificial intelligence may be interpreted as the opposite of real
intelligence.

For any phenomenon, you can distinguish real versus fake, where the fake is non-real. You can also
distinguish natural versus artificial. Natural means occurring in nature and artificial means made by
people.

Example 1.1. A tsunami is a large wave in an ocean. Natural tsunamis occur from time to time and are
caused by earthquakes or landslides. You could imagine an artificial tsunami that was made by people, for
example, by exploding a bomb in the ocean, yet which is still a real tsunami. One could also imagine fake
tsunamis: either artificial, using computer graphics, or natural, for example, a mirage that looks like a
tsunami but is not one.

It is arguable that intelligence is different: you cannot have fake intelligence. If an agent behaves
intelligently, it is intelligent. It is only the external behavior that defines intelligence; acting intelligently is
being intelligent. Thus, artificial intelligence, if and when it is achieved, will be real intelligence created
artificially.

This idea of intelligence being defined by external behavior was the motivation for a test for
intelligence designed by Turing [1950], which has become known as the Turing test. The Turing test
consists of an imitation game where an interrogator can ask a witness, via a text interface, any question.
If the interrogator cannot distinguish the witness from a human, the witness must be intelligent. Figure
1.1 shows a possible dialog that Turing suggested. An agent that is not really intelligent could not fake
intelligence for arbitrary topics.

Interrogator:
In the first line of your sonnet which reads “Shall I compare thee to a summer’s
day,” would not “a spring day” do as well or better?
Witness:
It wouldn’t scan.
Interrogator:
How about “a winter’s day”? That would scan all right.
Witness:
Yes, but nobody wants to be compared to a winter’s day.
Interrogator:
Would you say Mr. Pickwick reminded you of Christmas?
Witness:
In a way.
Interrogator:
Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the
comparison.
Witness:
I don’t think you’re serious. By a winter’s day one means a typical winter’s day,
rather than a special one like Christmas.

Figure 1.1: Part of Turing’s possible dialog for the Turing test
There has been much debate about the usefulness of the Turing test. Unfortunately, although it may
provide a test for how to recognize intelligence, it does not provide a way to realize intelligence.

Recently Levesque [2014] suggested a new form of question, which he called a Winograd schema
after the following example of Winograd [1972]:

• The city councilmen refused the demonstrators a permit because they feared violence. Who
feared violence?
• The city councilmen refused the demonstrators a permit because they advocated violence. Who
advocated violence?

These two sentences differ in only one word, feared/advocated, but have opposite answers. Answering
such a question does not depend on trickery or lying, but depends on knowing something about the world
that humans understand, but computers currently do not.

Winograd schemas have the property that (a) humans can easily disambiguate them and (b) there is
no simple grammatical or statistical test that could disambiguate them. For example, the sentences
above would not qualify if “demonstrators feared violence” was much less or more likely than
“councilmen feared violence” (or similarly with advocating).

Example 1.2. The following examples are due to Davis [2015]:


• Steve follows Fred’s example in everything. He [admires/influences] him hugely. Who
[admires/influences] whom?
• The table won’t fit through the doorway because it is too [wide/narrow]. What is too
[wide/narrow]?
• Grace was happy to trade me her sweater for my jacket. She thinks it looks [great/dowdy] on
her. What looks [great/dowdy] on Grace?
• Bill thinks that calling attention to himself was rude [to/of] Bert. Who called attention to himself?
Each of these has its own reason why one answer is preferred to the other. A computer that can reliably answer these questions needs to know about all of these reasons, and requires the ability to do commonsense reasoning.
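To make the schema property concrete, here is a small illustrative Python sketch (not from the book): each schema is stored with its two alternative words and the answer each one selects, so swapping a single word flips the answer. The data structure and the way the examples are encoded are hypothetical choices for illustration.

```
# Illustrative sketch only: Winograd schemas represented as data, showing
# that changing one word in the sentence changes the correct answer.
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    template: str   # sentence with a {slot} for the alternating word
    question: str   # question with the same {slot}
    answers: dict   # maps each alternative word to the correct answer

schemas = [
    WinogradSchema(
        template="The city councilmen refused the demonstrators a permit "
                 "because they {slot} violence.",
        question="Who {slot} violence?",
        answers={"feared": "the city councilmen",
                 "advocated": "the demonstrators"},
    ),
    WinogradSchema(
        template="The table won't fit through the doorway because it is too {slot}.",
        question="What is too {slot}?",
        answers={"wide": "the table", "narrow": "the doorway"},
    ),
]

for s in schemas:
    for word, answer in s.answers.items():
        print(s.template.format(slot=word))
        print("  ", s.question.format(slot=word), "->", answer)
```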

The obvious naturally intelligent agent is the human being. Some people might say that worms,
insects, or bacteria are intelligent, but more people would say that dogs, whales, or monkeys are
intelligent (see Exercise 1). One class of intelligent agents that may be more intelligent than humans is
the class of organizations. Ant colonies are a prototypical example of organizations. Each individual ant
may not be very intelligent, but an ant colony can act more intelligently than any individual ant. The
colony can discover food and exploit it very effectively as well as adapt to changing circumstances.
Corporations can be more intelligent than individual people. Companies develop, manufacture, and
distribute products where the sum of the skills required is much more than any individual could master.
Modern computers, from low-level hardware to high-level software, are more complicated than any
human can understand, yet they are manufactured daily by organizations of humans. Human society
viewed as an agent is arguably the most intelligent agent known.

It is instructive to consider where human intelligence comes from. There are three main sources:

Biology
Humans have evolved into adaptable animals that can survive in various habitats.
Culture
Culture provides not only language, but also useful tools, useful concepts, and the
wisdom that is passed from parents and teachers to children.
Lifelong learning
Humans learn throughout their life and accumulate knowledge and skills.

These sources interact in complex ways. Biological evolution has provided stages of growth that allow for
different learning at different stages of life. Biology and culture have evolved together; humans can be
helpless at birth presumably because of our culture of looking after infants. Culture interacts strongly
with learning. A major part of lifelong learning is what people are taught by parents and teachers.
Language, which is part of culture, provides distinctions in the world that are useful for learning.

When building an intelligent system, the designers have to decide which of these sources of
intelligence need to be programmed in, and which can be learned. It is very unlikely we will be able to
build an agent that starts with a clean slate and learns everything. Similarly, most interesting and useful
intelligent agents learn to improve their behavior.


1.2 A Brief History of Artificial Intelligence

Throughout human history, people have used technology to model themselves. There is evidence of this
from ancient China, Egypt, and Greece, bearing witness to the universality of this activity. Each new
technology has, in its turn, been exploited to build intelligent agents or models of mind. Clockwork,
hydraulics, telephone switching systems, holograms, analog computers, and digital computers have all
been proposed both as technological metaphors for intelligence and as mechanisms for modeling mind.

About 400 years ago people started to write about the nature of thought and reason. Hobbes (1588–
1679), who has been described by Haugeland [1985, p. 85] as the “Grandfather of AI,” espoused the
position that thinking was symbolic reasoning like talking out loud or working out an answer with pen and
paper. The idea of symbolic reasoning was further developed by Descartes (1596–1650), Pascal (1623–
1662), Spinoza (1632–1677), Leibniz (1646–1716), and others who were pioneers in the philosophy of
mind.

The idea of symbolic operations became more concrete with the development of computers. Babbage
(1792–1871) designed the first general-purpose computer, the Analytical Engine, but it was not built
until 1991 at the Science Museum of London. In the early part of the twentieth century, there was much
work done on understanding computation. Several models of computation were proposed, including the
Turing machine by Alan Turing (1912–1954), a theoretical machine that writes symbols on an infinitely
long tape, and the lambda calculus of Church (1903–1995), which is a mathematical formalism for
rewriting formulas. It can be shown that these very different formalisms are equivalent in that any
function computable by one is computable by the others. This leads to the Church–Turing thesis:

Any effectively computable function can be carried out on a Turing machine (and so also in
the lambda calculus or any of the other equivalent formalisms).

Here effectively computable means following well-defined operations; “computers” in Turing’s day
were people who followed well-defined steps and computers as we know them today did not exist. This
thesis says that all computation can be carried out on a Turing machine or one of the other equivalent
computational machines. The Church–Turing thesis cannot be proved but it is a hypothesis that has stood
the test of time. No one has built a machine that has carried out computation that cannot be computed
by a Turing machine. There is no evidence that people can compute functions that are not Turing
computable. An agent’s actions are a function of its abilities, its history, and its goals or preferences. This
provides an argument that computation is more than just a metaphor for intelligence; reasoning is
computation and computation can be carried out by a computer.
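To illustrate what “computation on a Turing machine” means, here is a minimal simulator sketch in Python. It is not from the book; the sparse-tape encoding and the example transition table (a unary successor machine) are hypothetical choices made for brevity.

```
# Minimal Turing machine simulator (illustrative sketch, not from the book).
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), 0 (stay), or +1 (right). Halts on state 'halt'."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A hypothetical example machine: successor in unary (n ones -> n+1 ones).
# Scan right over the 1s; on reaching a blank, write one more 1 and halt.
succ = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", 0),
}

print(run_turing_machine(succ, "111"))   # prints 1111
```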

Once real computers were built, some of the first applications of computers were AI programs. For
example, Samuel [1959] built a checkers program in 1952 and implemented a program that learns to
play checkers in the late 1950s. His program beat the Connecticut state checkers champion in 1961.
Wang [1960] implemented a program that proved every logic theorem (nearly 400) in Principia
Mathematica [Whitehead and Russell, 1910]. Newell and Simon [1956] built a program, Logic Theorist,
that discovers proofs in propositional logic.

In addition to work on high-level symbolic reasoning, there was also much work on low-level learning
inspired by how neurons work. McCulloch and Pitts [1943] showed how a simple thresholding “formal
neuron” could be the basis for a Turing-complete machine. The first learning for these neural networks
was described by Minsky [1952]. One of the early significant works was the perceptron of Rosenblatt
[1958]. The work on neural networks went into decline for a number of years after the 1968 book by
Minsky and Papert [1988], which argued that the representations learned were inadequate for intelligent
action.

The early programs concentrated on learning and search as the foundations of the field. It became
apparent early that one of the main tasks was how to represent the knowledge required for intelligent
action. Before learning, an agent must have an appropriate target language for the learned knowledge.
There have been many proposals for representations from simple features to neural networks to the
complex logical representations of McCarthy and Hayes [1969] and many in between, such as the frames
of Minsky [1975].

During the 1960s and 1970s, natural language understanding systems were developed for limited
domains. For example, the STUDENT program of Daniel Bobrow [1967] could solve high school algebra
tasks expressed in natural language. Winograd’s [1972] SHRDLU system could, using restricted natural
language, discuss and carry out tasks in a simulated blocks world. CHAT-80 [Warren and Pereira, 1982]
could answer geographical questions posed to it in natural language. Figure 1.2 shows some questions
that CHAT-80 answered based on a database of facts about countries, rivers, and so on. All of these
systems could only reason in very limited domains using restricted vocabulary and sentence structure.
Interestingly, IBM’s Watson, which beat the world champion in the TV game show Jeopardy!, used a
similar technique to CHAT-80 [Lally et al., 2012]; see Section 13.6.

Does Afghanistan border China?
What is the capital of Upper_Volta?
Which country’s capital is London?
Which is the largest African country?
How large is the smallest American country?
What is the ocean that borders African countries and that borders Asian countries?
What are the capitals of the countries bordering the Baltic?
How many countries does the Danube flow through?
What is the total area of countries south of the Equator and not in Australasia?
What is the average area of the countries in each continent?
Is there more than one country in each continent?
What are the countries from which a river flows into the Black_Sea?
What are the continents no country in which contains more than two cities whose
population exceeds 1 million?
Which country bordering the Mediterranean borders a country that is bordered by a
country whose population exceeds the population of India?
Which countries with a population exceeding 10 million border the Atlantic?

Figure 1.2: Some questions CHAT-80 could answer
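The following Python sketch is illustrative only: CHAT-80 itself was a Prolog program that parsed full natural-language questions, but once a question has been parsed, answering it reduces to querying relations over a database of geographical facts, roughly as below. The tiny fact base and helper functions are hypothetical.

```
# Illustrative sketch: a toy fact base over which parsed questions become queries.
borders = {("afghanistan", "china"), ("afghanistan", "iran"),
           ("china", "india"), ("france", "spain")}
capital = {"upper_volta": "ouagadougou", "united_kingdom": "london"}

def does_border(a, b):
    """Does country a border country b? (borders is symmetric)"""
    return (a, b) in borders or (b, a) in borders

def countries_with_capital(city):
    """Which countries have the given capital?"""
    return [c for c, cap in capital.items() if cap == city]

print(does_border("afghanistan", "china"))   # Does Afghanistan border China? -> True
print(countries_with_capital("london"))      # Which country's capital is London?
```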

During the 1970s and 1980s, there was a large body of work on expert systems, where the aim was
to capture the knowledge of an expert in some domain so that a computer could carry out expert tasks.
For example, DENDRAL [Buchanan and Feigenbaum, 1978], developed from 1965 to 1983 in the field of
organic chemistry, proposed plausible structures for new organic compounds. MYCIN [Buchanan and
Shortliffe, 1984], developed from 1972 to 1980, diagnosed infectious diseases of the blood, prescribed
antimicrobial therapy, and explained its reasoning. The 1970s and 1980s were also a period when AI
reasoning became widespread in languages such as Prolog [Colmerauer and Roussel, 1996; Kowalski,
1988].

During the 1990s and the 2000s there was great growth in the subdisciplines of AI such as
perception, probabilistic and decision-theoretic reasoning, planning, embodied systems, machine
learning, and many other fields. There has also been much progress on the foundations of the field; these
form the framework of this book.


1.2.1 Relationship to Other Disciplines

AI is a very young discipline. Other disciplines as diverse as philosophy, neurobiology, evolutionary biology, psychology, economics, political science, sociology, anthropology, control engineering, statistics, and many more have been studying aspects of intelligence much longer.

The science of AI could be described as “synthetic psychology,” “experimental philosophy,” or “computational epistemology” – epistemology is the study of knowledge. AI can be seen as a way to
study the nature of knowledge and intelligence, but with a more powerful experimental tool than was
previously available. Instead of being able to observe only the external behavior of intelligent systems, as
philosophy, psychology, economics, and sociology have traditionally been able to do, AI researchers
experiment with executable models of intelligent behavior. Most important, such models are open to
inspection, redesign, and experiment in a complete and rigorous way. Modern computers provide a way
to construct the models about which philosophers have only been able to theorize. AI researchers can
experiment with these models as opposed to just discussing their abstract properties. AI theories can be
empirically grounded in implementations. Moreover, we are often surprised when simple agents exhibit
complex behavior. We would not have known this without implementing the agents.

It is instructive to consider an analogy between the development of flying machines over the past
few centuries and the development of thinking machines over the past few decades. There are several
ways to understand flying. One is to dissect known flying animals and hypothesize their common
structural features as necessary fundamental characteristics of any flying agent. With this method, an
examination of birds, bats, and insects would suggest that flying involves the flapping of wings made of
some structure covered with feathers or a membrane. Furthermore, the hypothesis could be tested by
strapping feathers to one’s arms, flapping, and jumping into the air, as Icarus did. An alternative
methodology is to try to understand the principles of flying without restricting oneself to the natural
occurrences of flying. This typically involves the construction of artifacts that embody the hypothesized
principles, even if they do not behave like flying animals in any way except flying. This second method
has provided both useful tools – airplanes – and a better understanding of the principles underlying flying,
namely aerodynamics.

AI takes an approach analogous to that of aerodynamics. AI researchers are interested in testing general hypotheses about the nature of intelligence by building machines that are intelligent and that do not necessarily mimic humans or organizations. This also offers an approach to the question, “Can
not necessarily mimic humans or organizations. This also offers an approach to the question, “Can
computers really think?” by considering the analogous question, “Can airplanes really fly?”

AI is intimately linked with the discipline of computer science because the study of computation is
central to AI. It is essential to understand algorithms, data structures, and combinatorial complexity to
build intelligent machines. It is also surprising how much of computer science started as a spinoff from AI,
from timesharing to computer algebra systems.

Finally, AI can be seen as coming under the umbrella of cognitive science. Cognitive science links
various disciplines that study cognition and reasoning, from psychology to linguistics to anthropology to
neuroscience. AI distinguishes itself within cognitive science by providing tools to build intelligence rather
than just studying the external behavior of intelligent agents or dissecting the inner workings of
intelligent systems.


1.3 Agents Situated in Environments

AI is about practical reasoning: reasoning in order to do something. A coupling of perception, reasoning, and acting comprises an agent. An agent acts in an environment. An agent’s environment may well include other agents. An agent together with its environment is called a world.

An agent could be, for example, a coupling of a computational engine with physical sensors and
actuators, called a robot, where the environment is a physical setting. It could be the coupling of an
advice-giving computer, an expert system, with a human who provides perceptual information and
carries out the task. An agent could be a program that acts in a purely computational environment, a
software agent.

Figure 1.3: An agent interacting with an environment

Figure 1.3 shows a black box view of an agent in terms of its inputs and outputs. At any time, what
an agent does depends on:

• prior knowledge about the agent and the environment


• history of interaction with the environment, which is composed of
– stimuli received from the current environment, which can include observations
about the environment, as well as actions that the environment imposes on the agent
and
– past experiences of previous actions and stimuli, or other data, from which it can
learn
• goals that it must try to achieve or preferences over states of the world
• abilities, the primitive actions the agent is capable of carrying out.

Inside the black box, an agent has some internal belief state that can encode beliefs about its
environment, what it has learned, what it is trying to do, and what it intends to do. An agent updates this
internal state based on stimuli. It uses the belief state and stimuli to decide on its actions. Much of this
book is about what is inside this black box.
This is an all-encompassing view of intelligent agents varying in complexity from a simple
thermostat, to a diagnostic advising system whose perceptions and actions are mediated by human
beings, to a team of mobile robots, to society itself.

Purposive agents have preferences or goals. They prefer some states of the world to other states,
and they act to try to achieve the states they prefer most. The non-purposive agents are grouped
together and called nature. Whether or not an agent is purposive is a modeling assumption that may, or
may not, be appropriate. For example, for some applications it may be appropriate to model a dog as
purposive, and for others it may suffice to model a dog as non-purposive.

If an agent does not have preferences, by definition it does not care what world state it ends up in,
and so it does not matter to it what it does. The reason to design an agent is to instill preferences in it –
to make it prefer some world states and try to achieve them. An agent does not have to know its
preferences explicitly. For example, a thermostat is an agent that senses the world and turns a heater
either on or off. There are preferences embedded in the thermostat, such as to keep the occupants of a
room at a pleasant temperature, even though the thermostat arguably does not know these are its
preferences. The preferences of an agent are often the preferences of the designer of the agent, but
sometimes an agent can acquire goals and preferences at run time.
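A minimal sketch (not the book’s or AIspace’s code) of a thermostat seen this way: the stimuli are temperature readings, the actions turn the heater on or off, and the designer’s preference for a comfortable room is embedded in the thresholds rather than represented explicitly. The class name and threshold values are hypothetical.

```
# Illustrative sketch: a thermostat as a minimal agent.
class ThermostatAgent:
    def __init__(self, low=18.0, high=22.0):
        self.low = low            # prior knowledge chosen by the designer
        self.high = high          # (an embedded preference, not an explicit one)
        self.heater_on = False    # internal belief/state

    def select_action(self, observed_temperature):
        """Map the current stimulus (an observation) to an action."""
        if observed_temperature < self.low:
            self.heater_on = True
        elif observed_temperature > self.high:
            self.heater_on = False
        return "heat" if self.heater_on else "off"

agent = ThermostatAgent()
for temp in [17.0, 19.5, 23.0, 21.0]:
    print(temp, "->", agent.select_action(temp))
```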


1.4 Designing Agents

Artificial agents are designed for particular tasks. Researchers have not yet got to the stage of designing
an agent for the task of surviving and reproducing in a natural environment.

1.4.1 Design Time, Offline and Online Computation


1.4.2 Tasks
1.4.3 Defining a Solution
1.4.4 Representations


1.4.1 Design Time, Offline and Online Computation

In deciding what an agent will do, there are three aspects of computation that must be distinguished: (1)
the computation that goes into the design of the agent, (2) the computation that the agent can do before
it observes the world and needs to act, and (3) the computation that is done by the agent as it is acting.

• Design time computation is the computation that is carried out to design the agent. It is
carried out by the designer of the agent, not the agent itself.
• Offline computation is the computation done by the agent before it has to act. It can include
compilation and learning. Offline, an agent can take background knowledge and data and compile
them into a usable form called a knowledge base. Background knowledge can be given either
at design time or offline.
• Online computation is the computation done by the agent between observing the
environment and acting in the environment. A piece of information obtained online is called an
observation. An agent typically must use its knowledge base, its beliefs and its observations to
determine what to do next.
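A rough sketch of how the offline/online distinction might look in code, assuming a hypothetical agent whose knowledge base is just a compiled lookup table: the design-time choices are made by the designer, compile_knowledge_base runs offline, and act runs online with each observation. The function names and data are illustrative, not from the book.

```
# Illustrative sketch of the offline/online distinction.
def compile_knowledge_base(background_knowledge, data):
    """Offline computation: compile background knowledge and data
    into a usable form (here, a simple observation -> action table)."""
    kb = dict(background_knowledge)
    for observation, best_action in data:
        kb[observation] = best_action
    return kb

def act(kb, observation, default_action="wait"):
    """Online computation: a cheap per-step decision using the compiled KB
    and the current observation."""
    return kb.get(observation, default_action)

# Design-time choices (made by the designer, not by the agent):
background = {"obstacle_ahead": "stop"}
experience = [("corridor_clear", "go_forward"), ("at_door", "open_door")]

kb = compile_knowledge_base(background, experience)   # offline
print(act(kb, "at_door"))                             # online -> open_door
print(act(kb, "unknown_situation"))                   # online -> wait
```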

It is important to distinguish between the knowledge in the mind of the designer and the knowledge
in the mind of the agent. Consider the extreme cases:

• At one extreme is a highly specialized agent that works well in the environment for which it was
designed, but is helpless outside of this niche. The designer may have done considerable work in
building the agent, but the agent may not need to do very much to operate well. An example is a
thermostat. It may be difficult to design a thermostat so that it turns on and off at exactly the right
temperatures, but the thermostat itself does not have to do much computation. Another example
is a car painting robot that always paints the same parts in an automobile factory. There may be
much design time or offline computation to get it to work perfectly, but the painting robot can
paint parts with little online computation; it senses that there is a part in position, but then it
carries out its predefined actions. These very specialized agents do not adapt well to different
environments or to changing goals. The painting robot would not notice if a different sort of part
were present and, even if it did, it would not know what to do with it. It would have to be
redesigned or reprogrammed to paint different parts or to change into a sanding machine or a dog
washing machine.
• At the other extreme is a very flexible agent that can survive in arbitrary environments and
accept new tasks at run time. Simple biological agents such as insects can adapt to complex
changing environments, but they cannot carry out arbitrary tasks. Designing an agent that can
adapt to complex environments and changing goals is a major challenge. The agent will know
much more about the particulars of a situation than the designer. Even biology has not produced
many such agents. Humans may be the only extant example, but even humans need time to
adapt to new environments.

Even if the flexible agent is our ultimate dream, researchers have to reach this goal via more mundane
goals. Rather than building a universal agent, which can adapt to any environment and solve any task,
they have built particular agents for particular environmental niches. The designer can exploit the
structure of the particular niche and the agent does not have to reason about other possibilities.

Two broad strategies have been pursued in building agents:

• The first is to simplify environments and build complex reasoning systems for these simple
environments. For example, factory robots can do sophisticated tasks in the engineered
environment of a factory, but they may be hopeless in a natural environment. Much of the
complexity of the task can be reduced by simplifying the environment. This is also important for
building practical systems because many environments can be engineered to make them simpler
for agents.
• The second strategy is to build simple agents in natural environments. This is inspired by seeing
how insects can survive in complex environments even though they have very limited reasoning
abilities. Researchers then make the agents have more reasoning abilities as their tasks become
more complicated.

One of the advantages of simplifying environments is that it may enable us to prove properties of agents
or to optimize agents for particular situations. Proving properties or optimization typically requires a
model of the agent and its environment. The agent may do a little or a lot of reasoning, but an observer
or designer of the agent may be able to reason about the agent and the environment. For example, the
designer may be able to prove whether the agent can achieve a goal, whether it can avoid getting into
situations that may be bad for the agent (safety), whether it will get stuck somewhere (liveness), or
whether it will eventually get around to each of the things it should do (fairness). Of course, the proof is
only as good as the model.

The advantage of building agents for complex environments is that these are the types of
environments in which humans live and where we want our agents to be.

Even natural environments can be abstracted into simpler environments. For example, for an
autonomous car driving on public roads the environment can be conceptually simplified so that
everything is either a road, another car or something to be avoided. Although autonomous cars have
sophisticated sensors, they only have limited actions available, namely steering, accelerating and
braking.

Fortunately, research along both lines, and between these extremes, is being carried out. In the first
case, researchers start with simple environments and make the environments more complex. In the
second case, researchers increase the complexity of the behaviors that the agents can carry out.


1.4.2 Tasks

One way that AI representations differ from computer programs in traditional languages is that an AI
representation typically specifies what needs to be computed, not how it is to be computed. We might
specify that the agent should find the most likely disease a patient has, or specify that a robot should get
coffee, but not give detailed instructions on how to do these things. Much AI reasoning involves searching
through the space of possibilities to determine how to complete a task.

Typically, a task is only given informally, such as “deliver parcels promptly when they arrive” or “fix
whatever is wrong with the electrical system of the house.”
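As a toy illustration of specifying what rather than how: the task below is given only as a goal condition, and a generic search procedure works out how to achieve it. The state space, action names, and goal are hypothetical, loosely inspired by the delivery-robot example.

```
# Illustrative sketch: declarative goal + generic search.
from collections import deque

actions = {                       # state -> {action: next_state}
    "office":   {"go_to_hall": "hall"},
    "hall":     {"go_to_mailroom": "mailroom", "go_to_lab": "lab"},
    "mailroom": {"pick_up_parcel": "holding_parcel"},
    "lab": {},
    "holding_parcel": {},
}

def goal(state):                  # WHAT: the robot should end up holding the parcel
    return state == "holding_parcel"

def search(start):                # HOW: found by a generic breadth-first search
    frontier = deque([(start, [])])
    visited = set()
    while frontier:
        state, plan = frontier.popleft()
        if goal(state):
            return plan
        if state in visited:
            continue
        visited.add(state)
        for action, nxt in actions[state].items():
            frontier.append((nxt, plan + [action]))
    return None

print(search("office"))   # ['go_to_hall', 'go_to_mailroom', 'pick_up_parcel']
```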

Figure 1.4: The role of representations in solving tasks

The general framework for solving tasks by computer is given in Figure 1.4. To solve a task, the
designer of a system must:

• determine what constitutes a solution


• represent the task in a way a computer can reason about
• use the computer to compute an output, which is answers presented to a user or actions to be
carried out in the environment, and
• interpret the output as a solution to the task.

Knowledge is the information about a domain that can be used to solve tasks in that domain. To
solve many tasks requires much knowledge, and this knowledge must be represented in the computer.
As part of designing a program to solve tasks, we must define how the knowledge will be represented. A
representation language is used to express the knowledge that is used in an agent. A
representation of some piece of knowledge is the particular data structures used to encode the
knowledge so it can be reasoned with. A knowledge base is the representation of all of the knowledge
that is stored by an agent.

A good representation language is a compromise among many competing objectives. A representation should be:

• rich enough to express the knowledge needed to solve the task.


• as close to a natural specification of the task as possible; it should be compact, natural, and
maintainable. It should be easy to see the relationship between the representation and the
domain being represented, so that it is easy to determine whether the knowledge represented is
correct. A small change in the task should result in a small change in the representation of the
task.
• amenable to efficient computation, or tractable, which means that the agent can act quickly
enough. To ensure this, representations exploit features of the task for computational gain and
trade off accuracy and computation time.
• able to be acquired from people, data and past experiences.

Many different representation languages have been designed. Many of them start with some of these
objectives and are then expanded to include the others. For example, some are designed for
learning, perhaps inspired by neurons, and then expanded to allow richer task solving and inference
abilities. Some representation languages are designed with expressiveness in mind, and then inference
and learning are added on. Some language designers focus on tractability first and then enhance richness,
naturalness, and the ability to be acquired.


1.4.3 Defining a Solution

Given an informal description of a task, before even considering a computer, an agent designer should
determine what would constitute a solution. This question arises not only in AI but in any software design.
Much of software engineering involves refining the specification of the task.

Tasks are typically not well specified. Not only is there usually much left unspecified, but also the
unspecified parts cannot be filled in arbitrarily. For example, if a user asks a trading agent to find out all
the information about resorts that may have unsanitary food practices, they do not want the agent to
return all the information about all resorts, even though all of the information requested is in the result.
However, if the trading agent does not have complete knowledge about the resorts, returning all of the
information may be the only way for it to guarantee that all of the requested information is there.
Similarly, one does not want a delivery robot, when asked to take all of the trash to the garbage can, to
take everything to the garbage can, even though this may be the only way to guarantee that all of the
trash has been taken. Much work in AI is motivated by commonsense reasoning; we want the
computer to be able to reach commonsense conclusions about the unstated assumptions.

Given a well-defined task, the next issue is whether it matters if the answer returned is incorrect or
incomplete. For example, if the specification asks for all instances, does it matter if some are missing?
Does it matter if there are some extra instances? Often a person does not want just any solution but the
best solution according to some criteria. There are four common classes of solutions:

Optimal solution
An optimal solution to a task is one that is the best solution according to some
measure of solution quality. For example, a robot may need to take out as much trash as
possible; the more trash it can take out, the better. In a more complex example, you may
want the delivery robot to take as much of the trash as possible to the garbage can,
minimizing the distance traveled, and explicitly specify a trade-off between the effort
required and the proportion of the trash taken out. There are also costs associated with
making mistakes and throwing out items that are not trash. It may be better to miss some
trash than to waste too much time. One general measure of desirability, known as utility,
is used in decision theory.
Satisficing solution
Often an agent does not need the best solution to a task but just needs some
solution. A satisficing solution is one that is good enough according to some description
of which solutions are adequate. For example, a person may tell a robot that it must take
all of the trash out, or tell it to take out three items of trash.
Approximately optimal solution
One of the advantages of a cardinal measure of success is that it
allows for approximations. An approximately optimal solution is one whose measure
of quality is close to the best that could theoretically be obtained. Typically, agents do not
need optimal solutions to tasks; they only need to get close enough. For example, the
robot may not need to travel the optimal distance to take out the trash but may only need
to be within, say, 10% of the optimal distance. Some approximation algorithms
guarantee that a solution is within some range of optimal, but for some algorithms no
guarantees are available.

For some tasks, it is much easier computationally to get an approximately optimal
solution than to get an optimal solution. However, for other tasks, it is just as difficult to
find an approximately optimal solution that is guaranteed to be within some bounds of
optimal as it is to find an optimal solution.
Probable solution
A probable solution is one that, even though it may not actually be a solution
to the task, is likely to be a solution. This is one way to approximate, in a precise manner,
a satisficing solution. For example, in the case where the delivery robot could drop the
trash or fail to pick it up when it attempts to, you may need the robot to be 80% sure that
it has picked up three items of trash. Often you want to distinguish the false-positive
error rate (the proportion of the answers given by the computer that are not correct)
from the false-negative error rate (the proportion of those answers not given by the
computer that are indeed correct). Some applications are much more tolerant of one of
these types of errors than the other.

These categories are not exclusive. A form of learning known as probably approximately correct (PAC)
learning considers probably learning an approximately correct concept (Section 7.8.2).
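
A small sketch may help separate the four classes. The plan qualities, the satisficing threshold, and the answer sets below are made-up numbers chosen only for illustration.

# Candidate plans for taking out the trash, each scored by some utility
# (say, trash collected minus effort).
plans = {"plan_a": 7.0, "plan_b": 9.5, "plan_c": 9.0, "plan_d": 4.0}

# Optimal solution: the plan with the best measure of quality.
optimal = max(plans, key=plans.get)                             # "plan_b"

# Satisficing solution: any plan that is good enough (here, utility >= 6).
satisficing = next(p for p, q in plans.items() if q >= 6.0)     # "plan_a"

# Approximately optimal solution: within 10% of the best achievable quality.
best_quality = plans[optimal]
approximately_optimal = [p for p, q in plans.items() if q >= 0.9 * best_quality]  # plan_b, plan_c

# Probable solutions are judged by error rates over the answers returned.
returned = {"item1", "item2", "item3", "item4"}        # answers the agent gave
correct = {"item2", "item3", "item4", "item5"}         # answers that are actually correct
false_positive_rate = len(returned - correct) / len(returned)   # 0.25: returned but wrong
false_negative_rate = len(correct - returned) / len(correct)    # 0.25: correct but not returned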


1.4.4 Representations

Once you have some requirements on the nature of a solution, you must represent the task so a
computer can solve it.

Computers and human minds are examples of physical symbol systems. A symbol is a meaningful
pattern that can be manipulated. Examples of symbols are written words, sentences, gestures, marks on
paper, or sequences of bits. A symbol system creates, copies, modifies, and destroys symbols.
Essentially, a symbol is one of the patterns manipulated as a unit by a symbol system. The term physical
is used, because symbols in a physical symbol system are physical objects that are part of the real world,
even though they may be internal to computers and brains. They may also need to physically affect
action or motor control.

Much of AI rests on the physical symbol system hypothesis of Newell and Simon [1976]:

A physical symbol system has the necessary and sufficient means for general intelligent
action.

This is a strong hypothesis. It means that any intelligent agent is necessarily a physical symbol system. It
also means that a physical symbol system is all that is needed for intelligent action; there is no magic or
an as-yet-to-be-discovered quantum phenomenon required. It does not imply that a physical symbol
system does not need a body to sense and act in the world. There is some debate as to whether hidden
variables that have not been assigned a meaning but are useful can be considered symbols. The
physical symbol system hypothesis is an empirical hypothesis that, like other scientific hypotheses, is to
be judged by how well it fits the evidence, and by what alternative hypotheses exist. Indeed, it could be
false.

An intelligent agent can be seen as manipulating symbols to produce action. Many of these symbols
are used to refer to things in the world. Other symbols may be useful concepts that may or may not have
external meaning. Yet other symbols may refer to internal states of the agent.

An agent can use a physical symbol system to model the world. A model of a world is a
representation of an agent’s beliefs about what is true in the world or how the world changes. The world
does not have to be modeled at the most detailed level to be useful. All models are abstractions; they
represent only part of the world and leave out many of the details. An agent can have a very simplistic
model of the world, or it can have a very detailed model of the world. The level of abstraction provides
a partial ordering of abstraction. A lower-level abstraction includes more details than a higher-level
abstraction. An agent can have multiple, even contradictory, models of the world. Models are judged not
by whether they are correct, but by whether they are useful.

Example 1.3. A delivery robot can model the environment at a high level of abstraction in terms of
rooms, corridors, doors, and obstacles, ignoring distances, its size, the steering angles needed, the slippage
of the wheels, the weight of parcels, the details of obstacles, the political situation in Canada, and virtually
everything else. The robot could model the environment at lower levels of abstraction by taking some of
these details into account. Some of these details may be irrelevant for the successful implementation of the
robot, but some may be crucial for the robot to succeed. For example, in some situations the size of the
robot and the steering angles may be crucial for not getting stuck around a particular corner. In other
situations, if the robot stays close to the center of the corridor, it may not need to model its width or the
steering angles.
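
To illustrate Example 1.3 in code, the same corridor can be modeled at two levels of abstraction. The particular data structures, dimensions, and obstacle below are assumptions made for this sketch rather than anything prescribed by the book.

# High-level model: topology only -- rooms, a corridor, and the bare fact that
# there is an obstacle somewhere in the corridor.
high_level = {
    "rooms": ["r101", "r117", "mailroom"],
    "connected": [("r101", "corridor"), ("r117", "corridor"), ("mailroom", "corridor")],
    "obstacle_in": ["corridor"],
}

# Lower-level model of the same corridor: geometry the high-level model ignores.
low_level = {
    "corridor_width_m": 1.8,
    "robot_width_m": 0.6,
    "max_steering_angle_deg": 30,
    "obstacle": {"distance_from_wall_m": 0.5, "radius_m": 0.3},
    "wheel_slippage": 0.05,
}

# A plan made with the high-level model ("go from r101 to r117 via the corridor")
# may or may not be executable; whether the robot fits past the obstacle is a
# question only the lower-level model can answer.
gap = low_level["corridor_width_m"] - (low_level["obstacle"]["distance_from_wall_m"]
                                       + low_level["obstacle"]["radius_m"])
print(gap >= low_level["robot_width_m"])  # True: in this instance the robot fits past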

Choosing an appropriate level of abstraction is difficult for the following reasons:

• A high-level description is easier for a human to specify and understand.
• A low-level description can be more accurate and more predictive. Often high-level descriptions
abstract away details that may be important for actually solving the task.
• The lower the level, the more difficult it is to reason with. This is because a solution at a lower
level of detail involves more steps and many more possible courses of action exist from which to
choose.
• An agent may not know the information needed for a low-level description. For example, the
delivery robot may not know what obstacles it will encounter or how slippery the floor will be at
the time that it must decide what to do.

It is often a good idea to model an environment at multiple levels of abstraction. This issue is further
discussed in Section 2.3.

Biological systems, and computers, can be described at multiple levels of abstraction. For animals,
successively lower levels of description are the neuronal level, the biochemical level (what chemicals and what electrical
potentials are being transmitted), the chemical level (what chemical reactions are being carried out), and
the level of physics (in terms of forces on atoms and quantum phenomena). What levels above the
neuronal level are needed to account for intelligence is still an open question. These levels of description
are echoed in the hierarchical structure of science itself, where scientists are divided into physicists,
chemists, biologists, psychologists, anthropologists, and so on. Although no level of description is more
important than any other, we conjecture that you do not have to emulate every level of a human to build
an AI agent but rather you can emulate the higher levels and build them on the foundation of modern
computers. This conjecture is part of what AI studies.

The following are two levels that seem to be common to both biological and computational entities:

• The knowledge level is the level of abstraction that considers what an agent knows and
believes and what its goals are. The knowledge level considers what an agent knows, but not how
it reasons. For example, the delivery agent’s behavior can be described in terms of whether it
knows that a parcel has arrived or not and whether it knows where a particular person is or not.
Both human and robotic agents are describable at the knowledge level. At this level, you do not
specify how the solution will be computed or even which of the many possible strategies available
to the agent will be used.
• The symbol level is a level of description of an agent in terms of the reasoning it does. To
implement the knowledge level, an agent manipulates symbols to produce answers. Many
cognitive science experiments are designed to determine what symbol manipulation occurs during
reasoning. Whereas the knowledge level is about what the agent believes about the external
world and what its goals are in terms of the outside world, the symbol level is about what goes on
inside an agent to reason about the external world.
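
As a rough sketch of the two levels (the rule, the atoms, and the trace below are invented for illustration and are not the book's notation): the knowledge-level description is just the question and its answer, while the symbol-level description is the manipulation of stored symbols that produces that answer.

# Knowledge level: the agent knows that a parcel has arrived and that arrived
# parcels should be delivered; asked whether it should deliver, the answer is yes.
facts = {"parcel_arrived"}
rules = [("should_deliver", ["parcel_arrived"])]   # (head, body) pairs

# Symbol level: the answer is produced by manipulating symbols -- here, a
# forward-chaining step that adds a rule's head once its body atoms are stored.
def derive(facts, rules, trace=True):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(atom in derived for atom in body):
                if trace:
                    print("applied rule:", head, "<-", body)
                derived.add(head)
                changed = True
    return derived

print("should_deliver" in derive(facts, rules))    # True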
