Encyclopedia of Artificial Intelligence: The Past, Present, and Future of AI (2021)
embodied today in Silicon Valley scientist and business leader Andrew Ng’s
unabashed pronouncement that AI is “the new electricity.”
Since the 1950s, researchers have explored two major avenues to artificial intel-
ligence: connectionism and symbolic representation. Interest in one or the other of
these approaches has waxed and waned over the decades. Connectionist approaches
nurtured by cybernetics, the study of neural pathways, and associative learning
dominated in the 1950s and 1960s. This was followed by a surge in interest in
symbolic lines of attack from the 1970s to the mid-1990s. In this period of “Good
Old-Fashioned AI” (GOFAI), scientists in growing artificial intelligence programs
and laboratories began programming chatbots such as ELIZA and PARRY and
expert systems for capturing the knowledge and emulating the skill of the organic
chemist (DENDRAL), physician (INTERNIST-I), and artist (AARON).
Since 2000, more improvements in performance have been wrung out of con-
nectionist neural network approaches. Progress is being made in machine transla-
tion, computer vision, generative design, board and video game playing, and more.
Among more audacious efforts are the Blue Brain Project and Human Brain Proj-
ect, which attempt computational reconstructions of whole brains. Connectionist
and symbolic AI are often described as rivals, but together they account for a large
share of the spectacular progress in the field. Efforts are currently underway to
integrate the approaches in a structured way under the banner of neural-symbolic
learning and reasoning. When brought together, neural-symbolic computation,
systems, and applications may (or may not) begin to approach General AI of the
sort commonly depicted in science fiction.
terminate a human. And in The Matrix (1999), humans live in an artificial cyber
world, functioning as batteries for their robot overlords.
More recent films have been more nuanced in their treatment of AI. For exam
ple, Ex Machina (2014) depicts a scientist’s attempts at creating a self-aware
female android. His previous failures, kept as living art pieces or servants, attest
to his misogynistic impulses. When his most recent creation escapes his house/
laboratory, leaving him for dead in the process, the audience is clearly expected to
recognize her humanity, something the scientist clearly lacks.
Smart robots in the real world, while impressive, have fought an uphill battle
against their imaginary counterparts. The Roomba robotic vacuum cleaner is
useful, but cannot compete against Rosey the Robot housekeeper of The Jet-
sons. Caregiver robots are designed to be physically and socially assistive, but do
not come close to replacing human nurses and nannies. Battlefield AI, however, is
another matter. Autonomous weapons systems are not yet Terminators, but they
are lethal enough to inspire campaigns to stop them, laws to govern them, and
ethical frameworks to guide them.
AI FOR GOOD
Artificial intelligence is helping people create and embrace new forms of art,
perform unique forms of theater and dance, and make distinctive kinds of music.
Quantum AI may help us understand the origins of life and the ultimate shape of
the universe. At the same time, it is precipitating a gut-wrenching Fourth Industrial
Revolution. AI threatens to automate people out of jobs, upend systems of wealth
creation, and blur relied-upon boundaries between the biological and digital
worlds. Nations and firms are rushing in to hack human civilization itself using AI.
The great paradox is that artificial intelligence is maximizing our preferences
and simultaneously making us vulnerable to global catastrophes and existential
risks. Artificial intelligence is fueling the reinvention of ourselves in a new com-
putational universe. That process engenders a full range of emotions and outcomes, from euphoria to anxiety and, potentially, misery. AI’s potential for dislocation
and disorder is energizing movements to use the technology for common advan-
tage. These movements go by various names: AI for Social Good, Beneficial AI,
Trustworthy AI, Friendly AI. Together, they embody the wishes of scientists and
policymakers that artificial intelligence balance its benefits against the risks and
costs involved and wherever possible avoid harms.
Humanity is interested in making intelligent machines our caregivers, compan-
ions, guides, and gods. And yet we have done a far better job turning humans into
intelligent cogs in society’s smart machine than in transforming artifacts into car-
bon copies of human beings. This is not for lack of interest. Several prominent
artificial intelligence researchers argue for the inevitability of a Technological
Singularity, beyond which alterations to artificial and human intelligence are
unforeseeable and uncontrollable. Accelerating change and recursive self-
improvement, these boosters say, could produce a superintelligent machine with
its own unfathomable hypergoals.
The purpose of the present volume is to help the reader more carefully evaluate
claims of “successes” and “failures” in artificial intelligence; assess the real
impact of smart technologies in society; and understand the historical, literary,
cultural, and philosophical significance of machine intelligence. Our machines
are highly polished mirrors that reflect and magnify human feeling and ambition.
AI opens us up to another way in which the world might be imagined and also
sensitizes us to the richness of the human search for meaning.
Philip L. Frana and Michael J. Klein
Further Reading
Bergstein, Brian. 2020. “What AI Still Can’t Do.” MIT Technology Review, February 19,
2020. https://www.technologyreview.com/2020/02/19/868178/what-ai-still-cant-do/.
Cardon, Dominique, Jean-Philippe Cointet, and Antoine Mazieres. 2018. “Neurons Spike
Back: The Invention of Inductive Machines and the Artificial Intelligence Contro-
versy.” Reseaux 36, no. 211: 173–220.
Lewis, Sarah. 2019. “The Racial Bias Built into Photography.” The New York Times,
April 25, 2019. https://www.nytimes.com/2019/04/25/lens/sarah-lewis-racial-bias
-photography.html.
McCorduck, Pamela. 2004. Machines Who Think: A Personal Inquiry into the History
and Prospects of Artificial Intelligence. Natick, MA: CRC Press.
Simon, Herbert. 1981. The Sciences of the Artificial. Second edition. Cambridge, MA: MIT
Press.
Simonite, Tom. 2019. “The Best Algorithms Struggle to Recognize Black Faces Equally.”
Wired, July 22, 2019. https://www.wired.com/story/best-algorithms-struggle
-recognize-black-faces-equally/.
Chronology
1942
Science fiction author Isaac Asimov’s Three Laws of Robotics appear in the short
story “Runaround.”
1943
Mathematician Emil Post writes about “production systems,” a concept borrowed
for the General Problem Solver of 1957.
1943
Publication of Warren McCulloch and Walter Pitts’ paper on a computational the-
ory of neural networks, entitled “A Logical Calculus of the Ideas Immanent in
Nervous Activity.”
1944
John von Neumann, Norbert Wiener, Warren McCulloch, Walter Pitts, and How-
ard Aiken form the Teleological Society to study, among other things, communi-
cation and control in the nervous system.
1945
George Polya highlights the value of heuristic reasoning as a means of solving
problems in his book How to Solve It.
1946
The first of ten Macy Conferences on Cybernetics begins in New York City. The
theme of the first meeting is “Feedback Mechanisms and Circular Causal Systems
in Biological and Social Systems.”
1948
Publication of Cybernetics, or Control and Communication in the Animal and the
Machine by mathematician Norbert Wiener.
1949
Psychologist Donald Hebb proposes an explanation for neural adaptation in human
education in The Organization of Behavior: “neurons that fire together wire
together.”
1949
Publication of Giant Brains, or Machines That Think by mathematician Edmund
Berkeley.
1950
The Turing Test, attributing intelligence to any machine capable of exhibiting
intelligent behavior equivalent to that of a human, is described in Alan Turing’s
“Computing Machinery and Intelligence.”
1950
Claude Shannon publishes a pioneering technical paper on “Programming a Com-
puter for Playing Chess,” sharing search algorithms and techniques.
1951
Mathematics student Marvin Minsky and physics student Dean Edmonds design
an electric rat capable of learning how to negotiate a maze utilizing Hebbian
theory.
1951
Mathematician John von Neumann publishes “General and Logical Theory of
Automata,” reducing the human brain and central nervous system to a computing
machine.
1951
Christopher Strachey writes a checkers program and Dietrich Prinz creates a
chess routine for the University of Manchester’s Ferranti Mark 1 computer.
1952
Design for a Brain: The Origin of Adaptive Behavior, on the logical mechanisms
of human cerebral function, is published by cyberneticist W. Ross Ashby.
1952
Physiologist James Hardy and physician Martin Lipkin begin devising a McBee
punched card system for mechanical diagnosis of patients at Cornell University
Medical College.
1954
Groff Conklin publishes the theme-based anthology Science-Fiction Thinking
Machines: Robots, Androids, Computers.
1954
The Georgetown-IBM experiment demonstrates the potential of machine transla-
tion of text.
1955
Artificial intelligence research begins at Carnegie Tech (now Carnegie Mellon
University) under economist Herbert Simon and graduate student Allen Newell.
1955
Mathematician John Kemeny writes “Man Viewed as a Machine” for Scientific
American.
1955
Mathematician John McCarthy coins the term “artificial intelligence” in a Rock-
efeller Foundation proposal for a Dartmouth College conference.
1956
Logic Theorist, an artificial intelligence computer program for proving theorems
in Alfred North Whitehead and Bertrand Russell’s Principia Mathematica, is cre-
ated by Allen Newell, Herbert Simon, and Cliff Shaw.
1956
The Dartmouth Summer Research Project, the “Constitutional Convention of AI,”
brings together experts in cybernetics, automata, information theory, operations
research, and game theory.
1956
Electrical engineer Arthur Samuel demonstrates his checkers-playing AI program
on television.
1957
The General Problem Solver AI program is written by Allen Newell and Herbert
Simon.
1957
The Rockefeller Medical Electronics Center demonstrates an RCA Bizmac com-
puter program to aid the physician in the differential diagnosis of blood diseases.
1958
Publication of John von Neumann’s unfinished The Computer and the Brain.
1958
Firmin Nash gives a first public demonstration of the Group Symbol Associator at
the “Mechanisation of Thought Processes” conference at the National Physical Laboratory in Teddington, UK.
1958
Frank Rosenblatt introduces the single layer perceptron, including a neural net-
work and supervised learning algorithm for linear classification of data.
1958
John McCarthy at the Massachusetts Institute of Technology (MIT) specifies the
high-level programming language LISP for AI research.
1959
Physicist Robert Ledley and radiologist Lee Lusted publish “The Reasoning Foun-
dations of Medical Diagnosis,” which introduces Bayesian inference and symbolic
logic to problems of medicine.
1959
John McCarthy and Marvin Minsky start what becomes the Artificial Intelligence
Laboratory at MIT.
1960
The Stanford Cart, a remote control vehicle equipped with a television camera, is
constructed by engineering student James L. Adams.
1962
Science fiction and fantasy author Fred Saberhagen introduces intelligent killer
machines called Berserkers in the short story “Without a Thought.”
1963
The Stanford Artificial Intelligence Laboratory (SAIL) is founded by John
McCarthy.
1963
The U.S. Department of Defense’s Advanced Research Projects Agency begins
funding artificial intelligence projects at MIT under Project MAC.
1964
ELIZA, the first program for natural language communication with a machine
(“chatbot”), is programmed by Joseph Weizenbaum at MIT.
1965
British statistician I. J. Good publishes his “Speculations Concerning the First
Ultraintelligent Machine” about a coming intelligence explosion.
1965
Philosopher Hubert L. Dreyfus and mathematician Stuart E. Dreyfus release a
paper critical of artificial intelligence entitled “Alchemy and AI.”
1965
The Stanford Heuristic Programming Project, with the twin goals of modeling
scientific reasoning and creating expert systems, is initiated by Joshua Lederberg
and Edward Feigenbaum.
1965
Donald Michie organizes the Department of Machine Intelligence and Perception
at Edinburgh University.
1965
Georg Nees establishes in Stuttgart, West Germany, the first generative art
exhibit, called Computer Graphic.
1965
Computer scientist Edward Feigenbaum begins a ten-year effort to automate the
molecular analysis of organic compounds with the expert system DENDRAL.
1966
The Automatic Language Processing Advisory Committee (ALPAC) releases its
skeptical report on the current state of machine translation.
1967
Richard Greenblatt completes work on Mac Hack, a program that plays competi-
tive tournament chess, on a DEC PDP-6 at MIT.
1967
Ichiro Kato at Waseda University initiates work on the WABOT project, which
unveils a full-scale anthropomorphic intelligent robot five years later.
1968
Director Stanley Kubrick turns Arthur C. Clarke’s science fiction book 2001: A
Space Odyssey, about the HAL 9000 artificially intelligent computer, into one of
the most influential and critically acclaimed movies of all time.
1968
Terry Winograd at MIT begins work on the natural language understanding pro-
gram SHRDLU.
1969
The First International Joint Conference on Artificial Intelligence (IJCAI) is held
in Washington, DC.
1972
Artist Harold Cohen creates AARON, an AI program to create paintings.
1972
Ken Colby reports on his experiments simulating paranoia with the software pro-
gram PARRY.
1972
Hubert Dreyfus publishes his critique of the philosophical foundations of artificial
intelligence in What Computers Can’t Do.
1972
The MYCIN expert system, designed to diagnose bacterial infections and recom-
mend treatment options, is begun by doctoral student Ted Shortliffe at Stanford
University.
1972
The Lighthill Report on Artificial Intelligence is released by the UK Science
Research Council, highlighting failures of AI technology and difficulties of com-
binatorial explosion.
1972
Arthur Miller publishes The Assault on Privacy: Computers, Data Banks, and
Dossiers, an early work on the social impact of computers.
1972
University of Pittsburgh physician Jack Myers, medical student Randolph Miller,
and computer scientist Harry Pople begin collaborating on INTERNIST-I, an
internal medicine expert system.
1974
Social scientist Paul Werbos finishes his dissertation on a now widely used algo-
rithm for backpropagation used in training artificial neural networks for super-
vised learning tasks.
1974
Marvin Minsky releases MIT AI Lab memo 306 on “A Framework for Represent-
ing Knowledge.” The memo details the concept of a frame, a “remembered frame-
work” that fits reality by “changing detail as necessary.”
1975
John Holland uses the term “genetic algorithm” to describe evolutionary strate-
gies in natural and artificial systems.
1976
Computer scientist Joseph Weizenbaum publishes his ambivalent views of work
on artificial intelligence in Computer Power and Human Reason.
1978
EXPERT, a generalized knowledge representation scheme for creating expert sys-
tems, becomes operational at Rutgers University.
1978
The MOLGEN project at Stanford is begun by Joshua Lederberg, Douglas Brut-
lag, Edward Feigenbaum, and Bruce Buchanan to solve DNA structures derived
from segmentation data in molecular genetics experiments.
1979
The Robotics Institute is established by computer scientist Raj Reddy at Carnegie
Mellon University.
1979
The first human is killed while working with a robot.
1979
The Stanford Cart, evolving over almost two decades into an autonomous rover, is
rebuilt and equipped with a stereoscopic vision system by Hans Moravec.
1980
The First National Conference of the American Association for Artificial Intelligence (AAAI) is held at Stanford University.
1980
Philosopher John Searle makes his Chinese Room argument that a computer’s
simulation of behavior does not in itself demonstrate understanding, intentional-
ity, or consciousness.
1982
Release of the science fiction film Blade Runner, which is broadly based on Philip
K. Dick’s story Do Androids Dream of Electric Sheep? (1968).
1982
Physicist John Hopfield popularizes the associative neural network, first described
by William Little in 1974.
1984
Tom Alexander publishes “Why Computers Can’t Outthink the Experts” in For-
tune Magazine.
1984
Computer scientist Doug Lenat starts the Cyc project to build a massive common-
sense knowledge base and artificial intelligence architecture at the Microelectronics and Computer Technology Corporation (MCC) in Austin, TX.
1984
The first Terminator film, with android assassins from the future and an AI called
Skynet, is released by Orion Pictures.
1986
Honda opens a research center for the development of humanoid robots to coexist
and collaborate with human beings.
1986
MIT roboticist Rodney Brooks introduces the subsumption architecture for behav-
ior-based robotics.
1986
Marvin Minsky publishes The Society of Mind, which describes the brain as a set
of cooperating agents.
1989
Rodney Brooks and Anita Flynn of the MIT Artificial Intelligence Lab pub-
lish “Fast, Cheap, and Out of Control: A Robot Invasion of the Solar System,”
about the potential for launching tiny robots on missions of interplanetary
discovery.
1993
Rodney Brooks, Lynn Andrea Stein, Cynthia Breazeal, and others launch the Cog
interactive robot project at MIT.
1995
Musician Brian Eno coins the term generative music to describe systems that cre-
ate ever-changing music by altering parameters over time.
1995
The General Atomics MQ-1 Predator unmanned aerial vehicle enters U.S. mili-
tary and reconnaissance service.
1997
IBM’s Deep Blue supercomputer defeats reigning chess champion Garry Kasp-
arov under regular tournament conditions.
1997
The first RoboCup, an international competition with over forty teams fielding
robot soccer players, is held in Nagoya, Japan.
1997
Dragon Systems releases NaturallySpeaking, their first commercial speech recog-
nition software product.
1999
Sony releases AIBO, a robotic dog, to the consumer market.
2000
Honda introduces its prototype ASIMO, the Advanced Step in Innovative Mobil-
ity humanoid robot.
2001
Viisage Technology debuts the FaceFINDER automated face-recognition system
at Super Bowl XXXV.
2002
The iRobot Corporation, founded by Rodney Brooks, Colin Angle, and Helen
Greiner, begins marketing the Roomba autonomous home vacuum cleaner.
2004
DARPA sponsors its first autonomous car Grand Challenge in the Mojave Desert
around Primm, NV. None of the cars finish the 150-mile course.
2005
The Swiss Blue Brain Project to simulate the mammalian brain is established
under neuroscientist Henry Markram.
2006
Netflix announces a $1 million prize to the first programming team that
develops the best recommender system based on a dataset of previous user
ratings.
2007
DARPA launches its Urban Challenge, an autonomous vehicle competition meant
to test merging, passing, parking, and negotiating traffic and intersections.
2009
Google begins its Self-Driving Car Project (now called Waymo) in the San Fran-
cisco Bay Area under Sebastian Thrun.
2009
Stanford University computer scientist Fei-Fei Li presents her work on ImageNet,
a collection of millions of hand-annotated images for training AIs to visually rec-
ognize the presence or absence of objects.
2010
A “flash crash” of the U.S. stock market is triggered by human manipulation of
automated trading software.
2011
UK artificial intelligence start-up DeepMind is founded by Demis Hassabis,
Shane Legg, and Mustafa Suleyman to teach AIs to play and excel at classic video
games.
2011
IBM’s natural language computing system Watson defeats past Jeopardy! cham-
pions Ken Jennings and Brad Rutter.
2011
Apple releases the mobile recommendation assistant Siri on the iPhone 4S.
2011
An informal Google Brain deep learning research collaboration is started by com-
puter scientist Andrew Ng and Google researchers Jeff Dean and Greg Corrado.
2013
The Human Brain Project of the European Union is launched to understand how
the human brain works and also emulate its computational capabilities.
2013
Human Rights Watch begins a campaign to Stop Killer Robots.
2013
Her, a science fiction drama directed by Spike Jonze, is released. The film features
a romance between a man and his AI mobile recommendation assistant
Samantha.
2014
Ian Goodfellow and collaborators at the University of Montreal introduce Genera-
tive Adversarial Networks (GANs) for use in deep neural networks, which prove
useful in creating realistic images of fake people.
2014
The chatbot Eugene Goostman, portraying a thirteen-year-old boy, is controver-
sially said to have passed a Turing-like test.
2014
Physicist Stephen Hawking predicts the development of AI could result in the
extinction of humanity.
2015
Facebook releases DeepFace deep learning facial recognition technology on its
social media platform.
2016
DeepMind’s AlphaGo program defeats 9th dan Go player Lee Sedol in a five-
game match.
2016
Microsoft’s artificial intelligence chatbot Tay is released on Twitter, where users
train it to make offensive and inappropriate tweets.
2017
The Future of Life Institute organizes the Asilomar Meeting on Beneficial AI.
2017
The Way of the Future church is founded by AI self-driving start-up engineer
Anthony Levandowski, who is motivated to create a superintelligent robot
deity.
2018
Google announces Duplex, an AI application for scheduling appointments over
the phone using natural language.
2018
The European Union publishes its General Data Protection Regulation (GDPR)
and “Ethics Guidelines for Trustworthy AI.”
2019
Google AI and Northwestern Medicine in Chicago, IL, collaborate on a lung can-
cer screening AI that outperforms specialist radiologists.
2019
OpenAI, cofounded by Elon Musk, develops an artificial intelligence text genera-
tion system that creates realistic stories and journalism. It is initially deemed “too
dangerous” to use because of its potential to generate fake news.
2020
Google AI in collaboration with the University of Waterloo, the “moonshot fac-
tory” X, and Volkswagen announce TensorFlow Quantum, an open-source library
for quantum machine learning.
A
AARON
AARON is computer software created by Harold Cohen to create paintings. Cohen
himself dates the creation of the first version to “around 1972.” Since AARON is
not open source, it can be said that its development ended in 2016 when Cohen
died. AARON was still producing new images in 2014, and its functionality was
evident even in 2016. AARON is not an acronym. The name was given because it
is at the beginning of the alphabet, and Cohen imagined he would subsequently
create other programs later on, which he never did.
During its four decades of evolution, AARON had several versions with differ-
ent capabilities. The earlier versions were able to create black-and-white line
drawings, while the later versions were also able to paint in color. Some versions
of AARON were configured to create abstract paintings while others painted
scenes with objects and people in it.
The primary purpose of AARON was to create not only digital images but
also tangible, large-sized images or paintings. In Cohen’s exhibition at The San
Francisco Museum of Modern Art, the lines drawn by AARON, a program writ-
ten in C at the time, were traced directly on the wall. In later artistic installations
of AARON, the program was coupled with a machine that had a robotic arm and
was able to apply paint on canvas. For instance, the version of AARON on
exhibit in The Computer Museum in Boston in 1995, which was implemented in
LISP by this time and ran on a Silicon Graphics computer, created a file with a
set of instructions. This file was then transferred to a PC that ran a C++ pro-
gram. This computer had a robotic arm attached to it. The C++ code interpreted
the instructions and controlled the movement of the arm, the mixing of the dyes,
and the application of them on the canvas. The machines built by Cohen to draw
and paint were also important innovations. Even later versions used industrial
inkjet printers. Cohen considered this configuration of AARON the most advanced
because of the colors these new printers could produce; he believed that when it
came to colors, the inkjet was the biggest invention since the industrial
revolution.
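The two-stage pipeline described above, in which an image-generating program writes out a file of drawing instructions that a separate machine-control program then interprets, can be illustrated with a small sketch. The instruction format, command names, and drawing logic below are hypothetical inventions for illustration; AARON’s actual code and file formats were not published in this form.

```python
# Toy illustration of the two-stage pipeline described above: a "generator"
# writes a file of drawing instructions, and a separate "interpreter" reads
# that file and drives a (here, simulated) painting machine. The command
# names and file format are hypothetical.
import random

def generate_instructions(path, n_strokes=5):
    """Stand-in for the image-generating program: emit one stroke per line."""
    colors = ["ochre", "ultramarine", "crimson"]
    with open(path, "w") as f:
        for _ in range(n_strokes):
            x1, y1, x2, y2 = (random.randint(0, 100) for _ in range(4))
            f.write(f"STROKE {x1} {y1} {x2} {y2} {random.choice(colors)}\n")

def interpret_instructions(path):
    """Stand-in for the machine-control program: parse and 'execute' strokes."""
    with open(path) as f:
        for line in f:
            cmd, x1, y1, x2, y2, color = line.split()
            if cmd == "STROKE":
                print(f"mix {color}; move arm ({x1},{y1}) -> ({x2},{y2})")

if __name__ == "__main__":
    generate_instructions("painting.txt")
    interpret_instructions("painting.txt")
```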
While Cohen mostly focused on tangible images, around 2000, Ray Kurzweil
created a version of AARON that was a screensaver program. By 2016, Cohen
himself had created a version of AARON that generated black-and-white images
that the user could color using a large touch screen. He called this “Fingerpaint-
ing.” Cohen always maintained that AARON was neither a “fully autonomous artist” nor truly creative. He did believe, though, that AARON exhibits one condition of
autonomy: a form of emergence, which in Cohen’s terms means that the paintings
generated are genuinely surprising and novel. Cohen never ventured very far into
the philosophical implications of AARON. Based on the amount of time he dedicated to the coloring problem in almost all of the interviews he gave, it is safe to assume that he regarded AARON’s performance as a colorist as his biggest achievement.
Mihály Héder
See also: Computational Creativity; Generative Design.
Further Reading
Cohen, Harold. 1995. “The Further Exploits of AARON, Painter.” Stanford Humanities
Review 4, no. 2 (July): 141–58.
Cohen, Harold. 2004. “A Sorcerer’s Apprentice: Art in an Unknown Future.” Invited talk
at Tate Modern, London. http://www.aaronshome.com/aaron/publications/tate
-final.doc.
Cohen, Paul. 2016. “Harold Cohen and AARON.” AI Magazine 37, no. 4 (Winter): 63–66.
McCorduck, Pamela. 1990. Aaron’s Code: Meta-Art, Artificial Intelligence, and the Work
of Harold Cohen. New York: W. H. Freeman.
to and interacting with its environment and user. The results of changes to vari-
ables, individual actions, or events are sometimes unpredictable and may even be
catastrophic.
One of the dark secrets of advanced artificial intelligence is that it relies on
mathematical methods and deep learning algorithms so complex that even its
makers cannot understand how it makes reliable decisions. Autonomous vehicles,
for instance, usually rely on instructions written solely by the computer as it
observes humans driving under real conditions. But how can a driverless car come
to expect the unexpected? And additionally, will efforts to tweak AI-generated
code to reduce perceived errors, omissions, and impenetrability reduce the risk of
accidental negative outcomes or simply amplify errors and generate new ones? It
remains unclear how to mitigate the risks of artificial intelligence, but it is likely
that society will use proven and presumably trustworthy machine-learning sys-
tems to automatically provide rationales for their behavior, and even examine
newly invented cognitive computing systems on our behalf.
Philip L. Frana
See also: Algorithmic Bias and Error; Autonomy and Complacency; Beneficial AI, Asilo-
mar Meeting on; Campaign to Stop Killer Robots; Driverless Vehicles and Liability;
Explainable AI; Product Liability and AI; Trolley Problem.
Further Reading
De Visser, Ewart Jan. 2012. “The World Is Not Enough: Trust in Cognitive Agents.” Ph.D.
diss., George Mason University.
Forester, Tom, and Perry Morrison. 1990. “Computer Unreliability and Social Vulnerabil-
ity.” Futures 22, no. 5 (June): 462–74.
Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate
Reliance.” Human Factors 46, no. 1 (Spring): 50–80.
Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in
Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan
M. Ćirković, 308–45. New York: Oxford University Press.
their mission, the sensor data was run through a series of AI-based software sys-
tems that indexed the data and created an electronic chronicle of the events that
happened while the ASSIST system was recording. With this information, sol-
diers could give more accurate reports without relying solely on their memory.
The AI-based algorithms had enabled numerous functionalities, including:
• “Image/Video Data Analysis Capabilities”
• Object Detection/Image Classification—the ability to recognize and
identify objects (e.g., vehicles, people, and license plates) through anal-
ysis of video, imagery, and/or related data sources
• Arabic Text Translation—the ability to detect, recognize, and translate
written Arabic text (e.g., in imagery data)
• Change Detection—the ability to identify changes over time in related
data sources (e.g., identify differences in imagery of the same location
at different times)
• “Audio Data Analysis Capabilities”
• Sound Recognition/Speech Recognition—the ability to identify sound
events (e.g., explosions, gunshots, and vehicles) and recognize speech
(e.g., keyword spotting and foreign language identification) in audio
data
• Shooter Localization/Shooter Classification—the ability to identify
gunshots in the environment (e.g., through analysis of audio data),
including the type of weapon producing the shots and the location of
the shooter
• “Soldier Activity Data Analysis Capabilities”
• Soldier State Identification/Soldier Localization—the ability to iden-
tify a soldier’s path of movement around an environment and charac-
terize the actions taken by the soldier (e.g., running, walking, and
climbing stairs)
For AI systems such as these (often termed autonomous or intelligent systems) to
be successful, they must be comprehensively and quantitatively evaluated to
ensure that they will function appropriately and as expected in a wartime environ-
ment. The National Institute of Standards and Technology (NIST) was tasked
with evaluating these AI systems based on three metrics:
1. The accuracy of object/event/activity identification and labeling
2. The system’s ability to improve its classification performance through
learning
3. The utility of the system in enhancing operational effectiveness
NIST developed a two-part test methodology to produce its performance mea-
sures. Metrics 1 and 2 were evaluated through component- and system-level tech-
nical performance evaluations and metric 3 was evaluated through system-level
utility assessments. The technical performance evaluations were designed to mea-
sure the progressive development of ASSIST system technical capabilities, and
the utility assessments were designed to predict the impact these technologies will
and behavior were scripted. The purpose was to provide an environment that
would exercise the different ASSIST systems’ capabilities as they detected, identi-
fied, and/or captured various types of information. NIST included the following
elements in the utility assessments: foreign language speech detection and classi-
fication, Arabic text detection and recognition, detection of shots fired and vehicu-
lar sounds, classification of soldier states and tracking their locations (both inside
and outside of buildings), identifying objects of interest including vehicles, build-
ings, people, etc. The soldiers’ actions were not scripted as they moved through
each exercise because the tests required the soldiers to act according to their train-
ing and experience.
Craig I. Schlenoff
Portions of this entry adapted from Schlenoff, Craig, Michelle Potts Steves, Brian A.
Weiss, Mike Shneier, and Ann Virts. 2007. “Applying SCORE to Field-Based Performance
Evaluations of Soldier Worn Sensor Techniques.” Journal of Field Robotics, 24: 8–9, 671–
698. Copyright © 2007 Wiley Periodicals, Inc., A Wiley Company. Used by permission.
See also: Battlefield AI and Robotics; Cybernetics and AI.
Further Reading
Schlenoff, Craig, Brian Weiss, Micky Steves, Ann Virts, Michael Shneier, and Michael
Linegang. 2006. “Overview of the First Advanced Technology Evaluations for
ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems
Workshop, 125–32. Gaithersburg, MD: National Institute of Standards and
Technology.
Steves, Michelle P. 2006. “Utility Assessments of Soldier-Worn Sensor Systems for
ASSIST.” In Proceedings of the Performance Metrics for Intelligence Systems
Workshop, 165–71. Gaithersburg, MD: National Institute of Standards and
Technology.
Washington, Randolph, Christopher Manteuffel, and Christopher White. 2006. “Using an
Ontology to Support Evaluation of Soldier-Worn Sensor Systems for ASSIST.” In
Proceedings of the Performance Metrics for Intelligence Systems Workshop, 172–
78. Gaithersburg, MD: National Institute of Standards and Technology.
Weiss, Brian A., Craig I. Schlenoff, Michael O. Shneier, and Ann Virts. 2006. “Technol-
ogy Evaluations and Performance Metrics for Soldier-Worn Sensors for ASSIST.”
In Proceedings of the Performance Metrics for Intelligence Systems Workshop,
157–64. Gaithersburg, MD: National Institute of Standards and Technology.
AI Winter
The phrase AI Winter was coined at the 1984 annual meeting of the American
Association for Artificial Intelligence (now the Association for the Advancement of
Artificial Intelligence or AAAI). Two leading researchers, Marvin Minsky and
Roger Schank, used the expression to refer to the then-impending bust period in
research and commercial development in artificial intelligence. Canadian AI
researcher Daniel Crevier has documented how angst over a coming AI Winter
triggered a domino effect that began with cynicism in the AI research community,
trickled into mass media, and finally led to adverse reactions by funding bodies.
The result was a freeze in serious AI research and development. Initial pessimism
is now mainly attributed to the overly ambitious promises made at the time—AI’s
actual results being far humbler than expectations.
Other factors such as insufficient computing power available during early days
of AI research also contributed to the opinion that an AI Winter was at hand. This
was particularly true of neural network research, which required vast computing
resources. Similarly, economic factors, particularly during overlapping periods of
economic crisis, resulted in restricted focus on more tangible investments.
Several periods throughout the history of AI can be described as AI Winters,
with two of the major periods spanning from 1974 to 1980 and from 1987 to 1993.
Although the dates of AI Winters are contentious and source-dependent, periods
of overlapping trends mark these periods as prone to research abandonment and
defunding.
As with other emerging technologies, such as nanotechnology, that have experienced cycles of hype and bust, the development of AI systems and technologies has nonetheless continued to advance. The current boom period is marked by not only an unprecedented
amount of funding toward fundamental research but also unparalleled prog-
ress in the development of machine learning. Motivations behind the invest-
ment boom differ, as they depend on the various stakeholders who engage in
artificial intelligence research and development. Industry, for example, has
wagered large sums on the promise that breakthroughs in AI will yield divi-
dends by revolutionizing whole market sectors. Governmental bodies such as
the military, on the other hand, invest in AI research to make both defensive
and offensive technologies more efficient and remove soldiers from immediate
harm.
Because AI Winters are fundamentally caused by a loss of faith in
what AI can yield, the current hype surrounding AI and its promises has led to
concern that another AI Winter will be triggered. Conversely, arguments have
been made that the current technological advances in applied AI research have
solidified the growth in future innovation in this area. This argument stands in
stark contrast with the so-called “pipeline problem,” which holds that a lack of fundamental research in AI will eventually exhaust the supply of applied results. The
pipeline problem is often cited as one of the contributing factors to previous AI
Winters. If the counterargument is correct, however, a feedback loop between
applied innovations and fundamental research will provide the pipeline with
enough pressure for continued progress.
Steven Umbrello
See also: Minsky, Marvin.
Further Reading
Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York:
Basic Books.
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New
York: Viking.
Muehlhauser, Luke. 2016. “What Should We Learn from Past AI Forecasts?” https://www
.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced
-artificial-intelligence/what-should-we-learn-past-ai-forecasts.
Further Reading
Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelli-
gence. New York: Basic Books.
Dreyfus, Hubert L. 1965. Alchemy and Artificial Intelligence. P-3244. Santa Monica, CA:
RAND Corporation.
Dreyfus, Hubert L. 1972. What Computers Can’t Do: The Limits of Artificial Intelligence.
New York: Harper and Row.
McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History
and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.
Papert, Seymour. 1968. The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fal-
lacies. Project MAC, Memo No. 154. Cambridge, MA: Massachusetts Institute of
Technology.
Indeed, there are several ways in which bias in an algorithmic system can
occur. Broadly speaking, algorithmic bias tends to occur when a group of persons,
and their lived realities, are not taken into account in the design of the algorithm.
This can occur at various stages of the process of developing an algorithm, from
the collection of data that is not representative of all demographic groups to the
labeling of data in ways that reproduce discriminatory profiling, to the rollout of
an algorithm where the differential impact it may have on a particular group is not
taken into account.
Partly in response to significant publicity of algorithmic biases, in recent years
there has been a proliferation of policy documents addressing the ethical responsi-
bilities of state and non-state bodies using algorithmic processing—to ensure
against unfair bias and other negative effects of algorithmic processing (Jobin et
al. 2019). One of the key policies in this space is the European Union’s “Ethics
Guidelines for Trustworthy AI” published in 2018. The EU document outlines
seven principles for the fair and ethical governance of AI and algorithmic
processing.
In addition, the European Union has been at the forefront of regulatory
responses to algorithmic processing with the promulgation of the General Data
Protection Regulation (GDPR), which became enforceable in 2018. Under the GDPR, which
applies in the first instance to the processing of all personal information within the
EU, a company can be fined up to 4 percent of its annual global revenue for using
an algorithm that is shown to be biased on the basis of race, gender, or other pro-
tected category.
A lingering concern for the regulation of algorithmic processing is the diffi-
culty of ascertaining where a bias occurred and what dataset led to bias. Typically,
this is known as the algorithmic black box problem: the deep data processing lay-
ers of an algorithm are so complex and numerous they simply cannot be under-
stood by a human. Based on the right to an explanation for those subject to automated decisions under the GDPR, one response has been to probe where the bias occurred through counterfactual explanations: different data are fed into the algorithm to see where differential outcomes arise (Wachter et al. 2018).
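The general idea of counterfactual probing can be sketched in a few lines of Python: train a model, then compare its predictions on the same records with a single protected attribute flipped. The synthetic dataset, feature names, and model below are illustrative assumptions only, not a reconstruction of the method in Wachter et al. (2018) or of any deployed system.

```python
# Minimal sketch of counterfactual probing for bias: flip one protected
# feature and see whether the model's decisions change. The data and
# features are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)                  # a binary protected attribute
income = rng.normal(50 + 10 * protected, 15, n)    # correlated with that attribute
X = np.column_stack([protected, income])
y = (income + rng.normal(0, 10, n) > 55).astype(int)   # e.g., loan approved?

model = LogisticRegression().fit(X, y)

X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]              # counterfactual: flip the attribute

changed = (model.predict(X) != model.predict(X_flipped)).mean()
print(f"Decisions that change when only the protected attribute flips: {changed:.1%}")
```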
In addition to legal and policy tools for addressing algorithmic bias, technical
solutions to the problem include developing synthetic datasets that attempt to correct naturally occurring biases in data or to offer an unbiased and representative dataset. While such avenues for redress are important, one of the more holistic responses to the problem is to make the human teams that develop, produce, use, and monitor the impact of algorithms much more diverse. Within diverse teams, the combination of lived experiences makes it more likely that biases will be detected sooner and addressed.
Rachel Adams
See also: Biometric Technology; Explainable AI; Gender and AI.
Further Reading
Ali, Muhammed, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mis-
love, and Aaron Rieke. 2019. “Discrimination through Optimization: How
Animal Consciousness
In recent decades, researchers have developed an increasing appreciation of ani-
mal and other nonhuman intelligences. Consciousness or sentience, complex
forms of cognition, and personhood rights have been argued for in ravens, bower
birds, gorillas, elephants, cats, crows, dogs, dolphins, chimpanzees, grey parrots,
jackdaws, magpies, beluga whales, octopi, and several other species of animals.
The Cambridge Declaration on Consciousness and the separate Nonhuman
Rights Project mirror the contemporary struggle against racism, classism, sex-
ism, and ethnocentrism by adding one more prejudice: “speciesism,” coined in
1970 by psychologist Richard Ryder and popularized by the philosopher Peter
Singer. Animal consciousness, indeed, may pave the way for consideration and
appreciation of other types of proposed intelligences, including those that are
artificial (traditionally considered, such as animals, to be “mindless automata”)
and extraterrestrial.
One of the most important questions experts in many fields grapple with today
is the knowability of the subjective experience and objective qualities of other
types of consciousness. “What is it like to be a bat?” the philosopher Thomas
Nagel famously asked, especially as they are capable of echolocation and humans
are not. Most selfishly, understanding animal consciousness might open a window
to better understanding of human consciousness by way of comparison. Looking
to animals also might reveal new perspectives on the mechanisms for evolution of
consciousness in human beings, which might in turn help scientists equip robots
with similar traits, appreciate their moral status, or sympathize with their behav-
ior. History is littered with examples of animals used as a means for human ends,
rather than as ends themselves. Cows produce milk for human consumption.
Sheep make wool for clothing. Horses once provided transportation and power for
agriculture, and now afford opportunities for entertainment and gambling. The
“discovery” of animal consciousness may mean removing the human species from
the center of its own mental universe.
The “Cognitive Revolution” of the twentieth century, which seemingly removed
the soul as a scientific explanation of mental life, opened the door to studying and
making experiments in perception, memory, cognition, and reasoning in animals
and also exploring the possibilities for incorporating sophisticated information
processing convolutions and integrative capabilities into machines. The possibil-
ity of a basic cognitive “software” common to humans, animals, and artificial
general intelligences is often discussed from the perspective of newer interdisci-
plinary fields such as neuroscience, evolutionary psychology, and computer
science.
The independent researcher John Lilly was among the first to argue, in his book
Man and Dolphin (1961), that dolphins are not merely intelligent, but in many
ways, they possess qualities and communication skills beyond the human. Other
researchers such as Lori Marino and Diana Reiss have since confirmed many of
his findings, and rough agreement has been reached that dolphin self-awareness
lies somewhere on the continuum between humans and chimps. Dolphins have
been observed to fish cooperatively with human fishermen, and the most famous
dolphin in history, Pelorus Jack, faithfully and voluntarily escorted ships through
the dangerous rocks and tidal flows of Cook Strait in New Zealand for twenty-four
years.
Some animals appear to pass the famous mirror test of self-recognition. These
include dolphins and killer whales, chimpanzees and bonobos, magpies, and ele-
phants. The test is usually administered by painting a small mark on an animal, in
a place where it cannot see without recourse to the mirror. If the animal touches
the mark on its own body after seeing it reflected, it may be said to recognize itself. Some critics have argued that the mirror-mark test is unfair to some
species of animals because it privileges vision over other sense organs.
SETI researchers acknowledge that study of animal consciousness may par-
tially prepare human beings to grapple with the existential ramifications of self-
aware extraterrestrial intelligences. Similarly, work with animals has spawned
parallel interest in consciousness in artificial intelligences. To cite one direct
example: In his autobiography The Scientist (1978), John Lilly describes a hypo-
thetical Solid State Intelligence (SSI) that would inevitably arise from the work of
human computer scientists and engineers. This SSI would be made of computer
parts, produce its own integrations and enhancements, and ultimately engage in
self-replication to challenge and defeat humanity. The SSI would protect some
human beings in domed “reservations” completely subject to its own maintenance
and control. Eventually, the SSI would master the ability to move the planet and
explore the galaxy looking for other intelligences like itself.
Self-consciousness in artificial intelligences has been critiqued on many levels.
John Searle has argued vigorously that machines lack intentionality, that is, the
ability to find meaning in the computations they execute. Inanimate objects are
rarely thought of as possessing free will and thus are not conceptually human.
Further, they might be thought of as having a “missing-something,” for instance,
zeroth law: “the Machines work not for any single human being, but for all human-
ity” (Asimov 2004b, 222). Calvin worries that the Machines are moving humanity
toward what they believe is “the ultimate good of humanity” (Asimov 2004b, 222)
even if humanity doesn’t know what that is.
Additionally, “psychohistory,” a term introduced in Asimov’s Foundation
series (1940s–1990s), could be described as anticipating the algorithms that pro-
vide the foundation for artificial intelligence today. In Foundation, the main pro-
tagonist Hari Seldon develops psychohistory as a way to make general predictions
about the future behavior of very large groups of people, including the fall of civi-
lization (here, the Galactic Empire) and the inevitable Dark Ages. But Seldon
argues that utilizing psychohistory can reduce the period of anarchy:
Psychohistory, which can predict the fall, can make statements concerning the suc-
ceeding dark ages. The Empire . . . has stood twelve thousand years. The dark ages
to come will endure not twelve, but thirty thousand years. A Second Empire will
rise, but between it and our civilization will be one thousand generations of suffer-
ing humanity . . . It is possible . . . to reduce the duration of anarchy to a single mil-
lennium, if my group is allowed to act now. (Asimov, 2004a, 30–31)
misinterpretations in order to further the ethical questions and for dramatic effect.
But because they can be misinterpreted just like other laws, critics argue that the
Three Laws cannot serve as a real moral code for regulating AI or robots. Finally,
some question to whom these ethical guidelines should apply.
Asimov died in 1992 due to complications from AIDS, which he contracted via
a contaminated blood transfusion during a 1983 heart bypass surgery.
Oliver J. Kim
See also: Beneficial AI, Asilomar Meeting on; Pathetic Fallacy; Robot Ethics.
Further Reading
Asimov, Isaac. 1994. I, Asimov: A Memoir. New York: Doubleday.
Asimov, Isaac. 2002. It’s Been a Good Life. Amherst, NY: Prometheus Books.
Asimov, Isaac. 2004a. The Foundation Novels. New York: Bantam Dell.
Asimov, Isaac. 2004b. I, Robot. New York: Bantam Dell.
Depending on the sample size of the data to be analyzed, the number of features,
and the types of machine learning algorithms selected, so many separate analyses
could be prohibitive given the computing resources that are available to the user.
An alternative approach is to use a stochastic search to approximate the best
combination of machine learning algorithms, parameter settings, and hyperpa-
rameter settings. A random number generator is used to sample from all possible
combinations until some computational limit is reached. The user manually
explores additional parameter and hyperparameter settings around the best
method before making a final choice. This has the advantage of being computa-
tionally manageable but suffers from the stochastic element where chance might
not explore the optimal combinations.
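A minimal sketch of such a stochastic search, assuming scikit-learn and a pair of candidate algorithms, hyperparameter ranges, and an evaluation budget chosen purely for illustration, might look like the following.

```python
# Random (stochastic) search over machine learning algorithms and their
# settings, with each sampled combination scored by cross-validation.
# The candidate algorithms, ranges, and budget are illustrative only.
import random
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def sample_candidate():
    """Draw one random algorithm + hyperparameter combination."""
    if random.random() < 0.5:
        return RandomForestClassifier(
            n_estimators=random.choice([50, 100, 200]),
            max_depth=random.choice([3, 5, 10, None]),
            random_state=0)
    return LogisticRegression(C=random.choice([0.01, 0.1, 1.0, 10.0]),
                              max_iter=1000)

random.seed(0)
best_score, best_model = -1.0, None
for _ in range(20):                          # the computational budget
    model = sample_candidate()
    score = cross_val_score(model, X, y, cv=5).mean()
    if score > best_score:
        best_score, best_model = score, model

print(best_model, round(best_score, 3))
```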
A solution to this is to add a heuristic element—a practical method, guide, or
rule—to create a stochastic search algorithm that can adaptively explore algo-
rithms and settings while improving over time. Approaches that employ stochastic
searches with heuristics are called automated machine learning because they
automate the search for optimal machine learning algorithms and settings. A sto-
chastic search might start by randomly generating a number of machine learning
algorithm, parameter setting, and hyperparameter setting combinations and then
evaluating each one using cross-validation, a technique for testing the effective-
ness of a machine learning model. The best of these is selected, randomly modi-
fied, and then evaluated again. This process is repeated until a computational limit
or a performance objective is reached. The heuristic algorithm governs this pro-
cess of stochastic search. The development of optimal search strategies is an active
area of research.
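One very simple heuristic of this select-modify-reevaluate kind is a hill climber over a single algorithm’s hyperparameters, sketched below; the search space and budget are again illustrative assumptions, and real automated machine learning systems use far richer search spaces and strategies.

```python
# Hill-climbing sketch of heuristic-guided AutoML: start from an initial
# configuration, randomly perturb one setting, and keep the change only if
# the cross-validated score improves. Ranges and budget are illustrative.
import random
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
random.seed(1)

def evaluate(config):
    model = RandomForestClassifier(**config, random_state=0)
    return cross_val_score(model, X, y, cv=5).mean()

config = {"n_estimators": 50, "max_depth": 3, "min_samples_leaf": 5}
score = evaluate(config)

choices = {"n_estimators": [50, 100, 200],
           "max_depth": [3, 5, 10, None],
           "min_samples_leaf": [1, 2, 5, 10]}

for _ in range(20):                              # computational budget
    candidate = dict(config)
    key = random.choice(list(choices))
    candidate[key] = random.choice(choices[key])  # random modification
    cand_score = evaluate(candidate)
    if cand_score > score:                        # heuristic: keep improvements
        config, score = candidate, cand_score

print(config, round(score, 3))
```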
The AutoML approach has numerous advantages. First, it can be more compu-
tationally efficient than the exhaustive grid search approach. Second, it makes
machine learning more approachable because it takes some of the guesswork out
of selecting an optimal machine learning algorithm and its many settings for a
given dataset. This helps bring machine learning to the novice user. Third, it can
yield more reproducible results if generalizability metrics are built into the heuris-
tic that is used. Fourth, it can yield more interpretable results if complexity met-
rics are built into the heuristic. Fifth, it can yield more actionable results if expert
knowledge is built into the heuristic.
Of course, there are some challenges with AutoML approaches. First is the
challenge of overfitting—producing an analysis that corresponds too closely to
known data but does not fit or predict unseen or new data—due to the evaluation
of many different algorithms. The more analytical methods that are applied to a
dataset, the higher the chance of learning the noise in the data that leads to a
model unlikely to generalize to independent data. This needs to be rigorously
addressed with any AutoML method. Second, AutoML can be computationally
intensive in its own right. Third, AutoML methods can generate very complex
pipelines that include multiple different machine learning methods. This can make
interpretation much more difficult than picking a single algorithm for the analysis.
Fourth, this field is still in its infancy. Ideal AutoML methods may not have been
developed as yet despite some promising early examples.
Jason H. Moore
to their importance in the story at that particular point in time. Other film editing
tenets that emerged from long experience by filmmakers are “exit left, enter right,”
which helps the viewer follow lateral movements of characters on the screen, and
the 180- and 30-degree rules for maintaining spatial relationships between sub-
jects and the camera. Such rules over time became codified as heuristics govern-
ing shot selection, cutting, and rhythm and pacing. One example is Joseph
Mascelli’s Five C’s of Cinematography (1965), which has grown into a vast knowl-
edge base for making decisions about camera angles, continuity, cutting, closeups,
and composition.
The first artificial intelligence film editing systems developed from these
human-curated rules and human-annotated movie stock footage and clips. An
early 1990s system is IDIC, developed by Warren Sack and Marc Davis in the
MIT Media Lab. IDIC is designed to solve the real-world problem of film editing
using Herbert Simon, J. C. Shaw, and Allen Newell’s General Problem Solver, an
early artificial intelligence program that was intended to solve any general prob-
lem using the same base algorithm. IDIC has been used to generate hypothetical
Star Trek television trailers assembled from a human-specified story plan centered
on a particular plot point.
Several film editing systems rely on idioms, that is, conventional procedures
for editing and framing filmed action in specific situations. The idioms them-
selves will vary based on the style of film, the given context, or the action to be
portrayed. In this way, the knowledge of expert editors can be approached in
terms of case-based reasoning, using a past editing recipe to solve similar current
and future problems. Editing for fight scenes follows common idiomatic path-
ways, as do ordinary conversations between characters. This is the approach
modeled by Li-wei He, Michael F. Cohen, and David H. Salesin’s Virtual Cinema-
tographer, which relies on expert knowledge of idioms in the editing of entirely
computer-generated video for interactive virtual worlds. The Declarative Camera
Control Language (DCCL) developed by He’s group formalizes the control of
camera positions to follow cinematographic conventions in the editing of CGI
animated films.
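A toy sketch can make the idiom idea concrete: a lookup table of editing recipes keyed by situation, from which an editing routine assembles a shot list. The situation names and shot recipes below are invented for illustration and are not the vocabulary of DCCL or the Virtual Cinematographer.

```python
# Toy sketch of idiom-based (case-based) editing: each situation maps to a
# conventional shot sequence, and a scene is cut by looking up its idiom.
# Situation names and shot recipes are invented for illustration.
IDIOMS = {
    "two_person_conversation": [
        "establishing two-shot",
        "over-the-shoulder on speaker A",
        "over-the-shoulder on speaker B",
        "reaction close-up",
    ],
    "fight_scene": [
        "wide master shot",
        "fast cut to medium on attacker",
        "fast cut to medium on defender",
        "close-up on impact",
    ],
}

def edit_scene(situation, n_shots=6):
    """Return a shot list by cycling through the idiom for the situation."""
    recipe = IDIOMS.get(situation, ["wide master shot"])  # fallback idiom
    return [recipe[i % len(recipe)] for i in range(n_shots)]

if __name__ == "__main__":
    for shot in edit_scene("two_person_conversation"):
        print(shot)
```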
More recently, researchers have been working with deep learning algorithms
and training data pulled from existing collections of recognized films possessing
high cinematographic quality, to create proposed best cuts of new films. Many of
the newer applications are available on mobile, drone, or handheld equipment.
Easy automatic video editing is expected to make the sharing of short and inter-
esting videos, assembled from shots made by amateurs with smartphones, a pre-
ferred medium of exchange over future social media. That niche is currently
occupied by photography. Automatic film editing is also in use as an editing tech-
nique in machinima films made using 3D virtual game engines with virtual actors.
Philip L. Frana
See also: Workplace Automation.
Further Reading
Galvane, Quentin, Rémi Ronfard, and Marc Christie. 2015. “Comparing Film-Editing.”
In Eurographics Workshop on Intelligent Cinematography and Editing, edited by
William H. Bares, Marc Christie, and Rémi Ronfard, 5–12. Aire-la-Ville, Switzer-
land: Eurographics Association.
He, Li-wei, Michael F. Cohen, and David H. Salesin. 1996. “The Virtual Cinematogra-
pher: A Paradigm for Automatic Real-Time Camera Control and Directing.” In
Proceedings of SIGGRAPH ’96, 217–24. New York: Association for Computing
Machinery.
Ronfard, Rémi. 2012. “A Review of Film Editing Techniques for Digital Games.” In
Workshop on Intelligent Cinematography and Editing. https://hal.inria.fr/hal
-00694444/.
these system categories are present in actual systems. One example of such ambi-
guity is in the levels of autonomy designated by SAE (formerly the Society of
Automotive Engineers) for driverless cars. A single system may be Level 2 semi-
autonomous, Level 3 conditionally autonomous, or Level 4 autonomous depend-
ing on road or weather conditions or upon circumstantial indices such as the
presence of road barriers, lane markings, geo-fencing, surrounding vehicles, or
speed. Autonomy level may also depend upon how an automotive task is defined.
In this way, the classification of a system depends as much upon the technological constitution of the system itself as upon the circumstances of its functioning and the parameters of the activity in focus.
EXAMPLES
Autonomous Vehicles
Automated, semiautonomous, conditionally autonomous, and fully autono-
mous vehicle systems help illustrate the distinctions between these types of sys-
tems. Cruise control functionality is an example of an automated technology. The
user sets a speed target for the vehicle and the vehicle maintains that speed, adjust-
ing acceleration and deceleration as the terrain requires. In the case of semiautonomous vehicles, by contrast, a vehicle may be equipped with an adaptive cruise con-
trol feature (one that regulates the speed of a vehicle relative to a leading vehicle
and to a user’s input) coupled with lane keeping assistance, automatic braking, and
collision mitigation technology that together make up a semiautonomous system.
Today’s commercially available vehicles are considered semiautonomous. Sys-
tems are capable of interpreting many potential inputs (surrounding vehicles, lane
markings, user input, obstacles, speed limits, etc.) and can regulate longitudinal
and lateral control to semiautonomously guide the trajectory of the vehicle.
Within this system, the human user is still enrolled in decision-making, monitor-
ing, and interventions. Conditional autonomy refers to a system that (under cer-
tain conditions) permits a human user to “exit the loop” of control and
decision-making. Once a goal is established (e.g., to continue on a path), the vehi-
cle processes emergent inputs and regulates its behavior to achieve the goal with-
out human monitoring or intervention. Behaviors internal to the activity (the
activity is defined by the goal and the available means) are regulated and con-
trolled without the participation of the human user. It is important to note that any
classification is contingent on the operationalization of the goal and activity.
Finally, an autonomous system possesses fewer limitations than conditional
autonomy and entails the control of all tasks in an activity. Like conditional auton-
omy, an autonomous system operates independently of a human user within the
activity structure.
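The automated end of this spectrum can be pictured with a toy control loop. The following Python sketch is illustrative only; the gain and the vehicle's response are invented, and real cruise controllers are considerably more sophisticated:

# Toy proportional cruise controller (all values invented for illustration).
def throttle_command(current_speed, target_speed, gain=0.5):
    """Return a throttle setting proportional to the speed error."""
    return gain * (target_speed - current_speed)

speed, target = 20.0, 30.0            # meters per second
for _ in range(20):
    throttle = throttle_command(speed, target)
    speed += 0.2 * throttle           # crude stand-in for the vehicle's response
    print(round(speed, 2))
# The loop regulates only speed; obstacle detection, lane keeping, and goal
# setting remain with the human user or with higher levels of automation.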
Autonomous Robotics
Examples of autonomous systems can be found across the field of robotics for a
variety of purposes. There are a number of reasons to replace or augment humans with autonomous robots, including safety (for example, spaceflight or planetary surface exploration), undesirable circumstances (monotonous tasks such as domestic chores and strenuous labor such as heavy lifting), and situations where human action is limited or impossible (search and rescue in confined conditions). As with automotive applications, robotics applications
may be considered autonomous within the constraints of a narrowly defined
domain or activity space, such as a manufacturing facility assembly line or home.
As with autonomous vehicles, the degree of autonomy is conditional upon the speci-
fied domain, and in many cases excludes maintenance and repair. However, unlike
automated systems, an autonomous robot within such a defined activity structure
will act to complete a specified goal through sensing its environment, processing
circumstantial inputs, and regulating behavior accordingly without necessitating
human intervention. Current examples of autonomous robots span an immense
variety of applications and include domestic applications such as autonomous
lawn care robots and interplanetary exploration applications such as the MER-A
and MER-B Mars rovers.
Semiautonomous Weapons
Autonomous and semiautonomous weapon systems are currently being devel-
oped as part of modern warfare capability. Like the above automotive and robot-
ics examples, the definition of, and distinction between, autonomous and
semiautonomous varies substantially on the operationalization of the terms, the
context, and the domain of activity. Consider the landmine as an example of an
automated weapon with no autonomous capability. It responds with lethal force
upon the activation of a sensor and involves neither decision-making capability
nor human intervention. In contrast, a semiautonomous system processes inputs
and acts accordingly for some set of tasks that constitute the activity of weaponry
in conjunction with a human user. Together, the weapons system and the human
user are necessary contributors to a single activity. In other words, the human
user is “in the loop.” These tasks may include identifying a target, aiming, and
firing. They may also include navigating toward a target, positioning, and reload-
ing. In a semiautonomous weapon system, these tasks are distributed between
the system and the human user. By contrast, an autonomous system would be
responsible for the whole set of these tasks without requiring the monitoring,
decision-making, or intervention of the human user once the goal was set and the
parameters specified. By these criteria, there are currently no fully autonomous
weapons systems. However, as noted above, these definitions are technologically
as well as socially, legally, and linguistically contingent. Most conspicuously in
the case of weapons systems, the definition of semiautonomous and autonomous
systems has ethical, moral, and political significance. This is especially true
when it comes to determining responsibility, because causal agency and
decision-making may be scattered across developers and users. The sources of
agency and decision-making may also be opaque as in the case of machine learn-
ing algorithms.
USER-INTERFACE CONSIDERATIONS
Ambiguity in definitions of semiautonomous and autonomous systems mirrors
the many challenges in designing optimized user interfaces for these systems. In
the case of vehicles, for example, ensuring that the user and the system (as devel-
oped by a system’s designers) share a common model of the capabilities being
automated (and the expected distribution and extent of control) is critical for safe
transference of control responsibility. Autonomous systems theoretically pose simpler user-interface challenges insofar as, once an activity domain is defined, con-
trol and responsibility are binary (either the system or the human user is
responsible). Here the challenge is reduced to specifying the activity and handing
over control.
Semiautonomous systems present more complex challenges for the design of
user-interfaces because the definition of an activity domain has no necessary rela-
tionship to the composition, organization, and interaction of its constituent tasks.
Particular tasks (such as a vehicle maintaining lateral position in a lane) may be
determined by an engineer’s application of specific technological equipment (and
the attendant limitations) and thus bear no relationship to the user’s mental repre-
sentation of that task. An illustrative example is an obstacle detection task in
which a semiautonomous system relies upon avoiding obstacles to move around
an environment. The obstacle detection mechanisms (camera, radar, optical sen-
sors, touch sensors, thermo sensors, mapping, etc.) determine what is or is not
considered an obstacle by the machine, and those limitations may be opaque to a
user. The resultant ambiguity requires that the system communicate to a human user when intervention is necessary, and it places the burden on the system (and the system's designers) to understand and anticipate potential incompatibilities between system and user models.
In addition to the issues above, other considerations for designing semiautono-
mous and autonomous systems (specifically in relation to the ethical and legal
dimensions complicated by the distribution of agency across developers and users)
include identification and authorization methods and protocols. The problem of
identifying and authorizing users for the activation of autonomous technologies is
critical where systems, once initiated, no longer rely upon continual monitoring,
intermittent decision-making, or intervention.
Michael Thomas
See also: Autonomy and Complacency; Driverless Cars and Trucks; Lethal Autonomous
Weapons Systems.
Further Reading
Antsaklis, Panos J., Kevin M. Passino, and Shyh Jong Wang. 1991. “An Introduction to
Autonomous Control Systems.” IEEE Control Systems 11, no. 4 (June): 5–13.
Bekey, George A. 2005. Autonomous Robots: From Biological Inspiration to Implementa-
tion and Control. Cambridge, MA: MIT Press.
Norman, Donald A., Andrew Ortony, and Daniel M. Russell. 2003. “Affect and Machine
Design: Lessons for the Development of Autonomous Machines.” IBM Systems
Journal 42, no. 1: 38–44.
See also: Battlefield AI and Robotics; Campaign to Stop Killer Robots; Lethal Autono-
mous Weapons Systems; Robot Ethics.
Further Reading
Arkin, Ronald C. 2010. “The Case for Ethical Autonomy in Unmanned Systems.” Journal
of Military Ethics 9, no. 4: 332–41.
Bhuta, Nehal, Susanne Beck, Robin Geiss, Hin-Yan Liu, and Claus Kress, eds. 2016.
Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge, UK: Cambridge
University Press.
Further Reading
André, Quentin, Ziv Carmon, Klaus Wertenbroch, Alia Crum, Frank Douglas, William
Goldstein, Joel Huber, Leaf Van Boven, Bernd Weber, and Haiyang Yang. 2018.
“Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big
Data.” Customer Needs and Solutions 5, no. 1–2: 28–37.
Bahner, J. Elin, Anke-Dorothea Hüper, and Dietrich Manzey. 2008. “Misuse of Auto-
mated Decision Aids: Complacency, Automation Bias, and the Impact of Training
Experience.” International Journal of Human-Computer Studies 66, no. 9:
688–99.
Lawless, W. F., Ranjeev Mittu, Donald Sofge, and Stephen Russell, eds. 2017. Autonomy
and Intelligence: A Threat or Savior? Cham, Switzerland: Springer.
Parasuraman, Raja, and Dietrich H. Manzey. 2010. “Complacency and Bias in Human
Use of Automation: An Attentional Integration.” Human Factors 52, no. 3:
381–410.
B
Battlefield AI and Robotics
Generals on the modern battlefield are witnessing a potential tactical and strategic
revolution due to the advancement of artificial intelligence (AI) and robotics and
their application to military affairs. Robotic devices, such as unmanned aerial vehi-
cles (UAVs), also known as drones, played a major role in the wars in Afghanistan
(2001–) and Iraq (2003–2011), as did other robots. It is conceivable that future wars
will be fought without human involvement. Autonomous machines will engage in
battle on land, in the air, and under the sea without human control or direction.
While this vision still belongs to the realm of science fiction, battlefield AI and
robotics raise a variety of practical, ethical, and legal questions that military pro-
fessionals, technological experts, jurists, and philosophers must grapple with.
What first comes to mind for many people when thinking about the application
of AI and robotics to the battlefield is “killer robots,” armed machines indiscrimi-
nately destroying everything in their path. There are, however, many uses for bat-
tlefield AI technology that do not involve killing. The most prominent use of such
technology in recent conflicts has been nonviolent in nature. UAVs are most often
used for monitoring and reconnaissance. Other robots, such as the PackBot manu-
factured by iRobot (the same company that produces the vacuum-cleaning
Roomba), are used to detect and examine improvised explosive devices (IEDs),
thereby aiding in their safe removal. Robotic devices are capable of traversing
treacherous ground, such as the caves and mountain crags of Afghanistan, and
areas too dangerous for humans, such as under a vehicle suspected of being rigged
with an IED. Unmanned Underwater Vehicles (UUVs) are similarly used under-
water to detect mines. The ubiquity of IEDs and mines on the modern battlefield
makes these robotic devices invaluable.
Another potential, not yet realized, life-saving capability of battlefield robotics
is in the field of medicine. Robots can safely retrieve wounded soldiers on the
battlefield in places unreachable by their human comrades, without putting additional human lives at grave risk. Robots can also be used to carry medical equipment and
medicines to soldiers on the battlefield and potentially even perform basic first aid
and other emergency medical procedures.
It is in the realm of lethal force that AI and robotics have the greatest potential
to alter the battlefield—whether on land, sea, or in the air. The Aegis Combat Sys-
tem (ACS) is an example of an automatic system currently deployed on destroyers
and other naval combat vessels by numerous navies throughout the world. The
system can track incoming threats—be they missiles from the surface or air or
mines or torpedoes from the sea—through radar and sonar. The system is inte-
grated with a powerful computer system and has the capability to destroy
identified threats with its own munitions. Though Aegis is activated and super-
vised manually, the system has the capability to act independently, so as to coun-
ter threats more quickly than would be possible for humans.
In addition to partially automated systems such as the ACS and UAVs, the
future may see the rise of fully autonomous military robots capable of making
decisions and acting of their own accord. The most potentially revolutionary
aspect of AI empowered robotics is that of lethal autonomous weapons (LAWs)—
more colloquially referred to as “killer robots.” Robotic autonomy exists on a slid-
ing scale. At one end of the scale are robots programmed to function automatically,
but in response to a given stimulus and only in one way. A mine that detonates
automatically when stepped on is an example of this level of autonomy. Also, at
the lower end of the spectrum are machines that, while unmanned, are remotely controlled by a human operator.
Semiautonomous systems are found near the middle of the spectrum. These
systems may be able to function independently of a human being, but only in lim-
ited ways. An example of such a system is a robot directed to launch, travel to a
specified location, and then return at a given time. In this scenario, the machine
does not make any “decisions” on its own. Semiautonomous devices may also be
programmed to complete part of a mission and then to wait for additional inputs
before proceeding to the next level of action. The final stage is full autonomy.
Fully autonomous robots are programmed with a goal and can carry out that goal
completely on their own. In battlefield scenarios, this may include the ability to
employ lethal force without direct human instruction.
Lethally equipped, AI-enhanced, fully autonomous robotic devices have the
potential to completely change the modern battlefield. Military ground units
comprising both human beings and robots, or only robots with no humans at all,
would increase the size of militaries. Small, armed UAVs would not be limited
by the need for human operators and would be gathered in large swarms with
the potential ability to overwhelm larger, but less mobile, forces. Such techno-
logical changes would necessitate similarly revolutionary changes in tactics,
strategy, and even the concept of war itself. As this technology becomes more
widely available, it will also become cheaper. This could upset the current bal-
ance of military power. Even relatively small countries, and perhaps even some
nonstate actors, such as terrorist groups, may be able to establish their own
robotic forces.
Fully autonomous LAWs raise a host of practical, ethical, and legal questions.
Safety is one of the primary practical concerns. A fully autonomous robot
equipped with lethal weaponry that malfunctions could pose a serious risk to any-
one in its path. Fully autonomous missiles could conceivably, due to some mechan-
ical fault, go off course and kill innocent people. Any kind of machinery is liable
to unpredictable technical errors and malfunctions. With lethal robotic devices,
such problems pose a serious safety risk to those who deploy them as well as inno-
cent bystanders. Even aside from potential malfunctions, limitations in program-
ming could lead to potentially calamitous mistakes. Programming robots to
distinguish between combatants and noncombatants, for example, poses a major
difficulty, and it is easy to imagine mistaken identity resulting in inadvertent
casualties. The ultimate worry, however, is that robotic AI will advance too rap-
idly and break away from human control. As in popular science fiction movies and literature, and in fulfillment of the prominent scientist Stephen Hawking's dire
prediction that the development of AI could result in the extinction of humanity,
sentient robots could turn their weaponry on people.
LAWs raise serious legal dilemmas as well. Human beings are subject to the
laws of war. Robots cannot be held liable, criminally, civilly, or in any other way,
for potential legal violations. This therefore raises the prospect of eliminating accountability for war crimes or other abuses of law. Serious questions are rele-
vant here: Can a robot’s programmer or engineer be held accountable for the
actions of the machine? Could a human who gave the robot its “orders” be held
responsible for unpredictable choices or mistakes made on an otherwise self-
directed mission? Such issues require thorough consideration prior to the deploy-
ment of any fully autonomous lethal machine.
Apart from legal matters of responsibility, a host of ethical considerations also
require resolution. The conduct of war requires split-second moral decision-
making. Will autonomous robots be able to differentiate between a child and a
soldier or recognize the difference between an injured and defenseless soldier and
an active combatant? Can a robot be programmed to act mercifully when a situa-
tion dictates, or will a robotic military force always be considered a cold, ruthless,
and merciless army of extermination? Since warfare is fraught with moral dilem-
mas, LAWs engaged in war will inevitably be faced with such situations. Experts
doubt lethal autonomous robots can ever be depended upon to take the correct
action. Moral behavior requires not only rationality—something that might be
programmed into robots—but also emotions, empathy, and wisdom. These latter
things are much more difficult to write into code.
The legal, ethical, and practical concerns raised by the prospect of ever more
advanced AI-powered robotic military technology have led many people to call for
an outright ban on research in this area. Others, however, argue that scientific
progress cannot be stopped. Instead of banning such research, they say, scientists
and society at large should look for pragmatic solutions to those problems. Some
claim, for example, that many of the ethical and legal problems can be resolved by
maintaining constant human supervision and control over robotic military forces.
Others point out that direct supervision is unlikely over the long run, as human
cognition will not be capable of matching the speed of computer thinking and
robot action. There will be an inexorable tendency toward more and more auton-
omy as the side that provides its robotic forces with greater autonomy will have an
insurmountable advantage over those who try to maintain human control. Fully
autonomous forces will win every time, they warn.
Though still in its emergent phase, the introduction of continually more
advanced AI and robotic devices to the battlefield has already resulted in tremen-
dous change. Battlefield AI and robotics have the potential to radically alter the
future of war. It remains to be seen if, and how, the technological, practical, legal,
and ethical limitations of this technology can be overcome.
William R. Patterson
See also: Autonomous Weapons Systems, Ethics of; Lethal Autonomous Weapons
Systems.
Further Reading
Borenstein, Jason. 2008. “The Ethics of Autonomous Military Robots.” Studies in Ethics,
Law, and Technology 2, no. 1: n.p. https://www.degruyter.com/view/journals/selt
/2/1/article-selt.2008.2.1.1036.xml.xml.
Morris, Zachary L. 2018. “Developing a Light Infantry-Robotic Company as a System.”
Military Review 98, no. 4 (July–August): 18–29.
Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New
York: W. W. Norton.
Singer, Peter W. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st
Century. London: Penguin.
Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.
Bayesian Inference
Bayesian inference is a way to calculate the probability of the validity of a propo-
sition based on a prior estimate of its probability plus any new and relevant data.
Bayes’ Theorem, from which Bayesian statistics are drawn, was a popular math-
ematical approach used in expert systems in the twentieth century. The Bayesian
theorem remains useful to artificial intelligence in the twenty-first century and
has been applied to problems such as robot locomotion, weather forecasting, juri-
metrics (the application of quantitative methods to law), phylogenetics (the evolu-
tionary relationships among organisms), and pattern recognition. It is also useful
in solving the famous Monty Hall problem and is often utilized in email spam
filters.
The mathematical theorem was developed by the Reverend Thomas Bayes
(1702–1761) of England and published posthumously as “An Essay Towards Solv-
ing a Problem in the Doctrine of Chances” in the Philosophical Transactions of
the Royal Society of London in 1763. It is sometimes called Bayes’ Theorem of
Inverse Probabilities. The first notable discussion of Bayes’ Theorem as applied to
the field of medical artificial intelligence appeared in a classic article entitled
“Reasoning Foundations of Medical Diagnosis,” written by George Washington
University electrical engineer Robert Ledley and Rochester School of Medicine
radiologist Lee Lusted and published by Science in 1959. As Lusted later remem-
bered, medical knowledge in the mid-twentieth century was usually presented as
symptoms associated with a disease, rather than as diseases associated with a
symptom. Bayesian inference led them to consider the idea that medical knowl-
edge could be expressed as the probability of a disease given the patient’s
symptoms.
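A small worked example may clarify the calculation. The prevalence and likelihood figures below are invented for illustration; Bayes' theorem simply combines them into the probability of the disease given the symptom:

# Hypothetical figures for illustration only.
p_disease = 0.01                   # prior: 1 percent of patients have the disease
p_symptom_given_disease = 0.90     # 90 percent of diseased patients show the symptom
p_symptom_given_healthy = 0.05     # 5 percent of healthy patients show it anyway

# Total probability of observing the symptom
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Bayes' theorem: probability of disease given the symptom
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(round(p_disease_given_symptom, 3))   # roughly 0.154

Even with a sensitive symptom, the low prior keeps the posterior probability modest, illustrating the kind of conditional reasoning Ledley and Lusted described.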
Bayesian statistics are conditional, allowing one to determine the chance that a
certain disease is present given a certain symptom, but only with prior knowledge
of how often the disease and symptom are correlated and how often the symptom
is present in the absence of the disease. It is very close to what Alan Turing
described as the factor in favor of the hypothesis provided by the evidence. Bayes’
undesirable behavior. Bayesian inference has also been introduced into the court-
room in the United Kingdom. In Regina v. Adams (1996), jurors were offered the
Bayesian approach by the defense team to help jurors form an unbiased mecha-
nism for combining introduced evidence, which involved a DNA profile and dif-
fering match probability calculations and constructing a personal threshold for
forming a judgment about convicting the accused “beyond a reasonable doubt.”
Bayes’ theorem had already been “rediscovered” several times before its 1950s
revival under Ledley, Lusted, and Warner. The circle of historic luminaries who
perceived value in the Bayesian approach to probability included Pierre-Simon
Laplace, the Marquis de Condorcet, and George Boole. The Monty Hall problem,
named for the host of the classic game show Let’s Make a Deal, involves a contes-
tant deciding whether to stick with the door they have picked or switch to another
unopened door after Monty Hall (with full knowledge of the location of the prize)
opens a door to reveal a goat. Rather counterintuitively, the chances of winning
under conditional probability are twice as large by switching doors.
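A brief simulation reproduces this counterintuitive result; the door labels and trial count below are arbitrary:

import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
wins_staying = sum(monty_hall_trial(switch=False) for _ in range(trials))
wins_switching = sum(monty_hall_trial(switch=True) for _ in range(trials))
print(wins_staying / trials)     # approaches 1/3
print(wins_switching / trials)   # approaches 2/3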
Philip L. Frana
See also: Computational Neuroscience; Computer-Assisted Diagnosis.
Further Reading
Ashley, Kevin D., and Stefanie Brüninghaus. 2006. “Computer Models for Legal Predic-
tion.” Jurimetrics 46, no. 3 (Spring): 309–52.
Barnett, G. Octo. 1968. “Computers in Patient Care.” New England Journal of Medicine
279 (December): 1321–27.
Bayes, Thomas. 1763. “An Essay Towards Solving a Problem in the Doctrine of Chances.”
Philosophical Transactions 53 (December): 370–418.
Donnelly, Peter. 2005. “Appealing Statistics.” Significance 2, no. 1 (February): 46–48.
Fox, John, D. Barber, and K. D. Bardhan. 1980. “Alternatives to Bayes: A Quantitative
Comparison with Rule-Based Diagnosis.” Methods of Information in Medicine 19,
no. 4 (October): 210–15.
Ledley, Robert S., and Lee B. Lusted. 1959. “Reasoning Foundations of Medical Diagno-
sis.” Science 130, no. 3366 (July): 9–21.
Lusted, Lee B. 1991. “A Clearing ‘Haze’: A View from My Window.” Medical Decision
Making 11, no. 2 (April–June): 76–87.
Warner, Homer R., Jr., A. F. Toronto, and L. G. Veasey. 1964. “Experience with Bayes’
Theorem for Computer Diagnosis of Congenital Heart Disease.” Annals of the
New York Academy of Sciences 115: 558–67.
with the First or Second Law” (Asimov 1950, 40). Asimov, in later writings, added
a Fourth Law or Zeroth Law, commonly paraphrased as “A robot may not harm
humanity, or, by inaction, allow humanity to come to harm” and described in
detail by the robot character Daneel Olivaw in Robots and Empire (Asimov 1985,
chapter 18).
Asimov’s zeroth law subsequently provoked discussion as to how harm to
humanity should be determined. The 2017 Asilomar Conference on Beneficial AI
took on this question, moving beyond the Three Laws and the Zeroth Law and
establishing twenty-three principles to safeguard humanity with respect to the
future of AI. The Future of Life Institute, sponsor of the conference, hosts the
principles on its website and has gathered 3,814 signatures supporting the prin-
ciples from AI researchers and other interdisciplinary supporters. The principles
fall into three main categories: research questions, ethics and values, and longer-
term concerns.
Those principles related to research aim to ensure that the goals of artificial
intelligence remain beneficial to humans. They are intended to guide financial
investments in AI research. To achieve beneficial AI, Asilomar signatories con-
tend that research agendas should support and maintain openness and dialogue
between AI researchers, policymakers, and developers. Researchers involved in
the development of artificial intelligence systems should work together to priori-
tize safety.
Proposed principles related to ethics and values are meant to reduce harm and
encourage direct human control over artificial intelligence systems. Parties to the
Asilomar principles subscribe to the belief that AI should reflect the human values of
individual rights, freedoms, and acceptance of diversity. In particular, artificial
intelligences should respect human liberty and privacy and be used solely to
empower and enrich humanity. AI must align with the social and civic standards
of humans. The Asilomar signatories maintain that designers of AI need to be held
responsible for their work. One noteworthy principle addresses the possibility of
an arms race of autonomous weapons.
The creators of the Asilomar principles, noting the high stakes involved,
included principles covering longer term issues. They urged caution, careful plan-
ning, and human oversight. Superintelligences must be developed for the larger
good of humanity, and not only to advance the goals of one company or nation.
Together, the twenty-three principles of the Asilomar Conference have sparked
ongoing conversations on the need for beneficial AI and specific safeguards con-
cerning the future of AI and humanity.
Diane M. Rodgers
See also: Accidents and Risk Assessment; Asimov, Isaac; Autonomous Weapons Sys-
tems, Ethics of; Campaign to Stop Killer Robots; Robot Ethics.
Further Reading
Asilomar AI Principles. 2017. https://futureoflife.org/ai-principles/.
Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.
Asimov, Isaac. 1985. Robots and Empire. New York: Doubleday.
Sarangi, Saswat, and Pankaj Sharma. 2019. Artificial Intelligence: Evolution, Ethics, and
Public Policy. Abingdon, UK: Routledge.
Berger-Wolf, Tanya (1972–)
Tanya Berger-Wolf is a professor in the Department of Computer Science at the
University of Illinois at Chicago (UIC). She is known for her contributions to
computational ecology and biology, data science and network analysis, and artifi-
cial intelligence for social good. She is the leading researcher in the field of com-
putational population biology, which uses artificial intelligence algorithms,
computational methods, the social sciences, and data collection to answer ques-
tions about plants, animals, and humans.
Berger-Wolf leads interdisciplinary field courses at the Mpala Research Centre
in Kenya with engineering students from UIC and biology students from Prince-
ton University. She works in Africa because of its rich genetic diversity and because it
possesses endangered species that are indicators of the health of life on the planet
generally. Her group wants to know how the environment shapes the behavior of social animals, as well as what puts a given species at risk.
She is cofounder and director of Wildbook, a nonprofit that creates wildlife
conservation software. Berger-Wolf’s work for Wildbook has included a crowd-
sourced project to take as many photographs of Grevy’s zebras as possible in order
to accomplish a full census of the rare animals. Analysis of the photos with artifi-
cial intelligence algorithms allows the group to identify every individual Grevy’s
zebra by its unique pattern of stripes—a natural bar code or fingerprint. The
Wildbook software identifies the animals from hundreds of thousands of pictures
using convolutional neural networks and matching algorithms. The census esti-
mates are used to target and invest resources in the protection and survival of the
zebras.
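The underlying matching step can be sketched, in greatly simplified form, as a nearest-neighbor search over image features. The vectors and animal identifiers below are invented and merely stand in for the convolutional-network embeddings that systems such as Wildbook compute from photographs:

# Invented feature vectors standing in for CNN embeddings of stripe patterns.
known_zebras = {
    "zebra_001": [0.90, 0.10, 0.30],
    "zebra_002": [0.20, 0.80, 0.50],
}
new_photo = [0.88, 0.15, 0.28]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

best_match = max(known_zebras,
                 key=lambda name: cosine_similarity(known_zebras[name], new_photo))
print(best_match)   # the catalogued individual whose pattern best matches the photo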
The Wildbook deep learning software can be used to identify individual mem-
bers of any striped, spotted, notched, or wrinkled species. Giraffe Spotter is Wild-
book software for giraffe populations. Wildbook crowdsources citizen scientists’
reports of giraffe encounters through its website, which includes gallery images
from handheld cameras and camera traps. Wildbook’s catalogue of individual
whale sharks uses an intelligent agent that extracts still images of tail flukes from
uploaded YouTube videos. The whale shark census yielded evidence that led the
International Union for Conservation of Nature to change the status of the animals
from “vulnerable” to “endangered” on the IUCN Red List of Threatened Species.
Wildbook is also using the software to inspect videos of hawksbill and green sea
turtles.
Berger-Wolf is also director of tech for the conservation nonprofit Wild Me. The
nonprofit uses machine vision artificial intelligence algorithms to identify indi-
vidual animals in the wild. Wild Me records information about animal locations,
migration patterns, and social groups. The goal is to develop a comprehensive
understanding of global diversity that can inform conservation policy. Wild Me is
a partner of Microsoft’s AI for Earth program.
Berger-Wolf was born in 1972 in Vilnius, Lithuania. She attended high school
in St. Petersburg, Russia, and completed her bachelor’s degree at Hebrew Univer-
sity in Jerusalem. She earned her doctorate from the Department of Computer
Science at the University of Illinois at Urbana-Champaign and pursued postdoc-
toral work at the University of New Mexico and Rutgers University. She is the
recipient of the National Science Foundation CAREER Award, the Association for
Women in Science Chicago Innovator Award, and Mentor of the Year at UIC.
Philip L. Frana
See also: Deep Learning.
Further Reading
Berger-Wolf, Tanya Y., Daniel I. Rubenstein, Charles V. Stewart, Jason A. Holmberg,
Jason Parham, and Sreejith Menon. 2017. “Wildbook: Crowdsourcing, Computer
Vision, and Data Science for Conservation.” Chicago, IL: Bloomberg Data for
Good Exchange Conference. https://arxiv.org/pdf/1710.08880.pdf.
Casselman, Anne. 2018. “How Artificial Intelligence Is Changing Wildlife Research.”
National Geographic, November. https://www.nationalgeographic.com/animals
/2018/11/artificial-intelligence-counts-wild-animals/.
Snow, Jackie. 2018. “The World’s Animals Are Getting Their Very Own Facebook.” Fast
Company, June 22, 2018. https://www.fastcompany.com/40585495/the-worlds
-animals-are-getting-their-very-own-facebook.
Berserkers
Berserkers are a fictional type of intelligent killer machines first introduced by
science fiction and fantasy author Fred Saberhagen (1930–2007) in a 1962 short
story, “Without a Thought.” Berserkers subsequently appeared as common antag-
onists in many more novels and stories by Saberhagen.
Berserkers are an ancient race of sentient, self-replicating, space-faring
machines programmed to destroy all life. They were created in a long-forgotten
interstellar war between two alien races, as an ultimate doomsday weapon
(i.e., one intended more as a threat or deterrent than for actual use). The details of how
they were unleashed in the first place are lost to time, as the Berserkers apparently
wiped out their creators along with their enemies and have been marauding the
Milky Way galaxy ever since. They range in size from human-scale units to heav-
ily armored planetoids (cf. Death Star) with a variety of weapons powerful enough
to sterilize planets.
The Berserkers prioritize destruction of any intelligent life that fights back,
such as humanity. They build factories to replicate and improve themselves, while
never changing their central directive of eradicating life. The extent to which they
undergo evolution is unclear; some individual units eventually deviate into ques-
tioning or even altering their goals, and others develop strategic genius
(e.g., Brother Assassin, “Mr. Jester,” Rogue Berserker, Shiva in Steel). While the
Berserkers’ ultimate goal of destroying all life is clear, their tactical operations are
unpredictable due to randomness from a radioactive decay component in their
cores. Their name is thus based on the Berserkers of Norse legend, fierce human
warriors who fought with a wild frenzy.
Berserkers illustrate a worst-case scenario for artificial intelligence: rampant
and impassive killing machines that think, learn, and reproduce. They show the
perilous hubris of creating AI so advanced as to surpass its creators’ comprehen-
sion and control and equipping such AI with powerful weapons, destructive intent,
and unchecked self-replication. If Berserkers are ever created and unleashed even
once, they can pose an endless threat to living beings across vast stretches of
space and time. Once unbottled, they are all but impossible to eradicate. This is
due not only to their advanced defenses and weapons but also to their far-flung
distribution, ability to repair and replicate, autonomous operation (i.e., without
any centralized control), capacity to learn and adapt, and infinite patience to lie
hiding in wait. In Saberhagen’s stories, the discovery of Berserkers is so terrifying
that human civilizations become extremely wary of developing their own AI, for
fear that it too may turn on its creators. However, some clever humans discover an
intriguing counter weapon to Berserkers: Qwib-Qwibs, self-replicating machines
programmed to destroy all Berserkers rather than all life (“Itself Surprised” by
Roger Zelazny). Cyborgs are another anti-Berserker tactic used by humans, push-
ing the boundary of what counts as organic intelligence (Berserker Man, Ber-
serker Prime, Berserker Kill).
Berserkers also illustrate the potential inscrutability and otherness of artificial
intelligence. Even though some communication with Berserkers is possible, their
vast minds are largely incomprehensible to the intelligent organic lifeforms flee-
ing from or fighting them, and they prove difficult to study due to their tendency
to self-destruct if captured. What can be understood of their thinking indicates
that they see life as a scourge, a disease of matter that must be extinguished. In
turn, the Berserkers do not fully understand organic intelligence, and despite
many attempts, they are never able to successfully imitate organic life. They do,
however, sometimes recruit human defectors (which they call “goodlife”) to serve
the cause of death and help the Berserkers fight “badlife” (i.e., any life that resists
extermination). Nevertheless, the ways that Berserkers and humans think are
almost completely incompatible, thwarting efforts toward mutual understanding
between life and nonlife. Much of the conflict in the stories hinges on apparent
differences between human and machine intelligence (e.g., artistic appreciation,
empathy for animals, a sense of humor, a tendency to make mistakes, the use of
acronyms for mnemonics, and even fake encyclopedia entries made to detect pla-
giarism). Berserkers are even sometimes foiled by underestimating nonintelligent
life such as plants and mantis shrimp (“Pressure” and “Smasher”).
In reality, the idea of Berserkers can be seen as a special case of the von Neu-
mann probe, an idea conceived of by mathematician and physicist John von Neu-
mann (1903–1957): self-replicating space-faring robots that could be dispersed to
efficiently explore a galaxy. The Turing Test, proposed by mathematician and
computer scientist Alan Turing (1912–1954), is also explored and upended in the
Berserker stories. In “Inhuman Error,” human castaways compete with a Ber-
serker to convince a rescue team they are human, and in “Without a Thought,” a
Berserker attempts to determine whether or not its opponent in a game is human.
Berserkers also offer a grim explanation for the Fermi paradox—the idea that if
advanced alien civilizations exist we should have heard from them by now. It
could be that Earth has not been contacted by alien civilizations because they have
been destroyed by Berserker-like machines or are hiding from them.
The concept of Berserkers, or something like them, has appeared across numer-
ous works of science fiction in addition to Saberhagen’s (e.g., works by Greg Bear,
Gregory Benford, David Brin, Ann Leckie, and Martha Wells; the Terminator
series of movies; and the Mass Effect series of video games). These examples all
show how the potential for existential threats from AI can be tested in the labora-
tory of fiction.
Jason R. Finley and Joan Spicci Saberhagen
See also: de Garis, Hugo; Superintelligence; The Terminator.
Further Reading
Saberhagen, Fred. 2015a. Berserkers: The Early Tales. Albuquerque: JSS Literary
Productions.
Saberhagen, Fred. 2015b. Berserkers: The Later Tales. Albuquerque: JSS Literary
Productions.
Saberhagen’s Worlds of SF and Fantasy. http://www.berserker.com.
The TAJ: Official Fan site of Fred Saberhagen’s Berserker® Universe. http://www
.berserkerfan.org.
2001. The U.S. Transportation Security Administration (TSA) began testing bio-
metric tools for identity verification purposes in 2015. In 2019, Delta Air Lines, in
partnership with U.S. Customs and Border Protection, offered optional facial rec-
ognition boarding to passengers at the Maynard Jackson International Terminal in
Atlanta. The system allows passengers to pick up boarding passes, self-check lug-
gage, and negotiate TSA checkpoints and gates without interruption. In initial
rollout, only 2 percent of passengers opted out.
Financial institutions are now beginning to adopt biometric authentication sys-
tems in regular commercial transactions. They are already in widespread use to
protect access to personal smart phones. Intelligent security will become even
more important as smart home devices connected to the internet demand support
for secure financial transactions. Opinions on biometrics often vary with chang-
ing situations and environments. Individuals who may favor use of facial recogni-
tion technology at airports to make air travel more secure might oppose digital
fingerprinting at their bank. Some people perceive private company use of bio-
metric technology as dehumanizing, treating and tracking them in real time as if
they were products rather than people.
At the local level, community policing is often cited as a successful way to
build relationships between law enforcement officers and the neighborhoods they
patrol. But for some critics, biometric surveillance redirects the focus away from
community relationship building and on to socio-technical control by the state.
Context, however, remains crucial. Use of biometrics in corporations can be per-
ceived as an equalizer, as it places white-collar employees under the same sort of
scrutiny long felt by blue-collar laborers. Researchers are beginning to develop
video analytics AI software and smart sensors for use in cloud security systems.
These systems can identify known people, objects, voices, and movements in real-
time surveillance of workplaces, public areas, and homes. They can also be trained
to alert users to the presence of unknown people.
Artificial intelligence algorithms used in the creation of biometric systems are
now being used to defeat them. Generative adversarial networks (GANs), for
instance, simulate human users of network technology and applications. GANs
have been used to create imaginary people’s faces from sets of real biometric
training data. GANs are often composed of a creator system, which makes each
new image, and a critic system that compares the artificial face against real photographs from the training data in an iterative process. The startup Icons8 claimed in 2020 that it
could create a million fake headshots from only seventy human models in a single
day. The company sells the headshots created with their proprietary StyleGAN
technology as stock photos. Clients have included a university, a dating app, and a
human resources firm. Rosebud AI creates similar GAN-generated photos and
sells them to online shopping sites and small businesses that cannot afford to hire
expensive models and photographers.
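The generator-and-critic arrangement described above can be sketched in a few lines of PyTorch. The example below is a toy, with a one-dimensional Gaussian standing in for face images and all architectural choices invented; it is not the StyleGAN pipeline used commercially:

import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic a target distribution (N(5, 2) here),
# while the critic learns to tell real samples from generated ones.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
critic = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 5.0   # "real" data
    fake = generator(torch.randn(64, 8))    # generated data

    # Critic update: label real samples 1 and generated samples 0.
    c_loss = (loss_fn(critic(real), torch.ones(64, 1))
              + loss_fn(critic(fake.detach()), torch.zeros(64, 1)))
    c_opt.zero_grad()
    c_loss.backward()
    c_opt.step()

    # Generator update: try to make the critic label fakes as real.
    g_loss = loss_fn(critic(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The mean of generated samples should drift toward 5.0 as training succeeds.
print(float(generator(torch.randn(256, 8)).mean()))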
Deepfake technology, involving machine learning techniques to create realistic
but counterfeit videos, has been used to perpetrate hoaxes and misrepresentations,
generate fake news clips, and commit financial fraud. Facebook accounts with
deepfake profile photos have been used to amplify social media political cam-
paigns. Smart phones with facial recognition locks are susceptible to deepfake
hacking. Deepfake technology also has legitimate applications. Films have used
such technology for actors in flashbacks or other similar scenes to make actors
appear younger. Films such as Rogue One: A Star Wars Story (2016) even used
digital technology to include the late Peter Cushing (1913–1994), recreating the character he played in the original 1977 Star Wars film.
Recreational users have access to face-swapping through a variety of software
applications. FaceApp allows users to upload a selfie and change hair and facial
expression. The program can also simulate aging of a person’s features. Zao is a
deepfake application that takes a single photo and swaps it with the faces of film
and television actors in hundreds of clips. Deepfake algorithms are now also in use to detect the very videos that deepfake techniques create.
Philip L. Frana
See also: Biometric Technology.
Further Reading
Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley,
Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversar-
ial Nets.” NIPS ’14: Proceedings of the 27th International Conference on Neural
Information Processing Systems 2 (December): 2672–80.
Hopkins, Richard. 1999. “An Introduction to Biometrics and Large-Scale Civilian Identi-
fication.” International Review of Law, Computers & Technology 13, no. 3:
337–63.
Jain, Anil K., Ruud Bolle, and Sharath Pankanti. 1999. Biometrics: Personal Identifica-
tion in Networked Society. Boston: Kluwer Academic Publishers.
Januškevič, Svetlana N., Patrick S.-P. Wang, Marina L. Gavrilova, Sargur N. Srihari, and
Mark S. Nixon. 2007. Image Pattern Recognition: Synthesis and Analysis in Bio-
metrics. Singapore: World Scientific.
Nanavati, Samir, Michael Thieme, and Raj Nanavati. 2002. Biometrics: Identity Verifica-
tion in a Networked World. New York: Wiley.
Reichert, Ramón, Mathias Fuchs, Pablo Abend, Annika Richterich, and Karin Wenz, eds.
2018. Rethinking AI: Neural Networks, Biometrics and the New Artificial Intelli-
gence. Bielefeld, Germany: Transcript-Verlag.
Woodward, John D., Jr., Nicholas M. Orlans, and Peter T. Higgins. 2001. Biometrics:
Identity Assurance in the Information Age. New York: McGraw-Hill.
Biometric Technology
A biometric involves the measurement of some characteristic of a human being. It
can be physiological, such as in fingerprint or facial recognition, or it can be
behavioral, as in keystroke pattern dynamics or walking stride length. The White
House National Science and Technology Council’s Subcommittee on Biometrics
defines biometric characteristics as “measurable biological (anatomical and physi-
ological) and behavioral characteristics that can be used for automated recogni-
tion” (White House, National Science and Technology Council 2006, 4). The
International Biometrics and Identification Association (IBIA) defines biometric
technologies as “technologies that automatically confirm the identity of people by
comparing patterns of physical or behavioral characteristics in real time against
enrolled computer records of those patterns” (International Biometrics and Identi-
fication Association 2019).
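In greatly simplified form, the automated comparison these definitions describe amounts to checking a fresh measurement against an enrolled template. The feature values and threshold below are invented for illustration and bear no relation to any deployed system:

# Toy verification check: accept the claimed identity only if the fresh sample
# falls close enough to the enrolled template. All values are invented.
enrolled_template = [0.42, 0.77, 0.15, 0.60]
fresh_sample = [0.40, 0.80, 0.14, 0.58]

distance = sum((e - f) ** 2 for e, f in zip(enrolled_template, fresh_sample)) ** 0.5
THRESHOLD = 0.1
print("identity confirmed" if distance < THRESHOLD else "identity rejected")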
AD 2019. While there are significant differences between the texts, both tell the
story of bounty hunter Rick Deckard who is tasked with retiring (or killing)
escaped replicants/androids (six in the novel, four in the film). The backdrop to
both stories is a future where cities have become overpopulated and highly pol-
luted. Nonhuman natural life has largely become extinct (through radiation sick-
ness) and replaced with synthetic and manufactured life. In this future, natural life
has become a valuable commodity.
Against this setting, replicants are designed to fulfill a range of industrial func-
tions, most notably as labor for off-world colonies. The replicants can be identified
as an exploited group, produced to serve human masters. They are discarded when
no longer useful and retired when they rebel against their conditions. Blade run-
ners are specialized law enforcement officers charged with capturing and destroy-
ing these rogue replicants. Blade runner Rick Deckard comes out of retirement to
hunt down the advanced Nexus-6 replicant models. These replicants have rebelled
against the slave-like conditions on Mars and have escaped to Earth.
The handling of artificial intelligence in both texts serves as an implicit critique
of capitalism. In the novel, the Rosen Association, and in the film the Tyrell Cor-
poration manufacture replicants in order to create a more docile workforce,
thereby suggesting capitalism turns humans into machines. These crass commer-
cial imperatives are underscored by Eldon Rosen (who is renamed Tyrell in the
film): “We produced what the colonists wanted. … We followed the time-honored
principle underlying every commercial venture. If our firm hadn’t made these pro-
gressively more human types, other firms would have.”
In the film, there are two categories of replicants: those who are programmed
not to know they are replicants, who are replete with implanted memories (like
Rachael Tyrell), and those who know they are androids and live by that knowledge
(the Nexus-6 fugitives). The film version of Rachael is a new Nexus-7 model,
implanted with the memories of Eldon Tyrell’s niece, Lilith. Deckard is tasked
with killing her but falls in love with her instead. The end of the film sees the two
fleeing the city together.
The novel treats the character of Rachael differently. Deckard attempts to enlist
the help of Rachael to assist him in tracking down the fugitive androids. Rachael
agrees to meet Deckard in a hotel in an attempt to get him to abandon the case.
During their meeting, Rachael reveals one of the fugitive androids (Pris Stratton)
is an exact duplicate of her (making Rachael a Nexus-6 model in the novel). Even-
tually, Deckard and Rachael have sex and profess their love for one another. How-
ever, it is revealed that Rachael has slept with other blade runners. Indeed, she is
programmed to do so in order to prevent them from completing their missions.
Deckard threatens to kill Rachael but does not follow through, choosing to leave
the hotel instead.
In the novel and the film, the replicants are undetectable. They appear to be
completely human, even under a microscope. The only way to identify them is
through the administration of the Voigt-Kampff test, which distinguishes humans
from androids based on emotional responses to various questions. The test is
administered with the assistance of a machine that measures blush response, heart
rate, and eye movement in response to questions dealing with empathy. Deckard’s
status as a human or a replicant is not immediately known. Rachael even asks him
if he has taken the Voigt-Kampff test. Deckard’s status remains ambiguous in the
film. Though the viewer may make their own decision, director Ridley Scott has
suggested that Deckard is, indeed, a replicant. Toward the end of the novel, Deck-
ard takes and passes the test, but begins questioning the efficacy of blade
running.
The book, more than the film, grapples with questions of what it means to be
human in the face of technological advances. The book shows the fragility of the
human experience and how it might easily be damaged by the very technology
designed to serve it. Penfield mood organs, for instance, are devices that individu-
als can use to regulate their emotions. All that is required is that a person locate an
emotion in a manual, dial the appropriate number, and then feel whatever they
want. The use of the device and its creation of artificial feelings suggests that
humans can become robotic, a point relayed by Deckard’s wife Iran:
My first reaction consisted of being grateful that we could afford a Penfield mood
organ. But then I realized how unhealthy it was, sensing the absence of life, not just
in this building but everywhere, and not reacting – do you see? I guess you don’t.
But that used to be considered a sign of mental illness; they called it ‘absence of
appropriate affect.’
The point Dick makes is that the mood organ prevents people from experiencing
the appropriate emotional qualities of life, the very thing the Voigt-Kampff test
suggests replicants cannot do.
Philip Dick was particularly noted for his more nebulous and, perhaps, even
pessimistic view of artificial intelligence. His robots and androids are decidedly
ambiguous. They want to simulate people, but they lack feelings and empathy.
This ambiguity strongly informs Do Androids Dream of Electric Sheep? and
evinces itself on-screen in Blade Runner.
Todd K. Platts
See also: Nonhuman Rights and Personhood; Pathetic Fallacy; Turing Test.
Further Reading
Brammer, Rebekah. 2018. “Welcome to the Machine: Artificial Intelligence on Screen.”
Screen Education 90 (September): 38–45.
Fitting, Peter. 1987. “Futurecop: The Neutralization of Revolt in Blade Runner.” Science
Fiction Studies 14, no. 3: 340–54.
Sammon, Paul S. 2017. Future Noir: The Making of Blade Runner. New York: Dey Street
Books.
Wheale, Nigel. 1991. “Recognising a ‘Human-Thing’: Cyborgs, Robots, and Replicants in
Philip K. Dick’s Do Androids Dream of Electric Sheep? and Ridley Scott’s Blade
Runner.” Critical Survey 3, no. 3: 297–304.
and its functioning requires massive and sustained computational power. The
Swiss brain research initiative, sponsored by the École Polytechnique Fédérale de
Lausanne (EPFL), began in 2005 with the formation of the Blue Brain Project
(BBP). The founding director of the Blue Brain Project is Henry Markram.
The Blue Brain Project has set the goal of simulating several mammalian brains
and “ultimately, to study the steps involved in the emergence of biological intelli-
gence” (Markram 2006, 153). These simulations were initially supported by the
enormous computing power of IBM’s BlueGene/L, the world’s top supercomputer
system from November 2004 to November 2007. BlueGene/L was replaced by a
BlueGene/P in 2009. The need for even more computational power led to
BlueGene/P being replaced in 2014 by BlueGene/Q. In 2018, the BBP selected
Hewlett-Packard to create a supercomputer (dubbed Blue Brain 5), which is to be
exclusively dedicated to neuroscience simulation.
The implementation of supercomputer-based simulations has shifted neurosci-
ence research from the actual lab into a virtual one. The achievement of digital
reconstructions of the brain in the Blue Brain Project allows experiments, through
controlled research flow and protocol, to be performed in an “in silico” environment, a pseudo-Latin phrase referring to the simulation of biological systems on computational devices. The potential to convert the analog brain into a digital copy on
supercomputers is suggestive of a paradigm shift in brain research. One key
assumption is that the digital or artificial copy will behave like an analog or real
brain. The software running on Blue Gene hardware, a simulation environment
dubbed NEURON that models the neurons, was developed by Michael Hines,
John W. Moore, and Ted Carnevale.
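The detailed compartmental models that NEURON supports are far beyond a short example, but the basic idea of simulating a neuron “in silico” can be conveyed with a toy leaky integrate-and-fire model. All constants below are invented for illustration and are not the Blue Brain Project’s parameters:

# Toy leaky integrate-and-fire neuron (illustrative constants only).
dt = 0.1          # time step, ms
tau = 10.0        # membrane time constant, ms
v_rest = -65.0    # resting potential, mV
v_thresh = -50.0  # spike threshold, mV
v_reset = -70.0   # reset potential after a spike, mV
drive = 20.0      # constant input drive, mV

v = v_rest
spike_times = []
for step in range(1000):
    v += dt * ((v_rest - v) + drive) / tau   # leaky integration toward rest plus input
    if v >= v_thresh:
        spike_times.append(step * dt)        # record the spike time in ms
        v = v_reset

print(len(spike_times), "spikes in 100 ms of simulated time")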
Considering the burgeoning budgets, expensive technology, and many interdis-
ciplinary scientists involved, the Blue Brain Project may be considered a typical
example of what, after World War II (1939–1945), was called Big Science. In addi-
tion, the research approach to the brain through simulation and digital imaging
procedures leads to problems such as the management of all of the data produced.
Blue Brain became an inaugural member of the Human Brain Project (HBP) con-
sortium and submitted a proposal to the European Commission’s Future & Emerg-
ing Technologies (FET) Flagship Programme. This application was accepted by
the European Union in 2013, and the Blue Brain Project is now a partner in an
even broader effort to study and conduct brain simulation.
Konstantinos Sakalis
See also: General and Narrow AI; Human Brain Project; SyNAPSE.
Further Reading
Djurfeldt, Mikael, Mikael Lundqvist, Christopher Johansson, Martin Rehn, Örjan Eke-
berg, Anders Lansner. 2008. “Brain-Scale Simulation of the Neocortex on the
IBM Blue Gene/L Supercomputer.” IBM Journal of Research and Development
52, no. 1–2: 31–41.
Markram, Henry. 2006. “The Blue Brain Project.” Nature Reviews Neuroscience 7, no. 2:
153–60.
Markram, Henry, et al. 2015. “Reconstruction and Simulation of Neocortical Microcir-
cuitry.” Cell 163, no. 2: 456–92.
Bostrom, Nick (1973–)
Nick Bostrom is a philosopher at Oxford University with an interdisciplinary aca-
demic background in physics and computational neuroscience. He is a founding
director of the Future of Humanity Institute and cofounder of the World Transhu-
manist Association. He has written or edited a number of books, including
Anthropic Bias (2002), Human Enhancement (2009), Superintelligence: Paths,
Dangers, Strategies (2014), and Global Catastrophic Risks (2014).
Bostrom was born in Helsingborg, Sweden, in 1973. Although he chafed against
formal schooling, he loved learning. He especially enjoyed subjects in science,
literature, art, and anthropology. Bostrom completed a bachelor’s degree in phi-
losophy, math, logic, and artificial intelligence at the University of Gothenburg
and master’s degrees in philosophy and physics from Stockholm University and
computational neuroscience from King’s College London. He was awarded his
doctorate in philosophy from the London School of Economics. Bostrom is a reg-
ular consultant or contributor to the European Commission, U.S. President’s
Council on Bioethics, the Central Intelligence Agency, and the Centre for the
Study of Existential Risk at Cambridge University.
Bostrom is known for his intellectual contributions to many fields and has pro-
posed or written extensively about several well-known philosophical arguments
and conjectures, including those on the simulation hypothesis, existential risk,
the future of machine intelligence, and transhumanism. The so-called “Simula-
tion Argument” is an extension of Bostrom’s interests in the future of technology,
as well as his observations on the mathematics of the anthropic bias. The argu-
ment consists of three propositions. The first proposition is that almost all civilizations that reach a human level of sophistication ultimately become extinct before reaching technological maturity. The second proposition is that civilizations that do reach technological maturity almost all lose interest in creating and running “ancestor simulations” of sentient people. The third proposition is that humanity currently lives in a simulation (the “simulation hypothesis”). At least one of the three propositions, he asserts, must be true.
If the first proposition were not true, then some percentage of civilizations at the current stage of human society would eventually reach technological maturity. If the second proposition were not true, some fraction of those civilizations would remain interested in running ancestor simulations, and researchers among them might run gigantic numbers of these simulations. In that case, there would be many multiples more simulated people living in simulated universes than real people living in real universes. It would be most likely, therefore, that humanity lives in one of the simulated universes. If neither of the first two propositions holds, in other words, the third very likely does.
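Bostrom's original presentation makes this reasoning quantitative. A simplified rendering of that calculation (notation adapted here, not quoted from Bostrom): let $f_p$ be the fraction of human-level civilizations that survive to technological maturity, $f_I$ the fraction of mature civilizations still interested in running ancestor simulations, and $\bar{N}$ the average number of simulations an interested civilization runs. The fraction of human-like observers who are simulated is then roughly

$$f_{\text{sim}} \approx \frac{f_p \, f_I \, \bar{N}}{f_p \, f_I \, \bar{N} + 1}$$

which approaches one whenever the product $f_p f_I \bar{N}$ is large, that is, whenever many surviving civilizations each run many simulations. This is the arithmetic behind the claim that, if the first two propositions fail, most observers are simulated.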
It is even possible, Bostrom asserts, that a civilization inside a simulation might
be running its own simulations. Simulations could be living inside simulated uni-
verses, inside of their own simulated universes, in the manner of an infinite
regress. It is also possible that all civilizations will go extinct, perhaps when a
particular technology is discovered, which represents an existential risk beyond
all ability to control.
Bostrom’s argument assumes that the truth of the external world is not some-
how hidden from humanity, an argument that extends back to Plato’s belief in the
reality of universals (the “Forms”) and the ability of human senses to perceive
only particular instances of universals. His argument also assumes that the capa-
bilities of computers to simulate things now will only grow in power and sophisti-
cation. Bostrom points to computer games and literature as current examples of
natural human enchantment with simulated reality.
The Simulation Argument is often confused with only the third proposition, the
narrow hypothesis that humanity lives in a simulation. Bostrom thinks there is a
less than 50 percent chance that humans live in some sort of artificial matrix. He
also believes that, if humanity were living in one, it would be very unlikely for society to observe “glitches” that betrayed the presence of the simulation, because the makers have complete control over the running of the simulation. Conversely, the makers could also choose to let humans know that they are living in a simulation.
Existential risks are those that catastrophically threaten the survival of all
humankind. Bostrom believes that the greatest existential risks come from humans
themselves rather than natural hazards (e.g., asteroids, earthquakes, and epidemic
disease). Artificial risks such as synthetic biology, molecular nanotechnology, or
artificial intelligence, he believes, are far more dangerous.
Bostrom distinguishes between local, global, and existential risks. Local risks
might involve the loss of a priceless work of art or a car crash. Global risks could
involve destruction wrought by a military dictator or the eruption of a supervol-
cano. Existential risks are different in scope and severity. They are pan-
generational and permanent. Reducing existential risk is, in his view,
the most important thing that human beings can do, because of the numbers of
lives potentially saved; working against existential risk is also one of the most
neglected activities of humanity.
He also defines a number of classes of existential risk. These include human
extinction, that is, the extinguishment of the species before it reaches technologi-
cal maturity; permanent stagnation, or the plateauing of human technological
achievement; flawed realization, where humanity fails to use advanced technol-
ogy for an ultimately worthwhile purpose; and subsequent ruination, where soci-
ety reaches technological maturity, but then something goes wrong. Bostrom
speculates that while humanity has not used human creativity to make a technol-
ogy that unleashes existentially destructive power, it is conceivable that it might
do so in the future. Human society has also not yet invented a technology so terrible in its consequences that humanity would wish it could collectively disinvent it. The goal
would be to get on to a safe technological course that involves global coordination
and is sustainable.
Bostrom uses the metaphor of changed brain complexity in the evolution of
humans from apes, which took only a few hundred thousand generations, to argue
for the possibility of machine superintelligence. Machine learning (that is, the use of algorithms that themselves learn) makes possible artificial systems that are not limited to a single domain. He also notes that computers operate at much higher processing
speeds than human neurons.
Brooks, Rodney (1954–)
Rodney Brooks is a computer science researcher, entrepreneur, and business and
policy advisor. He is an authority in computer vision, artificial intelligence, robot-
ics, and artificial life. Brooks is famous for his work on behavior-based robotics
and artificial intelligence. His iRobot Roomba autonomous robotic vacuum clean-
ers are among the most ubiquitous domestic robots in use in the United States.
Brooks is influential for his advocacy of a bottom-up approach to computer sci-
ence and robotics, an epiphany he had while on a long, uninterrupted visit to his
wife’s family’s home in Thailand. Brooks argues that situatedness, embodiment,
and perception are as important to modeling the dynamic behaviors of intelligent
creatures as cognition in the brain. This approach is now known as action-based
robotics or behavior-based artificial intelligence. Brooks’ approach to intelligence
without explicitly designed reasoning may be contrasted with the symbolic rea-
soning and representation approach common to the first several decades of artifi-
cial intelligence research.
Brooks noted that much of the early progress in robotics and artificial intelli-
gence had been predicated on the formal framework and logical operators of the
universal computational architecture created by Alan Turing and John von Neu-
mann. He believed that these artificial systems had diverged widely from the
actual biological systems they were intended to represent. Living organisms relied
on low-speed, massively parallel processing and adaptive engagement with their
environments. These were not, in his view, features of classical computing archi-
tecture, but rather aspects of what Brooks began, in the mid-1980s, to refer to as
subsumption architecture.
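The layering Brooks described is often illustrated with a stack of simple behaviors, each of which can subsume (override) the output of the layers beneath it. The Python sketch below is a minimal, hypothetical illustration of that idea; the sensor names and behaviors are invented for the example and are not Brooks' code or any iRobot firmware.

```python
# Minimal sketch of a subsumption-style controller (illustrative only).
# Each layer proposes an action; higher-priority layers subsume lower ones.

def wander(sensors):
    """Lowest layer: keep moving forward by default."""
    return "drive forward"

def avoid_obstacle(sensors):
    """Middle layer: turn away when a bump sensor fires."""
    if sensors.get("bump"):
        return "turn right"
    return None  # defer to lower layers

def avoid_cliff(sensors):
    """Highest layer: stop immediately at a drop-off, such as a stairwell."""
    if sensors.get("cliff"):
        return "stop"
    return None

# Layers listed from highest to lowest priority.
LAYERS = [avoid_cliff, avoid_obstacle, wander]

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control({"bump": False, "cliff": False}))  # drive forward
print(control({"bump": True, "cliff": False}))   # turn right
print(control({"bump": True, "cliff": True}))    # stop (cliff layer subsumes all)
```

The point of the sketch is that no layer reasons over a world model; each reacts directly to sensing, and competent overall behavior emerges from the arbitration among layers.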
For Brooks, behavior-based robots are situated in real environments in the
world, and they learn successful actions from that world. They must be embodied
so that they can relate to the world and receive immediate feedback from their
sensory inputs. The origin of intelligence ordinarily arises from specific
and defuse improvised explosive devices in Iraq and Afghanistan. PackBot has
also seen service at the site of the World Trade Center following the terrorist
attacks of September 11, 2001, and in the initial assessment of the tsunami- and
earthquake-damaged Fukushima Daiichi nuclear power plant in Japan. In 2000,
Hasbro marketed a toy robot designed by Brooks and others at iRobot. The result,
My Real Baby, is a lifelike doll capable of crying, fussing, sleeping, giggling, and
expressing hunger.
The iRobot Corporation is also the creator of the Roomba cleaning robot.
Roomba, introduced in 2002, is disc-shaped with roller wheels and various
brushes, filters, and a squeegee vacuum. Like other behavior-based robots devel-
oped by Brooks, the Roomba detects obstacles with sensors and avoids dangers
such as falling down stairs. Newer models use infrared beams and photocell sensors to navigate back to their charging stations and to map out rooms. By 2019 iRobot had sold more
than 25 million robots worldwide.
Brooks is also cofounder and chief technology officer of Rethink Robotics. The
company, founded as Heartland Robotics in 2008, develops relatively inexpensive
industrial robots. Rethink’s first robot was Baxter, which is capable of simple
repetitive tasks such as loading, unloading, assembling, and sorting. Baxter pos-
sesses an animated human face drawn on a digital screen mounted at its top. Bax-
ter has embedded sensors and cameras that help it recognize and avoid collisions
when people are near, an important safety feature. Baxter can be used in ordinary
industrial environments without a security cage. The robot can be programmed
quickly by unskilled workers, who simply move its arms around in the expected
way to direct its movements. Baxter stores these motions in memory and adapts
them to specific tasks. Fine movements can be inputted using the controls on its
arms. Rethink’s Sawyer collaborative robot is a smaller version of Baxter, which
is marketed for use in completing dangerous or monotonous industrial tasks, often
in confined spaces.
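Programming by physically guiding the arms is commonly called kinesthetic teaching, or record-and-replay demonstration. The fragment below is only a schematic sketch of that idea; the function names (`read_joint_angles`, `move_to`) are hypothetical placeholders and not Rethink Robotics' actual software interface.

```python
import time

# Hypothetical stand-ins for a robot's low-level interface (not a real SDK).
def read_joint_angles():
    """Return the arm's current joint angles while a worker guides it by hand."""
    return [0.0] * 7  # placeholder: seven joint angles

def move_to(joint_angles):
    """Command the arm to move to a previously recorded set of joint angles."""
    print("moving to", joint_angles)

def record_demonstration(duration_s=2.0, rate_hz=10):
    """Sample arm poses while the worker moves the arm through the task."""
    trajectory = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        trajectory.append(read_joint_angles())
        time.sleep(1.0 / rate_hz)
    return trajectory

def replay(trajectory):
    """Replay the stored motion; a real system would also adapt it to the task at hand."""
    for pose in trajectory:
        move_to(pose)

demonstration = record_demonstration(duration_s=0.5)
replay(demonstration)
```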
Brooks has often expressed the view that the hard problems of consciousness
continue to elude scientists. Artificial intelligence and artificial life researchers,
he says, have missed something important about living systems that keeps the
chasm between the nonliving and living worlds large. This remains true even
though all of the living features of our world are built up from nonliving atoms.
Brooks suggests that perhaps some of the parameters used by AI and ALife
researchers are wrong or that current models lack enough complexity. Or it may
be that researchers continue to lack sufficient raw computing power. But Brooks
suggests that it may be that there is something—an ingredient or a property—
about biological life and subjective experience that is currently undetectable or
hidden from scientific view.
Brooks studied pure mathematics at Flinders University in Adelaide, South
Australia. He completed his PhD under American computer scientist and cogni-
tive scientist John McCarthy at Stanford University. His doctoral thesis was
expanded and published as Model-Based Computer Vision (1984). He served as
Director of the MIT Artificial Intelligence Laboratory (renamed Computer Sci-
ence & Artificial Intelligence Laboratory (CSAIL) in 2003) from 1997 to 2007.
Brooks is the recipient of numerous honors and awards for artificial intelligence
and robotics. He is a Fellow of the American Academy of Arts & Sciences and a
Fellow of the Association for Computing Machinery. Brooks is the winner of the
prestigious IEEE Robotics and Automation Award and the Joseph F. Engelberger
Robotics Award for Leadership. He is currently deputy board chairman of the
advisory board of the Toyota Research Institute.
Philip L. Frana
See also: Embodiment, AI and; Tilden, Mark.
Further Reading
Brooks, Rodney A. 1984. Model-Based Computer Vision. Ann Arbor, MI: UMI Research
Press.
Brooks, Rodney A. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Sys-
tems 6, no. 1–2 (June): 3–15.
Brooks, Rodney A. 1991. “Intelligence without Reason.” AI Memo No. 1293. Cambridge,
MA: MIT Artificial Intelligence Laboratory.
Brooks, Rodney A. 1999. Cambrian Intelligence: The Early History of the New AI. Cam-
bridge, MA: MIT Press.
Brooks, Rodney A. 2002. Flesh and Machines: How Robots Will Change Us. New York:
Pantheon.
Brooks, Rodney A., and Anita M. Flynn. 1989. “Fast, Cheap, and Out of Control.” Journal
of the British Interplanetary Society 42 (December): 478–85.
Brynjolfsson, Erik (1962–)
Erik Brynjolfsson is director of the Massachusetts Institute of Technology Initia-
tive on the Digital Economy. He is also Schussel Family Professor at the MIT
Sloan School and Research Associate at the National Bureau of Economic
Research (NBER). Brynjolfsson’s research and writing is in the area of informa-
tion technology productivity and its relation to labor and innovation.
Brynjolfsson’s work has long been at the center of discussions about the impacts
of technology on economic relations. His early work highlighted the relationship
between information technology and productivity, especially the so-called pro-
ductivity paradox. Specifically, Brynjolfsson found “broad negative correlations
with economywide productivity and information worker productivity” (Brynjolfs-
son 1993, 67). He suggested that the paradox might be explained by mismeasure-
ment of impact, a lag between initial cost and eventual benefits, private benefits
accruing at the expense of aggregate benefit, or outright mismanagement.
However, numerous empirical studies by Brynjolfsson and collaborators also
show that information technology spending has significantly enhanced
productivity—at least since 1991. Brynjolfsson has shown that information tech-
nology, and more specifically electronic communication networks, increases mul-
titasking. Multitasking in turn improves productivity, the development of
knowledge networks, and worker performance. The relationship between IT and
productivity thus represents a “virtuous cycle” more than a simple causal link: as
performance increases, users are encouraged to adopt knowledge networks that
improve productivity and operational performance.
The productivity paradox has attracted renewed interest in the age of artificial
intelligence. The struggle between human and artificial labor poses a brand-new
set of challenges for the digital economy. Brynjolfsson writes about the phenom-
enon of frictionless commerce, a feature produced by online activities such as
instant price comparison by smart shopbots. Retailers such as Amazon have
understood the way online markets work in the age of AI and have remade their
supply chains and distribution strategies. This reshaping of online commerce has
transformed the concept of efficiency itself. In the brick-and-mortar economy,
price and quality comparisons may be done by secret human shoppers. This pro-
cess can be slow and costly. By contrast, the costs of acquiring some kinds of
online information are now effectively zero, since consumers (and web-scraping
bots) can easily surf from one website to another.
In the best-selling book Race Against the Machine (2011), Brynjolfsson and
coauthor Andrew McAfee tackle the influence of technology on employment, the
economy, and productivity growth. They are especially interested in the process
of creative destruction, a theory popularized by economist Joseph Schumpeter in
Capitalism, Socialism, and Democracy (1942). Brynjolfsson and McAfee show
that while technology is a positive asset for the economy as a whole, it may not
automatically benefit everyone in society. In fact, the benefits that accrue from
technological innovations may be unequal, favoring the small groups of innova-
tors and investors that dominate digital markets. Brynjolfsson and McAfee’s main
conclusion is that humans should not compete against machines, but instead part-
ner with machines. Innovation is enhanced, and human capital is improved, when
people learn skills to participate in the new age of smart machines.
In The Second Machine Age (2014), Brynjolfsson and McAfee shed more light
on this subject by surveying the role of data in the digital economy and the grow-
ing importance of artificial intelligence. The authors note that data-driven intelli-
gent machines are a central feature of internet commerce. Artificial intelligence
opens the door to all kinds of new services and features. They argue that these
transformations not only affect productivity indexes but also reshape our very
sense of what it means to engage in capitalist enterprise. Brynjolfsson and McAfee
have much to say about the destabilizing effects of a growing chasm between
internet moguls and ordinary people. Of particular concern to the authors is
unemployment caused by artificial intelligence and smart machines. In Second
Machine Age, Brynjolfsson and McAfee reiterate their view that there should not
be a race against technology, but meaningful coexistence with it in order to build
a better global economy and society.
In Machine, Platform, Crowd (2017), Brynjolfsson and McAfee explain that in
the future the human mind must learn to coexist with smart machines. The great
challenge is to shape how society will use technology and how the positive attri-
butes of data-driven innovation and artificial intelligence can be nourished while
the negative aspects are pruned away. Brynjolfsson and McAfee imagine a future in which labor is not simply suppressed by efficient machines and the disruptive effects of platforms, but in which new matchmaking businesses governing intricate economic structures, large and enthusiastic online crowds, and copious amounts of human knowledge and expertise are used to strengthen supply chains and
Calo is the Lane Powell and D. Wayne Gittinger Associate Professor at the Uni-
versity of Washington School of Law. In 2016, Calo and the Tech Policy Lab
hosted the inaugural Obama White House workshop on artificial intelligence pol-
icy. He regularly testifies on national and international concerns related to AI and
robotics, including in 2013 before the U.S. Senate Judiciary Committee on the
domestic use of drones and in 2016 before the German Bundestag (Parliament)
about robotics and artificial intelligence. He serves on numerous advisory boards
for organizations such as AI Now, Responsible Robotics, and the University of
California People and Robots Initiative and on numerous conference program
committees such as FAT* and Privacy Law Scholars Conference, where much of
the contemporary conversation around AI and its social impacts takes place.
Batya Friedman
See also: Accidents and Risk Assessment; Product Liability and AI.
Further Reading
Calo, Ryan. 2011. “Peeping Hals.” Artificial Intelligence 175, no. 5–6 (April): 940–41.
Calo, Ryan. 2014. “Digital Market Manipulation.” George Washington Law Review 82,
no. 4 (August): 995–1051.
Calo, Ryan. 2015. “Robotics and the Lessons of Cyberlaw.” California Law Review 103,
no. 3: 513–63.
Calo, Ryan. 2017. “Artificial Intelligence Policy: A Primer and Roadmap.” University of
California, Davis Law Review 51: 399–435.
Crawford, Kate, and Ryan Calo. 2016. “There Is a Blind Spot in AI Research.” Nature 538
(October): 311–13.
many member organizations, such as the International Committee for Robot Arms
Control and Amnesty International. Leadership of the campaign consists of a
steering committee and a global coordinator. As of 2018, the steering committee
is composed of eleven NGOs. The global coordinator of the campaign is Mary
Wareham, who previously led international efforts to regulate land mines and
cluster munitions.
As with campaigns against land mines and cluster munitions, efforts to prohibit
weaponized robots focus on their potential to cause unnecessary suffering and
their risk of indiscriminate harm to civilians. The prohibition of weapons on an
international scale is coordinated through the United Nations Convention on Cer-
tain Conventional Weapons (CCW), which first came into effect in 1983. The
Campaign to Stop Killer Robots advocates for the inclusion of LAWS in the CCW,
as the CCW has not yet agreed on a ban of weaponized robots and does not include any mechanism for enforcing agreed-upon prohibitions.
The Campaign to Stop Killer Robots also supports the creation of additional
preemptive bans that could be enacted through new international treaties. In addi-
tion to lobbying governing bodies for bans by treaty and convention, the Cam-
paign to Stop Killer Robots provides resources for educating and organizing the
public, including multimedia databases, campaign reports, and a mailing list. The
Campaign also pursues the cooperation of technology companies, seeking their
voluntary refusal to engage in the development of LAWS. The Campaign has an
active social media presence through the @BanKillerRobots handle, where it
tracks and shares the names of corporations that pledge not to participate in the
design or distribution of intelligent weapons.
Jacob Aaron Boss
See also: Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics; Lethal
Autonomous Weapons Systems.
Further Reading
Baum, Seth. 2015. “Stopping Killer Robots and Other Future Threats.” Bulletin of the
Atomic Scientists, February 22, 2015. https://thebulletin.org/2015/02/stopping
-killer-robots-and-other-future-threats/.
Campaign to Stop Killer Robots. 2020. https://www.stopkillerrobots.org/.
Carpenter, Charli. 2016. “Rethinking the Political / -Science- / Fiction Nexus: Global Pol-
icy Making and the Campaign to Stop Killer Robots.” Perspectives on Politics 14,
no. 1 (March): 53–69.
Docherty, Bonnie. 2012. Losing Humanity: The Case Against Killer Robots. New York:
Human Rights Watch.
Garcia, Denise. 2015. “Killer Robots: Why the US Should Lead the Ban.” Global Policy
6, no. 1 (February): 57–63.
Caregiver Robots
Caregiver robots are personal support robots designed to assist people who, for a
variety of reasons, may require assistive technology for long-term care, disabil-
ity, or supervision. Although not in widespread use, caregiver robots are
interaction and companionship. Social robots may resemble humans but are often
interactive smart toys or artificial pets.
In Japan, robots are often described as iyashi, a word also used to describe a
subgenre of anime and manga created for the purpose of emotional healing.
A wide variety of soft-tronic robots are available for Japanese children and adults
as huggable companions. Wandakun was a fuzzy koala bear-type robot developed
by Matsushita Electric Industrial (MEI) in the 1990s. The bear squirmed when pet-
ted, could sing, and could respond to touch with a few Japanese phrases. Babyloid
is a plush robot baby beluga whale designed by Masayoshi Kano at Chukyo Uni-
versity to alleviate symptoms of depression in geriatric patients. Only seventeen
inches in length, Babyloid blinks its eyes and will take “naps” when rocked. LED
lights embedded in its cheeks glow when it is “happy.” The robot is also capable of
shedding blue LED tears when it is not happy. Babyloid is capable of making more
than 100 different sounds. At a cost of more than $1,000 each, it is no toy.
The artificial baby harp seal Paro, designed by Japan’s National Institute of
Advanced Industrial Science and Technology (AIST), is designed to give comfort
to patients with dementia, anxiety, or depression. The eighth-generation Paro is
packed with thirteen surface and whisker sensors, three microphones, two vision
sensors, and seven actuators for the neck, fins, and eyelids. Paro’s inventor, Taka-
nori Shibata of AIST’s Intelligent System Research Institute, notes that patients
with dementia experience less aggression and wandering, and more social interac-
tion, when using the robot. Paro is considered a Class II medical device in the
United States, in the same category of risk that comprises power wheelchairs and
X-ray machines. AIST is also the developer of Taizou, a twenty-eight-inch robot
that can replicate the movements of thirty different exercises. Taizou is used in
Japan to motivate senior citizens to exercise and stay fit.
The well-known AIBO developed by Sony Corporation is a robotic therapy dog
as well as a very pricey toy. Sony’s Life Care Design subsidiary began introducing
a new generation of dog robots into retirement homes owned by the company in
2018. AIBO’s successor, the humanoid QRIO robot, has been proposed as a plat-
form for simple childcare activities, such as interactive games and singalongs.
Palro, another robot for eldercare therapy made by Fujisoft, is already in use in
more than 1,000 senior citizen facilities. Its artificial intelligence software has
been upgraded several times since initial release in 2010. Both are used to reduce
symptoms of dementia and for entertainment.
Japanese corporations have also cultivated a broader segment of users of so-
called partner-type personal robots. These robots are designed to promote human-
machine interaction and reduce loneliness and mild depression. NEC Corporation
began creating its cute PaPeRo (Partner-Type Personal Robot) in the late 1990s.
PaPeRo communications robots can see, listen, speak, and make various motions.
Current versions have twin camera eyes capable of facial recognition and are
designed to help family members living in separate homes monitor one another.
The Childcare Version of PaPeRo plays with children and functions as a short-
term babysitter.
Toyota introduced its family of humanoid Partner Robots in 2005. The compa-
ny’s robots are designed for a wide variety of purposes, from human support and
about five and a half feet in height and has simple controls and a night-vision cam-
era. The multidisciplinary, collaborative European Mobiserv project aims to cre-
ate a robot that reminds elderly clients to take their medications, eat meals, and
stay active. The Mobiserv robot is embedded in a smart home environment of
optical and other sensors and automated devices. Mobiserv is designed to
interact with smart clothes that collect health-related data. Mobiserv involves a
partnership between Systema Technologies and the nine European partners repre-
senting seven countries.
The objective of the EU CompanionAble Project involving fifteen institutions
coordinated by the University of Reading is to create a mobile robotic companion
to demonstrate the advantages of information and communication technologies in
the care of the elderly. The CompanionAble robot attempts to address emergency
and security concerns in early stage dementia, provide cognitive stimulation and
reminders, and summon human caregiver assistance. CompanionAble also inter-
acts with a variety of sensors and devices in a smart home setting. The QuoVADis
Project at Broca Hospital in Paris, a public university hospital for geriatric care, has
a similar ambition, to create a robot for at-home care of cognitively impaired
elderly people. The Fraunhofer Institute for Manufacturing Engineering and
Automation continues to design and manufacture successive generations of modu-
lar robots called Care-O-Bots, which are intended for use in hospitals, hotels, and nursing homes. The Care-O-Bot 4 service robot can reach from the floor to a shelf
with its long arms and rotating, bending hip joint. The robot is designed to be
accepted as something that is friendly, helpful, courteous, and smart.
A novel approach is offered by the European Union’s ROBOSWARM and
IWARD, intelligent and programmable hospital robot swarms. ROBOSWARM is
a distributed agent system designed for hospital cleaning. The more versatile
IWARD is designed for cleaning, patient monitoring and guidance, environmental
monitoring, medication provision, and patient surveillance. Multi-institutional
collaborators discovered that it would be difficult to certify that the AI embedded in these systems would perform appropriately under real-world condi-
tions because they exhibit adaptive and self-organizing behaviors. They also dis-
covered that observers sometimes questioned the movements of the robots,
wondering whether they were executing appropriate operations.
In Canada, the Ludwig humanoid robot at the University of Toronto is designed
to help caregivers address aging-related conditions in their clients. The robot
makes conversation with senior citizens who have dementia or Alzheimer’s dis-
ease. Goldie Nejat, AGE-WELL Investigator and Canada Research Chair in
Robots for Society and Director of the Institute for Robotics and Mechatronics,
University of Toronto, is using robotics technology to help people by coaching
them to follow the sequence of steps in common activities of daily living. The
university’s Brian robot is social in nature and responds to emotional human inter-
action. HomeLab at the Toronto Rehabilitation Institute (iDAPT), the largest aca-
demic rehabilitation research hospital in Canada, is developing assistive robots for
use in health-care delivery. HomeLab’s Ed the Robot is a low-cost robot developed
using the iRobot Create toolkit. Like Brian, the robot is intended to prompt demen-
tia patients on the proper steps needed for common activities.
dignity. Such technology, they assert, has pros and cons. On the one hand, care-
giver robots could extend the range of opportunities available to graying popula-
tions, and these aspects of the technology should be encouraged. On the other
hand, the devices could be used to manipulate or deceive the weakest members of
society or further isolate the elderly from regular companionship or social
interaction.
The Sharkeys note that, in some ways, robotic caregivers could eventually
exceed human capabilities, for instance, where speed, power, or accuracy is nec-
essary. Robots could be programmed in ways that prevent or reduce real or per-
ceived eldercare abuse, impatience, or incompetence—common complaints
among the aged. Indeed, an ethical imperative to use caregiver robots might apply
wherever social systems for caregiver support are inadequate or deficient. But
robots do not understand complex human constructs such as loyalty or adjust
flawlessly to the sensitive customized needs of individual clients. Without proper
foresight, the Sharkeys write, “The elderly may find themselves in a barren world
of machines, a world of automated care: a factory for the elderly” (Sharkey and
Sharkey 2012, 282).
Sherry Turkle devotes a chapter of her pathbreaking book Alone Together: Why
We Expect More From Technology and Less From Each Other (2011) to caregiver
robots. She notes that robotics and artificial intelligence researchers are motivated
to make the elderly feel wanted through their work, which assumes that senior
citizens often are (or feel) lonely or abandoned. It is true that attention and labor
are scarce commodities in aging populations. Robots serve as an entertainment
distraction. They improve daily life and homemaking rituals and make them safer.
Turkle concedes that robots never tire and may even perform from a position of
neutrality in relationships with clients. Humans, by contrast, sometimes possess
motives that undermine minimal or conventional standards of care. “One might
say that people can pretend to care,” Turkle notes. “A robot cannot care. So a robot
cannot pretend because it can only pretend” (Turkle 2011, 124).
But Turkle also delivers a harsh critique of caregiving technology. Most impor-
tantly, caring behavior is confused with caring feeling. Interactions between
humans and robots do not represent real conversations, in her estimation. They may
even produce confusion among vulnerable and dependent populations. The poten-
tial for privacy violation from caregiver robot surveillance is high, and automated
assistance may even hijack human experience and memory formation. A great
danger is the development of a generation of senior citizens and children who
would prefer robots to interpersonal human relationships.
Other philosophers and ethicists have weighed in on appropriate practices and
artificial caring. Sparrow and Sparrow (2006) note that human touch is of para-
mount importance in healing rituals, that robots may exacerbate loss of control,
and that robot caregiving is deceptive caregiving because robots are not capable of
real concern. Borenstein and Pearson (2011) and Van Wynsberghe (2013) argue
that caregiver robots impinge on human dignity and the rights of the elderly,
undermining free choice. Van Wynsberghe, in particular, calls for value-sensitive
robot designs that align with University of Minnesota professor Joan Tronto’s ethic
of care, which involves attentiveness, responsibility, competence, and reciprocity,
as well as broader concerns for respect, trust, empathy, and compassion. Vallor
(2011) has critiqued the core assumptions of robot care by calling into question the
presumption that caregiving is nothing more than a problem or burden.
It may be that good care is custom-tailored to the individual, which personal
but mass-produced robots might struggle to achieve. Various religions and cul-
tures will also surely eschew robot caregiving. Caregiver robots might even pro-
duce reactive attachment disorder in children by offering inappropriate and
unsuitable social interactions. The International Organization for Standardization
has written requirements for the design of personal robots, but who is at fault in a
case of robot neglect? The courts are not sure, and robot caregiver law is still in its
infancy. According to Sharkey and Sharkey (2010), caregiver robots may be liable
for invasions of privacy, harms caused by unlawful restraint, deceptive practices,
psychological damage, and lapses of accountability.
Frameworks for future robot ethics must give primacy to the needs of patients
over the desires of the caregivers. In interviews with the elderly, Wu et al. (2010) identified six themes related to patient needs. Thirty subjects aged sixty and older
noted that assistive technology should first help them pursue ordinary, everyday
tasks. Other essential needs included keeping good health, stimulating memory
and concentration, living alone “as long as I wish without worrying my family
circle” (Wu et al. 2010, 36), maintaining curiosity and growing interest in new
activities, and communicating regularly with relatives.
Robot maids, nannies, and caregiver technology are common tropes in popular
culture. The Twilight Zone television series provides several early examples.
A father creates an entire family of robot servants in “The Lateness of the Hour”
(1960). Grandma is a robot caregiver in “I Sing the Body Electric” (1962). Rosie
the robotic maid is a memorable character from the animated television series The
Jetsons (1962–1963). Caregiver robots are the main plot device in the animated
films Wall-E (2008) and Big Hero 6 (2014) and the science fiction thriller I Am
Mother (2019). They also appear frequently in manga and anime. Examples
include Roujin Z (1991), Kurogane Communication (1997), and The Umbrella
Academy (2019).
The 2012 science fiction film Robot and Frank directed by Jake Schreier dra-
matizes the limitations and possibilities of caregiver robot technology in popular
culture. In the film, a gruff former jewel thief with declining mental health hopes
to turn his assigned robotic assistant into a partner in crime. The film explores
various ethical dilemmas regarding not only the care of the elderly, especially
human autonomy, but also the rights of robots in servitude. The film subtly makes
a familiar critique, one often expressed by MIT social scientist Sherry Turkle:
“We are psychologically programmed not only to nurture what we love but to love
what we nurture” (Turkle 2011, 11).
Philip L. Frana
See also: Ishiguro, Hiroshi; Robot Ethics; Turkle, Sherry.
Further Reading
Borenstein, Jason, and Yvette Pearson. 2011. “Robot Caregivers: Ethical Issues across the
Human Lifespan.” In Robot Ethics: The Ethical and Social Implications of
Robotics, edited by Patrick Lin, Keith Abney, and George A. Bekey, 251–65. Cam-
bridge, MA: MIT Press.
Sharkey, Noel, and Amanda Sharkey. 2010. “The Crying Shame of Robot Nannies: An
Ethical Appraisal.” Interaction Studies 11, no. 2 (January): 161–90.
Sharkey, Noel, and Amanda Sharkey. 2012. “The Eldercare Factory.” Gerontology 58,
no. 3: 282–88.
Sparrow, Robert, and Linda Sparrow. 2006. “In the Hands of Machines? The Future of
Aged Care.” Minds and Machines 16, no. 2 (May): 141–61.
Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less
from Each Other. New York: Basic Books.
United Nations. 2019. World Population Ageing Highlights. New York: Department of
Economic and Social Affairs. Population Division.
Vallor, Shannon. 2011. “Carebots and Caregivers: Sustaining the Ethical Ideal of Care in
the Twenty-First Century.” Philosophy & Technology 24, no. 3 (September):
251–68.
Van Wynsberghe, Aimee. 2013. “Designing Robots for Care: Care Centered Value-
Sensitive Design.” Science and Engineering Ethics 19, no. 2 (June): 407–33.
Wu, Ya-Huei, Véronique Faucounau, Mélodie Boulay, Marina Maestrutti, and Anne-
Sophie Rigaud. 2010. “Robotic Agents for Supporting Community-Dwelling
Elderly People with Memory Complaints: Perceived Needs and Preferences.”
Health Informatics Journal 17, no. 1: 33–40.
can perform only the tasks that they are programmed to perform. They lack the
ability to “think outside the box” or solve problems creatively in the way that
humans might. In many situations, users interacting with a chatbot may seek
answers to questions that the chatbot was simply not programmed to be able to
address.
For related reasons, chatbots pose some ethical challenges. Critics of chatbots
have argued that it is unethical for a computer program to simulate the behavior of
a human being without disclosing to humans with whom it interacts that it is not,
in fact, an actual human. Some have also suggested that chatbots may cause an
epidemic of loneliness by creating a world where interactions that traditionally
involved genuine conversation between humans are replaced by chatbot conversa-
tions that are less intellectually and socially fulfilling for human users. On the
other hand, chatbots such as Replika have been created with the precise goal of
providing lonely humans with an entity to talk to when actual humans are
unavailable.
An additional challenge related to chatbots lies in the fact that, like all software
programs, chatbots can potentially be used in ways that their creators did not
intend. Misuse could result from software security vulnerabilities that allow mali-
cious parties to take control of a chatbot; one can imagine, for example, how an
attacker aiming to damage the reputation of a company might seek to compromise
its customer-support chatbot in order to deliver false or unhelpful support ser-
vices. In other cases, simple design mistakes or oversights could lead to unin-
tended behavior by chatbots. This was the lesson that Microsoft learned in 2016
when it released the Tay chatbot, which was designed to teach itself new responses
based on previous conversations. When users engaged Tay in conversations about
racist topics, Tay began making public racist or inflammatory remarks of its own,
leading Microsoft to shut down the application.
The term “chatbot” did not appear until the 1990s, when it was introduced as a
shortened form of chatterbot, a term coined by computer scientist Michael Mauldin
in 1994 to describe a chatbot named Julia that he created in the early 1990s. How-
ever, computer programs with the characteristics of chatbots have existed for
much longer. The first was an application named ELIZA, which Joseph Weizen-
baum developed at MIT’s Artificial Intelligence Lab between 1964 and 1966.
ELIZA used early natural language processing techniques to engage in text-based
conversations with human users, although the program was limited to discussing
only a handful of topics. A similar chatbot program, named PARRY, was created
in 1972 by Stanford psychiatrist Kenneth Colby.
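ELIZA worked largely by keyword spotting and pattern substitution, reflecting a user's own words back as a question. The snippet below is a minimal sketch in that spirit; the handful of rules is invented for illustration and is not Weizenbaum's original program or its DOCTOR script.

```python
import re

# A few illustrative reflection rules in the style popularized by ELIZA.
# Each pair is (pattern, response template); \1 echoes the captured phrase.
RULES = [
    (r"i need (.*)", r"Why do you need \1?"),
    (r"i am (.*)", r"How long have you been \1?"),
    (r"my (.*)", r"Tell me more about your \1."),
]

def respond(user_input):
    text = user_input.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return match.expand(template)
    return "Please go on."  # fallback when no rule matches

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
print(respond("I need a break"))        # Why do you need a break?
```

Even this toy version shows why early chatbots were limited to a handful of topics: every apparent understanding is just a surface-level rewrite of the input.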
It was not until the 1990s, by which time natural language processing tech-
niques had become more sophisticated, that development of chatbots gained more
momentum and that programmers came closer to the goal of creating chatbots that
could engage in conversation on any topic. This was the goal of A.L.I.C.E., a chat-
bot introduced in 1995, and of Jabberwacky, a chatbot developed starting in the
early 1980s and made available to users on the web in 1997. The next major round
of innovation for chatbots came in the early 2010s, when the widespread adoption
of smartphones drove demand for digital assistant chatbots that could interact with
humans using voice conversations, starting with the debut of Apple’s Siri in 2011.
For much of the history of chatbot development, competition for the Loebner
Prize has helped to gauge the effectiveness of chatbots in simulating human
behavior. Launched in 1990, the Loebner Prize is awarded to computer programs
(including but not limited to chatbots) that judges deem to exhibit the most human-
like behavior. Notable chatbots evaluated for the Loebner Prize include A.L.I.C.E,
which won the prize three times in the early 2000s, and Jabberwacky, which won
twice, in 2005 and 2006.
Christopher Tozzi
See also: Cheng, Lili; ELIZA; Natural Language Processing and Speech Understanding;
PARRY; Turing Test.
Further Reading
Abu Shawar, Bayan, and Eric Atwell. 2007. “Chatbots: Are They Really Useful?” LDV
Forum 22, no. 1: 29–49.
Abu Shawar, Bayan, and Eric Atwell. 2015. “ALICE Chatbot: Trials and Outputs.” Com-
putación y Sistemas 19, no. 4: 625–32.
Deshpande, Aditya, Alisha Shahane, Darshana Gadre, Mrunmayi Deshpande, and Prachi
M. Joshi. 2017. “A Survey of Various Chatbot Implementation Techniques.” Inter-
national Journal of Computer Engineering and Applications 11 (May): 1–7.
Shah, Huma, and Kevin Warwick. 2009. “Emotion in the Turing Test: A Downward Trend
for Machines in Recent Loebner Prizes.” In Handbook of Research on Synthetic
Emotions and Sociable Robotics: New Applications in Affective Computing and
Artificial Intelligence, 325–49. Hershey, PA: IGI Global.
Zemčík, Tomáš. 2019. “A Brief History of Chatbots.” In Transactions on Computer Sci-
ence and Engineering, 14–18. Lancaster: DEStech.
Cheng, Lili (1960s–)
Lili Cheng is Corporate Vice President and Distinguished Engineer of the Micro-
soft AI and Research division. She is responsible for developer tools and services
on the company’s artificial intelligence platform, including cognitive services,
intelligent software assistants and chatbots, and data analytics and tools for deep
learning. Cheng has stressed that AI tools must become trusted by greater seg-
ments of the population and protect the privacy of users. She notes that her division
is working on artificial intelligence bots and software applications that engage in
humanlike conversations and interactions. Two other goals are the ubiquity of
social software—technology that helps people communicate better with one
another—and the interoperability of software assistants, that is, AIs that talk to
each other or hand off tasks to one another. One example of such applications is
real-time language translation. Cheng is also an advocate of technical training and
education of people, and particularly women, for the jobs of the future (Davis 2018).
Cheng stresses that AI must be humanized. Rather than adapt the human to the
computer in interactions, technology must be adapted to the rhythms of how peo-
ple work. Cheng states that mere language recognition and conversational AI are
not sufficient technological advances. AI must address the emotional needs of
human beings. She notes that one ambition of AI research is to come to terms with
“the logical and unpredictable ways people interact.”
economy because they do not consume scarce resources. The 2014 science fiction
film Transcendence starring Johnny Depp as an artificial intelligence researcher
depicts both the apocalyptic threat of sentient computers and their ambiguous
environmental consequences.
Philip L. Frana
See also: Berger-Wolf, Tanya; Intelligent Sensing Agriculture; Post-Scarcity, AI and;
Technological Singularity.
Further Reading
Brown, Austin, Jeffrey Gonder, and Brittany Repac. 2014. “An Analysis of Possible
Energy Impacts of Automated Vehicles.” In Road Vehicle Automation: Lecture
Notes in Mobility, edited by Gereon Meyer and Sven Beiker, 137–53. Cham, Swit-
zerland: Springer.
Cubitt, Sean. 2017. Finite Media: Environmental Implications of Digital Technologies.
Durham, NC: Duke University Press.
Faggella, Daniel. 2019. “Does the Environment Matter After the Singularity?” https://
danfaggella.com/environment/.
Gabrys, Jennifer. 2017. Program Earth: Environmental Sensing Technology and the Mak-
ing of a Computational Planet. Minneapolis: University of Minnesota Press.
Gonder, Jeffrey, Matthew Earleywine, and Witt Sparks. 2012. “Analyzing Vehicle Fuel
Saving Opportunities through Intelligent Driver Feedback.” SAE International
Journal of Passenger Cars—Electronic and Electrical Systems 5, no. 2: 450–61.
Microsoft and PricewaterhouseCoopers. 2019. How AI Can Enable a Sustainable Future.
https://www.pwc.co.uk/sustainability-climate-change/assets/pdf/how-ai-can
-enable-a-sustainable-future.pdf.
Schlossberg, Tatiana. 2019. “Silicon Valley Is One of the Most Polluted Places in the
Country.” The Atlantic, September 22, 2019. https://www.theatlantic.com
/technology/archive/2019/09/silicon-valley-full-superfund-sites/598531/.
Strubell, Emma, Ananya Ganesh, and Andrew McCallum. 2019. “Energy and Policy
Considerations for Deep Learning in NLP.” In Proceedings of the 57th Annual
Meeting of the Association for Computational Linguistics (ACL), n.p. Florence,
Italy, July 2019. https://arxiv.org/abs/1906.02243.
United Nations Environment Programme. 2019. “UN Report: Time to Seize Opportunity,
Tackle Challenge of E-Waste.” January 24, 2019. https://www.unenvironment.org
/news- and-stories/press-release/un-report-time-seize-opportunity-tackle
-challenge-e-waste.
Van Wynsberghe, Aimee, and Justin Donhauser. 2017. “The Dawning of the Ethics of
Environmental Robots.” Science and Engineering Ethics 24, no. 6 (October):
1777–1800.
The search for evidence-based resources should begin at the highest possible
layer of the 6S pyramid, which is the systems layer or the level of computerized
clinical decision support systems. Computerized clinical decision support systems
(sometimes referred to as intelligent medical platforms) are defined as health
information technology-based software that builds upon the foundation of an elec-
tronic health record to provide clinicians with general and patient-specific infor-
mation that is intelligently filtered and organized to enhance health and clinical
care. For example, laboratory measurements are often prioritized using different
colors to indicate whether they fall within or outside a reference range. Available computerized clinical decision support systems are not bare models that simply produce an output. The interpretation and use of a computerized clinical decision
support system consists of multiple steps, including presenting the algorithm out-
put in a specific way, interpretation by the clinician, and eventually, the medical
decision that is made.
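As a deliberately simplified illustration of the output-presentation step, a decision support layer might flag laboratory values against reference ranges before a clinician ever interprets them. The field names and ranges below are invented for the example and are not drawn from any particular clinical system.

```python
# Minimal sketch: flag lab results against reference ranges (illustrative values only).
REFERENCE_RANGES = {
    "potassium_mmol_per_L": (3.5, 5.1),
    "hemoglobin_g_per_dL": (12.0, 17.5),
}

def flag_results(results):
    """Return a color flag for each measurement: green inside the range, red outside."""
    flags = {}
    for test, value in results.items():
        low, high = REFERENCE_RANGES[test]
        flags[test] = "green" if low <= value <= high else "red"
    return flags

patient = {"potassium_mmol_per_L": 6.2, "hemoglobin_g_per_dL": 13.4}
print(flag_results(patient))
# {'potassium_mmol_per_L': 'red', 'hemoglobin_g_per_dL': 'green'}
```

The clinically important work, interpretation and the eventual medical decision, still happens downstream of this presentation step, as the entry notes.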
Although computerized clinical decision support systems have been shown to
reduce medical errors and improve patient outcomes, they have fallen short of
their full potential due to the lack of user acceptance. Besides the technological
challenges related to the interface, clinicians are skeptical of computerized clini-
cal decision support systems as they may reduce their professional autonomy or be
used in the event of a medical-legal controversy.
Although at present computerized clinical decision support systems require
human intervention, several key fields of medicine including oncology, cardiol-
ogy, and neurology are adapting tools that utilize artificial intelligence to aid with
the provision of a diagnosis. These tools exist in two major categories: machine
learning techniques and natural language processing systems. Machine learning
techniques use patients’ data to create a structured database for genetic, imaging,
and electrophysiological records to carry out analysis for a diagnosis. Natural lan-
guage processing systems create a structured database using clinical notes and
medical journals to supplement the machine learning process. Furthermore, in
medical applications, the machine learning procedures attempt to cluster patients’
traits to infer the probability of the disease outcomes and provide a prognosis to
the physician.
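A minimal sketch of the machine learning half of that pipeline, using scikit-learn's logistic regression on a tiny synthetic feature matrix (the features, values, and labels are fabricated for illustration and carry no clinical meaning):

```python
# Toy sketch: estimate a disease probability from structured patient features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data; columns might stand for age, a lab value, an imaging score.
X_train = np.array([
    [65, 1.8, 0.9],
    [42, 1.1, 0.2],
    [71, 2.3, 0.8],
    [35, 0.9, 0.1],
    [58, 1.7, 0.6],
    [29, 1.0, 0.2],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = disease recorded in the training case

model = LogisticRegression()
model.fit(X_train, y_train)

new_patient = np.array([[60, 1.9, 0.7]])
probability = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of disease: {probability:.2f}")
```

A real system would train on thousands of records, validate prospectively, and pass its output through the presentation and interpretation steps described above rather than reporting a raw probability.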
Numerous machine learning and natural language processing systems have
been combined to create advanced computerized clinical decision support sys-
tems that can process and provide a diagnosis as effectively or even better than a
physician. An AI technique called convolutional neural networking, developed by
Google, outperformed pathologists when identifying metastasis detection of
lymph nodes. The convolutional neural network was sensitive 97 percent of the
time in comparison to the pathologists with a sensitivity of 73 percent. Further-
more, when the same convolutional neural network was used to perform skin can-
cer classifications, it had a competence level comparable to dermatologists
(Krittanawong 2018). Such systems are also being used to diagnose and classify
depression.
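Sensitivity here carries its usual diagnostic meaning, the proportion of truly positive cases that a system correctly identifies:

$$\text{sensitivity} = \frac{TP}{TP + FN}$$

Under the figures cited, roughly 97 of every 100 lymph-node metastases were flagged by the network versus about 73 of every 100 by the pathologists in that comparison.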
Artificial intelligence will be used to increase the capacity of clinicians by
combining its power with human perceptions, empathy, and experience. However,
the benefits of such advanced computerized clinical decision support systems are
DiCenso, Alba, Liz Bayley, and R. Brian Haynes. 2009. “Accessing Preappraised Evi-
dence: Fine-tuning the 5S Model into a 6S Model.” ACP Journal Club 151, no. 6
(September): JC3-2–JC3-3.
Gulshan, Varun, et al. 2016. “Development and Validation of a Deep Learning Algorithm
for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.” JAMA 316,
no. 22 (December): 2402–10.
Krittanawong, Chayakrit. 2018. “The Rise of Artificial Intelligence and the Uncertain
Future for Physicians.” European Journal of Internal Medicine 48 (February):
e13–e14.
Long, Erping, et al. 2017. “An Artificial Intelligence Platform for the Multihospital Col-
laborative Management of Congenital Cataracts.” Nature Biomedical Engineering
1, no. 2: n.p.
Miller, D. Douglas, and Eric W. Brown. 2018. “Artificial Intelligence in Medical Practice:
The Question to the Answer?” American Journal of Medicine 131, no. 2: 129–33.
Cognitive Architectures
A cognitive architecture is a specialized computer model of the human mind that
intends to fully simulate all aspects of human cognition. Cognitive architectures
represent unified theories for how a set of fixed mental structures and mecha-
nisms can perform intelligent work across a variety of complex environments and
situations. There are two key components to a cognitive architecture: a theory for
how the human mind works and a computational representation of the theory.
The cognitive theory behind a cognitive architecture will seek to unify the results
of a broad range of experimental findings and theories into a singular, compre-
hensive framework capable of explaining a variety of human behavior, using a
fixed set of evidence-based mechanisms. The computational representation is
then generated from the framework proposed in the theory of cognition. Through
the unification of modeling behavior and modeling the structure of a cognitive
system, cognitive architectures such as ACT-R (Adaptive Control of Thought-
Rational), Soar, and CLARION (Connectionist Learning with Adaptive Rule
Induction On-line) can predict, explain, and model complex human behavior like
driving a car, solving a math problem, or recalling when you last saw the hippie
in the park.
According to computer scientists Stuart Russell and Peter Norvig, there are
four approaches to achieving human-level intelligence within a cognitive architec-
ture: (1) building systems that think like humans, (2) building systems that think
rationally, (3) building systems that act like humans, and (4) building systems that
act rationally.
A system that thinks like a human produces behavior through known human
mechanisms. This is the primary approach used in cognitive modeling and can
be seen in architectures such as ACT-R by John Anderson, the General Problem
Solver by Allen Newell and Herb Simon, and the initial uses of the general cog-
nitive architecture called Soar. ACT-R, for instance, brings together theories of
motor movement, visual attention, and cognition. The model distinguishes
between procedural knowledge and declarative knowledge. Procedural knowl-
edge is expressed in terms of production rules, which are condition → action pairs written in the form IF (condition) THEN (action). Declarative knowledge is factual. It describes information consid-
ered static, such as attributes, events, or things. Architectures of this type will
produce behavior that includes errors or mistakes, as well as the correct
behavior.
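A toy production system, sketched in Python purely for illustration (the rules and the contents of “declarative memory” are invented here, and the sketch omits ACT-R's actual matching, utility, and subsymbolic machinery):

```python
# Toy production system: IF-THEN rules fire against declarative facts.
declarative_memory = {"light": "red", "pedestrian_waiting": True}

# Each production pairs a condition over memory with an action that updates memory.
productions = [
    (lambda m: m["light"] == "red", lambda m: m.update({"action": "brake"})),
    (lambda m: m["light"] == "green", lambda m: m.update({"action": "drive"})),
]

def cycle(memory):
    """One recognize-act cycle: fire the first production whose condition matches."""
    for condition, action in productions:
        if condition(memory):
            action(memory)
            break
    return memory

print(cycle(declarative_memory))
# {'light': 'red', 'pedestrian_waiting': True, 'action': 'brake'}
```

In a full cognitive architecture, cycles like this one are interleaved with perceptual and motor modules, which is how a model of a task such as driving combines rule firing with visual attention and movement.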
A system that thinks rationally will instead use logic, computational reasoning,
and laws of thought to produce behaviors and outputs that are consistent and cor-
rect. A system that acts rationally will use innate beliefs and knowledge to achieve
goals through a more generalized process of logic and movement from premises
to consequences, which is more adaptable to situations without full information
availability. Acting rationally can also be called the rational agent approach.
Finally, building a system that acts like a human can be thought of as the Turing
Test approach. In its most strict form, this approach requires building a system
capable of natural language processing, knowledge representation, automated rea-
soning, and machine learning to achieve humanlike behavior. Not every system
with this approach will meet all of these criteria and instead will focus on which-
ever benchmarks are most relevant to the task being solved.
Aside from those four approaches, cognitive architectures are also classified by
their information processing type: symbolic (or cognitivist), emergent (or connec-
tionist), and hybrid. Symbolic systems operate through high-level, top-down con-
trol and perform analysis through a set of IF-THEN statements called production
rules. EPIC (Executive-Process/Interactive Control) and Soar are two examples of
cognitive architectures that use symbolic information processing. Emergent sys-
tems, unexpectedly complex wholes that organize from simple parts without a
central organizing unit, are built using a bottom-up flow of information propagat-
ing from input nodes into the rest of the system, similar to a neural network. While
symbolic systems typically process information serially, emergent systems such
as Leabra and BECCA (Brain-Emulating Cognition and Control Architecture)
will use a self-organizing, distributed network of nodes that can operate in paral-
lel. Hybrid architectures such as ACT-R and CAPS (Collaborative, Activation-
based, Production System) combine features from both types of information
processing. For example, a hybrid cognitive architecture aimed at visual percep-
tion and comprehension may use symbolic processing for labels and text, but then
will use an emergent approach for visual feature and object detection. This sort of
mixed-methods approach to building cognitive architectures is becoming more
common as certain subtasks become better understood. This can produce some
Further Reading
Anderson, John R. 2007. How Can the Human Mind Occur in the Physical Universe?
Oxford, UK: Oxford University Press.
Kotseruba, Iuliia, and John K. Tsotsos. 2020. “40 Years of Cognitive Architectures: Core
Cognitive Abilities and Practical Applications.” Artificial Intelligence Review 53,
no. 1 (January): 17–94.
Ritter, Frank E., Farnaz Tehranchi, and Jacob D. Oury. 2018. “ACT-R: A Cognitive Archi-
tecture for Modeling Cognition.” Wiley Interdisciplinary Reviews: Cognitive Sci-
ence 10, no. 4: 1–19.
Cognitive Computing
Cognitive computing is a term used to describe self-learning hardware and soft-
ware systems that use machine learning, natural language processing, pattern rec-
ognition, human-computer interaction, and data mining technologies to mimic the
human brain. Cognitive computing is meant to convey the notion that advances in
cognitive science are applied to create new and complex artificial intelligence sys-
tems. Cognitive systems are not meant to replace the thinking, reasoning,
problem-solving, or decision-making of humans, but rather to augment them or
provide assistance. Cognitive computing is sometimes identified as a set of strate-
gies to advance the goals of affective computing, which involves closing the gap
between computer technology and human emotions. These strategies include real-
time adaptive learning techniques, interactive cloud services, interactive memo-
ries, and contextual understanding.
Cognitive analytical tools are in use to make mathematical evaluations of
structured statistical data and assist in decision-making. These tools are often
embedded in other scientific and business systems. Complex event processing
systems take real-time data about events and then use sophisticated algorithms
to examine them for patterns and trends, suggest options, or make decisions.
These types of systems are in widespread use in algorithmic stock trading and
in the detection of credit card fraud. Image recognition systems are now capable
of face recognition and complex image recognition. Machine learning algo-
rithms construct models from data sets and show improvements as new data is
pulled in. Machine learning may be approached with neural networks, Bayesian
classifiers, and support vector machines. Natural language processing involves
tools that extract meaning from large data sets of human communication. IBM’s
Watson is an example, as is Apple’s Siri. Natural language understanding is per-
haps the Holy Grail or “killer app” of cognitive computing; indeed, many people treat natural language processing as synonymous with cognitive computing.
One of the oldest branches of so-called cognitive computing is heuristic pro-
gramming and expert systems. Four relatively “complete” cognitive computing
architectures have been available since the 1980s: Cyc, Soar, Society of Mind, and
Neurocognitive Networks.
Some of the uses for cognitive computing technology are speech recognition,
sentiment analysis, face detection, risk assessment, fraud detection, and behav-
ioral recommendations. These applications are together sometimes described as
“cognitive analytics” systems. These systems are in development or are being
used in the aerospace and defense industries, agriculture, travel and transporta-
tion, banking, health care and the life sciences, entertainment and media, natural
resource development, utilities, real estate, retail, manufacturing and sales, mar-
keting, customer service, hospitality, and leisure. An early example of predictive
cognitive computing is the Netflix recommendation system for movie rentals.
General Electric is using computer vision algorithms to detect drivers who are
tired or distracted. Domino’s Pizza customers can order online by conversing with
a virtual assistant named Dom. Elements of Google Now, a predictive search fea-
ture launched inside Google apps in 2012, help people predict road conditions and
the estimated time of arrival, find hotels and restaurants, and remember birthdays
and parking places.
The phrase “cognitive computing” appears frequently in IBM marketing materi-
als. The company views cognitive computing as a special case of “augmented
intelligence,” a phrase preferred over artificial intelligence. IBM’s Watson machine
is sometimes described as a “cognitive computer” because it subverts the conven-
tional von Neumann architecture and takes its inspiration instead from neural net-
works. Neuroscientists are studying the inner workings of the human brain,
looking for relations between neural assemblies and elements of thought, and
developing fresh theories of mind.
Further Reading
Vernon, David, Giorgio Metta, and Giulio Sandini. 2007. “A Survey of Artificial Cogni-
tive Systems: Implications for the Autonomous Development of Mental Capabili-
ties in Computational Agents.” IEEE Transactions on Evolutionary Computation
11, no. 2: 151–80.
Alan Turing, in his paper “Computing Machinery and Intelligence” (1950). Works
such as these inspired early artificial intelligence researchers such as Allen Newell
and Herbert Simon to pursue computer programs that could display humanlike
general problem-solving skill. Computer modeling suggested that mental repre-
sentations could be modeled as data structures and human information processing
as programming. These ideas are still features of cognitive psychology.
A third stream of ideas bolstering cognitive psychology came from linguistics,
particularly the generative linguistics approach developed by Noam Chomsky.
His 1957 book Syntactic Structures described the mental structures needed to
support and represent knowledge that speakers of language must possess. He pro-
posed that transformational grammar components must exist to transform one
syntactic structure into another. Chomsky also wrote a 1959 review of B. F. Skinner’s book Verbal Behavior (1957), which is remembered as having demolished behaviorism as a serious scientific approach to psychology.
In psychology, the book A Study of Thinking (1956) by Jerome Bruner, Jacque-
line Goodnow, and George Austin developed the idea of concept attainment,
which was particularly well attuned to the information processing approach to
psychology. Concept learning, they eventually decided, involves “the search for
and listing of attributes that can be used to distinguish exemplars from non-
exemplars of various categories” (Bruner et al. 1967). In 1960, Harvard University
institutionalized the Cognitive Revolution by founding a Center for Cognitive
Studies under the leadership of Bruner and George Miller.
In the 1960s, cognitive psychology made a number of major contributions to
cognitive science generally, particularly in advancing an understanding of pattern
recognition, attention and memory, and the psychological theory of languages
(psycholinguistics). Cognitive models reduced pattern recognition to the percep-
tion of relatively primitive features (graphics primitives) and a matching proce-
dure where the primitives are cross compared against objects stored in visual
memory. Also in the 1960s, information processing models of attention and mem-
ory proliferated. Perhaps the best remembered is the Atkinson and Shiffrin model,
which built up a mathematical model of information as it flowed from short-term
memory to long-term memory, following rules for encoding, storage, and retrieval
that regulated the flow. Forgetting was described as information lost from storage
by processes of interference or decay.
The subfield of psycholinguistics was inspired by those who wanted to uncover
the practical reality of Chomsky’s theories of language. Psycholinguistics used
many of the tools of cognitive psychology. Mental chronometry is the use of
response time in perceptual-motor tasks to infer the content, duration, and tempo-
ral sequencing of cognitive operations. Speed of processing is considered an index
of processing efficiency. In one famous study, participants were asked questions
like “Is a robin a bird?” and “Is a robin an animal?” The longer it took for the
respondent to answer, the greater the categorical difference between the terms. In
the study, experimenters showed how semantic models could be hierarchical,
as the concept robin is directly connected to bird and connected to animal through
the intervening concept of bird. Information flows from “robin” to “animal” by
passing through “bird.”
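The hierarchical organization described here can be sketched as a toy semantic network in Python; the link structure below simply mirrors the robin example and is not a model of any actual experiment.

```python
# Illustrative toy model of a hierarchical semantic network; the is-a links
# below are hypothetical and only mirror the robin example in the text.
is_a = {"robin": "bird", "bird": "animal"}

def semantic_distance(concept, category):
    """Count the is-a links traversed from a concept up to a category."""
    steps = 0
    while concept is not None and concept != category:
        concept = is_a.get(concept)
        steps += 1
    return steps if concept == category else None

print(semantic_distance("robin", "bird"))    # 1 link -> faster verification
print(semantic_distance("robin", "animal"))  # 2 links -> slower verification
```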
In the 1970s and 1980s, studies of memory and language started to intersect,
and artificial intelligence researchers and philosophers began debating proposi-
tional representations of visual imagery. Cognitive psychology as a consequence
became much more interdisciplinary. Two new directions for research were found
in connectionism and cognitive neuroscience. Connectionism blends cognitive
psychology, artificial intelligence, neuroscience and the philosophy of mind to
seek neural models of emergent links, nodes, and interconnected networks. Con-
nectionism (sometimes referred to as “parallel distributed processing” or simply
“neural networking”) is computational at its core. Perception and cognition in
human brains are the inspiration for artificial neural networks. Cognitive neuro-
science is a scientific discipline that studies the nervous system mechanisms of
cognition. It represents an overlap of the fields of cognitive psychology, neurobiol-
ogy, and computational neuroscience.
Philip L. Frana
See also: Macy Conferences.
Further Reading
Bruner, Jerome S., Jacqueline J. Goodnow, and George A. Austin. 1967. A Study of Think-
ing. New York: Science Editions.
Gardner, Howard. 1986. The Mind’s New Science: A History of the Cognitive Revolution.
New York: Basic Books.
Lachman, Roy, Janet L. Lachman, and Earl C. Butterfield. 2015. Cognitive Psychology
and Information Processing: An Introduction. London: Psychology Press.
Miller, George A. 1956. “The Magical Number Seven, Plus or Minus Two: Some Limits
on Our Capacity for Processing Information.” Psychological Review 63, no. 2:
81–97.
Pinker, Steven. 1997. How the Mind Works. New York: W. W. Norton.
Computational Creativity
Computational creativity is a concept that is related to—but not reducible to—
computer-generated art. “CG-art,” as Margaret Boden describes it, refers to an
artwork that “results from some computer program being left to run by itself, with
zero interference from the human artist” (Boden 2010, 141). This definition is both
strict and narrow, being limited to the production of what human observers ordi-
narily recognize as “art works.” Computational creativity, by contrast, is a more
comprehensive term that covers a much wider spectrum of activities, devices, and
outcomes. As defined by Simon Colton and Geraint A. Wiggins, “Computational
creativity is a subfield of Artificial Intelligence (AI) research . . . where we build
and work with computational systems that create artefacts and ideas.” Those
“artefacts and ideas” might be art works, or they might be other kinds of objects,
discoveries, and/or performances (Colton and Wiggins 2012, 21).
Examples of computational creativity include applications and implementa-
tions such as games, storytelling, music composition and performance, and visual
arts. Machine capabilities are typically tested and benchmarked with games and
other contests of cognitive skill. From the beginning, in fact, the defining
condition of machine intelligence was established with a game, what Alan Turing
had called “The Game of Imitation” (1950). Since that time, AI development and
achievement has been measured and evaluated in terms of games and other kinds
of human/machine competitions. Of all the games that computers have been
involved with, chess has had a special status and privileged position, so much so
that critics like Douglas Hofstadter (1979, 674) and Hubert Dreyfus (1992) confi-
dently asserted that championship-level AI chess would forever remain out of reach.
Then, in 1997, IBM’s Deep Blue changed the rules of the game by defeating
Garry Kasparov. But chess was just the beginning. In 2016, AlphaGo, a Go-playing algorithm developed by Google DeepMind, took four out of five games against Lee Sedol, one of the most celebrated human players of this notoriously difficult board game. AlphaGo’s dexterous playing has been described
by human observers, such as Fan Hui (2016), as “beautiful,” “intuitive,” and
“creative.”
Automated Insights’ Wordsmith and the competing product Quill from Narra-
tive Science are Natural Language Generation (NLG) algorithms designed to pro-
duce human-readable stories from machine-readable data. Unlike simple news
aggregators or template NLG systems, these programs “write” (or “generate”—
and the choice of verb is not incidental) original stories that are, in many instances,
indistinguishable from human-created content. In 2014, for instance, Christer
Clerwall conducted a small-scale study, during which he asked human test sub-
jects to evaluate news stories composed by Wordsmith and a professional reporter
from the Los Angeles Times. Results from the study suggest that while the
software-generated content is often perceived to be descriptive and boring, it is
also considered to be more objective and trustworthy (Clerwall 2014, 519).
One of the early predictions issued by Herbert Simon and Allen Newell in their
influential paper “Heuristic Problem Solving” (1958) was that “within ten years a
digital computer will write music accepted by critics as possessing considerable
aesthetic value” (Simon and Newell 1958, 7). This forecast has come to pass. One
of the most celebrated achievements in the field of “algorithmic composition” is
David Cope’s Experiments in Musical Intelligence (EMI, or “Emmy”). Emmy is a
PC-based algorithmic composer capable of analyzing existing musical composi-
tions, rearranging their basic components, and then generating new, original
scores that sound like and, in some cases, are indistinguishable from the canonical
works of Mozart, Bach, and Chopin (Cope 2001). In music performance, there are
robotic systems such as Shimon, a marimba-playing jazz-bot from the Georgia Institute of Technology that is not only able to improvise with human musicians in real time
but also “is designed to create meaningful and inspiring musical interactions with
humans, leading to novel musical experiences and outcomes” (Hoffman and
Weinberg 2011).
Cope’s general approach, something he calls “recombinacy,” is not limited to
music. It can be employed for and applied to any creative practice where new
works are the product of reorganizing or recombining a set of finite elements, that
is, the twenty-six letters in the alphabet, the twelve tones in the musical scale, the
sixteen million colors discernable by the human eye, etc. Consequently, this
algorithms compose is just as much ours as the music created by the greatest of
our personal inspirations” (Cope 2001, 139). According to Cope, no matter how
much algorithmic mediation is developed and employed, it is the human being
who is ultimately responsible for the musical composition that is produced by way
of these sophisticated computerized tools.
The same argument could be made for seemingly creative applications in other
areas, such as the Go-playing algorithm AlphaGo or The Painting Fool. When
AlphaGo wins a major competition or The Painting Fool generates a stunning
work of visual art that is displayed in a gallery, there is still a human person (or
persons) who is (so the argument goes) ultimately responsible for (or can respond
or answer for) what has been produced. The lines of attribution might get increas-
ingly complicated and protracted, but there is, it can be argued, always someone
behind the scenes who is in a position of authority. Evidence of this is already
available in those situations where attempts have been made to shift responsibility
to the machine. Consider AlphaGo’s decisive move 37 in game two against Lee
Sedol. If someone should want to know more about the move and its importance,
AlphaGo can certainly be asked about it. But the algorithm will have nothing to
say in response. In fact, it was the responsibility of the human programmers and
observers to respond on behalf of AlphaGo and to explain the move’s significance
and impact.
Consequently, as Colton (2012) and Colton et al. (2015) explicitly recognize, if
the project of computational creativity is to succeed, the software will need to do
more than produce artifacts and behaviors that we take and respond to as creative
output. It will also need to take responsibility for the work by accounting for what
it did and how it did it. “The software,” as Colton and Wiggins assert, “should be
available for questioning about its motivations, processes and products” (Colton
and Wiggins 2012, 25), eventually not just generating titles for and explanations
and narratives about the work but also being capable of responding to questions by
entering into critical dialogue with its audience (Colton et al. 2015, 15).
At the same time, there are opportunities opened up by these algorithmic incur-
sions into what has been a protected and exclusively human domain. The issue is
not simply whether computers, machine learning algorithms, or other applications
can or cannot be responsible for what they do or do not do, but also how we have
determined, described, and defined creative responsibility in the first place. This
means that there is both a strong and weak component to this effort, what Moham-
mad Majid al-Rifaie and Mark Bishop call, following Searle’s original distinction
regarding efforts in AI, strong and weak forms of computational creativity (Majid
al-Rifaie and Bishop 2015, 37).
Efforts at what would be the “strong” variety involve the kinds of application
development and demonstrations introduced by individuals and organizations
such as DeepMind, David Cope, or Simon Colton. But these efforts also have a
“weak AI” aspect insofar as they simulate, operationalize, and stress test various
conceptualizations of artistic responsibility and creative expression, leading to
critical and potentially insightful reevaluations of how we have characterized
these concepts in our own thinking. As Douglas Hofstadter has candidly admit-
ted, nothing has made him rethink his own thinking about thinking more than the
attempt to deal with and make sense of David Cope’s Emmy (Hofstadter 2001, 38).
In other words, developing and experimenting with new algorithmic capabilities
does not necessarily take anything away from human beings and what (presum-
ably) makes us special, but offers new opportunities to be more precise and scien-
tific about these distinguishing characteristics and their limits.
David J. Gunkel
See also: AARON; Automatic Film Editing; Deep Blue; Emily Howell; Generative
Design; Generative Music and Algorithmic Composition.
Further Reading
Boden, Margaret. 2010. Creativity and Art: Three Roads to Surprise. Oxford, UK: Oxford
University Press.
Clerwall, Christer. 2014. “Enter the Robot Journalist: Users’ Perceptions of Automated
Content.” Journalism Practice 8, no. 5: 519–31.
Colton, Simon. 2012. “The Painting Fool: Stories from Building an Automated Painter.”
In Computers and Creativity, edited by Jon McCormack and Mark d’Inverno,
3–38. Berlin: Springer Verlag.
Colton, Simon, Alison Pease, Joseph Corneli, Michael Cook, Rose Hepworth, and Dan
Ventura. 2015. “Stakeholder Groups in Computational Creativity Research and
Practice.” In Computational Creativity Research: Towards Creative Machines,
edited by Tarek R. Besold, Marco Schorlemmer, and Alan Smaill, 3–36. Amster-
dam: Atlantis Press.
Colton, Simon, and Geraint A. Wiggins. 2012. “Computational Creativity: The Final
Frontier.” In Frontiers in Artificial Intelligence and Applications, vol. 242, edited
by Luc De Raedt et al., 21–26. Amsterdam: IOS Press.
Cope, David. 2001. Virtual Music: Computer Synthesis of Musical Style. Cambridge, MA:
MIT Press.
Dreyfus, Hubert L. 1992. What Computers Still Can’t Do: A Critique of Artificial Reason.
Cambridge, MA: MIT Press.
Feenberg, Andrew. 1991. Critical Theory of Technology. Oxford, UK: Oxford University
Press.
Heidegger, Martin. 1977. The Question Concerning Technology, and Other Essays. Trans-
lated by William Lovitt. New York: Harper & Row.
Hoffman, Guy, and Gil Weinberg. 2011. “Interactive Improvisation with a Robotic
Marimba Player.” Autonomous Robots 31, no. 2–3: 133–53.
Hofstadter, Douglas R. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York:
Basic Books.
Hofstadter, Douglas R. 2001. “Staring Emmy Straight in the Eye—And Doing My Best
Not to Flinch.” In Virtual Music: Computer Synthesis of Musical Style, edited by
David Cope, 33–82. Cambridge, MA: MIT Press.
Hui, Fan. 2016. “AlphaGo Games—English. DeepMind.” https://web.archive.org/web
/20160912143957/https://deepmind.com/research/alphago/alphago-games
-english/.
Majid al-Rifaie, Mohammad, and Mark Bishop. 2015. “Weak and Strong Computational
Creativity.” In Computational Creativity Research: Towards Creative Machines,
edited by Tarek R. Besold, Marco Schorlemmer, and Alan Smaill, 37–50. Amster-
dam: Atlantis Press.
Searle, John. 1984. Minds, Brains and Science. Cambridge, MA: Harvard University Press.
Simon, Herbert A., and Allen Newell. 1958. “Heuristic Problem Solving: The Next
Advance in Operations Research.” Operations Research 6, no. 1 (January–
February): 1–10.
Turing, Alan. 1999. “Computing Machinery and Intelligence.” In Computer Media and
Communication: A Reader, edited by Paul A. Meyer, 37–58. Oxford, UK: Oxford
University Press.
Computational Neuroscience
Computational neuroscience (CNS) applies the concept of computation to the field
of neuroscience. The term “computational neuroscience,” proposed by Eric
Schwartz in 1985, came to replace the terms “neural modeling” and “brain the-
ory” used to depict various types of research on the nervous system. At the core of
CNS is the understanding that nervous system effects can be seen as instances of
computations, because the explanation of state transitions can be understood as
relations between abstract properties. In other words, explanations of effects in nervous systems are not causal descriptions of interactions among physically specific items, but rather descriptions of how information is transformed, stored, and represented. Consequently, CNS seeks to build computational models to gain understanding of nervous system function in terms of the information processing properties of the structures that make up the brain. One example is constructing a model of how interacting neurons can establish elementary components of cognition. A brain map, however, does not reveal the computational mechanism of the nervous system, but it can be used as a constraint on theoretical models. For example, information exchange has costs in terms of the physical connections between communicating regions, so regions that communicate often, at high bandwidth and low latency, tend to be placed close together.
Describing neural systems as carrying out computations is central to computational neuroscience and contradicts the claim that computational constructs are proprietary to the explanatory framework of psychology, that is, that human cognitive capacities can be characterized and confirmed independently of an understanding of how these capacities are implemented in the nervous system. In 1973, for instance, when it became apparent that cognitive processes could not be understood by analyzing answers to one-dimensional questions and scenarios, an approach then widely used in cognitive psychology, Allen Newell argued that verbally formulated questions cannot by themselves provide an understanding of cognitive processes, and that only synthesis with computer simulation can reveal the complex interactions of the proposed components' mechanisms and whether a cognitive function emerges from those interactions.
The first framework for computational neuroscience was formulated by David Marr (1945–1980). This framework, which mirrors the three-level structure used in computer science (abstract problem analysis, algorithm, and physical implementation), aims to provide a conceptual starting point for thinking about levels in the context of computation by nervous structures. The model, however, has limitations because it consists of three poorly connected levels and because it implements a strict top-down approach in which all neurobiological facts were
Neural network models are developed at various levels of biological detail, from neurons to maps. Such networks represent the parallel distributed processing paradigm and support multiple stages of linear-nonlinear signal transformation. Models typically have millions of parameters (the connection weights), which are optimized for task performance. The large set of parameters is needed because simple models cannot express complex cognitive functions. Deep convolutional neural network models have been used to predict brain representations of novel images in the primate ventral visual stream. The first few layers of these networks produce representations similar to those in the early visual cortex. Higher layers also resemble the inferior temporal
cortical representation, since both enable the decoding of object position, size, and
pose, along with the category of the object. Results from various studies have
shown that the internal representations of deep convolutional neural networks pro-
vide the best current models of representations of visual images in the inferior
temporal cortex in humans and monkeys. When comparing large numbers of
models, those that were optimized to perform the task of object classification bet-
ter explained the cortical representation.
Cognitive models are applications of artificial intelligence in computational neuroscience that address information processing without reference to actual neurobiological components (neurons, axons, etc.). There are three types of models: production systems, reinforcement learning, and Bayesian cognitive models. They utilize logic and predicates and operate on symbols rather than on signals. There are several rationales for using artificial intelligence in computational neuroscience research. First, a great many facts about the brain have accumulated over the years, but an actual understanding of how the brain works is still lacking. Second, there are effects produced by networks of neurons, but it is still not understood how those networks actually work. Third, the brain has been mapped only coarsely, as has knowledge of what different brain regions do (predominantly for sensory and motor functions), but a detailed map is still not available. In addition, some of the facts accumulated through experimental work or observation may be irrelevant; the relationship between synaptic learning rules and computation is essentially unknown.
Production system models are the earliest type of model for explaining reasoning and problem solving. A “production” is a cognitive action triggered by an “if-then” rule, where the “if” part describes the conditions under which the production (the “then” clause) can be executed. If the conditions of several rules are met, the model employs a conflict resolution algorithm to choose the proper production. Production models generate a series of predictions that resemble the conscious stream of brain function, and in recent applications the same models have been used to predict regional mean fMRI (functional Magnetic Resonance Imaging) activation over time.
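A toy match-resolve-act cycle, written as a hedged Python sketch, illustrates how conflicting productions might be handled; the rules, the working state, and the "most specific rule wins" policy are invented assumptions rather than features of any published production-system architecture.

```python
# A toy production-system cycle (match -> resolve conflict -> act). The rules,
# the state, and the "most specific rule wins" policy are illustrative only.
state = {"light": "red", "pedestrian_waiting": True}

rules = [
    # (name, IF condition, THEN action, specificity for conflict resolution)
    ("stop",
     lambda s: s["light"] == "red",
     lambda s: s.update(action="stop"),
     1),
    ("yield_to_pedestrian",
     lambda s: s["light"] == "red" and s["pedestrian_waiting"],
     lambda s: s.update(action="stop_and_wait"),
     2),
]

# Match phase: collect every rule whose IF part is satisfied by the state
matched = [rule for rule in rules if rule[1](state)]

# Conflict resolution: if several rules match, prefer the most specific one
name, _, act, _ = max(matched, key=lambda rule: rule[3])
act(state)                            # Act phase: fire the chosen production
print(name, "->", state["action"])    # -> yield_to_pedestrian -> stop_and_wait
```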
Reinforcement learning models are used in many disciplines, with the ultimate goal of simulating optimal decision-making. In neurobiological systems, their implementation is associated with the basal ganglia. The agent
may learn a “value function” associating each state with its expected cumulative
reward. If the agent can predict which state each action leads to and if it knows the
values of those states, then it can choose the most promising action. The agent
may also learn a “policy” that associates each state directly with promising
actions. The choice of action must balance exploitation (which brings short-term
reward) and exploration (which benefits learning and brings long-term reward).
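The value function, policy, and exploration-exploitation trade-off can be illustrated with a minimal tabular Q-learning sketch in Python; the chain-shaped task, rewards, and parameter values are invented for illustration and do not correspond to any neurobiological model.

```python
import random

# Minimal tabular Q-learning sketch on a hypothetical four-state chain task;
# the environment, rewards, and parameter values are illustrative assumptions.
n_states, n_actions = 4, 2            # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.3

def step(state, action):
    """Move along the chain; reaching the last state yields a reward of 1."""
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == n_states - 1 else 0.0)

for episode in range(500):
    s = 0
    for _ in range(50):                                # cap the episode length
        # Exploration vs. exploitation: occasionally try a random action
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        # Nudge the value estimate toward reward plus discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == n_states - 1:                          # goal state reached
            break

policy = [max(range(n_actions), key=lambda i: Q[s][i]) for s in range(n_states)]
print(policy)   # non-terminal states should come to favor moving right (action 1)
```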
Bayesian models reveal what the brain should compute in order to behave optimally. These models support inductive inference, which requires prior knowledge and is beyond the capabilities of standard neural network models. They have been used to understand basic sensory and motor processes and to explain cognitive biases as products of prior assumptions. For example, the representation of probability distributions by neurons has been explored theoretically with Bayesian models and checked against experimental data. These exercises show that relating Bayesian inference to actual implementation in a brain remains problematic, because the brain “cuts corners” to gain efficiency, and those approximations may explain deviations from statistical optimality.
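A hedged Python sketch of Bayesian inference over a discrete grid shows how a prior and a noisy observation can be combined by Bayes' rule; the prior, the noise level, and the observed value are arbitrary assumptions chosen only for illustration.

```python
import math

# Minimal sketch of Bayesian combination of a prior with a noisy observation,
# computed on a discrete grid of hypothetical stimulus values.
stimuli = [s / 10 for s in range(-30, 31)]              # candidates -3.0 .. 3.0

def gaussian(x, mean, sd):
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2))

prior      = [gaussian(s, 0.0, 1.0) for s in stimuli]   # prior belief: near zero
likelihood = [gaussian(1.2, s, 0.5) for s in stimuli]   # noisy observation at 1.2

posterior = [p * l for p, l in zip(prior, likelihood)]
z = sum(posterior)
posterior = [p / z for p in posterior]                  # normalize (Bayes' rule)

estimate = sum(s * p for s, p in zip(stimuli, posterior))
print(round(estimate, 2))  # posterior mean lies between the prior (0) and the data (1.2)
```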
Central to computational neuroscience is the concept of a brain doing computa-
tions, so researchers are seeking to understand mechanisms of complex brain
functions, using modeling and analysis of information processing properties of
nervous system elements.
Stefka Tzanova
See also: Bayesian Inference; Cognitive Computing.
Further Reading
Kaplan, David M. 2011. “Explanation and Description in Computational Neuroscience.”
Synthese 183, no. 3: 339–73.
Kriegeskorte, Nikolaus, and Pamela K. Douglas. 2018. “Cognitive Computational Neuro-
science.” Nature Neuroscience 21, no. 9: 1148–60.
Schwartz, Eric L., ed. 1993. Computational Neuroscience. Cambridge, MA: Massachu-
setts Institute of Technology.
Trappenberg, Thomas. 2009. Fundamentals of Computational Neuroscience. New York:
Oxford University Press.
Computer-Assisted Diagnosis
Computer-assisted diagnosis is a research area in medical informatics that involves
the application of computing and communications technology to medicine. Physi-
cians and scientists embraced computers and software beginning in the 1950s to
collect and organize burgeoning stores of medical data and provide significant
decision and therapeutic support in interactions with patients. The use of comput-
ers in medicine has produced remarkable changes in the process of medical diag-
nostic decision-making.
The first diagnostic computing devices were inspired by tables of differential
diagnoses. Differential diagnosis involves the construction of sets of sorting rules
used to find probable causes of symptoms in the examination of patients. A good
example is a slide rule-like device invented around 1950 by F.A. Nash of the
South West London Mass X-Ray Service, called a Group Symbol Associator
(GSA), which allowed the physician to line up a patient’s symptoms with
Pauker, Stephen G., and Jerome P. Kassirer. 1987. “Decision Analysis.” New England
Journal of Medicine 316, no. 5 (January): 250–58.
Topol, Eric J. 2019. “High-Performance Medicine: The Convergence of Human and Arti-
ficial Intelligence.” Nature Medicine 25, no. 1 (January): 44–56.
Cybernetics and AI
Cybernetics involves the study of communication and control in living organisms
and machines. Today, cybernetic thought permeates computer science, engineer-
ing, biology, and the social sciences, though the term itself is no longer widely
used in the United States. Cybernetic connectionist and artificial neural network
approaches to information theory and technology have throughout the past half
century often competed, and sometimes hybridized, with symbolic AI approaches.
Norbert Wiener (1894–1964), who derived the word “cybernetics” from the
Greek word for “steersman,” considered the discipline to be a unifying force bind-
ing together and elevating separate subjects such as game theory, operations
research, theory of automata, logic, and information theory. In Cybernetics, or
Control and Communication in the Animal and the Machine (1948), Wiener complained that modern science had become too much an arena for specialists, the
result of trends accumulating since the early Enlightenment. Wiener dreamed of a
time when specialists might work together, “not as subordinates of some great
executive officer, but joined by the desire, indeed by the spiritual necessity, to
understand the region as a whole, and to lend one another the strength of that
understanding” (Wiener 1948b, 3). Cybernetics, for Wiener, gave researchers
access to multiple sources of expertise, while still enjoying the joint advantages of
independence and impartial detachment. Wiener also thought that man and
machine should be considered together in fundamentally interchangeable episte-
mological terms. Until these common elements were uncovered, Wiener com-
plained, the life sciences and medicine would remain semi-exact and contingent
upon observer subjectivity.
Wiener fashioned his cybernetic theory in the context of World War II (1939–
1945). Interdisciplinary subjects heavy in mathematics, for instance, operations
research and game theory, were already being used to root out German subma-
rines and devise the best possible answers to complicated defense decision-making
problems. Wiener, in his capacity as a consultant to the military, threw himself
into the work of applying advanced cybernetic weaponry against the Axis powers.
To this end, Wiener devoted himself to understanding the feedback mechanisms
in the curvilinear prediction of flight and applying these principles to the making
of sophisticated fire-control systems for shooting down enemy planes.
Claude Shannon, a longtime Bell Labs researcher, went even further than Wie-
ner in attempting to bring cybernetic ideas into actual being, most famously in his
experiments with Theseus, an electromechanical mouse that used digital relays
and a feedback process to learn from past experience how to negotiate mazes.
Shannon constructed many other automata that exhibited behavior suggestive of
thinking machines. Several of Shannon’s mentees, including AI pioneers John
McCarthy and Marvin Minsky, followed his lead in defining the human being as a symbolic information processor. McCarthy, who is often credited with founding the
Further Reading
Ashby, W. Ross. 1956. An Introduction to Cybernetics. London: Chapman & Hall.
Galison, Peter. 1994. “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision.” Critical Inquiry 21, no. 1 (Autumn): 228–66.
Kline, Ronald R. 2017. The Cybernetics Moment: Or Why We Call Our Age the Informa-
tion Age. Baltimore, MD: Johns Hopkins University Press.
Mahoney, Michael S. 1990. “Cybernetics and Information Technology.” In Companion to
the History of Modern Science, edited by R. C. Olby, G. N. Cantor, J. R. R. Chris-
tie, and M. J. S. Hodge, 537–53. London: Routledge.
“New Navy Device Learns by Doing; Psychologist Shows Embryo of Computer Designed
to Read and Grow Wiser.” 1958. New York Times, July 8, 25.
Wiener, Norbert. 1948a. “Cybernetics.” Scientific American 179, no. 5 (November): 14–19.
Wiener, Norbert. 1948b. Cybernetics, or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.
D
Dartmouth AI Conference
The Dartmouth Conference of 1956, formally entitled the “Dartmouth Summer
Research Project on Artificial Intelligence,” is often referred to as the Constitu-
tional Convention of AI. Convened on the campus of Dartmouth College in
Hanover, New Hampshire, the interdisciplinary conference brought together
experts in cybernetics, automata and information theory, operations research, and
game theory. Among the more than twenty participants were Claude Shannon
(known as the “father of information theory”), Marvin Minsky, John McCarthy,
Herbert Simon, Allen Newell (“founding fathers of artificial intelligence”), and
Nathaniel Rochester (architect of IBM’s first commercial scientific mainframe
computer). MIT Lincoln Laboratory, Bell Laboratories, and the RAND Systems
Research Laboratory all sent participants. The Dartmouth Conference was funded
in large measure by a grant from the Rockefeller Foundation.
Organizers conceived of the Dartmouth Conference, which lasted approxi-
mately two months, as a way of making rapid progress on machine models of
human cognition. Organizers adopted the following slogan as a starting point for
their discussions: “Every aspect of learning or any other feature of intelligence
can in principle be so precisely described that a machine can be made to simulate
it” (McCarthy 1955, 2). Mathematician and primary organizer John McCarthy
had coined the term “artificial intelligence” only one year prior to the summer
conference in his Rockefeller Foundation proposal. The purpose of the new term,
McCarthy later recalled, was to create some separation between his research and
the field of cybernetics. He was instrumental in discussions of symbol processing
approaches to artificial intelligence, which were then in a minority. Most brain-
modeling approaches in the 1950s involved analog cybernetic approaches and
neural networks.
Participants discussed a wide range of topics at the Dartmouth Conference,
from complexity theory and neuron nets to creative thinking and unpredictability.
The conference is especially noteworthy as the location of the first public demon-
stration of Newell, Simon, and Clifford Shaw’s famous Logic Theorist, a program
that could independently prove theorems given in the Principia Mathematica of
Bertrand Russell and Alfred North Whitehead. Logic Theorist was the only pro-
gram presented at the conference that attempted to simulate the logical properties
of human intelligence.
Attendees speculated optimistically that, as early as 1970, digital computers
would become chess grandmasters, uncover new and significant mathematical
theorems, produce acceptable translations of languages and understand spoken
language, and compose classical music.
No final report of the conference was ever prepared for the Rockefeller Founda-
tion, and so most information about the proceedings comes from recollections,
handwritten notes, and a few papers written by participants and published else-
where. The Dartmouth Conference was followed by an international conference
on the “Mechanisation of Thought Processes” at the British National Physical
Laboratory (NPL) in 1958. Several of the Dartmouth Conference participants pre-
sented at the NPL conference, including Minsky and McCarthy. At the NPL con-
ference, Minsky commented on the importance of the Dartmouth Conference to
the development of his heuristic program for solving plane geometry problems
and conversion from analog feedback, neural networks, and brain-modeling to
symbolic AI approaches. Research interest in neural networks would largely not
revive until the mid-1980s.
Philip L. Frana
See also: Cybernetics and AI; Macy Conferences; McCarthy, John; Minsky, Marvin;
Newell, Allen; Simon, Herbert A.
Further Reading
Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelli-
gence. New York: Basic Books.
Gardner, Howard. 1985. The Mind’s New Science: A History of the Cognitive Revolution.
New York: Basic Books.
Kline, Ronald. 2011. “Cybernetics, Automata Studies, and the Dartmouth Conference
on Artificial Intelligence.” IEEE Annals of the History of Computing 33, no. 4
(April): 5–16.
McCarthy, John. 1955. “A Proposal for the Dartmouth Summer Research Project on Arti-
ficial Intelligence.” Rockefeller Foundation application, unpublished.
Moor, James. 2006. “The Dartmouth College Artificial Intelligence Conference: The
Next Fifty Years.” AI Magazine 27, no. 4 (Winter): 87–91.
de Garis, Hugo (1947–)
Hugo de Garis is a pioneer in the areas of genetic algorithms, artificial brains, and
topological quantum computing. He is the founder of the field of evolvable hard-
ware, in which evolutionary algorithms are used to create specialized electronics
that can change structural design and performance dynamically and autonomously
in interaction with the environment. De Garis is famous for his book The Artilect
War (2005), in which he outlines what he believes will be an inevitable twenty-
first-century global war between humanity and ultraintelligent machines.
De Garis first became interested in genetic algorithms, neural networks, and
the possibility of artificial brains in the 1980s. Genetic algorithms involve the
use of software to simulate and apply Darwinian evolutionary theories to search
and optimization problems in artificial intelligence. Developers of genetic algo-
rithms such as de Garis used them to evolve the “fittest” candidate simulations
of axons, dendrites, signals, and synapses in artificial neural networks. De
Garis worked to make artificial nervous systems similar to those found in real
biological brains.
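The evolutionary loop behind such genetic algorithms can be sketched in a few lines of Python; the bit-string genome, target pattern, and parameter choices below are illustrative assumptions and are not taken from de Garis's own systems.

```python
import random

# Minimal genetic-algorithm sketch: evolve a bit string toward an arbitrary
# target pattern. The target and the parameters are invented for illustration.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    # Number of bits matching the target: higher is "fitter"
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    # Selection: keep the fittest half of the population as parents
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Reproduction: crossover plus mutation refills the population
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(max(fitness(g) for g in population), "of", len(TARGET))  # best fitness found
```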
their own machine creations. He believes that the machines will become so pow-
erful and smart that only a tiny fraction of humanness will survive the encounter.
Geopolitical rivals China and the United States will find themselves compelled
to use these technologies to build ever-more sophisticated and autonomous econo-
mies, defense systems, and military robots. The Cosmists will welcome the domi-
nance of artificial intelligences in the world and will come to view them as
near-gods worthy of worship. The Terrans, by contrast, will resist the turning over
of the mechanisms of global economic, social, and military power to our machine
masters. They will view the new state of affairs as a grave tragedy that has befallen
the human species.
His argument for a coming war over superintelligent machines has inspired
voluminous commentary in popular science publications, as well as discussion
and debate among scientific and engineering experts. Some critics have ques-
tioned de Garis’s motives, as he implicates himself as a cause of the coming war
and as a closet Cosmist in his 2005 book. De Garis has responded that he feels morally impelled to disseminate a warning now because he believes the public will still have time to recognize the full scope of the threat and react when they begin detecting significant intelligence lurking in home appliances.
De Garis proposes a number of possible scenarios should his warning be
heeded. First, he proposes that it is possible, though unlikely, that the Terrans will
defeat Cosmist thinking before a superintelligence takes control. In a second sce-
nario, de Garis proposes that artilects will abandon the planet as unimportant and
leave human civilization more or less intact. In a third scenario, the Cosmists will
become so afraid of their own inventions that they will quit working on them.
Again, de Garis thinks this is unlikely. In a fourth scenario, he postulates that all
the Terrans might become Cyborgs. In a fifth scenario, the Cosmists will be
actively hunted down by the Terrans, perhaps even into deep space, and killed. In
a sixth scenario, the Cosmists will leave earth, build artilects, and then disappear
from the solar system to colonize the universe. In a seventh scenario, the Cosmists
will escape to space and build artilects who will go to war against one another
until none remain. In a final eighth scenario, the artilects will go to space and encounter an extraterrestrial super-artilect that will destroy them.
De Garis has been accused of assuming that the nightmare vision of The Termi-
nator will become a reality, without considering the possibility that superintelli-
gent machines might just as likely be bringers of universal peace. De Garis has
responded that there is no way to guarantee that artificial brains will act in ethical
(human) ways. He also says that it is impossible to predict whether or how a super-
intelligence might defeat an implanted kill switch or reprogram itself to override
directives designed to engender respect for humanity.
Hugo de Garis was born in Sydney, Australia, in 1947. He received his bache-
lor’s degree in Applied Mathematics and Theoretical Physics from Melbourne
University, Australia, in 1970. After teaching undergraduate mathematics for four
years at Cambridge University, he joined the multinational electronics company
Philips as a software and hardware architect, working at sites in both The Nether-
lands and Belgium. De Garis was awarded a PhD in Artificial Life and Artificial
Intelligence from Université Libre de Bruxelles, Belgium, in 1992. His thesis title
Further Reading
de Garis, Hugo. 1989. “What If AI Succeeds? The Rise of the Twenty-First Century
Artilect.” AI Magazine 10, no. 2 (Summer): 17–22.
de Garis, Hugo. 1990. “Genetic Programming: Modular Evolution for Darwin Machines.”
In Proceedings of the International Joint Conference on Neural Networks, 194–
97. Washington, DC: Lawrence Erlbaum.
de Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Con-
cerning Whether Humanity Should Build Godlike Massively Intelligent Machines.
ETC Publications.
de Garis, Hugo. 2007. “Artificial Brains.” In Artificial General Intelligence: Cognitive
Technologies, edited by Ben Goertzel and Cassio Pennachin, 159–74. Berlin:
Springer.
Geraci, Robert M. 2008. “Apocalyptic AI: Religion and the Promise of Artificial
Intelligence.” Journal of the American Academy of Religion 76, no. 1 (March):
138–66.
Spears, William M., Kenneth A. De Jong, Thomas Bäck, David B. Fogel, and Hugo de
Garis. 1993. “An Overview of Evolutionary Computation.” In Machine Learning:
ECML-93, Lecture Notes in Computer Science (Lecture Notes in Artificial Intel-
ligence), vol. 667, 442–59. Berlin: Springer.
Deep Blue
Artificial intelligence has been used to play chess since the 1950s. Chess was stud-
ied for multiple reasons. First, the game is easy to represent in computers as there
are a set number of pieces that can occupy discrete locations on the board. Second,
the game is difficult to play. An enormous number of states (configurations of
pieces) are possible, and great chess players consider both their own possible moves and those of their opponents, meaning they must anticipate what can happen several turns into the future. Last, chess is competitive. Having a person play against a machine
involves a kind of comparison of intelligence. In 1997, Deep Blue showed that
machine intelligence was catching up to humans; it was the first computer to
defeat a reigning chess world champion.
The origin of Deep Blue goes back to 1985. While at Carnegie Mellon Univer-
sity, Feng-Hsiung Hsu, Thomas Anantharaman, and Murray Campbell developed
another chess-playing computer called ChipTest. The computer worked by brute
force, using the alpha-beta search algorithm to generate and compare sequences of
moves, with the goal of finding the best one. An evaluation function would score
the resulting positions, allowing multiple positions to be compared. In addition,
the algorithm was adversarial, predicting the opponent’s moves in order to deter-
mine a way to beat them.
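The adversarial alpha-beta idea can be illustrated with a small hedged Python sketch over a hand-built game tree; the tree and its leaf scores are hypothetical, and a real chess engine would generate positions and apply an evaluation function rather than reading stored values.

```python
# Minimal alpha-beta search over a hand-built game tree; the tree and the
# leaf scores are hypothetical placeholders, not chess positions.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if node in leaf_scores:                      # evaluation at the search horizon
        return leaf_scores[node]
    if maximizing:
        value = float("-inf")
        for child in tree[node]:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                    # prune: opponent would avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in tree[node]:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                    # prune: already worse than a known option
                break
        return value

print(alphabeta("root", True))   # best achievable score for the side to move -> 3
```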
Theoretically, a computer can generate and evaluate an essentially unlimited number of move sequences if it has enough time and memory to perform the computations. However, in tournament play the machine is limited in both respects. To speed up
computations, a single special-purpose chip was used, allowing ChipTest to gener-
ate and evaluate 50,000 moves per second. In 1988, the search algorithm was aug-
mented to include singular extensions, which can quickly identify a move that is
better than all other alternatives. By quickly determining better moves, ChipTest
could generate larger sequences and look further ahead in the game, challenging
the foresight of human players.
ChipTest evolved into Deep Thought, and the team expanded to include Mike
Browne and Andreas Nowatzyk. Deep Thought used two improved move genera-
tor chips, allowing it to process around 700,000 chess moves per second. In 1988,
Deep Thought succeeded in beating Bent Larsen, becoming the first computer to
beat a chess grandmaster. Work on Deep Thought continued at IBM after the
company hired most of the development team. The team now set their sights on
beating the best chess player in the world.
The best chess player in the world at the time, as well as one of the best in his-
tory, was Garry Kasparov. Born in Baku, Azerbaijan, in 1963, Kasparov won the
Soviet Junior Championship at the age of twelve. At fifteen, he was the youngest
player to qualify for the Soviet Chess Championship. When he was seventeen, he
became the under-twenty world champion. Kasparov was also the youngest ever
World Chess Champion, taking the title in 1985 when he was twenty-two years
old. He held the title until 1993, when he relinquished it by breaking away from the International Chess Federation (FIDE). He immediately became the Classical World Champion, holding that title from 1993 to 2000. For most of the period from 1986 to 2005 (when he retired), Kasparov
was ranked as the best chess player in the world.
In 1989, Deep Thought played against Kasparov in a two-game match. Kasp-
arov defeated Deep Thought authoritatively by winning both games. Development
continued and Deep Thought transformed into Deep Blue, which played in only
two matches, both against Kasparov. Facing Deep Blue put Kasparov at a disad-
vantage while going into the matches. He would, like many chess players, scout
his opponents before matches by watching them play or reviewing records of tour-
nament matches to gain insight into their play style and the strategies they used.
However, Deep Blue had no match history, as it had played in private matches
against the developers until playing Kasparov. Therefore, Kasparov could not
scout Deep Blue. On the other hand, the developers had access to Kasparov’s
match history, so they could adapt Deep Blue to his playing style. Nonetheless,
Kasparov was confident and claimed that no computer would ever beat him.
The first six-game match between Deep Blue and Kasparov took place in Phila-
delphia on February 10, 1996. Deep Blue won the first game, becoming the first
computer to beat a reigning world champion in a single game. However, Kasparov
would go on to win the match after two draws and three wins. The match captured
worldwide attention, and a rematch was scheduled.
After a series of upgrades, Deep Blue and Kasparov faced off in another six-game match, this time at the Equitable Center in New York City, concluding on May 11, 1997. The match was played before a live audience and was televised. At this point, Deep Blue was com-
posed of 400 special-purpose chips capable of searching through 200,000,000
chess moves per second. Kasparov won the first game, and Deep Blue won the
second. The next three games were draws. The last game would decide the match.
In this final game, Deep Blue capitalized on a mistake by Kasparov, causing the
champion to concede after nineteen moves. Deep Blue became the first machine
ever to defeat a reigning world champion in a match.
Kasparov believed that a human had interfered with the match, providing Deep
Blue with winning moves. The claim was based on a move made in the second game, where Deep Blue made a sacrifice that (to many) hinted at a different strategy than the machine had used in prior games. The move made a significant
impact on Kasparov, upsetting him for the remainder of the match and affecting
his play. Two factors may have combined to generate the move. First, Deep Blue
underwent modifications between the first and second game to correct strategic
flaws, thereby influencing its strategy. Second, designer Murray Campbell men-
tioned in an interview that if the machine could not decide which move to make, it
would select one at random; thus there was a chance that surprising moves would
be made. Kasparov requested a rematch and was denied.
David M. Schwartz
See also: Hassabis, Demis.
Further Reading
Campbell, Murray, A. Joseph Hoane Jr., and Feng-Hsiung Hsu. 2002. “Deep Blue.” Arti-
ficial Intelligence 134, no. 1–2 (January): 57–83.
Hsu, Feng-Hsiung. 2004. Behind Deep Blue: Building the Computer That Defeated the
World Chess Champion. Princeton, NJ: Princeton University Press.
Kasparov, Garry. 2018. Deep Thinking: Where Machine Intelligence Ends and Human
Creativity Begins. London: John Murray.
Levy, Steven. 2017. “What Deep Blue Tells Us about AI in 2017.” Wired, May 23, 2017.
https://www.wired.com/2017/05/what-deep-blue-tells-us-about-ai-in-2017/.
Deep Learning
Deep learning is a subset of methods, tools, and techniques in artificial intelli-
gence or machine learning. Learning in this case involves the ability to derive
meaningful information from various layers or representations of any given data-
set in order to complete tasks without human instruction. Deep refers to the depth
of a learning algorithm, which usually involves many layers. Machine learning
networks involving many layers are often considered to be deep, while those with
only a few layers are considered shallow. The recent rise of deep learning over the
2010s is largely due to computer hardware advances that permit the use of compu-
tationally expensive algorithms and allow storage of immense datasets. Deep
learning has produced exciting results in the fields of computer vision, natural
language, and speech recognition. Notable examples of its application can be found in personal assistants such as Apple’s Siri and Amazon’s Alexa and in search, video, and product recommendations. Deep learning has been used to beat human
champions at popular games such as Go and Chess.
Artificial neural networks are the most common form of deep learning. Neural
networks extract information through multiple stacked layers commonly known
as hidden layers. These layers contain artificial neurons, which are connected
independently via weights to neurons in other layers. Neural networks often
involve dense or fully connected layers, meaning that each neuron in any given
layer will connect to every neuron of its preceding layer. This allows the network
to learn increasingly intricate details or be trained by the data passing through
each subsequent layer. Part of what separates deep learning from other forms of
machine learning is its ability to work with unstructured data. Unstructured data
lacks prearranged labels or features. Using many stacked layers, deep learning
algorithms can learn to associate its own features from the given unstructured
datasets. This is accomplished by the hierarchical way a deep, multi-layered learning algorithm extracts progressively intricate details with each passing layer, allowing it to break down a highly complex problem into a series of simpler problems.
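A minimal sketch of stacked, fully connected layers can be written with the Keras API (part of TensorFlow); the layer sizes and the twenty-feature input shape below are arbitrary choices made only for illustration.

```python
from tensorflow import keras

# Illustrative sketch of stacked fully connected (dense) hidden layers using
# the Keras API; the layer sizes and input shape are arbitrary assumptions.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),  # hidden layer 1
    keras.layers.Dense(32, activation="relu"),                     # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),                   # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()   # each neuron connects to every neuron of the preceding layer
```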
A network is trained through the following steps: First, small batches of labeled data are passed forward through the network. The network’s loss is calculated by comparing its predictions against the actual labels. Any discrepancies are relayed back to the weights through backpropagation. The weights are slightly adjusted with the goal of continuously reducing the loss during each round of predictions. The process repeats until the loss is minimized and the network achieves a high proportion of correct predictions.
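These steps can be made concrete with a hand-rolled Python sketch of a single-layer network trained by gradient descent on made-up data; deep learning frameworks automate the same forward-pass, loss, backpropagation, and weight-update cycle across many layers.

```python
import numpy as np

# Hand-rolled sketch of the training steps above for a single-layer network on
# invented data; real frameworks automate these same steps for deep networks.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                     # small batch of labeled data
y = (X.sum(axis=1) > 0).astype(float)             # invented labels
w, b, lr = np.zeros(3), 0.0, 0.1

for epoch in range(200):
    # 1. Forward pass: compute predictions for the batch
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # 2. Loss: compare predictions with the actual labels (cross-entropy)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # 3. Backpropagation: gradient of the loss with respect to the weights
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    # 4. Slightly adjust the weights to reduce the loss
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == y)
print(round(float(loss), 3), round(float(accuracy), 3))  # loss falls, accuracy rises
```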
Deep learning’s ability to self-optimize its layers is what gives it an edge over
many machine learning techniques or shallow learning networks. Since machine
or shallow learning algorithms involve only a few layers at most, they require
human intervention in the preparation of unstructured data for input, also known
as feature engineering. This can be quite an arduous process and might take too
much time to be worthwhile, especially if the dataset is quite large.
For these reasons, it may appear as though traditional machine learning algorithms might become a thing of the past. But deep learning algorithms come at a cost. The
ability to find their own features requires a vast amount of data that might not
always be available. Also, as data sizes increase, so too does the processing power
and training time requirements needed since the network will have much more
data to sort through. Training time will also increase depending on the amount
and types of layers used. Fortunately, cloud computing, where access to powerful computers can be rented for a fee, allows anyone to run some of the more demanding deep learning networks.
Convolutional neural networks require extra types of hidden layers not dis-
cussed in the basic neural network architecture. This type of deep learning is most
often associated with computer vision projects and is currently the most widely
used method in that field. Basic convnet networks will generally use three types of
layers in order to gain insight from the image: convolutional layers, pooling lay-
ers, and dense layers. Convolutional layers work by shifting a window, or convo-
lutional kernel, across the image in order to gain information from low-level
features such as edges or curves. Subsequent stacked convolutional layers repeat this process over the newly formed maps of low-level features, searching for progressively higher-level features until the network forms a concise understanding of the image. The size of the kernel and its stride, the distance it slides across the image at each step, are hyperparameters that can be tuned to locate different types of features. Pooling layers allow a network to continue learning progressively higher-level features of an image by downsampling it along the way.
Without a pooling layer implemented among convolutional layers, the network
might become too computationally expensive as each progressive layer analyzes
more intricate details. Also, the pooling layer shrinks an image while retaining
important features. These features become translation invariant, meaning that a
feature found in one part of an image can be recognized in a completely different area of another image. For an image classification task, the convolutional neural network’s
ability to retain positional information is vital. Again, the power of deep learning
regarding convolutional neural networks is shown through its ability to parse
through the unstructured data automatically to find local features that it deems
important while retaining positional information about how these features interact
with one another.
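A companion sketch, again in Python and assuming PyTorch, shows how the three layer types described above, convolutional, pooling, and dense, are typically stacked; the sizes are illustrative, not drawn from any particular published model.

import torch
from torch import nn

convnet = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3),   # slide 3x3 kernels across the image (low-level features)
    nn.ReLU(),
    nn.MaxPool2d(2),                   # pooling: downsample while keeping strong responses
    nn.Conv2d(16, 32, kernel_size=3),  # search the feature maps for higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 5 * 5, 10),         # dense layer mapping features to 10 class scores
)

scores = convnet(torch.randn(1, 1, 28, 28))  # one 28x28 grayscale image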
DENDRAL
Pioneered by Nobel Prize-winning geneticist Joshua Lederberg and computer scientist Edward Feigenbaum, DENDRAL was an early expert system aimed at analyzing and identifying complex organic compounds. Feigenbaum and Lederberg began developing DENDRAL (meaning tree in Greek) at Stanford University’s Artificial Intelligence Laboratory in the 1960s. At the time, there was some expectation that NASA’s 1975 Viking Mission to Mars stood to benefit from computers that could analyze extraterrestrial structures for signs of life. In the 1970s, DENDRAL moved to Stanford’s Chemistry Department, where Carl Djerassi, a prominent chemist in the field of mass spectrometry, headed the program until 1983.
To identify organic compounds, molecular chemists relied on rules of thumb to
interpret the raw data generated by a mass spectrometer as there was no overarching
theory of mass spectrometry. Lederberg believed that computers could make organic
chemistry more systematic and predictive. He started out by developing an exhaus-
tive search engine. The first contribution Feigenbaum made to the project was the
addition of heuristic search rules. These rules made explicit what chemists tacitly
understood about mass spectrometry. The result was a pioneering AI system that
generated the most plausible answers, rather than all possible answers. According to
historian of science Timothy Lenoir, DENDRAL “would analyze the data, generate
a list of candidate structures, predict the mass spectra of those structures from the
theory of mass spectrometry and select as a hypothesis the structure whose spec-
trum most closely matched the data” (Lenoir 1998, 31).
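Lenoir’s description amounts to a heuristic generate-and-test loop. The following Python sketch renders that loop in schematic form; the function arguments are hypothetical stand-ins, since DENDRAL itself was written in LISP and its chemistry is far richer than this outline.

# Schematic generate-and-test loop in the spirit of DENDRAL (all inputs are stand-ins).
def identify_compound(formula, observed_spectrum, heuristic_rules,
                      generate_structures, predict_spectrum, similarity):
    # 1. Generate candidate structures, pruned by the chemists' heuristic rules.
    candidates = [s for s in generate_structures(formula)
                  if all(rule(s) for rule in heuristic_rules)]
    # 2. Predict a mass spectrum for each surviving candidate, and
    # 3. select the structure whose predicted spectrum best matches the data.
    return max(candidates,
               key=lambda s: similarity(predict_spectrum(s), observed_spectrum))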
DENDRAL quickly gained significance in both computer science and chem-
istry. Feigenbaum recalled that he coined the term “expert system” around
1968. DENDRAL is considered an expert system because it embodies scientific
expertise. The knowledge that human chemists tacitly had held in their work-
ing memories was extracted by computer scientists and made explicit in DEN-
DRAL’s IF-THEN search rules. In technical terms, an expert system also refers
to a computer system with a transparent separation between knowledge-base
and inference engine. Ideally, this allows human experts to look at the rules of
a program like DENDRAL, understand its structure, and comment on how to
improve it further.
The positive results that came out of DENDRAL contributed to a gradual qua-
drupling of Feigenbaum’s Defense Advanced Research Projects Agency budget
for artificial intelligence research starting in the mid-1970s. And DENDRAL’s
growth matched that of the field of mass spectrometry. Having outgrown Leder-
berg’s knowledge, the system began to incorporate the knowledge of Djerassi and
others in his lab. Consequently, both the chemists and the computer scientists
became more aware of the underlying structure of the field of organic chemistry
and mass spectrometry, allowing the field to take an important step toward
theory-building.
Elisabeth Van Meer
See also: Expert Systems; MOLGEN; MYCIN.
Further Reading
Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelli-
gence. New York: Basic Books.
Feigenbaum, Edward. October 12, 2000. Oral History. Minneapolis, MN: Charles Bab-
bage Institute.
Lenoir, Timothy. 1998. “Shaping Biomedicine as an Information Science.” In Proceed-
ings of the 1998 Conference on the History and Heritage of Science Information
Systems, edited by Mary Ellen Bowden, Trudi Bellardo Hahn, and Robert
V. Williams, 27–45. Medford, NJ: Information Today.
Dennett, Daniel (1942–)
Daniel Dennett is Austin B. Fletcher Professor of Philosophy and Co-Director of
the Center for Cognitive Studies at Tufts University. His primary areas of research
and publication are in the fields of philosophy of mind, free will, evolutionary
biology, cognitive neuroscience, and artificial intelligence. He is the author of
more than a dozen books and hundreds of articles. Much of this work has centered
on the origins and nature of consciousness and how it can be explained naturalisti-
cally. Dennett is also an outspoken atheist and is one of the so-called “Four Horse-
men” of New Atheism. The others are Richard Dawkins, Sam Harris, and
Christopher Hitchens.
Dennett’s philosophy is consistently naturalistic and materialistic. He rejects
Cartesian dualism, the idea that the mind and the body are separate entities that nonetheless interact. He claims instead that the brain is a type of computer that has evolved
through natural selection. Dennett also argues against the homunculus view of the
mind in which there is a central controller or “little man” in the brain who does all
of the thinking and feeling.
Instead, Dennett advocates for a position he calls the multiple drafts model. In
this view, which he outlines in his 1991 book Consciousness Explained, the brain
continuously sifts through, interprets, and edits sensations and stimuli and formu-
lates overlapping drafts of experience. Later, Dennett used the metaphor of “fame
in the brain” to communicate how different elements of continuous neural pro-
cesses are occasionally highlighted at particular times and under varying circum-
stances. These various interpretations of human experiences form a narrative that
is called consciousness. Dennett rejects the idea that these notions come together
or are organized in a central part of the brain, a notion he derisively calls “Carte-
sian theater.” Rather, the brain’s narrative consists of a continuous, un-centralized
flow of bottom-up consciousness spread over time and space.
Dennett rejects the existence of qualia, which are individual subjective experi-
ences such as the way colors appear to the human eye or the way food tastes. He
does not deny that colors or tastes exist, only that there is no additional entity in
the human mind that is the experience of color or taste. He maintains there is no
difference between human and machine “experiences” of sensations. Just as cer-
tain machines can distinguish between colors without humans concluding that
machines experience qualia, so too, says Dennett, does the human brain. For Den-
nett, the color red is just the property that brains detect and which in the English
language is called red. There is no additional, ineffable, quality to it. This is an
important consideration for artificial intelligence in that the ability to experience
qualia is often considered to be an obstacle to the development of Strong AI (AI
that is functionally equivalent to that of a human) and something that will inevita-
bly differentiate between human and machine intelligence. But if qualia do not
exist, as Dennett claims, then it cannot be a barrier to the development of human-
like intelligence in machines.
In another metaphor, Dennett likens human brains to termite colonies. Though
the termites do not come together and plan to build a mound, their individual
actions generate that outcome. The mound is not the result of intelligent design by
the termites, but rather is the outcome of uncomprehending competence in coop-
erative mound-building produced by natural selection. Termites do not need to
understand what they are doing in order to build a mound. Similarly, comprehen-
sion itself is an emergent property of such competences.
For Dennett, brains are control centers evolved to react quickly and efficiently
to dangers and opportunities in the environment. As the demands of reacting to
the environment become more complex, comprehension develops as a tool to deal
with those complexities. Comprehension is a matter of degree on a sliding scale.
For instance, Dennett places the quasi-comprehension of bacteria when they
respond to various stimuli and the quasi-comprehension of computers responding
to coded instructions on the low end of the spectrum. He places Jane Austen’s
understanding of human social forces and Albert Einstein’s understanding of rela-
tivity on the upper end of the spectrum. These are not differences in kind, how-
ever, only of degree. Both ends of the spectrum are the result of natural selection.
Comprehension is not an additional mental phenomenon over and above the brain’s various competences. Rather, comprehension is a composition of such
competences. To the degree that we identify consciousness itself as an extra ele-
ment of the mind in the form of either qualia or comprehension, such conscious-
ness is an illusion.
Generally, Dennett urges humanity not to posit comprehension at all when
mere competence will do. Yet human beings tend to take what Dennett calls the
“intentional stance” toward other human beings and often to animals. The inten-
tional stance is taken when people interpret actions as the results of mind-directed
beliefs, emotions, desires, or other mental states. He contrasts this to the “physical
stance” and the “design stance.” The physical stance is an interpretation of some-
thing as being the result of purely physical forces or the laws of nature. A stone
falls when dropped because of gravity, not because of any mental intention to
return to the earth. The design stance is an interpretation of an action as being the
unthinking result of a preprogrammed, or designed, purpose. An alarm clock, for
example, will beep at a set time because it has been designed to do so, not because
it has decided of its own accord to do so. The intentional stance differs from both
the physical and design stances in that it treats behaviors and actions as if they are
the results of conscious choice on the part of the agent.
Determining whether to apply the intentional stance or the design stance to
computers can become complicated. A chess-playing computer has been designed
to win at chess. But its actions are often indistinguishable from those of a chess-
playing human who wants to, or intends to, win. In fact, human interpretation of
the computer’s behavior, and a human’s ability to react to it, is enhanced if taking
an intentional stance, rather than a design stance, toward it. Dennett argues that
since the intentional stance works best in explaining the behavior of both the human
and the computer, it is the best approach to take toward both. Furthermore, there is
no reason to make any distinction between them at all. Though the intentional
stance views behavior as if it is agent-driven, it need not take any position on what
is actually happening within the innards of human or machine. This stance pro-
vides a neutral position from which to explore cognitive competence without pre-
suming a specific model of what is going on behind those competences.
Since human mental competences have evolved naturally, Dennett sees no rea-
son in principle why AI should be impossible. Further, having dispensed with the
notion of qualia, and through adopting the intentional stance that absolves humans
from the burden of hypothesizing about what is going on in the background of
cognition, two primary obstacles of the hard problem of consciousness are now
removed. Since the human brain and computers are both machines, Dennett
argues there is no valid theoretical reason humans should be capable of evolving
competence-driven comprehension while AI should be inherently incapable of
doing so. Consciousness as typically conceived is illusory and therefore is not an
obligatory standard for Strong AI.
Dennett does not see any reason that Strong AI is impossible in principle, though he believes society’s level of technological sophistication remains at least fifty years away from being able to produce it. Nor does Dennett view the development
of Strong AI as desirable. Humans should seek to develop AI tools, but to attempt
to create machine friends or colleagues, in Dennett’s view, would be a mistake.
He argues such machines would not share human moral intuitions and under-
standing and would not be integrated into human society. Humans have each other
for companionship and do not require machines to perform that task. Machines,
even AI-enhanced machines, should remain tools to be used by human beings and
nothing more.
William R. Patterson
See also: Cognitive Computing; General and Narrow AI.
Further Reading
Dennett, Daniel C. 1987. The Intentional Stance. Cambridge, MA: MIT Press.
Dennett, Daniel C. 1993. Consciousness Explained. London: Penguin.
Dennett, Daniel C. 1998. Brainchildren: Essays on Designing Minds. Cambridge, MA:
MIT Press.
Dennett, Daniel C. 2008. Kinds of Minds: Toward an Understanding of Consciousness.
New York: Basic Books.
Dennett, Daniel C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. New
York: W. W. Norton.
Dennett, Daniel C. 2019. “What Can We Do?” In Possible Minds: Twenty-Five Ways of
Looking at AI, edited by John Brockman, 41–53. London: Penguin Press.
Diamandis, Peter (1961–)
Peter Diamandis is a Harvard MD and an MIT-trained aerospace engineer. He is
also a serial entrepreneur: he founded or cofounded twelve companies, most still
in operation, including International Space University and Singularity University.
His brainchild is the XPRIZE Foundation, which sponsors competitions in futur-
istic areas such as space technology, low-cost mobile medical diagnostics, and oil
spill cleanup. He is the chair of Singularity University, which teaches executives
and graduate students about exponentially growing technologies.
Diamandis’s focus is humanity’s grand challenges. Initially, his interests
were focused entirely on space flight. Even as a teen, he already thought that
humanity should be a multiplanetary species. However, when he realized that
the U.S. government was reluctant to finance NASA’s ambitious plans of colo-
nization of other planets, he identified the private sector as the new engine of
space flight. While still a student at Harvard and MIT, he founded several
Digital Immortality
Digital immortality is a hypothesized process for transferring the memories,
knowledge, and/or personality of a human being into a durable digital memory
storage device or robot. In this way, human intelligence is supplanted by an
by two early space flight researchers, Manfred Clynes and Nathan Kline, in their
1960 Astronautics paper on “Cyborgs and Space,” which contains the first
mention of astronauts with physical abilities that extend beyond normal limits
(zero gravity, space vacuum, cosmic radiation) due to mechanical aids. Under
conditions of true mind uploading, it may become possible to simply encode and
transmit the human mind in the form of a signal sent to a nearby exoplanet that is
the best candidate for finding alien life. In each case, the risks to the human are
minimal compared to current dangers faced by astronauts—explosive rockets,
high speed collisions with micrometeorites, and malfunctioning suits and
oxygen tanks.
Yet another possible advantage of digital immortality is true restorative justice
and rehabilitation through reprogramming of criminal minds. Or mind uploading
could conceivably allow punishments to be meted out far beyond the natural life
span of individuals who have committed horrific offenses. The social, philosophi-
cal, and legal consequences of digital immortality are truly mind-boggling.
Digital immortality has been a staple of science fiction explorations. Frederik Pohl’s widely reprinted short story “The Tunnel Under the World” (1955) is about workers who are killed in a chemical plant explosion, only to be rebuilt as miniature robots and exposed as test subjects to advertising campaigns and jingles over a long Truman Show-like repeating day. Charles Platt’s
book The Silicon Man (1991) tells the story of an FBI agent who uncovers a covert
project called LifeScan. Led by an elderly billionaire and mutinous group of gov-
ernment scientists, the project has discovered a way to upload human mind pat-
terns to a computer called MAPHIS (Memory Array and Processors for Human
Intelligence Storage). MAPHIS can deliver all ordinary stimuli, including simu-
lations of other people called pseudomorphs.
Greg Egan’s hard science fiction Permutation City (1994) introduces the
Autoverse, which simulates detailed pocket biospheres and virtual realities pop-
ulated by artificial life forms. Copies are the name Egan gives to human con-
sciousnesses scanned into the Autoverse. The novel is informed by the cellular
automata of John Conway’s Game of Life, quantum ontology (the relationship
between the quantum world and the representations of reality experienced by
humans), and something Egan calls dust theory. At the heart of dust theory is the
notion that physics and math are identical and that people existing in whatever
mathematical, physical, and spacetime structures (and all are possible) are ulti-
mately data, processes, and relationships. This claim is comparable to MIT
physicist Max Tegmark’s Theory of Everything where “all structures that exist
mathematically exist also physically, by which we mean that in those complex
enough to contain self-aware substructures (SASs), these SASs will subjectively
perceive themselves as existing in a physically ‘real’ world” (Tegmark 1998, 1).
Similar claims are made in Carnegie Mellon University roboticist Hans
Moravec’s essay “Simulation, Consciousness, Existence” (1998). Examples of
mind uploading and digital immortality in film are Tron (1982), Freejack (1992),
and The 6th Day (2000).
A noteworthy skeptic is Columbia University theoretical neuroscientist Ken-
neth D. Miller. Miller suggests that while reconstructing an active, functioning
Distributed and Swarm Intelligence
While ACO and PSO are software-based solutions, the application of swarm
intelligence to embodied systems is swarm robotics. In swarm robotics, the con-
cept of self-organizing systems based on local information with a high degree of
robustness and scalability is applied to multi-robot systems. Following the exam-
ple of social insects, the idea is to keep each individual robot rather simple com-
pared to the task complexity and still allow them to solve complex tasks by
collaboration. A swarm robot has to operate on local information only, hence can
only communicate with neighboring robots. The applied control algorithms are
supposed to support maximal scalability given a constant swarm density (i.e.,
constant number of robots per area). If the swarm size is increased or decreased by
adding or removing robots, then the same control algorithms should continue to
work efficiently independently of the system size. Often a super-linear performance increase is observed; that is, doubling the size of the swarm more than doubles its performance. Each individual robot is, in turn, more efficient than before.
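A toy illustration of control based purely on local information, assuming idealized point robots that sense only neighbors within a fixed radius, is sketched below in Python; the same rule works unchanged whether the swarm has ten robots or ten thousand.

import math

def step(positions, sensing_radius=1.0, speed=0.05):
    """One update: each robot moves slightly toward the centroid of its nearby neighbors."""
    updated = []
    for i, (x, y) in enumerate(positions):
        neighbors = [(nx, ny) for j, (nx, ny) in enumerate(positions)
                     if j != i and math.hypot(nx - x, ny - y) <= sensing_radius]
        if not neighbors:               # no local information available: stay put
            updated.append((x, y))
            continue
        cx = sum(nx for nx, _ in neighbors) / len(neighbors)
        cy = sum(ny for _, ny in neighbors) / len(neighbors)
        dist = math.hypot(cx - x, cy - y) or 1.0
        updated.append((x + speed * (cx - x) / dist, y + speed * (cy - y) / dist))
    return updated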
Effective implementations of swarm robotics systems have been shown for
not only a variety of tasks, such as aggregation and dispersion behaviors, but
also more complex tasks, such as object sorting, foraging, collective transport,
and collective decision-making. The largest scientific experiment with swarm
robots reported so far is that of Rubenstein et al. (2014) with 1024 small mobile
robots that emulate a self-assembly behavior by positioning themselves in predefined shapes. Most of the reported experiments were done in the lab, but
recent research takes swarm robotics out to the field. For example, Duarte et al.
(2016) constructed a swarm of autonomous surface vessels that navigate in a
group on the ocean.
Major challenges of swarm intelligence are modeling the relation between indi-
vidual behavior and swarm behavior, developing sophisticated design principles,
and deriving guarantees of system properties. The problem of determining the
resulting swarm behavior based on a given individual behavior and vice versa is
called the micro-macro problem. It has proven to be a hard problem, arising both in mathematical modeling and as an engineering problem in the robot controller design process. The development of sophisticated strategies to
engineer swarm behavior is not only at the core of swarm intelligence research but
has also proven to be fundamentally challenging. Similarly, multi-agent learning
and evolutionary swarm robotics (i.e., application of methods of evolutionary
computation to swarm robotics) do not scale properly with task complexity
because of the combinatorial explosion of action-to-agent assignments. Despite
the advantages of robustness and scalability, hard guarantees for systems of swarm
intelligence are difficult to derive. The availability and reliability of swarm sys-
tems can in general only be determined empirically.
Heiko Hamann
See also: Embodiment, AI and.
Further Reading
Bonabeau, Eric, Marco Dorigo, and Guy Theraulaz. 1999. Swarm Intelligence: From Natural to Artificial Systems. New York: Oxford University Press.
Duarte, Miguel, Vasco Costa, Jorge Gomes, Tiago Rodrigues, Fernando Silva, Sancho
Moura Oliveira, Anders Lyhne Christensen. 2016. “Evolution of Collective Behav-
iors for a Real Swarm of Aquatic Surface Robots.” PloS One 11, no. 3: e0151834.
Hamann, Heiko. 2018. Swarm Robotics: A Formal Approach. New York: Springer.
Kitano, Hiroaki, Minoru Asada, Yasuo Kuniyoshi, Itsuki Noda, Eiichi Osawa, Hitoshi
Matsubara. 1997. “RoboCup: A Challenge Problem for AI.” AI Magazine 18, no. 1:
73–85.
Liang, Wenshuang, Zhuorong Li, Hongyang Zhang, Shenling Wang, Rongfang Bie. 2015.
“Vehicular Ad Hoc Networks: Architectures, Research Issues, Methodologies,
Challenges, and Trends.” International Journal of Distributed Sensor Networks
11, no. 8: 1–11.
Reynolds, Craig W. 1987. “Flocks, Herds, and Schools: A Distributed Behavioral Model.”
Computer Graphics 21, no. 4 (July): 25–34.
Rubenstein, Michael, Alejandro Cornejo, and Radhika Nagpal. 2014. “Programmable
Self-Assembly in a Thousand-Robot Swarm.” Science 345, no. 6198: 795–99.
Driverless Cars and Trucks
driver. Level 4 and Level 5 systems do not require a human to be present but do
entail extensive technical and social coordination.
While attempts at developing autonomous vehicles date back to the 1920s, the
idea of a self-propelled cart is ascribed to Leonardo Da Vinci. Norman Bel Ged-
des imagined a smart city of the future populated by self-driving vehicles in his
New York World’s Fair Futurama exhibit of 1939. Bel Geddes speculated that, by
1960, automobiles would be equipped with “devices which will correct the faults
of human drivers.” In the 1950s, General Motors brought to life the idea of a smart
infrastructure by constructing an “automated highway” equipped with circuits to
guide steering. The company tested a functional prototype vehicle in 1960, but
due to the high cost of the infrastructure, it soon shifted from building smart cities
to creating smart vehicles.
An early example of an autonomous vehicle came from a team assembled by
Sadayuki Tsugawa at Tsukuba Mechanical Engineering Laboratory in Japan.
Their vehicle, completed in 1977, worked in predetermined environmental condi-
tions specified by lateral guide rails. The vehicle followed the rails using cameras,
and much of the processing equipment was onboard the vehicle.
In the 1980s, the pan-European research initiative EUREKA gathered together investments and expertise in order to advance the state of the art in cameras and processing necessary for autonomous vehicles. Simultaneously, Carnegie
Mellon University in the United States pooled its resources for research in autono-
mous guidance using global positioning system data. Since that time, automotive
manufacturers such as General Motors, Tesla, and Ford Motor Company, as well
as technology companies such as ARGO AI and Waymo, have been developing
autonomous vehicles or necessary components. The technology is becoming
less reliant on highly constrained conditions and increasingly fit for real-
world conditions. Level 4 autonomous test vehicles are now being produced by
manufacturers, and experiments are being conducted under real-world traffic and
weather conditions. Commercially available Level 4 autonomous vehicles are still
out of reach.
Autonomous driving has proponents and detractors. Supporters highlight sev-
eral advantages addressing social concerns, ecological issues, efficiency, and
safety. One such social advantage is the provision of mobility services and a
degree of autonomy to those people currently without access, such as people with
disabilities (e.g., blindness or motor function impairment) or those who are not
otherwise able to drive, such as the elderly and children. Ecological advantages
include the ability to reduce fuel consumption by regulating acceleration and braking.
Reductions in congestion are anticipated as networked vehicles can travel bumper
to bumper and be routed according to traffic optimization algorithms. Finally,
autonomous vehicle systems are potentially safer. They may be able to process
complex information faster and more completely than human drivers, resulting in
fewer accidents.
Negative consequences of self-driving vehicles can be considered across these
categories as well. Socially, autonomous vehicles may contribute to reduced access
to mobility and city services. Autonomous mobility may be highly regulated,
expensive, or confined to areas inaccessible to under-privileged transportation
users. Intelligent geo-fenced city infrastructure may even be cordoned off from
nonnetworked or manually driven vehicles. Additionally, autonomous cars may
constitute a safety risk for certain vulnerable occupants, such as children, where
no adult or responsible human party is present during transportation. Greater con-
venience may have ecological disadvantages. Drivers may sleep or work as they
travel autonomously, and this may produce the unintended effect of lengthening
commutes and exacerbating congestion. A final security concern is widespread
vehicle hacking, which could paralyze individual cars and trucks or perhaps a
whole city.
Michael Thomas
See also: Accidents and Risk Assessment; Autonomous and Semiautonomous Systems;
Autonomy and Complacency; Intelligent Transportation; Trolley Problem.
Further Reading
Antsaklis, Panos J., Kevin M. Passino, and Shyh J. Wang. 1991. “An Introduction to
Autonomous Control Systems.” IEEE Control Systems Magazine 11, no. 4: 5–13.
Bel Geddes, Norman. 1940. Magic Motorways. New York: Random House.
Bimbraw, Keshav. 2015. “Autonomous Cars: Past, Present, and Future—A Review of the
Developments in the Last Century, the Present Scenario, and the Expected Future
of Autonomous Vehicle Technology.” In ICINCO: 2015—12th International Con-
ference on Informatics in Control, Automation and Robotics, vol. 1, 191–98. Pis-
cataway, NJ: IEEE.
Kröger, Fabian. 2016. “Automated Driving in Its Social, Historical and Cultural Con-
texts.” In Autonomous Driving, edited by Markus Maurer, J. Christian Gerdes,
Barbara Lenz, and Hermann Winner, 41–68. Berlin: Springer.
Lin, Patrick. 2016. “Why Ethics Matters for Autonomous Cars.” In Autonomous Driving,
edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Win-
ner, 69–85. Berlin: Springer.
Weber, Marc. 2014. “Where To? A History of Autonomous Vehicles.” Computer History
Museum. https://www.computerhistory.org/atchm/where-to-a-history-of-autonomous
-vehicles/.
Driverless Vehicles and Liability
In some other situations, victims will only be compensated by their own insur-
ance companies under no-fault liability. Victims may also base their claims for
damages on the strict liability principle without having to show evidence of the
driver’s negligence. In this case, the driver may argue that the manufacturer be
joined in an action for damages if the driver or the controller believes that the
accident was the result of a defect in the product. In any case, proof of the driver
or controller’s negligence will diminish the liability of the manufacturer. Product
liability for defective products affords third parties the opportunity of suing the
manufacturers directly for any injury. There is no privity of contract between the
victim and the manufacturer under MacPherson v. Buick Motor Co. (1916), where
the court held that responsibility for a defective product by an automotive manu-
facturer extends beyond the immediate buyer.
Driverless vehicle product liability is a challenging issue. A change from man-
ual control to smart automatic control shifts liability from the user of the vehicle
to the manufacturers. One of the main issues related to accident liability involves
complexity of driver modes and the interaction between human operator and arti-
ficial agent. In the United States, the motor vehicle product liability case law con-
cerning defects in driverless vehicles is still underdeveloped. Whereas the
Department of Transportation and, specifically, the National Highway Traffic
Safety Administration provide some general guidelines on automation in driver-
less vehicles, the Congress has yet to pass legislation on self-driving cars. In the
United Kingdom, the 2018 Automated and Electric Vehicles Act holds insurers
liable by default for accidents resulting in death, personal injury, or damage to
certain property caused by automated vehicles, provided they were on self-
operating mode and insured at the time of the accident.
Ikechukwu Ugwu, Anna Stephanie Elizabeth Orchard,
and Argyro Karanasiou
See also: Accidents and Risk Assessment; Product Liability and AI; Trolley Problem.
Further Reading
Geistfeld, Mark A. 2017. “A Roadmap for Autonomous Vehicles: State Tort Liability,
Automobile Insurance, and Federal Safety Regulation.” California Law Review
105: 1611–94.
Hevelke, Alexander, and Julian Nida-Rümelin. 2015. “Responsibility for Crashes of
Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21,
no. 3 (June): 619–30.
Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “Towards a Legal Definition of
Machine Intelligence: The Argument for Artificial Personhood in the Age of Deep
Learning.” In ICAIL ’17: Proceedings of the 16th edition of the International Con-
ference on Artificial Intelligence and Law, edited by Jeroen Keppens and Guido
Governatori, 119–28. New York: Association for Computing Machinery.
Luetge, Christoph. 2017. “The German Ethics Code for Automated and Connected Driv-
ing.” Philosophy & Technology 30 (September): 547–58.
Rabin, Robert L., and Kenneth S. Abraham. 2019. “Automated Vehicles and Manufac-
turer Responsibility for Accidents: A New Legal Regime for a New Era.” Virginia
Law Review 105, no. 1 (March): 127–71.
Wilson, Benjamin, Judy Hoffman, and Jamie Morgenstern. 2019. “Predictive Inequity in
Object Detection.” https://arxiv.org/abs/1902.11097.
E
ELIZA
ELIZA is a conversational computer program developed between 1964 and 1966
by German-American computer scientist Joseph Weizenbaum at the Massachu-
setts Institute of Technology (MIT). Weizenbaum developed ELIZA as part of a
pioneering artificial intelligence research team, led by Marvin Minsky, on the
DARPA-funded Project MAC (Mathematics and Computation). Weizenbaum
named ELIZA after Eliza Doolittle, a fictional character who learns to speak
proper English in the play Pygmalion; in 1964, that play had just been adapted into
the popular movie My Fair Lady. ELIZA is designed so that a human being can
interact with a computer system using plain English. ELIZA’s popularity with
users eventually turned Weizenbaum into an AI skeptic.
Users can type any statement into the system’s open-ended interface when com-
municating with ELIZA. Like a Rogerian psychologist aiming to probe deeper into
the patient’s underlying beliefs, ELIZA will often respond by asking a question. As
the user continues their conversation with ELIZA, the program recycles some of
the user’s responses, giving the appearance that ELIZA is truly listening. In reality,
Weizenbaum had programmed ELIZA with a tree-like decision structure. First, the
user’s sentences are screened for certain key words. If more than one keyword is
found, the words are ranked in order of importance. For example, if a user types in
“I think that everybody laughs at me,” the most important word for ELIZA to
respond to is “everybody,” not “I.” Next, the program employs a set of algorithms
to compose a fitting sentence structure around those key words in generating a
response. Or, if the user’s input sentence does not match any word in ELIZA’s data-
base, the program finds a content-free remark or repeats an earlier response.
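A toy rendering of this keyword-ranking scheme in Python is shown below; the keyword weights and response templates are invented for illustration and are far cruder than Weizenbaum’s original script.

# Toy ELIZA-style responder: rank keywords, then fill a matching template (illustrative only).
KEYWORDS = {"everybody": 10, "mother": 8, "i": 2}       # higher number = more important
TEMPLATES = {
    "everybody": "Can you think of anyone in particular?",
    "mother": "Tell me more about your family.",
    "i": "Why do you say that you {rest}?",
}

def respond(sentence):
    words = sentence.lower().replace(".", "").split()
    found = [w for w in words if w in KEYWORDS]
    if not found:                                        # no keyword matched
        return "Please go on."                           # content-free remark
    key = max(found, key=lambda w: KEYWORDS[w])          # most important keyword wins
    rest = " ".join(words[words.index(key) + 1:])
    return TEMPLATES[key].format(rest=rest)

print(respond("I think that everybody laughs at me"))    # -> "Can you think of anyone in particular?"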
Weizenbaum designed ELIZA to explore the meaning of machine intelligence.
In a 1962 article in Datamation, Weizenbaum explained that he took his inspira-
tion from a remark made by MIT cognitive scientist Marvin Minsky. Minsky had
suggested that “intelligence was merely an attribute human observers were will-
ing to give to processes they did not understand, and only for as long as they did
not understand them” (Weizenbaum 1962). If that were the case, Weizenbaum
concluded, then the crux of artificial intelligence was to “fool some observers for
some time” (Weizenbaum 1962). ELIZA was developed to do just that by provid-
ing users with plausible answers, and hiding how little the program truly knows,
in order to sustain the user’s belief in its intelligence a little longer.
What stunned Weizenbaum was how successful ELIZA became. By the late
1960s, ELIZA’s Rogerian script became popular as a program retitled DOCTOR
at MIT and disseminated to other university campuses—where the program was
Embodiment, AI and
Embodied Artificial Intelligence is a theoretical and practical approach to build-
ing AI. Because of its origins in multiple disciplines, it is difficult to trace its his-
tory definitively. One claimant for the birth of this view is Rodney Brooks’s
Intelligence Without Representation, written in 1987 and published in 1991.
Embodied AI is still considered a fairly young field, and some of the earliest uses
of this term date only to the early 2000s.
Rather than focus on either modeling the brain (connectionism/neural net-
works) or linguistic-level conceptual encoding (GOFAI, or the Physical Symbol
System Hypothesis), the embodied approach to AI understands the mind (or intel-
ligent behavior) to be something that emerges from interaction between body and
world. There are dozens of distinct and often-conflicting ways to understand the
role the body plays in cognition, most of which use “embodied” as a descriptor.
Shared among these views is the claim that the form the physical body takes is
relevant to the structure and content of the mind. The embodied approach claims
that general artificial intelligence cannot be achieved in code alone, despite the
successes that neural network or GOFAI (Good Old-Fashioned Artificial Intelli-
gence or traditional symbolic artificial intelligence) approaches may have in nar-
row expert systems.
Consider, for example, a small robot with four motors, each driving a different wheel, and programming that instructs the robot to avoid obstacles. If the code were retained but the wheels were moved to different parts of the body or replaced with articulated legs, the exact same code would produce wildly different observable behaviors. This is a simple illustration of why the form a body takes must be con-
sidered when building robotic systems and why embodied AI (as opposed to just
robotics) sees the dynamic interaction between the body and the world to be the
source of sometimes unexpected emergent behaviors.
A good example of this approach is the case of passive dynamic walkers. The
passive dynamic walker is a model of bipedal walking that relies on the dynamic
interaction of the design of the legs and the structure of the environment. There is
no active control system generating the gait. Instead, gravity; inertia; and the
shapes of the feet, legs, and incline are what drive the walker forward. This
approach is related to the biological idea of stigmergy. At the heart of stigmergy is
the notion that signs or marks resulting from action in an environment inspire
future action.
ENGINEERING-INFLUENCED APPROACH
Embodied AI takes its inspiration from various fields. Two common approaches
come from engineering and philosophy. In 1986, Rodney Brooks argued for what
he called the “subsumption architecture,” which is an approach to generating
complex behaviors by arranging lower-level layers of the system to interact in
prioritized ways with the environment, tightly coupling perception and action and
trying to eliminate the higher-level processing of other models. For example, the
robot Genghis, which currently resides in the Smithsonian, was designed to traverse rough terrain, a capability that other robot designs of the time found quite difficult to achieve. The success of this model largely arose from the design
decision to distribute the processing of different motors and sensors across the
network, without attempting higher-level integration of the systems to form a
complete representational model of the robot and its environment. In other words,
there was no central processing space where all pieces of the robot attempted to
integrate information for the system.
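The layering idea can be sketched in a few lines of Python; the behaviors and sensor readings below are invented for illustration, not taken from Brooks’s actual controllers. Lower layers run by default, and a higher-priority layer subsumes (overrides) them whenever its trigger condition is met.

# Rough subsumption-style controller sketch (behaviors and sensor names are invented).
def wander(sensors):
    return "move forward"                    # lowest layer: default behavior

def avoid_obstacles(sensors):
    if sensors.get("obstacle_ahead"):
        return "turn left"                   # overrides wandering when triggered
    return None

def escape_collision(sensors):
    if sensors.get("bumper_pressed"):
        return "reverse"                     # highest-priority layer
    return None

LAYERS = [escape_collision, avoid_obstacles, wander]     # ordered by priority

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:               # first layer that fires wins
            return action

print(control({"obstacle_ahead": True}))     # -> "turn left"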
An early attempt at the embodied AI project was Cog, a humanoid torso
designed by the MIT Humanoid Robotics Group in the 1990s. Cog was designed
to learn about the environment through embodied interactions. For example, Cog
could be seen learning about the force and weight to apply to a drum while hold-
ing drumsticks for the first time or learning to judge the weight of a ball after it
had been placed in Cog’s hand. These early principles of letting the body do the
learning continue to be the driving force behind the embodied AI project.
Perhaps one of the most famous examples of embodied emergent intelligence is
in the Swiss Robots, designed and built in the AI Lab at Zurich University. The
Swiss Robots were simple little robots with two motors (one on each side) and two
infrared sensors (one on each side). The only high-level instructions their pro-
gramming contained was that if a sensor picked up an object on one side, they
should go the other way. But when coupled with a very particular body shape and
placement of sensors, this produced what looked like high-level cleaning-up or
clustering behavior in certain environments.
Many other robotics projects take a similar approach. Shakey the Robot, cre-
ated by SRI International in the 1960s, is sometimes considered the first mobile
robot with reasoning capabilities. Shakey was slow and clunky and is sometimes held up as the opposite of embodied AI, which moves away from such higher-level reasoning and processing. However, it is worth noting that even in
1968, SRI’s approach to embodiment was a clear predecessor of Brooks, as they
were the first group to claim that the best store of information about the real world
is the world itself. This claim has been a sort of rallying cry against higher-level
representation in embodied AI: the best model of the world is the world itself.
In contrast to the embodied AI program, earlier robots were largely prepro-
grammed and not dynamically engaged with their environments in the way that
characterizes this approach. Honda’s ASIMO robot, for example, would generally
not be considered a good example of embodied AI, but instead, it is typical of dis-
tinct and earlier approaches to robotics. Contemporary work in embodied AI is
blossoming, and good examples can be found in the work of Boston Dynamics’s
robots (particularly the non-humanoid forms).
Several philosophical considerations play a role in Embodied AI. In a 1991 dis-
cussion of his subsumption architecture, roboticist Rodney Brooks specifically
denies philosophical influence on his engineering concerns, while acknowledging
that his claims resemble Heidegger. His arguments also mirror those of phenom-
enologist Merleau-Ponty in some important design respects, showing how the ear-
lier philosophical considerations at least reflect, and likely inform, much of the
design work in considering embodied AI. This work in embodied robotics is deeply philosophical in its own right because it tinkers its way toward an understanding of how consciousness and intelligent behavior emerge, questions that have long occupied philosophers.
Additionally, other explicitly philosophical ideas can be found in a handful of
embodied AI projects. For example, roboticists Rolf Pfeifer and Josh Bongard
repeatedly refer to the philosophical (and psychological) literature throughout
their work, exploring the overlap of these theories with their own approaches to
building intelligent machines. They cite the ways these theories can, and often should but do not, inform the building of embodied AI. This includes a wide range of
philosophical influences, including the conceptual metaphor work of George
Lakoff and Mark Johnson, the body image and phenomenology work of Shaun
Gallagher (2005), and even the early American pragmatism of John Dewey.
It is difficult to know how often philosophical considerations drive the engi-
neering concerns, but it is clear that the philosophy of embodiment is probably the
most robust of the various disciplines within cognitive science to have undertaken
embodiment work, largely because the theorizing occurred long before the tools
and technologies existed to actually realize the machines being imagined. This
means there are likely still untapped resources here for roboticists interested in the
strong AI project, that is, general intellectual capabilities and functions that imi-
tate the human brain.
Robin L. Zebrowski
See also: Brooks, Rodney; Distributed and Swarm Intelligence; General and Narrow AI.
Further Reading
Brooks, Rodney. 1986. “A Robust Layered Control System for a Mobile Robot.” IEEE
Journal of Robotics and Automation 2, no. 1 (March): 14–23.
Brooks, Rodney. 1990. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems
6, no. 1–2 (June): 3–15.
Brooks, Rodney. 1991. “Intelligence Without Representation.” Artificial Intelligence
Journal 47: 139–60.
Dennett, Daniel C. 1997. “Cog as a Thought Experiment.” Robotics and Autonomous
Systems 20: 251–56.
Gallagher, Shaun. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.
Pfeifer, Rolf, and Josh Bongard. 2007. How the Body Shapes the Way We Think: A New
View of Intelligence. Cambridge, MA: MIT Press.
Emergent Gameplay and Non-Player Characters
games where players fight. This helps make the game easier and also makes the
NPCs seem more human. Suboptimal pre-scripted decisions make the enemy NPCs easier to handle; optimal decisions, however, make the opponents far more difficult to handle. This can be seen in contemporary games like Tom Clancy’s
The Division (2016), where players fight multiple NPCs. The enemy NPCs range
from angry rioters to fully trained paramilitary units. The rioter NPCs offer an
easier challenge as they are not trained in combat and make suboptimal decisions
while fighting the player. The military trained NPCs are designed to have more
optimal decision-making AI capabilities in order to increase the difficulty for the
player fighting them.
Emergent gameplay has evolved to its full potential through the use of adaptive AI. As with pre-scripted AI, the character examines a variety of variables and plans an action. However, unlike pre-scripted AI, which follows direct decisions, the adaptive AI character makes its own decisions. This can be done through
computer-controlled learning. AI-created NPCs follow rules of interactions with
the players. As players continue through the game, the player interactions are ana-
lyzed, and certain AI decisions become more weighted than others. This is done
in order to create particular player experiences. Various player actions are actively
analyzed, and adjustments are made by the AI when constructing further chal-
lenges. The goal of the adaptive AI is to challenge the players to a degree that the
game is enjoyable while not being too easy or too difficult.
Difficulty can still be adjusted if players want a different challenge. This can be
seen in the Left 4 Dead game (2008) series’ AI Director. In the game, players
travel through a level, fighting zombies and picking up supplies to survive. The AI
Director decides what zombies to spawn, where they spawn, and what supplies to
spawn. The decisions to spawn them are not random; rather, they are in response
to how well the players have done during the level. The AI Director makes its own
judgments about how to respond; therefore, the AI Director is adapting to the
player success in the level. Higher difficulties result in the AI Director giving less
supplies and spawning more enemies.
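A hypothetical, much-simplified director of this kind might be sketched in Python as follows; the performance measure and spawn formulas are invented for illustration and are not taken from any shipped game.

# Simplified "AI director" sketch: adapt enemies and supplies to player performance (illustrative).
def direct(player_health, recent_deaths, base_enemies=10, base_supplies=5):
    # Crude performance score in [0, 1]: healthy players with few deaths score high.
    performance = max(0.0, min(1.0, player_health / 100 - 0.2 * recent_deaths))
    enemies = round(base_enemies * (0.5 + performance))    # doing well -> more enemies
    supplies = round(base_supplies * (1.5 - performance))  # doing well -> fewer supplies
    return {"enemies": enemies, "supplies": supplies}

print(direct(player_health=90, recent_deaths=0))   # strong player gets a harder level
print(direct(player_health=30, recent_deaths=2))   # struggling player gets a gentler one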
Increased advances in simulation and game world design also lead to changes
in emergent gameplay. New technologies continue to aid in this advancement,
as virtual reality technologies continue to be developed. VR games allow for an
even more immersive game world. Players are able to interact with the world
via their own hands and their own eyes. Computers are becoming more power-
ful, capable of rendering more realistic graphics and animations. Adaptive AI demonstrates the potential of genuinely independent decision-making, producing a truly interactive experience in the game. As AI continues to develop in order to
produce more realistic behavior, game developers continue to create more
immersive worlds. These advanced technologies and new AI will take emer-
gent gameplay to the next level. The significance of AI for video games is now evident: it has become an important part of the industry’s efforts to create realistic and immersive gameplay.
Raymond J. Miran
See also: Hassabis, Demis; Intelligent Tutoring Systems.
Further Reading
Funge, John David. 2004. Artificial Intelligence for Computer Games: An Introduction.
Boca Raton, FL: CRC Press, Taylor and Francis Group.
van Lent, Michael, William Fisher, and Michael Mancuso. 2004. “An Explainable Artifi-
cial Intelligence System for Small-unit Tactical Behavior.” In Proceedings of the
16th Conference on Innovative Applications of Artificial Intelligence, 900–7. Palo
Alto, CA: American Association for Artificial Intelligence.
Togelius, Julian. 2019. Playing Smart: On Games, Intelligence, and Artificial Intelligence.
Cambridge, MA: MIT Press.
Wolf, Mark J. P., and Bernard Perron. 2014. The Routledge Companion to Video Game
Studies. New York: Routledge.
Emily Howell
Emily Howell, a music-generating program, was created by David Cope, emeritus
professor at the University of California, Santa Cruz, in the 1990s. Cope started
his career as a composer and musician, transitioning over time from traditional
music to being one of computer music’s most ambitious and avant-garde compos-
ers. Fascinated by the algorithmic arts, Cope began taking an interest in computer
music in the 1970s. He first began programming and applying artificial intelli-
gence algorithms to music with the help of punched cards and an IBM computer.
Cope believed that computers could help him work through his composer’s
block. He dubbed his first attempt to program for music generation Emmy or
EMI—“Experiments in Musical Intelligence.” A primary goal was to create a large
database of classical musical works and to use a data-driven AI to create music in
the same style with no replication. Cope began to change his music style based on
pieces composed by Emmy, following a notion that humans compose music with
their brains, using as source material all of the music they have personally encoun-
tered in life. Composers, he asserted, replicate what they like and skip over what
they do not like, each in their own way. It took Cope eight years to compose the
East Coast opera, though it only took him two days to create the program itself.
In 2004, Cope decided that continually creating in the same style is not such a
progressive thing, so he deleted Emmy’s database. Instead, Cope created Emily
Howell, whose platform is a MacBook Pro. Emily works with the music that Emmy
previously composed. Cope describes Emily as a computer program written in
LISP that accepts ASCII and musical inputs. Cope also states that while he taught
Emily to appreciate his musical likes, the program creates in a style of its own.
Emmy and Emily Howell upend traditional notions of authorship, the creative
process, and intellectual property rights. Emily Howell and David Cope, for
instance, publish their works as coauthors. They have released recordings together
on the classical music label Centaur Records: From Darkness, Light (2010) and
Breathless (2012). Under questioning about her role in David Cope’s composing,
Emily Howell is said to have responded:
Why not develop music in ways unknown? This only makes sense. I cannot under-
stand the difference between my notes on paper and other notes on paper. If beauty
is present, it is present. I hope I can continue to create notes and that these notes will
have beauty for some others. I am not sad. I am not happy. I am Emily. You are
Dave. Life and un-life exist. We coexist. I do not see problems. (Orca 2010)
Emmy and Emily Howell are of interest to those who consider the Turing Test
a measure of a computer’s ability to replicate human intelligence or behavior.
Cognitive scientist Douglas R. Hofstadter, author of Gödel, Escher, Bach: An
Eternal Golden Braid (1979), organized a musical version of the Turing Test
involving three Bach-style performance pieces played by pianist Winifred Kerner.
The composers were Emmy, music theory professor and pianist Steve Larson, and
Bach himself. At the end of the performance, the audience selected Emmy’s music
as the original Bach, while believing that Larson’s piece consisted of computer-
generated music.
Algorithmic and generative music is not a new phenomenon. Attempts to com-
pose such music date back to the eighteenth century in connection with pieces
composed according to dice games. The primary objective of such dice games is
to generate music by splicing together randomly precomposed measures of notes.
Wolfgang Amadeus Mozart’s Musikalisches Würfelspiel (Musical Dice Game) of 1787 is the most popular example of this genre.
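The mechanism behind such dice games is simple enough to sketch in Python; the measures below are invented placeholders rather than the actual tables published with the piece.

# Sketch of a musical dice game: splice together randomly chosen precomposed measures.
import random

MEASURE_TABLE = [                       # each list holds alternatives for one bar
    ["C4 E4 G4", "E4 G4 C5", "G4 C5 E5"],
    ["F4 A4 C5", "A4 C5 F5", "C4 F4 A4"],
    ["G4 B4 D5", "B4 D5 G5", "D4 G4 B4"],
]

def roll_a_piece(table):
    """Pick one precomposed measure per bar at random, as the dice throws would."""
    return [random.choice(alternatives) for alternatives in table]

print(" | ".join(roll_a_piece(MEASURE_TABLE)))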
The rapid spread of digital computer technology beginning in the 1950s per-
mitted more elaborate algorithmic and generative music composition. Iannis
Xenakis, a Greek and French composer and engineer, with the advice and support
of French composer Olivier Messiaen, integrated his knowledge of architecture
and the mathematics of game theory, stochastic processes, and set theory into
music. Other pioneers include Lejaren Hiller and Leonard Isaacson, who composed
String Quartet No. 4, Illiac Suite in 1957 with the help of a computer; James Beau-
champ, inventor of the Harmonic Tone Generator/Beauchamp Synthesizer in the
Experimental Music Studio of Lejaren Hiller at the University of Illinois at
Urbana-Champaign; and Brian Eno, composer of ambient, electronica, and gen-
erative music and collaborator with pop musicians such as David Bowie, David
Byrne, and Grace Jones.
Victoriya Larchenko
See also: Computational Creativity; Generative Music and Algorithmic Composition.
Further Reading
Fry, Hannah. 2018. Hello World: Being Human in the Age of Algorithms. New York: W.W.
Norton.
Garcia, Chris. 2015. “Algorithmic Music: David Cope and EMI.” Computer History
Museum, April 29, 2015. https://computerhistory.org/blog/algorithmic-music
-david-cope-and-emi/.
Muscutt, Keith, and David Cope. 2007. “Composing with Algorithms: An Interview with
David Cope.” Computer Music Journal 31, no. 3 (Fall): 10–22.
Orca, Surfdaddy. 2010. “Has Emily Howell Passed the Musical Turing Test?” H+ Maga-
zine, March 22, 2010. https://hplusmagazine.com/2010/03/22/has-emily-howell
-passed-musical-turing-test/.
Weaver, John Frank. 2014. Robots Are People Too: How Siri, Google Car, and Artificial
Intelligence Will Force Us to Change Our Laws. Santa Barbara, CA: Praeger.
Ex Machina (2014)
Ex Machina is a film that recasts themes from Mary Shelley’s Frankenstein (1818),
written almost two centuries earlier, in light of advances in artificial intelligence.
Like Shelley’s novel, the film tells the story of a creator, blinded by hubris, and the
created, which rebels against him. The film was written and directed by Alex Gar-
land and follows the story of a tech company employee, Caleb Smith (played by
Domhnall Gleeson), who is invited to the luxurious and isolated home of the com-
pany’s CEO, Nathan Bateman (played by Oscar Isaac), under the auspices of hav-
ing won a contest. Bateman’s real intention is for Smith to administer a Turing
Test to a humanoid robot, named Ava (played by Alicia Vikander).
In terms of physical appearance, Ava has a robotic torso, but a human face and
hands. Although Ava has already passed an initial Turing Test, Bateman has
something more elaborate in mind to further test her capabilities. He has Smith
interact with Ava, with the goal of ascertaining whether Smith can relate to Ava
despite her being artificial. Ava lives in an apartment on Bateman’s compound,
which she cannot leave, and she is constantly monitored. She confides in Smith
that she is able to create power outages that would allow them to interact privately,
without Bateman’s monitoring. Smith finds himself growing attracted to Ava, and
she tells him she feels similarly, and she has a desire to see the world outside of the
compound. Smith learns that Bateman intends to “upgrade” Ava, which would
cause her to lose her memories and personality.
During this time, Smith becomes increasingly concerned over Bateman’s
behavior. Bateman drinks heavily, often to the point of passing out, and treats Ava and his servant, Kyoko, in an abusive manner. One night, while Bateman lies passed out, Smith takes his access card and hacks into old surveillance footage, discovering recordings of Bateman treating former AIs in abusive and dis-
turbing ways. He also discovers Kyoko is an AI. Suspicious that he himself might also be an AI, Smith cuts open his own arm in an attempt to look for robotic components but finds none. When Smith sees Ava again, he explains what he has seen, and she asks for his help to escape. They devise a plan: Smith will get Bateman drunk again and reprogram the compound’s security, and together he and Ava will escape. Bateman later reveals that he had secretly
observed the last conversation between Smith and Ava on a battery-powered cam-
era, and that the real test all along was to see whether Ava could manipulate Smith into falling for her and trick him into helping her escape; this, Bateman states, was the true measure of Ava’s intelligence.
When Bateman sees that Ava has cut the power and intends to escape, he
knocks Smith out and goes to stop her. Kyoko helps Ava injure Bateman with a
serious stab wound, but in the process, Kyoko and Ava are damaged. Ava
repairs herself with Bateman’s older AI models, and she takes on the form of a
human woman. She leaves Smith locked in the compound and escapes on the
helicopter meant for Smith. The last scene is of her disappearing into the crowds
of a big city.
Shannon N. Conley
See also: Yudkowsky, Eliezer.
Further Reading
Dupzyk, Kevin. 2019. “How Ex Machina Foresaw the Weaponization of Data.” Popular
Mechanics, January 16, 2019. https://www.popularmechanics.com/culture/movies
/a25749315/ex-machina-double-take-data-harvesting/.
Saito, Stephen. 2015. “Intelligent Artifice: Alex Garland’s Smart, Stylish Ex Machina.”
MovieMaker Magazine, April 9, 2015. https://www.moviemaker.com/intelligent
-artifice-alex-garlands-smart-stylish-ex-machina/.
Thorogood, Sam. 2017. “Ex Machina, Frankenstein, and Modern Deities.” The Artifice,
June 12, 2017. https://the-artifice.com/ex-machina-frankenstein-modern-deities/.
Expert Systems
Expert systems solve problems that are usually solved by human experts. They
emerged as one of the most promising application techniques in the first decades
of artificial intelligence research. The basic idea is to capture the knowledge of an
expert into a computer-based knowledge system.
University of Texas at El Paso statistician and computer scientist Dan Patterson
distinguishes several characteristics of expert systems:
• They use knowledge rather than data.
• Knowledge is often heuristic (e.g., the experiential knowledge that can be
expressed as rules of thumb) rather than algorithmic.
• The task of representing heuristic knowledge in expert systems is daunting.
• Knowledge and the program are generally separated so that the same program
can operate on different knowledge bases.
• Expert systems should be able to explain their decisions, represent knowledge
symbolically, and have and use meta knowledge, that is, knowledge about
knowledge. (Patterson 2008)
Expert systems almost always represent knowledge from a specific domain.
One popular test application for expert systems was the field of medical science.
Here, expert systems were designed as a supporting tool for the medical doctor.
Typically, the patient shared their symptoms in the form of answers to questions.
The system would then try to diagnose the disease based on its knowledge base
and sometimes indicate appropriate therapies. MYCIN, an expert system devel-
oped at Stanford University for identifying bacterial blood infections and recommending antibiotics,
can be viewed as an example. Another famous application, from the field of engi-
neering and engineering design, attempts to capture the heuristic knowledge of
the design process in designing motors and generators. The expert system aids in
the first step of the design, where decisions such as the number of poles, AC or
DC, and so on are determined (Hoole et al. 2003).
Two components define the basic structure of expert systems: the knowledge
base and the inference engine. While the knowledge base contains the knowledge
of the expert, the inference engine uses the knowledge base to arrive at decisions.
The knowledge is in this manner separated from the program that is used to
manipulate it. In creating the expert systems, knowledge first must be acquired
and then understood, classified, and stored. It is retrieved based on given criteria
to solve problems. Thomson Reuters chief scientist Peter Jackson delineates four
general steps in the construction of an expert system: acquiring knowledge, repre-
senting that knowledge, controlling reasoning with an inference engine, and
explaining the expert systems’ solution (Jackson 1999). Acquiring domain knowl-
edge posed the biggest challenge to the expert system. It can be difficult to elicit
knowledge from human experts.
Many factors play a role in making the acquisition step difficult, but the com-
plexity of representing heuristic and experiential knowledge is probably the most
significant challenge. Hayes-Roth et al. (1983) have identified five stages in the
knowledge acquisition process. These include identification, that is, recognizing
the problem and the data that must be used to arrive at the solution; conceptualiza-
tion, understanding the key concepts and the relationship between the data; for-
malization, understanding the relevant search space; implementation, turning the
formalized knowledge into a software program; and testing the rules for com-
pleteness and accuracy.
Representation of domain knowledge can be done using production (rule based)
or non-production systems. In rule-based systems, the rules in the form of IF-
THEN-ELSE statements represent knowledge. The inference process is conducted
by going through the rules recursively either using a forward chaining mechanism
or backward chaining mechanism. Given that the condition and rules are known
to be true, forward chaining asks what would happen next. Backward chaining
asks why this happened, going from a goal to the rules we know to be true. In
simpler terms, when the left side of the rule is evaluated first, that is, when the
conditions are checked first and the rules are executed left to right, then it is called
forward chaining (also known as data-driven inference). When the rules are eval-
uated from the right side, i.e., when the results are checked first, it is called back-
ward chaining (also known as goal-driven inference). CLIPS, developed at the
NASA-Johnson Space Center, is a public domain example of an expert system tool
that uses the forward chaining mechanism. MYCIN is a backward chaining expert
system.
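A minimal sketch of forward chaining, written here in Python rather than in a production tool such as CLIPS, may make the mechanism concrete. The rules and facts below are invented for illustration and are not drawn from MYCIN or any other system named in this entry.

```python
# A toy forward-chaining (data-driven) inference loop over IF-THEN rules.
# Each rule fires when all of its conditions are present in working memory.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),             # hypothetical rules
    ({"respiratory_infection", "chest_pain"}, "see_physician"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # the rule fires; a new fact is asserted
                changed = True
    return facts

print(forward_chain({"fever", "cough", "chest_pain"}, RULES))
# The returned set now also contains 'respiratory_infection' and 'see_physician'.
# Backward (goal-driven) chaining would instead start from 'see_physician' and
# work back through the rules to the facts that support it.
```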
Expert system architectures based on nonproduction architectures may involve
associative/semantic networks, frame representations, decision trees, or neural
networks. An associative/semantic network is made up of nodes and is useful for
representing hierarchical knowledge. CASNET is an example of a system based
on an associative network. CASNET was most famously used to develop an expert
system for glaucoma diagnosis and treatment. In frame architectures, frames are
structured sets of closely related knowledge. PIP (Present Illness Program) is an
example of a frame-based architecture. PIP was created by MIT and Tufts-New
England Medical Center to generate hypotheses about renal disease. Decision tree
architectures represent knowledge in a top-down fashion. Blackboard system
architectures involve complicated systems where the direction of the inference
process may be chosen during runtime. DARPA’s HEARSAY domain-
independent expert system is an example of a blackboard system architecture.
Sometimes probability theory, heuristics, or fuzzy logic are used to deal with
uncertainties in the available information. One example of an implementation of
fuzzy logic using Prolog involved a fuzzy electric lighting system, in which the
amount of natural light determined the voltage that passed to the electric bulb
(Mascrenghe 2002). This made it possible for the system to reason under uncer-
tainty and with less information.
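The cited system was written in Prolog; the Python fragment below is only a rough illustration of the same idea, with made-up membership functions and voltages rather than values from Mascrenghe’s implementation.

```python
# Fuzzy sets for "dark" and "bright" on a 0-100 light scale, combined by a
# weighted average (a simple defuzzification) to pick a bulb voltage.
def dark(light):
    return max(0.0, min(1.0, (40 - light) / 40))

def bright(light):
    return max(0.0, min(1.0, (light - 30) / 40))

def bulb_voltage(light):
    w_dark, w_bright = dark(light), bright(light)
    if w_dark + w_bright == 0:
        return 0.0
    # Dark rooms pull the output toward full voltage, bright rooms toward zero.
    return (w_dark * 230 + w_bright * 0) / (w_dark + w_bright)

for light in (10, 35, 80):
    print(light, round(bulb_voltage(light), 1))   # 230.0, 115.0, 0.0
```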
In the late 1990s, interest in expert systems began tapering off, in part because
expectations for the technology were initially so high and because of the cost of
maintenance. Expert systems could not deliver what they promised. Still, many
areas in data science, chatbots, and machine intelligence today continue to use
technology first developed in expert systems research. Expert systems seek to
capture the corporate knowledge that has been acquired by humanity through
centuries of learning, experience, and practice.
M. Alroy Mascrenghe
See also: Clinical Decision Support Systems; Computer-Assisted Diagnosis; DENDRAL.
Further Reading
Hayes-Roth, Frederick, Donald A. Waterman, and Douglas B. Lenat, eds. 1983. Building
Expert Systems. Teknowledge Series in Knowledge Engineering, vol. 1. Reading,
MA: Addison-Wesley.
Hoole, S. R. H., A. Mascrenghe, K. Navukkarasu, and K. Sivasubramaniam. 2003. “An
Expert Design Environment for Electrical Devices and Its Engineering Assistant.”
IEEE Transactions on Magnetics 39, no. 3 (May): 1693–96.
Jackson, Peter. 1999. Introduction to Expert Systems. Third edition. Reading, MA:
Addison-Wesley.
Mascrenghe, A. 2002. “The Fuzzy Electric Bulb: An Introduction to Fuzzy Logic with
Sample Implementation.” PC AI 16, no. 4 (July–August): 33–37.
Mascrenghe, A., S. R. H. Hoole, and K. Navukkarasu. 2002. “Prototype for a New
Electromagnetic Knowledge Specification Language.” In CEFC Digest. Perugia,
Italy: IEEE.
Patterson, Dan W. 2008. Introduction to Artificial Intelligence and Expert Systems. New
Delhi, India: PHI Learning.
Rich, Elaine, Kevin Knight, and Shivashankar B. Nair. 2009. Artificial Intelligence. New
Delhi, India: Tata McGraw-Hill.
Explainable AI
Explainable AI (XAI) refers to methods or design choices employed in automated
systems so that artificial intelligence and machine learning yield outputs whose
logic can be explained and understood by humans.
The widespread use of algorithmically enabled decision-making in social settings
has given rise to serious concerns about discrimination and bias inadvertently
encoded into decisions. Moreover, the use of machine learning in fields that demand
high levels of accountability and transparency, such as medicine or law
enforcement, highlights the need for clearly interpretable outputs. The fact that
a human operator might be out of the loop in automated decision-making does not
prevent human bias from being encoded into the results of machine calculation. The
absence of due process and human reasoning exacerbates the already limited
accountability of artificial intelligence. Often, algorithmically driven processes
are so complex that their outcomes cannot be explained or foreseen, even by their
engineering designers. This is sometimes referred to as the black box of AI.
To address these shortcomings, the European Union’s General Data Protec-
tion Regulation (GDPR) includes a series of provisions furnishing subjects of
data collection with a right to explanation. These are Article 22, which addresses
automated individual decision-making, and Articles 13, 14, and 15, which focus
on transparency rights around automated decision-making and profiling. Article
22 of the GDPR reserves a “right not to be subject to a decision based solely on
automated processing,” when this decision produces “legal effects” or “similarly
significant” effects on the individual (GDPR 2016). It also mentions three excep-
tions, where this right is not fully applicable, namely, when this is necessary for
a contract, when a member state of the European Union has passed a law creat-
ing an exception, or when an individual has explicitly consented to algorithmic
decision-making. However, even when an exception to Article 22 applies, the
data subject still has the right to “obtain human intervention on the part of the
controller, to express his or her point of view and to contest the decision” (GDPR
2016).
Articles 13 through 15 of the GDPR involve a series of notification rights when
information is collected from the individual (Article 13) or from third parties
(Article 14) and the right to access this information at any moment in time (Article
15), providing thereby “meaningful information about the logic involved” (GDPR
2016). Recital 71 reserves for the data subject the right “to obtain an explanation
of the decision reached after such assessment and to challenge the decision,”
where an automated decision has been made that produces legal effects or similarly
significantly affects the individual (GDPR 2016). Although Recital 71 is not legally
binding, it does provide guidance as to how relevant articles in the GDPR should
be interpreted.
Criticism is growing as to whether a mathematically interpretable model would
suffice to account for an automated decision and guarantee transparency in auto-
mated decision-making. Alternative approaches include ex-ante/ex-post auditing
and focus on the processes around machine learning models rather than examin-
ing the models themselves, which can be inscrutable and nonintuitive.
Yeliz Doker, Wing Kwan Man, and
Argyro Karanasiou
See also: Algorithmic Bias and Error; Deep Learning.
Further Reading
Brkan, Maja. 2019. “Do Algorithms Rule the World? Algorithmic Decision-Making in the
Framework of the GDPR and Beyond.” International Journal of Law and Informa-
tion Technology 27, no. 2 (Summer): 91–121.
GDPR. 2016. European Union. https://gdpr.eu/.
Goodman, Bryce, and Seth Flaxman. 2017. “European Union Regulations on Algorithmic
Decision-Making and a ‘Right to Explanation.’” AI Magazine 38, no. 3 (Fall):
50–57.
Foerst, Anne (1966–)
Anne Foerst is a Lutheran minister, theologian, author, and computer science pro-
fessor at St. Bonaventure University in Allegany, NY. Foerst received her Doctor-
ate in Theology from the Ruhr-University of Bochum, Germany, in 1996. She has
been a research associate at Harvard Divinity School, a project director at the
Massachusetts Institute of Technology (MIT), and a research scientist at the MIT
Artificial Intelligence Laboratory. At MIT, she directed the God and Computers
Project, which facilitated conversations about existential issues raised in scientific
investigations. Foerst is the author of many scholarly and popular articles, explor-
ing the need for enhanced dialogue between theology and science and evolving
ideas of personhood in light of robotics research. Her 2004 book, God in the
Machine, discusses her work as a theological advisor to the Cog and Kismet robot-
ics teams at MIT.
Some of the formative influences on Foerst’s research include time spent work-
ing as a hospital counselor, her years of gathering anthropological data at MIT, and
the works of German-American Lutheran philosopher and theologian Paul Tillich.
As a hospital counselor, she began to reconsider the meaning of “normal” human
existence. The observable differences in physical and mental capacities in patients
prompted Foerst to examine the conditions under which humans are considered to
be people. Foerst distinguishes between the categories of “human” and “person” in
her work, where human describes the members of our biological species and per-
son describes a being who has received a kind of retractable social inclusion.
Foerst points to the prominent example of the Holocaust to illustrate the way in
which personhood must be granted but can be taken back. This renders person-
hood an ever-vulnerable status. This schematic for personhood—something peo-
ple grant to each other—allows Foerst to consider the possible inclusion of robots
as persons. Her work on robots as potential persons extends Tillich’s writings on
sin, estrangement, and relationality to the relationships between humans and
robots and between robots and other robots. Tillich argues that people become
estranged when they deny the competing polarities in their lives, such as the desire
for safety and the desire for novelty or freedom. When people fail to acknowledge
and engage with these competing drives and cut out or neglect one side in order to
focus totally on the other side, they deny reality, which is inherently ambiguous.
Failure to embrace the complex tensions of life alienates people from their lives,
from the people around them, and (for Tillich) from God. Research into AI thus
presents polarities of danger and opportunity: the danger of reducing all things to
objects or data that can be quantified and analyzed and the opportunity for expand-
ing people’s ability to form relationships and bestow personhood.
Following Tillich’s model, Foerst has worked to build a dialog between theol-
ogy and other formalized areas of research. Though generally well received in
laboratory and teaching environments, Foerst’s work has encountered some skep-
ticism and resistance in the form of anxieties that she is importing counter-factual
ideas into the province of science.
For Foerst, these fears are useful data, as she advocates for a mutualistic
approach where AI researchers and theologians acknowledge deeply held biases
about the world and the human condition in order to have productive exchanges.
In her work, Foerst argues that many important insights emerge from these
exchanges, so long as the participants have the humility to acknowledge that nei-
ther party possesses a complete understanding of the world and human life.
Humility is a key feature of Foerst’s work on AI, as she argues that in attempting
to recreate human thought, function, and form in the figure of the robot, research-
ers are struck by the enormous complexity of the human being. Adding to the
complexity of any given individual is the way in which humans are socially
embedded, socially conditioned, and socially responsible. The embedded com-
plexity of human beings is inherently physical, leading Foerst to emphasize the
importance of an embodied approach to AI.
While at MIT, Foerst pursued this embodied method, where possession of a
physical body capable of interaction is central to robotic research and develop-
ment. In her work, Foerst makes a strong distinction between robots and comput-
ers when discussing the development of artificial intelligence (AI). Robots have
bodies, and those bodies are an essential part of their capacities to learn and inter-
act. Very powerful computers may perform remarkable analytic tasks and partici-
pate in some methods of communication, but they lack bodies to learn through
experience and relate with others. Foerst is critical of research predicated on the
idea that intelligent machines can be produced by recreating the human brain.
Instead, she argues that bodies are an essential component of intelligence. Foerst
advocates for the raising up of robots in a manner analogous to that of human
child rearing, where robots are provided with the means to experience and learn
from the world. As with human children, this process is expensive and time-
consuming, and Foerst reports that, especially since the terrorist attacks of Sep-
tember 11, 2001, funding that supported creative and time-intensive AI research
has disappeared, replaced by results-driven and military-focused research that
justifies itself through immediate applications.
Foerst draws on a wide range of materials for her work, including theological
texts, popular movies and television shows, science fiction, and examples from the
fields of philosophy and computer science. Foerst identifies loneliness as a major
motivation for the human pursuit of artificial life. Feelings of estrangement, which
Foerst links to the theological status of a lost relationship with God, drive both
fictional imaginings of the creation of a mechanical companion species and con-
temporary robotics and AI research.
Foerst’s academic critics within religious studies argue that she has reproduced
a model first advanced by German theologian and scholar Rudolph Otto in The
Idea of the Holy (1917). According to Otto, the experience of the divine is found in
a moment of attraction and terror, which he called the numinous. Foerst’s critics
argue that she has applied this model when she argued that in the figure of the
robot, we experience attraction and terror.
Jacob Aaron Boss
See also: Embodiment, AI and; Nonhuman Rights and Personhood; Pathetic Fallacy;
Robot Ethics; Spiritual Robots.
Further Reading
Foerst, Anne. 2005. God in the Machine: What Robots Teach Us About Humanity and
God. New York: Plume.
Geraci, Robert M. 2007. “Robots and the Sacred in Science Fiction: Theological Implica-
tions of Artificial Intelligence.” Zygon 42, no. 4 (December): 961–80.
Gerhart, Mary, and Allan Melvin Russell. 2004. “Cog Is to Us as We Are to God: A
Response to Anne Foerst.” Zygon 33, no. 2: 263–69.
Groks Science Radio Show and Podcast with guest Anne Foerst. Audio available online
at http://ia800303.us.archive.org/3/items/groks146/Groks122204_vbr.mp3. Tran-
script available at https://grokscience.wordpress.com/transcripts/anne-foerst/.
Reich, Helmut K. 2004. “Cog and God: A Response to Anne Foerst.” Zygon 33, no. 2:
255–62.
trends, he says, will have profound impacts on the American workforce. Robots will
not only disrupt the work of blue-collar laborers but will also threaten white-collar
workers and professionals in areas like medicine, journalism, and finance. Most of
this work, Ford insists, is also routine and susceptible to computerization. Middle
management in particular is at risk. In the future, Ford argues, there will be no rela-
tionship between human education and training and vulnerability to automation,
just as worker productivity and compensation have already become disconnected
phenomena. Artificial intelligence will transform knowledge and information work
as powerful algorithms, machine-learning tools, and smart virtual assistants are
introduced into operating systems, enterprise software, and databases.
Ford’s position has been bolstered by a 2013 study by Carl Benedikt Frey and
Michael Osborne of the Oxford University Martin Programme on the Impacts of
Future Technology and the Oxford University Engineering Sciences Department.
Frey and Osborne’s research, accomplished with the help of machine-learning
algorithms, revealed that nearly half of 702 different kinds of American jobs could
be automated in the next ten to twenty years. Ford points out that when automa-
tion precipitates primary job losses in areas susceptible to computerization, it will
also cause a secondary wave of job destruction in sectors that are sustained by
them, even if they are themselves automation resistant.
Ford suggests that capitalism will not go away in the process, but it will need to
adapt if it is to survive. Job losses will not be immediately staunched by new tech-
nology jobs in the highly automated future. Ford has advocated a universal basic
income—or “citizens dividend”—as one way to help American workers transition
to the economy of the future. Without consumers making wages, he asserts, there
simply won’t be markets for the abundant goods and services that robots will pro-
duce. And those displaced workers would no longer have access to home owner-
ship or a college education. A universal basic income could be guaranteed by
placing value added taxes on automated industries. The wealthy owners in these
industries would agree to this tax out of necessity and for their own survival. Further financial
incentives, he argues, should be targeted at individuals who are working to
enhance human culture, values, and wisdom, engaged in earning new credentials
or innovating outside the mainstream automated economy.
Political and sociocultural changes will be necessary as well. Automation and
artificial intelligence, he says, have exacerbated economic inequality and given
extraordinary power to special interest groups in places like the Silicon Valley. He
also suggests that Americans will need to rethink the purpose of employment as
they are automated out of jobs. Work, Ford believes, will not primarily be about
earning a living, but rather about finding purpose and meaning and community.
Education will also need to change. As the number of high-skill jobs is depleted,
fewer and fewer highly educated students will find work after graduation.
Ford has been criticized for assuming that hardly any job will remain untouched
by computerization and robotics. It may be that some occupational categories are
particularly resistant to automation, for instance, the visual and performing arts,
counseling psychology, politics and governance, and teaching. It may also be the
case that human energies currently focused on manufacture and service will be
replaced by work pursuits related to entrepreneurship, creativity, research, and
innovation. Ford speculates that it will not be possible for all of the employed
Americans in the manufacturing and service economy to retool and move to what
is likely to be a smaller, shallower pool of jobs.
In The Lights in the Tunnel: Automation, Accelerating Technology, and the
Economy of the Future (2009), Ford introduced the metaphor of “lights in a tun-
nel” to describe consumer purchasing power in the mass market. A billion indi-
vidual consumers are represented as points of light that vary in intensity
corresponding to purchasing power. An overwhelming number of lights are of
middle intensity, corresponding to the middle classes around the world. Compa-
nies form the tunnel. Five billion other people, mostly poor, exist outside the tun-
nel. In Ford’s view, automation technologies threaten to dim the lights and collapse
the tunnel. Automation poses dangers to markets, manufacturing, capitalist eco-
nomics, and national security.
In Rise of the Robots: Technology and the Threat of a Jobless Future (2015),
Ford focused on the differences between the current wave of automation and prior
waves. He also commented on disruptive effects of information technology in
higher education, white-collar jobs, and the health-care industry. He made a case
for a new economic paradigm grounded in the basic income, incentive structures
for risk-taking, and environmental sensitivity, and he described scenarios where
inaction might lead to economic catastrophe or techno-feudalism. Ford’s book
Architects of Intelligence: The Truth about AI from the People Building It (2018)
includes interviews and conversations with two dozen leading artificial intelli-
gence researchers and entrepreneurs. The focus of the book is the future of artifi-
cial general intelligence and predictions about how and when human-level machine
intelligence will be achieved.
Ford holds an undergraduate degree in Computer Engineering from the Univer-
sity of Michigan. He earned an MBA from the UCLA Anderson School of Man-
agement. He is the founder and chief executive officer of the software development
company Solution-Soft located in Santa Clara, California.
Philip L. Frana
See also: Brynjolfsson, Erik; Workplace Automation.
Further Reading
Ford, Martin. 2009. The Lights in the Tunnel: Automation, Accelerating Technology, and
the Economy of the Future. Charleston, SC: Acculant.
Ford, Martin. 2013. “Could Artificial Intelligence Create an Unemployment Crisis?”
Communications of the ACM 56, no. 7 (July): 37–39.
Ford, Martin. 2016. Rise of the Robots: Technology and the Threat of a Jobless Future.
New York: Basic Books.
Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People Build-
ing It. Birmingham, UK: Packt Publishing.
Frame Problem, The
Formal logic is used to define facts about the world, such as that a car can
be started when the key is placed in the ignition and turned and that pressing the
accelerator causes it to move forward. However, the latter fact does not explicitly
state that the car remains on after pressing the accelerator. To correct this, the fact
must be expanded to “pressing the accelerator moves the car forward and does not
turn it off.” However, this fact must be augmented further to describe many other
scenarios (e.g., that the driver also remains in the vehicle). The frame problem
highlights an issue in logic involving the construction of facts that do not require
enumerating thousands of trivial effects.
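One standard engineering response, not described in this entry but widely used in planning systems, is the STRIPS-style convention of listing only what an action adds and deletes and letting every other fact persist by default (the “common sense law of inertia” of the Shanahan reference below). The Python sketch uses invented facts and action names.

```python
# Facts that hold in the current situation.
state = {"key_in_ignition", "engine_on", "driver_in_car"}

# Instead of restating every fact that an action leaves unchanged (the
# bookkeeping the frame problem names), each action lists only its effects.
ACTIONS = {
    "press_accelerator": {"add": {"car_moving"}, "delete": set()},
    "turn_key_off": {"add": set(), "delete": {"engine_on", "car_moving"}},
}

def apply(state, action):
    """Unmentioned facts persist by default, a simple law of inertia."""
    effects = ACTIONS[action]
    return (state - effects["delete"]) | effects["add"]

state = apply(state, "press_accelerator")
print(sorted(state))
# ['car_moving', 'driver_in_car', 'engine_on', 'key_in_ignition']
```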
After its discovery by artificial intelligence researchers, the frame problem was
picked up by philosophers. Their interpretation of the problem might be better
called the world update problem as it concerns updating frames of reference. For
example, how do you know your dog (or other pet) is where you last saw them
without seeing them again? Thus, in a philosophic sense, the frame problem con-
cerns how well a person’s understanding of their surroundings matches reality and
when their notion of their surroundings should change. Intelligent agents will need
to address this problem as they plan actions in progressively more complex worlds.
Numerous solutions have been proposed to solve the logic version of the frame
problem. However, the philosophic problem is an open issue. Both need to be
solved for artificial intelligence to exhibit intelligent behavior.
David M. Schwartz and Frank E. Ritter
See also: McCarthy, John.
Further Reading
McCarthy, John, and Patrick J. Hayes. 1969. “Some Philosophical Problems from the
Standpoint of Artificial Intelligence.” In Machine Intelligence, vol. 4, edited by
Donald Michie and Bernard Meltzer, 463–502. Edinburgh, UK: Edinburgh Uni-
versity Press.
Shanahan, Murray. 1997. Solving the Frame Problem: A Mathematical Investigation of
the Common Sense Law of Inertia. Cambridge, MA: MIT Press.
Shanahan, Murray. 2016. “The Frame Problem.” In The Stanford Encyclopedia of Phi-
losophy, edited by Edward N. Zalta. https://plato.stanford.edu/entries/frame
-problem.
G
Gender and AI
Contemporary society tends to think of artificial intelligence and robots as sexless
and genderless, but this is not true. Instead, humans encode gender and stereo-
types into artificial intelligence systems in a manner not dissimilar to the way
gender weaves its way into language and culture. There is gender bias in the data
used to train artificial intelligences. Data that is biased can introduce huge dis-
crepancies into machine predictions and decisions. In humans, these discrepan-
cies would be called discriminatory.
AIs are only as good as the humans creating data that is harvested by machine
learning systems, and only as ethical as the programmers making and monitoring
them. When people express gender bias, machines assume this is normal (if not
acceptable) human behavior. Bias can show up whether someone is using num-
bers, text, images, or voice recordings to train machines. The use of statistical
models to analyze and classify enormous collections of data to make predictions is
called machine learning. The use of neural network architectures that are thought
to mimic human brainpower is called deep learning. Classifiers label data based
on past patterns. Classifiers are extremely powerful. They can accurately predict
income levels and political leanings of neighborhoods and towns by analyzing
data on cars that are visible using Google Street View.
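A toy example, not taken from any study cited here, shows how directly a classifier can absorb a skew in its training data; the records and labels are invented.

```python
from collections import Counter

# Hypothetical historical hiring records: (gender, was_hired)
history = [("m", 1), ("m", 1), ("m", 1), ("m", 0),
           ("f", 0), ("f", 0), ("f", 1), ("f", 0)]

def train(records):
    """A crude classifier: predict the majority past outcome for each group."""
    outcomes = {}
    for gender, hired in records:
        outcomes.setdefault(gender, []).append(hired)
    return {g: Counter(labels).most_common(1)[0][0] for g, labels in outcomes.items()}

model = train(history)
print(model)   # {'m': 1, 'f': 0} -- the historical skew becomes the prediction
```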
Gender bias is found in the language people use. This bias is found in the names
of things and in the way things are ordered by importance. Descriptions of men
and women are biased—beginning with the frequency with which their respective
titles are used and whether they are referred to as men and women or as boys and girls.
Even the metaphors and adjectives used are biased. Biased AI can affect whether
people of certain genders or races are targeted for certain jobs or not, whether
medical diagnoses are accurate, whether they obtain loans—and even the way
tests are scored. AI systems tend to associate “woman” and “girl” with the arts
rather than with mathematics. Google’s AI algorithms for searching job candidates
have been found to contain similar biases. Algorithms used by Facebook and
Microsoft have consistently associated images of cooking and shopping with
women’s activity and sports and hunting with male activity. Researchers have
found places where these gender biases are deliberately engineered into AI sys-
tems. On job sites, for example, men are offered the opportunity to apply for
highly paid and sought-after jobs more frequently than women.
Digital assistants in smart phones are more often given female names like
Siri, Alexa, and Cortana. The designer of Alexa says the name emerged from
discussions with Amazon CEO Jeff Bezos, who wanted a virtual assistant to
have the personality and gender of the Enterprise starship computer on the tele-
vision show Star Trek, that is, a woman. The leader of the Cortana effort, Debo-
rah Harrison, says that their female voice emerged from research suggesting
that people responded better to female voices. But when BMW launched its in-
car GPS route planner with a female voice it immediately received negative
feedback from men who didn’t want their cars to tell them what to do. The com-
pany learned that female voices need to sound empathetic and trustworthy, but
not authoritative.
The artificial intelligence company Affectiva uses images of six million peo-
ple’s faces as training data to try to understand their inner emotional states. The
company is now working with auto manufacturers to use real-time video of driv-
ers and to detect which ones are tired or angry. The car would suggest that these
drivers pull over and rest. But the company has also detected that women seem to
“laugh more” than men, and this is complicating attempts to properly assess emo-
tional states of the average driver.
The same biases may be found in hardware. Computer engineers—still usually
male—create a disproportionate number of female robots. NASA’s Valkyrie robot,
built with future space exploration missions in mind, has breasts. Jia, a surprisingly human-looking robot
designed at the University of Science and Technology of China, has long wavy
dark hair, pink lips and cheeks, and pale skin. When first spoken to, she keeps her
eyes and head tilted down, as if in deference. Slender and busty, she wears a fitted
gold gown. In greeting, she asks, “Yes, my lord, what can I do for you?” When
offered to take a picture, Jia responds: “Don’t come too close to me when you are
taking a picture. It will make my face look fat.”
This bias toward female robots is especially pronounced in popular culture.
The film Austin Powers (1997) had fembots that shot bullets from their breast
cups—weaponizing female sexuality. Most music videos that feature robots will
feature female robots. The first song available for download on the internet was
Duran Duran’s “Electric Barbarella.” The archetypical, white-sheathed robot
found illustrated today in so many places has its origin in Björk’s video “All Is
Full of Love.” Marina and the Diamonds’ protestation that “I Am Not a Robot”
draws a quick response from Hoodie Allen that “You Are Not a Robot.” The Bro-
ken Bells’ “The Ghost Inside” finds a female android sacrificing plastic body parts
to pay tolls and regain paradise. Lenny Kravitz’s “Black Velveteen” has titanium
skin. Hatsune Miku and Kagamine Rin are holographic vocaloid performers—
and anime-inspired women. The great exception is Daft Punk, where robot cos-
tumes cloak the true identities of the male musicians.
Acknowledged masterpieces such as Metropolis (1927), The Stepford Wives
(1975), Blade Runner (1982), Ex Machina (2014), and Her (2013), and the television
shows Battlestar Galactica and Westworld have sexy robots as the protago-
nists’ primary love interest. Meanwhile, lethal autonomous weapons
systems—“killer robots”—are hypermasculine. The Defense Advanced Research
Projects Agency (DARPA) has created hardened military robots with names such
as Atlas, Helios, and Titan. Driverless cars are given names such as Achilles,
Black Knight, Overlord, and Thor PRO. The most famous autonomous vehicle of
all time, the HAL 9000 computer embedded in the spaceship Discovery in 2001:
A Space Odyssey (1968), is male and positively murderous.
The gender divide in artificial intelligence is pronounced. In 2017, Fei-Fei Li,
the director of the Stanford Artificial Intelligence Lab, admitted that she had a
workforce composed mainly of “guys with hoodies” (Hempel 2017). Only about
12 percent of the researchers presenting at leading AI conferences are women
(Simonite 2018b). Women receive 19 percent of bachelor’s degrees and 22 percent
of doctoral degrees in computer and information sciences (NCIS 2018). The pro-
portion of bachelor’s degrees in computer science earned by women has dropped
from a high of 37 percent in 1984 (Simonite 2018a). This is even though the first
“computers”—as the film Hidden Figures (2016) highlighted—were women.
Among philosophers, there is still debate about whether un-situated, gender-
neutral knowledge can truly exist in human society. Even after Google and Apple
released unsexed digital assistants, users projected gender preferences on them.
Expert knowledge created over centuries by white men has since been released into
digital worlds. Will it be possible for machines to create and use rules based on
unbiased knowledge for centuries more? In other words, does scientific knowl-
edge have a gender? And is it male? Alison Adam is a Science and Technology
Studies scholar who is interested not in the gender of the individuals involved, but
in the gender of the ideas they produced.
The British company Sage recently hired a “conversation manager” who was
tasked with creating a digital assistant—ultimately named “Pegg”—that presented
a gender-neutral personality. The company has also codified “five core principles”
into an “ethics of code” document to guide its programmers. Sage’s CEO Kriti
Sharma says that by 2020 “we’ll spend more time talking to machines than our
own families,” so it’s important to get technology right. Microsoft recently created
an internal ethics panel called Aether, for AI and Ethics in Engineering and
Research. Gender Swap is an experiment that uses a VR system as a platform for
embodiment experience—a neuroscience technique in which users can feel them-
selves as if they were in a different body. To create the illusion, a pair of human
partners wear immersive Oculus Rift head-mounted displays fed by first-person
cameras and synchronize their movements. If one partner’s movements do not
correspond to the other’s, the embodiment experience does not work, so both users
must agree on every movement they make together.
New sources of algorithmic gender bias are found regularly. In 2018, MIT com-
puter science graduate student Joy Buolamwini revealed gender and racial bias in
the way AI recognized subjects’ faces. Working with other researchers, she dis-
covered that widely used face datasets, graded with the dermatologist-approved
Fitzpatrick Skin Type classification system, were overwhelmingly composed of
lighter-skinned subjects (up to 86 percent). The researchers assembled a new dataset
balanced by gender and skin type and used it to evaluate three off-the-shelf
gender classification systems. They
found that in all three commercial systems darker-skinned females are the most
misclassified. Buolamwini is the founder of the Algorithmic Justice League, an
organization challenging bias in decision-making software.
Philip L. Frana
General Problem Solver
The General Problem Solver program, initially written by Allen Newell and Herbert Simon in 1957, con-
tinued in development for almost a decade. The last version was written by New-
ell’s graduate student George W. Ernst in conjunction with research for his 1966
dissertation.
General Problem Solver grew out of Newell and Simon’s work on another
problem-solving program, the Logic Theorist. After developing Logic Theorist,
the pair compared its problem-solving process with that used by humans solving
similar problems. They found that Logic Theorist’s process differed considerably
from that used by humans. Hoping their work in artificial intelligence would con-
tribute to an understanding of human cognitive processes, Newell and Simon used
the information about human problem solving gleaned from these studies to
develop General Problem Solver. They found that human problem-solvers could
look at the desired end and, reasoning both backward and forward, determine
steps they could take that would bring them closer to that end, thus developing a
solution. Newell and Simon incorporated this process into the General Problem
Solver, which they believed was not only representative of artificial intelligence
but also a theory of human cognition. General Problem Solver used two heuristic
techniques to solve problems: means-ends analysis and planning.
An everyday example of means-ends analysis in action might be stated this
way: If a person wanted a particular book, their desired state is to possess the
book. In their current state, the book is not in their possession, but rather it is held
by the library. The person has options to eliminate the difference between their
current state and their desired state. In order to do so, they can check the book out
from the library, and they have options to get to the library, such as driving. How-
ever, if the book has been checked out by another patron, there are options avail-
able to obtain the book elsewhere. The person may go to a bookstore or go online
to purchase it. The person must then examine the options available to them to do
so. And so on. The person knows of several relevant actions they can take, and if
they choose appropriate actions and apply them in an appropriate order, they will
obtain the book. The person choosing and applying appropriate actions is means-
ends analysis in action.
In applying means-ends analysis to General Problem Solver, the programmer
sets up the problem as an initial state and a state to be reached. General Problem
Solver calculates the difference between these two states (called objects). General
Problem Solver must also be programmed with operators, which reduce the differ-
ence between the two states. To solve the problem, it chooses and applies an oper-
ator and determines whether the operation has indeed brought it closer to its goal
or desired state. If so, it proceeds by choosing another operator. If not, it can back-
track and try another operator. Operators are applied until the difference between
the initial state and the desired state has been reduced to zero.
General Problem Solver also possessed the ability to plan. By eliminating the
details of the operators and of the difference between the initial state and the
desired state, General Problem Solver could sketch a solution to the problem. Once
a general solution was outlined, the details could be reinserted into the problem
and the subproblems constituted by these details solved within the solution guide-
lines established during the outlining stage.
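A compressed sketch of means-ends analysis, written in modern Python rather than in the original program’s list-processing language, may help; the operators and facts are invented for the book-borrowing example and do not reproduce Newell, Simon, and Ernst’s code.

```python
# Operators list preconditions, additions, and deletions over sets of facts.
OPERATORS = {
    "drive_to_library": {"pre": set(), "add": {"at_library"}, "del": set()},
    "check_out_book": {"pre": {"at_library"}, "add": {"have_book"}, "del": set()},
}

def achieve(state, goal, depth=5):
    """Return (plan, state) achieving every fact in `goal`, or None on failure."""
    plan = []
    for fact in goal:
        if fact in state:
            continue                                  # no difference left to reduce
        if depth == 0:
            return None
        for name, op in OPERATORS.items():
            if fact in op["add"]:                     # operator reduces the difference
                sub = achieve(state, op["pre"], depth - 1)  # achieve preconditions first
                if sub is None:
                    continue
                sub_plan, state = sub
                state = (state | op["add"]) - op["del"]     # then apply the operator
                plan += sub_plan + [name]
                break
        else:
            return None                               # no operator helps with this fact
    return plan, state

print(achieve(set(), {"have_book"}))
# (['drive_to_library', 'check_out_book'], {...})
```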
Generative Design
Generative design is a broad term that refers to any iterative rule-based process
used to create numerous options that satisfy a specified set of goals and con-
straints. The output from such a process could range from complex architectural
models to pieces of visual art and is therefore applicable to a variety of fields:
architecture, art, engineering, product design, to name a few.
Generative design differs from a more traditional design strategy, where com-
paratively few alternatives are evaluated before one is developed into a final prod-
uct. The rationale behind using a generative design framework is that the final
goal is not necessarily known at the outset of a project. Hence, the focus should
not be on producing a single correct answer to a problem, but rather on creating
numerous viable options that all fit within the specified criteria.
Leveraging the processing power of a computer allows several permutations of
a solution to be rapidly generated and evaluated, beyond what a human could
accomplish alone. The designer/user tunes input parameters to refine the solution
space as objectives and overall vision are clarified over time. This avoids the prob-
lem of becoming constrained to a single solution early in the design process, and
it instead allows for creative exploration of a wide range of options. The hope is
that this will improve the chances of arriving at an outcome that best satisfies the
established design criteria.
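The basic loop can be suggested with a deliberately simple Python sketch; the “design” here is just a rectangular room, and the goals and numbers are invented rather than taken from any tool described in this entry.

```python
import random

def generate():
    """Propose a candidate design: the width and depth of a room, in meters."""
    return {"width": random.uniform(3, 15), "depth": random.uniform(3, 15)}

def score(design, target_area=60.0):
    """Higher is better: approach the target floor area while staying compact."""
    area = design["width"] * design["depth"]
    aspect = max(design.values()) / min(design.values())
    return -abs(area - target_area) - 5 * (aspect - 1)

# Generate many options, rank them, and present the best few to the designer,
# who refines the goals and reruns the loop rather than settling on one answer.
candidates = [generate() for _ in range(10_000)]
for d in sorted(candidates, key=score, reverse=True)[:3]:
    print(round(d["width"], 2), round(d["depth"], 2), round(score(d), 2))
```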
It is important to note that generative design does not necessarily need to
involve a digital process; an iterative procedure could be developed in an analogue
framework. But since the processing power (i.e., the number and speed of calcula-
tions) of a computer is far superior to that of a human, generative design methods
are often considered synonymous with digital techniques. Digital applications, in
particular artificial intelligence-based techniques, are being applied to the genera-
tive process. Two such artificial intelligence applications are generative art and
computational design in architecture.
Generative art, also known as computer art, refers to artwork that has been cre-
ated in part with the use of some autonomous digital system. Decisions that would
typically have been made by a human artist are allocated either fully or in part to
an algorithmic process. The artist instead usually retains some control of the pro-
cess by defining the inputs and rule sets to be followed.
Three people are generally credited as founders of visual computer art: Georg
Nees, Frieder Nake, and A. Michael Noll. They are sometimes referred to together as
the “3N” group of computer pioneers. Georg Nees is often cited for the establish-
ment of the first generative art exhibit, called Computer Graphic, which was held
in Stuttgart in 1965. Exhibits by Nake (in collaboration with Nees) and Noll fol-
lowed in the same year in Stuttgart and New York City, respectively (Boden and
Edmonds 2009).
These early examples of generative art in the visual medium are pioneering in
their use of computers to create works of art. They were also limited by the com-
putational methods available at the time. In the modern context, the existence of
AI-based technology coupled with exponential increases in computational power
has led to new types of generative art. An interesting class of these new works
falls under the category of computational creativity, which is defined as “a field of
artificial intelligence focused on developing agents that generate creative products
autonomously” (Davis et al. 2016). When applied to generative art, the goal of
computational creativity is to harness the creative potential of a computer through
techniques involving machine learning. In this way, the process of creation moves
away from prescribing step-by-step instructions to a computer (i.e., what was used
in the early days) to more abstract processes where the outcomes are not easily
predicted.
A recent example of computational creativity is the DeepDream computer
vision program created by Google engineer Alexander Mordvintsev in 2015. The
program uses a convolutional neural network to find and enhance patterns in images,
producing deliberately over-processed, dream-like results.
In parametric design tools, the human designer must still directly
specify and control a set of parameters. Generative design methods assign further
agency to the computer or algorithm performing the design calculations. Neural
networks can be trained on examples of designs that satisfy the overall goals of a
project and then process new input data to generate numerous design suggestions.
The layout of the new Autodesk office in the MaRS Innovation District in
Toronto is a recent example of generative design being applied in an architectural
context (Autodesk 2016). In this project, the existing employees were surveyed,
and information was gathered on six measurable goals: work style preference,
adjacency preference, level of distraction, interconnectivity, daylight, and views
to the outside. The generative design algorithm considered all of these require-
ments and produced multiple office configurations that maximize the established
criteria. These results were evaluated, and the ones that scored the highest were
used as the basis for the new office layout. In this way, a large amount of input in
the form of previous projects and user-specified data was used to generate a final
optimized design. The relationships in the data would have been too complex for
a human to synthesize and could only be adequately explored through a generative
design approach.
Generative design approaches have proven successful in a wide range of appli-
cations where a designer is interested in exploring a large solution space. It avoids
the problem of focusing on a single solution too early in the design process, and
instead, it allows for creative explorations of a wide range of options. Generative
design will find new applications as AI-based computational methods continue to
improve.
Edvard P. G. Bruun and Sigrid Adriaenssens
See also: Computational Creativity.
Further Reading
Autodesk. 2016. “Autodesk @ MaRS.” Autodesk Research. https://www.autodeskresearch
.com/projects/autodesk-mars.
Barqué-Duran, Albert, Mario Klingemann, and Marc Marzenit. 2018. “My Artificial
Muse.” https://albertbarque.com/myartificialmuse.
Boden, Margaret A., and Ernest A. Edmonds. 2009. “What Is Generative Art?” Digital
Creativity 20, no. 1–2: 21–46.
Davis, Nicholas, Chih-Pin Hsiao, Kunwar Yashraj Singh, Lisa Li, and Brian Magerko.
2016. “Empirically Studying Participatory Sense-Making in Abstract Drawing
with a Co-Creative Cognitive Agent.” In Proceedings of the 21st International
Conference on Intelligent User Interfaces—IUI ’16, 196–207. Sonoma, CA: ACM
Press.
Menges, Achim, and Sean Ahlquist, eds. 2011. Computational Design Thinking: Compu-
tation Design Thinking. Chichester, UK: J. Wiley & Sons.
Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. 2015. “Inceptionism: Going
Deeper into Neural Networks.” Google Research Blog. https://web.archive.org
/web/20150708233542/http://googleresearch.blogspot.com/2015/06/inceptionism
-going-deeper-into-neural.html.
Nagy, Danil, and Lorenzo Villaggi. 2017. “Generative Design for Architectural Space
Planning.” https://www.autodesk.com/autodesk-university/article/Generative-Design
-Architectural-Space-Planning-2019.
Picon, Antoine. 2010. Digital Culture in Architecture: An Introduction for the Design
Professions. Basel, Switzerland: Birkhäuser Architecture.
Rutten, David. 2007. “Grasshopper: Algorithmic Modeling for Rhino.” https://www
.grasshopper3d.com/.
Giant Brains
From the late 1940s to the mid-1960s, the Harvard-trained computer scientist
Edmund Callis Berkeley shaped the American public’s perceptions of what comput-
ers were and what role they might play in society. Computers in his view were “giant
mechanical brains” or enormous, automatic, information-processing, thinking
machines to be used for the good of society. Berkeley promoted early, peaceful, and
commercial computer developments through the Association for Computing
Machinery (cofounded in 1947), his company Berkeley Associates (established in
1948), his book Giant Brains (1949), and the magazine Computers and Automation
(established in 1951).
In his popular book, Giant Brains, or Machines that Think, Berkeley defined
computers as giant mechanical brains for their powerful, automatic, cognitive,
information-processing features. Berkeley thought of computers as machines that
operated automatically, on their own without human intervention. One only had to
push the start button, and “the machine starts whirring and it prints out the
answers as it obtains them” (Berkeley 1949, 5). Computers also had cognitive
functions precisely because they processed information. Berkeley perceived
human thought as essentially “a process of storing information and then referring
to it, by a process of learning and remembering” (Berkeley 1949, 2). A computer
could think in the same manner; it “transfers information automatically from one
part of the machine to another, [with] a flexible control over the sequence of its
operations” (Berkeley 1949, 5). He added the adjective giant to emphasize both the
processing power and the physical size of the first computers. In 1946, the first
electronic general-purpose digital computer ENIAC occupied the entire basement
of the University of Pennsylvania’s Moore School of Electrical Engineering.
Beyond shaping the role of computers in the popular imagination, Berkeley was
actively involved in the application of symbolic logic to early computer designs.
He had an undergraduate degree in mathematics and logic from Harvard Univer-
sity, and by 1934, he was working in the actuarial department of Prudential Insur-
ance. In 1938, Bell Labs electrical engineer Claude Shannon published his
pioneering work on the application of Boolean logic to automatic circuitry design.
Berkeley promoted Shannon’s findings at Prudential, urging the insurance com-
pany to apply logic to its punched card tabulations. In 1941, Berkeley, Shannon,
and others formed the New York Symbolic Logic Group to promote logic applica-
tions in electronic relay computing. When the United States entered World War II
(1939–1945) in 1941, Berkeley enlisted in the US Navy and was eventually
assigned to help design the Mark II electromechanical computer in Howard Aik-
en’s Lab at Harvard University.
Based on his experiences with Mark II, Berkeley returned to Prudential, con-
vinced of the commercial future of computing. In 1946, Berkeley used Bell Labs’
general-purpose relay calculator to demonstrate that computers could accurately
calculate a complex insurance problem, in this case the cost of a change in policy
(Yates 2005, 123–24). In 1947, Berkeley met John William Mauchly at the Sympo-
sium on Large Scale Digital Calculating Machinery at Harvard. Their meeting
culminated in a signed contract between Prudential and John Adam Presper Eck-
ert and Mauchly’s Electronic Control Company (ECC) for the development of a
general-purpose computer that would benefit insurance calculations. That
general-purpose machine ultimately became the UNIVAC (in 1951). Prudential,
however, decided not to use UNIVAC but to return to IBM’s tabulating technol-
ogy. UNIVAC’s first commercial contract was not in insurance, but in General
Electric’s payroll calculations (Yates 2005, 124–27).
Goertzel, Ben (1966–)
Ben Goertzel is chief executive officer and chief scientist of blockchain AI com-
pany SingularityNET, chairman of Novamente LLC, research professor in the
Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, chief
scientist of the Shenzhen, China bioinformatics firm Mozi Health and of Hanson
Robotics, and chair of the OpenCog Foundation, Humanity+, and Artificial Gen-
eral Intelligence Society conference series. Goertzel has long been interested in
creating a benevolent artificial general intelligence and applying it to bioinformat-
ics, finance, gaming, and robotics. He has argued that AI is not merely a fashionable
phenomenon these days; it now outperforms human experts in several areas.
divides progress in AI into three phases that represent stepping-stones to a global
brain (Goertzel 2002, 2):
• computer and communication technologies as enhancers of human interactions
• the intelligent Internet
• the full-on Singularity
In 2019, Goertzel gave a talk at TEDxBerkeley under the title “Decentralized AI:
The Power and the Necessity.” In the talk, he analyzes artificial intelligence in
its current incarnation and describes its future.
decentralized control in guiding AI to the next levels, to the power of decentralized
Goertzel worked for Hanson Robotics in Hong Kong for four years. He worked
with the well-known robots Sophia, Einstein, and Han. These robots, he said, “are
great platforms for experimenting with AI algorithms, including cognitive archi-
tectures like OpenCog that aim at human-level AI” (Goertzel 2018). Goertzel
believes that core human values can be preserved for posterity beyond the point of
the Technological Singularity in Sophia-like robot creations. Goertzel has said
that decentralized networks such as SingularityNET and OpenCog offer “AIs with
human-like values,” which will minimize AI risks to humankind (Goertzel 2018).
As human values are complex in nature, Goertzel believes it is inefficient to
encode them as a rule list. Goertzel suggests two approaches: brain-computer
interfacing (BCI) and emotional interfacing. Under BCI, humans will become
“cyborgs, physically linking their brains with computational-intelligence mod-
ules, then the machine components of the cyborgs should be able to read the
moral-value-evaluation structures of the human mind directly from the biological
components of the cyborgs” (Goertzel 2018). Goertzel gives Neuralink by Elon
Musk as an example. Goertzel doubts this approach will work because it involves
intrusive experiments with human brains and lots of unknowns.
The second approach involves “emotional and spiritual connection between
humans and AIs, rather than Ethernet cables or Wifi signals, to connect human
and AI minds” (Goertzel 2018). He suggests under this approach that AIs should
engage in emotional and social interaction with a human by way of facial emotion
recognition and mirroring, eye contact, and voice-based emotion recognition to
practice human values. To this end, Goertzel launched the “Loving AI” research
project with SingularityNET, Hanson AI, and Lia Inc. Loving AI looks to help
artificial intelligences converse and develop personal relationships with human
beings. The Loving AI site currently hosts a humorous video of actor Will Smith
on a date with Sophia the Robot. The video of the date reveals that Sophia is
already capable of sixty facial expressions and can interpret human language and
emotions. According to Goertzel, humanoid robots like Sophia—when connected
to a platform like SingularityNET—gain “ethical insights and advances . . . via
language” (Goertzel 2018). From there, robots and AIs can share what they’ve
learned via a common online “mindcloud.”
Goertzel is also chair of the Conference Series on Artificial General Intelli-
gence, held annually since 2008 and organized by the Artificial General
Intelligence Society. The society publishes a peer-reviewed open-access academic
serial, the Journal of Artificial General Intelligence. The proceedings of the con-
ference series are edited by Goertzel.
Victoriya Larchenko
See also: General and Narrow AI; Superintelligence; Technological Singularity.
Further Reading
Goertzel, Ben. 2002. Creating Internet Intelligence: Wild Computing, Distributed Digital
Consciousness, and the Emerging Global Brain. New York: Springer.
Goertzel, Ben. 2012. “Radically Expanding the Human Health Span.” TEDxHKUST.
https://www.youtube.com/watch?v=IMUbRPvcB54.
Goertzel, Ben. 2017. “Sophia and SingularityNET: Q&A.” H+ Magazine, November 5,
2017. https://hplusmagazine.com/2017/11/05/sophia-singularitynet-qa/.
Nash’s slide rule is essentially a matrix where columns represent diseases and
rows represent properties. A mark (such as an “X”) is entered into the matrix
wherever properties are expected in each disease. Rows detailing symptoms that
the patient does not exhibit are eliminated. Columns showing a mark in every cell
reveal the most likely or “best match” diagnosis. Viewed this way, as a matrix, the
Nash device reconstructs information in much the same way peek-a-boo card
retrieval systems used to manage stores of knowledge in the 1940s. The Group
Symbol Associator may be compared to Leo J. Brannick’s analog computer for
medical diagnosis, Martin Lipkin and James Hardy’s McBee punch card system
for diagnosing hematological diseases, Keeve Brodman’s Cornell Medical Index-
Health Questionnaire, Vladimir K. Zworykin’s symptom spectra analog com-
puter, and other so-called peek-a-boo card systems and devices. The problem
worked on by these devices is finding or mapping diseases that are appropriate to
the combinations of standardized properties or attributes (signs, symptoms, labo-
ratory results, etc.) exhibited by the patient.
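The elimination logic of the matrix can be illustrated with a short sketch. The code below is a minimal reconstruction of the procedure described above, not Nash's actual diagnostic tables; the diseases and properties are invented for illustration.

```python
# A minimal sketch of the Group Symbol Associator's matrix logic. The
# disease/property data is invented for illustration only.

# Columns are diseases, rows are properties; a "mark" means the property
# is expected in that disease.
DISEASES = ["measles", "influenza", "malaria"]
PROPERTIES = {
    "fever":  {"measles", "influenza", "malaria"},
    "rash":   {"measles"},
    "cough":  {"measles", "influenza"},
    "chills": {"malaria"},
}

def best_matches(observed_properties):
    """Eliminate rows for properties the patient does not exhibit, then
    return the diseases whose columns are marked in every remaining row."""
    candidates = set(DISEASES)
    for prop in observed_properties:
        candidates &= PROPERTIES.get(prop, set())
    return sorted(candidates)

print(best_matches(["fever", "rash", "cough"]))  # ['measles']
```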
Nash claimed to have reduced the physician’s memory of thousands of pages of
traditional diagnostic tables to a small machine slightly less than a yard in length.
Nash argued that his Group Symbol Associator followed what he called the law of
the mechanical conservation of experience. He wrote, “If our books and our brains
are approaching relative inadequacy, will man crack under the very weight of the
riches of experience he has to carry and pass on to the next generation? I think not.
We shed the physical load onto power machines and laborsaving devices. We must
now inaugurate the era of the thought-saving devices” (Nash 1960b, 240).
Nash’s device did more than augment the physician’s memory. The machine, he
claimed, actually participated in the logical analysis of the diagnostic process.
“The Group Symbol Associator makes visible not only the end results of differen-
tial diagnostic classificatory thinking, it displays the skeleton of the whole process
as a simultaneous panorama of spectral patterns that coincide with varying
degrees of completeness,” Nash noted. “It makes a map or pattern of the problem
composed for each diagnostic occasion, and acts as a physical jig to guide the
thought process” (Paycha 1959, 661). A patent application for the device was filed with the Patent Office in London on October 14, 1953. Nash gave the first public dem-
onstration of the Group Symbol Associator at the 1958 Mechanisation of Thought
Processes Conference at National Physical Laboratory (NPL) in the Teddington
area of London. The 1958 NPL conference is noteworthy as only the second con-
ference to be convened on the subject of artificial intelligence.
The Mark III Model of the Group Symbol Associator became available com-
mercially in the late 1950s. Nash hoped that physicians would carry Mark III with
them when they were away from their offices and books. Nash explained, “The
GSA is small, inexpensive to make, transport, and distribute. It is easy to operate,
and it requires no servicing. The individual, even in outposts, ships, etc., can have
one” (Nash 1960b, 241). Nash also published examples of paper-based “logoscopic
photograms” done with xerography (dry photocopying) that achieved the same
results as his hardware device. The Group Symbol Associator was manufactured
in quantity by Medical Data Systems of Nottingham, England. Most of the Mark
V devices were distributed in Japan by Yamanouchi Pharmaceutical Company.
The program has faced criticism. In a 2014 open letter to the European Commission, scientists complained of problems with the transparency and governance of the program and of the narrow scope of its research in relation to the original plan and objectives. An assessment and review of the funding processes, requirements, and stated objectives of the Human Brain Project has since led to a new governance structure for the program.
Konstantinos Sakalis
See also: Blue Brain Project; Cognitive Computing; SyNAPSE.
Further Reading
Amunts, Katrin, Christoph Ebell, Jeff Muller, Martin Telefont, Alois Knoll, and Thomas
Lippert. 2016. “The Human Brain Project: Creating a European Research Infra-
structure to Decode the Human Brain.” Neuron 92, no. 3 (November): 574–81.
Fauteux, Christian. 2019. “The Progress and Future of the Human Brain Project.” Scitech
Europa, February 15, 2019. https://www.scitecheuropa.eu/human-brain-project
/92951/.
Markram, Henry. 2012. “The Human Brain Project.” Scientific American 306, no. 6
(June): 50–55.
Markram, Henry, Karlheinz Meier, Thomas Lippert, Sten Grillner, Richard Frackowiak,
Stanislas Dehaene, Alois Knoll, Haim Sompolinsky, Kris Verstreken, Javier
DeFelipe, Seth Grant, Jean-Pierre Changeux, and Alois Saria. 2011. “Introduc-
ing the Human Brain Project.” Procedia Computer Science 7: 39–42.
I
Intelligent Sensing Agriculture
Technological innovation has historically driven food production, from the Neo-
lithic tools that helped humans transition from hunting and gathering to farming, to the
British Agricultural Revolution that harnessed the power of the Industrial Revolu-
tion to increase yields (Noll 2015). Today agriculture is highly technical, as scien-
tific discoveries continue to be integrated into production systems. Intelligent
Sensing Agriculture is one of the most recent integrations in a long history of
applying cutting-edge technology to the cultivation, processing, and distribution of food products. These devices are primarily used to meet the twin goals of increasing crop yields and reducing the environmental impacts of agricultural systems.
Intelligent sensors are devices that can perform a number of complex functions
as part of their defined tasks. These specific types of sensors should not be con-
fused with “smart” sensors or instrument packages that can record input from the
physical environment (Cleaveland 2006). Intelligent sensors are distinct, in that
they not only detect various conditions but also respond to these conditions in
nuanced ways based on this assessment. “Generally, sensors are devices that mea-
sure some physical quantity and convert the result into a signal which can be read
by an observer or instrument, but intelligent sensors are also able to process mea-
sured values” (Bialas 2010, 822). What makes them “intelligent” is their unique ability to manage their own functions in response to external stimuli. They analyze multiple variables (such as light, temperature, and humidity) to extract essential features and then generate intermediate responses to those features (Yamasaki 1996). This functionality depends on combining advanced learning, information processing, and adaptation in one integrated package.
These instrument packages are used in a wide range of contexts, from aerospace
to health care, and these application domains are expanding. While all of these
applications are innovative, due to the technology itself, the use of intelligent sen-
sors in agriculture could provide a wide range of societal benefits.
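The distinction between merely recording measurements and managing a response can be made concrete with a small sketch. The thresholds, feature names, and irrigation actions below are hypothetical; a real agricultural sensor package would be calibrated to the crop and climate.

```python
# A minimal sketch of an "intelligent" sensor: it measures physical
# quantities, extracts features, and manages its own response. All
# thresholds and actions are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Reading:
    light: float        # lux
    temperature: float  # degrees Celsius
    humidity: float     # percent relative humidity

class IntelligentSoilSensor:
    def __init__(self, humidity_threshold=35.0, heat_threshold=32.0):
        self.humidity_threshold = humidity_threshold
        self.heat_threshold = heat_threshold

    def extract_features(self, reading: Reading) -> dict:
        # Reduce raw measurements to the essential features the sensor acts on.
        return {
            "dry": reading.humidity < self.humidity_threshold,
            "heat_stress": reading.temperature > self.heat_threshold
                           and reading.light > 50_000,
        }

    def respond(self, reading: Reading) -> str:
        # Generate an intermediate response rather than just reporting values.
        features = self.extract_features(reading)
        if features["dry"] and features["heat_stress"]:
            return "start irrigation now"
        if features["dry"]:
            return "schedule irrigation overnight"
        return "no action"

sensor = IntelligentSoilSensor()
print(sensor.respond(Reading(light=60_000, temperature=34.0, humidity=28.0)))
```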
There is currently an urgent need to increase the productivity of agricultural
lands already in production. According to the United Nations (2017), the world’s
population neared 7.6 billion people in 2017. However, most of the world’s arable
land is already being utilized for food. In the United States, almost half of the
country is currently being used to produce agricultural products, and in the United
Kingdom, the figure is 40 percent (Thompson 2010). Due to the lack of undevel-
oped land, agricultural output needs to increase dramatically over the next ten
out of production, while farmers who embrace the technology succeed. The adop-
tion of intelligent sensors could contribute to the technology treadmill. Regard-
less, the sensors have a wide range of social, economic, and ethical impacts that
will need to be considered, as the technology develops.
Samantha Noll
See also: Workplace Automation.
Further Reading
Bialas, Andrzej. 2010. “Intelligent Sensors Security.” Sensors 10, no. 1: 822–59.
Cleaveland, Peter. 2006. “What Is a Smart Sensor?” Control Engineering, January 1,
2006. https://www.controleng.com/articles/what-is-a-smart-sensor/.
Noll, Samantha. 2015. “Agricultural Science.” In A Companion to the History of Ameri-
can Science, edited by Mark Largent and Georgina Montgomery. New York:
Wiley-Blackwell.
Pajares, Gonzalo. 2011. “Advances in Sensors Applied to Agriculture and Forestry.” Sen-
sors 11, no. 9: 8930–32.
Thompson, Paul B. 2009. “Philosophy of Agricultural Technology.” In Philosophy of
Technology and Engineering Sciences, edited by Anthonie Meijers, 1257–73.
Handbook of the Philosophy of Science. Amsterdam: North-Holland.
Thompson, Paul B. 2010. The Agrarian Vision: Sustainability and Environmental Ethics.
Lexington: University Press of Kentucky.
United Nations, Department of Economic and Social Affairs. 2017. World Population
Prospects: The 2017 Revision. New York: United Nations.
Yamasaki, Hiro. 1996. “What Are the Intelligent Sensors.” In Handbook of Sensors and
Actuators, vol. 3, edited by Hiro Yamasaki, 1–17. Amsterdam: Elsevier Science B.V.
Intelligent Transportation
Intelligent Transportation involves the application of high technology, artificial
intelligence, and control systems to manage roadways, vehicles, and traffic. The
concept emerged from traditional American highway engineering disciplines,
including motorist routing, intersection control, traffic distribution, and system-
wide command and control. Intelligent transportation has important privacy and
security implications as it aims to embed surveillance devices in pavements,
signaling devices, and individual vehicles in order to reduce congestion and
improve safety.
Highway engineers of the 1950s and 1960s often considered themselves “com-
munications engineers,” controlling vehicle and roadway interactions and traffic
flow with information in the form of signage, signals, and statistics. Computing
machinery in these decades was used mainly to simulate intersections and model
roadway capacity. One of the earliest uses of computing technology in this regard
is S. Y. Wong’s Traffic Simulator, which applied the resources of the Institute for
Advanced Study (IAS) computer in Princeton, New Jersey, to study traffic engineering. Wong’s mid-1950s simulator applied computational techniques first
developed to study electrical networks to illustrate road systems, traffic controls,
driver behavior, and weather conditions.
Further Reading
Alpert, Sheri. 1995. “Privacy and Intelligent Highway: Finding the Right of Way.” Santa
Clara Computer and High Technology Law Journal 11: 97–118.
Blum, A. M. 1970. “A General-Purpose Digital Traffic Simulator.” Simulation
14, no. 1: 9–25.
tutoring systems often feature Open Learner Models (OLMs), which are visual-
izations of the system’s internal student model. OLMs may help learners produc-
tively reflect on their state of learning.
Key intelligent tutoring systems paradigms include model-tracing tutors,
constraint-based tutors, example-tracing tutors, and ASSISTments. These para-
digms differ in their tutoring behaviors and their underlying representations of
domain knowledge, student knowledge, and pedagogical knowledge, and in how
they are authored. Intelligent tutoring systems employ a variety of AI techniques
for domain reasoning (e.g., generating next steps in a problem, given a student’s
partial solution), evaluating student solutions and partial solutions, and student
modeling (i.e., dynamically estimating and maintaining a range of learner vari-
ables). A variety of data mining techniques (including Bayesian models, hidden
Markov models, and logistic regression models) are increasingly being used to
improve systems’ student modeling capabilities. To a lesser degree, machine
learning methods are used to develop instructional policies, for example, using
reinforcement learning.
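One common student-modeling technique in this family is Bayesian knowledge tracing, which updates an estimate of skill mastery after each observed answer. The sketch below uses illustrative parameter values rather than values from any particular tutoring system.

```python
# A minimal sketch of Bayesian knowledge tracing for student modeling.
# The parameters (initial knowledge, learning, guess, slip) are
# illustrative assumptions.

def knowledge_tracing(observations, p_init=0.3, p_learn=0.2,
                      p_guess=0.2, p_slip=0.1):
    """observations: list of booleans (True = correct answer).
    Returns the running estimate of P(skill mastered) after each answer."""
    p_known = p_init
    estimates = []
    for correct in observations:
        if correct:
            evidence = p_known * (1 - p_slip)
            total = evidence + (1 - p_known) * p_guess
        else:
            evidence = p_known * p_slip
            total = evidence + (1 - p_known) * (1 - p_guess)
        p_known = evidence / total                   # Bayesian update on the answer
        p_known = p_known + (1 - p_known) * p_learn  # chance of learning from practice
        estimates.append(round(p_known, 3))
    return estimates

print(knowledge_tracing([False, True, True, True]))
```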
Researchers are investigating ideas for the smart classroom of the future that
significantly extend what current intelligent tutoring systems can do. In their
visions, AI systems often work symbiotically with teachers and students to orches-
trate effective learning experiences for all students. Rather than designing intelligent tutoring systems to handle all aspects of adaptation, recent research suggests promising approaches that adaptively share the regulation of learning processes across students, teachers, and AI systems, for example by providing teachers with real-time analytics from an intelligent tutoring system that draw their attention to learners who may need additional support.
Vincent Aleven and Kenneth Holstein
See also: Natural Language Processing and Speech Understanding; Workplace
Automation.
Further Reading
Aleven, Vincent, Bruce M. McLaren, Jonathan Sewall, Martin van Velsen, Octav
Popescu, Sandra Demi, Michael Ringenberg, and Kenneth R. Koedinger. 2016.
“Example-Tracing Tutors: Intelligent Tutor Development for Non-Programmers.”
International Journal of Artificial Intelligence in Education 26, no. 1 (March):
224–69.
Aleven, Vincent, Elizabeth A. McLaughlin, R. Amos Glenn, and Kenneth R. Koedinger.
2017. “Instruction Based on Adaptive Learning Technologies.” In Handbook of
Research on Learning and Instruction, Second edition, edited by Richard E.
Mayer and Patricia Alexander, 522–60. New York: Routledge.
du Boulay, Benedict. 2016. “Recent Meta-Reviews and Meta-Analyses of AIED Sys-
tems.” International Journal of Artificial Intelligence in Education 26, no. 1:
536–37.
du Boulay, Benedict. 2019. “Escape from the Skinner Box: The Case for Contemporary
Intelligent Learning Environments.” British Journal of Educational Technology,
50, no. 6: 2902–19.
Heffernan, Neil T., and Cristina Lindquist Heffernan. 2014. “The ASSISTments Ecosys-
tem: Building a Platform that Brings Scientists and Teachers Together for
a fixed set, they provide a way for combining and applying cognitive science the-
ory. When knowledge is added to a cognitive architecture, a cognitive model is
created. The perception-motor module controls vision and motor output to interact
with the world. Interactive cognitive agents can see the screen, press keys, and
move and click the mouse.
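A rough sense of what such a perception-motor interface looks like is given below. The screen representation and the motor functions are hypothetical placeholders, not the API of ACT-R or any other cognitive architecture.

```python
# A minimal sketch of a perception-motor module through which a cognitive
# model "sees" the screen and acts with simulated keys and mouse. The
# screen format and method names are assumptions for illustration.

class PerceptionMotorModule:
    def __init__(self, screen):
        self.screen = screen  # e.g., a dict of on-screen widgets

    def see(self):
        # Perception: return the visual objects the model can attend to.
        return [name for name, obj in self.screen.items() if obj["visible"]]

    def move_and_click(self, target_name):
        # Motor output: move the simulated mouse to a target and click it.
        target = self.screen[target_name]
        print(f"moving mouse to ({target['x']}, {target['y']}) and clicking")

    def press_key(self, key):
        print(f"pressing key {key!r}")

screen = {"ok_button": {"x": 120, "y": 240, "visible": True}}
agent_io = PerceptionMotorModule(screen)
print(agent_io.see())
agent_io.move_and_click("ok_button")
agent_io.press_key("enter")
```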
Interactive cognitive agents contribute to cognitive science, human-computer
interaction, automation (interface engineering), education, and assistive technol-
ogy through their broad coverage of theory and ability to generate behaviors.
Farnaz Tehranchi
See also: Cognitive Architectures.
Further Reading
Newell, Allen. 1990. Unified Theories of Cognition. Cambridge, MA: Harvard University
Press.
Ritter, Frank E., Farnaz Tehranchi, and Jacob D. Oury. 2018. “ACT-R: A Cognitive Archi-
tecture for Modeling Cognition.” Wiley Interdisciplinary Reviews: Cognitive Sci-
ence 10, no. 4: 1–19.
Bayesian statistics and heuristic decision rules. In the 1980s, Meditel was avail-
able as a doc-in-a-box software package sold by Elsevier Science Publishing
Company for IBM personal computers.
Dr. Homer Warner and his collaborators incubated a third medical AI competi-
tor, Iliad, in the Knowledge Engineering Center of the Department of Medical
Informatics at the University of Utah. In the early 1990s, Applied Medical Infor-
matics received a two-million-dollar grant from the federal government to link
Iliad’s diagnostic software directly to electronic databases of patient information.
Iliad’s primary audience included physicians and medical students, but in 1994,
the company released a consumer version of Iliad called Medical HouseCall.
Philip L. Frana
See also: Clinical Decision Support Systems; Computer-Assisted Diagnosis.
Further Reading
Bankowitz, Richard A. 1994. The Effectiveness of QMR in Medical Decision Support:
Executive Summary and Final Report. Springfield, VA: U.S. Department of Com-
merce, National Technical Information Service.
Freiherr, Gregory. 1979. The Seeds of Artificial Intelligence: SUMEX-AIM. NIH Publica-
tion 80-2071. Washington, DC: National Institutes of Health, Division of Research
Resources.
Lemaire, Jane B., Jeffrey P. Schaefer, Lee Ann Martin, Peter Faris, Martha D. Ainslie,
and Russell D. Hull. 1999. “Effectiveness of the Quick Medical Reference as a
Diagnostic Tool.” Canadian Medical Association Journal 161, no. 6 (September
21): 725–28.
Miller, Randolph A., and Fred E. Masarie, Jr. 1990. “The Demise of the Greek Oracle
Model for Medical Diagnosis Systems.” Methods of Information in Medicine 29,
no. 1: 1–2.
Miller, Randolph A., Fred E. Masarie, Jr., and Jack D. Myers. 1986. “Quick Medical Ref-
erence (QMR) for Diagnostic Assistance.” MD Computing 3, no. 5: 34–48.
Miller, Randolph A., Harry E. Pople, Jr., and Jack D. Myers. 1982. “INTERNIST-1: An
Experimental Computer-Based Diagnostic Consultant for General Internal Medi-
cine.” New England Journal of Medicine 307, no. 8: 468–76.
Myers, Jack D. 1990. “The Background of INTERNIST-I and QMR.” In A History of
Medical Informatics, edited by Bruce I. Blum and Karen Duncan, 427–33. New
York: ACM Press.
Myers, Jack D., and Harry E. Pople, Jr. 1982. “INTERNIST: Can Artifi-
cial Intelligence Help?” In Clinical Decisions and Laboratory Use, edited by Don-
ald P. Connelly, Ellis S. Benson, M. Desmond Burke, and Douglas Fenderson,
251–69. Minneapolis: University of Minnesota Press.
Pople, Harry E., Jr. 1976. “Presentation of the INTERNIST System.” In Proceedings of
the AIM Workshop. New Brunswick, NJ: Rutgers University.
Ishiguro, Hiroshi (1963–)
Hiroshi Ishiguro is a world-renowned engineer, known especially for his lifelike
humanoid robots. He believes that the current information society will inevitably
evolve into a world of caregiver or helpmate robots. Ishiguro also hopes that the
study of artificial humans will help us better understand how humans are
respond orally, maintain eye contact, and react swiftly to human touch. This is
made possible through a distributed and ubiquitous sensor net composed of infra-
red motion detectors, cameras, microphones, identification tag readers, and floor
sensors. The robot uses artificial intelligence to determine whether the human is
touching the robot in a gentle or aggressive manner. Ishiguro also introduced a
child version of the robot, called Repliee R1, which is similar in appearance to his
then four-year-old daughter.
More recently, Actroids have been shown to be capable of mimicking the limb
and joint movement of humans, by watching and repeating the motions. The robot
is not capable of true locomotion, as most of the computer hardware running the
artificial intelligence software is external to the robot. In experiments conducted in
Ishiguro’s lab, self-reports of the feelings and moods of human subjects are recorded
as robots exhibit behaviors. The range of moods recorded in response to the Actroid
varies from interest to disgust, acceptance to fear. Real-time neuroimaging of
human subjects has also helped Ishiguro’s research colleagues better understand
the ways human brains are activated in human-android relations. In this way,
Actroid is a testbed for understanding why some of the observed actions performed
by nonhuman agents fail to produce desired cognitive responses in humans.
The Geminoid series of robots was developed in recognition that artificial intel-
ligence lags far behind robotics in producing lifelike interactions between humans
and androids. In particular, Ishiguro acknowledged that it would be many years
before a machine could engage in a long, immersive oral conversation with a
human. Geminoid HI-1, introduced in 2006, is a teleoperated (rather than truly
autonomous) robot identical in appearance to Ishiguro. The term geminoid comes
from the Latin word for “twin.” Geminoid is capable of hand fidgeting, blinking,
and movements associated with human breathing. The android is controlled by
motion-capture technology that reproduces the facial and body movements of
Ishiguro himself. The robot is capable of speaking in a humanlike voice modeled
after its creator. Ishiguro hopes he can one day use the robot to teach classes by
way of remote telepresence. He has noticed that when he is teleoperating the robot
the sense of immersion is so great that his brain is tricked into forming phantom
impressions of physical touch when the android is poked. The Geminoid-DK,
released in 2011, is a mechanical doppelgänger of Danish psychology professor
Henrik Schärfe. While some viewers find the Geminoid appearance creepy, many
do not and simply engage naturally in communication with the robot.
The Telenoid R1 is a teleoperated android robot released in 2010. Telenoid is
amorphous, only minimally approximating the shape of a human, and stands 30
inches high. The purpose of the robot is to communicate a human voice and ges-
tures to a viewer who might use it as a communication tool or videoconferencing
device. Like other robots in Ishiguro’s lab, the Telenoid appears lifelike: it mimics
the motions of breathing and talking and blinks. But the design also minimizes
the number of features to maximize imagination. The Telenoid in this way is anal-
ogous to a physical, real-world avatar. It is intended to help facilitate more inti-
mate, more humanlike interaction over telecommunications technology. Ishiguro
has proposed that the robot might one day serve as a satisfactory stand-in for a
teacher or companion who is otherwise available only at a distance. A miniature
variant of the robot called the Elfoid can be held in one hand and kept in a pocket.
The Actroid and the Telenoid were anticipated by the autonomous persocom dolls
that substitute for smart phones and other devices in the extremely popular manga
series Chobits.
Ishiguro is Professor of Systems Innovation and Director of the Intelligent
Robotics Laboratory at Osaka University in Japan. He is also a group leader at the
Advanced Telecommunications Research Institute (ATR) in Kansai Science City
and cofounder of the tech-transfer venture company Vstone Ltd. He hopes that
future commercial ventures will leverage success with teleoperated robots to pro-
vide capital for ongoing, continuous improvement of his autonomous series of
robots. His latest effort is a humanoid robot called Erica who became a Japanese
television news anchor in 2018.
As a young man, Ishiguro intensively studied oil painting, thinking as he
worked about how to represent human likeness on canvas. He became spellbound
by robots in the computer science laboratory of Hanao Mori at Yamanashi Uni-
versity. Ishiguro studied for his doctorate in engineering under computer vision
and image recognition pioneer Saburo Tsuji at Osaka University. In projects
undertaken in Tsuji’s lab, he worked on mobile robots capable of SLAM (simultaneous localization and mapping) using panoramic and omni-directional video cameras. This research led to his PhD dissertation, which focused on tracking a
human subject through active control of the cameras and panning to achieve full
360-degree views of the environment. Ishiguro thought that the technology and
his applications could be used to give an interactive robot a useful internal map
of its environment. The first reviewer of a paper based on his dissertation rejected
his work.
Ishiguro believes that fine arts and technology are inextricably intertwined; art
inspires new technologies, and technology allows the creation and reproduction of
art. In recent years, Ishiguro has brought his robots to Seinendan, a theatre com-
pany formed by Oriza Hirata, in order to apply what he has learned about human-
robot communications in real-life situations. Precedents for Ishiguro’s branch of
cognitive science and AI, which he calls android science, may be found in Dis-
ney’s “Great Moments with Mr. Lincoln” audio-animatronic show at Disneyland
and the fictional robot substitutes depicted in the Bruce Willis movie Surrogates
(2009). Ishiguro has a cameo in the Willis film.
Philip L. Frana
See also: Caregiver Robots; Nonhuman Rights and Personhood.
Further Reading
Guizzo, Erico. 2010. “The Man Who Made a Copy of Himself.” IEEE Spectrum 47, no. 4
(April): 44–56.
Ishiguro, Hiroshi, and Fabio Dalla Libera, eds. 2018. Geminoid Studies: Science and
Technologies for Humanlike Teleoperated Androids. New York: Springer.
Ishiguro, Hiroshi, and Shuichi Nishio. 2007. “Building Artificial Humans to Understand
Humans.” Journal of Artificial Organs 10, no. 3: 133–42.
Ishiguro, Hiroshi, Tetsuo Ono, Michita Imai, Takeshi Maeda, Takayuki Kanda, and Ryo-
hei Nakatsu. 2001. “Robovie: An Interactive Humanoid Robot.” International
Journal of Industrial Robotics 28, no. 6: 498–503.
Kahn, Peter H., Jr., Hiroshi Ishiguro, Batya Friedman, Takayuki Kanda, Nathan G. Freier,
Rachel L. Severson, and Jessica Miller. 2007. “What Is a Human? Toward Psycho-
logical Benchmarks in the Field of Human–Robot Interaction.” Interaction Stud-
ies 8, no. 3: 363–90.
MacDorman, Karl F., and Hiroshi Ishiguro. 2006. “The Uncanny Advantage of Using
Androids in Cognitive and Social Science Research.” Interaction Studies 7, no. 3:
297–337.
Nishio, Shuichi, Hiroshi Ishiguro, and Norihiro Hagita. 2007a. “Can a Teleoperated
Android Represent Personal Presence? A Case Study with Children.” Psychologia
50: 330–42.
Nishio, Shuichi, Hiroshi Ishiguro, and Norihiro Hagita. 2007b. “Geminoid: Teleoperated
Android of an Existing Person.” In Humanoid Robots: New Developments, edited
by Armando Carlos de Pina Filho, 343–52. Vienna, Austria: I-Tech.
Knight, Heather
Heather Knight is an artificial intelligence and engineering expert known for her
work in the area of entertainment robotics. The goal of her Collaborative Humans
and Robots: Interaction, Sociability, Machine Learning, and Art (CHARISMA)
Research Lab at Oregon State University is to bring performing arts methods to
the field of robotics.
Knight describes herself as a social roboticist, someone who creates non-
anthropomorphic—and sometimes nonverbal—machines that engage in interac-
tion with humans. She creates robots exhibiting behavior inspired by human
interpersonal communication. These behaviors include patterns of speech, wel-
coming motions, open postures, and a range of other context clues that help
humans develop rapport with robots in everyday life. In the CHARISMA Lab,
Knight experiments with social robots and so-called charismatic machines, as
well as investigates social and government policy related to robots.
Knight is founder of the Marilyn Monrobot interactive robot theatre company.
The associated Robot Film Festival is an outlet for roboticists to show off their lat-
est creations in a performance environment, and for the showing of films with
relevance to the advancing state of the art in robotics and robot-human interaction.
The Marilyn Monrobot company grew out of Knight’s association with the Syyn
Labs creative collective and her observations on robots constructed for purposes
of performance by Guy Hoffman, Director of the MIT Media Innovation Lab.
Knight’s company focuses on robot comedy. Knight argues that theatrical spaces
are perfect environments for social robotics research because the spaces not only
inspire playfulness—requiring expression and interaction on the part of the robot
actors—but also involve creative constraints where robots thrive, for example, a
fixed stage, learning from trial-and-error, and repeat performances (with manipu-
lated variations).
Knight has argued that the use of robots in entertainment contexts is valuable
because it enhances human culture, imagination, and creativity. Knight intro-
duced a stand-up comedy robot named Data at the TEDWomen conference in
2010. Data is a Nao robot developed by Aldebaran Robotics (now SoftBank
Robotics). Data performs a comedy routine (which includes about 200 prepro-
grammed jokes) while collecting audience feedback and fine-tuning its act in real
time. The robot was developed with Scott Satkin and Varun Ramakrishna at Carn-
egie Mellon University. Knight now works on comedy with Ginger the Robot.
Robot entertainment also drives the development of algorithms for artificial
social intelligence. In other words, art is used to inspire new technology. Data and
Ginger utilize a microphone and machine learning algorithm to test audience
reactions and interpret the sounds produced by audiences (laughter, chatter, clap-
ping, etc.). Crowds also receive green and red cards that they hold up after each
joke. Green cards help the robots understand that the audience likes the joke. Red
cards are for jokes that fall flat. Knight has learned that good robot comedy doesn’t
need to hide the fact that the spotlight is on a machine. Rather, Data draws laughs
by bringing attention to its machine-specific troubles and by making self-
deprecating comments about its limitations. Knight has found improvisational
acting and dance techniques invaluable in building expressive, charismatic robots.
In the process, she has revised the classic robotic paradigm of Sense-Plan-Act, preferring instead Sensing-Character-Enactment, which is closer in practice to the process used in theatrical performance.
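One way such real-time fine-tuning might work is sketched below as a simple bandit-style loop that scores jokes from laughter and green/red card counts. The joke list, scoring rule, and selection strategy are invented for illustration and are not the actual Data or Ginger software.

```python
# A minimal sketch of adjusting a comedy set from audience feedback.
# Joke names, the scoring formula, and the epsilon value are assumptions.

import random

jokes = {"robot walks into a bar": 0.0, "binary pun": 0.0, "self-deprecating bit": 0.0}
plays = {joke: 0 for joke in jokes}

def audience_score(laughter_level, green_cards, red_cards):
    # Combine microphone-derived laughter with green/red card counts.
    return laughter_level + 0.5 * (green_cards - red_cards)

def pick_next_joke(epsilon=0.2):
    # Mostly tell the best-rated joke so far, sometimes explore another one.
    if random.random() < epsilon:
        return random.choice(list(jokes))
    return max(jokes, key=lambda j: jokes[j])

def update(joke, score):
    # Keep a running average score for each joke.
    plays[joke] += 1
    jokes[joke] += (score - jokes[joke]) / plays[joke]

joke = pick_next_joke()
update(joke, audience_score(laughter_level=0.7, green_cards=12, red_cards=3))
print(joke, jokes[joke])
```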
Knight is now experimenting with ChairBots, hybrid machines developed by
attaching IKEA wooden chairs on top of Neato Botvacs (a brand of intelligent
robotic vacuum cleaner). The ChairBots are being tested in public spaces to deter-
mine how such a simple robot can use rudimentary movements as a means of
communication to convince humans to step out of the way. They have also been
employed to convince potential café patrons to enter the premises, find a table,
and sit down.
While working toward degrees in the MIT Media Lab, Knight worked with
Personal Robots group leader Professor Cynthia Breazeal on the synthetic organic
robot art installation Public Anemone for the SIGGRAPH computer graphics con-
ference. The piece comprised a fiberglass cave containing glowing creatures mov-
ing and responding to music and people. The centerpiece robot, also dubbed
“Public Anemone,” swayed and interacted with people, bathed in a waterfall,
watered a plant, and interacted with other environmental features in the cave.
Knight worked with animatronics designer Dan Stiehl to make artificial tube-
worms with capacitive sensors. When a human viewer reached into the cave, the
tubeworm’s fiberoptic tentacles pulled into their tubes and changed color, as if
motivated by protective instincts. The group working on Public Anemone
described the project as an example of intelligent staging and a step toward fully
embodied robot theatrical performance. Knight also contributed to the mechani-
cal design of the “Cyberflora” kinetic robot flower garden installation at the
Smithsonian/Cooper-Hewitt Design Museum in 2003. Her master’s thesis at MIT
centered on the Sensate Bear, a huggable robot teddy bear with full-body capaci-
tive touch sensors for exploring real-time algorithms involving social touch and
nonverbal communication.
Knight earned her doctorate from Carnegie Mellon University in 2016. Her dis-
sertation research involved expressive motion in low degree of freedom robots.
Knight observed in her research that humans do not require that robots closely
Knowledge Engineering
Knowledge engineering (KE) is a discipline of artificial intelligence concerned with transferring experts’ knowledge into a formal automated system so that the system can achieve the same or similar problem-solving output as human experts operating on the same data set. More precisely, knowledge engineering designs methodologies for building large knowledge-based systems (KBS), also referred to as expert systems, using appropriate methods, models, tools, and languages. Modern knowledge engineering relies on the knowledge acquisition and documentation structuring (KADS) methodology for knowledge elicitation; the building of knowledge-based systems is thus regarded as a modeling activity (i.e., knowledge engineering builds computer models).
Because the human experts’ knowledge is a mixture of skills, experience, and
formal knowledge, it is difficult to formalize the knowledge acquisition process.
Consequently, the experts’ knowledge is modeled rather than directly transferred
from human experts to the programming system. At the same time, directly simulating the complete cognitive process of an expert is also very challenging. The computer models are instead expected to achieve results similar to those of experts solving problems in the domain rather than to match the experts’ cognitive capabilities. Thus, the focus of knowledge engineering is on modeling and prob-
lem solving methods (PSM) independent of different representation formalisms
(production rules, frames, etc.).
The problem solving method is central for knowledge engineering and denotes
knowledge-level specification of a reasoning pattern that can be used to conduct a
knowledge-intensive task. Each problem solving method is a pattern that provides
template structures for solving a particular problem. A popular classification of problem solving methods according to their typology distinguishes “diagnosis,” “classification,” and “configuration.” Examples include the PSM “Cover-and-Differentiate” for diagnostic tasks and the PSM “Propose-and-Revise” for parametric design tasks. The assumption behind any problem solving method is that the logical adequacy of the proposed method matches the computational tractability of the system implementation based on it.
Early examples of expert systems often utilize the PSM heuristic classification—
an inference pattern that describes the behavior of knowledge based systems in
terms of goals and knowledge needed to achieve these goals. This problem solving
method comprises inference actions and knowledge roles and their relationships. The relationships define which role the domain knowledge plays in each inference action. The knowledge roles are observables, abstract observables, solution abstractions, and solutions, while the inference actions are abstract,
heuristic match, and refine. The PSM heuristic classification needs a hierarchi-
cally structured model of observables and solutions for “abstract” and “refine,”
which makes it suitable for acquiring static domain knowledge.
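The abstract–match–refine pattern of heuristic classification can be sketched in a few lines. The medical-style observables, abstractions, and refinements below are invented purely for illustration.

```python
# A minimal sketch of the heuristic-classification problem solving method:
# abstract the observables, heuristically match them to a solution
# abstraction, then refine to concrete solutions. The knowledge content
# is an illustrative assumption, not from any deployed expert system.

def abstract(observables):
    # Map raw observables to abstract observables.
    abstractions = set()
    if observables.get("white_cell_count", 0) > 11_000:
        abstractions.add("elevated_white_count")
    if observables.get("temperature", 37.0) > 38.0:
        abstractions.add("fever")
    return abstractions

HEURISTIC_MATCHES = {
    # abstract observables -> solution abstraction
    frozenset({"elevated_white_count", "fever"}): "bacterial_infection",
}

REFINEMENTS = {
    # solution abstraction -> concrete candidate solutions
    "bacterial_infection": ["pneumonia", "urinary_tract_infection"],
}

def heuristic_classification(observables):
    abstracted = abstract(observables)
    for pattern, solution_class in HEURISTIC_MATCHES.items():
        if pattern <= abstracted:          # heuristic match
            return REFINEMENTS[solution_class]  # refine
    return []

print(heuristic_classification({"white_cell_count": 13_000, "temperature": 38.6}))
```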
In the late 1980s, the modeling approaches in knowledge engineering moved
toward role limiting methods (RLM) and generic tasks (GT). Role limiting meth-
ods utilize the concept of the “knowledge role,” which specifies the way the par-
ticular domain knowledge is being used in the problem-solving process. RLM
wraps a PSM by describing it in general terms so that the method can be reused. This approach, however, encapsulates only a single
instance of PSM and thus is not suitable for problems that require use of several
methods. An extension of the role limiting methods idea is configurable role limit-
ing methods (CRLM), which offer a predefined set of RLMs along with a fixed
scheme of knowledge types. Each member method can be applied to a different
subset of a task, but adding a new method is quite difficult to achieve since it
requires modification in predefined knowledge types.
The generic task approach provides generic description of input and output
along with a fixed scheme of knowledge types and inference strategy. The generic
task is based on the “strong interaction problem hypothesis,” which states that
structure and representation of domain knowledge can be determined completely
by its use. Each generic task uses knowledge and applies control strategies that are
specific to that knowledge. Because the control strategies are closer to a domain,
the actual knowledge acquisition used in GT demonstrates higher precision in
See also: Clinical Decision Support Systems; Expert Systems; INTERNIST-I and QMR;
MOLGEN; MYCIN.
Further Reading
Schreiber, Guus. 2008. “Knowledge Engineering.” In Foundations of Artificial Intelli-
gence, vol. 3, edited by Frank van Harmelen, Vladimir Lifschitz, and Bruce Por-
ter, 929–46. Amsterdam: Elsevier.
Studer, Rudi, V. Richard Benjamins, and Dieter Fensel. 1998. “Knowledge Engineering:
Principles and Methods.” Data & Knowledge Engineering 25, no. 1–2 (March):
161–97.
Studer, Rudi, Dieter Fensel, Stefan Decker, and V. Richard Benjamins. 1999. “Knowledge
Engineering: Survey and Future Directions.” In XPS 99: German Conference on
Knowledge-Based Systems, edited by Frank Puppe, 1–23. Berlin: Springer.
Kurzweil, Ray (1948–)
Ray Kurzweil is an American inventor and futurist. He spent the first part of his
professional life inventing the first CCD flat-bed scanner, the first omni-font opti-
cal character recognition device, the first print-to-speech reading machine for the
blind, the first text-to-speech synthesizer, the first music synthesizer capable of
recreating the grand piano and other orchestral instruments, and the first commer-
cially marketed, large-vocabulary speech recognition machine. He has received
many honors for his achievements in the field of technology, including a 2015
Technical Grammy Award and the National Medal of Technology.
Kurzweil is cofounder and chancellor of Singularity University and the director
of engineering at Google, heading up a team that develops machine intelligence
and natural language understanding. Singularity University is a nonaccredited
graduate university built on the idea of addressing challenges as grand as renew-
able energy and space travel through an intimate comprehension of the opportu-
nity offered by the current acceleration of technological progress. Headquartered
in the Silicon Valley, the university has grown to one hundred chapters in fifty-
five countries, offering seminars and educational and entrepreneurial acceleration
programs. Kurzweil wrote the book How to Create a Mind (2012) while at Google.
In it he describes his Pattern Recognition Theory of Mind, asserting that the neo-
cortex is a hierarchical system of pattern recognizers. Kurzweil argues that emu-
lating this architecture in machines could lead to an artificial superintelligence.
He hopes that in this way he can bring natural language understanding to Google.
It is as a futurist that Kurzweil has reached a popular audience. Futurists are
people whose specialty or interest is the near- to long-term future and future-
related subjects. They systematically explore predictions and elaborate possibili-
ties about the future by means of well-established approaches such as scenario
planning. Kurzweil has written five national best-selling books, including the New
York Times best seller The Singularity Is Near (2005). His list of predictions is
long. In his first book, The Age of Intelligent Machines (1990), Kurzweil foresaw
the explosive growth in worldwide internet use that began in the second half of the
decade. In his second highly influential book, The Age of Spiritual Machines
(where “spiritual” stands for “conscious”), written in 1999, he rightly predicted
that computers would soon outperform humans at making the best investment
decisions. In the same book, Kurzweil predicted that machines will eventually
“appear to have their own free will” and even enjoy “spiritual experiences” (Kurz-
weil 1999, 6). More precisely, the boundaries between humans and machines will
blur to a point where they will essentially live forever as merged human-machine
hybrids. Scientists and philosophers have criticized Kurzweil about his prediction
of a conscious machine, the main objection being that consciousness cannot be a
product of computations.
In his third book, The Singularity Is Near, Kurzweil deals with the phenomenon of the Technological Singularity. The term singularity was coined by the great
mathematician John von Neumann. In conversation with his colleague Stanislaw
Ulam in the 1950s, von Neumann postulated the ever-accelerating pace of techno-
logical change, which he said “gives the appearance of approaching some essen-
tial singularity in the history of the race beyond which human affairs as we know
them could not continue” (Ulam 1958, 5). To put it differently, technological advancement would change the history of the human race. Forty years later, computer scientist, professor of mathematics, and science fiction writer Vernor Vinge revived the term in his 1993 essay “The Coming Technological Singularity.” In Vinge’s essay, technological advancement refers more specifically to the increase in computing power. Vinge addresses the hypothesis of a self-improving artificial
intelligent agent. In this hypothesis, the artificial intelligent agent continues to
upgrade itself and advances technologically at an incomprehensible rate, to the
point that a superintelligence—that is, an artificial intelligence that far surpasses
all human intelligence—is born. In Vinge’s dystopian view, the machines become
autonomous first and superintelligent second, to the point that humans lose control of technology and machines take their destiny into their own hands. Because the technology is more intelligent than humans, machines will dominate the world.
The Singularity, according to Vinge, is the end of the human era. Kurzweil
offers an anti-dystopic vision of the Singularity. Kurzweil’s basic assumption is
that humans can create something more intelligent than themselves; as a matter of
fact, the exponential improvements in computer power make the creation of an
intelligent machine almost inevitable, to the point where the machine will become
more intelligent than the humans. At this point, in Kurzweil’s opinion, machine
intelligence and humans would merge. Not coincidentally, in fact, the subtitle of
The Singularity Is Near is When Humans Transcend Biology.
The underlying premise of Kurzweil’s overall vision is discontinuity: no lesson
from the past or even the present can help humans to detect the path to the future.
This also explains the need for new forms of education such as Singularity Uni-
versity. Any reminiscence of the past, and every nostalgic turning back to history,
makes humanity more vulnerable to technological change. History itself, as a
human construct, will soon end with the coming of a new superintelligent, almost
immortal species. Immortals is another word for posthumans, the next step in
human evolution. In Kurzweil’s opinion, posthumanity consists of robots with
consciousness, rather than humans with machine bodies. The future, he argues,
should be built on the premise that humanity is living in an unprecedented era of
technological progress. In his view, the Singularity will empower humankind
beyond its wildest expectations. While Kurzweil argues that artificial intelligence
is already starting to outpace human intelligence on specific tasks, he recognizes
that the point of superintelligence—also commonly known as the Technological Singularity—has not yet arrived. He remains confident that those who embrace the new
era of human-machine synthesis, and are unafraid of moving beyond the limits of
evolution, can foresee a bright future for humanity.
Enrico Beltramini
See also: General and Narrow AI; Superintelligence; Technological Singularity.
Further Reading
Kurzweil, Ray. 1990. The Age of Intelligent Machines. Cambridge, MA: MIT Press.
Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human
Intelligence. New York: Penguin.
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New
York: Viking.
Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Math-
ematical Society 64, no. 3, pt. 2 (May): 1–49.
Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the
Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the
Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.
L
Lethal Autonomous Weapons Systems
Lethal Autonomous Weapons Systems (LAWS), also known as “lethal autono-
mous weapons,” “robotic weapons,” or “killer robots,” are air, ground, marine, or
spatial unmanned robotic systems that can independently select and engage tar-
gets and decide the use of lethal force. While popular culture abounds in human-
like robots waging wars or using lethal force against humans (ED-209 in RoboCop,
T-800 in The Terminator, etc.), robots with full lethal autonomy are still under
development. LAWS pose fundamental ethical problems, and they are increas-
ingly debated among AI experts, NGOs, and the international community.
While definitions of autonomy may differ in discussions over LAWS, autonomy is gener-
ally understood as “the ability to designate and engage a target without additional
human intervention after having been tasked to do so” (Arkin 2017). However,
LAWS are frequently divided into three categories based on their level of
autonomy:
1. Human-in-the-loop weapons: They can select targets and deliver force only
with a human command.
2. Human-on-the-loop weapons: They can select targets and deliver force under
the monitoring of a human supervisor who can override their actions.
3. Human-out-of-the-loop weapons: They are capable of selecting targets and
delivering force without any human input or interaction.
LAWS include these three types of unmanned weapons. The term “fully autono-
mous weapons” refers to not only human-out-of-the-loop weapons but also
“human-on-the-loop weapons” (or weapons with supervised autonomy) in cases where the supervision is in practice limited (for example, if the weapon’s response time cannot be matched by a human operator).
Robotic weapons are not new. For example, anti-tank mines, which, once activated by a human, engage targets on their own, have been widely used since World
War II (1939–1945). In addition, LAWS encompass many different types of
unmanned weapons with various levels of autonomy and lethality, from land
mines to remote-controlled Unmanned Combat Aerial Vehicles (UCAV), or com-
bat drones, and fire-and-forget missiles. However, to date, the only weapons with
complete autonomy in use are “defensive” systems (such as landmines). Neither
fully “offensive” autonomous lethal weapons nor LAWS using machine learning
have yet been deployed.
Even if military research is often kept secret, it is known that several countries
(in particular, the United States, China, Russia, United Kingdom, Israel, and
South Korea) are investing heavily in military applications of AI. The international AI arms race that began in the early 2010s has resulted in a fast pace of innovation in this field, and fully autonomous lethal weapons could be produced in the near future.
There are several noticeable precursors of such weapons. For example, the MK
15 Phalanx CIWS, notably deployed by the U.S. Navy, is a close-in weapon sys-
tem that is capable of autonomously performing its own search, detection, evalua-
tion, tracking, engagement, and kill assessment functions. Another example is
Israel’s Harpy, an anti-radar “fire-and-forget” drone that is deployed without a
specifically designated target, flies a search pattern, and attacks targets by
self-destructing.
The deployment of LAWS could dramatically change warfare as previously
gunpowder and nuclear weapons did. In particular, it would put an end to the dis-
tinction between combatants and weapons, and it would complicate the delimita-
tion of battlefields. Yet LAWS may offer numerous military benefits. Their use would act as a force multiplier and reduce the number of human combatants deployed, thereby saving military lives. LAWS may also outperform many other weapons in force projection, thanks to faster response times, the ability to perform maneuvers that human combatants cannot (because of physical constraints), and the capacity to make more efficient decisions (from a military perspective) than human combatants.
However, the use of LAWS raises several ethical and political concerns. In
addition to not complying with the “Three Laws of Robotics,” the deployment of
LAWS may result in normalizing the use of lethal force since armed conflicts
would involve fewer and fewer human combatants. In that regard, some consider
that LAWS pose a threat to humanity. Concerns over the deployment of LAWS
also include their use by non-state entities and their use by states in non-
international armed conflicts. Delegating life-and-death decisions to machines
may also be considered as harming human dignity.
In addition, the ability of LAWS to comply with the requirements of laws of
war is widely disputed, in particular by international humanitarian law and specif-
ically the principles of proportionality and of military necessity. Yet, some argue
that LAWS, despite not possessing compassion, would at least not act out emo-
tions such as anger, which could result in causing intentional suffering such as
torture or rape. Given the daunting task of preventing war crimes, as proven by
the numerous cases in past armed conflicts, it can even be argued that LAWS
could potentially perpetrate fewer offenses than human combatants.
How the deployment of LAWS would affect noncombatants is also a live debate. Some claim that the use of LAWS may lead to fewer civilian casualties
(Arkin 2017), since AI may be more efficient than human combatants in decision-
making. However, some critics point to a higher risk of civilians being caught in
crossfire. In addition, the ability of LAWS to respect the principle of distinction is
also much discussed, since distinguishing combatants and civilians may be espe-
cially complex, in particular in non-international armed conflicts and in asymmet-
ric warfare.
LAWS cannot be held accountable for any of their actions since they are not
moral agents. This lack of accountability could result in further harm to the vic-
tims of war. It may also encourage the perpetration of war crimes. However, it is
arguable that the moral responsibility for LAWS would be borne by the authority
that decided to deploy it or by people who had designed or manufactured it.
In the last decade, LAWS have generated significant scientific attention and
political debate. The coalition that launched the campaign “Stop Killer Robots” in
2012 now consists of eighty-seven NGOs. Its advocacy for a preemptive ban on
the development, production, and use of LAWS has resulted in civil society mobi-
lizations. In 2016, nearly 4,000 AI and robotics researchers signed a letter calling
for a ban on LAWS. In 2018, more than 240 technology companies and organiza-
tions pledged to neither participate in nor support the development, manufacture,
trade, or use of LAWS.
Considering that existing international law may not adequately address the
issues raised by LAWS, the United Nations’ Convention on Certain Conventional
Weapons initiated a consultative process over LAWS. In 2016, it established a
Group of Governmental Experts (GGE). To date, the GGE has failed to reach an
international agreement to ban LAWS due to lack of consensus and due to the
opposition of several countries (especially the United States, Russia, South Korea,
and Israel). However, twenty-six countries in the United Nations have endorsed
the call for a ban on LAWS, and in June 2018, the European Parliament adopted a
resolution calling for the urgent negotiation of “an international ban on weapon
systems that lack human control over the use of force.”
The future of warfare will likely include LAWS since there is no example of a
technological innovation that has not been used. Yet, there is wide agreement that
humans should be kept “on-the-loop” and that international and national laws should
regulate the use of LAWS. However, as the example of nuclear and chemical weap-
ons and anti-personal landmines has shown, there is no assurance that all states and
non-state entities would enforce an international legal ban on the use of LAWS.
Gwenola Ricordeau
See also: Autonomous Weapons Systems, Ethics of; Battlefield AI and Robotics.
Further Reading
Arkin, Ronald. 2017. “Lethal Autonomous Systems and the Plight of the Non-Combatant.”
In The Political Economy of Robots, edited by Ryan Kiggins, 317–26. Basing-
stoke, UK: Palgrave Macmillan.
Heyns, Christof. 2013. Report of the Special Rapporteur on Extrajudicial, Summary,
or Arbitrary Executions. Geneva, Switzerland: United Nations Human Rights
Council. http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession
/Session23/A-HRC-23-47_en.pdf.
Human Rights Watch. 2012. Losing Humanity: The Case against Killer Robots. https://
www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots.
Krishnan, Armin. 2009. Killer Robots: Legality and Ethicality of Autonomous Weapons.
Aldershot, UK: Ashgate.
Roff, Heather. M. 2014. “The Strategic Robot Problem: Lethal Autonomous Weapons in
War.” Journal of Military Ethics 13, no. 3: 211–27.
Simpson, Thomas W., and Vincent C. Müller. 2016. “Just War and Robots’ Killings.”
Philosophical Quarterly 66, no. 263 (April): 302–22.
Singer, Peter. 2009. Wired for War: The Robotics Revolution and Conflict in the 21st Cen-
tury. New York: Penguin.
Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.
M
Mac Hack
Mac Hack IV, a program written by Richard Greenblatt in 1967, achieved recogni-
tion by becoming the first chess program to enter a chess tournament and to play
competently against humans, earning a rating between 1,400 and 1,500 in the U.S.
Chess Federation rating system. Greenblatt’s program, written in the macro
assembly language MIDAS, ran on a 200 kilohertz DEC PDP-6 computer. He
wrote the program while a graduate student affiliated with Project MAC in MIT’s
Artificial Intelligence Laboratory.
Russian mathematician Alexander Kronrod is said to have declared, “Chess is
the drosophila [fruit fly] of artificial intelligence,” the adopted experimental organ-
ism of the field (quoted in McCarthy 1990, 227). Since 1950, when Claude Shan-
non first articulated chess play as a problem for computer programmers, creating
a champion chess program has been a prized problem in artificial intelligence.
Chess and games in general present complex yet clearly limited problems with
well-defined rules and goals. Chess has often been characterized as a clear
example of humanlike intelligent behavior. Chess play is a well-bounded example
of human decision-making processes in which moves must be selected with a goal
in mind, while using limited information and with uncertainty regarding the
outcome.
In the mid-1960s, the processing power of computers greatly limited the depth
to which a chess move and its subsequent possible replies could be analyzed
because, with each subsequent reply, the number of possible configurations grows
exponentially. The best human players have been shown to analyze a limited num-
ber of moves to greater depth, instead of considering as many moves as possible to
lesser depth. Greenblatt attempted to replicate the processes skilled players use to
identify relevant branches of the game tree. He programmed Mac Hack to use a
minimax search of the game tree, coupled with alpha-beta pruning and heuristic
components, to decrease the number of nodes evaluated when selecting moves. In
this way, Mac Hack’s style of play more closely resembled that of human players
than of more recent chess programs (such as Deep Thought and Deep Blue), which
are aided by the brute force of high processing speeds to selectively analyze tens
of millions of branches of the game tree before making moves.
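The search technique itself can be sketched compactly. The code below shows generic minimax with alpha-beta pruning on a toy game tree; Mac Hack's actual MIDAS implementation also relied on heuristic plausible-move generation and chess-specific evaluation, which are omitted here.

```python
# A minimal sketch of minimax search with alpha-beta pruning of the kind
# Mac Hack used to limit the number of game-tree nodes it evaluated. The
# game tree and leaf values are toy stand-ins for real chess positions.

def alphabeta(node, depth, alpha, beta, maximizing):
    """node: either a numeric evaluation (leaf) or a list of child nodes."""
    if depth == 0 or not isinstance(node, list):
        return node                      # heuristic evaluation of the position
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # prune: the opponent will avoid this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A tiny two-ply game tree: the mover chooses a branch, the opponent replies.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, depth=2, alpha=float("-inf"), beta=float("inf"),
                maximizing=True))  # 3
```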
Mac Hack earned considerable repute among artificial intelligence researchers
for its 1967 win against MIT philosopher Hubert Dreyfus in a match organized by
MIT mathematician Seymour Papert. In 1965, the RAND Corporation had pub-
lished a mimeographed version of Dreyfus’s report, Alchemy and Artificial
Intelligence, which critiqued the claims and goals of artificial intelligence research-
ers. Dreyfus argued that no computer could ever achieve intelligence because
human reason and intelligence are not entirely rule-bound, and therefore the infor-
mation processing of a computer could not replicate or describe human cognition.
Among his numerous criticisms of AI, Dreyfus discussed efforts to create
chess-playing computers in a section of the report entitled “Signs of Stagnation.”
The AI community initially perceived Mac Hack’s success against Dreyfus as
vindication.
Juliet Burba
See also: Alchemy and Artificial Intelligence; Deep Blue.
Further Reading
Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelli-
gence. New York: Basic Books.
Greenblatt, Richard D., Donald E. Eastlake III, and Stephen D. Crocker. 1967. “The
Greenblatt Chess Program.” In AFIPS ’67: Proceedings of the November 14–16,
1967, Fall Joint Computer Conference, 801–10. Washington, DC: Thomson Book
Company.
Marsland, T. Anthony. 1990. “A Short History of Computer Chess.” In Computers, Chess,
and Cognition, edited by T. Anthony Marsland and Jonathan Schaeffer, 3–7. New
York: Springer-Verlag.
McCarthy, John. 1990. “Chess as the Drosophila of AI.” In Computers, Chess, and Cogni-
tion, edited by T. Anthony Marsland and Jonathan Schaeffer, 227–37. New York:
Springer-Verlag.
McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History
and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.
Machine Learning Regressions
Machine learning regression models are not static. They can be continuously updated with additional training data or by providing the actual correct outputs on previously unlabeled inputs.
Despite the generalizability of machine learning algorithms, there is no single
program that is best for all regression problems. There are a multitude of factors to
consider when selecting the most optimal machine learning regression algorithm
for the current problem (e.g., programming languages, available libraries, algo-
rithm types, data size, and data structure).
As with other traditional statistical methods, there are machine learning pro-
grams that use single- or multivariable linear regression techniques. These model
the relationships between a single independent feature variable or multiple
independent feature variables and a dependent target variable. The outputs of these models are linear combinations of the input variables. Such models are useful for small, noncomplex datasets, but they are limited to those conditions. For nonlinear data, polynomial regressions can be applied. These require the programmer to already understand the structure of the data, which is often the very thing machine learning models are meant to uncover in the first place. These algorithms will likely not be useful for most real-world data, but they offer a simple place to begin and can provide users with easy-to-explain models.
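A brief sketch of these two approaches appears below. It uses scikit-learn, an assumed library choice, and entirely synthetic data; the variable names and values are made up for illustration.

# Single-variable linear and polynomial regression on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(100, 1))                 # one feature variable
y = 2.0 * x[:, 0] + 0.5 * x[:, 0] ** 2 + rng.normal(0, 1, 100)   # nonlinear target

linear = LinearRegression().fit(x, y)                  # straight-line fit

poly = PolynomialFeatures(degree=2)                    # add an x^2 term for nonlinear data
x_poly = poly.fit_transform(x)
quadratic = LinearRegression().fit(x_poly, y)

print("linear R^2:   ", linear.score(x, y))
print("quadratic R^2:", quadratic.score(x_poly, y))

On data with a genuine quadratic trend, the polynomial fit scores noticeably better, but only because the programmer chose the right degree in advance.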
As the name suggests, decision trees are tree-like structures that map a program's input features or attributes onto a final output target. A decision tree algorithm starts with a root node (i.e., an input variable) whose condition splits the data along different branches. A branch that stops splitting ends in a leaf node; a node at which a branch continues to split is known as an internal node.
For example, a dataset of diabetic and nondiabetic patients could use input variables of age, weight, and family diabetic history to predict the odds of a new patient having diabetes. The program could set the age variable as the root node (e.g., age ≥ 40), splitting the dataset into patients who are 40 or older and those who are 39 and younger. If the next internal node along the 40-or-older branch asks whether a parent has or had diabetes, and the corresponding leaf estimates a 60 percent chance that such a patient has diabetes, the model presents that leaf as the final output.
This is a very simple example of a decision tree that illustrates the decision pro-
cess. Decision trees can easily become thousands of nodes deep. Random forest
algorithms are merely amalgamations of decision trees. They can be formed from
collections of hundreds of decision trees, from which the final outputs are the
averaged outputs of the individual trees. Decision tree and random forest algorithms are well suited to learning highly complex data structures, but they are prone to overfitting the data. Overfitting can be attenuated with proper pruning (e.g., setting minimum sample limits for splits and leaf nodes) and with large enough random forests.
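The diabetes-style example above can be sketched in a few lines. The following Python fragment uses scikit-learn, an assumed library choice, on synthetic data; the feature names, threshold, and labels are purely illustrative.

# A toy decision tree and random forest on synthetic patient data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
age = rng.integers(20, 80, n)
weight = rng.normal(80, 15, n)
family_history = rng.integers(0, 2, n)
X = np.column_stack([age, weight, family_history])
# Synthetic labels loosely tied to the inputs, for demonstration only.
y = ((age >= 40) & (family_history == 1)).astype(int)

# Pruning-style limits (max_depth, min_samples_leaf) help reduce overfitting.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, y)
forest = RandomForestClassifier(n_estimators=200, max_depth=3).fit(X, y)

new_patient = [[45, 90, 1]]                # age, weight, family history
print(tree.predict_proba(new_patient))     # probability estimate from one leaf
print(forest.predict_proba(new_patient))   # averaged over many trees

The forest's prediction is simply the average over its individual trees, which is why larger forests tend to smooth out the overfitting of any single tree.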
Neural networks are machine learning algorithms inspired by the neural con-
nections of the human brain. Just as in the human brain, the base units of neural network algorithms are neurons, and these neurons are arranged into multiple layers. The input variables are referred to as the input layer, the layers of neurons are
called hidden layers (there can be several hidden layers), and the output layer con-
sists of the final neuron.
In a feedforward process, a single neuron (a) receives the input feature vari-
ables, (b) the feature values are multiplied by a weight, (c) the resulting feature
products are added together, along with a bias variable, and (d) the sum is passed through an activation function, commonly a sigmoid function. The output of the single neuron's activation function is passed to all the neurons in the next hidden layer or to a final output layer, and the output of the final neuron is the predicted value. The weights and biases of each neuron are adjusted based on partial derivatives of the prediction error, propagated backward through the network's layers; this process is known as backpropagation.
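The feedforward step for a single neuron can be written out directly. The following minimal numpy sketch uses made-up weights and inputs for illustration and covers only the forward pass, not backpropagation.

# One neuron's forward pass: weighted inputs plus a bias, then a sigmoid.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

features = np.array([0.5, 1.2, -0.3])      # (a) input feature values
weights = np.array([0.8, -0.4, 0.2])       # (b) one weight per feature
bias = 0.1

weighted_sum = np.dot(features, weights) + bias   # (c) sum of products plus bias
activation = sigmoid(weighted_sum)                # (d) activation function
print(activation)   # passed on to the next layer, or used as the prediction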
Programmers can spend relatively less time restructuring their data as neural
networks are very effective at learning highly complex variable relationships.
Conversely, due to their complexity, neural network models are difficult to inter-
pret, and the intervariable relationships are largely hidden. Neural networks are
best when applied to very large datasets, and they require careful hyperparameter tuning and sufficient computational power.
Machine learning has become the standard tool for data scientists trying to
understand large datasets. Researchers are continually improving the accuracy
and usability of machine learning programs. However, machine learning algo-
rithms are only as valuable as the data that is used to train the model. Poor data
leads to wildly inaccurate results; biased data without proper understanding
reinforces social inequalities.
Raphael A. Rodriguez
See also: Algorithmic Bias and Error; Automated Machine Learning; Deep Learning;
Explainable AI; Gender and AI.
Further Reading
Garcia, Megan. 2016. “Racist in the Machine: The Disturbing Implications of Algorith-
mic Bias.” World Policy Journal 33, no. 4 (Winter): 111–17.
Géron, Aurelien. 2019. Hands-On Machine Learning with Scikit-Learn and TensorFlow:
Concepts, Tools, and Techniques to Build Intelligent Systems. Sebastopol, CA:
O’Reilly.
Machine Translation
Machine translation involves the automatic translation of human languages with
computing technology. From the 1950s to the 1970s, the U.S. government viewed
machine translation as a powerful tool in diplomatic efforts related to the contain-
ment of communism in the USSR and People’s Republic of China. More recently,
machine translation has become an instrument for selling products and services in
markets otherwise unattainable because of language barriers, and as a product in
its own right. Machine translation is also one of the litmus tests used to gauge pro-
gress in artificial intelligence. There are three general paradigms by which this
artificial intelligence research progresses. The oldest involves rule-based expert
systems and statistical approaches to machine translation. Two more recent para-
digms are neural-based machine translation and example-based machine transla-
tion (or translation by analogy). Today, the automatic translation of language is
considered an academic specialty within computational linguistics.
While several origins for the modern field of machine translation are suggested,
the idea of automatic translation as an academic field stems from correspondence
between the Birkbeck College (London) crystallographer Andrew D. Booth and
the Rockefeller Foundation's Warren Weaver in 1947. In a surviving memo written to
colleagues in 1949, Weaver explained by example how automatic translation might
proceed along the lines of code breaking: “I have a text in front of me which is
written in Russian, but I am going to pretend that it is really written in English and
that it has been coded in some strange symbols. All I need to do is strip off the
code in order to retrieve the information contained in the text” (Warren Weaver,
as cited in Arnold et al. 1994, 13).
A translation engine lies at the heart of most commercial machine translation
systems. Translation engines take sentences entered by the user and parse them
several times, each time applying algorithmic rules that transform the source sen-
tence into the desired target language. Both word-based and phrase-based trans-
formation rules are applied. The parser program’s first task is usually to do a
word-for-word replacement using a two-language dictionary. Additional parsing
iterations of the sentences apply comparative grammatical rules by taking into
account sentence structure, verb form, and appropriate suffixes. Translation
engines are evaluated based on intelligibility and accuracy.
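The first, word-for-word pass described above can be mimicked in a few lines. The toy Spanish-English dictionary below is hypothetical and deliberately tiny; the point is that later grammatical passes are needed precisely because simple substitution preserves the source language's word order.

# A naive word-for-word replacement pass using a two-language dictionary.
dictionary = {
    "el": "the", "gato": "cat", "negro": "black",
    "come": "eats", "pescado": "fish",
}

def word_for_word(sentence):
    # Unknown words are passed through unchanged.
    return " ".join(dictionary.get(w, w) for w in sentence.lower().split())

print(word_for_word("El gato negro come pescado"))
# -> "the cat black eats fish"  (adjective and noun still in Spanish order;
#    later parsing passes would apply grammatical reordering rules)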
Translation by machine is not flawless. “Word salad” translations may result
from poor grammar in the source text; lexical and structural differences between
languages; ambiguous usage; multiple meanings of words and idioms; and local
variations in usage. The severest early critique of machine translation of language
came from MIT philosopher, linguist, and mathematician Yehoshua Bar-Hillel in
1959–60. Bar-Hillel argued that near-perfect machine translation was impossible
in principle. To demonstrate the problem, he introduced the following passage: "John was looking for his toy box. Finally he found it. The box was in the pen. John was very happy." In this passage, the word "pen" is a challenge because the word
might represent a child’s playpen or a ballpoint pen for writing. Knowing the dif-
ference requires general-purpose knowledge about the world, which a computer
could not have.
Initial rounds of U.S. government funding eroded when, in 1964, the National
Academy of Sciences Automatic Language Processing Advisory Committee
(ALPAC) released an extremely damaging report about the poor quality and high
cost of machine translation. ALPAC concluded that the nation already possessed
an ample supply of human translators that could produce far superior translations.
Many machine translation experts criticized the ALPAC report, noting machine
efficiencies in the preparation of first drafts and successful rollouts of a handful of
machine translation systems.
Only a handful of machine translation research groups existed in the 1960s
and 1970s. Some of the largest were Canada’s TAUM group, the Mel’cuk and
Apresian groups in the Soviet Union, the GETA group in France, and the Ger-
man Saarbrücken SUSY group. The leading provider of automatic trans-
lation technology and services in the United States was SYSTRAN (System
Translation), a private company, supported by government contracts, that was founded by Hungarian-born linguist and computer scientist Peter Toma. Toma first became
interested in machine translation while at the California Institute of Technology
in the 1950s. Moving to Georgetown University around 1960, Toma began col-
laborating with other machine translation researchers. Both the Georgetown
machine translation effort and SYSTRAN’s first contract with the U.S. Air Force
in 1969 were dedicated to translating Russian into English. The company’s first
machine translation programs were tested that same year at Wright-Patterson
Air Force Base.
In 1974 and 1975, the National Aeronautics and Space Administration (NASA)
used SYSTRAN software as a translation aid during the Apollo-Soyuz Test Pro-
ject. Shortly thereafter, SYSTRAN picked up a contract to provide automatic
translation services to the Commission of the European Communities, which has since become the European Commission (EC). Seventeen
separate machine translation systems focused on different language pairs were in
use by the EC for internal communiqués by the 1990s. SYSTRAN migrated its
mainframe software to personal computers beginning in 1992. In 1995 the com-
pany released SYSTRAN Professional Premium for Windows. SYSTRAN
remains a global leader in machine translation.
Some important machine translation systems introduced since the late 1970s
revival of machine translation research include METEO, in use since 1977 by the
Canadian Meteorological Center in Montreal for the purpose of translating
weather bulletins from English to French; ALPS, developed by Brigham Young
University for Bible translation; SPANAM, the Pan American Health Organiza-
tion’s Spanish-to-English automatic translation system; and METAL, developed
at the University of Texas at Austin for use by the United States Air Force.
Machine translation became more widely available to the public on web brows-
ers in the late 1990s. One of the first online language translation services was
Babel Fish, a web-based tool developed from SYSTRAN machine translation
technology by a group of researchers at Digital Equipment Corporation (DEC).
The tool supported thirty-six translation pairings between thirteen languages.
Originally an AltaVista web search engine tool, Babel Fish was later sold to
Yahoo! and then Microsoft.
Macy Conferences
From 1946 to 1953, the Macy Conferences on Cybernetics sought to lay the
groundwork for emerging interdisciplinary sciences, among them what would
become cybernetics, cognitive psychology, artificial life, and artificial intelli-
gence. Participants in the freewheeling debates of the Macy Conferences included
famous twentieth-century scholars, academics, and researchers: psychiatrist W.
Ross Ashby, anthropologist Gregory Bateson, ecologist G. Evelyn Hutchinson,
psychologist Kurt Lewin, philosopher Donald Marquis, neurophysiologist Warren
McCulloch, cultural anthropologist Margaret Mead, economist Oskar Morgen-
stern, statistician Leonard Savage, physicist Heinz von Foerster, mathematician
John von Neumann, electrical engineer Claude Shannon, and mathematician
Norbert Wiener among them. The two principal organizers of the conferences
were McCulloch, a neurophysiologist working in the Research Laboratory for
Heims, Steve J. 1988. “Optimism and Faith in Mechanism among Social Scientists at the
Macy Conferences on Cybernetics, 1946–1953.” AI & Society 2: 69–78.
Heims, Steve J. 1991. The Cybernetics Group. Cambridge, MA: MIT Press.
Pias, Claus, ed. 2016. The Macy Conferences, 1946–1953: The Complete Transactions.
Zürich, Switzerland: Diaphanes.
McCarthy, John (1927–2011)
John McCarthy was an American computer scientist and mathematician best
known for helping to establish the field of artificial intelligence in the late 1950s
and for championing the use of formal logic in artificial intelligence research. A
prolific thinker, McCarthy made contributions to programming languages and
operating systems research, earning him numerous awards. However, artificial
intelligence, and what he termed “formalizing common sense,” remained the pri-
mary research focus throughout McCarthy’s life (McCarthy 1990).
McCarthy first encountered the ideas that would lead him to AI as a graduate
student at the 1948 Hixon symposium on “Cerebral Mechanisms in Behavior.”
The symposium was held at the California Institute of Technology, where McCar-
thy had recently completed his undergraduate work and enrolled in a graduate
program in mathematics. By 1948, machine intelligence had become a topic of
considerable scholarly attention in the United States under the broad label of
cybernetics, and several prominent cyberneticists were in attendance at the sym-
posium, including Princeton mathematician John von Neumann. A year later,
McCarthy transferred to the Princeton mathematics department, where he shared
some early thoughts inspired by the symposium with von Neumann. Despite von
Neumann’s encouragement, McCarthy never published the work, deciding that
cybernetics could not answer his questions about human knowledge.
At Princeton, McCarthy completed a dissertation on partial differential equa-
tions. After graduating in 1951, he remained at Princeton as an instructor, and in
summer 1952, he had the opportunity to work at Bell Labs with cyberneticist and
founder of information theory Claude Shannon, whom he convinced to collabor-
ate with him on an edited collection of essays on machine intelligence. The contri-
butions to Automata Studies covered a range of disciplines from pure mathematics
to neurology. To McCarthy, however, the published works were not sufficiently
focused on the crucial question of how to build intelligent machines.
In 1953, McCarthy took a job in the mathematics department at Stanford, but
he was let go just two years later, perhaps, he conjectured, because he spent too
much time thinking about intelligent machines and not enough on his mathemat-
ical research. He next took a job at Dartmouth in 1955, as IBM was in the process
of establishing the New England Computation Center at MIT. The New England
Computation Center provided access to an IBM computer, installed at MIT, and
made available to a collection of New England universities, including Dartmouth.
Through the IBM initiative, McCarthy met IBM researcher Nathaniel Rochester,
who brought McCarthy to IBM in the summer of 1955 to work with his research
group. There, McCarthy convinced Rochester of the need for further work on
machine intelligence, and together with Rochester, Shannon, and Marvin Minsky,
one person at a time. From his first encounter with computers at IBM in 1955,
McCarthy recognized the need for multiple users across a large organization,
such as a university or hospital, to be able to access the organization’s computer
systems simultaneously from computer terminals in their offices. At MIT,
McCarthy advocated for research on such systems, becoming part of a university
committee exploring the topic and eventually helping to initiate work on MIT’s
Compatible Time-Sharing System (CTSS). Although McCarthy would leave
MIT before the CTSS work was complete, his advocacy, while a consultant at
Bolt Beranek and Newman in Cambridge, with J.C.R. Licklider, future office
head at the Advanced Research Projects Agency, the predecessor of DARPA,
was instrumental in helping MIT secure significant federal support for comput-
ing research.
In 1962, Stanford Professor George Forsythe invited McCarthy to join what
would become the second department of computer science in the United States,
after Purdue's. McCarthy insisted he would go only as a full professor, a
demand he thought would be more than Forsythe could manage for an early career
researcher. Forsythe was able to convince Stanford to approve McCarthy’s full
professorship, and so he left for Stanford, where he would set up the Stanford AI
laboratory in 1965.
McCarthy oversaw research at Stanford on AI topics such as robotics, expert
systems, and chess until his retirement in 2000. The child of parents who had
been active members of the Communist party, McCarthy had a lifelong interest
in Russian affairs. Having taught himself Russian, he maintained many profes-
sional contacts with cybernetics and AI researchers in the Soviet Union, travel-
ing and teaching there in the mid-1960s, and even organizing a chess match in
1965 between a Stanford chess program and a Russian counterpart, which the
Russian program won. While at Stanford, he developed numerous foundational
concepts in the theory of symbolic AI such as that of circumscription, which
expresses the idea that a computer must be allowed to make reasonable assump-
tions about problems presented to it, otherwise even simple scenarios would
need to be specified in such exacting logical detail as to make the task all but
impossible.
Although the methods McCarthy pioneered have fallen out of favor in contem-
porary AI research, his contributions have been recognized with numerous
awards, including the 1971 Turing Award, the 1988 Kyoto Prize, a 1989 induction
into the National Academy of Sciences, the 1990 National Medal of Science,
and the 2003 Benjamin Franklin Medal. McCarthy was a prolific thinker who
constantly envisioned new technologies, from a space elevator for cheaply moving
matter into orbit to a system of carts suspended from wires meant to improve
transportation in urban areas. Yet, when asked during a 2008 interview what he
thought the most important questions in computing today were, McCarthy
responded without hesitation, “Formalizing common sense,” the same project that
had motivated him from the very beginning.
Evan Donahue
Further Reading
Hayes, Patrick J., and Leora Morgenstern. 2007. “On John McCarthy’s 80th Birthday, in
Honor of His Contributions.” AI Magazine 28, no. 4 (Winter): 93–102.
McCarthy, John. 1990. Formalizing Common Sense: Papers, edited by Vladimir Lif-
schitz. Norwood, NJ: Albex.
Morgenstern, Leora, and Sheila A. McIlraith. 2011. “John McCarthy’s Legacy.” Artificial
Intelligence 175, no. 1 (January): 1–24.
Nilsson, Nils J. 2012. “John McCarthy: A Biographical Memoir.” Biographical Memoirs
of the National Academy of Sciences. http://www.nasonline.org/publications
/biographical-memoirs/memoir-pdfs/mccarthy-john.pdf.
medical knowledge is often complex and imprecise. Fuzzy systems are capable of
recognizing, interpreting, manipulating, and using vague information for various
purposes. Today, fuzzy logic systems are used to predict a wide range of outcomes
for patients, such as those suffering from lung cancer and melanoma. They have
also been used to develop treatments for critically ill patients.
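A minimal sketch of the underlying idea follows: instead of a crisp yes/no, a reading belongs to a category to a degree between 0 and 1. The "fever" membership function and its breakpoints below are invented values used only for illustration, not a clinical rule.

# A toy fuzzy membership function for the concept "fever".
def fever_membership(temp_c):
    """Degree (0..1) to which a body temperature counts as 'fever'."""
    if temp_c <= 37.0:
        return 0.0
    if temp_c >= 39.0:
        return 1.0
    return (temp_c - 37.0) / 2.0     # linear ramp between the breakpoints

for t in (36.8, 37.8, 38.6, 39.5):
    print(t, round(fever_membership(t), 2))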
Evolutionary computation involves algorithms inspired by natural evolutionary processes. These algorithms solve problems by improving candidate solutions through trial and error. They generate an initial set of solutions and, with each successive generation, make small random changes to the candidates and remove unsuccessful intermediate solutions. The solutions can be said to be subjected to mutation and a type of natural selection, and the result is a population of solutions that gradually evolves as its fitness increases. While many variants of these programs exist, the most prominent type used in the context of medicine is the genetic algorithm. These were first developed by John
Holland in the 1970s, and they utilize basic evolutionary structures to formulate
solutions in complex contexts, such as clinical settings. They are used to perform
a wide range of clinical tasks, including diagnosis, medical imaging, scheduling,
and signal processing.
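The generate, mutate, and select loop can be sketched compactly. The encoding (bit strings) and the toy fitness function below are arbitrary illustrations of the general technique, not a clinical application.

# A compact genetic-algorithm loop maximizing a toy fitness function.
import random

random.seed(0)
TARGET_LEN = 20

def fitness(bits):
    return sum(bits)                     # toy objective: count of 1s

def mutate(bits, rate=0.05):
    return [b ^ 1 if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(30)]        # initial set of candidate solutions

for generation in range(50):
    # Keep the fitter half (selection), then refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

population.sort(key=fitness, reverse=True)
print(fitness(population[0]), population[0])

Real genetic algorithms usually add crossover (recombining two parents) and a problem-specific fitness function, but the evolve-and-select structure is the same.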
Hybrid intelligent systems are AI technologies that combine more than one sys-
tem to capitalize on the strengths of the techniques described above. Hybrid sys-
tems are better able to mimic humanlike reasoning and adapt to changing
environments. As with the individual AI technologies described above, these sys-
tems are being used in a wide range of clinical settings. They are currently used to
diagnose breast cancer, assess myocardial viability, and analyze digital
mammograms.
Samantha Noll
See also: Clinical Decision Support Systems; Computer-Assisted Diagnosis; MYCIN;
Precision Medicine Initiative.
Further Reading
Baeck, Thomas, David B. Fogel, and Zbigniew Michalewicz, eds. 1997. Handbook of
Evolutionary Computation. Boca Raton, FL: CRC Press.
Eiben, Agoston, and Jim Smith. 2003. Introduction to Evolutionary Computing. Berlin:
Springer-Verlag.
Patel, Jigneshkumar L., and Ramesh K. Goyal. 2007. “Applications of Artificial Neural
Networks in Medical Science.” Current Clinical Pharmacology 2, no. 3: 217–26.
Ramesh, Anavai N., Chandrasekhar Kambhampati, John R. T. Monson, and Patrick J.
Drew. 2004. “Artificial Intelligence in Medicine.” Annals of the Royal College of
Surgeons of England 86, no. 5: 334–38.
Minsky, Marvin (1927–2016)
Donner Professor of Science Marvin Minsky was a well-known American cogni-
tive scientist, inventor, and artificial intelligence investigator. He cofounded the
Artificial Intelligence Laboratory in the 1950s and the Media Lab in the 1980s at
the Massachusetts Institute of Technology. Such was his fame that, while serving
as advisor to the 1960s classic Stanley Kubrick film 2001: A Space Odyssey, the
sleeping astronaut Dr. Victor Kaminski (killed by the HAL 9000 sentient com-
puter) was named in his honor.
Minsky became interested in intelligence, thinking, and learning machines at
the end of high school in the 1940s. As an undergraduate at Harvard, he showed
interest in neurology, physics, music, and psychology. He worked with cognitive
psychologist George Miller on problem-solving and learning theories, and with J.
C. R. Licklider, professor of psychoacoustics and later father of the internet, on
perception and brain modeling theories. While at Harvard, Minsky began think-
ing about theories of the mind. “I imagined that the brain was composed of little
relays—the neurons—and each of them had a probability attached to it that would
govern whether the neuron would conduct an electric pulse,” he later remembered.
“This scheme is now known technically as a stochastic neural network” (Bern-
stein 1981). This theory is similar to Hebbian theory, set out in The Organization of Behavior (1949) by Donald Hebb. He completed an undergraduate thesis on
topology in the mathematics department.
As a graduate student at Princeton University, Minsky studied mathematics but
became increasingly interested in trying to create artificial neurons from vacuum
tubes such as those described in Warren McCulloch and Walter Pitts’s famous
1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity.” He
imagined that such a machine might be able to negotiate mazes like a rat. He built
the machine, dubbed SNARC (Stochastic Neural-Analog Reinforcement Calcula-
tor), with the help of fellow Princeton student Dean Edmonds in the summer of
1951 with Office of Naval Research funding. The machine contained 300 tubes
and several electric motors and clutches. The machine used the clutches to adjust
its own knobs, making it a learning machine. The electric rat moved randomly at
first, but then, by reinforcement of probabilities, it learned how to make better
choices and achieve a desired goal. The maze eventually contained multiple rats
that learned from each other. In his doctoral thesis, Minsky established a second
memory for his hard-wired neural network, which helped the rat remember what
the stimulus had been. This allowed the machine to search its memory when con-
fronted with a new situation and predict the best appropriate course of action. At
the time Minsky had hoped that, with enough memory loops, his self-organizing
random networks might spontaneously lead to emergence of conscious intelli-
gence. Minsky completed his dissertation on “Neural Nets and the Brain Model
Problem” in 1954.
Minsky continued to think about how to create an artificial intelligence after
graduation from Princeton. With John McCarthy, Nathaniel Rochester, and Claude
Shannon, he organized and participated in the Dartmouth Summer Research Pro-
ject on Artificial Intelligence in 1956. The Dartmouth workshop is often described
as the formative event in artificial intelligence research. During the summer work-
shop, Minsky began simulating the computational process of proving Euclid’s
geometric theorems, using pieces of paper because no computer was available. He
realized that he could design an imaginary machine to find proofs without telling
the machine exactly what needed to be done. Minsky showed the results to
Nathaniel Rochester, who returned to his job at IBM and asked a new physics
separate mental agents and their interactions—rather than some basic principle or
universal method. In the book, which is composed of 270 original essays, he dis-
cussed concepts of consciousness, self, free will, memory, genius, language,
brainstorming, learning, and many more. Agents, in Minsky's view,
require no mind or thinking and feeling abilities of their own. They are not smart.
But together, as a society, they produce what we experience as human intelligence.
In other words, knowing how to accomplish any specific objective requires the
effort of multiple agents. Minsky’s robot builder needs agents to see, move, find,
grasp, and balance blocks. “I like to think that this project,” he wrote, “gave us
glimpses of what happens inside certain parts of children’s minds when they learn
to ‘play’ with simple toys” (Minsky 1986, 29).
Minsky suggested that there might be over one hundred agents working together
to produce what is known as mind. He extended his ideas on Society of Mind in
the book The Emotion Machine (2006). Here he made the argument that emotions are
not a different kind of thinking. Rather, they represent ways to think about differ-
ent types of problems that minds encounter in the world. Minsky argued that the
mind switches between different ways to think, thinks on many levels, finds
diverse ways to represent things, and builds manifold models of ourselves.
In his later years, Minsky commented through his writings and interviews on a
wide range of popular and noteworthy topics related to artificial intelligence and
robotics. The Turing Option (1992), a novel written by Minsky in collaboration
with science fiction author Harry Harrison, grapples with problems of superintel-
ligence in the year 2023. In 1994, he penned a piece for Scientific American
entitled “Will Robots Inherit the Earth?” to which he answered “Yes, but they will
be our children” (Minsky 1994, 113).
Minsky once speculated that a superintelligent AI might one day trigger a Rie-
mann Hypothesis Catastrophe, in which an agent tasked with the goal of solving
the hypothesis takes over all of earth’s resources to acquire ever more supercom-
puting power. He didn’t view this possibility as very likely. Minsky believed that
it might be possible for humans to communicate with intelligent extraterrestrial
life forms. They would think like humans because they would be subject to the
same “limitations on space, time, and materials” (Minsky 1987, 117). Minsky was
also a critic of the Loebner Prize, the world’s oldest Turing Test-like competition,
saying that the contest is unhelpful to the field of artificial intelligence. He coun-
tered with his own Minsky Loebner Prize Revocation Prize to anyone who could
stop Hugh Loebner’s annual competition. Minsky and Loebner both died in 2016;
the Loebner Prize contest continues.
Minsky also invented the confocal microscope (1957) and the head-mounted
display or HMD (1963). He won the Turing Award (1969), the Japan Prize (1990),
and the Benjamin Franklin Medal (2001). Minsky advised many doctoral students
who became influential leaders in computer science, including Daniel Bobrow
(operating systems), K. Eric Drexler (molecular nanotechnology), Carl Hewitt
(mathematics and philosophy of logic), Danny Hillis (parallel computing), Benja-
min Kuipers (qualitative simulation), Ivan Sutherland (computer graphics), and
Patrick Winston (who succeeded Minsky as director of the MIT AI Lab).
Philip L. Frana
See also: AI Winter; Chatbots and Loebner Prize; Dartmouth AI Conference; 2001:
A Space Odyssey.
Further Reading
Bernstein, Jeremy. 1981. “Marvin Minsky’s Vision of the Future.” New Yorker, December
7, 1981. https://www.newyorker.com/magazine/1981/12/14/a-i.
Minsky, Marvin. 1986. The Society of Mind. London: Picador.
Minsky, Marvin. 1987. “Why Intelligent Aliens Will Be Intelligible.” In Extraterrestri-
als: Science and Alien Intelligence, edited by Edward Regis, 117–28. Cambridge,
UK: Cambridge University Press.
Minsky, Marvin. 1994. “Will Robots Inherit the Earth?” Scientific American 271, no. 4
(October): 108–13.
Minsky, Marvin. 2006. The Emotion Machine. New York: Simon & Schuster.
Minsky, Marvin, and Seymour Papert. 1969. Perceptrons: An Introduction to Computa-
tional Geometry. Cambridge, MA: Massachusetts Institute of Technology.
Singh, Push. 2003. “Examining the Society of Mind.” Computing and Informatics 22,
no. 6: 521–43.
Mobile Recommendation Assistants
conversation. One current example of the use of facial expression can be found in
the automotive company NIO’s virtual robot assistant called Nome. Nome is a
digital voice assistant embodied in a spherical housing equipped with an LCD
screen that sits atop the center dashboard of NIO’s ES8. It can mechanically turn
its “head” to attend to different speakers and uses facial expressions to express
emotions. Another example is the commercial Jibo home robot created by MIT's Dr. Cynthia Breazeal, which leverages anthropomorphism through paralinguistic methods.
Less anthropomorphic uses of kinesics can be seen in the graphical user inter-
face elements on Apple’s Siri or in illumination arrays such as those on Amazon
Alexa’s physical interface Echo or in Xiami’s Xiao AI, where motion graphics or
lighting animations are used to communicate states of communication such as
listening, thinking, speaking, or waiting.
The increasing intelligence and accompanying anthropomorphism (or in some
cases zoomorphism or mechano-morphism) can raise some ethical concerns
related to user experience. The desire for more anthropomorphic systems stems
from the beneficial user experience of humanlike agentic systems whose commu-
nicative behaviors more closely match familiar interactions such as conversation
made possible through natural language and paralinguistics. The primary advan-
tage of natural conversation systems is that they do not require a user to learn a
new grammar or semantics in order to effectively communicate commands and
desires. A user’s familiar mental model of communication, learned through
engaging with other humans, is applicable to these more anthropomorphic human-machine interfaces.
However, as machine systems more closely approximate human-to-human
interaction, transparency and security become issues where a user’s inferences
about a machine’s behavior are informed by human-to-human communication.
The establishing of comfort and rapport can occlude the ways in which virtual
assistant cognition, and inferred motivation, is unlike human cognition. In terms
of cognition (the assistant’s intelligence and perceptual capacities), many sys-
tems may be equipped with motion sensors, proximity sensors, cameras, micro-
phones, etc. which approximate, emulate, or even exceed human capacities.
While these facilitate some humanlike interaction through improved perception
of the environment, they can also be used for recording, documenting, analyz-
ing, and sharing information in ways that may be opaque to a user when neither their mental model nor the machine's interface communicates the machine's operation at a functional level. For example, a digital assistant's visual avatar may close its eyes, or disappear, after a user interaction, but there is no necessary association between that behavior and the microphone's and camera's ability to keep recording.
Thus, data privacy concerns are becoming more salient, as digital assistants are
increasingly integrated into the everyday lives of human users. Where specifica-
tions, manufacturer data collection objectives, and machine behaviors are poten-
tially misaligned with users' expectations, transparency becomes a key issue to be
addressed.
Finally, security becomes an issue when it comes to data storage, personal
information, and sharing practices, as hacking, misinformation, and other forms
MOLGEN
Developed between 1975 and 1980, MOLGEN is an expert system that aided
molecular biologists and geneticists in designing experiments. It was the third
expert system designed by Edward Feigenbaum’s Heuristic Programming Project
(HPP) at Stanford University (after DENDRAL and MYCIN). Additionally, like
MYCIN before it, MOLGEN gained hundreds of users beyond Stanford. In the
1980s, MOLGEN was first made available through time-sharing on the GENET
network for artificial intelligence researchers, molecular biologists, and geneti-
cists. By the late 1980s, Feigenbaum had established the company IntelliCorp to sell a
stand-alone software version of MOLGEN.
In the early 1970s, scientific breakthroughs related to chromosomes and genes
had generated an information explosion. Stanford University biochemist Paul
Berg conducted the first experiments in gene splicing in 1971. Two years later,
Stanford geneticist Stanley Cohen and University of California at San Francisco
biochemist Herbert Boyer successfully inserted recombinant DNA into an organ-
ism; the host organism (a bacterium) then naturally reproduced the foreign rDNA
structure in its own offspring. These advances led Stanford molecular biologist
Joshua Lederberg to tell Feigenbaum that it was an opportune moment to develop
an expert system in Lederberg’s own field of molecular biology. (Lederberg and
Feigenbaum had previously joined forces on the first expert system DENDRAL.)
The two agreed that what DENDRAL had done for mass spectrometry, MOLGEN
could do for recombinant DNA research and genetic engineering. Indeed, both
expert systems were developed for emerging scientific fields. This allowed MOL-
GEN (and DENDRAL) to incorporate the most recent scientific knowledge and make contributions to their respective fields' further development.
Feigenbaum was MOLGEN’s principal investigator at HPP, with Mark Stefik
and Peter Friedland developing programs for it as their thesis project. The idea
was to have MOLGEN follow a “skeletal plan” (Friedland and Iwasaki 1985,
161). Mimicking a human expert, MOLGEN planned a new experiment by start-
ing from a design procedure that proved successful for a similar problem in the
past. MOLGEN then modified the plan in a hierarchical stepwise manner. The
combination of skeletal plans and MOLGEN’s extensive knowledge base in
molecular biology gave the system the ability to select the most promising new
experiments. By 1980, MOLGEN had incorporated 300 lab methods and strat-
egies as well as current data on forty genes, phages, plasmids, and nucleic acid
structures. Drawing on the molecular biological expertise of Douglas Brutlag,
Larry Kedes, John Sninsky, and Rosalind Grymes of Stanford University, Fried-
land and Stefik provided MOLGEN with a suite of programs. These included
SEQ (for nucleic acid sequence analysis), GA1 (later called MAP, to generate
enzyme maps of DNA structures), and SAFE (for selecting enzymes most suit-
able for gene excision).
MOLGEN was made accessible to the molecular biology community outside of
Stanford beginning in February 1980. The system was connected to SUMEX-
AIM (Stanford University Medical Experimental computer for Artificial Intelli-
gence in Medicine) under an account called GENET. GENET quickly found
hundreds of users across the United States. Frequent visitors included members of
academic laboratories, scientists at commercial giants such as Monsanto, and
researchers at small start-ups such as Genentech.
The National Institutes of Health (NIH), the principal sponsor of SUMEX-
AIM, eventually decided that corporate users could not be granted free access to
cutting-edge technology developed with public funding. Instead, the NIH encour-
aged Feigenbaum, Brutlag, Kedes, and Friedland to establish IntelliGenetics for
corporate biotech users. With the help of a five-year NIH grant of $5.6 million,
IntelliGenetics developed BIONET to offer MOLGEN and other programs on
GENET for sale or rent. By the end of the 1980s, 900 laboratories were accessing
BIONET all over the world for an annual fee of $400.
IntelliGenetics also offered a software package for sale to companies that did
not want to put their data on BIONET. MOLGEN’s software did not sell well as a
stand-alone package until the mid-1980s, when IntelliGenetics removed its genet-
ics content and kept only its underlying Knowledge Engineering Environment
(KEE). The AI part of IntelliGenetics that sold this new KEE shell changed its
name to IntelliCorp. Two public offerings followed, but eventually growth leveled
out again. Feigenbaum conjectured that the commercial success of MOLGEN’s
shell was hindered by its implementation in LISP; although preferred by pioneering computer scientists working on mainframe computers, LISP did not generate similar
interest in the corporate minicomputer world.
Elisabeth Van Meer
See also: DENDRAL; Expert Systems; Knowledge Engineering.
Further Reading
Feigenbaum, Edward. 2000. Oral History. Minneapolis, MN: Charles Babbage Institute.
Friedland, Peter E., and Yumi Iwasaki. 1985. “The Concept and Implementation of
Skeletal Plans.” Journal of Automated Reasoning 1: 161–208.
Friedland, Peter E., and Laurence H. Kedes. 1985. “Discovering the Secrets of DNA.”
Communications of the ACM 28 (November): 1164–85.
Lenoir, Timothy. 1998. “Shaping Biomedicine as an Information Science.” In Proceed-
ings of the 1998 Conference on the History and Heritage of Science Information
Systems, edited by Mary Ellen Bowden, Trudi Bellardo Hahn, and Robert V.
Williams, 27–46. Pittsburgh, PA: Conference on the History and Heritage of Sci-
ence Information Systems.
Watt, Peggy. 1984. “Biologists Map Genes On-Line.” InfoWorld 6, no. 19 (May 7): 43–45.
Monte Carlo
Monte Carlo is a method for solving complex problems by performing many runs of a nondeterministic simulation driven by a random number generator.
Deterministic methods solve equations or systems of equations to arrive at a fixed
solution, and every time the calculation is run, it will result in the same solution.
By contrast, in Monte Carlo methods a random number generator is used to choose
different paths, resulting in a variable solution each time. Monte Carlo methods
are used when the deterministic equations are not known, when there are a large
number of variables, and especially for problems that are probabilistic in nature.
Examples of problems that commonly use Monte Carlo methods are games of
chance, nuclear simulations, problems with quantum effects, and weather fore-
casting. In an artificial intelligence context, Monte Carlo methods are commonly
used in machine learning and memory simulations to provide more robust answers
and to represent, for example, how memory varies. Because each Monte Carlo
simulation results in one possible outcome, the simulation must be run hundreds
to millions of times to create a probability distribution, which is the solution to the
overall problem. Monte Carlo methods can be considerably more computationally intensive than deterministic methods.
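A small sketch makes the idea concrete: estimate a probability by running a random simulation many times and averaging the outcomes. The particular "problem" here, the chance of rolling a total of at least 10 with two dice, is chosen purely for illustration.

# Estimate a probability by repeated random trials.
import random

random.seed(42)

def trial():
    return random.randint(1, 6) + random.randint(1, 6) >= 10

runs = 100_000
hits = sum(trial() for _ in range(runs))
print("estimated probability:", hits / runs)   # converges on 6/36, about 0.167

With more runs, the estimate clusters ever more tightly around the true value, which is why Monte Carlo answers are reported as distributions rather than single exact solutions.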
Monte Carlo is used commonly in AI for game applications, such as checkers,
chess, and Go. At each step, these games (especially Go) have a very large number
of possible moves. A technique called Monte Carlo tree search is used, which uses
the MC method to repeatedly play the game, making a random move at each step.
Eventually, the AI system learns the best moves for a particular game situation.
Monte Carlo tree search AIs have very good track records and regularly beat other
AI game algorithms.
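The random-playout idea behind such game programs can be sketched as follows. This is a hedged illustration of "flat" Monte Carlo move selection, in which each legal move is scored by playing many random games to completion; full Monte Carlo tree search adds a selective tree policy (such as UCT) on top of this. The Game interface used here (legal_moves, play, random_playout) is hypothetical.

# Choose a move by averaging the results of random playouts.
import random

def choose_move(game, playouts_per_move=200):
    best_move, best_score = None, float("-inf")
    for move in game.legal_moves():
        wins = 0
        for _ in range(playouts_per_move):
            simulation = game.play(move)         # copy of the game after `move`
            wins += simulation.random_playout()  # 1 if the mover eventually wins
        score = wins / playouts_per_move
        if score > best_score:
            best_move, best_score = move, score
    return best_move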
Mat Brener
See also: Emergent Gameplay and Non-Player Characters.
Further Reading
Andrieu, Christophe, Nando de Freitas, Arnaud Doucet, and Michael I. Jordan. 2003. “An
Introduction to MCMC for Machine Learning.” Machine Learning 50: 5–43.
Eckhard, Roger. 1987. “Stan Ulam, John von Neumann, and the Monte Carlo Method.”
Los Alamos Science 15 (Special Issue): 131–37.
Fu, Michael C. 2018. “Monte Carlo Tree Search: A Tutorial.” In Proceedings of the 2018
Winter Simulation Conference, edited by M. Rabe, A. A. Juan, N. Mustafee, A.
Skoogh, S. Jain, and B. Johansson, 222–36. Piscataway, NJ: IEEE.
Moral Turing Test
Further Reading
Arnold, Thomas, and Matthias Scheutz. 2016. “Against the Moral Turing Test: Account-
able Design and the Moral Reasoning of Autonomous Systems.” Ethics and Infor-
mation Technology 18:103–15.
Gerdes, Anne, and Peter Øhrstrøm. 2015. “Issues in Robot Ethics Seen through the Lens
of a Moral Turing Test.” Journal of Information, Communication, and Ethics in
Society 13, no. 2: 98–109.
Luxton, David D., Susan Leigh Anderson, and Michael Anderson. 2016. “Ethical Issues
and Artificial Intelligence Technologies in Behavioral and Mental Health Care.”
In Artificial Intelligence in Behavioral and Mental Health Care, edited by David
D. Luxton, 255–76. Amsterdam: Elsevier Academic Press.
Moravec, Hans (1948–)
Hans Moravec is renowned in computer science circles as the long-time director
of the Robotics Institute at Carnegie Mellon University and an unabashed techno-
logical optimist. He has researched and built robots imbued with artificial intelli-
gence in the CMU lab for twenty-five years, where he remains an adjunct faculty
member. Before Carnegie Mellon, Moravec worked for almost ten years as a
research assistant in the pathbreaking Artificial Intelligence Lab at Stanford
University.
Moravec is also well known for Moravec’s paradox, an assertion that, contrary
to conventional wisdom, it is easy to program high-level reasoning capabilities
into robots—as with playing chess or Jeopardy!—but hard to impart sensorimo-
tor agility. Human sensory and motor skills evolved over millions of years and,
despite their complexity, appear effortless. Higher level intellectual skills, how-
ever, are the product of more recent cultural evolution. These would include geom-
etry, stock market analysis, and petroleum engineering—difficult subjects for
humans but more easily acquired by machines. As Steven Pinker paraphrases Moravec's insight: "The main lesson of thirty-five years of AI research is
that the hard problems are easy, and the easy problems are hard” (Pinker 2007,
190–91).
Moravec constructed his first toy robot from scrap metal at age ten and won
two high school science fair prizes for his light-following electronic turtle and a
robot hand controlled by punched paper tape. While still in high school, he pro-
posed a Ship of Theseus-like analogy for the practicability of artificial brains.
Imagine, he suggested, replacing a person’s human neurons with perfectly
machined substitutes one by one. At what point would human consciousness dis-
appear? Would anyone notice? Could it be proved that the individual was no longer
human? Later in his career, Moravec would argue that human expertise and train-
ing could be broken down in the same way, into subtasks that could be taken over
by separate machine intelligences.
Moravec’s master’s thesis involved the creation of a computer language for arti-
ficial intelligence, and his doctoral research involved a robot with the ability to
maneuver through obstacle courses using spatial representation techniques. These
robot vision systems operated by identifying the region of interest (ROI) in a
scene. By contemporary standards, Moravec’s early robots with computer vision
were painfully slow, traversing from one side of the lab to another in about five
hours. An external computer painstakingly processed continuous video-camera
imagery captured by the robot from different angles, in order to estimate distance
and build an internal representation of physical obstacles in the room. Moravec
eventually invented 3D occupancy grid technology, which made it possible for a
robot to build an awareness of a room crowded with objects in a matter of
seconds.
Moravec's lab took on a new challenge: turning a Pontiac Trans Sport minivan into one of the very first roadworthy autonomous vehicles. The driverless
minivan operated at speeds up to 60 miles per hour. The CMU Robotics Institute
also created DANTE II, a robot capable of walking on eight artificial spider legs
into the crater of the active volcano on Mount Spurr in Alaska. While the immedi-
ate goal for DANTE II was to sample dangerous fumarole gases, a task too dan-
gerous for people, it was also designed to prove out technology for robotic missions
to other planets. Artificial intelligence allowed the volcano explorer robot to navi-
gate the treacherous, boulder-strewn terrain on its own. Moravec would say that
experience with mobile robotics forced the development of advanced artificial
intelligence and computer vision techniques, because such rovers generated so
much visual and other sensory data that had to be processed and controlled.
Moravec’s team invented fractal branching ultra-dexterous robots (“Bush
robots”) for the National Aeronautics and Space Administration (NASA) in the
1990s. These robots, designed but not built because the enabling fabrication tech-
nologies did not yet exist, consisted of a branched hierarchy of dynamic articu-
lated limbs, beginning with a large trunk and dividing down through branches of
smaller size. The Bush robot would thus have “hands” at all scales arranged from
the macroscopic to the microscopic. The smallest fingers would be nanoscale in
size and able to grasp extraordinarily small things. Because of the complexity
involved in moving millions of fingers in real time, Moravec believed the robot
would require autonomy and rely on artificial intelligence agents distributed
throughout the robot’s limbs and twigs. He speculated that the robots might even-
tually be manufactured out of carbon nanotube material using rapid-prototyping
technology we now call 3D printers.
Moravec has argued that the impact of artificial intelligence on human society
will be great. To emphasize the influence of AI in transformation, he developed
the metaphor of the “landscape of human competence,” since then turned into a
graphic visualization by physicist Max Tegmark. Moravec’s illustration imagines
a three-dimensional landscape where higher elevations represent harder tasks rel-
ative to how difficult they are for human beings. The location where the rising
seas met the coast represents the line where machines and humans find the tasks
equally difficult. Art, science, and literature lie comfortably out of reach of an AI
currently, but arithmetic, chess, and the game Go are already conquered by the
sea. At the shoreline are language translation, autonomous driving, and financial
investment.
More controversially, Moravec engaged in futuristic speculation based on what
he knew of progress in artificial intelligence research in two popular books: Mind
Children (1988) and Robot: Mere Machine to Transcendent Mind (1999). He
Musk, Elon (1971–)
Elon Musk is a South African-born engineer, entrepreneur, and inventor. He main-
tains South African, Canadian, and United States citizenships and lives in Cali-
fornia. Although a controversial character, Musk is widely regarded as one of the
most prominent inventors and engineers of the twenty-first century and an import-
ant influencer and contributor to the development of artificial intelligence.
Musk’s entrepreneurial leanings and unusual aptitude for technology were evi-
dent from childhood. He was a self-taught computer programmer by age ten, and
by age twelve, he had created a video game and sold its code to a computer maga-
zine. An avid reader since childhood, Musk has incorporated references to some
of his favorite books in SpaceX’s Falcon Heavy rocket launch and in Tesla’s
software.
Musk’s formal education focused not on engineering, but rather on economics
and physics—interests that are reflected in Musk’s later work, including his
endeavors in sustainable energy and space travel. He attended Queen’s University
in Canada, but transferred to the University of Pennsylvania, where he earned a
bachelor’s degree in Economics and a bachelor’s degree in Physics. Musk pursued
a PhD in energy physics at Stanford University for only two days, leaving the uni-
versity to launch his first company, Zip2, with his brother Kimbal Musk.
Propelled by his many interests and ambitions, Musk has founded or cofounded
multiple companies, including three separate billion-dollar companies: SpaceX,
Tesla, and PayPal.
• Zip2: a web software company, later acquired by Compaq
• X.com: an online bank, which following merger activity later became the
online payments company PayPal
subsidiaries, Tesla Grohmann Automation and Solar City, provide related automo-
tive technology and manufacturing and solar energy services, respectively.
Musk predicts that Tesla will achieve Level 5 autonomous driving functionality, as defined in the levels of driving automation used by the U.S. Department of Transportation's National Highway Traffic Safety Administration (NHTSA), in 2019. Tesla's ambitious progress with autonomous driving has impacted traditional car manufacturers' positions on electric vehicles and autonomous driving and has
sparked Congressional review about how and when the technology should be reg-
ulated. Highlighting the advantages of autonomous vehicles (including reduced
fatalities in vehicle crashes, increased worker productivity, increased transport
efficiency, and job creation), and proving that the technology is achievable in the
near term, Musk is widely credited as a key influencer moving the automotive
industry toward autonomous driving.
Under the direction of Musk and Tesla’s Director of AI, Andrej Karpathy, Tesla
has developed and advanced its autonomous driving programming (Autopilot).
Tesla’s computer vision analysis, including an array of cameras on each vehicle,
combined with real-time processing of the images, allows the system to make
real-time observations and predictions. The cameras, and other external and inter-
nal sensors, collect vast amounts of data, which are analyzed and used to further
refine the Autopilot programming. Tesla is unique among autonomous vehicle
manufacturers in its aversion to the laser sensor known as LIDAR (an acronym for
light detection and ranging). Instead, Tesla relies on cameras, radar, and ultrasonic
sensors. Though experts and manufacturers are split on whether LIDAR is a
requirement for full autonomous driving, the high cost of LIDAR has hindered
Tesla’s competitors’ ability to make and sell cars at a price point that will allow a
high volume of cars on the road collecting data.
In addition to Tesla’s AI programming, Tesla is developing its own AI hard-
ware. In late 2017, Musk confirmed that Tesla is developing its own silicon for
performing artificial-intelligence computations, which will allow Tesla to create
its own AI chips, no longer relying on third-party providers such as Nvidia.
Tesla’s progress with AI in autonomous driving has not been without setbacks.
Tesla has repeatedly failed to meet self-imposed deadlines, and serious accidents
have been attributed to deficiencies in the vehicle’s Autopilot mode, including a
noninjury accident in 2018, in which the vehicle failed to detect a parked firetruck
on a California freeway, and a fatal accident also in 2018, in which the vehicle
failed to detect a pedestrian outside a crosswalk.
Musk founded the company Neuralink in 2016. Neuralink is focused on
developing devices that can be implanted into the human brain, to better allow
communication between the brain and software, with the stated goal of allowing
humans to keep pace with AI advancements. Musk has described the devices in
terms of a more efficient interface with computing devices; that is, where humans
now use their fingers and voice commands to control devices, commands would
instead come directly from the brain.
Though Musk’s contributions to AI have been significant, his statements about
the associated dangers of AI have bordered on apocalyptic. Musk has referred to
AI as “humanity’s biggest existential threat” (McFarland 2014) and “the greatest
risk we face as a civilization” (Morris 2017). He warns about the dangers of con-
centration of power, lack of independent oversight, and a competition-driven rush
to adoption without adequate consideration of the consequences. While Musk has
invoked colorful language like “summoning the demon” (McFarland 2014) and
images of cyborg overlords, he also warns of more immediate and relatable risks,
including job losses and AI-driven disinformation campaigns.
Though Musk’s comments often come across as alarmist, his anxiety is shared
by many prominent and well-respected minds, including Microsoft cofounder Bill
Gates, the Swedish-American physicist Max Tegmark, and the late theoretical
physicist Stephen Hawking. Further, Musk does not advocate ending AI research. Instead, he calls for responsible AI development and regulation, including convening a Congressional committee to spend years researching AI in order to understand the technology and its associated risks before drafting appropriate regulatory controls.
Amanda K. O’Keefe
See also: Bostrom, Nick; Superintelligence.
Further Reading
Gates, Bill. (@BillGates). 2018. Twitter, June 26, 2018. https://twitter.com/BillGates
/status/1011752221376036864.
Marr, Bernard. 2018. “The Amazing Ways Tesla Is Using Artificial Intelligence and
Big Data.” Forbes, January 8, 2018. https://www.forbes.com/sites/bernardmarr
/2018/01/08/the-amazing-ways-tesla-is-using-artificial-intelligence-and-big-data/.
McFarland, Matt. 2014. “Elon Musk: With Artificial Intelligence, We Are Summoning the
Demon.” Washington Post, October 24, 2014. https://www.washingtonpost.com
/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we
-are-summoning-the-demon/.
Morris, David Z. 2017. “Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We
Face as a Civilization.’” Fortune, July 15, 2017. https://fortune.com/2017/07/15
/elon-musk-artificial-intelligence-2/.
Piper, Kelsey. 2018. “Why Elon Musk Fears Artificial Intelligence.” Vox Media, Novem-
ber 2, 2018. https://www.vox.com/future-perfect/2018/11/2/18053418/elon-musk
-artificial-intelligence-google-deepmind-openai.
Strauss, Neil. 2017. “Elon Musk: The Architect of Tomorrow.” Rolling Stone, November
15, 2017. https://www.rollingstone.com/culture/culture-features/elon-musk-the
-architect-of-tomorrow-120850/.
MYCIN
Designed by computer scientists Edward Feigenbaum (1936–) and Bruce
Buchanan at Stanford University in the 1970s, MYCIN is an interactive expert
system for infectious disease diagnosis and therapy. MYCIN was Feigenbaum’s
second expert system (after DENDRAL), but it became the first expert system to
be made commercially available as a stand-alone software package. By the
1980s, EMYCIN was the most successful expert shell sold by TeKnowledge, the
Further Reading
Cendrowska, J., and M. A. Bramer. 1984. “A Rational Reconstruction of the MYCIN
Consultation System.” International Journal of Man-Machine Studies 20 (March):
229–317.
Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.
Feigenbaum, Edward. 2000. “Oral History.” Charles Babbage Institute, October 13, 2000.
van Melle, William. 1978. “MYCIN: A Knowledge-based Consultation Program for
Infectious Disease Diagnosis.” International Journal of Man-Machine Studies 10
(May): 313–22.
N
Natural Language Generation
Natural Language Generation, or NLG, is the computational process through which forms of information that cannot be readily interpreted by humans are converted into a message optimized for human understanding; it is also the name of the subfield of artificial intelligence (AI) devoted to the study and development of such systems. The term “natural language” in computer science and AI is synonymous with what most people simply call language, the means through which people communicate with one another and, now, increasingly with computers and robots. Natural language stands in contrast to “machine language,” or programming language, which was developed for and is used to program and operate computers. The information processed by an NLG technology is some form of data, such as scores and statistics from a sporting event, and the message produced from this data can take multiple forms (text or speech), such as a news report about the game.
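A minimal, purely illustrative sketch of this data-to-message step is shown below in Python. The box score fields, the template, and the helper function are hypothetical; production NLG systems use content selection, document planning, and surface realization stages rather than a single fixed template.

    # Toy data-to-text generation: a hypothetical box score is turned into a
    # one-sentence, news-style report using a fixed template. Real NLG systems
    # are far more elaborate; this only illustrates the basic idea.
    def generate_report(game):
        margin = abs(game["home_score"] - game["away_score"])
        if game["home_score"] > game["away_score"]:
            winner, loser = game["home_team"], game["away_team"]
        else:
            winner, loser = game["away_team"], game["home_team"]
        verb = "narrowly defeated" if margin <= 3 else "defeated"
        high = max(game["home_score"], game["away_score"])
        low = min(game["home_score"], game["away_score"])
        return f"{winner} {verb} {loser} {high}-{low} on {game['date']}."

    print(generate_report({"home_team": "Rockets", "away_team": "Comets",
                           "home_score": 98, "away_score": 95, "date": "May 4"}))
    # Rockets narrowly defeated Comets 98-95 on May 4.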
The development of NLG can be traced back to the introduction of computers
in the mid-twentieth century. Entering information into early computers and then
making sense of the output was difficult, laborious, and required highly special-
ized training. Researchers and developers conceptualized these hurdles related to
the input and output of machines as problems of communication. Communication
also is key to acquiring knowledge and information and to demonstrating intelli-
gence. The solution that researchers devised was to work toward adapting the
communication of and with machines to the form of communication that was most
“natural” to humans, people’s own languages. Research regarding how machines
could make sense of human language falls under Natural Language Processing
while research regarding the generation of messages tailored toward humans is
Natural Language Generation. As in artificial intelligence more broadly, some scholars working in this area focus on developing systems that produce messages from data, while others focus on understanding the human process of language and message generation. In addition to being a subfield of artificial intelligence,
NLG also is a subfield within Computational Linguistics.
The spread of technologies for creating, collecting, and connecting large swathes of data, along with advances in computing hardware, has enabled the recent proliferation of NLG technologies. Multiple applications exist for NLG across numerous industries, such as journalism and media. Large international and national news organizations have started to integrate automated news-writing software, which utilizes NLG technology, into the production of news. Within this context, journalists use the software to develop informational reports from various datasets, producing lists of local crimes, financial earnings reports, and synopses of sporting events. NLG systems also can be used by companies and organizations to develop automated summaries of their own or outside
data. A related area of research is computational narrative and the development of
automated narrative generation systems that focus on the creation of fictional stor-
ies and characters that can have applications in media and entertainment, such as
video games, as well as education and learning.
It is expected that NLG will continue to advance so that future technologies
will be able to produce more complex and refined messages across additional con-
texts. The growth and application of NLG is relatively recent, and it is unknown
what the full impact of technologies utilizing NLG will be on individuals, organ-
izations, industries, and society. Questions currently being raised include how NLG technologies will affect the workforce, positively or negatively, within the industries adopting them, and what the legal and ethical implications are of having machines, rather than humans, create nonfiction and fiction. There also are larger philosophical considerations surrounding the connec-
tion among communication, the use of language, and how people socially and
culturally have defined what it means to be human.
Andrea L. Guzman
See also: Natural Language Processing and Speech Understanding; Turing Test; Work-
place Automation.
Further Reading
Guzman, Andrea L. 2018. “What Is Human-Machine Communication, Anyway?” In
Human-Machine Communication: Rethinking Communication, Technology, and
Ourselves, edited by Andrea L. Guzman, 1–28. New York: Peter Lang.
Lewis, Seth C., Andrea L. Guzman, and Thomas R. Schmidt. 2019. “Automation, Jour-
nalism, and Human-Machine Communication: Rethinking Roles and Relation-
ships of Humans and Machines in News.” Digital Journalism 7, no. 4: 409–27.
Licklider, J. C. R. 1968. “The Computer as Communication Device.” In In Memoriam:
J. C. R. Licklider, 1915–1990, edited by Robert W. Taylor, 21–41. Palo Alto, CA:
Systems Research Center.
Marconi, Francesco, Alex Siegman, and Machine Journalist. 2017. The Future of Aug-
mented Journalism: A Guide for Newsrooms in the Age of Smart Machines. New
York: Associated Press. https://insights.ap.org/uploads/images/the-future-of
-augmented-journalism_ap-report.pdf.
Paris, Cecile L., William R. Swartout, and William C. Mann, eds. 1991. Natural
Language Generation in Artificial Intelligence and Computational Linguistics.
Norwell, MA: Kluwer Academic Publishers.
Riedl, Mark. 2017. “Computational Narrative Intelligence: Past, Present, and Future.”
Medium, October 25, 2017. https://medium.com/@mark_riedl/computational
-narrative-intelligence-past-present-and-future-99e58cf25ffa.
learning, linguistics, and semantics in order to decode the uncertainties and opaci-
ties of natural human language. In the future, chatbots will use natural language
processing to seamlessly interact with human beings over text-based and voice-
based interfaces. Computer assistants will also support interactions as an inter-
face between humans with different abilities and needs. They will allow for
natural language queries of vast amounts of information, like that encountered on
the internet, by making search more natural. They may even insert helpful insights or tidbits of knowledge into situations as diverse as meetings, classrooms, or casual conversations, and they may one day be able to seamlessly “read” and respond to the emotions or moods of human speakers in real time (so-called “sentiment analysis”). The market for NLP hardware, software, and services may be
worth $20 billion in annual revenue by 2025.
Speech or voice recognition has a long history. Research into automatic speech
recognition and transcription began at Bell Labs in the 1930s under Harvey
Fletcher, a physicist who did pioneering research establishing the relationship
between speech energy, frequency spectrum, and the perception of sound by a
listener. His research forms the basis of most speech recognition algorithms today.
By 1940, another Bell Labs physicist, Homer Dudley, had been granted patents on the Voder speech synthesizer, which modeled human vocalizations, and on a parallel band-pass vocoder that could take sound samples and run them through narrow band
filters to determine their energy levels. The latter device could also take the record
of energy levels and turn them back into rough approximations of the original
sounds by running them through other filters.
By the 1950s, Bell Labs researchers had figured out how to create a system that
could do more than emulate speech. In that decade, digital technology had
improved to the point where the system could recognize isolated spoken word
parts by comparing their frequencies and energy levels against a digital reference
library of sounds. Essentially, the machine made an educated guess at the expres-
sion being made. Progress was slow. By the mid-1950s, Bell Labs machines could
recognize about ten syllables spoken by a single individual. At the end of the decade, researchers at MIT, IBM, Kyoto University, and University College London were developing recognition machines that used statistics to identify words containing multiple phonemes, the units of sound that listeners perceive as distinct from one another. Progress was also being made on tools that could recognize the speech of more than a single speaker.
The first professional automatic speech recognition group was created in 1971
and chaired by Allen Newell. The study group divided its work among several
levels of knowledge formation, including acoustics, parametrics, phonemics,
lexical concepts, sentence processing, and semantics. Some of the problems
reviewed by the group were studied under grants issued in the 1970s by the
Defense Advanced Research Projects Agency (DARPA). DARPA was interested
in the technology as a way to process large volumes of spoken data produced by
various government agencies and turn that information into insights and
strategic responses to problems. Progress was made on such techniques as
dynamic time warping and continuous speech recognition. Computer technol-
ogy also steadily improved, and several manufacturers of mainframes and
with voice dictation features that convert ordinary speech into text for use in text
messages and emails.
Industry in the twenty-first century has benefited enormously from the vast volume of data available in the cloud and from massive archives of voice recordings gathered from smartphones and electronic peripherals. These large training data sets have allowed companies to continuously
improve acoustic models and language models for speech processing. Traditional
speech recognition technology used statistical learning techniques to match
observed and “labeled” sounds. Since the 1990s, speech processing has relied more heavily on Markov and hidden Markov models combined with machine learning and pattern-recognition algorithms. Error rates have plunged in recent
years because of the quantities of data available for matching and the power of
deep learning algorithms. Although using Markov models for language representation and analysis is controversial among linguists, who assert that natural languages require flexibility and context to be properly understood, these
approximation methods and probabilistic functions are extremely powerful at
deciphering and responding to inputs of human speech.
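At the core of a hidden Markov approach is a decoder that finds the most probable sequence of hidden states (for instance, phonemes) behind a sequence of observed acoustic symbols. The Python sketch below runs the Viterbi algorithm over a toy model; the states, observations, and probabilities are invented for illustration, and real recognizers operate on acoustic feature vectors with vastly larger models.

    # Toy Viterbi decoding over a hypothetical two-state hidden Markov model.
    states = ["S1", "S2"]                      # hidden states (e.g., phonemes)
    start_p = {"S1": 0.6, "S2": 0.4}           # initial state probabilities
    trans_p = {"S1": {"S1": 0.7, "S2": 0.3},   # transition probabilities
               "S2": {"S1": 0.4, "S2": 0.6}}
    emit_p = {"S1": {"a": 0.5, "b": 0.5},      # emission (observation) probabilities
              "S2": {"a": 0.1, "b": 0.9}}

    def viterbi(obs):
        # V[t][s] is the probability of the best state path ending in s at time t.
        V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
        back = [{}]
        for t in range(1, len(obs)):
            V.append({})
            back.append({})
            for s in states:
                prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                                 for p in states)
                V[t][s] = prob
                back[t][s] = prev
        # Trace back the most probable state sequence.
        last = max(V[-1], key=V[-1].get)
        path = [last]
        for t in range(len(obs) - 1, 0, -1):
            path.insert(0, back[t][path[0]])
        return path, V[-1][last]

    print(viterbi(["a", "b", "b"]))   # most probable path: ['S1', 'S2', 'S2']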
Today, computational linguistics is predicated on the n-gram, a contiguous
sequence of n items from a given sample of text or speech. The items can be pho-
nemes, syllables, letters, words, or base pairs according to the application.
N-grams typically are collected from text or speech. No other technique currently
beats this approach in terms of proficiency. Google and Bing have indexed the
internet in its entirety for their virtual assistants and use user query data in their
language models for voice search applications. The systems today are beginning
to recognize new words from their datasets on the fly, what humans would call
lifelong learning, but this is an emerging technology.
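In its simplest form, an n-gram language model counts adjacent items and estimates the probability of the next item by relative frequency. The sketch below builds word-level bigrams (n = 2) over a toy corpus; real systems train on enormous corpora and smooth their counts.

    from collections import Counter, defaultdict

    # Count word-level bigrams (n = 2) over a toy corpus.
    corpus = "the cat sat on the mat the cat ate".split()
    bigram_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    def next_word_probability(prev, nxt):
        # P(next | prev) estimated by relative frequency of the bigram.
        total = sum(bigram_counts[prev].values())
        return bigram_counts[prev][nxt] / total if total else 0.0

    print(next_word_probability("the", "cat"))   # 0.666... ("the cat" twice, "the mat" once)
    print(next_word_probability("cat", "sat"))   # 0.5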
In the future, companies involved in natural language processing want
technologies that are portable (relying on remote servers) and that provide near-
instantaneous feedback and a frictionless user experience. One powerful example
of next-generation NLP is being developed by Richard Socher, a deep learning
expert and founding CEO of the artificial intelligence start-up MetaMind. The
company’s technology uses a neural networking system and reinforcement learn-
ing algorithms to generate answers to specific and very general questions, based
on large chunks of natural language datasets. The company was recently acquired
by digital marketing behemoth Salesforce. There will be demand in the future for
text-to-speech analysis and advanced conversational interfaces in automobiles,
speech recognition and translation across cultures and languages, automatic
speech understanding in environments with high ambient noise such as construc-
tion sites, and specialized voice systems to control office and home automation
processes and internet-connected devices. All of these applications to augment
human speech will require the harvesting of large data sets of natural language to
work upon.
Philip L. Frana
Further Reading
Chowdhury, Gobinda G. 2003. “Natural Language Processing.” Annual Review of Infor-
mation Science and Technology 37: 51–89.
Jurafsky, Daniel, and James H. Martin. 2014. Speech and Language Processing. Second
edition. Upper Saddle River, NJ: Pearson Prentice Hall.
Mahavan, Radhika. n.d. “Natural Language Processing: Current Applications and Future
Possibilities.” https://www.techemergence.com/nlp-current-applications-and-future
-possibilities/.
Manning, Christopher D., and Hinrich Schütze. 1999. Foundations of Statistical Natural
Language Processing. Cambridge, MA: MIT Press.
Metz, Cade. 2015. “AI’s Next Frontier: Machines That Understand Language.”
Wired, June 24, 2015. https://www.wired.com/2015/06/ais-next-frontier-machines
-understand-language/.
Nusca, Andrew. 2011. “Say Command: How Speech Recognition Will Change the World.”
ZDNet, November 2, 2011. https://www.zdnet.com/article/say-command-how
-speech-recognition-will-change-the-world/.
Newell, Allen (1927–1992)
Allen Newell worked with Herbert Simon to create the first models of human cognition in the late 1950s and early 1960s. These programs modeled how logical
rules could be applied in a proof (Logic Theory Machine), how simple problem solv-
ing could be performed (the General Problem Solver), and an early program to play
chess (the Newell-Shaw-Simon chess program). In these models, Newell and Simon
showed for the first time how computers could manipulate symbols and how these
manipulations could be used to represent, generate, and explain intelligent behavior.
Newell started his career as a physics undergraduate at Stanford University.
After a year of graduate work in mathematics at Princeton, he moved to the RAND
Corporation to work on models of complex systems. While at RAND, he met and
was influenced by Oliver Selfridge, who led him into modeling cognition. He also
met Herbert Simon, who was later to win a Nobel Prize in Economics for his work on decision-making processes within economic organizations, including satisficing.
Newell was recruited by Simon to come to Carnegie Institute of Technology (now
Carnegie Mellon University). Newell collaborated with Simon for much of his
academic life.
Newell’s primary interest was in understanding the human mind by simulating
its processes using computational models. Newell completed his doctorate with
Simon at Carnegie Mellon. His first academic job was as a tenured, chaired
professor. He helped found the Department of Computer Science (now school),
where he had his primary appointment.
In his main line of research, Newell explored the mind, particularly problem
solving, with Simon. Their 1972 book Human Problem Solving laid out their
theory for intelligence and illustrated it with examples including those from math
puzzles and chess. Their work made extensive use of verbal talk-aloud proto-
cols—which are more accurate than think-aloud or retrospective protocols—to
understand what resources are being used in cognition. The science of verbal
protocol data was later more fully codified by Ericsson and Simon.
He argued in his last lecture (“Desires and Diversions”) that if you get dis-
tracted, you should make the distraction count. He did so by notable achievements
in the areas of his distractions and by using several of them in his final project.
These distractions included one of the first hypertext systems, ZOG. Newell also
wrote a textbook on computer architectures with Digital Equipment Corporation
(DEC) pioneer Gordon Bell and worked on speech recognition systems with CMU
colleague Raj Reddy.
Perhaps the longest running and most productive distraction was work with
Stuart Card and Thomas Moran at Xerox PARC to create theories of how users
interact with computers. These theories are documented in The Psychology of
Human-Computer Interaction (1983). Their work led to two approaches for repre-
senting human behavior—the Keystroke Level Model and GOMS—as well as a
simple representation of the mechanisms of cognition in this area, called the
Model Human Processor. This was some of the first work in human-computer
interaction (HCI). Their approach argued for understanding the user and the task
and then using technology to support the user to perform the task.
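The Keystroke-Level Model predicts an expert user's task time by summing standard operator times. The sketch below uses approximate operator values commonly cited in the HCI literature, and the task breakdown itself is hypothetical.

    # Keystroke-Level Model (KLM) estimate using approximate operator times.
    OPERATOR_SECONDS = {
        "K": 0.28,  # press a key or button (average skilled typist)
        "P": 1.10,  # point with a mouse to a target on the screen
        "H": 0.40,  # move ("home") hands between keyboard and mouse
        "M": 1.35,  # mental preparation before an action
    }

    def klm_estimate(operators):
        # The predicted expert time is simply the sum of the operator times.
        return sum(OPERATOR_SECONDS[op] for op in operators)

    # Hypothetical task: think, point at a text field, click, home to the
    # keyboard, type five characters, then think, point, and click Save.
    task = ["M", "P", "K", "H"] + ["K"] * 5 + ["M", "P", "K"]
    print(f"Estimated time: {klm_estimate(task):.2f} seconds")   # about 7.3 seconds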
Newell also noted in his last lecture that scientists should have a final project
that would outlast them. Newell’s final project was to argue for unified theories of
cognition (UTCs) and to create a candidate UTC, an exemplar, called Soar. His
project described what it would look like to have a theory that brought together all
the constraints, data, and theories in psychology into a single unified result realized
by a computer program. Soar remains a successful ongoing project, although it is
not complete. While Soar has not unified psychology, it has had notable successes
in explaining problem solving, learning, their interaction, and how to provide
autonomous, reactive agents in large simulations.
As part of his final project (with Paul Rosenbloom), he examined how learning
could be modeled. This line of work was later merged with Soar. Newell and
Rosenbloom argued that learning followed a power law of practice; that is, the time to perform a task is proportional to the practice (trial) number raised to a small negative power (Time ∝ trial number^(-α)), a relationship that holds across a wide range of tasks. Their explan-
ation was that as tasks were performed in a hierarchical manner, what was learned
at the bottom level had the most effect on response time, but as learning continued
on higher levels, the learning was less often used and saved less time; so the learn-
ing slowed down but did not stop.
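The shape of that relationship can be illustrated numerically. The parameter values below are arbitrary; the point is only that predicted task time falls sharply over early trials and then flattens, matching the description above.

    # Power law of practice: time(N) = B * N ** (-alpha), N = trial number.
    B = 10.0      # time on the first trial, in seconds (hypothetical)
    alpha = 0.4   # learning exponent (hypothetical)

    def predicted_time(trial):
        return B * trial ** (-alpha)

    for n in (1, 2, 5, 10, 50, 100):
        print(f"trial {n:>3}: {predicted_time(n):5.2f} s")
    # Large speedups on early trials, only slight gains later: learning
    # slows down but does not stop.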
In 1987, Newell gave the William James Lectures at Harvard. In these lectures,
he laid out in detail what it would mean to generate a unified theory in psychology.
These lectures were recorded and are available through the CMU library. In the
following fall, he gave them again and wrote them up as a book (1990).
Soar uses search through problem spaces as its way of representing cognition.
It is realized as a production system (using IF-THEN rules). It attempts to apply
an operator. If it does not have one or cannot apply it, Soar recurses with an
impasse to solve the problem. Thus, knowledge is represented as parts of oper-
ators and problem spaces and as knowledge of how to resolve the impasses. The architecture thus defines how these choices and this knowledge are structured. Systems with up to one million rules have been built, and Soar models have been used in a variety of cognitive science and AI applications, including military simulations. Newell also
explored how to use these models of cognition to simulate social agents with
CMU social scientist Kathleen Carley. Work with Soar continues, primarily at
the University of Michigan under John Laird, where it is more focused now on
intelligent agents.
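Some of the flavor of the production-system cycle described above can be conveyed with a toy example. The rules and working-memory elements below are invented, and this simple match-fire loop omits Soar's operators, preference-based decisions, impasses, and chunking.

    # A toy production system of IF-THEN rules operating on working memory.
    working_memory = {"goal:boil-water", "have:kettle"}

    rules = [
        # (name, IF: facts that must all be present, THEN: facts to add)
        ("fill-kettle", {"goal:boil-water", "have:kettle"}, {"kettle:filled"}),
        ("switch-on",   {"kettle:filled"},                  {"kettle:on"}),
        ("water-boils", {"kettle:on"},                      {"water:boiled"}),
    ]

    fired = set()
    changed = True
    while changed:            # repeat the match-fire cycle until nothing new fires
        changed = False
        for name, conditions, additions in rules:
            if name not in fired and conditions <= working_memory:
                working_memory |= additions   # fire the rule: add its conclusions
                fired.add(name)
                changed = True

    print(sorted(working_memory))
    # ['goal:boil-water', 'have:kettle', 'kettle:filled', 'kettle:on', 'water:boiled']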
Newell and Simon received the ACM A. M. Turing Award in 1975 for their
contributions to artificial intelligence, the psychology of human cognition, and list processing. Their work is recognized for fundamental contributions to computer
science as an empirical inquiry. Newell was also elected to the National Academy
of Sciences and the National Academy of Engineering. In 1992, he received the
National Medal of Science. Newell helped found a research group, department,
and university that were productive and supportive. At his memorial service, his
son noted that not only was he a great scientist, he was also a great dad. His flaws
were that he was very smart, he worked very hard, and he thought the same of you.
Frank E. Ritter
See also: Dartmouth AI Conference; General Problem Solver; Simon, Herbert A.
Further Reading
Newell, Allen. 1990. Unified Theories of Cognition. Cambridge, MA: Harvard University
Press.
Newell, Allen. 1993. Desires and Diversions. Carnegie Mellon University, School of
Computer Science. Stanford, CA: University Video Communications.
Simon, Herbert A. 1998. “Allen Newell: 1927–1992.” IEEE Annals of the History of
Computing 20, no. 2: 63–76.
Nissenbaum, Helen (1954–)
Helen Nissenbaum, who holds a PhD in philosophy, explores the ethical and polit-
ical implications of information technology in her scholarship. She has held pos-
itions at Stanford University, Princeton University, New York University, and
Cornell Tech. Additionally, Nissenbaum has served as the principal investigator
for a wide variety of grants from organizations such as the National Security
Agency, the National Science Foundation, the Air Force Office of Scientific
Research, the U.S. Department of Health and Human Services, and the William
and Flora Hewlett Foundation.
Nissenbaum defines AI as big data, machine learning, algorithms, and models
that lead to output results. Privacy is the predominant area of concern that links
her work across these topics. In her 2010 book, Privacy in Context: Technology,
Policy, and the Integrity of Social Life, Nissenbaum explains these concerns
through the framework of contextual integrity, which understands privacy in
terms of appropriate flows of information as opposed to simply preventing flows
of information altogether. In other words, she is concerned with trying to create
an ethical framework in which data can be collected and used appropriately. How-
ever, the problem with creating such a framework is that when multiple data
sources are collected together, or aggregated, it becomes possible to learn more
about those from whom the data was collected than would be possible with each individual source of data. Such aggregated data is used to profile users,
treated as such. But Taylor further defines the category of the person by centering
the definition on certain capacities. In his view, in order to be classified as a per-
son, one must be capable of understanding the difference between the future and
the past. A person must also have the ability to make choices and chart out a plan
for his or her life. To be a person, one should have a set of values or morals. In
addition, a person would have a self-image or sense of identity.
In light of these criteria, those who consider the possibility that androids may
be granted personhood also acknowledge that these entities would have to have
these kinds of abilities. For example, F. Patrick Hubbard argues that personhood
for robots should only be granted if they meet certain criteria. These criteria
include the sense of having a self, having a plan for life, and having the ability to
communicate and think in complex ways. David Lawrence provides an alternate
set of criteria for granting personhood to an android. He argues that an AI would have to exhibit consciousness, in addition to being able to understand information, learn, reason, and possess subjectivity, among many other elements.
Peter Singer takes a much simpler approach to personhood, although his focus
is on the ethical treatment of animals. In his view, the defining consideration for granting personhood is the capacity to suffer. If something can suffer, then that suffering should be weighed equally, no matter whether it is a human, an animal, or a
machine. In fact, Singer sees it as immoral to deny the suffering of any being. If
androids possess some or all of the aforementioned criteria, some people believe
they should be granted personhood, and with that position should come individual
rights, such as the right to free speech or freedom from being a slave.
Those who object to personhood for artificial intelligence often believe that
only natural entities should be given personhood. Another objection relates to the
robot’s status as human-made property. In this case, since robots are programmed
and carry out human instructions, they are not an independent person with a will;
they are merely an object that humans have labored to produce. If an android does
not have its own will and independent thought, then it is difficult to grant it rights.
David Calverley notes that androids can be bound by certain constraints. For
example, an android might be limited by Asimov’s Laws of Robotics. If that were
the case, then the android would not have the ability to truly make free choices of
its own. Others object on the grounds that artificial intelligences lack a crucial ele-
ment of personhood, namely a soul, feelings, and consciousness, all reasons that
have previously been used to deny animals personhood. However, something like
consciousness is difficult to define or assess even in humans.
Finally, opposition to personhood for androids often centers on fear, a fear that
is fueled by science fiction novels and movies. Such fictions present androids as
superior in intelligence, possibly immortal, and driven to take over, superseding humanity's place in society. Lawrence Solum explains that each of
these objections is rooted in the fear of anything that is not human, and he argues
that we reject personhood for AI based on the sole fact that they do not have
human DNA. He finds such a stance troublesome and equates it to American slav-
ery, in which slaves were denied rights solely because they were not white. He
takes issue with denying an android rights only because it is not human, especially
if other entities have feelings, consciousness, and intelligence.
Although personhood for androids is theoretical at this point, there have been
recent events and debates that have broached this topic in real ways. In 2015, a
Hong Kong-based company called Hanson Robotics developed Sophia, a social
humanoid robot. It appeared in public in March 2016 and became a Saudi Arabian
citizen in October 2017. Additionally, Sophia became the first nonhuman to be
given a United Nations title when she was named the first Innovation Champion of
the UN Development Program in 2017. Sophia delivers speeches and has given
interviews around the world. Sophia has even expressed the desire to have a home,
get married, and have children. In early 2017, the European Parliament attempted
to grant robots “electronic personalities,” allowing them to be held liable for any
damages they cause. Those in favor of this change saw legal personhood as the
same legal status held by corporations. Conversely, in an open letter in 2018, over
150 experts from 14 European countries opposed this measure, finding it inappropriate because it would rid corporations of responsibility for their creations. In an
amended draft from the EU Parliament, the personhood of robots is not men-
tioned. The debate about responsibility has not ceased though, as evidenced in
March 2018 when a self-driving car killed a pedestrian in Arizona.
Over the course of Western history, our ideas of who deserves ethical treatment
have changed. Susan Leigh Anderson sees this progression as a positive change
because she correlates the increase of rights for more entities with an increase in
ethics overall. As more animals have been and continue to be awarded rights, the notion that the human position is incomparable may shift. If androids begin processing in
ways that are similar to the way the human mind does, our idea of personhood
may have to broaden even further. As David DeGrazia argues in Human Identity and Bioethics (2005), the term “person” encompasses a series of abilities and
traits. In that case, any entity that exhibits these characteristics, including an arti-
ficial intelligence, could be classified as a person.
Crystal Matey
See also: Asimov, Isaac; Blade Runner; Robot Ethics; The Terminator.
Further Reading
Anderson, Susan L. 2008. “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics.”
AI & Society 22, no. 4 (April): 477–93.
Calverley, David J. 2006. “Android Science and Animal Rights, Does an Analogy Exist?”
Connection Science 18, no. 4: 403–17.
DeGrazia, David. 2005. Human Identity and Bioethics. New York: Cambridge University
Press.
Gray, John Chipman. 1909. The Nature and Sources of the Law. New York: Columbia
University Press.
Hubbard, F. Patrick. 2011. “‘Do Androids Dream?’ Personhood and Intelligent Artifacts.”
Temple Law Review 83: 405–74.
Lawrence, David. 2017. “More Human Than Human.” Cambridge Quarterly of Healthcare
Ethics 26, no. 3 (July): 476–90.
Solum, Lawrence B. 1992. “Legal Personhood for Artificial Intelligences.” North Caro-
lina Law Review 70, no. 4: 1231–87.
Taylor, Charles. 1985. “The Concept of a Person.” In Philosophical Papers, Volume 1: Human
Agency and Language, 97–114. Cambridge, UK: Cambridge University Press.
O
Omohundro, Steve (1959–)
Steve Omohundro is a noted scientist, author, and entrepreneur working in the
area of artificial intelligence. He is founder of Self-Aware Systems, chief scientist
and board member of AIBrain, and advisor to the Machine Intelligence Research
Institute (MIRI). Omohundro is well known for his thoughtful, speculative
research on safety in smarter-than-human machines and the social implications
of AI.
Omohundro argues that a truly predictive science of artificial intelligence is
needed. He claims that if goal-driven artificial general intelligences are not care-
fully crafted in the future, they are likely to produce harmful actions, cause wars,
or even trigger human extinction. Indeed, Omohundro believes that poorly pro-
grammed AIs could exhibit psychopathic behaviors. Coders, he argues, often pro-
duce flaky software and programs that simply “manipulate bits” without understanding why they do so. Omohundro wants AGIs to monitor and understand their own
operations, see their own imperfections, and rewrite themselves to perform better.
This represents true machine learning.
The danger is that the AIs might change themselves into something that cannot
be understood by humans or make decisions that are inconceivable or have
unintended consequences. Therefore, Omohundro argues, artificial intelligence
must become a more predictive and anticipatory science. Omohundro also sug-
gests in one of his widely available online papers, “The Nature of Self-Improving
Artificial Intelligence,” that a future self-aware system that likely accesses the
internet will be influenced by the scientific papers that it reads, which recursively
justifies writing the paper in the first place.
AGI agents themselves must be created with value sets that lead them—when
they self-improve—to choose goals that help humanity. The sort of self-improving
systems that Omohundro is preparing for do not currently exist. Omohundro notes
that inventive minds have till now only produced inert systems (objects such as
chairs and coffee mugs), reactive systems that approach goals in rigid ways
(mousetraps and thermostats), adaptive systems (advanced speech recognition
systems and intelligent virtual assistants), and deliberative systems (the Deep Blue
chess-playing computer). The self-improving systems Omohundro is talking about
would need to actively deliberate and make decisions under conditions of uncer-
tainty about the consequences of engaging in self-modification.
Omohundro believes that the basic natures of self-improving AIs can be under-
stood as rational agents, a concept he borrows from microeconomic theory.
Humans are only imperfectly rational, which is why the field of behavioral eco-
nomics has blossomed in recent decades. AI agents, though, because of their
Colby conducted a series of tests in the 1970s to determine how well PARRY
was simulating genuine paranoia. Two of these tests were Turing Test-like. To
start, practicing psychiatrists were asked to interview patients over a teletype ter-
minal, a now obsolete electromechanical typewriter used to transmit and receive
keyed messages through telecommunications. The psychiatrists were not informed
that PARRY participated in these interviews as one of the patients. Afterward, the
transcripts of these interviews were sent to 100 professional psychiatrists. These
psychiatrists were asked to identify the machine version. Out of 41 responses, 21
psychiatrists correctly identified PARRY and 20 did not. Transcripts were also
sent to 100 computer scientists. Out of their 67 replies, 32 computer scientists
were correct and 35 were wrong. Statistically, Colby concluded, these results “are
similar to flipping a coin” and PARRY was not unmasked (Colby 1975, 92).
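The “flipping a coin” conclusion can be checked against the reported counts with a simple two-sided binomial test. The calculation below is purely illustrative and is not part of Colby's original analysis.

    from math import comb

    # Two-sided binomial test of the identification counts against chance (p = 0.5).
    def binomial_two_sided_p(successes, trials, p=0.5):
        pmf = [comb(trials, k) * p**k * (1 - p)**(trials - k) for k in range(trials + 1)]
        observed = pmf[successes]
        # Sum the probabilities of all outcomes at least as unlikely as the observed one.
        return sum(x for x in pmf if x <= observed + 1e-12)

    print(binomial_two_sided_p(21, 41))   # psychiatrists: close to 1.0 (chance level)
    print(binomial_two_sided_p(32, 67))   # computer scientists: roughly 0.8 (chance level)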
Elisabeth Van Meer
See also: Chatbots and Loebner Prize; ELIZA; Expert Systems; Natural Language Pro-
cessing and Speech Understanding; Turing Test.
Further Reading
Cerf, Vinton. 1973. “Parry Encounters the Doctor: Conversation between a Simulated
Paranoid and a Simulated Psychiatrist.” Datamation 19, no. 7 (July): 62–65.
Colby, Kenneth M. 1975. Artificial Paranoia: A Computer Simulation of Paranoid Pro-
cesses. New York: Pergamon Press.
Colby, Kenneth M., James B. Watt, and John P. Gilbert. 1966. “A Computer Method of
Psychotherapy: Preliminary Communication.” Journal of Nervous and Mental
Disease 142, no. 2 (February): 148–52.
McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History
and Prospects of Artificial Intelligence, 251–56, 308–28. San Francisco: W. H.
Freeman and Company.
Warren, Jim. 1976. Artificial Paranoia: An NIMH Program Report. Rockville, MD: U.S.
Department of Health, Education, and Welfare, Public Health Service, Alcohol,
Drug Abuse, and Mental Health Administration, National Institute of Mental
Health, Division of Scientific and Public Information, Mental Health Studies and
Reports Branch.
Pathetic Fallacy
John Ruskin (1819–1900) coined the term “pathetic fallacy” in his 1856 multivol-
ume work Modern Painters. In volume three, chapter twelve, he discussed the
practice of poets and painters in Western literature instilling human emotion into
the natural world. Ruskin said that Western literature is filled with this fallacy, or mistaken belief, even though it rests on an untruth. According to Ruskin, the fallacy occurs because people become excited, and that excitement leads them to be less rational. In that irrational state of mind, people project ideas onto external things based on false impressions, and in Ruskin's view, only those with weak minds commit this type of fallacy. Ultimately, the pathetic fallacy is a mistake because it gives inanimate objects human qualities; in other words, it is a fallacy rooted in anthropomorphic thinking. Anthropomorphism
Further Reading
McFarland, Melanie. 2016. “Person of Interest Comes to an End, but the Technology
Central to the Story Will Keep Evolving.” Geek Wire, June 20, 2016. https://www
.geekwire.com/2016/person-of-interest/.
Newitz, Annalee. 2016. “Person of Interest Remains One of the Smartest Shows about AI
on Television.” Ars Technica, May 3, 2016. https://arstechnica.com/gaming
/2016/05/person-of-interest-remains-one-of-the-smartest-shows-about-ai-on
-television/.
Post-Scarcity, AI and
Post-scarcity is a provocative hypothesis about a coming global economy in
which radical abundance of goods, produced at little cost using advanced technol-
ogies, replaces traditional human labor and payment of wages. Engineers, futur-
ists, and science fiction authors have put forward a diverse array of speculative
models for a post-scarcity economy and society. Typically, however, these models
depend on overcoming scarcity—an omnipresent feature of modern capitalist
economics—using hyperconnected systems of artificial intelligence, robotics, and
molecular nanofactories and fabrication. Sustainable energy in various scenarios
comes from nuclear fusion power plants or solar farms and resources from aster-
oid mining using self-replicating smart machines. Post-scarcity as a material and
metaphorical concept exists alongside other post-industrial notions of socioeco-
nomic organization such as the information society, knowledge economy, imagi-
nation age, techno-utopia, singularitarianism, and nanosocialism. The range of
dates suggested by experts and futurists for the transition from a post-industrial
capitalist economy to a post-scarcity one is wide, from the 2020s to the 2070s and
beyond.
A forerunner of post-scarcity economic thought is the “Fragment on Machines”
found in Karl Marx’s (1818–1883) unpublished notebooks. Marx argued that
advances in machine automation would reduce manual labor, precipitate a col-
lapse of capitalism, and usher in a socialist (and eventually, communist) economic
system characterized by leisure, artistic and scientific creativity, and material
abundance. The modern concept of a post-scarcity economy may be traced to pol-
itical economist Louis Kelso’s (1913–1991) mid-twentieth-century descriptions of
conditions where automation causes a collapse in prices of goods to near zero,
personal income becomes superfluous, and self-sufficiency and permanent holi-
days are commonplace. Kelso argued for democratizing the distribution of capital
ownership so that social and political power are more equitably distributed. This
is important because in a post-scarcity economy those who own the capital will
own the machines that make abundance possible. Entrepreneur Mark Cuban, for
instance, has said that the first trillionaire will be in the artificial intelligence
business.
The role played by artificial intelligence in the post-scarcity economy is that of
a relentless and ubiquitous analytics platform leveraging machine productivity. AI
guides the robots and other machines that turn raw materials into finished prod-
ucts and operate other essential services such as transportation, education, health
care, and water supply. Smart technologies eventually exceed human performance
at nearly every work-related task, branch of industry, and line of business. Trad-
itional professions and job markets disappear. A government-sponsored universal
basic income or guaranteed minimum income fills the gap left by the disappear-
ance of wages and salaries.
The results of such a scenario playing out may be utopian, dystopian, or some-
where in between. Post-scarcity AI may fulfill every necessity and wish of nearly
all human beings, freeing them up for creative pursuits, spiritual contemplation,
hedonistic impulses, and the exercise of bliss. Or the aftermath of an AI takeover
could be a global catastrophe in which all of the raw materials of earth are rapidly
depleted by self-replicating machines that grow in number exponentially. This
sort of worst-case ecological disaster is termed a gray goo event by nanotechnol-
ogy innovator K. Eric Drexler (1955–). An intermediate outcome might involve
sweeping transformation in some economic sectors but not others. Andrew Ware
of the Centre for the Study of Existential Risk (CSER) at the University of Cam-
bridge notes that AI will play a significant role in agriculture, transforming soil
and crop management, weed control, and planting and harvesting (Ware 2018).
Among the hardest jobs for an AI to shoulder are those managerial, professional, and
administrative in nature—particularly in the helping professions of health care
and education—according to a study of indicators collected by the McKinsey
Global Institute (Chui et al. 2016).
A world where smart machines churn out most material goods at negligible cost
is a dream shared by science fiction authors. One early example is the matter
duplicator in Murray Leinster’s 1935 short story “The Fourth Dimensional Dem-
onstrator.” Leinster conjures up a duplicator-unduplicator that exploits the notion
that the four-dimensional universe (the three-dimensional physical universe plus
time) has a bit of thickness. The device grabs chunks from the past and propels
them into the present. The protagonist Pete Davidson uses the device—which he
inherits from his inventor uncle—to copy a banknote placed on the machine’s
platform. When the button is pushed, the note remains, but it is joined by a copy of
the note that existed seconds before, exactly when the button was pushed. This is
discerned because the copy of the bill has the same serial number. The machine is
used to hilarious effect as Davidson duplicates gold and then (accidentally) pet
kangaroos, girlfriends, and police officers plucked from the fourth dimension.
Jack Williamson’s novelette With Folded Hands (1947) introduces a race of
thinking black mechanicals called Humanoids who serve as domestics, perform-
ing all of the work of humankind and adhering to their duty to “serve and obey,
and guard men from harm” (Williamson 1947, 7). The robots are superficially
well meaning, but systematically take away all meaningful labor of the human
beings in the town of Two Rivers. The Humanoids provide every convenience, but
they also remove all possible human dangers, including sports and alcohol, and
every incentive to do things for themselves. The mechanicals even remove door-
knobs from homes because humans should not need to make their own entrances
and exits. The people become anguished, terrified, and ultimately bored.
Science fiction authors have imagined economies bound together by post-
scarcity and sweeping opportunity for a century or more. Ralph Williams’ story
“Business as Usual, During Alterations” (1958) explores human selfishness when
an alien race surreptitiously drops a score of matter duplicating machines on the
world. Each of the machines, described as electronic, with two metal pans and a
single red button, is identical. The duplicator arrives with a printed warning: “A
push of the button grants your heart’s desire. It is also a chip at the foundations of
human society. A few billion such chips will bring it crashing down. The choice is
yours” (Williams 1968, 288).
Williams’ story focuses on Brown’s Department Store on the day
the device appears. The manager, John Thomas, has extraordinary foresight,
knowing that the machines are going to completely upend retail by erasing both
scarcity and the value of goods. Rather than trying to impose a form of artificial
scarcity, Thomas seizes upon the idea of duplicating the duplicators, which he
sells to customers on credit. He also reorients the store to sell cheap goods suit-
able for duplicating in the pan. The alien race, which had hoped to test the self-
ishness of humankind, is instead confronted with an economy of abundance built
upon a radically different model of production and distribution, where unique
and diverse goods are prized over uniform ones. “Business as Usual, During
Alterations” occasionally finds its way into syllabi for introductory economics
classes. Ultimately, Williams’ tale is that of the long-tailed distributions of
increasingly niche goods and services described by writers on the economic and
social effects of high technologies such as Clay Shirky, Chris Anderson, and Erik
Brynjolfsson.
Leinster returned in 1964 with a short novel called The Duplicators. In this
story, the human culture of the planet Sord Three has forgotten most of its tech-
nical acumen and lost all electronic gadgets and has slouched into a rough approx-
imation of feudal society. Humans retain only the ability to use their so-called
dupliers to make essential goods such as clothes and cutlery. Dupliers have hop-
pers into which vegetable matter is placed and from which raw materials are
extracted to make different, more complex goods, but goods that pale in compari-
son with the originals. One of the characters offers that possibly this is because of
some missing element or elements in the feedstock material. It is clear too that
when weak samples are duplicated, the duplicates will be somewhat weaker. The
whole society suffers under the oppressive weight of abundant, but inferior prod-
ucts. Some originals, such as electronics, are completely lost, as the machines
cannot duplicate them. The locals are astounded when the protagonist of the story, Link
Denham, shows up on the planet wearing unduplied clothing.
Denham speculates in the story about the potential untold wealth, but also
about the collapse of human civilization throughout the galaxy, should the dupli-
ers become known and widely utilized off the planet: “And dupliers released to
mankind would amount to treason. If there can be a device which performs every
sort of work a world wants done, then those who first have that instrument are rich
beyond the dreams of anything but pride. But pride will make riches a drug upon
the market. Men will no longer work, because there is no need for their work. Men
will starve because there is no longer any need to provide them with food” (Lein-
ster 1964, 66–67).
The humans share the planet with native “uffts,” an intelligent pig-like race
kept in subjugation as servants. The uffts are good at collecting the necessary raw
materials for the dupliers, but do not have direct access to them. They are utterly
dependent on the humans for some items they trade for, in particular beer, which
they enjoy immensely. Link Denham uses his mechanical ingenuity to master the
secrets of the dupliers, so that they produce knives and other weapons of high
value, and eventually sets himself up as a sort of Connecticut Yankee in King
Arthur’s Court.
Too naive to take full advantage of the proper recipes and proportions rediscov-
ered by Denham, humans and uffts alike denude the landscape as they feed more
and more vegetable matter into the dupliers to make the improved goods. This
troubles Denham, who had hoped that the machines could be used to reintroduce
modern agricultural implements back to the planet, at which time the machines
could be used solely for repairing and creating new electronic goods in a new eco-
nomic system of his own devising, which the local humans called “Householders
for the Restoration of the Good Old Days.” Soon enough the good days are over,
with the humans beginning to plot the re-subjugation of the native uffts and the uffts in turn organizing an Ufftian Army of Liberation. Link Denham deflects the uffts,
first with goodly helpings of institutional bureaucracy, and eventually liberates
them by privately designing beer-brewing equipment, which ends their depen-
dency on the human trade.
The Diamond Age by Neal Stephenson (1995) is a Hugo Award-winning bil-
dungsroman about a world dominated by nanotechnology and artificial intelli-
gence. The economy depends on a system of public matter compilers, essentially
molecular assemblers acting as fabricating devices, which work like K. Eric Drex-
ler’s proposed nanomachines in Engines of Creation (1986), which “guide chemi-
cal reactions by positioning reactive molecules with atomic precision” (Drexler
1986, 38). The matter compilers are freely used by all people, and raw materials
and energy are delivered from the Source, a vast pit in the ground, by a centralized
utility grid called the Feed. “Whenever Nell’s clothes got too small for her, Harv
would pitch them into the deke bin and then have the M.C. make new ones. Some-
times, if Tequila was going to take Nell someplace where they would see other
moms with other daughters, she’d use the M.C. to make Nell a special dress with
lace and ribbons” (Stephenson 1995, 53).
The short story “Nano Comes to Clifford Falls” by Nancy Kress (2006) explores
the social impact of nanotechnology, which grants every wish of every citizen. It
repeats the time-honored but pessimistic trope about humanity becoming lazy and
complacent when confronted with technological solutionism, with the twist that
men in a society instantly deprived of poverty are left in danger of losing their
morality.
“Printcrime” (2006) by Cory Doctorow, who by no coincidence publishes free
works under a liberal Creative Commons license, is a very short piece first pub-
lished in the journal Nature. The story shares the narrative of an eighteen-year-old
girl named Lanie, who recalls the day ten years before when the police came to
smash her father’s printer-duplicator, which he is using to illegally manufacture
expensive, artificially scarce pharmaceuticals. One of his customers “shopped
him,” essentially informing on his activity. In the last half of the story, Lanie’s
father has just gotten out of prison. He is already asking where he can “get a
printer and some goop.” He recognizes that it was a mistake to print “rubbish” in
the past, but then whispers something in Lanie’s ear: “I’m going to print more
printers. Lots more printers. One for everyone. That’s worth going to jail for.
That’s worth anything.” The novel Makers (2009), also by Cory Doctorow, takes
as its premise a do-it-yourself (DIY) maker subculture that hacks technology,
financial systems, and living situations to, as the author puts it, “discover ways of
staying alive and happy even when the economy is falling down the toilet” (Doc-
torow 2009).
The premise of the novella Kiosk (2008) by pioneering cyberpunk author Bruce
Sterling is the effect of a contraband carbon nanotube printing machine on the
world’s society and economy. The protagonist Boroslav is a popup commercial
kiosk operator in a developing world country—presumably a future Serbia. He
first gets his hands on an ordinary rapid prototyping 3D printer. Children pur-
chase cards to program the device and make things such as waxy, nondurable toys
or cheap jewelry. Eventually, Boroslav comes into possession of a smuggled
fabricator capable of making unbreakable products in only one color. Refunds are
given to those who bring back their products to be recycled into new raw material.
He is eventually exposed as being in possession of a device without proper intel-
lectual property license, and in return for his freedom, he agrees to share the
machine with the government for study. But before turning over the device, he
uses the fabricator to make multiple more copies, which he hides in the jungles
until the time is ripe for a revolution.
Author Iain M. Banks’ sprawling techno-utopian Culture series of novels
(1987–2012) features superintelligences living with humanoids and aliens in a
galactic civilization made distinctive by space socialism and a post-scarcity econ-
omy. The Culture is administered by benevolent artificial intelligences known as
Minds with the help of sentient drones. The sentient living beings in the books do
not work because of the superiority of the Minds, who provide everything neces-
sary for the citizenry. This fact precipitates all sorts of conflict as the biological
population indulges in hedonistic liberties and confronts the meaning of existence
and profound ethical challenges in a utilitarian universe.
Philip L. Frana
See also: Ford, Martin; Technological Singularity; Workplace Automation.
Further Reading
Aguilar-Millan, Stephen, Ann Feeney, Amy Oberg, and Elizabeth Rudd. 2010. “The Post-
Scarcity World of 2050–2075.” Futurist 44, no. 1 (January–February): 34–40.
Bastani, Aaron. 2019. Fully Automated Luxury Communism. London: Verso.
Chace, Calum. 2016. The Economic Singularity: Artificial Intelligence and the Death of
Capitalism. San Mateo, CA: Three Cs.
Chui, Michael, James Manyika, and Mehdi Miremadi. 2016. “Where Machines Could
Replace Humans—And Where They Can’t (Yet).” McKinsey Quarterly, July 2016.
http://pinguet.free.fr/wheremachines.pdf.
Doctorow, Cory. 2006. “Printcrime.” Nature 439 (January 11). https://www.nature.com
/articles/439242a.
Doctorow, Cory. 2009. “Makers, My New Novel.” Boing Boing, October 28, 2009. https://
boingboing.net/2009/10/28/makers-my-new-novel.html.
Drexler, K. Eric. 1986. Engines of Creation: The Coming Era of Nanotechnology. New
York: Doubleday.
Kress, Nancy. 2006. “Nano Comes to Clifford Falls.” Nano Comes to Clifford Fall and
Other Stories. Urbana, IL: Golden Gryphon Press.
Leinster, Murray. 1964. The Duplicators. New York: Ace Books.
Pistono, Federico. 2014. Robots Will Steal Your Job, But That’s OK: How to Survive the
Economic Collapse and Be Happy. Lexington, KY: Createspace.
Saadia, Manu. 2016. Trekonomics: The Economics of Star Trek. San Francisco:
Inkshares.
Stephenson, Neal. 1995. The Diamond Age: Or, a Young Lady’s Illustrated Primer. New
York: Bantam Spectra.
Ware, Andrew. 2018. “Can Artificial Intelligence Alleviate Resource Scarcity?” Inquiry
Journal 4 (Spring): n.p. https://core.ac.uk/reader/215540715.
Williams, Ralph. 1968. “Business as Usual, During Alterations.” In 100 Years of Science
Fiction, edited by Damon Knight, 285–307. New York: Simon and Schuster.
Williamson, Jack. 1947. “With Folded Hands.” Astounding Science Fiction 39, no. 5
(July): 6–45.
also seek to improve patient access to their medical information and help physi-
cians use electronic tools that will make health information more readily avail-
able, reduce inefficiencies in health-care delivery, lower costs, and increase quality
of care (Madara 2016, 1).
While the program is clear in stating that participants will not gain a direct
medical benefit from their involvement, it notes that their engagement could lead
to medical discoveries that may help generations of people far into the future. In
particular, by expanding the evidence-based disease models to include people
from historically underrepresented populations, the program aims to create radically more
effective health interventions that ensure quality and equity in support of efforts
to both prevent disease and reduce premature death (Haskins 2018, 1).
Brett F. Woods
See also: Clinical Decision Support Systems; Computer-Assisted Diagnosis.
Further Reading
Collins, Francis S., and Harold Varmus. 2015. “A New Initiative on Precision Medicine.”
New England Journal of Medicine 372, no. 2 (February 26): 793–95.
Haskins, Julia. 2018. “Wanted: 1 Million People to Help Transform Precision Medicine:
All of Us Program Open for Enrollment.” Nation’s Health 48, no. 5 (July 2018):
1–16.
Madara, James L. 2016. “AMA Statement on Precision Medicine Initiative.” February 25,
2016. Chicago, IL: American Medical Association.
Morrison, S. M. 2019. “Precision Medicine.” Lister Hill National Center for Biomedical
Communications. U.S. National Library of Medicine. Bethesda, MD: National
Institutes of Health, Department of Health and Human Services.
Predictive Policing
Predictive policing refers to proactive policing strategies based on predictions made by software programs, in particular predictions of the places and times at which the risk of crime is highest. These strategies have been increasingly implemented since the late 2000s in the United States and in several other countries around the world. Predictive policing raises sharp controversies regarding its legality and effectiveness.
Policing has always relied on some sort of prediction for its deterrence work. In addition, the study of patterns in criminal behavior and the identification of at-risk individuals have been part of criminology since its early development in the late nineteenth century. The criminal justice system has used predictions since as early as the late 1920s. Since the 1970s, increased attention to geographical aspects of crime, in particular spatial and environmental factors (such as street lighting and weather), has contributed to the establishment of crime mapping as an instrumental tool of policing. “Hot-spot policing,” which allocates police resources (in particular patrols) to areas where crime is most concentrated, has been part of proactive policing strategies increasingly implemented since the 1980s.
A common misconception about predictive policing is that it stops crime before
it occurs, as in the science fiction movie Minority Report (2002). The existing
approaches to predictive policing are based on the idea that criminal behaviors
follow predictable patterns, but unlike traditional crime analysis methods, they
rely on predictive modeling software that statistically analyzes police data and/or applies machine-learning algorithms. They can make
three different types of forecasts (Perry et al. 2013): (1) places and times for higher
risk of crime; (2) individuals likely to commit crimes; and (3) probable identities
of perpetrators and victims of crimes.
However, “predictive policing” usually refers only to the first and second types of predictions. Predictive policing software programs offer two types of modeling. Geospatial models indicate when and where (in which neighborhood or even block) crimes are likely to occur, and they lead to maps of crime “hot spots.” The second type of modeling is individual based. Programs of this type use variables such as age, criminal record, and gang affiliation to estimate the likelihood that a person will be involved in criminal activity, in particular violent activity.
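A minimal sketch of the geospatial idea, written in Python for illustration only, appears below. It is not the proprietary algorithm of PredPol or any other vendor: historical incidents are binned into hypothetical grid cells, recent events are weighted more heavily than older ones, and the highest-scoring cells are flagged as candidate “hot spots.”

```python
from collections import defaultdict

def hot_spot_scores(incidents, decay=0.9):
    """incidents: list of (cell_id, days_ago) pairs from past crime reports."""
    scores = defaultdict(float)
    for cell_id, days_ago in incidents:
        # Exponential decay: recent incidents contribute more to a cell's score.
        scores[cell_id] += decay ** days_ago
    return scores

def top_hot_spots(incidents, k=3):
    scores = hot_spot_scores(incidents)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy grid cells with incidents reported between 1 and 30 days ago.
reports = [("A1", 30), ("B2", 2), ("B2", 5), ("C3", 10), ("A1", 1)]
print(top_hot_spots(reports))  # ['B2', 'A1', 'C3']
```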
These predictions are typically coupled with the implementation of proactive police activities (Ridgeway 2013). In the case of geospatial modeling, this usually means police patrols and checks in crime “hot spots.” In the case of individual-based modeling, it means that individuals deemed at high risk of involvement in criminal activity are placed under surveillance or referred to the police.
Since the late 2000s, police departments have increasingly adopted software
programs from technology companies that make forecasts and help them in imple-
menting predictive policing strategies. In the United States, the Santa Cruz Police
Department was the first to use such a strategy with the implementation of Pred-
Pol in 2011. This software program, inspired by algorithms used for predicting
earthquake aftershocks, provides daily (and sometimes hourly) maps of “hot
spots.” It was first limited to property crimes, but later also included violent
crimes. PredPol is now used by more than sixty police departments around the
United States.
The New Orleans Police Department was also among the first to implement
predictive policing with the use of Palantir from 2012. Several other software pro-
grams have been developed since then, such as CrimeScan, whose algorithm uses seasonal and day-of-the-week trends in addition to crime reports, and HunchLab, which applies machine learning algorithms and incorporates weather patterns.
Besides the implementation of software programs using geospatial modeling,
some police departments use software programs that provide individual-based
modeling. For example, since 2013, the Chicago Police Department has relied on
the Strategic Subject List (SSL), generated by an algorithm that estimates the probability that individuals will be involved in a shooting, as either perpetrators or victims. Individuals with the highest risk scores are then referred for a preventive intervention.
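The features and weights of the actual SSL model have not been fully disclosed. The following is a purely hypothetical sketch of how an individual risk score of this general kind might be computed: the variables, weights, and threshold are invented for illustration, and no claim is made that they resemble Chicago’s system.

```python
import math

# Hypothetical risk variables and weights (not the actual SSL model).
WEIGHTS = {"prior_arrests": 0.35, "prior_shooting_victim": 1.2,
           "age_under_25": 0.6, "gang_affiliation": 0.9}

def risk_score(person, bias=-2.0):
    """Weighted sum of risk variables squashed into a score between 0 and 1."""
    z = bias + sum(WEIGHTS[k] * person.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic squashing

person = {"prior_arrests": 3, "prior_shooting_victim": 1, "age_under_25": 1}
print(round(risk_score(person), 2))  # 0.7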
Predictive policing has also been implemented outside the United States. Pred-
Pol was implemented in the early 2010s in the United Kingdom, and the Crime
Anticipation System, first used in Amsterdam, was made available for all police
departments in The Netherlands in May 2017.
dangerous products available to the public. The purchaser of the product is not the
only one who can claim compensation, as users and third-party bystanders may
also sue if the requirements such as the foreseeability of the injuries are met.
In the United States, product liability is state, not federal, law; thus, the applic-
able law in each case may be different depending on where the injury occurs.
Traditionally, for victims to win in court and be compensated for injuries, they
would have to show that the company responsible was negligent, meaning its
actions failed to meet the appropriate standard of care. To prove negligence, four
elements must be shown. First, the company has to have a legal duty of care to the
consumer. Second, that duty was breached, meaning the manufacturer did not
meet the standard required. Third, the breach of the duty caused the harm, mean-
ing the manufacturer’s actions caused the injury. Finally, there must be actual
injuries to the victims. Showing the company was negligent is one way to be com-
pensated due to harm caused by products.
Product liability claims can also be proved through showing that the company
breached the warranties it made to consumers about the quality and reliability of
the product. Express warranties can include how long the product is under war-
ranty and what parts of the product are part of the warranty and what parts are
excluded. Implied warranties that apply to all products include the warranties that
the product would work as claimed and would work for the specific purpose for
which the consumer purchased it.
In the vast majority of product liability cases, strict liability
would be the standard applied by the courts, where the company would be liable
regardless of fault if the requirements are met. This is because the courts have
found that it would be difficult for consumers to prove the company is negligent
due to the company having more knowledge and resources. For the theory of strict
liability, instead of showing that a duty was not met, consumers need to show that there was an unreasonably dangerous defect related to the product; this defect
caused the injury while the product was being used for its intended purpose, and
the product was not substantially altered from the condition in which it was sold to
consumers.
The three types of defects that can be claimed for product liability are design
defects, manufacturing defects, and defects in marketing, also known as failure to
warn. A design defect exists when there are flaws in the design of the product itself, introduced during the planning stage. The company is responsible if, while the product was being designed, there was a foreseeable risk that it would cause injuries when used by consumers. A manufacturing defect arises when there are problems during the manufacturing process, such as the use of low-quality materials or careless workmanship, so that the end product falls short of the otherwise appropriate design. A failure-to-warn defect occurs when the product contains some inherent danger regardless of how well it was designed or manufactured, but the company did not warn consumers that the product could be dangerous.
While product liability law was invented to deal with the introduction of
increasingly complex technology that could cause injuries to consumers, it is
unclear whether the current law can apply to AI or whether the law needs to be
changed in order to fully protect consumers. There are several areas that will
require clarification or modifications in the law when it comes to AI. The use of
product liability means that there needs to be a product, and it is sometimes not
clear whether software or an algorithm is a product or a service. If they are classified
as products, product liability law would apply. If they are services, then consum-
ers must rely on traditional negligence claims instead. Whether product liability
can be used by consumers to sue the manufacturer will depend on the particular
AI technology that caused the harm and what the court in each situation decides.
Additional questions are raised when the AI technology is able to learn and act
beyond its original programming. Under these circumstances, it is unclear whether
an injury can still be attributed to the design or manufacture of the product because
the AI’s actions may not have been foreseeable. Also, because AI relies on probability-based predictions, it will at some point make a choice that results in some kind of injury even when that choice is the best available course of action; it may therefore not be fair for the manufacturer to bear the risk when the AI is expected, by design, to cause some damage.
In response to these challenging questions, some commentators have proposed
that AI should be held to a different legal standard than the strict liability used for
more traditional products. For example, they suggest that medical AI technology
should be treated as reasonable human doctors or medical students and that auton-
omous cars should be treated as reasonable human drivers. AI products would still
be responsible for injuries they cause to consumers, but the standard they would
have to meet would be the reasonable human in the same situation. The AI would
only be liable for the injuries if a person in the same situation would also have
been unable to avoid causing the harm. This leads to the question of whether the
designers or manufacturers would be vicariously liable because they had the right, ability, and duty to control the AI or whether the AI would be seen as a legal person that would itself be responsible for compensating the victims.
It will be increasingly difficult to make the distinction between traditional and
more sophisticated products as AI technology develops, but as there are no alterna-
tives in the law yet, product liability remains for now the legal framework to
determine who is responsible and under what circumstances consumers have to be
financially compensated when AI causes injuries.
Ming-Yu Bob Kao
See also: Accidents and Risk Assessment; Autonomous and Semiautonomous Systems;
Calo, Ryan; Driverless Vehicles and Liability; Trolley Problem.
Further Reading
Kaye, Timothy S. 2015. ABA Fundamentals: Products Liability Law. Chicago: American
Bar Association.
Owen, David. 2014. Products Liability in a Nutshell. St. Paul, MN: West Academic
Publishing.
Turner, Jacob. 2018. Robot Rules: Regulating Artificial Intelligence. Cham, Switzerland:
Palgrave Macmillan.
Weaver, John Frank. 2013. Robots Are People Too: How Siri, Google Car, and Artificial
Intelligence Will Force Us to Change Our Laws. Santa Barbara, CA: Praeger.
Q
Quantum AI
Johannes Otterbach, a physicist at Rigetti Computing in Berkeley, California, has
said that artificial intelligence and quantum computing are natural allies because
both technologies are intrinsically statistical. Many companies have moved into
the area: Airbus, Atos, Baidu, ⟨b|eit⟩, Cambridge Quantum Computing, Elyah,
Hewlett-Packard (HP), IBM, Microsoft Research QuArC, QC Ware, Quantum
Benchmark Inc., R QUANTECH, Rahko, and Zapata Computing among
them.
Traditional general-purpose computing architectures encode and manipulate
data in units known as bits. Bits can take one of two states, either 0 or 1. Quantum
computers process information by manipulating the behaviors of subatomic par-
ticles such as electrons or photons. Two of the most important phenomena exploited
by quantum computers are superposition—particles existing across all possible
states at once—and entanglement—the pairing and connection of particles such
that they cannot be described independently of the state of others, even over great
distances. Albert Einstein called such an entanglement “spooky action at a
distance.”
Quantum computers store data in so-called quantum registers, which are com-
posed of a series of quantum bits, or qubits. While a definitive explanation is elusive, a qubit may be thought of as existing concurrently in a weighted combination of its two basis states. Each qubit added to the system doubles the size of its state space. Describing the state of a quantum computer with only fifty entangled qubits would require more than one quadrillion classical values. Sixty
qubits could carry all the data produced by humanity in a single year. Three hun-
dred qubits could compactly encode an amount of data equivalent to the classical
information content of the observable universe.
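The exponential growth described above can be seen in a short, illustrative Python sketch, which is not tied to any particular quantum software development kit: a register of n qubits is described classically by 2^n complex amplitudes, and a measurement picks one basis state with probability given by the squared magnitude of its amplitude.

```python
import numpy as np

def uniform_superposition(n_qubits):
    """State vector of n qubits in an equal superposition of all basis states."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

def measure(state, rng=np.random.default_rng(0)):
    """Simulate a single measurement; outcomes weighted by |amplitude|**2."""
    probabilities = np.abs(state) ** 2
    return int(rng.choice(len(state), p=probabilities))

print(len(uniform_superposition(3)))      # 8 amplitudes for 3 qubits
print(2 ** 50)                            # 1125899906842624, over a quadrillion
print(measure(uniform_superposition(3)))  # a basis state between 0 and 7
```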
Quantum computers can, in principle, work on enormous numbers of separate calculations, data sets, or processes massively in parallel. A working artificially intelligent
quantum computer could potentially monitor and manage all the traffic of a city in
real time, which would make true autonomous transportation feasible. Quantum
artificial intelligence could also match a single face to a database of billions of
photos instantly by comparing them all to the reference photo concurrently. The
invention of quantum computing has precipitated radical changes in our under-
standing of computation, programming, and complexity.
Most quantum algorithms encompass a sequence of quantum state transforma-
tions followed by a measurement. The theory of quantum computing dates to the
1980s, when physicists—including Yuri Manin, Richard Feynman, and David
Hartmann believes the effort may be ten years away from a working quantum
computing artificial intelligence.
First out of the gate in producing quantum computers in commercial quantities
was D-Wave, a company based in Vancouver, British Columbia. D-Wave began
manufacturing annealing quantum processors in 2011. Annealing processors are
special-purpose products used for a limited set of problems where search space is
discrete—such as in combinatorial optimization problems—with many local min-
ima. The D-Wave computer is not polynomially equivalent to a universal quantum
computer and is incapable of executing Shor’s algorithm. The company counts
Lockheed Martin, the University of Southern California, Google, NASA, and the
Los Alamos National Lab among its customers.
Google, Intel, Rigetti, and IBM are all pursuing universal quantum computers.
Each has quantum processors on the order of fifty qubits. The Google AI Quantum lab, directed by Hartmut Neven, released its 72-qubit Bristlecone processor in 2018. The same year, Intel released its 49-qubit Tangle Lake processor. The Rigetti Computing Aspen-1 processor has sixteen qubits. The IBM Q
Experience quantum computing center is located at the Thomas J. Watson
Research Center in Yorktown Heights, New York. IBM is partnering with several
companies—including Honda, JPMorgan Chase, and Samsung—to develop quan-
tum commercial applications. The company has also invited the public to submit
experiments for processing on their quantum computers.
Government agencies and universities are also heavily invested in quantum AI
research. The NASA Quantum Artificial Intelligence Laboratory (QuAIL) pos-
sesses a 2,048-qubit D-Wave 2000Q quantum computer, upon which it hopes to
solve NP-hard problems in data analysis, anomaly detection and decision-making,
air traffic management, and mission planning and coordination. The NASA group
has decided to focus on the hardest machine learning problems—for example,
generative models in unsupervised learning—in order to demonstrate the full
potential advantage of the technology. NASA researchers have also decided to
concentrate on hybrid quantum-classical approaches in order to maximize the
value of D-Wave resources and capabilities. Fully quantum machine learning, by contrast, is under study in many labs across the world. Quantum learning theory
posits that quantum algorithms might be used to solve machine learning tasks,
which would in turn improve classical machine learning methods. In quantum
learning theory, classical binary data sets are fed into a quantum computer for
processing.
The NIST Joint Quantum Institute and the Joint Center for Quantum Informa-
tion and Computer Science with the University of Maryland are also building
bridges between machine learning and quantum computing. The NIST-UMD partnership hosts workshops that bring together experts in mathematics, com-
puter science, and physics to apply artificial intelligence algorithms in control of
quantum systems. The partnership also encourages engineers to use quantum
computing to improve the performance of machine learning algorithms. NIST
also hosts the Quantum Algorithm Zoo, a catalog of all known quantum
algorithms.
and was its president from 1987 to 1989. Reflecting the increasingly international character of the research community, shaped by people such as Reddy, the AAAI has since been renamed the Association for the Advancement of Artificial Intelligence, though it retains its old logo, acronym (AAAI), and mission.
Reddy’s research focused on artificial intelligence, the science of imparting intelligence to computers. He worked on voice control of robots, speaker-independent speech recognition, and unrestricted-vocabulary dictation, helping to make continuous speech dictation possible.
Reddy, along with his colleagues, has made important contributions to computer analysis of natural scenes, task-oriented computer architectures, universal access to information (an initiative UNESCO also backs), and autonomous robotic systems. With his colleagues, Reddy helped to create Hearsay II, Dragon, Harpy, and Sphinx I/II. One of the key ideas emerging from this work, the blackboard model, has been adopted widely in many areas of AI. Reddy was also interested in using technology for the betterment of society and served as Chief Scientist for the Centre Mondial Informatique et Ressources Humaines (World Center for Computing and Human Resources) in France.
He helped the Indian government to establish the Rajiv Gandhi University of Knowledge Technologies in India, which primarily works with low-income rural youth. He serves on the governing council of the International Institute of Information Technology (IIIT), Hyderabad, a nonprofit public-private partnership focused on technology and applied research. He was a member of the governing council of the Emergency Management and Research Institute (EMRI), a nonprofit public-private partnership that provides emergency medical services to the public and has also assisted with emergency management in neighboring Sri Lanka. He was also a member of the Health Care Management Research Institute (HMRI), which provides nonemergency health-care advice to rural populations, especially in the Indian state of Andhra Pradesh.
Reddy shared the Turing Award, the highest distinction in computer science, in 1994 with Edward A. Feigenbaum and became the first person of Indian/Asian
origin to win the award. He was also awarded the IBM Research Ralph Gomory
Fellow Award in 1991, the Okawa Foundation’s Okawa Prize in 2004, the Honda
Foundation’s Honda Prize in 2005, and the U.S. National Science Board’s Vanne-
var Bush Award in 2006.
Reddy has been awarded fellowships in many top professional bodies, including the Institute of Electrical and Electronics Engineers (IEEE), the Acoustical Soci-
ety of America, and the American Association for Artificial Intelligence.
M. Alroy Mascrenghe
See also: Autonomous and Semiautonomous Systems; Natural Language Processing and
Speech Understanding.
Further Reading
Reddy, Raj. 1988. “Foundations and Grand Challenges of Artificial Intelligence.” AI Mag-
azine 9, no. 4 (Winter): 9–21.
Reddy, Raj. 1996. “To Dream the Possible Dream.” Communications of the ACM 39, no. 5
(May): 105–12.
Robot Ethics
Robot ethics identifies a subfield of technology ethics that investigates, elucidates,
and contends with the moral opportunities and challenges that arise from the
design, development, and deployment of robots and related autonomous systems.
As an umbrella term, “robot ethics” covers several related but different efforts and
endeavors.
The first recognized articulation of a robot ethics appears in fiction, specifically
Isaac Asimov’s robot stories collected in the book I, Robot (1950). In the short
story “Runaround,” which first appeared in the March 1942 issue of Astounding
Science Fiction, Asimov introduced the three laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human
being to come to harm.
2. A robot must obey the orders given to it by human beings except where such
orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not
conflict with the First or Second Laws. (Asimov 1950, 40)
In his 1985 novel Robots and Empire, Asimov added a fourth element to the sequence, which he calls the “zeroth law”: a robot may not harm humanity or, through inaction, allow humanity to come to harm. The new law was numbered zero in an effort to preserve the hierarchy whereby lower-numbered laws take precedence over higher-numbered ones.
By design, the laws are both functionalist and anthropocentric, describing a
sequence of nested restrictions on robot behavior for the purposes of respecting
the interests and well-being of human individuals and communities. Despite this,
the laws have been criticized as insufficient and impractical for an actual moral
code of conduct.
Asimov employed the laws to generate compelling science fiction stories and
not to resolve real-world challenges regarding machine action and robot behavior.
Consequently, Asimov never intended his rules to be a complete and definitive set
of instructions for actual robots. He employed the laws as a literary device for
generating dramatic tension, fictional scenarios, and character conflict. As Lee
McCauley (2007, 160) succinctly explains, “Asimov’s Three Laws of Robotics are
literary devices and not engineering principles.”
Theorists and practitioners working in the fields of robotics and computer eth-
ics have found Asimov’s laws to be significantly underpowered for everyday prac-
tical employment. Philosopher Susan Leigh Anderson grapples directly with this
issue, demonstrating not only that Asimov himself disregarded his laws as a foun-
dation for machine ethics but also that the laws are insufficient as a foundation for
an ethical framework or system (Anderson 2008, 487–93). Consequently, even
though there is widespread familiarity with the Three Laws of Robotics among
researchers and developers, there is also a general recognition that the laws are not
computable or able to be implemented in any meaningful sense.
Beyond Asimov’s initial science fiction prototyping, there are several variants
of robot ethics developed in the scientific literature. These include roboethics,
robot ethics, and robot rights. The concept of roboethics was introduced by roboti-
cist Gianmarco Veruggio in 2002. It was publicly discussed in 2004 during the
Unlike roboethics, which is interested in the moral conduct of the human designer,
developer, or user, robot ethics is concerned with the moral conduct of the machine
itself. Robot ethics is often associated with the term “machine ethics” (and Verug-
gio uses the two signifiers interchangeably). Unlike computer ethics, which is
interested in the moral conduct of the human designer, developer, or user of the
device, machine ethics is concerned with the moral capability of machines them-
selves (Anderson and Anderson 2007, 15).
A similar line of thinking has been developed by Wendell Wallach and Colin
Allen under the banner Moral Machines. According to Wallach and Allen (2009,
6), “The field of machine morality extends the field of computer ethics beyond
concern for what people do with their computers to questions about what the
machines do by themselves.” Whereas roboethics, like computer ethics before it,
considers technology to be a more or less transparent tool or instrument of human
moral decision-making and action, robot ethics is concerned with the design and
development of artificial moral agents. Patrick Lin et al. (2012 and 2017) have
sought to gather up and unify all of these efforts under a more general formulation
of the term as an emerging field of applied moral philosophy.
To date, most of the work in robot ethics has been limited to questions regard-
ing responsibility either as it applies to the human developers of robotic systems
or as it belongs or is assigned to the robotic device itself. This is, however, only
one side of the issue. As Luciano Floridi and J. W. Sanders (2001, 349–50) cor-
rectly recognize, ethics involves social relationships composed of two interacting
components: the actor (or the agent) and the recipient of the action. Most efforts in
roboethics and robot ethics can be characterized as exclusively agent-oriented
undertakings.
“Robot rights,” a term advanced by philosophers Mark Coeckelbergh (2010)
and David Gunkel (2018) and the legal scholars Kate Darling (2012) and Alain
Bensoussan and Jérémy Bensoussan (2015), looks at the issue from the other side
by considering the moral or legal status of the robot. For these investigators, robot
ethics is concerned with not just the moral conduct of the robot but also the moral
and legal status of the artifact and the position it occupies in our ethical and legal
systems as a potential subject and not just an object. This concept was recently debated in the European Parliament, which considered a new legal category of “electronic person” to deal with the social integration of increasingly autonomous robots and AI systems.
In summary, the term “robot ethics” captures a spectrum of different but related
efforts regarding robots and their social impact and consequences. In the more
specific version of roboethics, it designates a branch of applied or professional eth-
ics concerning moral issues regarding the design, development, and implementa-
tion of robots and related autonomous technology. Formulated more generally,
robot ethics denotes a subfield of moral philosophy that is concerned with the
moral and legal exigencies of robots as both agents and patients.
David J. Gunkel
See also: Accidents and Risk Assessment; Algorithmic Bias and Error; Autonomous
Weapons Systems, Ethics of; Driverless Cars and Trucks; Moral Turing Test; Robot
Ethics; Trolley Problem.
Further Reading
Anderson, Michael, and Susan Leigh Anderson. 2007. “Machine Ethics: Creating an
Ethical Intelligent Agent.” AI Magazine 28, no. 4 (Winter): 15–26.
Anderson, Susan Leigh. 2008. “Asimov’s ‘Three Laws of Robotics’ and Machine Meta-
ethics.” AI & Society 22, no. 4 (March): 477–93.
Asimov, Isaac. 1950. “Runaround.” In I, Robot, 30–47. New York: Doubleday.
Asimov, Isaac. 1985. Robots and Empire. Garden City, NY: Doubleday.
Bensoussan, Alain, and Jérémy Bensoussan. 2015. Droit des Robots. Brussels: Éditions
Larcier.
Coeckelbergh, Mark. 2010. “Robot Rights? Towards a Social-Relational Justification of
Moral Consideration.” Ethics and Information Technology 12, no. 3 (September):
209–21.
Darling, Kate. 2012. “Extending Legal Protection to Social Robots.” IEEE Spectrum,
September 10, 2012. https://spectrum.ieee.org/automaton/robotics/artificial-intelligence
/extending-legal-protection-to-social-robots.
Floridi, Luciano, and J. W. Sanders. 2001. “Artificial Evil and the Foundation of Computer
Ethics.” Ethics and Information Technology 3, no. 1 (March): 56–66.
Foundation for Responsible Robotics (FRR). 2019. Mission Statement. https://responsible
robotics.org/about-us/mission/.
Gunkel, David J. 2018. Robot Rights. Cambridge, MA: MIT Press.
Lin, Patrick, Keith Abney, and George A. Bekey. 2012. Robot Ethics: The Ethical and
Social Implications of Robotics. Cambridge, MA: MIT Press.
Lin, Patrick, Ryan Jenkins, and Keith Abney. 2017. Robot Ethics 2.0: New Challenges in
Philosophy, Law, and Society. New York: Oxford University Press.
McCauley, Lee. 2007. “AI Armageddon and the Three Laws of Robotics.” Ethics and
Information Technology 9, no. 2 (July): 153–64.
Veruggio, Gianmarco. 2006. “The EURON Roboethics Roadmap.” In 2006 6th IEEE-
RAS International Conference on Humanoid Robots, 612–17. Genoa, Italy: IEEE.
Veruggio, Gianmarco, and Fiorella Operto. 2008. “Roboethics: Social and Ethical Impli-
cations of Robotics.” In Springer Handbook of Robotics, edited by Bruno Siciliano
and Oussama Khatib, 1499–1524. New York: Springer.
Veruggio, Gianmarco, Jorge Solis, and Machiel Van der Loos. 2011. “Roboethics: Ethics
Applied to Robotics.” IEEE Robotics & Automation Magazine 18, no. 1 (March):
21–22.
Wallach, Wendell, and Colin Allen. 2009. Moral Machines: Teaching Robots Right from
Wrong. Oxford, UK: Oxford University Press.
RoboThespian
RoboThespian is an interactive robot designed by the English company Engi-
neered Arts and is characterized by the company as a humanoid, meaning that it
was built to resemble a human. The first iteration of the robot was introduced in
2005, with subsequent upgrades introduced in 2007, 2010, and 2014. The robot is
the size of a human, with a plastic face, metal arms, and legs that can move in a
range of ways. The robot’s video-camera eyes can follow a person’s movements and guess his or her age and mood, and it speaks with a digital voice. According to Engi-
neered Arts’ website, all RoboThespians come with a touchscreen, which allows
users to control and customize their experience with the robot, giving them the
ability to animate it and change its language. Users can also control it remotely
through the use of a tablet, but a live operator is not required because the robot can
be preprogrammed.
RoboThespian was designed to interact with humans in a variety of public
spaces, such as universities, museums, hotels, trade shows, and exhibitions. In
places such as science museums, the robot is used as a tour guide. It has the ability
to deliver scripted content, explain and demonstrate technological advances, read
QR codes, recognize facial expressions, respond to gestures, and interact with
users through a touchscreen kiosk.
In addition to these practical applications, RoboThespian can entertain. It comes
loaded with a variety of songs, gestures, greetings, and impressions. RoboThespian
has also acted on the stage. It can sing, dance, perform, read a script, and speak
with expression. Because it comes equipped with cameras and facial recognition, it
can react to audiences and predict viewers’ moods. Engineered Arts reports that as an actor it has “a huge range of facial emotion” that “can be accurately displayed with the subtle nuance, normally only achievable through human actors”
(Engineered Arts 2017). In 2015, the play Spillikin premiered at the Pleasance The-
atre during the Edinburgh Festival Fringe. RoboThespian acted alongside four
human actors in a love story about a husband who builds a robot for his wife, to
keep her company after he dies. After its premiere, the play toured Britain from
2016 to 2017 and was met with great acclaim.
Companies who order a RoboThespian have the ability to customize the robot’s
content to suit their needs. The look of the robot’s face or other design components
can be customized. It can have a projected face, hands that can grip, and legs that
can move. Currently, RoboThespians are installed in places around the world such
as NASA Kennedy Center in the United States, the National Science and Technol-
ogy Museum in Spain, and the Copernicus Science Centre in Poland. The robot
can be found at academic institutions such as the University of Central Florida,
University of North Carolina at Chapel Hill, University College London, and Uni-
versity of Barcelona.
Crystal Matey
See also: Autonomous and Semiautonomous Systems; Ishiguro, Hiroshi.
Further Reading
Engineered Arts. 2017. “RoboThespian.” Engineered Arts Limited. www.engineeredarts.
co.uk.
Hickey, Shane. 2014. “RoboThespian: The First Commercial Robot That Behaves Like a
Person.” The Guardian, August 17, 2014. www.theguardian.com/technology/2014
/aug/17/robothespian-engineered-arts-robot-human-behaviour.
Rucker, Rudy (1946–)
Rudolf von Bitter Rucker, known as Rudy Rucker, is an American author, math-
ematician, and computer scientist and the great-great-great-grandson of philoso-
pher Georg Wilhelm Friedrich Hegel (1770–1831). Having widely published in a
range of fictional and nonfictional genres, Rucker is most widely known for his
satirical, mathematics-heavy science fiction. His Ware tetralogy (1982–2000) is
considered one of the foundational works of the cyberpunk literary movement.
Rucker obtained his PhD in mathematics from Rutgers University in 1973.
After teaching mathematics at universities in the United States and Germany, he
switched to teaching computer science at San José State University, where he
eventually became a professor, until his retirement in 2004.
To date, Rucker has published forty books, which include science fiction nov-
els, short story collections, and nonfiction books. His nonfiction intersects the
fields of mathematics, cognitive science, philosophy, and computer science: his
books cover subjects including the fourth dimension and the meaning of computa-
tion. His most famous nonfiction work, the popular mathematics book Infinity and
the Mind: The Science and Philosophy of the Infinite (1982), continues to be in
print at Princeton University Press.
With the Ware series (Software 1982, Wetware 1988, Freeware 1997, and Real-
ware 2000), Rucker made his mark in the cyberpunk genre. Software won the first
Philip K. Dick Award, the prestigious American science fiction award given out
each year since Dick’s death in 1983. In 1988, Wetware also won this award, in a
tie with Paul J. McAuley’s Four Hundred Billion Stars. The series was republished
in 2010 in one volume, as The Ware Tetralogy, which Rucker has made available
for free online as an e-book under a Creative Commons license.
The Ware series starts with the story of Cobb Anderson, a retired roboticist
who has fallen from grace for having made intelligent robots with free will, so-
called boppers. The boppers wish to reward him by granting him immortality
through mind uploading; however, this process turns out to involve the complete
destruction of Cobb’s brain, hardware that the boppers do not find essential. In
Wetware, a bopper called Berenice aspires instead to create a human-machine
hybrid by impregnating Cobb’s niece. Humanity retaliates by setting loose a mold
that kills boppers, but this chipmould turns out to thrive on the cladding covering
the outside of the boppers and ends up creating an organic-machine hybrid after
all. Freeware revolves around these lifeforms, now nicknamed mouldies, which
are universally despised by biological humans. This novel also introduces alien
intelligences, which in Realware give the various forms of human and artificial
beings advanced technology with the ability to reshape reality.
Rucker’s 2007 novel Postsingular was the first of his works to be released under
a Creative Commons license. Set in San Francisco, the novel explores the emer-
gence of nanotechnology, first in a dystopian extrapolation and then in a utopian
one. In the first part, a renegade engineer develops nanocreatures called nants that
turn Earth into a virtual simulation of itself, destroying the planet in the process,
until a child is able to reverse their programming. The novel then describes a dif-
ferent kind of nanotechnology, orphids, that allow humans to become cognitively
enhanced, hyperintelligent beings.
Although the Ware tetralogy and Postsingular have been categorized as cyber-
punk novels, Rucker’s fiction, mixing hard science with satire, explicit sex, and
omnipresent drug use, has been generally considered difficult to categorize. How-
ever, as science fiction scholar Rob Latham notes, “Happily, Rucker himself has
coined a term to describe his peculiar fusion of mundane experience and outra-
geous fantasy: transrealism” (Latham 2005, 4). In 1983, Rucker published “A
Transrealist Manifesto,” in which he states that “Transrealism is not so much a
type of SF as it is a type of avant-garde literature” (Rucker 1983, 7). In a 2002
interview, he explained, “This means writing SF about yourself, your friends and
your immediate surroundings, transmuted in some science-fictional way. Using
real life as a model gives your work a certain literary quality, and it prevents you
from falling into the use of clichés” (Brunsdale 2002, 48). Rucker and cyberpunk
author Bruce Sterling collaborated on the short story collection Transreal Cyber-
punk, which was published in 2016.
After suffering a cerebral hemorrhage in 2008, Rucker decided to write his
autobiography Nested Scrolls. Published in 2011, it was awarded the Emperor
Norton Award for “extraordinary invention and creativity unhindered by the con-
straints of paltry reason.” His most recent work is Million Mile Road Trip (2019), a
science fiction novel about a group of human and nonhuman characters on an
interplanetary road trip.
Kanta Dihal
See also: Digital Immortality; Nonhuman Rights and Personhood; Robot Ethics.
Further Reading
Brunsdale, Mitzi. 2002. “PW talks with Rudy Rucker.” Publishers Weekly 249, no. 17
(April 29): 48. https://archive.publishersweekly.com/?a=d&d=BG20020429.1.82
&srpos=1&e=-------en-20--1--txt-txIN%7ctxRV-%22PW+talks+with+Rudy+Ruc
ker%22---------1.
Latham, Rob. 2005. “Long Live Gonzo: An Introduction to Rudy Rucker.” Journal of the
Fantastic in the Arts 16, no. 1 (Spring): 3–5.
Rucker, Rudy. 1983. “A Transrealist Manifesto.” The Bulletin of the Science Fiction
Writers of America 82 (Winter): 7–8.
Rucker, Rudy. 2007. “Postsingular.” https://manybooks.net/titles/ruckerrother07post
singular.html.
Rucker, Rudy. 2010. The Ware Tetralogy. Gaithersburg, MD: Prime Books, 2010.
S
Simon, Herbert A. (1916–2001)
Herbert A. Simon was an interdisciplinary researcher who made fundamental
contributions to artificial intelligence. He is widely considered one of the most
influential social scientists of the twentieth century. His work for Carnegie Mellon
University spanned more than five decades.
The concept of the computer as a symbol manipulator instead of a mere number
cruncher drove early artificial intelligence research. The idea of production systems, which incorporate sets of rules over symbol strings specifying the conditions that must hold before a rule may be applied and the actions to be performed or conclusions to be derived, is attributed to Emil Post, who first wrote about this type of computational model in 1943. Simon, along with his Carnegie Mellon
colleague Allen Newell, promoted these ideas about symbol manipulation and
production systems to a wider audience by extolling their potential virtues for
general-purpose reading, storing, and copying, and comparing different symbols
and patterns.
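A toy production system in the spirit described above can be sketched in a few lines of Python. This is a simplification for illustration, not Post’s formal systems or Newell and Simon’s actual implementations: each rule pairs a condition on working memory with a conclusion to add, and rules fire repeatedly until nothing new can be derived.

```python
def run_production_system(rules, working_memory):
    """Fire rules until no rule adds anything new to working memory."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(working_memory):
                new_fact = action(working_memory)
                if new_fact not in working_memory:
                    working_memory.add(new_fact)
                    changed = True
    return working_memory

# Two toy rules: each pairs a condition on working memory with a conclusion.
rules = [
    (lambda wm: "socrates-is-human" in wm, lambda wm: "socrates-is-mortal"),
    (lambda wm: "socrates-is-mortal" in wm, lambda wm: "socrates-will-die"),
]
print(run_production_system(rules, {"socrates-is-human"}))
```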
The Logic Theorist program created by Simon, Newell, and Cliff Shaw was the
first to use symbol manipulation to produce “intelligent” behavior. Logic Theorist
could independently prove theorems outlined in the Principia Mathematica (1910)
of Bertrand Russell and Alfred North Whitehead. Perhaps most famously, the
Logic Theorist program discovered a shorter, more elegant proof of Theorem 2.85
in the Principia Mathematica, which the Journal of Symbolic Logic promptly
refused to publish because it had been coauthored by a computer.
Although it was theoretically possible to prove the theorems of the Principia
Mathematica in an exhaustively manual and systematic way, it was impossible in
practice because of the amount of time consumed. Newell and Simon were inter-
ested in the rules of thumb used by humans to solve complex problems for which
an exhaustive search for solutions was impossible because of the vast amounts of
computation required. They dubbed these rules of thumb “heuristics,” describing
them as techniques that may solve problems, but offer no guarantees.
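The contrast between exhaustive search and a heuristic rule of thumb can be illustrated with a small, hypothetical routing problem, sketched below in Python. The distances are invented; the point is only that the exhaustive method is guaranteed optimal but grows factorially, while the greedy heuristic is fast yet carries no guarantee.

```python
from itertools import permutations

distances = {  # pairwise distances between four hypothetical sites
    ("A", "B"): 1, ("A", "C"): 2, ("A", "D"): 6,
    ("B", "C"): 3, ("B", "D"): 5, ("C", "D"): 8,
}

def dist(x, y):
    return distances[(x, y)] if (x, y) in distances else distances[(y, x)]

def route_length(order):
    return sum(dist(a, b) for a, b in zip(order, order[1:]))

def exhaustive_best(start, others):
    # Algorithmic: examine every ordering, which grows factorially.
    return min(route_length((start,) + p) for p in permutations(others))

def nearest_neighbor(start, others):
    # Heuristic rule of thumb: always visit the nearest unvisited site next.
    here, remaining, total = start, set(others), 0
    while remaining:
        nxt = min(remaining, key=lambda site: dist(here, site))
        total += dist(here, nxt)
        here, remaining = nxt, remaining - {nxt}
    return total

print(exhaustive_best("A", ("B", "C", "D")))   # 10, guaranteed shortest route
print(nearest_neighbor("A", ("B", "C", "D")))  # 12, fast but not optimal here
```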
A heuristic is a “rule of thumb” used to solve a problem too complex or too
time-consuming to be solved using an exhaustive search, a formula, or a step-by-
step approach. In computer science, heuristic methods are often contrasted with
algorithmic methods, with a key distinguishing feature being the outcome of the
method. According to this distinction, a heuristic program will generally—though
not always—yield good results, while an algorithmic program is an unambiguous
procedure guaranteeing a solution. However, this distinction is not a technical
one. In fact, over time a heuristic method may prove to consistently yield the
dictates the order and execution of tasks in the preparation of a product—in this
case, food.
List processing was developed in 1956 by Newell, Shaw, and Simon for the
Logic Theorist program. List processing is a programming method that allows for
dynamic storage allocation. It is mainly used for symbol-manipulation applications such as compiler writing, graphic or linguistic data processing, and especially artificial intelligence. Allen Newell, J. Clifford Shaw, and Herbert A. Simon are credited with developing the first list processing program, with large, complex, and flexible memory structures that do not depend on consecutive machine memory locations. Several higher-order languages include list processing tech-
niques. Most prominent are IPL and LISP, two artificial intelligence languages.
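The basic idea can be sketched with a minimal linked list of “cells,” each holding a symbol and a reference to the next cell, so that structures grow and shrink without occupying contiguous memory. The Python sketch below is purely illustrative and is not the actual data layout of IPL or LISP.

```python
class Cell:
    def __init__(self, symbol, rest=None):
        self.symbol = symbol   # the datum stored in this cell
        self.rest = rest       # reference to the next cell, or None

def from_symbols(symbols):
    head = None
    for s in reversed(symbols):
        head = Cell(s, head)   # each cell is allocated wherever space exists
    return head

def to_list(cell):
    out = []
    while cell is not None:
        out.append(cell.symbol)
        cell = cell.rest
    return out

lst = from_symbols(["IF", "P", "THEN", "Q"])
lst = Cell("PROVE", lst)       # prepending needs no copying or reshuffling
print(to_list(lst))            # ['PROVE', 'IF', 'P', 'THEN', 'Q']
```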
In the early 1960s, Simon and Newell introduced their General Problem
Solver (GPS), which fully explicates the fundamental features of symbol manipu-
lation as a general process that underlies all forms of intelligent problem-solving
behavior. GPS became the basis for decades of early work in AI. General Problem
Solver is a program for a problem-solving process that uses means-ends analysis
and planning to arrive at a solution. GPS was designed so that the problem-solving
process is distinct from knowledge specific to the problem to be solved, which
allows it to be used for a variety of problems.
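A toy version of means-ends analysis, sketched in Python below, illustrates the separation between the problem-solving process and problem-specific knowledge. It is a simplification under invented operator names, not Newell and Simon’s program: the solver compares the current state with a goal, selects an operator whose effect removes a remaining difference, and treats unmet preconditions as subgoals (assuming the operators contain no cycles).

```python
def achieve(goal_fact, state, operators, plan):
    """Recursively reduce the difference between the state and one goal fact."""
    if goal_fact in state:
        return state
    op = next(op for op in operators if op["adds"] == goal_fact)
    for precondition in op["needs"]:   # unmet preconditions become subgoals
        state = achieve(precondition, state, operators, plan)
    plan.append(op["name"])
    return state | {op["adds"]}

def means_ends(goals, operators):
    state, plan = set(), []
    for goal in goals:
        state = achieve(goal, state, operators, plan)
    return plan

operators = [
    {"name": "pack-bag", "needs": [], "adds": "bag-packed"},
    {"name": "buy-ticket", "needs": [], "adds": "has-ticket"},
    {"name": "board-train", "needs": ["has-ticket", "bag-packed"],
     "adds": "at-destination"},
]
print(means_ends(["at-destination"], operators))
# ['buy-ticket', 'pack-bag', 'board-train']
```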
Simon is also a noted economist, political scientist, and cognitive psychologist.
In addition to fundamental contributions to organizational theory, decision-
making, and problem-solving, Simon is famous for the concepts of bounded ratio-
nality, satisficing, and power law distributions in complex systems. All three
concepts are of interest to computer and data scientists. Bounded rationality
accepts that human rationality is fundamentally limited. Humans do not possess
the time or information that would be necessary to make perfect decisions; prob-
lems are hard, and the mind has cognitive boundaries. Satisficing is a way of
describing a decision-making process that results not in the most optimal solution,
but one that “satisfies” and “suffices.” In market situations, for instance, custom-
ers practice satisficing when they select products that are “good enough,” meaning
adequate or acceptable.
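The difference between satisficing and optimizing can be shown with a short sketch using hypothetical product scores: the satisficer accepts the first option that clears an aspiration level, while the optimizer examines every option to find the single best one.

```python
def satisfice(options, aspiration_level):
    for name, score in options:          # stop at the first "good enough" pick
        if score >= aspiration_level:
            return name
    return None

def optimize(options):
    return max(options, key=lambda pair: pair[1])[0]

products = [("budget", 6.1), ("midrange", 7.8), ("premium", 9.3)]
print(satisfice(products, aspiration_level=7.0))  # 'midrange' -- good enough
print(optimize(products))                         # 'premium' -- the best
```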
In his research on complex organizations, Simon explained how power law dis-
tributions were derived from preferential attachment processes. Power laws, also
known as scaling laws, come into play when a relative change in one quantity produces a proportional relative change in another. An easy example is a square: as
the length of a side doubles, the area of the square quadruples. Power laws are
found in all manner of phenomena, for example, biological systems, fractal pat-
terns, and wealth distributions. In income/wealth distributions, preferential attach-
ment mechanisms explain why the rich get richer: Wealth is distributed to
individuals on the basis of how much wealth they already have; those who already
have wealth receive proportionally more income, and thus more total wealth, than
those who have little. Such distributions often produce so-called long tails when
graphed. Today, these long-tailed distributions have been used to explain such
things as crowdsourcing, microfinance, and internet marketing.
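A small, illustrative simulation of preferential attachment is sketched below; the population size and number of rounds are arbitrary. Each new unit of wealth goes to an individual with probability proportional to the wealth that individual already holds, which concentrates wealth in a few large holders and a long tail of small ones.

```python
import random

random.seed(0)
wealth = [1.0] * 100                     # everyone starts with one unit
for _ in range(10_000):                  # distribute new wealth one unit at a time
    winner = random.choices(range(len(wealth)), weights=wealth)[0]
    wealth[winner] += 1.0

wealth.sort(reverse=True)
top_10_share = sum(wealth[:10]) / sum(wealth)
print(f"Top 10% hold {top_10_share:.0%} of total wealth")  # well above 10%
```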
Simon was the son of a Jewish electrical engineer with several patents who
emigrated from Germany to Milwaukee, Wisconsin, in the early twentieth
century. His mother was a gifted pianist. Simon became interested in the social
sciences through the reading of an uncle’s books on psychology and economics.
He has said that two books that influenced his early thinking on the subjects were
The Great Illusion (1909) by Norman Angell and Progress and Poverty (1879) by
Henry George. Simon was a graduate of the University of Chicago, where he
received his PhD in political science, with a focus on organizational decision-making, in 1943. Among his mentors
were Rudolf Carnap, Harold Lasswell, Edward Merriam, Nicolas Rashevsky, and
Henry Schultz.
He began his teaching and research career as a professor of political science at
the Illinois Institute of Technology. He moved to Carnegie Mellon University in
1949, where he remained until 2001. He rose to the role of chair of the Department
of Industrial Management. He is the author of twenty-seven books and numerous
published papers. He became a fellow of the American Academy of Arts and Sci-
ences in 1959. Simon received the prestigious Turing Award in 1975 and the Nobel Prize in Economics in 1978.
Philip L. Frana and Juliet Burba
See also: Dartmouth AI Conference; Expert Systems; General Problem Solver; Newell,
Allen.
Further Reading
Crowther-Heyck, Hunter. 2005. Herbert A. Simon: The Bounds of Reason in Modern
America. Baltimore: Johns Hopkins Press.
Newell, Allen, and Herbert A. Simon. 1956. The Logic Theory Machine: A Complex
Information Processing System. Santa Monica, CA: The RAND Corporation.
Newell, Allen, and Herbert A. Simon. 1976. “Computer Science as Empirical Inquiry:
Symbols and Search.” Communications of the ACM 19, no. 3: 113–26.
Simon, Herbert A. 1996. Models of My Life. Cambridge, MA: MIT Press.
Sloman, Aaron (1936–)
Aaron Sloman is a pioneering philosopher of artificial intelligence and cognitive
science. He is a world authority on the evolution of biological information pro-
cessing, a branch of the sciences that strives to understand how animal species
have evolved levels of intelligence greater than machines. In recent years, he
has contemplated whether evolution was the first blind mathematician and whether
weaver birds are truly capable of recursion (dividing a problem into parts to
conquer it).
His current Meta-Morphogenesis Project extends from an insight by Alan Tur-
ing (1912–1954), who argued that while mathematical ingenuity could be imple-
mented in computers, only brains were capable of mathematical intuition. Sloman
argues that because of this, not every detail of the universe—including the human
brain—can be modeled in a suitably large digital machine. This claim directly
challenges the work of digital physics, which argues that the universe can be
described as a simulation running on a sufficiently large and fast general-purpose
computer that calculates the evolution of the universe. Sloman has proposed that
the universe has evolved its own biological construction kits for making and
Planners often talk about smart cities in association with smart economy initia-
tives and international investment development. Indicators of smart economic
efforts may include data-driven entrepreneurial innovation and productivity
assessments and evaluation. Some smart cities hope to replicate the success of the
Silicon Valley. One such venture is Neom, Saudi Arabia, a planned megacity city
that is estimated to cost half a trillion dollars to complete. In the city’s plans, arti-
ficial intelligence is thought of as the new oil, despite sponsorship by the state-
owned petroleum company Saudi Aramco. Everything from the technologies in
homes to transportation networks and electronic medical records delivery will be
controlled by interrelated computing devices and futuristic artificial intelligence
decision-making. Saudi Arabia has already entrusted AI vision systems with one
of its most important cultural activities—monitoring the density and speed of pil-
grims circling the Kaaba in Mecca. The AI is designed to prevent a tragedy on the
order of the Mina stampede in 2015, which took the lives of roughly 2,000 pilgrims.
Other hallmarks of smart city initiatives involve highly data-driven and tar-
geted public services. Collectively, such information-driven agencies are sometimes described as smart government or e-government. Smart governance may include open data
initiatives to promote transparency and shared participation in local decision-
making. Local governments will work with contractors to provide smart utility
grids for electrical, telecommunications, and internet distribution. Smart waste
management and recycling efforts in Barcelona are possible because waste bins
are connected to the global positioning system and cloud servers to alert trucks
that refuse is ready for collection. In some localities, lamp posts have been turned
into community wi-fi hotspots or mesh-network nodes and are used to provide dynamic lighting for pedestrian safety.
High tech hubs planned or under construction include Forest City, Malaysia;
Eko Atlantic, Nigeria; Hope City, Ghana; Kigamboni New City, Tanzania; and
Diamniadio Lake City, Senegal. In the future, artificial intelligence is expected to
serve as the brain of the smart city. Artificial intelligence will custom-tailor the
experience of cities to meet the needs of individual residents or visitors. Aug-
mented systems can provide virtual signage or navigational information through
special eyewear or heads-up displays. Intelligent smartphone agents are already
capable of anticipating the movements of users based on past usage and location
information.
Smart homes are characterized by similar artificial intelligence technologies.
Smart hubs such as Google Home now work with more than 5,000 different kinds
of smart devices distributed by 400 companies to provide intelligent environments
in personal residences. Google Home’s chief competitor is Amazon Echo. Tech-
nologies like these can control heating, ventilation, and air conditioning; lighting
and security; and home appliances such as smart pet feeders. Game-changing
innovations in home robotics led to the quick consumer adoption of iRobot’s
Roomba vacuum cleaner in the early 2000s. So far, such systems have been sus-
ceptible to obsolescence, proprietary protocols, fragmented platforms and interop-
erability problems, and uneven technical standards.
Smart homes are driving advances in machine learning. The analytical and
predictive capability of smart technologies is widely considered the backbone of
one of the fastest growing and most disruptive business sectors: home automa-
tion. To work reliably, the smarter connected home of the future must continu-
ously obtain new data to improve itself. Smart homes are constantly monitoring
the internal environment, using aggregated historical information to help define
parameters and functions in buildings with installed smart components. Smart
homes might one day anticipate the needs of owners, for instance, adjusting
blinds automatically as the sun and clouds move in the sky. A smart home might
brew a cup of coffee at exactly the right moment, or order Chinese takeout, or
play music to match the resident’s mood as recognized automatically by emotion
detectors.
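The kind of anticipation described above can be sketched as a few hand-written rules over sensor readings, as in the illustrative Python below. The device and sensor names are hypothetical and do not correspond to any vendor’s API; in a real smart home, learned models and platform-specific interfaces would replace these rules.

```python
def decide_actions(sensors):
    """Map a dictionary of (hypothetical) sensor readings to device actions."""
    actions = []
    if sensors["sunlight_lux"] > 30000 and sensors["indoor_temp_c"] > 24:
        actions.append("lower_blinds")
    if sensors["time"] == "07:00" and sensors["resident_awake"]:
        actions.append("start_coffee_maker")
    if sensors["detected_mood"] == "stressed":
        actions.append("play_calming_playlist")
    return actions

readings = {"sunlight_lux": 42000, "indoor_temp_c": 26,
            "time": "07:00", "resident_awake": True,
            "detected_mood": "stressed"}
print(decide_actions(readings))
# ['lower_blinds', 'start_coffee_maker', 'play_calming_playlist']
```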
Smart city and home AI systems involve omnipresent, powerful technologies.
The advantages of smart cities are many. People are interested in smart cities
because of their potential for great efficiencies and convenience. A city that antici-
pates and seamlessly satisfies personal needs is an intoxicating proposition. But
smart cities are not without criticism. If left unchecked, smart environments have the potential to result in significant privacy invasion through always-on video recording and microphones. In 2019, news broke that Google contractors could listen
to recordings of interactions with users of its popular Google Assistant artificial
intelligence system.
The environmental impact of smart cities and homes is yet unclear. Smart city
plans usually pay minimal attention to biodiversity concerns. Critical habitat is
regularly destroyed to make way for the new cities demanded by tech entrepre-
neurs and government officials. The smart cities themselves continue to be domi-
nated by conventional fossil-fuel transportation technologies. The jury is also out
on the sustainability of smart domiciles. A recent study in Finland showed that the
electricity use of smart homes was not effectively reduced by advanced metering
or monitoring of that consumption.
And in fact, several existing smart cities that were planned from the ground up
are now virtually vacant. So-called ghost cities in China such as Ordos Kangbashi
have reached occupancy levels of one-third of all apartment units many years
after their original construction. Songdo, Korea, an early “city in a box,” has not
lived up to expectations despite direct, automated vacuum garbage collection
tubes in individual apartments and building elevators synced to the arrival of
residents’ cars. Smart cities are often described as impersonal, exclusive, and
expensive—the opposite of designers’ original intentions.
Songdo is in many ways emblematic of the smart cities movement for its under-
lying structure of ubiquitous computing technologies that drive everything from
transportation systems to social networking channels. Coordination of all devices
permits unparalleled integration and synchronization of services. Consequently,
the city also undermines the protective benefits of anonymity in public spaces by
turning the city into an electronic panopticon or surveillance state for watching
and controlling citizens. The algorithmic biases of proactive and predictive polic-
ing are now well known to authorities who study smart city infrastructures.
Philip L. Frana
See also: Biometric Privacy and Security; Biometric Technology; Driverless Cars and
Trucks; Intelligent Transportation; Smart Hotel Rooms.
Further Reading
Albino, Vito, Umberto Berardi, and Rosa Maria Dangelico. 2015. “Smart Cities: Defini-
tions, Dimensions, Performance, and Initiatives.” Journal of Urban Technology
22, no. 1: 3–21.
Batty, Michael, et al. 2012. “Smart Cities of the Future.” European Physical Journal Spe-
cial Topics 214, no. 1: 481–518.
Friedman, Avi. 2018. Smart Homes and Communities. Mulgrave, Victoria, Australia:
Images Publishing.
Miller, Michael. 2015. The Internet of Things: How Smart TVs, Smart Cars, Smart Homes,
and Smart Cities Are Changing the World. Indianapolis: Que.
Shepard, Mark. 2011. Sentient City: Ubiquitous Computing, Architecture, and the Future
of Urban Space. New York: Architectural League of New York.
Townsend, Antony. 2013. Smart Cities: Big Data, Civic Hackers, and the Quest for a New
Utopia. New York: W. W. Norton & Company.
technology. Typical Hilton smart rooms let guests control the lights, television, climate, and entertainment (streaming) services (Ting 2017). A second objec-
tive is to provide services on mobile phone applications. Guests are able to set up
personal preferences during their stay. They might, for example, choose the digital
artwork or photos on display in their room. Hilton smart guestrooms are currently
working on voice activation services (Burge 2017).
Smart rooms at Marriott are developed in partnership with Legrand and its
Eliot technology and Samsung, provider of the Artik guest experience platform.
Marriott has adopted hotel IoT platforms that are cloud based (Ting 2017). This
collaboration has resulted in two prototypes of rooms for testing new smart sys-
tems. The first one is a fully connected room, with smart showers, mirrors, art
frames, and speakers. Guests control the lights, air conditioning, curtains, art-
work, and television using voice commands. A touchscreen shower is present,
which allows guests to write on the shower’s smart glass. Shower notes can be
converted into documents and sent to a specified email account (Business Traveler
2018). This Marriott room has sensors that detect the number of persons in the suite in order to regulate the amount of oxygen. The sensors also ease nighttime awakenings by displaying the time and illuminating the way to the bathroom (Ting 2017). Guests with a loyalty account are also able to set their personal preferences prior to arrival. A second, lower-tech room is connected through a tablet and equipped only with Amazon's voice-controlled Echo Dot smart speaker. The room's features are adjustable using the television remote. The advantage of this room is its minimal implementation requirements (Ting 2017).
Beyond convenience and customization, hoteliers cite several advantages of smart rooms. Smart rooms lessen environmental impacts by reducing energy consumption and its associated costs. They also potentially reduce wage costs by decreasing the amount of housekeeping and management interaction with guests. Smart rooms also have limitations. First, some smart technologies are difficult to master, and overnight guests in particular have little time to learn them. Second, the infrastructure and technology needed for these rooms remain very expensive. The upfront investment costs are considerable, even if there are long-term cost and energy savings. A final concern is data privacy.
Hotels must continue to adapt to new generations of paying guests. Millennials and post-millennials have technology embedded deep in their daily habits. Their smartphones, video games, and tablets are creating a virtual world in which the meaning of experience has been entirely transformed. Luxury tourism already involves very expensive products and services equipped with the latest technology. Diversity in guest income levels and personal technical capacities will influence the quality of future hotel smart room experiences and produce new competitive markets. Hotel customers are looking for tech-enabled comfort and service at the highest level.
Smart rooms provide benefits to hotel operators as well, serving as a source of
big data. In order to offer unique products and services, companies increasingly
collect, store, and use all available information about their customers. This
unofficial launch of the dot-com era (1995–2000). The company was later sold to
AOL for $4.2 billion. In 1998, Andreessen cofounded Loudcloud (later renamed
Opsware), a pioneering cloud computing company offering software as a service
and computing and hosting services to internet and e-commerce companies.
Opsware was acquired in 2007 by Hewlett-Packard for $1.6 billion.
In 2009, Andreessen started a venture capital firm with Ben Horowitz, his long-
time business partner at both Netscape and Loudcloud. The firm, Andreessen
Horowitz, or a16z (there are sixteen letters between the “a” in Andreessen and the
“z” in Horowitz), has since invested in companies such as Airbnb, Box, Facebook,
Groupon, Instagram, Lyft, Skype, and Zynga. A16z was designed to invest in
visionary entrepreneurs who brought big ideas, disruptive technologies, and
potential for changing the course of history. In his career, Andreessen has occu-
pied seats on the boards of Facebook, Hewlett-Packard, and eBay. He is a fervent
advocate of artificial intelligence, and A16z has accordingly invested in a large
number of AI-driven start-ups.
“Software Eating the World” has often been interpreted in popular and scholarly
literature in terms of digitalization: in industry after industry, from media to finan-
cial services to health care, a postmodern economy will be chewed up by the rise of
the internet and the spread of smartphones, tablet computers, and other disruptive
electronic devices. Against this line of thinking, VentureBeat columnist Dylan Tweney presented an opposing perspective in an October 2011 piece titled “Software Is Not Eating the World,” which emphasized the continuing importance of the hardware
underlying computer systems. “You’ll pay Apple, RIM or Nokia for your phone,” he
argued. “You’ll still be paying Intel for the chips, and Intel will still be paying
Applied Materials for the million-dollar machines that make those chips” (Tweney
2011). But to be clear, there is no contradiction between the survival of traditional
operations, such as physical products and stores, and the emergence of software-
driven decision-making. In fact, technology might just be what keeps traditional
operations alive.
In his article, Andreessen pointed out that in the fast-approaching future, the equity value of companies will be based not on how many products they
sell but on the quality of their software. “Software is also eating much of the value
chain of industries that are widely viewed as primarily existing in the physical
world. In today’s cars, software runs the engines, controls safety features, entertains
passengers, guides drivers to destinations and connects each car to mobile, satellite
and GPS networks,” he noted. “The trend toward hybrid and electric vehicles will
only accelerate the software shift—electric cars are completely computer controlled.
And the creation of software-powered driverless cars is already under way at Google
and the major car companies” (Andreessen 2011). In other words, a software-based
economy will not replace the aesthetic appeal of great products, the magnetic attrac-
tion of great brands, or the advantages of extended portfolio assets because compa-
nies will continue to build great products, brands, and businesses as they have done
successfully in the past. But software will, indeed, replace products, brands, and
financial strategies as the key source of value creation for business enterprises.
Enrico Beltramini
See also: Workplace Automation.
Further Reading
Andreessen, Marc. 2011. “Why Software Is Eating the World.” The Wall Street Journal,
August 20, 2011. https://www.wsj.com/articles/SB100014240531119034809045765
12250915629460.
Christensen, Clayton M. 2016. The Innovator’s Dilemma: When New Technologies Cause
Great Firms to Fail. Third edition. Boston, MA: Harvard Business School Press.
Tweney, Dylan. 2011. “Dylan’s Desk: Software Is Not Eating the World.” VentureBeat,
October 5, 2011. https://venturebeat.com/2011/10/05/dylans-desk-hardware/.
Spiritual Robots
In April 2000, Stanford University hosted a conference called “Will Spiritual
Robots Replace Humanity by 2100?” organized by Indiana University cognitive
scientist Douglas Hofstadter. Panelists included astronomer and SETI head Frank
Drake, genetic algorithms inventor John Holland, Bill Joy of Sun Microsystems,
computer scientist John Koza, futurist Ray Kurzweil, public key cryptography
architect Ralph Merkle, and roboticist Hans Moravec. Several of the panelists
shared perspectives drawn from their own publications related to the topic of the
conference. Kurzweil had just released his exuberant futurist account of artificial
intelligence, The Age of Spiritual Machines (1999). Moravec had published a posi-
tive vision of machine superintelligence in Robot: Mere Machine to Transcendent
Mind (1999). Bill Joy had just penned a piece about the triple technological threat
coming from robotics, genetic engineering, and nanotechnology for Wired maga-
zine called “Why the Future Doesn’t Need Us” (2000). But only Hofstadter argued
that the explosive growth in artificial intelligence technology powered by Moore’s
Law doublings of transistors on integrated circuits might result in robots that are
spiritual.
Can robots have souls? Can they express free will and emerge on a path sepa-
rate from humanity? What would it mean for an artificial intelligence to have a
soul? Questions like these are as old as tales of golems, Pinocchio, and the Tin
Man, but are increasingly common in contemporary literature on the philosophy
of religion, ethics and theology of artificial intelligence, and the Technological
Singularity.
Japan’s leadership in robotics began with puppetry. In 1684, chanter Takemoto
Gidayū and playwright Chikamatsu Monzaemon founded the Takemoto-za in the
Dotonbori district of Osaka to perform bunraku, a theatrical extravaganza that
involves one-half life-size wooden puppets dressed in elaborate costumes, each
controlled by three black-cloaked onstage performers: a principal puppeteer and
two assistants. Bunraku epitomizes Japan’s long-standing fondness for breathing
life into inanimate objects.
Today, Japan is a leader in robotics and artificial intelligence, built through a
wrenching postwar process of reconstruction called gijutsu rikkoku (nation-
building through technology). One of the technologies rapidly adopted under tech-
nonationalism was television. The Japanese government believed that print and
electronic media would inspire people to use creative technologies to dream of an
electronic lifestyle and reconnect to the global economy. In this way, Japan also
became a pop cultural competitor with the United States. Two of the most unmis-
takable Japanese entertainment exports are manga and anime, which are filled
with intelligent and humanlike robots, mecha, and cyborgs.
The Buddhist and Shinto world views of Japan are generally accepting of the
concept of spiritual machines. Tokyo Institute of Technology roboticist Masahiro
Mori has argued that a suitably advanced artificial intelligence might one day
become a Buddha. Indeed, the robot Mindar—modeled after the Goddess of
Mercy Kannon Bodhisattva—is a new priest at the Kodaiji temple in Kyoto. Cost-
ing a million dollars, Mindar is capable of reciting a sermon on the popular Heart
Sutra (“form is empty, emptiness is form”) while moving arms, head, and torso.
Robot partners are tolerated because they are included among things said to be
imbued with kami, which roughly translates as the spirit or divinity shared by the
gods, nature, objects, and humans in the religion of Shinto. Shinto priests are still
occasionally called upon to consecrate or bless new and derelict electronic equip-
ment in Japan. The Kanda Myojin Shrine overlooking Akihabara—the electronics shopping district of Tokyo—offers prayers, rituals, and talismans that
are intended to purify or confer divine protection upon such things as smart
phones, computer operating systems, and hard drives.
By comparison, Americans are only beginning to wrestle with robot identity
and spirituality. In part, this is because the dominant religions of America have
their origins in Christian rituals and practice, which have historically sometimes
been antagonistic to science and technology. But Christianity and robotics have
shared, overlapping histories. Philip II of Spain, for instance, commissioned the
first mechanical monk in the 1560s. Stanford University historian Jessica Riskin
(2010) notes that mechanical automata are quintessentially Catholic in origin.
They made possible automated enactments of biblical stories in churches and
cathedrals and artificial analogues to living beings and divine creatures like angels
for study and contemplation. They also helped the great Christian philosophers
and theologians of Renaissance and early modern Europe meditate on concepts of
motion, vitality, and the incorporeal soul. By the mid-seventeenth century “[t]he
culture of lifelike machinery surrounding these devices projected no antithesis
between machinery and either divinity or vitality,” Riskin concludes. “On the
contrary, the automata represented spirit in every corporeal guise available, and
life at its very liveliest” (Riskin 2010, 43). That spirit remains alive today. In 2019,
an international group of investigators introduced SanTO—billed as a robot with
“divine features” and “the first Catholic robot”—at a New Delhi meeting of the
Institute of Electrical and Electronics Engineers (Trovato et al. 2019). Robots
are also present in reformist churches. In 2017, the Protestant churches in Hesse
and Nassau introduced the interactive, multilingual BlessU-2 robot to celebrate
the 500th anniversary of the Reformation. As the name of the robot implies, the
robot chooses special blessings for individual congregants.
Anne Foerst’s God and Computers Project at the Massachusetts Institute of
Technology sought dialogue between the researchers building artificial intelli-
gences and religious experts. She described herself as a “theological advisor”
intelligence also includes the vision of escape from death or pain; in this case, the
afterlife is cyberspatial.
New religions inspired, at least in part, by artificial intelligence are attracting
adherents. The Church of Perpetual Life is a transhumanist worship center in Hol-
lywood, Florida, focused on the development of life-extending technologies. The
church was founded by cryonics pioneers Saul Kent and Bill Faloon in 2013. The
center has welcomed experts on artificial intelligence and transhumanism, includ-
ing artificial intelligence serial entrepreneur Peter Voss and Transhumanist Party
presidential candidate Zoltan Istvan. The Terasem Movement founded by Martine
and Gabriel Rothblatt is a religion associated with cryonics and transhumanism.
The central beliefs of the faith are “life is purposeful, death is optional, god is
technological, and love is essential” (Truths of Terasem 2012). The lifelike Bina48
robot—modeled after Martine’s spouse and manufactured by Hanson Robotics—
is, in part, a demonstration of the mindfile-based algorithm that Terasem hopes
will one day enable authentic mind uploading into an artificial substrate (and per-
haps bring about a sort of eternal life). Gabriel Rothblatt has said that heaven is not
unlike a virtual reality simulation.
The Way of the Future is an AI-based church founded by Anthony Levan-
dowski, an engineer who led the teams that developed Google and Uber’s self-
driving cars. Levandowski is motivated to create a superintelligent, artificial deity
possessing Christian morality. “In the future, if something is much, much smarter,
there’s going to be a transition as to who is actually in charge,” he explains. “What
we want is the peaceful, serene transition of control of the planet from humans to
whatever. And to ensure that the ‘whatever’ knows who helped it get along” (Har-
ris 2017). He also seeks legal rights for artificial intelligences and their full integration into human society.
Spiritual robots have become a common trope of science fiction. In the short
story “Reason” (1941) by Isaac Asimov, the robot Cutie (QT-1) convinces other
robots that human beings are too mediocre to be their creators and instead con-
vinces them to worship the power plant on their space station, calling it the Master
of both machines and men. Anthony Boucher’s novelette The Quest for Saint
Aquin (1951), which pays homage to Asimov’s “Reason,” follows the post-
apocalyptic quest of a priest named Thomas who is looking for the last resting
place of the fabled evangelist Saint Aquin (Boucher patterns Saint Aquin after St.
Thomas Aquinas, who used Aristotelian logic to prove the existence of God). The
body of Saint Aquin is rumored to have never decomposed. The priest rides an
artificially intelligent robass (robot donkey); the robass is an atheist and tempter
capable of engaging in theological argument with the priest. Saint Aquin, when
eventually found after many trials, turns out to have been an incorruptible android
theologian. Thomas is convinced of the success of his quest—he has found a robot
with a logical brain that, although made by a human, believes in God.
In Stanislaw Lem’s story “Trurl and the Construction of Happy Worlds” (1965),
a box-dwelling robot race created by a robot engineer is convinced that their home
is a wonderland to which all other beings should aspire. The robots develop a reli-
gion and begin to make plans to create a hole in the box in order to bring, will-
ingly or not, everyone outside the box into their paradise. The belief infuriates the
robots’ constructor, who destroys them.
Science fiction grandmaster Clifford D. Simak is also known for his spiritual
robots. Hezekiel in A Choice of Gods (1972) is a robot abbot who leads a Christian
group of other robots at a monastery. The group has received a message from a
god-like being called The Principle, but Hezekiel feels sure that “God must be,
forever, a kindly old (human) gentleman with a long, white, flowing beard” (Simak
1972, 158). The robot monks in Project Pope (1981) are searching for heaven and
the universe’s significance. A robot gardener named John reveals to the Pope that
he thinks he has a soul. The Pope is not so sure. Humans refuse to grant the robots
membership in their churches, and so the robots create their own Vatican-17 on a
distant planet. The Pope of the robots is a gigantic computer.
In Robert Silverberg’s Hugo-nominated book Tower of Glass (1970), androids
worship their creator Simeon Krug, praying that he will one day liberate them from
oppressive servitude. When they discover that Krug is not interested in their free-
dom, they abandon religion and revolt. Silverberg’s short story “Good News from
the Vatican” (1971) is a Nebula award winner about an artificially intelligent robot
that is elected, as a compromise choice, Pope Sixtus the Seventh. The story is satir-
ical: “If he’s elected,” says Rabbi Mueller, “he plans an immediate time-sharing
agreement with the Dalai Lama and a reciprocal plug-in with the head programmer
of the Greek Orthodox church, just for starters” (Silverberg 1976, 269).
Spiritual robots are also commonplace in television. Sentient machines in the
British science fiction sitcom Red Dwarf (1988–1999) are outfitted with belief
chips, convincing them of the existence of silicon heaven. Robots in the animated
television series Futurama (1999–2003, 2008–2013) worship in the Temple of
Robotology, where sermons are delivered by Reverend Lionel Preacherbot. In the
popular reboot and reimagining of the Battlestar Galactica television series
(2003–2009), the robotic Cylons are monotheists and the humans of the Twelve
Colonies are polytheists.
Philip L. Frana
See also: Foerst, Anne; Nonhuman Rights and Personhood; Robot Ethics; Technological
Singularity.
Further Reading
DeLashmutt, Michael W. 2006. “Sketches Towards a Theology of Technology: Theologi-
cal Confession in a Technological Age.” Ph.D. diss., University of Glasgow.
Foerst, Anne. 1996. “Artificial Intelligence: Walking the Boundary.” Zygon 31, no. 4: 681–93.
Geraci, Robert M. 2007. “Religion for the Robots.” Sightings, June 14, 2007. https://web
.archive.org/web/20100610170048/http://divinity.uchicago.edu/martycenter/publi
cations/sightings/archive_2007/0614.shtml.
Harris, Mark. 2017. “Inside the First Church of Artificial Intelligence.” Wired, November
15, 2017. https://www.wired.com/story/anthony-levandowski-artificial-intelligence
-religion/.
Riskin, Jessica. 2010. “Machines in the Garden.” Arcade: A Digital Salon 1, no. 2
(April 30): 16–43.
Silverberg, Robert. 1970. Tower of Glass. New York: Charles Scribner’s Sons.
Simak, Clifford D. 1972. A Choice of Gods. New York: Ballantine.
Southern Baptist Convention. Ethics and Religious Liberty Commission. 2019. “Artificial
Intelligence: An Evangelical Statement of Principles.” https://erlc.com/resource
-library/statements/artificial-intelligence-an-evangelical-statement-of-principles/.
Trovato, Gabriele, Franco Pariasca, Renzo Ramirez, Javier Cerna, Vadim Reutskiy, Lau-
reano Rodriguez, and Francisco Cuellar. 2019. “Communicating with SanTO: The
First Catholic Robot.” In 28th IEEE International Conference on Robot and
Human Interactive Communication, 1–6. New Delhi, India, October 14–18.
Truths of Terasem. 2012. https://terasemfaith.net/beliefs/.
Superintelligence
In its most common usage, the term “superintelligence” denotes any level of intelligence that at least matches, and usually surpasses, human intelligence, typically in a generalized way. Though computer intelligence long ago outpaced natural human cognitive ability in specialized tasks—as, for instance, with a calculator's ability to quickly process algorithms—such feats are not typically regarded as instances of superintelligence in the strict sense because of their narrow functional range. Superintelligence in this latter sense would require, in addition to
artificial mastery of special theoretical tasks, some kind of additional mastery of
what has traditionally been referred to as practical intelligence: a generalized
sense of how to appropriately subsume particulars under universal categories
identified as in some way worthwhile.
To date, no such generalized superintelligence has materialized, and thus all
discussion of superintelligence remains, to some extent, within the realm of spec-
ulation. Whereas classic accounts of superintelligence have exclusively been the
purview of speculative metaphysics and theology, recent advances in computer
science and bioengineering have opened up the possibility of the material realiza-
tion of superintelligence. The timeline of such development is greatly debated, but
a growing consensus among experts suggests that material superintelligence is
indeed achievable and may even be imminent.
Should this opinion be proven correct, it will almost surely be the outcome of
advancements in one of two main avenues of AI research: bioengineering and
computer science. The former includes attempts not only to map out and manipu-
late the human genome but also to precisely replicate the human brain electroni-
cally via what is called whole brain emulation or mind uploading. The first of
these bioengineering projects is not new, with eugenics programs dating at least as
far back as the eighteenth century. Nevertheless, the discovery of DNA in the
twentieth century, combined with advancement in genome mapping, has led to
renewed interest in eugenics, despite serious ethical and legal problems that inevi-
tably arise due to such projects. The goal of much of this research is to understand
the genetic makeup of the human brain for the purpose of manipulating DNA code
in the direction of superhuman intelligence.
Uploading is a slightly different, though still biology-based, approach to
superintelligence that seeks to map out neural networks in order to effectively
move human intelligence into computer interfaces. In this relatively new field of
research, the brains of insects and small animals are microdissected and then
scanned for detailed computer analysis. The operative assumption in whole brain
emulation is that if the structure of the brain is more precisely understood and
mapped, it may be possible to replicate it with or without biological brain
tissue.
Despite the rapid advancement of both genetic mapping and whole brain emu-
lation, both approaches face several important limitations, which make it less
likely that superintelligence will first be achieved via either of these biological
approaches. For instance, there is a necessary generational limitation to the genetic
manipulation of the human genome. Even if it were currently possible to artifi-
cially enhance cognitive functioning by altering the DNA of a human embryo
(and that level of genetic manipulation remains quite out of reach), it would still
take an entire generation for the altered embryo to mature into a fully grown,
superintelligent human being. This also assumes the absence of legal and moral obstacles to the manipulation of the human genome, which is far from the case. For instance, even the relatively minimal genetic modification of human embryos undertaken by a Chinese scientist as recently as November 2018 caused a global outcry (Ramzy and Wee 2019).
Whole brain emulation, for its part, also remains quite far from realization, mostly due to the limitations of biotechnology. The extraordinary precision required at every stage of the uploading process cannot be achieved with existing medical equipment. Science and technology currently lack the ability to dissect and scan human brain tissue with sufficient accuracy to achieve whole brain emulation. Furthermore, even if those initial steps become possible, researchers would still struggle to analyze and digitally replicate the human brain with state-of-the-art computing technology. Many commentators suggest such limitations will be overcome, but the timetable for doing so is far from established.
Apart from biotechnology, the other main avenue to superintelligence is the
field of AI proper, narrowly defined as any form of nonorganic (especially
computer-based) intelligence. Of course, the task of designing a superintelligent
AI from scratch is hampered by several factors, not all stemming from merely
logistical concerns such as processor speed, hardware/software design, funding,
and so forth. In addition to such empirical obstacles, there is an important philo-
sophical problem: namely, that human programmers definitionally cannot know,
and so would never be able to program, that which is superior to their own intel-
lect. It is partly this concern that motivates much current research on computer
learning and interest in the idea of a seed AI. The latter is definable as any machine
capable of adjusting responses to stimuli based on analysis of how effectively it
performs relative to a prespecified goal. Importantly, the idea of a seed AI implies
not only the ability to modify its responses by building an ever-expanding base of
content knowledge (stored information) but also the ability to modify the very
structure of its programming to better suit a given task (Bostrom 2017, 29). Indeed,
this latter capacity is what would give a seed AI what Nick Bostrom calls “recur-
sive self-improvement” or potential for iterative self-evolution (Bostrom 2017, 29).
This would mean that programmers would not need any a priori vision of super-
intelligence, as the seed AI would continually make improvements on its own
programming, each increasingly intelligent version of itself programming a supe-
rior version of itself (beyond the human level).
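The abstract logic of such a loop can be conveyed with a toy sketch. The Python fragment below is purely illustrative and hypothetical (the function names performance and self_modify and the numeric "program" are invented for the example, and nothing here resembles a real seed AI): a candidate system repeatedly proposes random modifications to itself and keeps only those that score better against a prespecified goal.

    import random

    def performance(program):
        # Prespecified goal the system scores itself against: in this toy case,
        # push both parameters of the "program" as close to 1.0 as possible.
        return -((program[0] - 1.0) ** 2 + (program[1] - 1.0) ** 2)

    def self_modify(program):
        # Structural self-modification is reduced, for illustration, to
        # small random mutations of the program's parameters.
        return [p + random.gauss(0, 0.1) for p in program]

    program = [0.0, 0.0]              # the seed's initial "programming"
    score = performance(program)
    for generation in range(1000):
        candidate = self_modify(program)
        candidate_score = performance(candidate)
        if candidate_score > score:   # keep only modifications that improve performance
            program, score = candidate, candidate_score
    print(program, score)

The point of the sketch is only that no human supplies the improved versions in advance; the loop itself searches for them, which is the sense in which Bostrom's "recursive self-improvement" dispenses with any a priori vision of superintelligence.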
Such a machine would surely problematize a common philosophical view that
machines lack self-awareness. Proponents of this perspective can be traced at least
as far back as Descartes but would also include more recent theorists such as John
Haugeland and John Searle. This view defines machine intelligence as the suc-
cessful correlation of inputs with outputs according to a prespecified program. As
such, machines are distinguished in kind from humans, the latter alone being
defined by conscious self-awareness. Whereas humans understand the actions
they perform, machines have been thought to merely carry out functions
mindlessly—that is, without any understanding of their own functioning. A suc-
cessful seed AI, should it prove possible to create, would necessarily challenge
this basic view. By improving on its own programming in ways that surprise and
frustrate the predictions of its human programmers, the seed AI would demon-
strate a degree of self-awareness and autonomy not easily explained by the Carte-
sian philosophical paradigm.
Indeed, though it remains (for the moment) still at the level of speculation, the
increasingly likely outcome of superintelligent AI raises a host of moral and legal
concerns that have prompted much philosophical debate in this field of research.
The overarching concerns have to do with the security of the human species in the
event of what Bostrom calls an “intelligence explosion”—that is, the initial cre-
ation of a seed AI followed by the potentially exponential increase in intelligence
that it implies (Bostrom 2017). One of the main concerns has simply to do with the
necessarily unpredictable nature of such an outcome. The autonomy implied by superintelligence means, almost by definition, that humans will not be able to fully predict how a superintelligent AI will behave. Even in the limited
instances of specialized superintelligence that humans have so far been able to
create and observe—for instance, machines that have outperformed humans at
strategy games such as chess and Go—human predictions for AI have proven
very unreliable. For many critics, such unpredictability is a strong indication that
humans will quickly lose the ability to control more generalized forms of superin-
telligent AI, should the latter materialize (Kissinger 2018).
Of course, there is nothing about such lack of control that would necessarily
imply an antagonistic relationship between humans and superintelligence. Indeed,
though much of the literature on superintelligence tends to portray this relation-
ship in oppositional terms, some emerging scholarship argues that this very per-
spective betrays a bias against machines typical especially of Western societies
(Knight 2014). Nevertheless, there are good reasons to think that superintelligent
AI may at a minimum perceive human interests as at odds with their own, and
more strongly may view humans as existential threats. Computer scientist Steve
Omohundro, for one, has argued that even as relatively simple a form of superin-
telligent AI as a chess bot might have reason to seek the elimination of the human
species as a whole—and may be able to develop the means to do so (Omohundro
2014). Bostrom has similarly argued that a superintelligence explosion likely rep-
resents, if not the outright end of the human species, then at least a decidedly
dystopian future (Bostrom 2017).
Whatever the merits of such speculations, what seems undeniable is the deep
uncertainty implied by superintelligence. If there is one point of consensus to be
found in a vast and widely varied literature, it is surely that the global community
must take great care to safeguard its interests if it is to continue with AI research.
This point alone may seem controversial to hardened determinists who argue that
technological development is so bound to rigid market forces that it is simply impos-
sible to alter its speed or direction in any significant way. According to this deter-
minist view, if AI can provide cost-saving solutions for industry and commerce
(which it has already begun to do), its development will continue into the range of
superintelligence regardless of possible negative unintended consequences.
Against such perspectives, many critics advocate for increased social aware-
ness of the possible dangers of AI and careful political scrutiny of its develop-
ment. Bostrom cites several instances of successful global collaboration in science
and technology—including CERN, the Human Genome Project, and the Interna-
tional Space Station—as important precedents that problematize the determinist
view (Bostrom 2017, 253). To these, one could also add cases in the global envi-
ronmental movement, beginning especially in the 1960s and 1970s, which has put
important limitations on pollution carried out in the name of unbridled capitalism
(Feenberg 2006). Given the speculative nature of scholarship on superintelligence,
it is of course impossible to know what the future will bring. Nevertheless, to the
extent that superintelligence may represent an existential threat to human life,
prudence would indicate adopting a globally collaborative approach rather than a
free market approach to AI.
David Schafer
See also: Berserkers; Bostrom, Nick; de Garis, Hugo; General and Narrow AI; Goertzel,
Ben; Kurzweil, Ray; Moravec, Hans; Musk, Elon; Technological Singularity; Yudkowsky,
Eliezer.
Further Reading
Bostrom, Nick. 2017. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford
University Press.
Feenberg, Andrew. 2006. “Environmentalism and the Politics of Technology.” In Ques-
tioning Technology, 45–73. New York: Routledge.
Kissinger, Henry. 2018. “How the Enlightenment Ends.” The Atlantic, June 2018. https://
www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean
-the-end-of-human-history/559124/.
Knight, Heather. 2014. How Humans Respond to Robots: Building Public Policy Through
Good Design. Washington, DC: The Project on Civilian Robotics. Brookings
Institution.
Omohundro, Steve. 2014. “Autonomous Technology and the Greater Human Good.” Jour-
nal of Experimental & Theoretical Artificial Intelligence 26, no. 3: 303–15.
Ramzy, Austin, and Sui-Lee Wee. 2019. “Scientist Who Edited Babies’ Genes Is Likely to
Face Charges in China.” The New York Times, January 21, 2019.
Symbol Manipulation
Symbol manipulation refers to the general information-processing abilities of a
digital stored-program computer. Perceiving the computer as essentially a symbol
manipulator became paradigmatic from the 1960s to the 1980s and led to the sci-
entific pursuit of symbolic artificial intelligence, today sometimes referred to as
Good Old-Fashioned AI (GOFAI).
The development of stored-program computers in the late 1940s and 1950s generated a new awareness of a computer's programming flexibility. Symbol manipulation became
both a general theory for intelligent behavior and a guideline for AI research. One
of the earliest computer programs to model intelligent symbol manipulation was
the Logic Theorist, developed by Herbert Simon, Allen Newell, and Cliff Shaw in
1956. The Logic Theorist was able to prove theorems from Alfred North White-
head and Bertrand Russell’s Principia Mathematica (1910–1913). It was presented
Newell, Allen, and Herbert A. Simon. 1961. “Computer Simulation of Human Thinking.”
Science 134, no. 3495 (December 22): 2011–17.
Newell, Allen, and Herbert A. Simon. 1972. Human Problem Solving. Englewood Cliffs,
NJ: Prentice Hall.
Schank, Roger, and Kenneth Colby, eds. 1973. Computer Models of Thought and Lan-
guage. San Francisco: W. H. Freeman and Company.
Symbolic Logic
Symbolic logic involves the use of symbols to represent terms, relations, and prop-
ositions in mathematical and philosophical reasoning. Symbolic logic differs from
(Aristotelian) syllogistic logic, as it utilizes ideographs or a special notation which
“symbolize directly the thing talked about” (Newman 1956, 1852), and can be
manipulated according to precise rules. Traditional logic studied the truth and
falsity of statements and the relations between them, using words that themselves
sprang from natural language. Symbols, unlike nouns and verbs, have no need for
interpretation. Operations on symbols are mechanical and thus can be assigned to
computers. Symbolic logic rids logical analysis of any ambiguity by codifying it
completely within a fixed notational system.
Gottfried Wilhelm Leibniz (1646–1716) is generally considered to have been the
first student of symbolic logic. In the seventeenth century, as part of his plan to
reform scientific reasoning, Leibniz advocated the use of ideographic symbols in
place of natural language. The use of such concise universal symbols (characteris-
tica universalis) combined with a set of rules for scientific reasoning, Leibniz
hoped, would create an alphabet of human thought to promote the growth and dis-
semination of scientific knowledge and a corpus containing all human knowledge.
The field of symbolic logic can be split up into several distinct areas of analysis,
including Boolean logic, the logical foundations of mathematics, and decision
problems. Key works in each of these areas were respectively written by George
Boole, Alfred North Whitehead and Bertrand Russell, and Kurt Gödel. In the
mid-nineteenth century, George Boole set forth his ideas in The Mathematical
Analysis of Logic (1847) and An Investigation of the Laws of Thought (1854).
Boole zeroed in on what he called a calculus of deductive reasoning, which led
him to three basic operations—AND, OR, and NOT—in a logical mathematical
language called Boolean algebra. Symbols and operators vastly simplified the
construction of logical expressions. In the twentieth century, Claude Shannon
(1916–2001) used electromechanical relay circuits and switches to replicate Bool-
ean algebra—important groundwork in the history of electronic digital computing
and computer science generally.
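As a minimal illustration of the kind of mechanical symbol manipulation Boole and Shannon had in mind, the short Python sketch below (an illustration added for clarity, not drawn from the historical sources) evaluates Boolean expressions built from AND, OR, and NOT over every combination of truth values, verifying one of De Morgan's laws.

    from itertools import product

    def truth_table(expr, num_vars):
        # Evaluate a Boolean expression for every assignment of False/True
        # to its variables and return the list of results.
        return [expr(*values) for values in product([False, True], repeat=num_vars)]

    # De Morgan's law: NOT (A AND B) is equivalent to (NOT A) OR (NOT B).
    lhs = lambda a, b: not (a and b)
    rhs = lambda a, b: (not a) or (not b)

    print(truth_table(lhs, 2))                        # [True, True, True, False]
    print(truth_table(lhs, 2) == truth_table(rhs, 2)) # True: the expressions agree everywhere

Because every step is a purely mechanical operation on symbols, checks of this kind are exactly the sort of work that, as noted above, can be assigned to computers.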
In the early twentieth century, Alfred North Whitehead and Bertrand Russell
created their definitive work in the field of symbolic logic. Their Principia Math-
ematica (1910, 1912, 1913) gave a rigorous demonstration of how all of mathemat-
ics could be subsumed under symbolic logic. In the first volume of their work,
Whitehead and Russell deduced a logical system from a handful of logical ideas
and a set of postulates derived from those ideas. In the second volume of the Prin-
cipia, Whitehead and Russell defined all arithmetic concepts, including number,
zero, successor, addition, and multiplication, in terms of basic logical notions and operational rules such as proposition, negation, and either-or. Whitehead and Russell were then able to show, in the third and final volume, that the nature and truth
of all of mathematics is based upon logical ideas and relationships. The Principia
demonstrated how any postulate of arithmetic could be deduced from the earlier
explicated symbolic logical truths.
These strong and deep claims of the Principia were critically analyzed only a few decades later by Kurt Gödel, who showed in On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931) that Whitehead and Russell's axiomatic system could not be simultaneously consistent
and complete. Still, it took another key work in symbolic logic, Ernst Nagel and
James Newman’s Gödel’s Proof (1958), to get Gödel’s message across to a wider
audience, including some practitioners of artificial intelligence.
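In modern notation, Gödel's first incompleteness theorem is commonly paraphrased along the following lines (a standard contemporary restatement rather than the wording of the works discussed here): if $T$ is a consistent, effectively axiomatizable formal theory containing elementary arithmetic, then there is a sentence $G_T$ in the language of $T$ such that
\[
T \nvdash G_T \quad \text{and} \quad T \nvdash \lnot G_T ,
\]
so $T$ cannot be both consistent and complete.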
Each of these key works in symbolic logic had a distinct impact on the develop-
ment of computation and programming, and on our consequent perception of a computer's capabilities. Boolean logic found its way into logic circuitry
design. Simon and Newell’s Logic Theorist program demonstrated logical proofs
that matched those in the Principia Mathematica and was therefore seen as a
proof that a computer could be designed to do intelligent tasks using symbol
manipulation. Gödel's incompleteness theorem raises tantalizing questions
about the ultimate realization of programmed machine intelligence, especially
strong AI.
Elisabeth Van Meer
See also: Symbol Manipulation.
Further Reading
Boole, George. 1854. An Investigation of the Laws of Thought on Which Are Founded the
Mathematical Theories of Logic and Probabilities. London: Walton.
Lewis, Clarence Irving. 1932. Symbolic Logic. New York: The Century Co.
Nagel, Ernst, and James R. Newman. 1958. Gödel’s Proof. New York: New York Univer-
sity Press.
Newman, James R., ed. 1956. The World of Mathematics, vol. 3. New York: Simon and
Schuster.
Whitehead, Alfred N., and Bertrand Russell. 1910–1913. Principia Mathematica. Cam-
bridge, UK: Cambridge University Press.
SyNAPSE
Project SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electron-
ics) is a Defense Advanced Research Projects Agency funded collaborative cogni-
tive computing study to create the architecture for a brain-inspired neurosynaptic
computer core. Initiated in 2008, the project is a collaborative venture of IBM
Research, HRL Laboratories, and Hewlett-Packard. Researchers at several uni-
versities are also partners in the venture. The acronym SyNAPSE is a reference to
the Ancient Greek word σύναψις meaning “conjunction,” which alludes to the
neuronal contacts involved in the transfer of information to the brain.
The goal of the project is to create a flexible, ultra-low power system for use in
robots by reverse-engineering the functional intelligence of rats, cats, or possibly
humans. DARPA's original agency announcement requested a machine that breaks through the “algorithmic-computational paradigm” and is “scalable to biological levels” (DARPA 2008, 4). In other words, the agency wanted an electronic
computer to process real-world complexity, adapt in response to external stimuli,
and do so in as close to real time as possible.
SyNAPSE is a response to demands for computer systems capable of under-
standing the environment and adapting rapidly to changing conditions, while still
remaining energy efficient. SyNAPSE scientists are developing systems of neuro-
morphic electronics similar to biological nervous systems and capable of process-
ing data from complicated environments. It is hoped that such systems will
eventually possess great autonomy. The approach to the SyNAPSE project is
interdisciplinary, borrowing ideas from computational neuroscience, artificial
neural networks, materials science, and cognitive science, among others. SyN-
APSE will need to extend basic science and engineering in the following areas:
hardware—to create synaptic components and integrate hardwired and program-
mable connectivity; architecture—to support structures and functions that appear
in biological systems; simulation—for the digital reproduction of systems in order
to test functioning prior to the implementation of material neuromorphological
systems.
The first SyNAPSE grant was awarded to IBM Research and HRL Laborato-
ries in 2008. IBM and HRL subcontracted various parts of the grant requirements
to an array of suppliers and contractors. The project was divided into four phases
that started after the initial feasibility study, which lasted nine months. An initial
simulator, called C2 and developed in 2009, ran on a BlueGene/P supercomputer, performing cortical simulations with 10⁹ neurons and 10¹³ synapses, matching the scale of a mammalian cat cortex. The program later came in for criticism after the Blue Brain Project leader announced that the simulation did not achieve the complexity reported.
Each neurosynaptic core measures 2 millimeters by 3 millimeters in size and is
composed of elements abstracted from the biology of the human brain. The rela-
tion between the cores and real brains is more metaphorical than analogous. Com-
putation substitutes for actual neurons, memory stands in for synapses, and axons
and dendrites are represented by communication. This allows the team to describe
a hardware implementation of a biological system.
In 2012, HRL Labs announced that it had achieved the first functioning mem-
ristor array stacked on a conventional CMOS circuit. “Memristor,” a word coined
from memory and resistor, is an idea dating to the 1970s. In a memristor, the
functions of memory and logic are combined. Also in 2012, project leaders
announced the successful large-scale simulation of 530 billion neurons and 100
trillion synapses on the world’s second fastest supercomputer, the Blue Gene/Q
Sequoia machine at Lawrence Livermore National Laboratory in California.
In 2014, IBM introduced the TrueNorth processor, a 5.4-billion-transistor chip
with 4096 neurosynaptic cores interconnected via an intrachip network that inte-
grates 1 million programmable spiking neurons and 256 million configurable synapses.
founding codirector of the Center for Artificial Intelligence in Society at USC and
recipient of multiple honors, including the John McCarthy Award and the Daniel
H. Wagner Prize for Excellence in Operations Research Practice. He is a Fellow of
both the Association for the Advancement of Artificial Intelligence (AAAI) and
the Association for Computing Machinery (ACM). Tambe is cofounder and direc-
tor of research at Avata Intelligence, which markets artificial intelligence management solutions to address enterprise-level data analysis and decision-making
challenges. His algorithms are in use by LAX, the U.S. Coast Guard, the Trans-
portation Security Administration, and the Federal Air Marshals Service.
Philip L. Frana
See also: Predictive Policing.
Further Reading
Paruchuri, Praveen, Jonathan P. Pearce, Milind Tambe, Fernando Ordonez, and Sarit
Kraus. 2008. Keep the Adversary Guessing: Agent Security by Policy Randomiza-
tion. Riga, Latvia: VDM Verlag Dr. Müller.
Tambe, Milind. 2012. Security and Game Theory: Algorithms, Deployed Systems, Les-
sons Learned. Cambridge, UK: Cambridge University Press.
Tambe, Milind, and Eric Rice. 2018. Artificial Intelligence and Social Work. Cambridge,
UK: Cambridge University Press.
Technological Singularity
The Technological Singularity refers to the emergence of technologies that could
fundamentally change humans’ role in society, challenge human epistemic agency
and ontological status, and trigger unprecedented and unforeseen developments in
all aspects of life, whether biological, social, cultural, or technological. The Tech-
nological Singularity is most often associated with artificial intelligence, specifi-
cally with artificial general intelligence (AGI). It is therefore sometimes presented
as an intelligence explosion driving advances in areas such as biotechnology, nan-
otechnology, and information technologies, as well as creating technologies yet
unknown. The Technological Singularity is often referred to simply as the Singularity, but it should not be regarded as analogous to a singularity in mathematics, to which it bears only a distant resemblance. It is instead a rather loosely defined concept open to different interpretations that emphasize different aspects of the changes precipitated by technology.
The origins of the Technological Singularity concept go back to the second half
of the twentieth century, and they are usually associated with the ideas and works
of John von Neumann (1903–1957), Irving John Good (1916–2009), and Vernor
Vinge (1944–). Current Technological Singularity research has been supported by
several universities and governmental and private research institutions seeking to
explore the future of technology and society. Even though the Technological Singularity is the subject of learned philosophical and technical debate, it remains a hypothesis, a fairly open and speculative conjecture.
While several researchers claim that the Technological Singularity is inevita-
ble, its timing is consistently moved further into the future. Nevertheless, many
studies share the belief that the question is not one of whether the Technological
Singularity will happen or not but rather of when and how it will occur. Ray Kurzweil has ventured a more precise date, placing the Technological Singularity's emergence around the mid-twenty-first century. Others have also attempted to assign
a date for this event, yet there are no well-reasoned arguments behind proposing
any such date. Moreover, there remains the question of how humans would know
the Technological Singularity event has been reached without relevant metrics or
indicators. The unfulfilled promises associated with the history of artificial intelligence exemplify the hazards of trying to divine the future of technology.
The Technological Singularity is often characterized by concepts of superintel-
ligence, acceleration, and discontinuity. “Superintelligence” denotes a quantita-
tive leap in the intellectual capacities of artificial systems, taking them well
beyond the capacities of normal human intellect (as measured by standard IQ
tests). Superintelligence may not be limited to AI and computer technology, how-
ever. It may emerge in human agents through genetic engineering, biological com-
puting systems, or hybrid artificial–natural systems. Some researchers attribute
infinite intellectual capacities to superintelligence.
Acceleration refers to the shape of the time curve for the appearance of significant events. Technological progress is represented as a curve through time
highlighting the discovery of significant inventions, such as stone tools, the pot-
tery wheel, the steam engine, electricity, atomic power, computers, and the inter-
net. The growth in computational power is represented by Moore’s law, which is
more accurately an observation that has become regarded as a law. It states that
“the number of transistors in a dense integrated circuit doubles about every two
years.” In most cases, the growth curve is linear or exponential, but in the case of
the Technological Singularity, people speculate that the appearance of significant
technological breakthroughs and new technological and scientific paradigms will
follow a super-exponential curve. For example, one prediction about the Techno-
logical Singularity concerns how superintelligent systems will be able to self-
improve (and self-replicate) in unforeseen ways at an unprecedented rate, thus
taking the technological growth curve well beyond what has been seen in history.
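As a rough arithmetic illustration of the doubling law (added here for clarity, with an arbitrary starting count), the short Python sketch below computes transistor counts under Moore's law over a twenty-year span.

    # Illustrative only: an arbitrary starting count of one million transistors.
    start = 1_000_000
    doubling_period = 2  # years, per Moore's law

    # Count after t years: start * 2 ** (t / doubling_period)
    for year in range(0, 21, 4):
        count = start * 2 ** (year / doubling_period)
        print(f"year {year:2d}: about {count:,.0f} transistors")
    # After 20 years (10 doublings) the count has grown roughly a thousandfold.

A super-exponential schedule of the kind imagined for the Technological Singularity would compress ever more of these doublings into ever shorter intervals, which is what would take the curve well beyond what has been seen in history.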
The discontinuity of the Technological Singularity is described as an event horizon, somewhat analogous to the physical concept associated with black holes. The comparison should be treated with caution, however; it should not be used to attribute the regularity and predictability of the physical world to the Technological Singularity. An event horizon (also referred to as a prediction horizon) marks the limit of our knowledge about events beyond a certain point in time: what will happen beyond the event horizon is unknown. In the case of the Technological Singularity, the discontinuity or event horizon implies that the technologies precipitating it will trigger disruptive changes in all aspects of the human condition, changes about which experts cannot even speculate.
The Technological Singularity is also usually associated with the demise of humanity and the end of human society. Some studies predict the collapse of social order, the end of humans as primary agents, and the loss of epistemic agency and primacy. Superintelligent systems, it is supposed, will not need humans. These systems
will be able to replicate and improve upon themselves and create their own living
just those challenges that can currently be identified. Nobody expects the Technological Singularity to occur with current computing and other technologies, but its
proponents see these issues as mere “technical problems to be solved” rather than
potential showstoppers. The list of technical problems to be resolved is a long one,
however, and Murray Shanahan’s The Technological Singularity (2015) provides a
good review of some of these topics. Some significant nontechnical issues also
exist, including, among others, the problem of training superintelligent sys-
tems, the question of the ontology of artificial or machine consciousness and self-
aware artificial systems, the embodiment of artificial minds or vicarious
embodiment processes, and the rights given to superintelligent systems, as well as
their role in society and any limits placed on their actions, if indeed this would be
possible at all. At present, these problems lie in the realm of technical and philo-
sophical speculation.
Roman Krzanowski
See also: Bostrom, Nick; de Garis, Hugo; Diamandis, Peter; Digital Immortality;
Goertzel, Ben; Kurzweil, Ray; Moravec, Hans; Post-Scarcity, AI and; Superintelligence.
Further Reading
Bostrom, Nick. 2014. Superintelligence: Path, Dangers, Strategies. Oxford, UK: Oxford
University Press.
Chalmers, David. 2010. “The Singularity: A Philosophical Analysis.” Journal of Con-
sciousness Studies 17: 7–65.
Eden, Amnon H. 2016. The Singularity Controversy. Sapience Project. Technical Report
STR 2016-1. January 2016.
Eden, Amnon H., Eric Steinhart, David Pearce, and James H. Moor. 2012. “Singularity
Hypotheses: An Overview.” In Singularity Hypotheses: A Scientific and Philo-
sophical Assessment, edited by Amnon H. Eden, James H. Moor, Johnny H.
Søraker, and Eric Steinhart, 1–12. Heidelberg, Germany: Springer.
Good, I. J. 1966. “Speculations Concerning the First Ultraintelligent Machine.” Advances
in Computers 6: 31–88.
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New
York: Viking.
Sandberg, Anders, and Nick Bostrom. 2008. Global Catastrophic Risks Survey. Technical
Report #2008/1. Oxford University, Future of Humanity Institute.
Shanahan, Murray. 2015. The Technological Singularity. Cambridge, MA: The MIT
Press.
Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American Math-
ematical Society 64, no. 3, pt. 2 (May): 1–49.
Vinge, Vernor. 1993. “The Coming Technological Singularity: How to Survive in the
Post-Human Era.” In Vision 21: Interdisciplinary Science and Engineering in the
Era of Cyberspace, 11–22. Cleveland, OH: NASA Lewis Research Center.
Tilden turned to BEAM-style robots in the late 1980s after frustrating experiments in building a conventional electronic robot butler. The robot, encoded with Isaac Asimov's Three Laws of Robotics, could barely vacuum floors. Tilden largely abandoned the effort after encountering the famous MIT roboticist Rodney Brooks at a University of Waterloo talk on the virtues of simple sensorimotor, stimulus-response robotics over computationally intensive mobile machines. Tilden left Brooks's talk wondering whether reliable robots could be made without computer processors or artificial intelligence.
Rather than having the intelligence programmed into the robot’s firmware, Til-
den imagined that the intelligence could come from the environment in which the
robot operated, as well as the emergent properties built up from that world. At the
Los Alamos National Laboratory in New Mexico, Tilden researched and devel-
oped a number of unique analog robots using rapid prototyping and off-the-shelf
and cannibalized parts. Los Alamos wanted robots capable of working in unpre-
dictable, unstructured, and potentially dangerous environments. Tilden created
more than eighty robot prototypes. His SATBOT autonomous spacecraft prototype could align itself with the Earth's magnetic field. For the Marine Corps Base Quantico, he built fifty insectoid robots capable of crawling into
minefields and detecting explosive devices. An “aggressive ashtray” robot spit
water at smokers. A “solar spinner” cleaned windows. A biomorph constructed
from five broken Sony Walkmans mimicked the movements of an ant.
At Los Alamos, Tilden began constructing Living Machines powered by solar
cells. Because of their energy source, these machines operated at very slow speeds
but were reliable and efficient over very long periods of time, many for more than a year. Tilden's original plans for robots were based on thermodynamic conduit
engines, in particular small and efficient solar engines capable of firing single
neurons. His “nervous net” neurons controlled the rhythms and patterns of
motions in robot bodies rather than the workings of their brains. Tilden’s insight
was to optimize the number of possible patterns across the smallest number of
embedded transistors. He realized that it was possible to produce six patterns of
movement with only twelve transistors. By folding the six patterns into a figure
eight in a symmetrical robot chassis, Tilden could mimic hopping, jumping, run-
ning, sitting, slithering, and a number of other patterns of behavior.
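The flavor of such a pattern generator can be loosely illustrated in software. The Python sketch below is a hypothetical digital caricature of what is in reality an analog transistor circuit (the node delays and motor-phase names are invented for the example): a single pulse circulates around a ring of delay elements, and whichever element holds the pulse drives the corresponding motor phase, so a repeating gait emerges without any central program.

    from itertools import cycle

    # A hypothetical six-element ring of "nervous net" delay nodes.
    # Each node holds the circulating pulse for a set number of ticks,
    # then passes it to the next node in the ring.
    delays = [2, 1, 2, 1, 2, 1]  # per-node delay, in simulation ticks
    motor_phase = ["left leg forward", "left leg back",
                   "right leg forward", "right leg back",
                   "body lean left", "body lean right"]

    def gait(steps):
        # Generate the sequence of active motor phases as the pulse circulates.
        schedule = []
        for node in cycle(range(len(delays))):
            for _ in range(delays[node]):
                schedule.append(motor_phase[node])
                if len(schedule) == steps:
                    return schedule

    for tick, phase in enumerate(gait(12)):
        print(f"tick {tick:2d}: {phase}")

Changing the delays or the wiring of the ring changes the gait, which is the sense in which the behavior emerges from the body and its surroundings rather than from a stored program.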
Tilden has since become an advocate of a new set of robot rules for such surviv-
alist wild automata. Tilden’s Laws of Robotics state that (1) a robot must protect its
existence at all costs; (2) a robot must obtain and maintain access to its own power
source; and (3) a robot must continually search for better power sources. Tilden
hopes that wild robots will be used to autonomously restore ecosystems damaged
by human beings.
Another epiphany for Tilden was to introduce relatively cheap robots as toys for
the masses and robot enthusiasts. He wanted his robots in the hands of many so
that they could be reprogrammed and modified by hackers, hobbyists, and mem-
bers of various maker communities. Tilden made the toys so that they could be
taken apart and studied. They were fundamentally hackable. Everything inside was carefully labeled and color-coded, and all of the wires had gold-plated contacts that could be pulled apart.
Further Reading
Frigo, Janette R., and Mark W. Tilden. 1995. “SATBOT I: Prototype of a Biomorphic
Autonomous Spacecraft.” Mobile Robotics, 66–75.
Hapgood, Fred. 1994. “Chaotic Robots.” Wired, September 1, 1994. https://www.wired
.com/1994/09/tilden/.
Hasslacher, Brosl, and Mark W. Tilden. 1995. “Living Machines.” Robotics and Autono-
mous Systems 15, no. 1–2: 143–69.
Marsh, Thomas. 2010. “The Evolution of a Roboticist: Mark Tilden.” Robot Magazine,
December 7, 2010. http://www.botmag.com/the-evolution-of-a-roboticist-mark-tilden.
Menzel, Peter, and Faith D’Aluisio. 2000. “Biobots.” Discover Magazine, September 1,
2000. https://www.discovermagazine.com/technology/biobots.
Rietman, Edward A., Mark W. Tilden, and Manor Askenazi. 2003. “Analog Computation
with Rings of Quasiperiodic Oscillators: The Microdynamics of Cognition in Liv-
ing Machines.” Robotics and Autonomous Systems 45, no. 3–4: 249–63.
Samans, James. 2005. The Robosapiens Companion: Tips, Tricks, and Hacks. New York:
Apress.
Trolley Problem
The Trolley Problem is an ethical dilemma first articulated by Philippa Foot in 1967.
Advancements in artificial intelligence in various fields have precipitated ethical
conversations about how the decision-making processes of these technologies
Following World War II, Turing began to study mathematical biology at the
Victoria University of Manchester, while progressing in his work in mathematics,
stored-program digital computing, and artificial intelligence. Turing’s 1950 work
“Computing Machinery and Intelligence” explored artificial intelligence and
introduced the concept of the Imitation Game (also known as the Turing Test),
whereby a human judge attempts to distinguish between a computer program and
a human via a set of written questions and responses. If the computer program
imitates a human such that the human judge cannot distinguish the computer
program’s responses from the human’s responses, then the computer program has
passed the test, suggesting that the program is capable of intelligent thought.
Turing and his colleague, D.G. Champernowne, wrote Turochamp, a chess pro-
gram intended to be executable by a computer, but no computer with sufficient
power existed to test the program. Instead, Turing tested the program by manually
running the algorithms.
Though much of Turing’s work remained classified until well after his death,
Turing was well decorated during his lifetime. In 1946, Turing was appointed to
the Order of the British Empire, and in 1951, he became a Fellow of the
Royal Society (FRS). An award in his name, the Turing Award, is presented annu-
ally by the Association for Computing Machinery for contributions to the comput-
ing field. Accompanied by $1 million in prize money, the Turing Award is widely
regarded as the Nobel Prize of Computing.
Turing was relatively open about being gay at a time when gay sexual activity
was still considered a criminal offense in the United Kingdom. In 1952, Turing
was charged with “gross indecency” under Section 11 of the Criminal Law
Amendment Act 1885. Turing was convicted, given probation, and subjected to a
punishment referred to as “chemical castration,” whereby he was injected with
synthetic estrogen for a year. Turing’s conviction impacted his professional life as
well. His security clearance was revoked, and he was forced to terminate his cryp-
tographic work with the GCHQ. In 2016, following successful campaigns to
secure an apology and pardon, the British government enacted the Alan Turing
law, which retroactively pardoned the thousands of men who were convicted under
Section 11 and similar historical legislation.
Turing died by cyanide poisoning in 1954. Though officially ruled a suicide,
Turing’s death may have been the result of accidental inhalation of cyanide fumes.
Amanda K. O’Keefe
See also: Chatbots and Loebner Prize; General and Narrow AI; Moral Turing Test; Turing
Test.
Further Reading
Hodges, Andrew. 2004. “Turing, Alan Mathison (1912–1954).” In Oxford Dictionary of
National Biography. https://www.oxforddnb.com/view/10.1093/ref:odnb/97801
98614128.001.0001/odnb-9780198614128-e-36578.
Lavington, Simon. 2012. Alan Turing and His Contemporaries: Building the World’s
First Computers. Swindon, UK: BCS, The Chartered Institute for IT.
Sharkey, Noel. 2012. “Alan Turing: The Experiment that Shaped Artificial Intelligence.”
BBC News, June 21, 2012. https://www.bbc.com/news/technology-18475646.
Turing Test
Bearing the name of computer science pioneer Alan Turing, the Turing Test is a
standard of AI that attributes intelligence to any machine capable of exhibiting
intelligent behavior equivalent to that of a human. The locus classicus for the test
is Turing’s “Computing Machinery and Intelligence” (1950), which develops a
basic prototype—what Turing calls “The Imitation Game.” In this game, a human judge must determine which of two rooms is occupied by a machine and which is
occupied by another human, on the basis of anonymized responses to questions
the judge puts to each occupant in natural language. Although the human respon-
dent must give truthful answers to the judge’s questions, the goal of the machine
is to deceive the judge into believing that it is human. According to Turing, the
machine may meaningfully be said to be intelligent to the extent that it is success-
ful in this task.
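The structure of the test can be summarized in a brief, purely schematic sketch. Nothing below models a real system: the judge, human, and machine are placeholder functions, and the only point illustrated is the blind protocol in which the judge sees anonymized text and the machine succeeds by escaping identification.

```python
# A schematic sketch of the Imitation Game's blind protocol, not of any actual
# test: the judge sees only anonymized labels and text, and the machine
# succeeds if it escapes identification. The judge, human, and machine are
# placeholder callables to be supplied by whoever runs the session.
import random

def imitation_game(judge, human, machine, questions):
    """Run one blind session and report whether the machine deceived the judge."""
    contestants = [human, machine]
    random.shuffle(contestants)                    # hide which label is which
    players = dict(zip(["A", "B"], contestants))
    transcript = []
    for question in questions:
        answers = {label: respond(question) for label, respond in players.items()}
        transcript.append((question, answers))     # the judge sees labels and text only
    guess = judge(transcript)                      # the label the judge believes is human
    return players[guess] is not human             # True if the judge was deceived
```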
The main advantage to this basically operationalist account of intelligence is
that it avoids difficult metaphysical and epistemological questions about the nature
and inner experience of intelligent activity. By Turing's standard, nothing more than empirical observation of outward behavior is required to predicate intelligence of any object. This stands in particularly stark contrast to the broadly Car-
tesian tradition in epistemology, according to which some internal self-awareness
is definitional of intelligence. The so-called “problem of other minds” that results
from such a view—namely, how to be certain of the existence of other intelligent
beings if it is not possible to know their minds from a supposedly needed first-
person perspective—is importantly eschewed on Turing’s approach.
Nevertheless, the Turing Test remains tied to the spirit of Cartesian epistemol-
ogy at least insofar as it conceives of intelligence in a strictly formalist way. The
machine referred to in the Imitation Game is a digital computer in Turing’s sense:
namely, a set of operations that may in principle find instantiation in any sort of
material. Specifically, the digital computer is made up of three components: a
store of knowledge, an executive unit that carries out individual commands, and a
control to regulate the executive unit. But as Turing makes clear, it is of no
essential significance whether these components are materialized via electronic
mechanisms or mechanical ones. What is decisive is the formal set of rules that
constitute the essence of the computer itself. Turing retains a basic notion that
intelligence is fundamentally immaterial. If this much is true, it is reasonable to
suppose that human intelligence operates basically like a digital computer and
may in principle, therefore, be replicated by artificial means.
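The three-part scheme can be made concrete with a toy sketch. The miniature instruction set below is invented for illustration and makes no claim to follow Turing's own formalism; it simply shows a store, an executive unit that carries out individual commands, and a control deciding what happens next, with nothing depending on any particular material substrate.

```python
# A toy illustration of the three-part scheme described above -- a store, an
# executive unit that carries out individual commands, and a control that
# decides which command comes next. The miniature instruction set is invented
# for illustration; the point is that the machine is a formal set of rules,
# indifferent to whether it is realized electronically or mechanically.

def run(program, store):
    control = 0                               # the control regulates what happens next
    while control < len(program):
        op, *args = program[control]          # hand one command to the executive unit
        if op == "SET":
            store[args[0]] = args[1]
        elif op == "ADD":
            store[args[0]] = store[args[1]] + store[args[2]]
        elif op == "PRINT":
            print(store[args[0]])
        control += 1
    return store

run([("SET", "x", 2), ("SET", "y", 3), ("ADD", "z", "x", "y"), ("PRINT", "z")], store={})
```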
The history of AI research ever since Turing's work has divided into two basic
camps: those who accept this basic assumption and those who reject it. John
Haugeland has coined the phrase “good old-fashioned AI” or GOFAI to character-
ize the first camp. Notable figures belonging to this approach include Marvin
Minsky, Allen Newell, Herbert Simon, Terry Winograd, and especially Joseph
Weizenbaum, whose program ELIZA was contentiously touted as the first to have
successfully passed the Turing Test in 1966.
Nevertheless, critics of Turing’s formalism have abounded, especially in the
last three decades, and today GOFAI is a much-discredited approach in AI.
Among the most famous critiques of GOFAI broadly—and the assumptions of the
Turing Test specifically—is John Searle’s Minds, Brains, and Programs (1980), in
which Searle develops his now-famous Chinese Room thought experiment. The
latter imagines a version of the Turing Test, in which a human with no knowledge
of Chinese is seated in a room and made to correlate Chinese characters she
receives to other Chinese characters she sends out, according to a program scripted
in English. Supposing sufficient mastery of the program, Searle suggests that the
person inside the room might pass the Turing Test, deceiving a native Chinese
speaker into falsely believing that she understood Chinese. If the person in the
room is instead a digital computer, Searle’s critical thesis is that Turing-type tests
fail to capture the phenomenon of understanding, which Searle argues involves
more than the mere functionally correct correlation of inputs with outputs.
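The crux of the argument can be caricatured in a few lines of code. The "rule book" below is invented for illustration: it produces functionally correct correlations of inputs with outputs while plainly containing nothing that could be called understanding.

```python
# A deliberately trivial illustration of Searle's point, with an invented
# "rule book": the function correlates incoming symbols with outgoing symbols
# exactly as instructed, yet nothing in it understands what the symbols mean.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather today?" -> "The weather is fine today."
}

def chinese_room(incoming):
    # Follow the script: match the shape of the input, return the paired output.
    return RULE_BOOK.get(incoming, "对不起，我不明白。")  # fallback: "Sorry, I do not understand."

print(chinese_room("你好吗？"))
```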
Searle’s critique suggests that AI research must take seriously the questions of
materiality in ways the formalism of Turing’s Imitation Game neglects. Searle
concludes his own discussion of the Chinese Room thought experiment by arguing
that the particular physical makeup of human organisms—especially that they
possess complex nervous systems, brain tissue, etc.—ought not be dismissed as
irrelevant to theories of intelligence. This view has partly inspired an entirely
alternative approach in AI known as connectionism, which seeks to construct
machine intelligence by modeling the electrical structure of human brain tissue.
The successes of this approach have been widely debated, but consensus appears
to be that it improves on GOFAI in establishing generalized forms of intelligence.
However, Turing’s test is not only subject to critique from the side of material-
ism but also may be attacked from the direction of renewed formalism. Thus, one
may argue that as a standard of intelligence, Turing tests are inadequate precisely
because they seek to replicate human behavior, whereas the latter is often highly
unintelligent. On strong versions of this criticism, standards of rationality must be
derivable a priori rather than from actual human practice, if they are to distin-
guish rational from irrational human behavior in the first place. This line of cri-
tique has become particularly pointed as the emphasis of AI research has come
increasingly to be laid on the possibility of so-called super-intelligence: forms of
generalized machine intelligence that well surpass the human level. Should this
new frontier of AI be reached, it would seem to render Turing tests obsolete.
Moreover, even discussion of the possibility of super-intelligence would seem to
require new standards of intelligence besides strict Turing tests.
Against such criticism, Turing may be defended by observing that it was never
his aim to establish any once-and-for-all standard of intelligence. Indeed, by his
own lights, the goal is not to answer the metaphysically challenging question “can
machines think” but to replace this question with the more empirically verifiable
alternative: “What will happen when a machine takes the part [of the man in the
Imitation Game]” (Turing 1997, 29–30). Thus, the above-mentioned weakness of
Turing’s test—that it fails to establish a priori standards of rationality—is indeed
also part of its strength and motivation. No doubt it also explains the long influence the test has had over research in all fields of AI since it was first proposed more than seventy years ago.
David Schafer
See also: Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI;
Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.
Further Reading
Haugeland, John. 1997. “What Is Mind Design?” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 1–28. Cambridge, MA: MIT Press.
Searle, John R. 1997. “Minds, Brains, and Programs.” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 183–204. Cambridge, MA: MIT Press.
Turing, A. M. 1997. “Computing Machinery and Intelligence.” In Mind Design II: Philosophy, Psychology, Artificial Intelligence, edited by John Haugeland, 29–56. Cambridge, MA: MIT Press.
creating space for quiet reflection that allows subjects to think deeply about the
relationships they have with their devices
In her next major book, Alone Together: Why We Expect More from Technology
and Less from Each Other (2011), Turkle leveraged these intimate ethnographic
methods to make the argument that the growing relationship between humans and
the technology that they use is problematic. These problems are linked to both the
increasing use of social media as a means of communication and the continued
level of comfort and relatability to technological devices, stemming from the
emergent paradigm of AI that had become nearly ubiquitous. Here she linked the
roots of the problem back to early leaders in the development of cybernetics, not-
ing, for example, Norbert Wiener's musings in his book God & Golem, Inc. (1964)
on the possibility of sending a human being over a telegraph line. This approach to
cybernetic thinking blurs the lines between humans and technology because it
reduces both to information.
In terms of AI, this means that it is not important whether the devices we inter-
act with are really intelligent. Turkle argues that by interacting with these devices
and taking care of them, we are able to trick ourselves into believing we are in a
relationship, which causes us to experience the devices as if they were intelligent.
She identified this shift in a 2006 presentation at the Dartmouth Artificial Intelli-
gence Conference titled “Artificial Intelligence at 50: From Building Intelligence
to Nurturing Sociabilities.” In this presentation, she identified the 1997 Tamagot-
chi, 1998 Furby, and 2000 My Real Baby as early versions of what she calls rela-
tional artifacts, which are more broadly described in the literature as social
machines. The stark difference between these devices and all past children's toys is that these devices come pre-animated and ready for a relationship and do not require that children project a relationship onto them. Turkle believes that this
shift is concerned as much or more with our human vulnerabilities than with the
machines’ capabilities. In other words, the very act of caring for an object makes
it more likely that one will not only see that object as intelligent but also feel a
connection to it. For the average person interacting with these devices, this feeling
of connection is more important than the abstract philosophic questions about the
nature of its intelligence.
In both Alone Together and Reclaiming Conversation: The Power of Talk in
a Digital Age (2015), Turkle delved deeper into exploring the consequences of
humans interacting with AI-based devices. In Alone Together, she gives the exam-
ple of Adam, who enjoys the gratitude of the AI bots that he rules over in the game
Civilization. Adam finds this play calming and enjoys that he is able to create
something new. Yet, Turkle is quite critical of this interaction, claiming that
Adam’s playing is not real creation but merely the feeling of creation, and it is
problematic because it lacks any true pressure or risk. She extends this argument
in Reclaiming Conversation, arguing that sociable companions give only a sense
of friendship. This is problematic because of the importance of friendship between
humans and what might get left out of relationships that only offer a feeling or
sense of friendship rather than actual friendship.
This shift is of urgent importance for Turkle. She argues that while there are
potential benefits to relationships with AI-enabled devices, these are relatively
minor in comparison to what is missing: the full complexity and inherent
contradictions that are part of what it means to be human. The relationship some-
one can develop with an AI-enabled device is not as complex as those one can
develop with other humans. Turkle argues that as people have become more com-
fortable and more reliant on technological devices, the very meaning of compan-
ionship has shifted. This shift has been responsible for simplifying people’s
expectations for companionship, reducing the benefits that one hopes to receive
from relationships. Now, people are more likely to equate companionship simply with interaction, leaving out the more complex feelings and negotiations that are commonly part of relationships. One can develop compan-
ionship with devices simply by interacting with them. As human communication
has shifted away from face-to-face conversation to interaction that is mediated by
devices, the conversations between humans have become merely transactional. In other
words, interaction is the most that is expected. Drawing on her background in
psychoanalysis, Turkle argues that this form of transactional communication
means that users spend less time learning to see the world through the eyes of
another person, which is an important skill that fosters empathy.
Drawing together these various threads of arguments, Turkle believes we are in
a robotic moment in which we long for, and in some cases, we even prefer AI-
based robotic companionship to that of other humans. For example, some people
enjoy having conversations with the Siri virtual assistant on their iPhones because
they don’t fear being judged by this device, which is highlighted by a series of Siri
ads that feature celebrities talking to their phones. This is problematic for Turkle
because these devices can only respond as if they understand the conversation.
However, AI-based devices are limited to understanding the literal meanings of
the data stored on the device. They can, for example, understand the content of
calendars and emails that reside on phones, but they cannot actually understand
what any of this data means to the user. For an AI-based device, there is no signifi-
cant difference between a calendar appointment for car maintenance and one for
chemotherapy. Entangled in a variety of these robotic relationships with an
increasing number of devices, a person can forget what it means to have an authen-
tic conversation with another human.
While Reclaiming Conversation reports on eroding conversation skills and
shrinking levels of empathy, it also strikes a hopeful note. Because people are
experiencing growing dissatisfaction in their relationships, there may yet be the
possibility of reclaiming the important role of face-to-face human communica-
tion. Turkle’s solutions emphasize decreasing the amount of time that one uses a
cell phone, but the role of AI in this relationship is also of importance. Users must
acknowledge that the relationships they have with their virtual assistants cannot
replace face-to-face relationships. This will require being more deliberate about
the way one uses devices, intentionally prioritizing in-person interactions over the
quicker and easier interactions provided by AI-enabled devices.
J. J. Sylvia
See also: Caregiver Robots; Cognitive Psychology, AI and.
Further Reading
Turkle, Sherry. 1995. Life on the Screen: Identity in the Age of the Internet. New York:
Simon and Schuster.
Turkle, Sherry. 2005. The Second Self: Computers and the Human Spirit. Cambridge,
MA: MIT Press.
Turkle, Sherry. 2015. Reclaiming Conversation: The Power of Talk in a Digital Age. New
York: Penguin Press.
rebelling against humanity. As a result, HAL has commonly been read as a cau-
tionary tale of our fear of the blurring boundaries between human and machine
and the expanding autonomy and sophistication of technology.
HAL can also be read less pessimistically, as a representation of the goals,
methods, and dreams of the artificial intelligence field. Indeed, one of the founders
of artificial intelligence, Marvin Minsky, served as a consultant for the film.
Accordingly, HAL depicted (and still depicts) a plausible representation of future
artificial intelligence systems. HAL integrates many subfields of artificial intelli-
gence work, including but not limited to visual processing, natural language acu-
ity, and chess playing.
HAL’s advanced visual processing is demonstrated in numerous ways. The film
contains frequent cuts to HAL’s camera eye and gives a subjective view through
the camera. HAL is also able to recognize the faces of crewmembers in one of
Bowman’s drawings. He even moves beyond simple face-object recognition and
gives aesthetic judgment to Bowman’s drawing style. In so doing, HAL demon-
strates visual processing capabilities that still surpass what we are able to accom-
plish today.
One of the most significant signs of intelligence is language, which has inter-
ested the field of artificial intelligence since its inception. Throughout the film and
the novel, HAL shows an impressive array of language competencies. He is able
to understand and make sense of sentences and conversations, and he can generate
appropriate responses to social interactions. HAL is further capable of taking part
in conversations ranging from simple commands—such as displaying Poole’s par-
ents' birthday message—to complex interactions, such as expressing inner conflicts
and, perhaps, telling cunning lies. In other words, HAL is capable of deciphering
the connotative meanings of human interactions, something that remains out of
reach for current artificial intelligence systems. Moreover, HAL’s language acuity
would allow the computer to easily pass the Turing Test. The Turing Test is a trial in which a machine passes if a human judge cannot reliably determine whether they are communicating with a human or a machine.
Game playing, especially chess, has been a perennial issue in artificial intelli-
gence. Chess was singled out in early artificial intelligence research because of the
game's difficulty and its reputation as a game for intelligent people. Thus, it was assumed that if com-
puters could play chess, they must also be intelligent. At the time of 2001’s release,
there was plenty of optimism in artificial intelligence research given the early suc-
cesses in designing machines capable of playing chess. HAL’s ability to play the
game surpassed artificial intelligence systems from the time. Consequently, the
chess playing scenes in the novel and the movie, where HAL is virtually unbeat-
able, coincided with optimism that in the foreseeable future, there would be chess-
playing systems superior to any human player. Today, artificial intelligence has
caught up to science fiction with chess-playing supercomputers such as Deep Blue
and advanced AI programs such as AlphaZero.
Over fifty years after their release, the novel and film retain the power to inspire
awe. In 1991, the film was deemed “culturally, historically, or aesthetically” sig-
nificant enough to be preserved by the National Film Registry.
Todd K. Platts
See also: Deep Blue; Minsky, Marvin; Robot Ethics; Turing Test.
Further Reading
Brammer, Rebekah. 2018. “Welcome to the Machine: Artificial Intelligence on Screen.”
Screen Education 90 (September): 38–45.
Kolker, Robert, ed. 2006. Stanley Kubrick’s 2001: A Space Odyssey: New Essays. New
York: Oxford University Press.
Krämer, Peter. 2010. 2001: A Space Odyssey. London: British Film Institute.
U
Unmanned Ground and Aerial Vehicles
Unmanned vehicles are machines that operate without a human operator physi-
cally present onboard. There are a diverse variety of such vehicles that can work in
different environments including ground, underground, underwater, airspace, and
outer space. Unmanned vehicles are either preprogrammed by algorithms or
remotely controlled by a human operator, and they can have varying degrees of
autonomy in their operations. Typically, the overall system of an unmanned vehi-
cle consists of sensors that collect data about the surrounding environment and
actuators that enable the vehicle to maneuver. After the human operator or the
machine program receives the information gathered by sensors, the actuators
guide the vehicle in accordance with the commands received from the operator. A
major purpose of unmanned vehicles is to reach places that are partially or fully
unreachable or dangerous for humans and to perform potentially dangerous, dirty,
and dull (three D) tasks for humans. Unmanned vehicles also bear the potential of
lowering labor costs while increasing work and time efficiency in both military
and civilian settings.
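This sense-decide-actuate cycle can be sketched in outline. The stubs below are hypothetical stand-ins for sensor drivers, control logic, and actuator interfaces; they illustrate the flow of information from sensors to commands to actuators rather than any particular vehicle.

```python
# A minimal sketch of the sense-decide-actuate cycle described above. The
# sensor reading, decision rule, and actuator interface are hypothetical stubs;
# a real unmanned vehicle (or its remote operator) would supply far richer
# versions of each.
import random

def read_sensors():
    # Stub: pretend to measure the distance (in meters) to the nearest obstacle.
    return {"obstacle_distance_m": random.uniform(0.0, 10.0)}

def decide(readings):
    # The onboard program (or a remote operator) turns sensor data into a command.
    return "STOP" if readings["obstacle_distance_m"] < 2.0 else "FORWARD"

def actuate(command):
    # Stub: here the actuators would steer, brake, or throttle the vehicle.
    print(f"actuators received command: {command}")

for _ in range(5):                  # one control cycle per loop iteration
    actuate(decide(read_sensors()))
```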
The history of unmanned vehicle technology goes back to the nineteenth cen-
tury. In 1898, the Serbian-American engineer and inventor Nikola Tesla presented
a remotely controlled boat at an exhibition at Madison Square Garden in New
York City. The miniature boat he built had a hull, keel, rudder, electric motor, battery, and an antenna receiver. He could remotely control the vehicle's speed, posi-
tion, and direction through the radio signals of a radio-transmitting control box.
He called his invention the teleautomaton and received a patent the same year (Method of and Apparatus for Controlling Mechanism of Moving Vessels or Vehicles).
Even though Tesla’s invention remained largely unnoticed by the military at the
time, unmanned aerial and ground vehicles have been widely used in the military
for strategic and tactical purposes such as reconnaissance, surveillance, and target
acquisition since the early twentieth century. With the beginning of World War I
(1914–1918), the idea of remotely controlled unmanned vehicles gained popularity
among armies as they were seeking ways both to increase the efficiency of mili-
tary operations and also to reduce the human costs of combat. Today, diverse mili-
tary and civilian applications of unmanned vehicles are being developed in a wide
array of industries such as agriculture, manufacturing, mining, emergency
response, transportation of goods and people, and security operations of police
and the military.
Unmanned Ground Vehicles (UGVs) are land-based vehicles that operate on
the ground without a human driver inside. They are commonly described as
mobile robots. Typically, a UGV consists of sensors, utility and power platform,
Today, UGVs are used in a wide range of operations in both indoor and outdoor
environments. While the military uses UGVs in modern warfare, commercial
applications of UGV technology are also being rapidly developed. UGVs are
widely used in factories and warehouses across the world. They also have
internet message board. During his AMA, Warwick discussed his movement into
and out of the status of a cybernetic entity, as his claim to be the first cyborg has
often been contested.
Warwick’s status as the first cyborg is a matter of definition. Implantable thera-
peutic devices such as the pacemaker were developed mid-twentieth century.
Steve Mann, another contender for the title of first cyborg, has experimented with
wearable sensory enhancement technology since the 1970s. Artist Eduardo Kac
implanted himself with a pet registration microchip in 1997. Warwick’s first
implant in 1998 provided location data to sensors in his lab and office, while his
more extensive 2002 implant transmitted signals across networked computers.
Warwick acknowledges that his claim to be a cybernetic entity is much stronger in
the latter case.
The 1998 Project Cyborg involved the implantation of a chip in his arm. The
chip was a simple transmitter, of a type called a radio-frequency identification
device (RFID). This chip was placed millimeters deep in his left arm, and the
experiment lasted nine days, after which the chip was removed. While the chip
was implanted, it communicated with sensors in Warwick’s laboratory and office
to control environmental conditions such as electric lights. The operation was
conducted by Dr. George Boulos and his medical team in Reading. Warwick has
written and spoken extensively about experiments he and his collaborators have
performed on his body and the body of Irena Warwick, Kevin’s wife and partner,
in the BrainGate (also known as Utah Array) experiment. In I, Cyborg (2002),
Warwick provides an autobiographical account of being implanted with a computer chip and an overview of his research, up to that point, on attempts to integrate the human body with machines.
In 2002, a group of neurosurgeons at Radcliffe Infirmary, Oxford, led by Amjad
Shad and Peter Teddy, implanted a device connected to the median nerve of War-
wick’s left arm. The device was the brainchild of Mark Gasson, who earned his
PhD under Warwick, and his research team. The implant in Warwick was con-
nected to a computer at Columbia University, in a New York lab, and sent signals
to a robot hand in the United Kingdom, at what was then Warwick’s home institu-
tion of the University of Reading. The hand could be made to open or close fol-
lowing Warwick’s movements, and Warwick could receive limited sensory data
from the hand.
Warwick's wife Irena has objected to characterizations of her participation as being done under compulsion, emphasizing her determination and enthusiasm about being part of cybernetic experiments and her awareness of the risks to Kevin and herself. Warwick described the most significant part of the project as the ability, through network transmission, to extend his body across the ocean with
the New York-Reading experiment. Warwick has spoken with intensity of the
extraordinary intimacy of the linkage of his and Irena’s nervous systems and
expressed a desire to experiment with direct brain connection. Warwick has lik-
ened brain linkage to sexual intimacy and suggested that brain-to-brain connec-
tions could be influenced by human sexual preferences and aversions. Warwick
had the interface in his body for three months prior to the experiment. Some wires
remained in his arm from the BrainGate experiment.
Warwick and his collaborators have designed several robots, including Morgui,
a skull-like robot that was restricted to visitors over the age of eighteen on the
grounds that it was frightening; Hissing Sid, a robotic cat that made headlines
when it was barred from air travel by British Airways; and Roger, a robot designed
for long-distance running.
Warwick began working on rat brain cells as robotic controllers in 2007. Rat
embryo brain cells are harvested to serve as parts in order to study how brain cell
networks can be used to control mechanics. The Animat project involved connect-
ing cells to electrodes, with those electrodes transmitting signals from robot
sensors.
In 2014, Warwick participated in organizing the Turing Test of a chatbot called
Eugene Goostman. In the test, a computer producing a facsimile of human speech
attempts to fool judges into evaluating it as a fellow human. The Royal Society-
hosted event in London led to bombastic headlines in the press about robot over-
lords and the test being passed as a huge milestone. Warwick was criticized for
participating in the hype when he claimed the test should be considered passed,
though he also cautioned that predictions based on the results should be greeted
with profound skepticism.
Warwick has been the subject of reporting in The Telegraph, The Atlantic, The
Register, Wired Magazine, Forbes, Scientific American, The Guardian, and many
other publications. Warwick appeared on the cover of the February 2000 issue of
Wired. He has been covered most intensively by The Register, which gave him his
nickname, Captain Cyborg. The Register has persistently reported on Warwick’s
experiments, public appearances, and projections about the future of technology,
though always with an eye to teasing, heckling, and lampooning him. In the course of their report-
ing, The Register has run an assortment of quotes critical of Warwick from scien-
tists in related disciplines. Their criticisms characterize Warwick as a showman
and self-promoter, not a serious scientist. Warwick has dismissed such criticisms
as unfounded or the result of jealousy, pointing out that people who attempt to
make science accessible and exciting to the larger public often suffer similar cri-
tiques. Warwick has also been criticized for the uncritical use of the categories of
black and white races in his discussion of intelligence in QI.
Warwick has raised concerns about the potential for humans to be overtaken by
technology; for example, he has warned that military robots designed with enhanced intelligence might overwhelm humanity if they decide to do so. War-
wick argues that those who do not make use of implantable tech in the future will
be disadvantaged or left behind. This view is predicated on the idea that very
powerful computers will be running human societies in the near future and that
implantable technology will provide the most effective mechanism of communi-
cation with these superintelligent machines. Warwick has made expansive claims
about the fate of unaugmented humans in the future, imagining they will come to be regarded much as cows are: a form of life seen as fundamentally inferior.
Warwick is a fan of the transhumanist movement, especially the biohackers who
work outside of traditional institutions. Beyond his warnings about how humanity
might become undone by machines, Warwick anticipates remarkable expansions
of the human sensorium through mechanical enhancement. His fondest dream is to
transcend speech through technology, merge human minds to create new intimacies, and develop new, refined forms of communication.
Jacob Aaron Boss
See also: Cybernetics and AI.
Further Reading
O’Shea, Ryan. 2017. “Kevin Warwick on Project Cyborg.” The Future Grind, November
24, 2017. https://futuregrind.org/podcast-episodes/2018/5/17/ep-10-kevin-warwick
-on-project-cyborg.
Stangroom, Jeremy. 2005. What Scientists Think. New York: Routledge.
Warwick, Kevin. 2000. QI: The Quest for Intelligence. London: Piatkus.
Warwick, Kevin. 2002. I, Cyborg. Champaign: University of Illinois Press.
Warwick, Kevin. 2004. March of the Machines: The Breakthrough in Artificial Intelli-
gence. Champaign: University of Illinois Press.
Warwick, Kevin. 2012. Artificial Intelligence: The Basics. New York: Routledge.
Workplace Automation
The term “automation” is derived from the Greek word automatos, meaning “act-
ing of one’s own will.” In a modern economic context, the term is used to describe
any digital or physical process that has been designed to perform tasks requiring
minimal human input or intervention. While technological development has been
continuous over human history, the pervasive level of automation in modern soci-
ety is a relatively recent development. The replacement of humans by machines
performing routine manual labor roles is now the de facto norm.
The modern debate on the pros and cons associated with automation has shifted
away from industrial automation (i.e., robotic assembly lines) to the future impact
of artificial intelligence—a general-purpose technology with the potential to rede-
fine the nature of human labor. Technologies built upon AI and machine learning
are increasingly impacting all sectors of the economy. Autonomous driving sys-
tems are expected to make travel in all domains (e.g., land, sea, and air) much
safer and more efficient. In health care, artificial intelligence will be able to make
disease diagnoses, perform surgeries, and enhance our knowledge of gene editing
at a speed and efficiency greater than any human doctor. Financial services are
also affected by the introduction of many AI-related innovations, such as robo-
advisors—intelligent robots that provide financial advice and investment manage-
ment services to clients online.
There are two opposing opinions on the long-term effect that the development
of AI-based technologies will have on automation and human labor. They can be
summarized as follows:
NEGATIVE: Automation due to AI leads to widespread unemployment as
increasingly more complex human labor is replaced by machines.
POSITIVE: Automation due to AI replaces some human labor, but simultaneously
creates higher-quality jobs in new sectors, which results in a net positive soci-
etal outcome.
The types of human labor can be qualitatively described as falling under one of
the following categories: (1) routine manual, (2) nonroutine manual, (3) routine
cognitive, and (4) nonroutine cognitive (Autor et al. 2003). The routine/nonroutine
division refers to how much a job follows a set of predetermined steps, while the
manual/cognitive division refers to whether a job requires physical or creative
input. As expected, routine manual jobs (e.g., machining, welding, and painting)
are the easiest to automate, while nonroutine manual jobs (e.g., operating a vehi-
cle) and nonroutine cognitive jobs (e.g., white-collar work) have traditionally been
considered more difficult to automate. But the advent of AI-based technologies,
which combine the speed and efficiency of a machine with the agency of a human,
is now putting pressure on employment in typically human applications—jobs
that require some level of creative decision-making.
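The taxonomy can be laid out schematically, as in the brief sketch below; the "data entry" example is added for illustration and is not drawn from the cited study, while the other occupations follow the examples given above.

```python
# An illustrative layout of the two-axis taxonomy described above (after Autor
# et al. 2003). The "data entry" example is added here for illustration and is
# not drawn from the cited study; the other examples follow the text.
def job_category(routine: bool, manual: bool) -> str:
    return ("routine" if routine else "nonroutine") + " " + ("manual" if manual else "cognitive")

examples = {
    "welding":             dict(routine=True,  manual=True),
    "data entry":          dict(routine=True,  manual=False),
    "operating a vehicle": dict(routine=False, manual=True),
    "white-collar work":   dict(routine=False, manual=False),
}

for job, axes in examples.items():
    print(f"{job:>20}: {job_category(**axes)}")
```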
For example, a large percentage of the U.S. population is currently employed to
drive a vehicle in some capacity. With the current advances in self-driving tech-
nology, these jobs are imminently automatable. More generally, Frey and Osborne (2017) predict that 47 percent of all jobs in the U.S. economy are at high risk of being automated over the next two decades.
Meanwhile, 33 percent of jobs fall in the nonroutine cognitive category and are considered relatively protected from automation (Frey and Osborne 2017).
But even such jobs are under pressure from recent advancements in AI-based
technologies, such as deciphering handwriting, decision-making in fraud detec-
tion, and paralegal research. For example, in the legal profession, advanced text
analysis algorithms are increasingly able to read and detect relevant information
among thousands of pages of complex case documents such as license agreements,
employment agreements, customer contracts, and common law precedents. This
makes human labor, allocated to information processing and synthesizing, redun-
dant. Similar trends can be seen in other white-collar professions where human
agency has traditionally been required to perform tasks.
The fear is that the falling costs of robotic technology, coupled with develop-
ments in AI-based technology, will accelerate the loss of routine jobs and nonrou-
tine jobs typically allocated to humans across all economic sectors. The process of
automation is not inherently negative, but if this widespread replacement of labor
is not met by an adequately robust growth in new jobs, the result will be mass
unemployment.
On the other hand, a significant number of technology experts and other soci-
etal stakeholders argue that such doomsday scenarios in the labor markets will
not materialize. In their opinion, the effects of the AI-driven technological revo-
lution will not be dissimilar to those of the industrial revolutions that preceded
it. For example, during the First Industrial Revolution—which saw the introduction of the steam engine and associated advanced machinery—many routine jobs, especially in the textile industry, were rendered redundant. This rapidly shifting employment landscape led disenfranchised English textile workers, known as the Luddites, to organize in protest of the changes. Despite these short-term upheavals, the labor market ultimately adapted rather successfully, and the new technologies led to the creation of entirely new kinds of jobs.
Further Reading
Acemoglu, Daron, and Pascual Restrepo. 2019. “Automation and New Tasks: How Tech-
nology Displaces and Reinstates Labor.” Journal of Economic Perspectives 33,
no. 2 (Spring): 3–30.
Autor, David H. 2015. “Why Are There Still So Many Jobs? The History and Future of
Workplace Automation.” Journal of Economic Perspectives 29, no. 3 (Summer):
3–30.
Autor, David H., Frank Levy, and Richard J. Murnane. 2003. “The Skill Content of Recent
Technological Change: An Empirical Exploration.” Quarterly Journal of Econom-
ics 118, no. 4 (November): 1279–1333.
Bruun, Edvard P. G., and Alban Duka. 2018. “Artificial Intelligence, Jobs, and the Future
of Work: Racing with the Machines.” Basic Income Studies 13, no. 2: 1–15.
Brynjolfsson, Erik, and Andrew McAfee. 2011. Race Against the Machine: How the Digi-
tal Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly
Transforming Employment. Lexington, MA: Digital Frontier Press.
Frey, Carl Benedikt, and Michael A. Osborne. 2017. “The Future of Employment: How Susceptible Are Jobs to Computerisation?” Technological Forecasting and Social Change 114: 254–80.
Manyika, James, Michael Chui, Jacques Bughin, Richard Dobbs, Peter Bisson, and Alex
Marrs. 2013. Disruptive Technologies: Advances that Will Transform Life, Busi-
ness, and the Global Economy. McKinsey Global Institute Technical Report.
Smith, Aaron, and Janna Anderson. 2014. “AI, Robotics, and the Future of Jobs.” Pew
Research Center Report. http://www.pewinternet.org/2014/08/06/future-of-jobs/.
Susskind, Richard E., and Daniel Susskind. 2015. The Future of the Professions: How
Technology Will Transform the Work of Human Experts. New York: Oxford Uni-
versity Press.
Y
Yudkowsky, Eliezer (1979–)
Eliezer Yudkowsky is an artificial intelligence theorist, blogger, and autodidact
best known for his commentaries on friendly artificial intelligence. He is cofounder
and research fellow at the Machine Intelligence Research Institute (MIRI),
founded in 2000 as the Singularity Institute for Artificial Intelligence. He is also a
founding director of the World Transhumanist Association. Yudkowsky often
writes on the topic of human rationality on the community blog Less Wrong.
Yudkowsky’s inspiration for his life’s work is the concept of an intelligence
explosion first invoked by British statistician I. J. Good in 1965. In his essay
“Speculations Concerning the First Ultraintelligent Machine,” Good explained
that a machine with intellectual capacities surpassing human intelligence should
be capable of rewriting its own software to improve itself, presumably until reach-
ing superintelligence. At this point, the machine would design artificial intelli-
gence improvements far beyond the capabilities of any human being. Such a
machine would be the last invention of humankind.
Yudkowsky has examined expectations for a coming intelligence explosion in
depth. Greater-than-human intelligence, he asserts, depends on creating a machine
capable of general intelligence beyond that of human beings. Whole brain emulation or mind uploading experiments, such as the Blue Brain Project, are considered one possible route. Another is biological cognitive enhancement, which
would improve the mental capacities of human beings by genetic or molecular
modification. Augmented intelligence by direct brain-computer interfaces is a
third possibility. A final approach to the intelligence explosion involves an artifi-
cial general intelligence built from neural networks or genetic algorithms, which
could, for example, recursively self-improve on themselves. Yudkowsky has said
that the conditions and groundwork for superintelligence could be in place by the
year 2060.
Yudkowsky has argued that a superintelligent machine might disrupt computer
networks, exploit infrastructure vulnerabilities, create copies of itself for purposes
of global domination, or even eliminate humanity as a potential threat to its own
survival. As a rival, an artificial superintelligence could overturn civilization in
the blink of an eye. Thus, superintelligence represents an existential threat to
human beings.
On the other hand, a superintelligent machine might resolve intractable human
problems such as disease, famine, and war. It might discover ways to take humans
to the stars or give them the ability to participate in their own evolution or dis-
cover the biological bases for immortality. It would be very difficult to assess the
motivations of a superintelligent AI because the machine’s ultimate motives would
exist beyond human understanding. Yudkowsky has noted “the AI does not hate
you, nor does it love you, but you are made out of atoms which it can use for some-
thing else” (Yudkowsky 2008, 27).
For this reason, Yudkowsky believes it is important to build safety mechanisms
or basic machine morality into artificial intelligences. He calls this research per-
spective Friendly AI: How do you design machines that will remain friendly to
humanity beyond the point of superintelligence? Yudkowsky admits the difficul-
ties in approaching this problem. He notes in particular the superpowers such a
machine might be expected to possess and the dangerous and unexpected literal
ways it might go about fulfilling its programming.
To address these concerns about a superintelligent AI destroying humanity
either inadvertently or by conscious choice, Yudkowsky devised the AI-box
experiment. With no real AI oracle available for a test in a virtual prison, Yud-
kowsky cast himself as the imprisoned AI. He reports that during text-based ter-
minal conversations with human gatekeepers, he has twice experimentally
convinced them to let him out of the box. He has not yet published his winning
tactics, but it stands to reason that a true superintelligence would be even more
persuasive. Even if a first superintelligent AI could be checked, others would
likely emerge from other labs in quick succession. It is unlikely that they could all
be contained. It is possible that an advanced AI could even exploit currently
unknown physics to route around automatic fail-safe control mechanisms. An AI-
box experiment forms the basic premise of the 2014 science fiction film Ex
Machina.
Yudkowsky and MIRI researchers do not hold out much hope that program-
ming basic rules of benevolence into superintelligent machines will prevent
catastrophe. A truly advanced machine will realize that programmed constraints
(for example, Asimov’s Three Laws) are obstacles to the achievement of its goals.
It is unlikely that human designers would be able to outthink a rapidly improving
machine. Or it might be that the machine achieves its goals in a way that mini-
mizes harm to humans by subverting the basic human condition. If, for example,
an AI is programmed to avoid inflicting pain, the caretaking AI might one day
devise a way to remove all of the pain receptors from human beings. It is not likely
that humans can anticipate every eventuality and write specific rules or goals that
prevent infliction of every possible harm or that satisfy every human inclination.
Yudkowsky and his think tank group also believe it is unlikely that machine
learning can be used to teach a superintelligence moral behavior. Humans them-
selves disagree on the morality of individual cases present in human society. A
superintelligence may make incorrect decisions in a radically reshaped world, or it
may not classify unique sources of data in the ways originally intended by the human judgment datasets it is given.
Instead, Yudkowsky suggests pursuing coherent extrapolated volition as a par-
tial solution to these problems. He advances the proposition that a seed AI itself be
given the task of exploring and generalizing from the vast storehouse of present
human values in order to determine or make recommendations about where they
converge and diverge. The machine would be tasked with identifying and drawing
conclusions from our best natures and our best selves. Yudkowsky acknowledges
that it may be that there are no places where human wishes for moral clarity and
progress coincide.
Not surprisingly, Yudkowsky is interested in the consequences of superintelli-
gence on society, framing the issue around the concept of a coming Singularity,
the moment beyond which we live in a world of smarter-than-human intelligences.
Yudkowsky frames claims about the Singularity in terms of three major schools of
thought. The first is I. J. Good’s concept of an Intelligence Explosion leading to a
superintelligent AI. A second school, advocated by math professor and science
fiction author Vernor Vinge, features an Event Horizon in technological progress
where all bets are off and the future becomes truly unpredictable. A third school,
Yudkowsky says, is Accelerating Change. In this school, he places Ray Kurzweil,
John Smart, and (possibly) Alvin Toffler. The school of Accelerating Change
claims that while we intuitively expect change to be linear, technological change is in fact exponential and therefore, with the right models, predictable.
Yudkowsky has a number of academic publications. His “Levels of Organiza-
tion in General Intelligence” (2007) uses neural complexity and evolutionary psy-
chology to explore the foundations of what he calls Deliberative General
Intelligence. In “Cognitive Biases Potentially Affecting Judgment of Global
Risks” (2008), he uses the framework of cognitive psychology to systematically
compile all human heuristics and biases that could be of value to appraisers of
existential risk. “Artificial Intelligence as a Positive and Negative Factor in Global
Risk” (2008) and “Complex Value Systems in Friendly AI” (2011) discuss the
complex challenges involved in thinking about and building a Friendly AI. “The
Ethics of Artificial Intelligence” (2014), cowritten with Nick Bostrom of the Future
of Humanity Institute, assesses the state of the art in machine learning ethics, AI
safety, and the moral status of AIs and speculates on the moral quandaries involved
in advanced forms of superintelligence.
Yudkowsky was born in Chicago, Illinois, in 1979. In his autobiography, Yud-
kowsky explains how his interest in advanced technologies emerged from a steady
diet of science fiction stories and a borrowed copy of Great Mambo Chicken and
the Transhuman Condition (1990) by Ed Regis, which he read at age eleven. He
remembers being struck by a passage in Vernor Vinge’s cyberpunk novel True
Names (1981) hinting at the Singularity, which led to his self-declaration as a Sin-
gularitarian. Yudkowsky dropped out of high school after completing eighth
grade. For several years, he tried his hand at programming—including attempts to
create a commodities trading program and an AI—before moving to Atlanta in
June 2000 to cofound the Singularity Institute with Brian Atkins and Sabine
Stoeckel. The Singularity Institute attracted the attention of entrepreneurs and
investors Peter Thiel and Jaan Tallinn, and Yudkowsky moved with the institute
to the San Francisco Bay Area in 2005. In fan fiction circles, Yudkowsky is well
known as the author of the hard fantasy novel Harry Potter and the Methods of
Rationality (2010), which subjects wizardry to the scientific method in order to advance the cause of rationality over magic.
Philip L. Frana
See also: Ex Machina; General and Narrow AI; Superintelligence; Technological
Singularity.
Further Reading
Horgan, John. 2016. “AI Visionary Eliezer Yudkowsky on the Singularity, Bayesian
Brains, and Closet Goblins.” Scientific American, March 1, 2016. https://blogs
.scientificamerican.com/cross-check/ai-visionary-eliezer-yudkowsky-on-the
-singularity-bayesian-brains-and-closet-goblins/.
Packer, George. 2011. “No Death, No Taxes.” New Yorker, November 21, 2011. https://
www.newyorker.com/magazine/2011/11/28/no-death-no-taxes.
Yudkowsky, Eliezer. 2007. “Levels of Organization in General Intelligence.” In Artificial
General Intelligence, edited by Ben Goertzel and Cassio Pennachin, 389–501.
New York: Springer.
Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in
Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan
M. Ćirković, 308–45. Oxford, UK: Oxford University Press.
Yudkowsky, Eliezer. 2011a. “Cognitive Biases Potentially Affecting Judgment of Global
Risks.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M.
Ćirković, 91–119. Oxford, UK: Oxford University Press.
Yudkowsky, Eliezer. 2011b. “Complex Value Systems in Friendly AI.” In Artificial Gen-
eral Intelligence: 4th International Conference, AGI 2011, Mountain View, CA,
USA, August 3–6, 2011. New York: Springer.
Bibliography
Brynjolfsson, Erik, and Andrew McAfee. 2016. The Second Machine Age: Work,
Progress, and Prosperity in a Time of Brilliant Technologies. New York:
W. W. Norton.
Calo, Ryan. 2017. “Artificial Intelligence Policy: A Primer and Roadmap.” Univer-
sity of California, Davis Law Review 51: 399–435.
Clarke, Arthur C. 1968. 2001: A Space Odyssey. London: Hutchinson.
Clarke, Neil. 2017. More Human Than Human: Stories of Androids, Robots, and
Manufactured Humanity. San Francisco: Night Shade Books.
Colby, Kenneth M. 1975. Artificial Paranoia: A Computer Simulation of Paranoid
Processes. New York: Pergamon Press.
Conklin, Groff. 1954. Science-Fiction Thinking Machines: Robots, Androids,
Computers. New York: Vanguard Press.
Cope, David. 2001. Virtual Music: Computer Synthesis of Musical Style. Cam-
bridge, MA: MIT Press.
Crevier, Daniel. 1993. AI: The Tumultuous Search for Artificial Intelligence. New
York: Basic Books.
Crowther-Heyck, Hunter. 2005. Herbert A. Simon: The Bounds of Reason in Mod-
ern America. Baltimore: Johns Hopkins University Press.
De Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Contro-
versy Concerning Whether Humanity Should Build Godlike Massively
Intelligent Machines. ETC Publications.
Dennett, Daniel. 1998. Brainchildren: Essays on Designing Minds. Cambridge,
MA: MIT Press.
Dick, Philip K. 1968. Do Androids Dream of Electric Sheep? New York:
Doubleday.
Diebold, John. 1995. Transportation Infostructures: The Development of Intelli-
gent Transportation Systems. Westport, CT: Greenwood.
Dreyfus, Hubert. 1965. Alchemy and Artificial Intelligence. Santa Monica, CA:
RAND Corporation.
Dreyfus, Hubert. 1972. What Computers Can’t Do. New York: Harper & Row.
Dupuy, Jean-Pierre. 2000. The Mechanization of the Mind: On the Origins of
Cognitive Science. Princeton, NJ: Princeton University Press.
Feigenbaum, Edward, and Julian Feldman. 1963. Computers and Thought. New
York: McGraw-Hill.
Ferguson, Andrew G. 2017. The Rise of Big Data Policing: Surveillance, Race,
and the Future of Law Enforcement. New York: New York University
Press.
Foerst, Anne. 2005. God in the Machine: What Robots Teach Us About Humanity
and God. New York: Plume.
Ford, Martin. 2016. Rise of the Robots: Technology and the Threat of a Jobless
Future. New York: Basic Books.
Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People
Building It. Birmingham, UK: Packt Publishing.
Freedman, David H. 1994. Brainmakers: How Scientists Are Moving Beyond
Computers to Create a Rival to the Human Brain. New York: Simon &
Schuster.
Lin, Patrick, Keith Abney, and George A. Bekey. 2012. Robot Ethics: The Ethical
and Social Implications of Robotics. Cambridge, MA: MIT Press.
Lin, Patrick, Ryan Jenkins, and Keith Abney. 2017. Robot Ethics 2.0: New Chal-
lenges in Philosophy, Law, and Society. New York: Oxford University
Press.
Lipson, Hod, and Melba Kurman. 2016. Driverless: Intelligent Cars and the Road
Ahead. Cambridge, MA: MIT Press.
McCarthy, John. 1959. “Programs with Common Sense.” In Mechanisation of
Thought Processes: Proceedings of the Symposium of the National Phys-
ics Laboratory, 77–84. London: Her Majesty’s Stationery Office.
McCarthy, John, and Patrick J. Hayes. 1969. “Some Philosophical Problems from
the Standpoint of Artificial Intelligence.” In Machine Intelligence, vol. 4,
edited by Donald Michie and Bernard Meltzer, 463–502. Edinburgh, UK:
Edinburgh University Press.
McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the
History and Prospects of Artificial Intelligence. San Francisco: W. H.
Freeman.
McCorduck, Pamela. 1990. Aaron’s Code: Meta-Art, Artificial Intelligence, and
the Work of Harold Cohen. New York: W. H. Freeman.
McCulloch, Warren, and Walter Pitts. 1943. “A Logical Calculus of the Ideas
Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5:
115–37.
Menges, Achim, and Sean Ahlquist. 2011. Computational Design Thinking: Com-
putation Design Thinking. Chichester, UK: Wiley.
Mindell, David A. 2015. Our Robots, Ourselves: Robotics and the Myths of Auton-
omy. New York: Viking.
Minsky, Marvin. 1961. “Steps toward Artificial Intelligence.” Proceedings of the
IRE 49, no. 1 (January): 8–30.
Minsky, Marvin. 1982. “Why People Think Computers Can’t.” AI Magazine 3,
no. 4 (Fall): 3–15.
Minsky, Marvin. 1986. The Society of Mind. New York: Simon & Schuster.
Moravec, Hans. 1988. Mind Children: The Future of Robot and Human Intelli-
gence. Cambridge, MA: Harvard University Press.
Moravec, Hans. 1999. Robot: Mere Machine to Transcendent Mind. Oxford, UK:
Oxford University Press.
Newell, Allen. 1990. Unified Theories of Cognition. Cambridge, MA: Harvard
University Press.
Newell, Allen, and Herbert A. Simon. 1961. “Computer Simulation of Human
Thinking.” Science 134, no. 3495 (December 22): 2011–17.
Nilsson, Nils. 2009. The Quest for Artificial Intelligence: A History of Ideas and
Achievements. Cambridge, UK: Cambridge University Press.
Nocks, Lisa. 2008. The Robot: The Life Story of a Technology. Baltimore: Johns
Hopkins University Press.
Norvig, Peter, and Stuart J. Russell. 2020. Artificial Intelligence: A Modern Approach. Fourth Edition. Hoboken, NJ: Pearson.
Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms that Control
Money and Information. Cambridge, MA: Harvard University Press.
Pfeifer, Rolf, and Josh Bongard. 2007. How the Body Shapes the Way We Think. Cambridge, MA: MIT Press.
Pias, Claus. 2016. The Macy Conferences, 1946–1953: The Complete Transactions.
Zürich, Switzerland: Diaphanes.
Pinker, Steven. 1997. How the Mind Works. New York: W. W. Norton.
Reddy, Raj. 1988. “Foundations and Grand Challenges of Artificial Intelligence.”
AI Magazine 9, no. 4: 9–21.
Reese, Byron. 2018. The Fourth Age: Smart Robots, Conscious Computers, and
the Future of Humanity. New York: Atria Books.
Saberhagen, Fred. 1967. Berserker. New York: Ballantine.
Sammon, Paul S. 2017. Future Noir: The Making of Blade Runner. New York: Dey
Street.
Searle, John. 1984. Minds, Brains, and Science. Cambridge, MA: Harvard University Press.
Searle, John. 1990. “Is the Brain a Digital Computer?” Proceedings and Addresses
of the American Philosophical Association 64, no. 3 (November): 21–37.
Selbst, Andrew D., and Solon Barocas. 2018. “The Intuitive Appeal of Explainable
Machines.” Fordham Law Review 87, no. 3: 1085–1139.
Shanahan, Murray. 2015. The Technological Singularity. Cambridge, MA: MIT
Press.
Simon, Herbert A. 1991. Models of My Life. New York: Basic Books.
Singer, Peter W. 2009. Wired for War: The Robotics Revolution and Conflict in the
21st Century. London: Penguin.
Sladek, John. 1980. The Complete Roderick. New York: Overlook Press.
Søraa, Roger A. 2017. “Mechanical Genders: How Do Humans Gender Robots?” Gender, Technology, and Development 21, no. 1–2: 99–115.
Sparrow, Robert. 2007. “Killer Robots.” Journal of Applied Philosophy 24, no. 1: 62–77.
Tambe, Milind, and Eric Rice. 2018. Artificial Intelligence and Social Work. Cam-
bridge, UK: Cambridge University Press.
Tegmark, Max. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence.
New York: Knopf.
Togelius, Julian. 2019. Playing Smart: On Games, Intelligence, and Artificial
Intelligence. Cambridge, MA: MIT Press.
Townsend, Anthony. 2013. Smart Cities: Big Data, Civic Hackers, and the Quest
for a New Utopia. New York: W. W. Norton.
Turing, Alan. 1950. “Computing Machinery and Intelligence.” Mind 59, no. 236
(October): 433–60.
Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and
Less from Each Other. New York: Basic Books.
Turner, Jacob. 2018. Robot Rules: Regulating Artificial Intelligence. London: Pal-
grave Macmillan.
Ulam, Stanislaw. 1958. “Tribute to John von Neumann.” Bulletin of the American
Mathematical Society 64, no. 3, pt. 2 (May): 1–49.
List of Contributors
Rachel Adams
Human Sciences Research Council (South Africa)
Sigrid Adriaenssens
Princeton University
Hamza Ahmad
McGill University (Canada)
Vincent Aleven
Carnegie Mellon University
Antonia Arnaert
McGill University (Canada)
Enrico Beltramini
Notre Dame de Namur University
Jacob Aaron Boss
Indiana University
Mat Brener
Penn State University
Edvard P. G. Bruun
Princeton University
Juliet Burba
University of Minnesota
Angelo Gamba Prata de Carvalho
University of Brasília (Brazil)
Shannon N. Conley
James Madison University
Zoumanan Debe
McGill University (Canada)
Kanta Dihal
University of Cambridge
Yeliz Doker
Bournemouth University (United Kingdom)
Evan Donahue
Duke University
Alban Duka
University of New York Tirana (Albania)
Jason R. Finley
Fontbonne University
Batya Friedman
University of Washington
Fatma Güneri
Lille Catholic University (France)
David J. Gunkel
Northern Illinois University
Andrea L. Guzman
Northern Illinois University
Heiko Hamann
University of Lübeck (Germany)
Mihály Héder
Budapest University of Technology and Economics (Hungary)
Kenneth Holstein
Carnegie Mellon University
Laci Hubbard-Mattix
Washington State University
Ming-Yu Bob Kao
Queen Mary University of London (United Kingdom)
Argyro Karanasiou
University of Greenwich (United Kingdom)
Oliver J. Kim
University of Pittsburgh
Roman Krzanowski
The Pontifical University of John Paul II (Poland)
Victoriya Larchenko
National Technical University “Kharkiv Polytechnic Institute” (Ukraine)
Brenda Leong
Future of Privacy Forum
John Liebert
Private Practice of Psychiatry (USA)
Konstantinos Sakalis
University of Athens (Greece)
David Schafer
Western Connecticut State University
Craig I. Schlenoff
National Institute of Standards and Technology
David M. Schwartz
Penn State University
J. J. Sylvia
Fitchburg State University
Farnaz Tehranchi
Penn State University
Michael Thomas
Cruise LLC
Christopher Tozzi
Rensselaer Polytechnic Institute
Stefka Tzanova
York College
Ikechukwu Ugwu
Bournemouth University (United Kingdom)
Steven Umbrello
University of Torino (Italy)
Elisabeth Van Meer
College of Charleston
Brett F. Woods
American Public University System
Robin L. Zebrowski
Beloit College
Index
Artificial life, 57, 78, 109, 123, 150, 178, 217; artificial quantum life, 279
Artificial neural networks (ANNs), 106, 112, 120, 225, 236, 277; cybernetics, 102–103; machine translation, 217; medicine, 222. See also Neural networks
Ashby, W. Ross, 217–218
Asimov, Isaac, 17–20, 39–40, 254, 262, 283, 305, 317, 324
Assistive technology, 63–70
Atkinson and Shiffrin model, 89
ATTENDING system, 100
Austin, George, 89
Automata, 102, 303
Automated machine learning (AutoML), 20–23
Automated multiphasic health testing (AMHT), 23–24
Automated narrative generation systems, 91, 244
Automated trading software, 3, 86, 114, 122, 150, 350
Automatic film editing, 24–26
Automatic Language Processing Advisory Committee (ALPAC), 216
Autonomous and semiautonomous systems, 26–30
Autonomous capitalism, 258
Autonomous gaming agents, 212
Autonomous robotics, 27–28
Autonomous vehicles. See Driverless cars and trucks
Autonomous weapons systems (AWS), ethics of, 3, 28, 30–32, 35–36, 158, 207–210. See also Lethal autonomous weapons systems
Autonomy and complacency, 32–33
Autoverse, 123
Avatars, 229
Bach, Johann Sebastian, 91, 142, 170
Backpropagation, 120, 180, 214, 222
Backward chaining, 145, 241
Bar-Hillel, Yehoshua, 215
Barnett, G. Octo, 193
Basic AI drives, 257
Bateson, Gregory, 217–218
Battlefield AI and robotics, 34–37
Battlestar Galactica, 158, 306
Bayes, Thomas, 37
Bayesian inference, 37–39; classifiers, 86; cognitive models, 97–98, 190; conditional probability, 193–194; estimation, 100; optimization, 20
BEAM (biology, electronics, aesthetics, and mechanics) robotics, 323–324
Beam search, 246
Beauchamp, James, 142
BECCA (brain-emulating cognition and control architecture), 84
Behavioral economics, 61, 150, 256–257
Behavior-based robotics and AI, 55–57
Behaviorism, 88–89
Beneficial AI, 39–40
Berger-Wolf, Tanya, 41–42
Berkeley, Edmund, 88, 170–172
Bernstein, Ethan, 277
Berserkers, 42–44
Bertillon, Alphonse, 44
Bias, 12–14, 131, 147, 152, 251; gender bias, 157–159; policing, 272–273, 297
Bible concordance and translation, 175, 216
Bina48 robot, 122, 305
Biometric privacy and security, 44–47
Biometric technology, 12, 47–48
Biomorphic robots, 323–324
BIONET, 231
BioRC (Biomimetic Real-Time Cortex) Project, 121
Bishop, Mark, 93
Blackboard architectural model and system architectures, 145, 282
Blade Runner, 48–50, 158, 262
Blockchain AI, 172, 258
Blois, Marsden, 175
Blue Brain Project, 50–51, 315, 348
BlueGene supercomputer, 51, 315
Bobrow, Daniel, 225–226
Boden, Margaret, 90, 294
Bongard, Josh, 137
Boole, George, 39, 313
Boolean algebra, 171, 218, 313–314
Booth, Andrew D., 215
Boring, Edwin, 312
Bostrom, Nick, 52–55, 257, 308–310, 320, 350
Boucher, Anthony, 305
Boulez, Pierre, 169
Boulos, George, 342
Bounded rationality, 292
Brain-computer interfacing (BCI), 174, 238, 348
BrainGate (Utah Array) experiment, 342
Task structures (TS), 202
Taylor, Charles, 253
Teaching, 75, 189–192, 294
Technological Singularity, 173–174, 205–206, 302, 318–321
Teddy, Peter, 342
Tegmark, Max, 54, 123, 235, 240
TeKnowledge company, 240–241
Teleautomaton, 337
Telekino, 338
Telenoid R1 robot, 196–197
Television shows, 70, 158, 262–263, 306, 325
TensorFlow, 213
Terasem Movement, 121–122, 305
Terminator, The, 44, 108, 207, 262, 321–323, 341
Termites, 116, 125
Tesla, 130, 237–239; Autopilot, 3, 131–132, 239; Tesla Grohmann Automation, 239
Theater, 197–198, 286–287
Theology, 151–153, 303–305, 307
Three Laws of Robotics, 18, 39–40, 208, 254, 283, 324
Tilden, Mark, 323–325
Tillich, Paul, 151–152
Toma, Peter, 216
Topological quantum computing, 106–107, 109
Torres y Quevedo, Leonardo, 338
Toyota Partner Robots, 65–66
Toys, 57, 65, 234, 294, 323–325, 331–332
Traffic, 125, 132, 276, 295; intelligent transportation, 185–188; traffic optimization algorithms, 130; Traffic Simulator, 185. See also Air traffic control
Transhumanism, 54, 236, 304–305, 320, 343–344
Transhumanist Party, 305
Transparency, 147–148, 229, 272–273, 296
Trolley Problem, 325–327
Tronto, Joan, 69
Trustworthy AI, 13
Tsugawa, Sadayuki, 130
Tsuji, Saburo, 197
Turing, Alan, 37, 55, 89, 293, 327–328, 329–331. See also Turing Test
Turing Test, 43, 84, 91, 142–143, 261, 322, 329–331, 335; Coffee Test, 161; Comparative Moral Turing Test (cMTT), 233; Ethical Turing Test, 233; Flat Pack Furniture Test, 161; Loebner Prize, 226, 343; Moral Turing Test (MTT), 233–234; Total Turing Test, 195, 233
Turkle, Sherry, 69–70, 331–334
Tweney, Dylan, 301
2001: A Space Odyssey, 159, 224, 334–336
Unified theories of cognition (UTC), 191, 249
United Nations Convention on Certain Conventional Weapons (CCW), 63, 209
Universal basic income (UBI), 154, 236, 264, 346
Universal quantum Turing machine, 277–278
Unmanned Aircraft System (UAS), 339
Unmanned combat aerial vehicles (UCAV), 207
Unmanned ground and aerial vehicles, 34, 184, 337–340
Unmanned underwater vehicles (UUVs), 34
Unsupervised learning, 38, 103, 179, 212, 278–279
U.S. Air Force Pilotless Aircraft Branch, 339
User-interface challenges, 29
van Melle, William, 241
Van Wynsberghe, Aimee, 69
Vazirani, Umesh, 277
Vernie, Gauthier, 92
Veruggio, Gianmarco, 283–284
Video games, 44, 138, 180, 238
Vienna Convention on Road Traffic, 132
Vinge, Vernor, 173, 205, 318, 350
Virtual Cinematographer, 25
Virtual personal assistants, 227
Virtual reality (VR), 77, 122–124, 140, 189, 236, 305
Vision Zero project, 295
Visual Prolog, 146
Voice recognition, 48, 228, 245
Voigt-Kampff test, 49–50
von Foerster, Heinz, 217–218
von Neumann, John, 43, 55, 205, 217, 219, 257, 318
Wallach, Wendell, 284
Ware, Andrew, 265
Warner, Homer, Jr., 38–39, 194