The Everyday Life of an Algorithm
Daniel Neyland
© The Editor(s) (if applicable) and The Author(s) 2019. This book is an open access publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution
4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits
use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons license and indicate if changes were made.
The images or other third party material in this book are included in the book’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material
is not included in the book’s Creative Commons license and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain
permission directly from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are
exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and
information in this book are believed to be true and accurate at the date of publication.
Neither the publisher nor the authors or the editors give a warranty, express or implied,
with respect to the material contained herein or for any errors or omissions that may have
been made. The publisher remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.
This Palgrave Pivot imprint is published by the registered company Springer Nature
Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Acknowledgements
Thanks to the algorithms who took part in this book. You know who
you are. And you know who I am too. I am the human-shaped object.
Thanks to the audiences who have listened, watched and become
enwrapped by the algorithms. Your comments have been noted. Thanks
to Inga Kroener and Patrick Murphy for their work. Thanks to Sarah,
and to Thomas and George who have been learning about algorithms at
school. And thanks to Goldsmiths for being the least algorithmic insti-
tution left in Britain. The research that led to this book was funded by
European research funding: an FP7 grant (no. 261653) and the ERC project MISTS (no. 313173).
CHAPTER 1
Introduction: Everyday Life and the Algorithm
Opening
An algorithm is conventionally defined as ‘a process or set of rules to be
followed in calculations or other problem-solving operations, especially by
a computer'.1 In this sense, an algorithm, strictly speaking, is nothing more than a set of ordered, step-by-step instructions.
I will begin by working through some of the recent academic
literature on algorithms. I will then pursue the everyday as an important
foreground for the subsequent story of algorithms. Finally, I will set out
the structure of the rest of this book.
Algorithmic Discontent
One obvious starting point for an enquiry into algorithms is to look at
an algorithm. And here, despite the apparent drama of algorithmic opacity, is an algorithm (Fig. 1.1).
This is taken from a project that sought to develop an algorith-
mic surveillance system for airport and train station security (and is
introduced in more detail along with the airport and train station and
their peculiar characteristics in Chapter 2). The algorithm is designed
as a set of ordered step-by-step instructions for the detection of aban-
doned luggage. It is similar in some respects to the instructions for my
Everyday
Some existing academic work on algorithms engages with ‘algorithmic
life’ (Amoore and Piotukh 2015). But this tends to mean the life
of humans as seen (or governed) through algorithms. If we want to
make sense of algorithms, we need to engage with their everyday life.
However, rather than continually repeat the importance of ‘the everyday’
as if it is a concept that can somehow address all concerns with algo-
rithms or is in itself available as a neat context within which things will
make sense, I suggest we need to take seriously what we might
mean by the ‘everyday life’ of an algorithm. If we want to grasp a means
to engage with the entanglements of a set of ordered instructions like
our abandoned luggage algorithm, then we need to do some work to set
out our terms of engagement.
The everyday has been a focal point for sociological analysis for sev-
eral decades. Goffman’s (1959) pioneering work on the dramaturgical
staging of everyday life provides serious consideration of the behaviour,
sanctions, decorum, controls and failures that characterise an array of
first, it has been made by humans; second, it substitutes for the actions
of people and is a delegate that permanently occupies the position of a
human; and third, it shapes human action by prescribing back what sort of
people should pass through the door. (1992: 160)
Prescribing back is the means through which the door closer acts on
the human, establishing the proper boundaries for walking into rooms
and the parameters for what counts as reasonably human from the
groom’s perspective (someone with a certain amount of strength, abil-
ity to move and so on). Prescribing acts on everyday life by establish-
ing an engineered morality of what ought to count as reasonable in the
human encounters met by the groom. This makes sense as a premise: to
understand the abandoned luggage algorithm’s moves in shaping human
encounters, we might want to know something of how it was made by
humans, how it substitutes for the actions of humans and what it pre-
scribes back onto the human (and these will be given consideration in
Chapter 3). But as Woolgar and Neyland (2013) caution, the certainty
and stability of such prescribing warrants careful scrutiny. Prescribing
might, on the one hand, form an engineer’s aspiration (in which case its
accomplishment requires scrutiny) or, on the other hand, might be an
ongoing basis for action, with humans, doors and grooms continuously
involved in working through the possibilities for action, with the break-
down of the groom throwing open the possibility of further actions. In
this second sense, prescribing is never more than contingent (in which
case its accomplishment also requires scrutiny!).
Collectively these ideas seem to encourage the adoption of three kinds
of analytical sensibility7 for studying the everyday life of an algorithm.
First, how do algorithms participate in the everyday? Second, how do
algorithms compose the everyday? Third, how (to what extent, through
what means) does the algorithmic become the everyday? These will be
pursued in the subsequent chapters, to which I now turn.
Notes
1. See Concise Oxford Dictionary (1999).
2. Ziewitz (2016) and Neyland (2016).
3. https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives.
4. http://uk.businessinsider.com/harvard-mathematician-reveals-algorithms-make-justice-system-baised-worse-black-people-crime-police-2017-6?r=US&IR=T.
5. https://www.dezeen.com/2017/06/01/algorithm-seven-million-different-jars-nutella-packaging-design/.
6. http://economictimes.indiatimes.com/news/science/new-algorithm-to-teach-robots-human-etiquettes/articleshow/59175965.cms.
7. An analytic sensibility is a way of thinking about and organising a determined scepticism towards the ethnographic material that a researcher is presented with during the course of their fieldwork. It is not as strong as an instruction (there is no ethnographic algorithm here), nor is it entirely without purpose. An analytic sensibility can set a direction which can subsequently be called to sceptical account as the study develops.
References
Adkins, L., & Lury, C. (2012). Measure and Value. London: Wiley-Blackwell.
Amoore, L. (2011). Data Derivatives: On the Emergence of a Security Risk
Calculus for Our Times. Theory, Culture and Society, 28(6), 24–43.
Amoore, L., & Piotukh, V. (Eds.). (2015). Algorithmic Life: Calculative Devices
in the Age of Big Data. London: Routledge.
Austin, J. L. (1962). How to Do Things with Words. Oxford: Clarendon Press.
Beer, D. (2009). Power Through the Algorithm? Participatory Web Cultures and
the Technological Unconscious. New Media & Society, 11(6), 985–1002.
Braudel, F. (1979). The Structures of Everyday Life: Civilization and Capitalism
15th–18th Century (Vol. 1). New York, NY: Harper and Row.
Callon, M. (1998). The Laws of the Market. Oxford: Blackwell.
Cochoy, F. (1998). Another Discipline for the Market Economy: Marketing as a
Performative Knowledge and Know-How for Capitalism. In M. Callon (Ed.),
The Laws of the Market (pp. 194–221). Oxford: Blackwell.
Concise Oxford Dictionary. (1999). 10th ed. Oxford: Oxford University Press.
Corsín Jiménez, A., & Estalella, A. (2016). Ethnography: A Prototype. Ethnos,
82(5), 1–16.
Crawford, K. (2016). Can an Algorithm Be Agonistic? Ten Scenes from Life in
Calculated Publics. Science, Technology and Human Values, 41(1), 77–92.
MacKenzie, D. (2009). Making Things the Same: Gases, Emission Rights and
the Politics of Carbon Markets. Accounting, Organisations and Society, 34,
440–455.
MacKenzie, D., Muniesa, F., & Siu, L. (Eds.). (2007). Do Economists Make
Markets? On the Performativity of Economics. Oxford: Princeton University
Press.
Marres, N. (2013). Why Political Ontology Must Be Experimentalized: On
Eco-Show Homes as Devices of Participation. Social Studies of Science, 43(3),
417–443.
Michael, M. (2006). Technoscience and Everyday Life—The Complex Simplicities
of the Mundane. Berkshire: Open University Press.
Mitchell, T. (2002). Rule of Experts. Berkeley: University of California Press.
Mol, A. (2006, December 7–10). Bodies in Theory: Physical Actors of Various
Kinds. Paper Presented at The Stuff of Politics Conference, Worcester
College, Oxford.
Muniesa, F., & Callon, M. (2007). Economic Experiments and the Construction
of Markets. In D, MacKenzie, F. Muniesa, & L. Siu (Eds.), Do Economists
Make Markets? (pp. 163–188). Oxford: Princeton University Press.
Muniesa, F., Millo, Y., & Callon, M. (2007). Market Devices. Oxford: Wiley-Blackwell.
Muniesa, F., et al. (2017). Capitalization: A Cultural Guide. Paris, France:
Presses des Mines.
Neyland, D. (2016). Bearing Account-able Witness to the Ethical Algorithmic
System. Science, Technology and Human Values, 41(1), 50–76.
Neyland, D., & Möllers, N. (2016). Algorithmic IF … THEN Rules and the
Conditions and Consequences of Power. Information, Communication and
Society, 20(1), 45–62.
Pasquale, F. (2015, March). The Algorithmic Self. The Hedgehog Review.
Institute for Advanced Studies in Culture, University of Virginia.
Pollner, M. (1974). Mundane Reason. Cambridge: Cambridge University Press.
Porter, T. (1995). Trust in Numbers: The Pursuit of Objectivity in Science and
Public Life. Princeton, NJ: Princeton University Press.
Schuppli, S. (2014). Deadly Algorithms: Can Legal Codes Hold Software Accountable for Code That Kills? Available from: https://www.radicalphilosophy.com/commentary/deadly-algorithms.
Slavin, K. (2011). How Algorithms Shape Our World. Available from: http://
www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world.html.
Spring, T. (2011). How Google, Facebook and Amazon Run the Internet.
Available from: http://www.pcadvisor.co.uk/features/internet/3304956/
how-google-facebook-andamazonrun-the-internet/.
Stalder, F., & Mayer, C. (2009). The Second Index: Search Engines, Personalization and Surveillance. In K. Becker & F. Stalder (Eds.), Deep Search: The Politics of Search Beyond Google (pp. 98–115). Piscataway, NJ: Transaction Publishers.
Strathern, M. (2002). Abstraction and Decontextualisation: An Anthropological
Comment. In S. Woolgar (Ed.), Virtual Society? Technology, Cyberbole, Reality
(pp. 302–313). Oxford: Oxford University Press.
Woolgar, S., & Neyland, D. (2013). Mundane Governance. Oxford: Oxford
University Press.
Ziewitz, M. (2016). Governing Algorithms: Myth, Mess, and Methods. Science,
Technology and Human Values, 41(1), 3–16.
CHAPTER 2
Experimentation with a Probable Human-Shaped Object
Abstract This chapter sets out the algorithms that will form the focus
for this book and their human and non-human associations. The chapter
focuses on one particular algorithmic system developed for public trans-
port security and explores the ways in which the system provided a basis
for experimenting with what computer scientists termed human-shaped
objects. In contrast to much of the social science literature on algorithms
that suggests the algorithm itself is more or less fixed or inscrutable, this
chapter will instead set out one basis for ethnographically studying the algo-
rithm up-close and in detail. Placing algorithms under scrutiny opens up
the opportunity for studying their instability and the ceaseless experimen-
tation to which they are subjected. An important basis for this experimen-
tation, the chapter will suggest, is elegance. The chapter will suggest that
elegance opens up a distinct way to conceive of the experimental prospects
of algorithms under development and their ways of composing humans.
Opening
How can we get close to the everyday life of an algorithm? Building
on the Introduction to this book, how can we make sense of the ways
algorithms participate in everyday life, compose the everyday or become the everyday?
What Is Experimentation?
The tradition of studying the experiment in science and technol-
ogy studies (STS) has been focused around the space of the laboratory
(Latour and Woolgar 1979), forms of expertise (Collins and Evans
2007) and devices (Latour 1987) that render the laboratory a cen-
tre of, for example, calculation. The laboratory becomes the space into
which the outside world is drawn in order to approximate its conditions
for an experiment. Or alternatively, the conditions of the laboratory are
extended into the world beyond the laboratory in order to take the labo-
ratory experiment to the world (Latour 1993). The experiment as such,
then, becomes a replicable phenomenon through which some feature of
the world is proclaimed. And we see some parallels drawn with economic
experiments that similarly seek to draw up variables to manage and con-
trol, a world to be drawn into the economic laboratory or a set of condi-
tions to be extended beyond the laboratory (Muniesa and Callon 2007).
The economic experiment, like the laboratory experiment, is as much
about demonstration, as it is about discovery (Guala 2008).
In Chapter 5, we will see that in the later stages of the everyday life
of our algorithm, these concerns for control and demonstration came to
the fore—particularly when research funders wanted to see results. But
for now, our algorithm—the abandoned luggage algorithm from the
Introduction—sits somewhat meekly and unknowing in an office. It is
waiting, but it does not know for what it waits: not for an experiment
in the closely controlled sense of the term, not for a pristine laboratory
space and not for a set of controlled variables, even human or luggage
variables, through which it can demonstrate its capacity to grasp the
world around it. To begin with it awaits experimentation.
Experimentation sits somewhat apart from experiments. In place
of controls or neatly defined space come proposals, ideas, efforts to try
things and see what happens. Experimentation can be as much a part of
qualitative social science as it can be a part of algorithmic computer sci-
ence. In the social sciences, experimentation has been used as an impetus
by Marres (2013) to experimentalise political ontology and by Johansson
and Metzger (2016) to experimentalise the organisation of objects. What
these works point towards is the fundamental focus for experimentation:
that the nature of just about anything can be rendered at stake within
the experimental realm. Scholars have also begun to conceive of exper-
imentalising economic phenomena (Wherry 2014; Muniesa 2016a, b).
This is interesting for drawing our attention towards the ways in which
what might otherwise be quite controlled, laboratory-like settings can
be opened up for new lines of thought through experimentation. These
works draw on a patchy history of social science experimentation that
has tended to raise insights and ethical concerns in equal measure. One
historical route for the development of such experimentation has been
Garfinkel’s (1963, 1967) breach experiments. Here, the aim was to dis-
rupt—or breach—taken-for-granted features of everyday life in every-
day settings in order to make those features available for analysis. But
unlike the laboratory tradition, the breaches for Garfinkel were broadly
experimental in the sense of providing some preliminary trials and find-
ings to be further worked on. They were heuristic devices, aiding the
The basis for initial experimentation within the project was a series of
meetings between the project participants. Although there were already
some expectations set in place by the funding proposal and its success,
the means to achieve these expectations and their precise configuration had not yet been settled. I now found myself sat in these meet-
ings as an ethnographer, watching computer scientists talking to each
other mostly about system architectures, media proxies, the flow of digi-
tal data—but not so much about algorithms.
Attaining a position as an ethnographer on this project had been the
result of some pre-project negotiation. Following the project coordi-
nator’s request that I carry out an ethical review of the project under
development, I had suggested that it might be interesting, perhaps vital,
to carry out an assessment of the system under development. Drawing
on recent work on ethics and design and ethics in practice,2 I suggested
that what got to count as the ethics of an algorithm might not be easy
discuss a point of detail or refine precisely what it was that had just been
proposed in the meeting and what this might look like for the system.
A typical architecture from one of the later meetings is shown in Fig. 2.1.
By this point in the project, it had been agreed that the existing sur-
veillance cameras in transport hubs operated by SkyPort and StateTrack
would feed into the system (these are represented by the black cam-
era-shaped objects on the left). After much discussion, it had been
agreed that the digital data from these cameras would need to feed into
a media proxy. Both sets of computer scientists were disappointed to
learn that the transport hubs to be included in the project had a range
of equipment. Some cameras were old, some new, some high definition
and some not, and each came with a variable frame rate (the number
of images a second that would flow into the algorithmic system). The
media proxy was required to smooth out the inconsistencies in this flow
of data in order that the next component in the system architecture
would then be able to read the data. Inconsistent or messy data would
prove troublesome throughout the project, but in these meetings, it was
assumed that the media proxy would work as anticipated.
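None of the project's code is reproduced in this book, but a minimal sketch may help to illustrate the kind of smoothing work expected of a media proxy. The sketch below is illustrative only; the frame structure, camera identifiers and target frame rate are assumptions, not details from the project.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    camera_id: str    # which surveillance camera produced the frame
    timestamp: float  # seconds since the start of the stream

def normalise_stream(frames: List[Frame], target_fps: float = 12.0) -> List[Frame]:
    """Resample an irregular camera stream to a fixed frame rate.

    Old and new cameras deliver frames at different, variable rates; a proxy
    of this kind hands downstream components one frame per 1/target_fps
    seconds, repeating the latest frame when a camera runs slow and dropping
    frames when it runs fast.
    """
    if not frames:
        return []
    frames = sorted(frames, key=lambda f: f.timestamp)
    step = 1.0 / target_fps
    output, clock, i = [], frames[0].timestamp, 0
    while clock <= frames[-1].timestamp:
        # advance to the most recent frame at or before this output tick
        while i + 1 < len(frames) and frames[i + 1].timestamp <= clock:
            i += 1
        output.append(Frame(frames[i].camera_id, clock))
        clock += step
    return output
```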
After some discussion, it was agreed that the media proxy would
deliver its pristine data to two further system components. These com-
prised the Event Detection system and the Route Reconstruction
system. The Event Detection system was where the algorithms (includ-
ing the abandoned luggage algorithm of the Introduction to this book)
would sit. The idea was that these algorithms would sift through tera-
bytes of digital video data and use IF-THEN rules to select out those
events that security operatives in a train station or airport would need
to see. In discussions between the computer scientists and transport
firms, it was agreed that abandoned luggage, people moving the wrong
way (counter-flow) and people moving into forbidden areas (such as the
train track in train stations or closed offices in airports) would be a use-
ful basis for experimentation in the project. These would later become
the basis for algorithmically experimenting with the basic idea that rel-
evant images could be detected within flows of digital video data. For
now, it was still assumed that algorithms could simply be dropped into
this Event Detection component of the architecture. Relevant images
would then be passed to the User Interface (UI) with all data deemed
irrelevant passed to the Privacy Enhancement System. This was put for-
ward as a key means to achieve the ethical aims of the project. It was
suggested that only a small percentage of video data was relevant within
an airport or train station, that only a small percentage of data needed
to be seen and that the rest of the data could be stored briefly in the
Privacy Enhancement System before being securely deleted. It later tran-
spired that detecting relevant images, getting the algorithms to work and
securely deleting data were all major technical challenges. But for now, in
these early project meetings, it was assumed that the system would work
as expected.
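Again as a sketch rather than the project's actual code, the routing of detected events towards the User Interface and of everything else towards the Privacy Enhancement System might be caricatured as follows; the event types mirror those agreed in the meetings, while the class and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

RELEVANT_KINDS = {"abandoned_luggage", "counter_flow", "forbidden_area"}

@dataclass
class Event:
    kind: str        # classification produced by the Event Detection algorithms
    camera_id: str
    timestamp: float

@dataclass
class Router:
    user_interface: List[Event] = field(default_factory=list)  # the small relevant fraction, shown to operatives
    privacy_store: List[Event] = field(default_factory=list)   # held briefly, then securely deleted

    def route(self, event: Event) -> None:
        # Only data deemed relevant reaches operatives; the rest is routed
        # to the Privacy Enhancement System pending deletion.
        if event.kind in RELEVANT_KINDS:
            self.user_interface.append(event)
        else:
            self.privacy_store.append(event)
```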
The Route Reconstruction component was a later addition. This fol-
lowed on from discussions between the transport firms and the computer
scientists, in which it became clear that having an image of, for example,
an abandoned item of luggage on its own was not particularly useful in
security terms. Transport operatives wanted to know who had left the
luggage, where they had come from and where they went next. The the-
ory behind the Route Reconstruction system (although see Chapter 3
for an analysis of this) was that it would be possible to use probabilistic
means to trace the history around an event detected by an algorithm.
The UI would then give operatives the option to see, for example, how
an item of luggage had been abandoned, by whom, with whom they
were walking and so on. This would mean that the Privacy Enhancement
System would need to store data for as long as these reconstructions
were required. It was assumed that most would be performed within
The algorithms and their associated code for this project built on the
decade of work carried out, respectively, by University 1 and University
2. As these long histories of development had been carried out by var-
ious colleagues within these Universities over time, tinkering with the
algorithms was not straightforward. When computer scientists initially
talked of ‘dropping in’ algorithms into the system, this was partly in the
hope of avoiding many of the difficulties of tinkering, experimenting
and tinkering again with algorithms that were only partially known to
the computer scientists. As we saw in the Introduction with the aban-
doned luggage algorithm, the algorithm establishes a set of rules which
are designed to contribute to demarcating relevant from irrelevant video
data. In this way, such rules could be noted as a means to discern peo-
ple and things that could be ignored and people and things that might
need further scrutiny. If such a focus could hold together, the algorithms
could indeed be dropped in. However, in practice, what constituted a
human-shaped object was a matter of ongoing work.
Let's return to the subject of the Introduction to explore experimentation with human-shaped objects. As a reminder, the IF-THEN rules for the abandoned luggage algorithm are shown in Fig. 2.2.
In this project, various more or less concise ways to classify were drawn
up and considered or abandoned either because they would require too
much processing power (probably quite persuasive but not concise) or
were too inaccurate (quite concise, but produced results that were not at
all persuasive). At particular moments, (not very serious) consideration
was even given to changing the everyday life into which the algorithms
would enter in order to make classification a more straightforward mat-
ter. For example, computer scientists joked about changing the airport
architecture to suit the system, including putting in higher ceilings, con-
sistent lighting and flooring, and narrow spaces to channel the flow of
people. These were a joke in the sense that they could never be accom-
modated within the project budget. Elegance had practical and financial
constraints.
A first move in classifying objects was to utilise a standard practice
in video analytics: background subtraction. This method for identifying
moving objects was somewhat time-consuming and processor intensive,
and so not particularly elegant. But these efforts could be ‘front-loaded’
prior to any active work completed by the system. ‘Front-loading’ in this
instance meant that a great deal of work would be done to produce an
extensive map of the fixed attributes of the setting (airport or train sta-
tion) prior to attempts at classification work. Mapping the fixed attrib-
utes would not then need to be repeated unless changes were made to
the setting (such changes included in this project a change to a shop-
front and a change to the layout of the airport security entry point).
Producing the map provided a basis to inform the algorithmic system
what to ignore, helping to demarcate relevance and irrelevance in an ini-
tial manner. Fixed attributes were thus nominally collated as non-suspi-
cious and irrelevant in ways that people and luggage, for example, could
not be, as these latter objects could not become part of the map of
attributes (the maps were produced based on an empty airport and train
station). Having a fixed map then formed the background from which
other entities could be noted. Any thing that the system detected that
was not part of the map would be given an initial putative identity as
requiring further classification.
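A toy version of this front-loaded background subtraction, assuming numpy and greyscale frames, might look like the sketch below; the threshold value and array sizes are invented for the example.

```python
import numpy as np

def putative_objects(frame: np.ndarray,
                     empty_scene: np.ndarray,
                     threshold: int = 30) -> np.ndarray:
    """Flag pixels that depart from the front-loaded map of the empty setting.

    empty_scene stands in for the map of fixed attributes produced while the
    airport or train station was empty; anything that differs from it is given
    the putative status of requiring further classification.
    """
    difference = np.abs(frame.astype(np.int16) - empty_scene.astype(np.int16))
    return difference > threshold  # boolean mask of non-background pixels

# A 4x4 toy example: one bright region stands out against the fixed map and
# would be passed on to the classification step.
empty = np.zeros((4, 4), dtype=np.uint8)
current = empty.copy()
current[1:3, 1:3] = 200
print(putative_objects(current, empty).astype(int))
```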
The basis for demarcating potentially relevant objects depended to
some degree, then, on computer scientists and their understanding of
spaces such as airports, maps that might be programmed to ignore for
a time certain classes of objects as fixed attributes, and classification sys-
tems that might then also—if successful—provide a hesitant basis for
Fig. 2.3 An anonymous human-shaped bounding box
Each bounding box recorded an object's dimensions, its location (and the camera it appeared on) and its direction and velocity. For the Event
Detection algorithms of moving into a forbidden space or moving in
the wrong direction (e.g. going back through airport security or going
the wrong way through an entry or exit door in a rush hour train sta-
tion), these bounding boxes were a concise form of identification. They
enabled human-shaped objects to be selected with what might be a
reasonable accuracy and consistency and without using too much pro-
cessing effort. They were elegant, even if visually they looked a bit ugly
and did little to match the actual shape of a human beyond their basic
dimensions.
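For the two alert types just described, the checks on a tracked bounding box reduce to fairly simple geometry. A sketch under assumed names and coordinate conventions (x increasing in the permitted direction of travel, a forbidden area given as a rectangle):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Box:
    x: float   # top-left corner of the bounding box
    y: float
    w: float   # width and height
    h: float

def centre(b: Box) -> Tuple[float, float]:
    return (b.x + b.w / 2, b.y + b.h / 2)

def moving_wrong_way(track: List[Box], allowed_dx: float = 1.0) -> bool:
    """Counter-flow check: does the box drift against the permitted x-direction?"""
    (x_start, _), (x_end, _) = centre(track[0]), centre(track[-1])
    return (x_end - x_start) * allowed_dx < 0

def in_forbidden_area(track: List[Box],
                      area: Tuple[float, float, float, float]) -> bool:
    """Forbidden-space check: does any box centre fall inside the area rectangle?"""
    ax, ay, aw, ah = area
    return any(ax <= cx <= ax + aw and ay <= cy <= ay + ah
               for cx, cy in map(centre, track))
```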
However, for abandoned luggage, something slightly different was
required. In experimentation, in order to successfully and consistently
demarcate human-shaped objects and luggage-shaped objects and their
separation, a more precisely delimited boundary needed to be drawn
around the putative objects. This required the creation of a pixel mask
that enabled the algorithmic system to make a more precise sense of
the human- and luggage-shaped objects, when and if they separated
(Fig. 2.4).
This more closely cropped means to parameterise and classify objects could then be used to issue alerts within the system. IF a human-shaped object split from a luggage-shaped object, IF the human-shaped object continued to move, IF the luggage-shaped object remained stationary, and IF the two objects stayed separated for a sustained period, THEN an alert could be issued.
Fig. 2.4 A close-cropped pixelated parameter for human- and luggage-shaped object
Fig. 2.5 An item of abandoned luggage
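The abandoned luggage rule itself can be sketched in the same spirit; the distance and time thresholds below are invented placeholders, not the values used in the project.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str     # "human" or "luggage", as classified from the pixel mask
    x: float
    y: float
    speed: float  # recent movement in pixels per second

def abandoned_luggage_alert(human: TrackedObject,
                            luggage: TrackedObject,
                            seconds_apart: float,
                            min_distance: float = 50.0,
                            min_seconds: float = 30.0) -> bool:
    """IF the pair have split, the human keeps moving, the luggage stays still
    and the separation persists, THEN an alert is raised for operatives."""
    distance = ((human.x - luggage.x) ** 2 + (human.y - luggage.y) ** 2) ** 0.5
    return (distance > min_distance
            and human.speed > 0.0
            and luggage.speed == 0.0
            and seconds_apart >= min_seconds)
```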
Our algorithm had not succeeded entirely in meeting all the goals of the project yet, it had
struggled to initially produce a set of results that could elegantly capture
sufficient information to accurately and consistently identify abandoned
luggage and had to be changed (to a pixel-based process), and it was reli-
ant on digital maps and background subtraction, but it had nonetheless
started to get into the action.
Conclusion
In this chapter, I have started to build a sense of the everyday life in
which our algorithm was becoming a participant. In experimental spaces,
our algorithm was starting to make a particular sense of what it means to
be human and what it means to be luggage. The IF-THEN rules and the
development of associated software/code, the building of a system archi-
tecture and set of components provided the grounds for rendering things
like humans and luggage at stake. To return to Pollner’s (1974) work
(set out in the Introduction), the algorithm was starting to set out the
basis for delimiting everyday life. The algorithm was beginning to insist
that the stream of digital video data flowing through the system acted on
behalf of an account as either luggage-shaped or human-shaped or back-
ground to be ignored. In addressing the question how do algorithms
participate in everyday life, we have started to see in this chapter that
they participate through technical and experimental means. Tinkering
with ways to frame the human-shaped object, decide on what might
count as elegant through concision, satisfaction and persuasion, are all
important ways to answer this question. But we can also see that this
participation is hesitant. The bounding box is quite elegant for two of
the system’s algorithmic processes (moving the wrong way and moving
into a forbidden area) but not particularly persuasive or satisfactory for
its third process (identifying abandoned luggage). And thus far, all we
have seen is some initial experimentation, mostly involving the human-
shaped objects of project participants. This experimentation is yet to fully
escape the protected conditions of experimentation. As we will see in
Chapters 4 and 5, moving into real time and real space, many of these
issues in relation to algorithmic participation in everyday life have to be
reopened.
It is in subsequent chapters that we will start to look into how the
algorithm becomes the everyday and how algorithms can even compose
the everyday. For now, these questions have been expressed in limited
ways, for example when the computer scientists joked about how they
would like to change the airport architecture to match the needs of the
system. In subsequent chapters, as questions continue regarding the
ability of the algorithm to effectively participate in everyday life, these
questions resurface. In the next chapter, we will look at how the algo-
rithmic system could become accountable. This will pick up on themes
mentioned in the Introduction on transparency and accountability and
will explore in greater detail the means through which the everyday life
of the algorithm could be made at stake. As the project upon which this
book is based was funded in order to produce a more ethical algorithmic
system, these questions of accountability were vital.
Notes
1. The names have been made anonymous.
2. Design and ethics have a recent history (Buscher et al. 2013; Introna and
Nissenbaum 2000; Suchman 2011). However, Anderson and Sharrock
(2013) warn against assuming that design decisions set in place an algo-
rithmic ethic.
References
Anderson, B., & Sharrock, W. (2013). Ethical Algorithms. Available from:
http://www.sharrockandanderson.co.uk/wp-content/uploads/2013/03/
Ethical-Algorithms.pdf.
Beer, D. (2009). Power Through the Algorithm? Participatory Web Cultures and
the Technological Unconscious. New Media & Society, 11(6), 985–1002.
Bennett, C. (2005). What Happens When You Book an Airline Ticket
(Revisited): The Collection and Processing of Passenger Data Post 9/11.
In M. Salter & E. Zureik (Eds.), Global Surveillance and Policing (pp. 113–
138). Devon: Willan.
Bowker, G., & Star, S. L. (2000). Sorting Things Out. Cambridge, MA: MIT
Press.
Buscher, M., Wood, L., & Perng, S.-Y. (2013, May). Privacy, Security, Liberty.
In T. Comes, F. Fiedrich, S. Fortier, & J. Geldermann (Eds.), Proceedings
of the 10th International ISCRAM Conference, Baden-Baden, Germany.
Available from: http://www.iscramlive.org/portal/iscram2013proceedings.
Accessed October 24, 2014.
Collins, H., & Evans, R. (2007). Rethinking Expertise. Chicago, IL: University
of Chicago Press.
Norris, C., & Armstrong, G. (1999). The Maximum Surveillance Society: The
Rise of CCTV. London: Berg.
Pollner, M. (1974). Mundane Reason. Cambridge: Cambridge University Press.
Suchman, L. (2011). Subject Objects. Feminist Theory, 12(2), 119–145.
Taylor, E. (2010). I Spy with My Little Eye. Sociological Review, 58(3), 381–405.
Van der Ploeg, I. (2003). Biometrics and Privacy: A Note on the Politics of
Theorizing Technology. Information, Communication, Society, 6(1), 85–104.
Wherry, F. (2014). Analyzing the Culture of Markets. Theory and Society, 43(3–
4), 421–436.
Woolgar, S., & Neyland, D. (2013). Mundane Governance. Oxford: Oxford
University Press.
CHAPTER 3
Accountability and the Algorithm
Opening
Simply being able to see an algorithm in some ways displaces aspects
of the drama that I noted in the Introduction to this book. If one of
the major concerns with algorithms is their opacity, then being able to see an algorithm might appear to go some way towards addressing that concern.
Accountability
Within the project we are considering, the ethical aims put forward from
the original bid onwards were to reduce the amount of visual data made
visible within a video surveillance system, to reduce the amount of data
that gets stored and to do so without developing new algorithms. These
were positioned as a basis on which my ethnographic work could hold
the system to account. They were also presented as a potential means
to address popular concerns regarding questions of algorithmic open-
ness and transparency, at least theoretically enabling the algorithm, its
authorship and consequences to be called to question by those subject
to algorithmic decision-making processes (James 2013; Diakopoulos
2013). A more accountable algorithm might address concerns expressed
studying how the algorithmic system produces outputs that are designed
to be used as part of organisational practices to make sense of a scene
placed under surveillance by the algorithmic system. In this way, the
human-shaped object and luggage-shaped object of Chapter 2 can be
understood as part of this ongoing, account-able production of the sense
of a scene in the airport or train station in which the project is based.
I will refer to these sense-making practices as the account-able order of
the algorithmic system. Importantly, having algorithms participate in
account-ability changes the terms of the account-able order (in compar-
ison with the way sense was made of the space prior to the introduction
of the algorithmic system).
Making sense of this account-able order may still appear to be some
distance from the initial concerns with accountability which I noted
in the opening to this chapter, of algorithmic openness and transpar-
ency. Indeed, the ethnomethodological approach appears to be char-
acterised by a distinct set of concerns, with ethnomethodologists
interested in moment to moment sense-making, while calls for algo-
rithmic accountability are attuned to the perceived needs of those
potentially subject to actions deriving from algorithms. The account-
able order of the algorithm might be attuned to the ways in which
algorithms participate in making sense of (and in this process compos-
ing) everyday life. By contrast, calls for algorithmic accountability are
attuned to formal processes whereby the algorithm and its consequences
can be assessed. However, Suchman et al. (2002) suggest that work-
place actions, for example, can involve the simultaneous interrelation of
efforts to hold each other responsible for the intelligibility of our actions
(account-ability) while located within constituted ‘orders of accountabil-
ity’ (164). In this way, the account-able and the accountable, as different
registers of account, might intersect. In the rest of this chapter, I will
suggest that demands for an algorithm to be accountable (in the sense of
being transparent and open to question by those subject to algorithmic
decision-making and their representatives) might benefit from a detailed
study of the account-able order of an algorithmic system under devel-
opment. Being able to elucidate the terms of algorithmic participation
in making sense of scenes placed under surveillance—as an account-able
order—might assist in opening the algorithmic system to accountable
questioning. However, for this to be realised requires practically manag-
ing the matter of intersecting different registers of account.
operatives went about making sense of, for example, abandoned lug-
gage. Everyday competences that might otherwise never be articulated
needed to be drawn to the fore here. Operatives talked of the need to
know the history around an image, what happened after an item had
been left, and with whom people had been associating. Computer sci-
entists thus looked to develop the Route Reconstruction component of
the system. This was a later addition to the system architecture as we saw
in Chapter 2. The University 1 team of computer scientists presented a
digital surveillance Route Reconstruction system they had been working
on in a prior project (using a learning algorithm to generate probabilis-
tic routes). Any person or object once tagged relevant, they suggested,
could be followed backwards through the stream of video data (e.g.
where had a bag come from prior to being abandoned, which human
had held the bag) and forwards (e.g. once someone had dropped a bag,
where did they go next). This held out the potential for the algorithms
and operatives to take part in successively and account-ably building a
sense for a scene. From a single image of, say, an abandoned item of
luggage, the algorithm would put together histories of movements of
human-shaped objects and luggage-shaped objects and future move-
ments that occurred after an item had been left. As operatives clicked
on these histories and futures around the image of abandoned luggage,
both operatives and algorithms became active participants in successively
building shared relevance around the image. Histories and futures could
become a part of the constitutive expectancies of relations between algo-
rithms and operatives.
Route Reconstruction would work by using the background maps
of fixed attributes in the train station and airport and the ability of the
system to classify human-shaped objects and place bounding boxes
around them. Recording and studying the movement of human-shaped
bounding boxes could be used to establish a database of popular routes
human-shaped objects took through a space and the average time it took
a human-shaped object to walk from one camera to another. The sys-
tem would use the bounding boxes to note the dimensions, direction
and speed of human-shaped objects. The Route Reconstruction system
would then sift through the digital stream of video images to locate,
for example, a person who had been subject to an alert and trace the
route from which they were most likely to have arrived (using the data-
base of most popular routes), how long it should have taken them to
appear on a previous camera (based on their speed) and search for any
Fig. 3.2 A probabilistic tree and children (B0 and F0 are the same images)
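A crude sketch of the backward step of such a probabilistic search follows; the camera names, route counts and transit times are invented for illustration and stand in for the database of popular routes described above.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Popularity of routes between cameras and the average walking time between
# them, learned from tracked human-shaped bounding boxes.
ROUTE_COUNTS: Dict[Tuple[str, str], int] = {
    ("cam_2", "cam_5"): 120,
    ("cam_3", "cam_5"): 40,
}
AVG_TRANSIT_SECONDS: Dict[Tuple[str, str], float] = {
    ("cam_2", "cam_5"): 45.0,
    ("cam_3", "cam_5"): 80.0,
}

@dataclass
class Candidate:
    camera_id: str
    expected_time: float  # when the tagged person should have appeared there
    probability: float    # share of observed routes arriving this way

def likely_previous_cameras(alert_camera: str, alert_time: float) -> List[Candidate]:
    """Rank the cameras a tagged person most probably arrived from, and when."""
    into_alert = {route: n for route, n in ROUTE_COUNTS.items()
                  if route[1] == alert_camera}
    total = sum(into_alert.values()) or 1
    candidates = [Candidate(camera_id=prev,
                            expected_time=alert_time - AVG_TRANSIT_SECONDS[(prev, here)],
                            probability=n / total)
                  for (prev, here), n in into_alert.items()]
    return sorted(candidates, key=lambda c: c.probability, reverse=True)

# Repeating this step camera by camera, backwards and forwards in time, yields
# the kind of probabilistic tree shown in Fig. 3.2.
```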
For Route Reconstruction to work, far greater amounts of data were required prior to the system
operating (e.g. digitally mapping the fixed attributes of a setting such as
an airport and fixing in place parameters for objects such as luggage and
humans, producing bounding boxes, metadata, tracking movements, con-
stituting a database of popular routes). The introduction of the algorith-
mic system also seemed to require a much more precise definition of the
account-able order of airport and train station surveillance activities. The
form that the order took was both oriented to the project’s ethical aims
and gave a specific form to those aims. Yet this emerging form was also
a concern for questions of accountability being asked on behalf of future
data subjects—those who might be held to account by the newly emerging
algorithmic system.
The specific material forms that were given to the project’s ethical
aims—such as the User Interface and Route Reconstruction system—
were beginning to intersect with accountability questions being raised by
the ethics board. In particular, how could this mass of new data being
produced ever meet the ethical aim to reduce data or the ethical aim
to not develop new surveillance algorithms? In the next section, I will
explore the challenges involved in this intersection of distinct registers of
account by engaging with the work of the ethics board.
Account-ability and Accountability
Through the Ethics Board
As I suggested in the opening to this chapter, formal means of accountability
are not without their concerns. Unexpected consequences, rituals, the build-
ing of new assets are among an array of issues with which accountability
can become entangled. In the algorithm project, the key entanglement
was between the kinds of account-ability that we have seen develop-
ing in this chapter, through which the algorithms began to participate
more thoroughly in everyday life, and accountability involving questions
asked on behalf of future data subjects—those who might be subject to
algorithmic decision-making. This latter approach to accountability
derived from a series of expectations established in the initial project bid,
among project partners and funders that somehow and in some way the
ethical aims of the project required an organised form of assessment.
This expectation derived partly from funding protocols that place a
strong emphasis on research ethics, the promises of the original funding
The way the system ‘will work’—its means of making sense of the space
of the airport and train station—encouraged a number of questions from
the ethics board, enabling the system to be held accountable. For exam-
ple, the Data Protection Officers involved in the board asked during the
first meeting:
Is there a lot of prior data needed for this system? More so than before?
Are people profiled within the system?
How long will the system hold someone’s features as identifiable to them
as a tagged suspect?
Conclusion
In this chapter, we can see that our algorithms are beginning to par-
ticipate in everyday life in more detailed ways. They are not only clas-
sifying putative human-shaped and luggage-shaped objects. They are
also taking part in the production of accounts that make sense of the
actions in which those objects are also taking part: being abandoned,
moving the wrong way, moving into a forbidden space. This participa-
tion in the account-able order of everyday life is an achievement based
on years of work by the computer scientists and significant efforts in
the project to work with operatives to figure out their competences and
how a system might be built that respects and augments these compe-
tences while also accomplishing the project’s ethical aims. Such aims were
also the key grounds for intersecting this increasing participation in the
account-ability of everyday life with the sense of accountability pursued
by the ethics board. Regular meetings, minutes, publicly available reports,
the development of questions into design protocols for the emerging sys-
tem, creating new bases for experimentation, each formed ways in which
accountability could take shape—as a series of questions asked on behalf
of future data subjects. In a similar manner to the literature that opened
this chapter, this more formal process of accountability came with its
own issues. Unanticipated questions arose, the system being subjected to
account kept changing, some things didn’t work for a time, and my own
role in accountability came under scrutiny. In place of any counter expec-
tation that algorithms could be made accountable in any straightforward,
routine manner, came this series of questions and challenges.
What, then, can be said about future considerations of algorithms
and questions of accountability? First, it seemed useful in this project
to engage in detail with the account-able order of the algorithmic sys-
tem. This displaced a formal approach to accountability, for example,
carrying out an audit of algorithmic activity, with an in-depth account
of the sense-making activities of the system. Second, however, this
approach to account-ability did nothing on its own to address questions
References
Beer, D. (2009). Power Through the Algorithm? Participatory Web Cultures and
the Technological Unconscious. New Media & Society, 11(6), 985–1002.
Button, G., & Sharrock, W. (1998). The Organizational Accountability of
Technological Work. Social Studies of Science, 28(1), 73–103.
Diakopoulos, N. (2013). Algorithmic Accountability Reporting: On the Investigation of Black Boxes. Available from: http://towcenter.org/wp-content/uploads/2014/02/78524_Tow-Center-Report-WEB-1.pdf.
Dourish, P. (2004). Where the Action Is: The Foundations of Embodied
Interactions. Cambridge, MA: MIT Press.
Slavin, K. (2011). How Algorithms Shape Our World. Available from: http://
www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world.html.
Spring, T. (2011). How Google, Facebook and Amazon Run the Internet.
Available from: http://www.pcadvisor.co.uk/features/internet/3304956/
how-google-facebook-andamazonrun-the-internet/.
Stalder, F., & Mayer, C. (2009). The Second Index. Search Engines,
Personalization and Surveillance. In K. Becker & F. Stalder (Eds.), Deep
Search. The Politics of Search Beyond Google (pp. 98–115). Piscataway, NJ:
Transaction Publishers.
Strathern, M. (2000). The Tyranny of Transparency. British Educational
Research Journal, 26(3), 309–321.
Strathern, M. (2002). Abstraction and Decontextualisation: An Anthropological
Comment. In S. Woolgar (Ed.), Virtual Society? Technology, Cyberbole, Reality
(pp. 302–313). Oxford: Oxford University Press.
Suchman, L., Trigg, R., & Blomberg, J. (2002). Working Artefacts:
Ethnomethods of the Prototype. British Journal of Sociology, 53(2), 165–179.
CHAPTER 4
The Deleting Machine and Its Discontents
Opening
In Chapter 3, I suggested that our algorithms had begun to par-
ticipate in everyday life by becoming involved in establishing the
account-able order of life in the airport and train station. I also sug-
gested that this form of account-ability intersected with concerns of
A calculative agency will be all the more powerful when it is able to: a)
establish a long, yet finite list of diverse entities; b) allow rich and varied
relations between the entities thus selected, so that the space of possible
classifications and reclassifications is largely open; c) formalize procedures
and algorithms likely to multiply the possible hierarchies and classifications
between these entities. As this calculative power depends on the equip-
ments that agencies can rely upon, we can easily understand why it is une-
venly distributed among them. (1238)
We can think of our algorithms on these terms: they establish a finite list
of entities (human-shaped objects, luggage-shaped objects, bounding
boxes and close-cropped images), entered into varied relations (object
action states such as moving the wrong way or abandoned), of possible
hierarchies (particularly with the coordinators’ interest in selling the tech-
nology in the future; see Chapter 6). That the algorithms will be the entities responsible for imposing this hierarchy of relevance on everyday life suggests they will play a key part in the formulation of this initial step
towards deletion, among a complex array of relations also involving other
system components, the spaces in which the system operates and so on.
This notion of calculative agency builds on a history of STS work
on calculation. This includes studies of how accuracy is constructed
(MacKenzie 1993), the accomplishment of numeric objectivity (Porter
1995), trading, exchange and notions of equivalence (Espeland and
Sauder 2007; MacKenzie 2009), among many other areas. The kinds of
concern articulated in these works are not focused on numbers as an iso-
lated output of calculation. Instead, numbers are considered as part of
a series of practical actions involved in, for example, solving a problem
(Livingston 2006), distributing resources, accountabilities or responsibil-
ities for action (Strathern 2002), governing a country (Mitchell 2002)
and ascertaining a value for some matter (Espeland and Sauder 2007;
MacKenzie 2009). Taking on these ideas, we can say that our algorithms
are not only involved in classifying human-shaped and other objects and
their action states, but also their relevance and irrelevance. The algorithms
are involved in producing both quantities (a number of alerts, a com-
plex means to parameterise visual data, the production of metadata and
bounding boxes) and qualities (issuing or not issuing an alert, deciding
between relevance and irrelevance). This is the starting point for the neol-
ogism of qualculation (Cochoy 2002; Thrift 2004). For Callon and Law:
First, the relevant entities are sorted out, detached, and displayed within
a single space. Note that the space may come in a wide variety of forms
or shapes: a sheet of paper, a spreadsheet, a supermarket shelf, or a court
of law – all of these and many more are possibilities. Second, those enti-
ties are manipulated and transformed. Relations are created between them,
again in a range of forms and shapes: movements up and down lines; from
one place to another; scrolling; pushing a trolley; summing up the evi-
dence. And, third, a result is extracted. A new entity is produced. A rank-
ing, a sum, a decision. A judgment. … And this new entity corresponds
precisely to – is nothing other than – the relations and manipulations that
have been performed along the way. (2005: 715)
something (the rules and conventions for order such as negative numbers)
and for considering nothing (a more literal zero). Following this argu-
ment, to introduce accountable deletion might be to generate instability
and questions as much as order. The nature of data, of algorithms and
their associations might be called into question, and so might the rela-
tions that generated the call for accountability in the first place. Instead
of the algorithmic drama in current academic research that I noted in
the Introduction and Chapter 2, we might have nothing (deletion), but
we might also have a generative something (new accountability relations
through which the deletion is demonstrated alongside difficult questions
regarding what constitutes adequate deletion). The generative dissonance
or profound change in ordering provoked by the blank figure—the some-
thing and nothing—as we shall see, attains a brutish presence: its adequacy
as both something and nothing is difficult to pin down and yet vital to the
marketable future of the technology under development.
The suggestion in policy discussions around deletion and accountably
accomplishing deletion is that in some way an algorithm can be limited (even through another algorithm). Yet taking on board the work
of Cochoy, Callon, Law, Hetherington and Lee suggests that when a
new qualculative form is constituted and inserted into sociomaterial rela-
tions, it can constitute a something and nothing, a disruption and form
of disorder, a set of questions and not only a limitation. The production
of something and nothing and its accountable accomplishment clearly
requires detailed investigation. This chapter will now begin this inves-
tigation particularly attuned to the possibility that deletion might gen-
erate blank figures, disorder as well as order. Attempts to accountably
demonstrate that nothing has been created from something will be pur-
sued, wherein I will suggest that the certainties of qualculation become
overwhelmed by the disruptive figure of what might constitute deletion.
cause for concern within the project. The computer scientists’ interest in
a conventional form of deletion that was not particularly secure or com-
plete, but was straightforward, stood in contrast to the views expressed
by the project coordinators and the ethics board who for different rea-
sons wanted a more thorough-going form of deletion. Should deletion
simply involve changing the route by which data was accessed, or should it involve expunging data from the system, corrupting or overwriting it?
These questions responded to the project’s ethical aims in different
ways, required different amounts of effort, budget and expertise and
might provide different ways to make sense of the technology’s market
potential (see Chapter 6). These concerns were not easy to separate.
As the project moved beyond the experimental phase that we saw in
Chapters 2 and 3, towards a more fully operational system that would
be tested live in the train station and airport, a decision was required
on what ought to constitute deletion. The consultancy firm that coor-
dinated the project decided, with ethics board support, to pursue
the development of a comprehensive, but complex deletion system.
Eventually, this would involve using solid-state drives for data storage
with data then overwritten by an automated system, making it more or
less irretrievable. To begin with, however, solid-state technology was not
available to the project and the means to automatically overwrite data
was not yet developed in a way that would work on the project’s sys-
tem architecture. Moreover, the system also had to demonstrate that it
could successfully demarcate relevant from irrelevant data so that the
irrelevant data could be overwritten. Data that had been tagged 'relevant',
once it was no longer needed, and metadata (such as timestamps and
bounding box dimensions) would also need to be
deleted. And not just deleted, but demonstrably and accountably deleted
so that various audiences could be shown that deletion had taken place
and that the system worked. TechFirm, a large IT network provider who
were a partner in the project, had taken on the task of ensuring that the
deletion system would be accountable. The complexity of deletion did
not end here: discussions continued around how quickly data should be
deleted. Just how long should data be stored, what was the correct ethi-
cal and practical duration for data storage? Operatives might need to do
Route Reconstruction sometime after an alert had been issued, but ethi-
cal demands suggested such storage times should be limited. As a feature
of the emerging technology under test conditions, 24 hours was initially
set as a data storage period that responded to ethical and emerging pol-
icy imperatives and the practical requirements of operatives.
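For readers who want a concrete sense of what such a retention rule involves, the following minimal sketch renders the 24-hour policy in Python. It is purely illustrative and is not the project's code: the names (FrameRecord, overwrite, purge_expired) and the zero-filling stand-in for secure overwriting are assumptions introduced here, not details drawn from the system itself.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List

RETENTION = timedelta(hours=24)  # the storage period set for the test phase


@dataclass
class FrameRecord:               # hypothetical record of stored video data
    frame_id: str
    captured_at: datetime
    relevant: bool               # set by the system's relevance demarcation
    data: bytearray              # raw frame bytes held on disk


def overwrite(record: FrameRecord) -> None:
    """Render the frame more or less irretrievable by overwriting its bytes."""
    record.data = bytearray(len(record.data))  # zero-fill as a stand-in for secure overwriting


def purge_expired(store: List[FrameRecord], audit_log: List[Dict], now: datetime) -> None:
    """Overwrite irrelevant frames older than the retention period and log each deletion."""
    for record in store:
        if not record.relevant and now - record.captured_at > RETENTION:
            overwrite(record)
            # the audit entry is what makes the deletion demonstrable to others
            audit_log.append({
                "frame_id": record.frame_id,
                "deleted_at": now.isoformat(),
                "reason": "irrelevant; retention period elapsed",
            })
```

Even in this toy form, the problem that the rest of the chapter describes is already visible: the audit log can show that something was done, but showing that nothing now remains is a different and more demanding kind of demonstration.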
were clear, the maps of the fixed attributes of the experimental settings
were clear, the models for object classification and the action states of
objects as worthy of further scrutiny all seemed clear. The quantities
involved were significant—terabytes of digital video data—but the quali-
ties—mostly operatives clicking on text alerts and watching short videos—
were neatly contained. Following Callon and Law (2005), we could say
that this was the first step towards a straightforward form of qualcula-
tion. Things were separated out and disentangled such that they might
be recombined in a single space (within the algorithmic system). The
background subtraction technique that we saw in Chapter 2 provided
this seemingly straightforward basis for beginning demarcations of rele-
vant data (to be kept) and irrelevant data (to be deleted). A result could
be extracted.
However, the project was now moving beyond its initial experimental
phase. In the airport and train station as the technology moved towards
system testing, the computer scientists from University 1 and 2 began
to engage with the complexities of relevance detection in real time and
real space. They started to look for ways to tidy up the initial steps of
object classifications (which provided approximate shapes for back-
ground subtraction) in the airport and train station, through ever more
closely cropped pixel masks for objects, with any single, isolated pixels
erased and any holes between pixels filled. They suggested masks could
be further tidied by removing shadow, just leaving the new entity. And
these tidied up entities could now be subjected to object classification
with what the computer scientists hoped was greater certainty. They
were cleaned and tidied objects. Object classification would now define
with confidence the objects in view as, for example, human-shaped or
luggage-shaped. Cleaning the images, removing shadow, removing gaps
in pixel masks was more processor intensive than the initial quick and
dirty technique we noted in the earlier experimental phase of the project,
but it was still computationally elegant for the computer scientists. It was
a reasonably quick technique for ascertaining a classification of putative
objects and it was a classification in which they (and other project partici-
pants) could have confidence.
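The tidying described here corresponds to routine morphological clean-up of a foreground mask. The sketch below, using numpy and scipy as stand-ins, illustrates the kind of operation involved rather than the routines the computer scientists actually wrote; shadow removal, which typically relies on colour and intensity cues, is left out.

```python
import numpy as np
from scipy import ndimage


def tidy_mask(foreground: np.ndarray) -> np.ndarray:
    """Clean a binary foreground mask produced by background subtraction.

    foreground: 2-D boolean array, True where a pixel differs from the
    background model (an approximate object shape plus speckle noise).
    """
    # erase single, isolated foreground pixels
    opened = ndimage.binary_opening(foreground, structure=np.ones((3, 3)))
    # fill holes between pixels inside the putative object
    return ndimage.binary_fill_holes(opened)


def crop_to_largest_object(mask: np.ndarray):
    """Return the bounding box of the largest connected component,
    a crude stand-in for the closely cropped pixel mask passed on
    to object classification."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    rows, cols = np.where(labels == largest)
    return rows.min(), rows.max(), cols.min(), cols.max()
```

Classification into human-shaped or luggage-shaped objects would then operate on the cleaned mask and its bounding box; the point of the clean-up is simply to give that classification a tidier entity to work with.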
Object classification required this more developed form of qualcula-
tion, drawing entities together into new relations such that they might
be qualified for judging as relevant or irrelevant because the system
faced new challenges in working in real spaces in real time. Classifying
phase (see Chapter 5). Everyday life and the algorithmic system did not
see eye to eye.
Work to build the algorithmic deleting machine and constitute an
ordered and certain accountable nothing, a notable absence, instead
became the basis for establishing a precarious kind of uncertain presence.
Orphan frames and the audit log continually generated a disorderly account
of something instead of nothing, a blank figure (Hetherington and Lee
2000) that paid recognition to the terms of its own order (that it should
find and prove the existence of nothing), but also questioned that order (by
finding orphan frames that then required explanation). The system threat-
ened to overwhelm the qualculations that had tried to establish a demarca-
tion between relevant data to be kept and irrelevant data to be deleted.
The audit log generated a notable question for the project partic-
ipants: could the technology still be sold primarily on the basis of its
technical efficacy in deleting? The clear and negative answer to this
question for the coordinators required a significant switch in the con-
ditions under which parties might be invited to engage with the sys-
tem. Initially the project coordinators had sought to take the internal
accountability mechanisms of deletion out into the world as a basis
for bringing the world to the deleting machine. They sought to
develop, from nothing, a market-valued something. After these some-
what sketchy results, the project coordinators sought to leave aside
the technical difficulties through which nothing (the deleted) failed
to be effectively and accountably constituted, at the same time as they
continued to embark on concerted market work. As we will see in
Chapter 6, having one form of calculation overwhelmed by this blank
figure encouraged the coordinators to seek a different basis for order-
ing their calculations.
Conclusion
In this chapter, we have seen that grasping everyday life and participat-
ing in everyday life became more challenging for our algorithms as they
moved from experimentation to something closer to system testing in
the train station and airport. What might be termed the real world con-
ditions of real time and real space operations proved difficult. Indeed the
algorithmic system needed more development to cope with these new
exigencies. Further measurements and a new database were required to
build durable links between the space of the airport and train station and
the video stream that flowed through the system.
order to be judged. Yet setting limits for our algorithmic system through
deletion was not straightforward; for something to be convincingly lim-
ited, it needed to be demonstrably and accountably limited. The work
to produce an accountable deleting machine was focused on producing
a machine that could account for itself and the way it set limits, demon-
strating nothing (the product of deletion) as a prior step to something
(the account of nothing, building a world of relations of value into the
technology). However, accountability work was also uncertain and a lit-
tle precarious with the world of relations of people and things assembled
to do accountability, shifting between certainty and uncertainty. The
study of making deleting accountable emphasised this precariousness—
to prove that nothing existed as a result of something being deleted,
without resurrecting the thing deleted, proved an ongoing conceptual
and practical challenge. As we will see in Chapter 5, this was only the
start of a series of challenges for our algorithms.
References
Article 29 Working Party Accountability Principle. (2010). Available from: http://
ec.europa.eu/justice/policies/privacy/docs/wpdocs/2010/wp173_en.pdf.
Bernal, P. (2011). A Right to Delete? European Journal of Law and Technology, 2(2).
Callon, M., & Law, J. (2005). On Qualculation, Agency and Otherness.
Environment and Planning D: Society and Space, 23, 717–733.
Callon, M., & Muniesa, F. (2005). Economic Markets as Calculative Collective
Devices. Organization Studies, 26, 1229–1250.
Cochoy, F. (2002). Une Sociologie du Packaging ou l'Âne de Buridan Face
au Marché [A Sociology of Packaging, or Buridan's Ass in the Face of the
Market]. Paris: Presses Universitaires de France.
Espeland, W., & Sauder, M. (2007). Rankings and Reactivity: How Public
Measures Recreate Social Worlds. American Journal of Sociology, 113(1), 1–40.
Gillespie, T. (2013). The Relevance of Algorithms. In T. Gillespie, P.
Boczkowski, & K. Foot (Eds.), Media Technologies: Essays on Communication,
Materiality, and Society. Cambridge, MA: MIT Press.
Goold, B. (2009). Building It In: The Role of Privacy Enhancing Technologies
in the Regulation of Surveillance and Data Collection. In B. Goold &
D. Neyland (Eds.), New Directions in Surveillance and Privacy (pp. 41–61).
Cullompton, Devon: Willan.
Hetherington, K., & Lee, N. (2000). Social Order and the Blank Figure.
Environment and Planning D: Society and Space, 18(2), 169–184.
CHAPTER 5
Opening
In this chapter, our algorithms will continue their journey into the every-
day. Beyond the initial expectation held by project participants (see
Chapters 2 and 3) that in experimental settings the algorithms might
The chapter will begin with a discussion of the ways in which recent
STS literature has handled future orientations in studies of technology
demonstrations, testing, expectations and prototyping. This will provide
some analytic tools for considering the work of the algorithmic system
in its move into forms of testing and demonstration. I will then suggest
that notions of integrity provide a means to turn attention towards the
practices of seeing, forms of morality and materiality made at stake in
demonstrations of our algorithms. The chapter will conclude with a dis-
cussion of the problems now faced by our algorithms as a result of their
demonstrable challenges.
Demonstrating Algorithms
From their initial discussions of system architecture and experimentation
with grasping the human-shaped object (see Chapter 2), to the start of
system testing, demarcating relevant from irrelevant data and building a
deleting machine (see Chapter 4), the project team had retained a con-
fidence in the project’s premise. The aim was to develop an algorithmic
surveillance system for use, initially, in a train station and airport that
would sift through streams of digital video data and select out relevant
images for human operatives. As I suggested in Chapter 2, the idea was
to accomplish three ethical aims, to reduce the amount of visual video
data that was seen by operatives, to reduce the amount of data that was
stored by deleting irrelevant data and to not develop any new algorithms
in the process. Up until the problems experienced with the deletion sys-
tem (see Chapter 4), achieving these aims had been a difficult and chal-
lenging task, but one in which the project team had mostly succeeded.
Yet the project had never been just about the team’s own success: the
project and in particular the algorithmic system needed to demonstrate
its success (and even become a marketable good, see Chapter 6).
From the project proposal onwards, a commitment had always been
present to put on three types of demonstration for three distinct kinds
of audience. As the person responsible for ethics in the project, I would
run a series of demonstrations for ethical experts, policy makers (mostly
in the field of data protection) and academics who would be called upon
to hold to account the ethical proposals made by the project. End-users
from the train station and airport would also be given demonstrations of
the technology as an opportunity to assess what they considered to be
the potential strengths and weaknesses of the system. Finally, the pro-
ject funders would be given a demonstration of the technology ‘live’ in
the airport at the end of the project, as an explicit opportunity to assess
the merits, achievements, failures and future research that might emanate
from the project. We will take each of these forms of demonstration in
turn and look at the ways in which our algorithms now engage with the
everyday and the questions of integrity these engagements provoke.
Demonstrating Ethics
I invited a group of ethical experts (including academics, data protec-
tion officers, politicians and civil liberty organisations) to take part in a
demonstration of the technology and also ran sponsored sessions at three
conferences where academics could be invited along to demonstrations.
The nature of these demonstrations at the time seemed partial (Strathern
2004), and in some ways deferred and delegated (Rappert 2001) the
responsibility for ethical questions from me to the demonstration audi-
ences. The demonstrations were partial in the sense that I could not use
live footage, as these events did not take place in the end-user sites, and
could only use footage of project participants acting out suspicious behaviour;
data protection concerns would have arisen had footage been used of
non-project participants (e.g. airport passengers) who had not consented
to take part in the demonstrations. Using recorded footage at this
point seemed more like a compromise than an issue of integrity; footage
of the User Interface and of our algorithms selecting out human-shaped
objects and action states (such as abandoned luggage) could be played to
audiences, and footage of the Route Reconstruction system could even be
used to replay those objects deemed responsible for the events. Audience members were
invited to discuss the ethical advantages and disadvantages they perceived
in the footage. If it raised questions of integrity to any extent, it was per-
haps in the use of recorded footage. But audiences were made aware of
the recorded nature of the footage and the project participants’ roles as
actors. In place of a display of virtuosity (Collins 1988) or an attempt
to manage revelation and concealment (Coopmans 2010) I (somewhat
naively it turned out) aimed to put on demonstrations as moments where
audiences could raise questions of the technology, free from a dedicated
move by any wily demonstrator to manage their experience of seeing.
Along with recorded footage, the audience were shown recordings
of system responses; videos incorporated the technicalities of the Event
Detection component of the system architecture, its selection proce-
dures and provision of alerts. I took audiences through the ways in which
the system put bounding boxes around relevant human-shaped and other
objects deemed responsible for an action and showed a few seconds of
footage leading up to and following an alert. At this moment, I thought
I was giving audiences a genuine recording of the system at work for them
to discuss. However, it later transpired that the recorded footage and sys-
tem response, and my attestation that these were more or less realistic rep-
resentations of system capabilities, each spoke of an integrity belied by later
demonstrations.
End-User Demonstrations
The limitations of these initial demonstrations became clear during a
second form of demonstration, to surveillance operatives in the airport.
Several members of the project team had assembled in an office in the air-
port in order to give operatives an opportunity to see the more developed
version of the technology in action. Unlike initial discussions around the
system architecture or initial experimentation with grasping the human-
shaped object (see Chapter 2), our algorithms were now expected to
deliver a full range of competences in real time and real space.18 These
demonstrations also provided an opportunity for operatives to raise issues
regarding the system’s latest design (the User Interface, for example,
had been changed somewhat), its strengths and limitations, and to ask
any questions. This was to be the first ‘live’ demonstration of the tech-
nology using a live feed from the airport’s surveillance system. Although
Simakova (2010) talks of the careful preparations necessary for launching
a new technology into the world and various scholars cite the importance
of revelation and concealment to moments of demonstration (Smith
2009; Coopmans 2010; Collins 1988), this attempt at a ‘demonstration’
to end-users came to seem confident, bordering on reckless in its appar-
ent disregard of care and concealment. Furthermore, although there was
little opportunity to select the audience for the test (it was made up from
operatives who were available and their manager), there was also little
done to position the audience, manage their experience of seeing, incor-
porate them into a compelling narrative or perform any temporal oscil-
lation (between the technology now and how it might be in the future;
Suchman 2011; Brown 2003; Brown and Michael 2003; Simakova 2010;
Smith 2009). The users remained as unconfigured witnesses (Coopmans
2010; Woolgar 1991).
Prior to the demonstration to end-users, the limited preparatory work
of the project team had focused on compiling a set of metrics to be used
for comparing the new algorithmic system with the existing conventional
video-based surveillance system. An idea shared among the computer sci-
entists in the project was that end-users could raise questions regarding the
technology during a demonstration, but also be given the metric results as
indicative of its effectiveness in aiding detection of suspicious events. The
algorithmic and the conventional surveillance system would operate within
the same temporal and spatial location of the airport and the operatives
would be offered the demonstrators’ metric criteria as the basis for judging
sameness (Pinch 1993). The metrics would show that the new technology,
with its move to limit visibility and storage, was still at least as effective as
the current system in detecting events, but with added ethics.
This demonstration was designed to work as follows. The operatives of
the conventional surveillance system suggested that over a 6-hour period
approximately 6 suspicious items, which might turn out to be lost or
abandoned luggage, would be flagged by the operatives and sent to security
operatives on the ground for further scrutiny. On this basis, our aban-
doned luggage algorithm and its IF-THEN rules (see Introduction and
Chapter 2) needed to perform at least to this level for the comparative
measure to do its work and demonstrate that the future would be as effec-
tive as the present, but with added ethics. The system was set to run for
6 hours prior to the arrival in the office of the surveillance operatives so
they could be given the results of the comparative metric. I had also taken
an interest in these comparative metrics. I wanted to know how the effec-
tiveness of our algorithms could be made calculable, what kinds of devices
this might involve, how entities like false positives (seeing things that were
not there) and false negatives (not seeing things that were there) might be
constituted. I wanted to relay these results to the ethical experts who had
taken part in the previous demonstrations on the basis that a clear division
between technical efficacy and ethical achievement was not possible (see
Chapter 3 for more on ethics). Whether or not the system worked on
these criteria would provide a further basis for ethical scrutiny.
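To make the comparative exercise concrete, the abandoned-luggage rule can be caricatured in IF-THEN form. The thresholds and names below are hypothetical stand-ins (the project's actual parameters are not reported here); what matters is the shape of the rule and the baseline it was asked to match, roughly one flagged item per hour.

```python
from dataclasses import dataclass


@dataclass
class TrackedObject:              # hypothetical output of object classification
    kind: str                     # "human-shaped" or "luggage-shaped"
    x: float                      # position in the camera's ground plane
    y: float


def abandoned_luggage_alert(luggage: TrackedObject, owner: TrackedObject,
                            seconds_apart: float,
                            min_distance: float = 3.0,   # illustrative threshold (metres)
                            min_duration: float = 30.0   # illustrative threshold (seconds)
                            ) -> bool:
    """IF a luggage-shaped object and its human-shaped object have been
    separated by more than min_distance for longer than min_duration,
    THEN issue an alert for the operatives."""
    distance = ((luggage.x - owner.x) ** 2 + (luggage.y - owner.y) ** 2) ** 0.5
    return (luggage.kind == "luggage-shaped"
            and owner.kind == "human-shaped"
            and distance > min_distance
            and seconds_apart > min_duration)


# The comparative metric set the bar at the operatives' own estimate:
expected_alerts_per_6_hours = 6   # roughly one flagged item per hour
```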
In the 6 hours that the system ran, when the conventional surveillance
system would detect 6 items of potentially lost or abandoned luggage,
the algorithmic system detected 2654 potentially suspicious items. This
result went so far off the scale of predicted events that the accuracy of
the system could not even be measured. That is, there were just too
many alerts for anyone to go through and check the number of false pos-
itives. The working assumption of the computer scientists was that there
were likely to be around 2648 incorrect classifications of human-shaped
and luggage-shaped objects that had for a time stayed together and then
separated. In later checking of a random sample of alerts, it turned out
the system was detecting as abandoned luggage such things as reflective
surfaces, sections of wall, a couple embracing and a person studying a
departure board. Some of these were not fixed attributes of the airport
and so did not feature in the digital maps that were used for background
subtraction. However, object parameterisation should have been able to
calculate that these were not luggage-shaped objects, and the flooring
and walls should have been considered fixed attributes.
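The scale of the failure can be expressed with the numbers already given. Taking, with the computer scientists, the working assumption that around 2648 of the 2654 alerts were misclassifications, the implied false-positive rate is overwhelming; the short calculation below simply restates the figures reported in the text.

```python
alerts = 2654                   # alerts raised by the algorithmic system in 6 hours
expected = 6                    # items the conventional system would flag in 6 hours
assumed_false_positives = 2648  # the computer scientists' working assumption

print(alerts / expected)                 # roughly 442 times the expected number of alerts
print(assumed_false_positives / alerts)  # roughly 0.998, i.e. about 99.8% false positives
```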
Fig. 5.1 A human-shaped object and luggage-shaped object incorrectly aggregated as luggage
Fig. 5.2 A luggage-shaped object incorrectly classified as separate from its human-shaped object
Fig. 5.3 A human-shaped object's head that has been incorrectly classified as a human in its own right, measured by the system as small and therefore in the distance and hence in a forbidden area, set up for the demonstration
Fig. 5.4 Wall as a luggage-shaped object
the algorithms were not entirely in the dark about the nature of the
footage. The computer scientists had a developing awareness that the
algorithms could see a space with greater or lesser confidence according
to camera angles, lights, the material floor covering, how busy a space
happened to be and so on. Using recorded data that only included
‘unproblematic’ footage enabled the algorithms to have the best chance
of seeing the space and to be recorded seeing that space successfully.
To replay these recordings as the same as live data was to conceal the
partially seeing algorithm (the algorithm that sees well in certain con-
trolled conditions). Algorithmic vision (how the algorithm goes about
seeing everyday life) and the constitution of the spaces in which the
algorithms operate (including how the algorithms compose the nature
of people and things) were entangled with questions of material, visual
and moral integrity which we will return to below. However, first and
most pressing for the project team was the question of what to do
about demonstrating the ethical surveillance system to project funders
given the disastrous efficacy results.
The silence that followed the computer scientist’s suggestion was typical
of what seemed to be multiple awkward pauses during the meeting. One
reason for this might have been an ongoing difference among members
of the project team as to how responsibility ought to be distributed for
the disappointing end-user demonstration results. Another reason might
also be a concern that to use recorded data was to effectively undermine
the integrity of the final project demonstration. The computer scientist
went on to make a further suggestion to the project coordinator:
The pause in the meeting following this second suggestion was slightly
shorter than the first and was breached by the project coordinator who
began to set out a fairly detailed response to the situation, giving the
impression that he had been gathering his thoughts for some time. In his
view a live test in the airport, using live video streams was the only possi-
bility for the demonstration to funders. For the train station, his view was
different:
In this excerpt the project coordinator suggests that for the train sta-
tion, not only will recorded footage be used, but the demonstration will
be ‘idealised’. That is, a segment of recorded data will be used that fits
computer scientists’ expectations of what the algorithms are most likely
to correctly see and correctly respond to (where ‘correct’ in both cases
would be in line with the expectations of the project team). Idealising
Project Coordinator: I don’t think there’s any need to say anything on any
subject that was not what I just said.
Computer Scientist2: Do we need to test the busy period, or quiet time like
now?
Project Coordinator: Now I think is good.
Computer Scientist1: We need to find the best time to test… it cannot be
too busy. We need to avoid the busy period because of crowding.
Once the ideal timing for a demonstration had been established (late
morning or early afternoon, avoiding the early morning, lunchtime or
early evening busy periods where multiple flights arrived and departed),
other areas of activity that could be idealised were quickly drawn into
discussion. It had become apparent in testing the technology that an
item of abandoned luggage was identified by airport staff using the con-
ventional surveillance system on average once an hour. To ensure that
an item of luggage was ‘abandoned’ in the quiet period would require
that someone known to the project (e.g. an airport employee in plain
clothes) ‘abandoned’ an item of luggage. However, if the luggage was to
be abandoned by someone known to the project, this opened up further
opportunities for idealising the ‘live’ demonstration:
For the next run-through of the test, one of the project team mem-
bers’ luggage was wrapped in paper to test the hypothesis that this
would increase the likelihood of the object being detected by the
algorithm (Fig. 5.5).
The hypothesis proved to be incorrect as the results for both items
of luggage were broadly similar and continued to be disappointing.
However, it seemed that the algorithms could always successfully accom-
plish background subtraction, classify objects as human-shaped and
luggage-shaped and create an alert based on their action state as sepa-
rate for a certain time and over a certain distance in one, very tightly
delineated location in the airport. Here the IF-THEN rules of the algo-
rithm seemed to work. The location provided a further basis to idealise
the ‘live’ demonstration, except that the person ‘abandoning’ the lug-
gage had to be very precise. In initial tests the computer scientists and
the person dropping the luggage had to remain on their phones, pre-
cisely coordinating and adjusting where the luggage should be posi-
tioned. It seemed likely that a lengthy phone conversation in the middle
of a demonstration and continual adjustment of the position of luggage
would be noticed by project funders. The project team discussed alterna-
tives to telephone directions:
Fig. 5.5 Luggage is idealised
After two days of rehearsal, the project coordinator was satisfied that the
airport employee was leaving the luggage in the precisely defined loca-
tion on a consistent basis, that the luggage selected was appropriate,
that it was being left in a natural way (its position was not continually
adjusted following telephone instructions) and the algorithm was suc-
cessfully classifying the luggage-shaped object and issuing an alert that
funders would be able to see in the demonstration.
At this moment it appeared that the demonstration would be ‘live’
and ‘idealised’, but what of its integrity? I was still present to report on
the ethics of the technology under development and the project itself.
In the final preparation meeting prior to the demonstration for research
funders, I suggested that a common motif of contemporary ethics was
accountability and transparency (Neyland 2007; Neyland and Simakova
2009; also see Chapter 3) and this sat awkwardly with the revelation
and concealment and positioning of witnesses being proposed. On
the whole, the project team supported the idea of making the demon-
stration more accountable and transparent—this was, after all, a research
project. The project team collectively decided that the demonstration
would go ahead, but the research funders would be told of the actor’s
status as an employee of the airport, that the abandonment itself was
staged, that instructions would be given to the actor in plain sight of
the funders. Revelation and concealment were re-balanced and perhaps a
degree of integrity was accomplished.
these themes in distinct ways. For example, the demonstrations for eth-
ical audiences were initially conceived as free from many of the con-
cerns of revelation and concealment, temporal oscillation and carefully
scripted witnessing. I had (naively) imagined these demonstrations were
occasions in which the technology would be demonstrated in an open
manner, inspiring free discussion of its potential ethical implications.
Yet the demonstration to end-users and prior attempt to collect efficacy
data to render the algorithmic system comparable with the conventional
surveillance system (but with added ethics), revealed the extent of con-
cealment, temporal oscillation and carefully scripted witnessing that had
been required to put together the videos of the system for the ethical
demonstrations. I could now see these videos as demonstrably account-
ing for an algorithmic technology with capabilities far beyond those dis-
played to end-users. We could characterise the ethical demonstration as a
kind of idealised display of virtuosity (Collins 1988), but one which no
project member had confidence in, following the search for efficacy data
for end-users.
Subsequent discussions of the form and content of the demonstra-
tions for project funders suggest that a compromise on integrity was
required. The project coordinator looked to carefully manage revela-
tion and concealment (for the train station only using recorded foot-
age, within conditions that algorithms could see, only using recorded
system responses and only using those responses when the system had
responded correctly; or in the airport controlling the type of luggage, its
location, its careful ‘abandonment’), temporal oscillation (using the foot-
age to conjure an ethical surveillance future to be made available now)
and the elaboration of a world into which witnesses could be scripted
(with the computer scientists, project manager, algorithms and myself
initially in a different position from which to see the world being offered
to project funders).
Yet discussion of demonstrations and their integrity should not lead us
to conclude that this is simply and only a matter of deception. Attending
to the distinct features of integrity through notions of morality, materiality
and vision can help us to explore what kind of everyday life our algorithms
were now entering into. Firstly, our algorithms have been consistently
oriented towards three ethical aims (to see less and address privacy con-
cerns, store less and address surveillance concerns, and only use existing
algorithms as a means to address concerns regarding the expansion of
algorithmic surveillance). Articulating the aims in ethical demonstrations
flooring better than others) and the distribution of vision (who and what
sees) and the organisation of vision (who and what is in a position to
produce an account of who and what), are important issues in the integ-
rity of demonstrations. The train station demonstration can have more
or less integrity according to this distribution and organisation of vision.
If recorded footage is used but the algorithms do not know what it is
they will see, this is noted by project participants as having more integ-
rity than if recorded decision-making by the algorithms is also used.
In the event both types of recording were used. Discussions in project
meetings around the demonstration for project funders led to similar
questions. The algorithms need to see correctly (in classifying luggage as
luggage-shaped objects) and to be seen correctly seeing (in producing
system results) by, for example, project funders and ethical experts, in
order to accomplish the visual-moral integrity to which the project has
made claim: that the algorithms can grasp everyday life.
Conclusion
In this chapter, the focus on demonstrating our algorithms’ ability to
grasp everyday life, compose accounts of everyday life and become the
everyday of the airport and train station, has drawn attention to notions
of integrity. Given the project’s ethical aims, work to bring a world into
being through demonstration can be considered as concerted activities
for bringing about a morally approved or better world. The moral terms
of demonstrations can thus go towards establishing a basis from which
to judge their integrity. Close scrutiny of demonstration work can then
open up for analysis two ways of questioning the integrity of the moral
world on show. Through material integrity, questions can be asked of
the properties of demonstrations, what they seem to be and how they
indexically provide for a means to constitute the moral order to which
the demonstration attests. Through visual integrity, questions can be
posed of who and what is seeing, the management of seeing, what it
means to see correctly, and be seen correctly. Material and visual integrity
is managed in such a way as to allow for the demonstrations to produce
a relation of undoubted correspondence between index and context,
establishing the integrity of the material and visual features of the tech-
nology: that it sees and has been seen correctly, and that the acts of see-
ing and those doing the seeing can be noted as having sufficient moral
integrity for those acts of seeing to suffice.
Notes
1. See, for example: http://www.theguardian.com/uk/2008/jul/23/
canoe.ukcrime and http://news.bbc.co.uk/1/hi/uk/7133059.stm.
2. See: http://www.dailymail.co.uk/news/article-2478877/Man-sends-
letters-using-DIY-stamps-Royal-Mail-failed-notice.html and http://metro.
co.uk/2013/10/29/royal-fail-anarchist-creates-freepost-system-with-his-
own-stamps-4165420/.
3. See: http://www.thefootballsupernova.com/2012/04/ali-dia-greatest-
scam-premier-league.html and http://www.espncricinfo.com/coun-
ty-cricket-2011/content/story/516800.html.
4. See: http://www.theguardian.com/world/2013/aug/20/government-
fake-bomb-detectors-bolton.
5. See, for example: http://www.news.com.au/lifestyle/health/fake-ni-
gerian-doctors-ar rested-over-womens-deaths/stor y-fneuz9ev-
1226700223105.
6. See, for example: http://www.dailymail.co.uk/news/article-2408159/
Fake-plastic-surgeon-did-treatments-kitchen-left-woman-needing-
hospital-treatment.html.
7. See, for example: http://www.bbc.co.uk/news/uk-england-tyne-24716257;
http://news.bbc.co.uk/1/hi/world/asia-pacific/1310374.stm;
http://edition.cnn.com/2003/US/03/14/sprj.irq.documents/;
http://www.theguardian.com/uk/2008/may/05/nationalarchives.second-
worldwar http://news.bbc.co.uk/1/hi/education/1039562.stm; http://
www.timeshighereducation.co.uk/news/fake-verifiable-degrees-offered-on-in-
ternet/167361.article; and http://www.badscience.net/2008/11/hot-foul-
air/#more-824.
8. See Stone (2010) and for example: http://www.science20.com/
between_death_and_data/5_greatest_palaeontology_hoaxes_all_time_3_
archaeoraptor-79473.
9. With around 2.8% of UK pound coins estimated to be fake. See: http://
www.bbc.co.uk/news/business-12578952.
10. We also find Sokal’s fake social science paper sitting alongside a greater
number of fake natural science discoveries (e.g. in cloning: http://
news.bbc.co.uk/1/hi/world/asia-pacific/4554422.stm and http://
www.newscientist.com/article/dn8515-cloning-pioneer-did-fake-re-
sults-probe-finds.html#.UoON3YlFBjp) and fakes in other fields such
as psychology (see faked data in claims that white people fear black peo-
ple more in areas that are dirty or that people act in a more selfish man-
ner when hungry: http://news.sciencemag.org/education/2011/09/
dutch-university-sacks-social-psychologist-over-faked-data).
11. Concise Oxford Dictionary (1999).
12. Such as van Meegreen, Elmyr deHory, Tom Keating, John Drewe and
John Myatt and the Greenhalgh family.
13. Alder et al. (2011) cite the example of the portrait of the Doge Pietro
Loredano which was and then wasn’t and now is again considered a
painting by Tintoretto.
14. I use the term integrity here rather than provenance or an evidential focus
on “chain of custody” (Lynch 1998: 848) as a first step towards explor-
ing index-context relations (taken up further in the next section).
15. Indexical is used in the ethnomethodological sense here, see Garfinkel
(1967).
16. For more on the complexities of seeing, see, for example: Daston and Galison
(1992), Fyfe and Law (1988), Goodwin (1994, 1995), Goodwin and
Goodwin (1996), Hindmarsh (2009), Jasanoff (1998), and Suchman (1993).
17. In a similar manner to Latour’s allies of scientific experiments sent out
into the world to attest to the strength of a scientific fact or discovery.
18. Real time and real space here refer to the naturally occurring unfolding
of events in the train station and airport in contrast to the experimental
stage of the project where spaces and the timing of events might be more
controlled. Here, our algorithms would have to make their first steps
into a less controlled environment, causing anxiety for the computer
scientists.
References
Alder, C., Chappell, D., & Polk, K. (2011). Frauds and Fakes in the Australian
Aboriginal Art Market. Crime, Law and Social Change, 56, 189–207.
Brown, N. (2003). Hope Against Hype—Accountability in Biopasts, Presents
and Futures. Science Studies, 2, 3–21.
Brown, N., & Michael, M. (2003). A Sociology of Expectations: Retrospecting
Prospects and Prospecting Retrospects. Technology Analysis and Strategic
Management, 15(1), 3–18.
Clark, C., & Pinch, T. (1992). The Anatomy of a Deception. Qualitative
Sociology, 15(2), 151–175.
Collins, H. (1988). Public Experiments and Displays of Virtuosity: The Core-Set
Revisited. Social Studies of Science, 18(4), 725–748.
Concise Oxford Dictionary. (1999). 10th ed. Oxford: Oxford University Press.
Coopmans, C. (2010). ‘Face Value’: New Medical Imaging Software in
Commercial View. Social Studies of Science, 41(2), 155–176.
Daston, L., & Galison, P. (1992). The Image of Objectivity. Representations, 40,
81–128.
Fyfe, G., & Law, J. (Eds.). (1988). Picturing Power: Visual Depiction and Social
Relations. London: Routledge.
Garfinkel, H. (1963). A Conception of and Experiments With “Trust” as
a Condition of Concerted Stable Actions. In O. J. Harvey (Ed.), The
Production of Reality: Essays and Readings on Social Interaction (pp. 187–
238). New York, USA: The Ronald Press Company.
Garfinkel, H. (1967). Studies in Ethnomethodology. Englewood Cliffs, NJ:
Prentice-Hall.
Goodwin, C. (1994). Professional Vision. American Anthropologist, 96(3),
606–633.
Goodwin, C. (1995). Seeing in Depth. Social Studies of Science, 25(2), 237–274.
Goodwin, C., & Goodwin, M. (1996). Seeing as Situated Activity: Formulating
Planes. In Y. Engestrom & D. Middleton (Eds.), Cognition and
Communication at Work (pp. 61–95). Cambridge: Cambridge University
Press.
Hindmarsh, J. (2009). Work and the Moving Image: Past, Present and Future.
Sociology, 43(5), 990–996.
Jasanoff, S. (1998, October–December). The Eye of Everyman: Witnessing
DNA in the Simpson Trial. Social Studies of Science, 28(5/6), 713–740.
Latour, B., & Woolgar, S. (1979). Laboratory Life. London: Sage.
Lucivero, F., Swierstra, T., & Boenink, M. (2011). Assessing Expectations:
Towards a Toolbox for an Ethics of Emerging Technologies. Nanoethics, 5,
129–141.
CHAPTER 6
Opening
In Chapter 5, the ability of our algorithms to grasp and compose every-
day life in the train station and airport came under significant scrutiny.
Problems in classifying objects and their action states, issuing alerts and
demarcating relevant from irrelevant footage were major concerns for the
The chapter will begin with a brief digression through recent writ-
ing on performativity, before looking at the coordinators’ work to draw
investors into new relations with the algorithmic system. I will suggest
that these relations operated in a similar manner to the object classifica-
tion of our algorithms: investors, territories, future sales and market size
had to be separated out and qualified, calculated and pacified in order
that these new relations of investment might be developed. The chapter
will end with a discussion of where we have reached in the everyday life
of our algorithms.
Performativity
Performativity has played an important part in the recent science
and technology studies (STS) turn towards markets and market-
ing (see, for example, MacKenzie et al. 2007; MacKenzie 2008).
The argument draws on the work of Austin (1962) and his notion
of a performative utterance or speech act. Cochoy (1998) suggests
a performative utterance can be understood as a statement ‘that says
and does what it says simultaneously’ (p. 218). MacKenzie suggests
a distinction can be made between utterances that do something
and those that report on an already existing state of affairs (2008:
16). The most frequently quoted example, drawing on the work of
Austin (1962), is the utterance ‘I declare this meeting open’. Such
an utterance is said to describe and bring into being the state that it
describes—it is a speech act.
Developing this further, Cochoy (1998) suggests: ‘a performative sci-
ence is a science that simultaneously describes and constructs its subject
matter. In this respect, the ‘performation’ of the economy by marketing
directly refers to the double aspect of marketing action: conceptualis-
ing and enacting the economy at the same time’ (p. 218). From this, we
could understand that marketing brings the matter it describes into being.
For other STS scholars, the focus is attuned to markets rather than mar-
keting. For example, Callon suggests: ‘economics in the broadest sense of
the term performs, shapes and formats the economy’ (1998: 2). Araujo
thus suggests that performativity involves market and marketing type
statements making themselves true by bringing into being the subject of
the statement (2007: 218).
our algorithms now had to grasp real space, in real time. Here people,
things and events unfolded in a naturally occurring way, across different
floorings and lighting conditions, at different frame rates, with humans
who now acted in oddly normal ways. Children went this way and that
way, adults stood still for too long, luggage did not behave as it ought
and humans wore the wrong kinds of outfits that looked just like the
airport floor. Grasping and composing this everyday was too challeng-
ing. Under test conditions, in place of 6 items of potentially abandoned
luggage came 2654 items. The relevant and irrelevant intermingled in a
disastrous display of technical inefficacy. What had seemed like reasona-
ble demonstrations of the algorithms’ capabilities to ethical audiences,
now had to be questioned. Questions of the material integrity of these
demonstrations (and the extent to which a relation of undoubted corre-
spondence could be maintained between the system put on show and the
world to which it pointed) were only matched by questions of their visual
integrity (of who and what was in a position to see who and what). These
questions continued and even grew for a time as our algorithms moved
towards their final demonstrations to research funders. The king of Event
Detection—abandoned luggage—could only be demonstrated through
a careful whittling away of confounding variables. The flooring, light-
ing, luggage-type, positioning, behaviour of the luggage’s human owner,
frame rate of the camera and other human-shaped objects of the airport
each had to be closely controlled. In place of the algorithm going out
into the world grasping or composing real time, real space everyday life,
a more modest and controlled everyday had to be brought to the system.
And so we find in our final chapter that the algorithms are somewhat
quiet. Away from the drama of contemporary academic writing and popu-
lar media stories, the algorithms take up a meek position in an Exploitation
Report. In place of any fanfare regarding their technical efficacy, comes a
carefully composed account, depending on imaginative and dextrous cal-
culative work. Here, more and less valued geographical regions, customer
types and inferior competitors stand in as proxies for our algorithms. The
calculations, instead of talking about current technical efficacies, point
towards a future potential of market value that could be achieved with
investment. The performative accomplishment of the investment proposi-
tion negates the need for our algorithms’ everyday life to be put on dis-
play. At the end, they are not entirely absent from our story, but from the
Exploitation Report their grasp and composition of everyday life, their pros-
pects of becoming the everyday of the airport and train station, are deleted.
Goodbye algorithm.
References
Araujo, L. (2007). Markets, Market-Making and Marketing. Marketing Theory,
7(3), 211–226.
Austin, J. L. (1962). How to Do Things with Words. Oxford: Clarendon Press.
Barad, K. (2003). Posthumanist Performativity. Signs, 28(3), 801–831.
Bryan, D., Martin, R., Montgomerie, J., & Williams, K. (2012). An Important
Failure: Knowledge Limits and the Financial Crisis. Economy and Society,
41(3), 299–315.
Butler, J. (1997). Excitable Speech: A Politics of the Performative. London:
Routledge.
Butler, J. (2010). Performative Agency. Journal of Cultural Economy, 3(2),
147–161.
Callon, M. (1998). The Laws of the Market. Oxford: Blackwell.
Callon, M. (2006). What Does It Mean to Say That Economics Is Performative?
(CSI Working Paper Series, No. 5). Paris: CSI.
Callon, M. (2010). Performativity, Misfires and Politics. Journal of Cultural
Economy, 3(2), 163–169.
Cochoy, F. (1998). Another Discipline for the Market Economy: Marketing as a
Performative Knowledge and Know-How for Capitalism. In M. Callon (Ed.),
The Laws of the Market (pp. 194–221). Oxford: Blackwell.
Dorn, N. (2012). Knowing Markets: Would Less Be More? Economy and Society,
41(3), 316–334.
Fourcade, M. (2007). The Politics of Method and Its Agentic, Performative and
Ontological Others. Social Science History, 31(1), 107–114.
Lee, B., & LiPuma, E. (2002). Cultures of Circulation: The Imaginations of
Modernity. Public Culture, 14(1), 191–213.
MacKenzie, D. (2003). An Equation and Its Worlds: Bricolage, Exemplars,
Disunity and Performativity in Financial Economics. Social Studies of Science,
33(6), 831–868.
MacKenzie, D. (2008). An Engine, Not a Camera: How Financial Models Shape
Markets. London: MIT Press.
MacKenzie, D., & Pardo Guerra, J. P. (2013). Insurgent Capitalism: Island,
Bricolage and the Re-making of Finance. Available from: http://www.sps.
ed.ac.uk/__data/assets/pdf_file/0003/97500/Island34.pdf.
MacKenzie, D., Muniesa, F., & Siu, L. (Eds.). (2007). Do Economists Make
Markets? On the Performativity of Economics. Oxford: Princeton University
Press.
References
Adkins, L., & Lury, C. (2012). Measure and Value. London: Wiley-Blackwell.
Alder, C., Chappell, D., & Polk, K. (2011). Frauds and Fakes in the Australian
Aboriginal Art Market. Crime, Law and Social Change, 56, 189–207.
Amoore, L. (2011). Data Derivatives: On the Emergence of a Security Risk
Calculus for Our Times. Theory, Culture and Society, 28(6), 24–43.
Amoore, L., & Piotukh, V. (Eds.). (2015). Algorithmic Life: Calculative Devices
in the Age of Big Data. London: Routledge.
Anderson, B., & Sharrock, W. (2013). Ethical Algorithms. Available from:
http://www.sharrockandanderson.co.uk/wp-content/uploads/2013/03/
Ethical-Algorithms.pdf.
Araujo, L. (2007). Markets, Market-Making and Marketing. Marketing Theory,
7(3), 211–226.
Article 29 Working Party Accountability Principle. (2010). Available from:
http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2010/
wp173_en.pdf.
Austin, J. L. (1962). How to Do Things with Words. Oxford: Clarendon Press.
Barad, K. (2003). Posthumanist Performativity. Signs, 28(3), 801–831.
Beer, D. (2009). Power Through the Algorithm? Participatory Web Cultures and
the Technological Unconscious. New Media & Society, 11(6), 985–1002.
Bennett, C. (2005). What Happens When You Book an Airline Ticket
(Revisited): The Collection and Processing of Passenger Data Post 9/11. In
M. Salter & E. Zureik (Eds.), Global Surveillance and Policing (pp. 113–138).
Devon: Willan.
Bernal, P. (2011). A Right to Delete? European Journal of Law and Technology, 2(2).
Bowker, G., & Star, S. L. (2000). Sorting Things Out. Cambridge, MA: MIT
Press.
Concise Oxford Dictionary. (1999). 10th ed. Oxford: Oxford University Press.
Coopmans, C. (2010). ‘Face Value’: New Medical Imaging Software in
Commercial View. Social Studies of Science, 41(2), 155–176.
Corsín Jiménez, A., & Estalella, A. (2016). Ethnography: A Prototype. Ethnos,
82(5), 1–16.
Crawford, K. (2016). Can an Algorithm Be Agonistic? Ten Scenes from Life in
Calculated Publics. Science, Technology and Human Values, 41(1), 77–92.
Daston, L., & Galison, P. (1992). The Image of Objectivity. Representations, 40,
81–128.
Davies, S. (1996). Big Brother—Britain’s Web of Surveillance and the New
Technological Order. London: Pan Books.
De Certeau, M. (1984). The Practice of Everyday Life. Berkeley: University of
California Press.
Diakopoulos, N. (2013). Algorithmic Accountability Reporting: On the
Investigation of Black Boxes. Available from: http://towcenter.org/wp-con-
tent/uploads/2014/02/78524_Tow-Center-Report-WEB-1.pdf.
Dorn, N. (2012). Knowing Markets: Would Less Be More? Economy and Society,
41(3), 316–334.
Dourish, P. (2004). Where the Action Is: The Foundations of Embodied
Interactions. Cambridge, MA: MIT Press.
Drew, C. (2004). Transparency of Environmental Decision Making: A Case of Soil
Cleanup Inside the Hanford 100 Area. Journal of Risk Research, 7(1), 33–71.
Ericson, R. V., Doyle, A., & Barry, D. (2003). Insurance As Governance.
Toronto: University of Toronto Press.
Eriksen, S. (2002, October 19–23). Designing for Accountability. Paper
Presented at NordiCHI, Aarhus, Denmark.
Espeland, W., & Sauder, M. (2007). Rankings and Reactivity: How Public
Measures Recreate Social Worlds. American Journal of Sociology, 113(1), 1–40.
Felten, E. (2012). Accountable Algorithms. Available from: https://freedom-to-
tinker.com/2012/09/12/accountable-algorithms/.
Ferguson, J., & Gupta, A. (2002). Spatializing States: Toward an Ethnography
of Neoliberal Governmentality. American Ethnologist, 29(4), 981–1002.
Fourcade, M. (2007). The Politics of Method and Its Agentic, Performative and
Ontological Others. Social Science History, 31(1), 107–114.
Foucault, M. (1980). The Eye of Power. In C. Gordon (Ed.), Power/
Knowledge: Selected Interviews and Other Writings 1972–1977 by Michel
Foucault (pp. 146–165). Sussex: Harvester Press.
Free, C., Salteiro, S., & Shearer, T. (2009). The Construction of Auditability:
MBA Rankings and Assurance in Practice. Accounting, Organizations and
Society, 34, 119–140.
Fyfe, G., & Law, J. (Eds.). (1988). Picturing Power: Visual Depiction and Social
Relations. London: Routledge.
Slavin, K. (2011). How Algorithms Shape Our World. Available from: http://
www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world.html.
Smith, W. (2004, June 29–July 2). The Misrepresentation of Use in Technology
Demonstrations. 6th Asia Pacific Conference, APCHI 2004, Rotorua, New
Zealand (pp. 431–440).
Smith, W. (2009). Theatre of Use: A Frame Analysis of Information Technology
Demonstrations. Social Studies of Science, 39(3), 449–480.
Spring, T. (2011). How Google, Facebook and Amazon Run the Internet.
Available from: http://www.pcadvisor.co.uk/features/internet/3304956/
how-google-facebook-andamazonrun-the-internet/.
Stalder, F., & Mayer, C. (2009). The Second Index. Search Engines, Personalization
and Surveillance. In K. Becker & F. Stalder (Eds.), Deep Search: The Politics of
Search Beyond Google (pp. 98–115). Piscataway, NJ: Transaction Publishers.
Stone, R. (2010). Altering the Past: China’s Fake Fossil Problem. Science,
330(24), 1740–1741.
Strathern, M. (2000). The Tyranny of Transparency. British Educational
Research Journal, 26(3), 309–321.
Strathern, M. (2002). Abstraction and Decontextualisation: An Anthropological
Comment. In S. Woolgar (Ed.), Virtual Society? Technology, Cyberbole, Reality
(pp. 302–313). Oxford: Oxford University Press.
Strathern, M. (2004). Partial Connections. Oxford: Rowman & Littlefield.
Suchman, L. (1993). Technologies of Accountability: Of Lizards and Aeroplanes.
In G. Button (Ed.), Technology in Working Order: Studies of Work, Interaction,
and Technology (pp. 113–126). London: Routledge.
Suchman, L. (2011). Subject Objects. Feminist Theory, 12(2), 119–145.
Suchman, L., Trigg, R., & Blomberg, J. (2002). Working Artefacts:
Ethnomethods of the Prototype. British Journal of Sociology, 53(2), 165–179.
Taylor, E. (2010). I Spy with My Little Eye. Sociological Review, 58(3), 381–405.
Thrift, N. (2004). Movement-Space: The Changing Domain of Thinking
Resulting from New Kinds of Spatial Awareness. Environment and Planning
D: Society and Space, 34(4), 582–604.
Van der Ploeg, I. (2003). Biometrics and Privacy: A Note on the Politics of
Theorizing Technology. Information, Communication, Society, 6(1), 85–104.
Wherry, F. (2014). Analyzing the Culture of Markets. Theory and Society, 43(3–4),
421–436.
Woolgar, S. (1991). Configuring the User: The Case of Usability Trials.
In J. Law (Ed.), A Sociology of Monsters: Essays on Power, Technology and
Domination (pp. 58–97). London: Routledge.
Woolgar, S., & Neyland, D. (2013). Mundane Governance. Oxford: Oxford
University Press.
Ziewitz, M. (2016). Governing Algorithms: Myth, Mess, and Methods. Science,
Technology and Human Values, 41(1), 3–16.
Index
A
Accountability, 7–9, 12, 14, 26, 41, 45–53, 60–68, 74–77, 81, 87–91, 113, 133
Agency, 3, 7, 14, 47, 49, 77, 78
Algorithm, 1–13, 15–17, 22–26, 29, 31–33, 36, 39–41, 45, 46, 48–54, 57, 58, 61, 64–66, 74, 77–79, 81, 85, 88, 93, 97, 103, 105, 107, 110, 112, 113, 115, 118, 123, 132–135
  as opaque/inscrutable, 3, 7, 8, 13, 22, 132
  and politics, 11
  and power, 3, 6–9, 14, 32, 35, 36, 49, 77
Algorithmic, 3–9, 13–16, 22–26, 28, 30, 32–41, 46, 48–53, 55–57, 60–68, 74–76, 79–81, 83–86, 89–91, 94, 95, 100, 102–104, 107, 114, 115, 118, 124–126, 128–131, 133, 134
  children, 2, 6, 60, 133, 135
  probability, 59
Assessment, 16, 24–26, 47, 50, 61, 94

B
Blank figure, 80, 81, 83, 89, 90
Braudel, F., 10

C
Calculation, 1, 15, 16, 22, 78, 89, 124, 127–131, 135
Composition, 4, 11, 15, 16, 33, 118, 132, 133, 135

D
De Certeau, M., 10
Deletion, 14, 15, 25, 39, 68, 74–83, 86–91, 94, 100, 124, 128, 129, 131, 134
Demonstrations, 15, 16, 22, 62, 94–97, 99–107, 113–117, 124, 133, 135
Displays of virtuosity, 95, 101, 109, 114
Dramaturgical metaphors, 94

E
Effects, 3, 4, 6–9, 15, 22, 32, 33, 48, 50, 132, 134
Elegance, 14, 30, 33–35, 39, 74, 133
Ethics, 24, 25, 41, 50, 61, 62, 96, 100, 101, 103, 113, 114, 133
  ethics board, 26, 62, 133, 134
Everyday, 2–5, 7–16, 21–24, 32–36, 39–41, 46, 47, 49–55, 57–62, 65–68, 73, 74, 77, 78, 85, 86, 89, 90, 93, 94, 100, 105, 107, 109, 113–118, 123–125, 127, 131–135
Expectation, 4, 48, 61, 67, 93
Experimentation, 14, 22–25, 29, 31, 33, 36, 38–40, 48–50, 57, 64, 67, 74, 85, 89, 90, 100, 102, 109, 113, 124, 134

F
Frame-rates, 85

G
Garfinkel, H., 23, 51, 54, 55, 98, 119
General Data Protection Regulation, 75, 76
Goffman, E., 9, 10
Governmentality, 6, 47, 48
Grasping, 8, 68, 75, 89, 100, 102, 118, 129, 132, 133, 135

H
Human-shaped objects, 13, 14, 22, 30, 31, 34, 36–38, 40, 55, 58, 59, 63, 78, 85, 101, 115, 135

I
Integrity, 16, 87, 94, 95, 97–101, 105, 107–109, 113–119, 135
Investment, 4, 16, 124, 125, 127–131, 135

J
Judgement, 79, 86–88, 95

K
Knowing, 8, 85, 96
  bodies, 10
  spaces, 34, 119

L
Lefebvre, H., 10

M
Markets, 4, 16, 48, 125–127
  making, 16
  share, 16
Mol, A., 11
Morality, 10, 13, 16, 95, 114, 116

N
Neo-Foucauldian, 47

O
Opacity, 3, 5, 26, 45, 47

P
Performativity, 16, 124–127, 130
Pollner, M., 11, 12, 40
Privacy, 25, 29, 30, 47, 62, 65, 68, 75, 76, 114, 128, 130
Probability, 59
Proof, 15, 33, 88, 129, 134

Q
Qualculation, 78–81, 83, 84, 87, 89

R
Revelation and concealment, 95–97, 99, 101, 102, 109, 113, 114, 116
Right to be forgotten, 75, 76, 130

S
Science and Technology Studies (STS), 14–16, 22, 49, 50, 78, 94, 95, 113, 125, 127
Security, 5, 8, 13–15, 24, 29, 35, 38, 46, 53, 57, 60, 88, 103
Something and nothing, 80, 81, 83
Success and failure, 15
Surveillance, 5, 6, 24, 25, 28, 30, 32, 34, 36, 46, 51–62, 64, 74, 83, 86, 100, 102–104, 107, 109, 110, 114, 124, 128, 130

T
Temporality, 47
Testing, 68, 74, 84, 88–90, 94, 95, 100, 104, 110, 113
Transparency, 7, 8, 14, 41, 46, 52, 113

U
Undoing, 80

V
Value, 3, 8, 15, 16, 47, 65, 68, 74, 75, 78, 80, 88, 90, 91, 96, 99, 118, 124, 126–128, 130, 131, 134, 135
Vision, 104, 107, 114, 116, 117

W
Witnessing, 2, 94, 97, 114

X
X marks the spot, 112

Y
Years of work, 67, 118

Z
Zero, 80, 81