Key cross-references: Computational Turn, Artificial Intelligence, Augmented Intelligence, Biohacking, Post-Humanities, Inhuman Rationalism, Transhumanism, Xenofeminism
Speculative Posthumanism
David Roden
Posthumanism comes in different flavours. The most common are Critical
Posthumanism (CP) and Speculative Posthumanism (SP). Both are critical of
human-centered (anthropocentric) thinking. However, their critiques apply to different
areas: CP questions the anthropocentrism of modern intellectual life; SP opposes
human-centric thinking about the long-run implications of modern technology.
Critical posthumanists argue that Western Humanism is based on a dualist
conception of a rational, self-governing subject whose nature is self-transparent.
According to Katherine Hayles and Neil Badmington, the term posthuman is
appropriately applied to a late stage of modernity in which the legitimating role of the
humanist subject handed down from Descartes to his philosophical successors has
eroded (Badmington 2003; Hayles 1999).
Whereas critical posthumanists are interested in the posthuman as a cultural and
political condition, speculative posthumanists are interested in the possibility of certain
technologically created entities. Where CP uses posthuman as an adjective, SP
nominalizes the term.
Speculative posthumanists claim that there could be posthumans: that is, there could
be powerful nonhuman agents that arise through some human-instigated
technological process. Another way of putting this is to say that posthumans would
be wide human descendants of current humans who have become nonhuman in
virtue of a process of technical alteration (Roden 2012; 2014).
The term "wide descent" is used to describe this historical succession because
exclusive consideration of biological descendants of humanity as candidates for
posthumanity would be excessively restrictive. Posthuman making could involve
discrete interventions into the reproductive process such as genetic engineering, or
exotic-seeming technologies such as methods of copying and "uploading" human
minds onto powerful computer systems.
SP is frequently conflated with Transhumanism, but it should not be.
Transhumanists, like classical and modern humanists, wish to cultivate supposedly
unique human capacities such as autonomy, reason and creativity. However, they
hope to add the fruits of advanced technologies to the limited toolkit of traditional
humanism, believing that prospective developments in the so-called NBIC suite of
technologies[1] will allow humans unprecedented control over their capacities and
morphology (Bostrom 2005a, 2005b; Sorgner 2009).
Transhumanism is thus an ethical claim to the effect that technological enhancement
of capacities like intelligence or empathy is a good thing.
SP, by contrast, is a metaphysical claim about the kinds of things that could exist in
the world. It states that there could be posthumans. It does not imply that
posthumans would be better than humans or that their lives would be comparable
from a single moral perspective. One can hold that a posthuman divergence from
humanity is a significant possibility but not a desirable one (Roden 2012a: Chapter 5).

[1] NBIC stands for Nanotechnology, Biotechnology, Information Technology, and
Cognitive Science.
This is not to say, of course, that SP lacks ethical and political implications, but these
become apparent only when we have an adequate account of what a posthuman
divergence (or disconnection) might involve. I will return to this issue in the last part
of the entry.
There are no posthumans yet (as conceived by SP). Thus we are currently ignorant of
their mechanisms of emergence. It is conceivable that posthumans might arise in
many different ways; thus a philosophical posthumanism requires a
mechanism-independent account of the concept of the posthuman. For example, we
should not treat mind uploading as a sine qua non of posthumanity, because we do
not know that mind uploading is possible or that it has posthuman-making potential.
A plausible condition for any posthuman-making event is that the resulting nonhuman
entities are capable of acquiring ends and roles that are not set by humans and
that this autonomy is due to some alteration in the technological powers of things.
Given our current ignorance of posthumans, this claim captures the core of the
speculative concept of the posthuman. It is referred to as the Disconnection
Thesis (DT). Roughly, DT states that posthumans are feral technological entities. Less
roughly, an agent is posthuman if and only if it can act independently of the Wide
Human (WH): the interconnected system of institutions, cultures, individuals, and
technological systems whose existence depends on biological (narrow) humans
(Roden 2012; Roden 2014: 109-113).
One of the advantages of DT is that it allows us to understand human-posthuman
differences without being committed to a human essence that posthumans will lack.
Rather, we understand WH as an assemblage of biological and non-biological
individuals, whose history stretches from the world of Pleistocene hunter-gatherers to
the modern, interconnected world.
Becoming posthuman, then, is a matter of acquiring a technologically enabled
capacity for independent agency.
The fact that human-posthuman disconnection would not result from a difference in
essential properties does not entail that it would not be significant. Just how
significant depends on the nature of posthumans. But DT says nothing about
posthuman natures beyond ascribing a degree of independence to them. It is thus
multiply satisfiable by beings with different technological origins and very different
natures or powers (e.g. artificial intelligences, mind-uploads, cyborgs, synthetic life
forms, etc.).
Nonetheless, one picture of posthuman technogenesis has had pride of place in
philosophical and fictional writing on the posthuman. This is the prospect of
human-created artificial intelligences (robots, intelligent computers, or synthetic life
forms) acquiring human-level or greater-than-human intelligence (superintelligence),
thereby transcending human control or even understanding.
In futurist thought, this is called the technological singularity. The term comes from a
1993 essay by the computer scientist Vernor Vinge, "The Coming Technological
Singularity: How to Survive in the Post-Human Era". According to Vinge, a singularity
would involve accelerating recursive improvements in artificial intelligence (AI)
technology. This would come about if the relevant AI or Intelligence Amplification
(IA) technologies were always extendible so that the application of greater
intelligence could yield even more intelligent systems. Our only current means of
producing human-equivalent intelligence is non-extendible: if we have better sex, it
does not follow that our babies will be geniuses (Chalmers 2010: 18).

Given an extendible technology, human or human-equivalent intelligences could
extend that AI/IA technology to create superhuman intelligences (AI+) that would be
even better self-improvers than the earlier AIs. They could consequently make
super-superhumanly intelligent entities (AI++) and so on (Chalmers 2010). If the
technology in question were some kind of machine intelligence, this might result in an
accelerating exponential growth in machine mentation that would leave biological
intelligences such as ourselves far behind.
The minds shot out by this intelligence explosion could be so vast, claims Vinge,
that we have no models for their transformative potential. The best we can do to
grasp the significance of this transcendental event, he claims, is to draw analogies
with an earlier revolution in intelligence: the emergence of posthuman minds would
be as much a step-change in the development of life on earth as the emergence of
humans from non-humans (Vinge 1993). Humans might be no more able to grasp a
post-singularity world than mice are able to grasp concepts in number theory. They
would be lost in a world of essentially incomprehensible and unknowable gods.
But suppose a singularity is not technically possible. Maybe there are hard limits on
intelligence (in this universe at least). Maybe the scenario does not adequately
nuance the notion of intelligence. Still, Vinge's scenario raises a troubling issue
concerning our capacity to evaluate the long-run consequences of our technical
activity in areas such as the NBIC technologies. This is because it presupposes a
weaker, thus more plausible, speculative claim: our technical activity could generate
forms of life significantly alien or other to ours.
If posthuman life could be significantly alien or weird then we might not be in a
position to understand it easily, making the evaluation of prospective disconnections
problematic. Do we insist on adopting a humans-first perspective on our technical
activity even though we, or our wide descendants, may cease to be human? Critical
posthumanism implies that the privileging of human life is illegitimate. But what if, as
Vinge fears, we are simply unable to understand the things we (or our
descendants) might become?
Much depends, here, on the scope for posthuman weirdness. Do we have an a priori
(hence future-proof) grip on how strange our posthuman successors could be?
There are two opposed perspectives on this: an anthropologically bounded
posthumanism (ABP) and an anthropologically unbounded posthumanism (AUP).
ABP states that there are transcendental conditions for agency that humans would
necessarily share with posthumans in virtue of being agents at all.
For example, maybe all serious agency requires mastery of language or the ability to
participate in social practices. Perhaps all agents must be capable of pleasure and
pain, must apply Kant's categories, or must be Heideggerian Dasein. If so, there are limits
to posthuman weirdness and the extent to which posthumans can exceed our
understanding.
Do we have evidence for such constraints? If we don't, we should adopt an
unbounded posthumanism according to which there are no future-proof grounds for
viewing posthumans as agents of a particular kind. AUP has the discomforting
consequence, though, that we could only evaluate the ethical perspectives of
posthumans by encountering or becoming them.
If AUP is right, humanists and transhumanists have seriously underestimated the
inhumanism of our technological predicament (Roden 2014: Chapter 7). We are not yet
in a position to evaluate the ethics of posthumanity, and we can only do so by
precipitating an event whose consequences are incalculable this side of a
disconnection. AUP and DT jointly imply that there can be no general ethics of the
posthuman, only multiple lines of posthuman becoming and experimentation with
posthuman forms of life and being. At this point, arguably, the perspectives of CP
and SP converge.