
New Ideas in Psychology xxx (xxxx) xxx–xxx
journal homepage: www.elsevier.com/locate/newideapsych

Representing is something that we do, not a structure that we "use": Reply to Gładziejewski

H. Oğuz Erdin a,*, Mark H. Bickhard b
a Boğaziçi University, Istanbul, Turkey
b Lehigh University, Pennsylvania, United States

https://doi.org/10.1016/j.newideapsych.2018.02.001
Received 4 June 2017; Received in revised form 20 December 2017; Accepted 13 February 2018

1. Introduction

Interactivism has developed into a broad-range framework — from fundamental metaphysical issues to functional brain properties, from neural-glial "societies" to social ontologies, language, and the nature of persons (Bickhard, 2009, 2015a, 2015b; Campbell, 2015). But at its core it is a model of representation and cognition. The interactivist model of representation makes foundational criticisms of assumptions concerning representation that have been standard since the pre-Socratics and presents a positive model that differs from others on offer in several ways. The interactivist model of representation (or representing), consequently, does not fit well within standard categories (though it is closest to the general pragmatist framework), and, consequently, is often miscategorized and misunderstood.

A recent paper by Gładziejewski (2016) gives us an opportunity to address some of these issues. Gładziejewski presupposes a model of the nature of representation, in a version derived from Ramsey (2007), that is precisely of the sort that the interactivist model argues does not work and is not, in principle, possible. Gładziejewski argues that the interactivist model does not satisfy the desiderata of this kind of model of representation, not taking into account that the interactivist model argues directly against such models and their desiderata. It is, in fact, a "good thing" that the interactivist model does not satisfy such desiderata, because, if it did, it would be in direct contradiction with interactivism's own arguments that such models are incoherent. The central (though not the only) point is that Gładziejewski begs the question concerning the interactivist model by assuming precisely what the interactivist model argues is impossible, then charging interactivism with not satisfying these impossible desiderata.

It is, of course, possible that the interactivist in-principle arguments against standard models of representation are themselves somehow invalid or unsound, but Gładziejewski offers no arguments against them. By simply accepting the Ramsey framework for understanding and modeling representation, he ignores the arguments against such frameworks, and thereby renders his own arguments circular.

In what follows, we will first give an introductory overview of central interactivist ideas tailored to misconceptions found in Gładziejewski and then track some of the details of his paper, showing how these issues are manifest in it, and attempting to understand how what we take to be errors are motivated by the presupposed (impossible!) framework and desiderata.

2. Central ideas of interactivism

One core principle of the interactivist model is that the metaphysical nature of representation is "being truth-value bearing"; that is, bearing the normative property of being true or false. The central idea is that forms of functional anticipation of potentialities of agentive interaction are the loci of such truth values: anticipations of potential interactions can be true or false. It makes intuitive sense that anticipations can be true or false, but much remains to be explicated, modeled, elaborated and defended.

The basic theoretical and philosophical tasks that this core model gives rise to can be parsed into four questions: 1) How does this model account for representation? Is it adequate to the multiform kinds of representation? 2) What are the consequences of the model for other mental phenomena, such as perception, cognition, language, and so on? 3) How are these crucial functions realized in the brain? And 4) What are the theoretical and philosophical challenges that might be encountered by this model, and how can they be met?

These questions have been addressed and elaborated several times. The model can, for example, account for representation of objects or numbers (Bickhard, 2009). There are, in fact, a number of novel consequences of the model for perceiving, cognition, language, and other phenomena (Bickhard, 2009). The model also yields an account of brain processes that makes sense of phenomena like volume transmitters, non-zero baseline oscillatory frequencies, silent neurons, gap junctions, astrocytes, and so on (Bickhard, 2015a, 2015b). Finally, there is an account of normative emergence, based on an account of metaphysical emergence more generally, which, in turn, is based on an underlying process metaphysics (Bickhard, 2009, 2015c).

There are also multiple challenges that are not of the form "how can the model address X," but, instead, pose critical claims that the model fails in various ways. These could, of course, turn out to be correct, but we consistently find these challenges to be themselves ill-formed. They often make false assumptions, or, in some cases, assumptions that have already been addressed and criticized within the overall interactivist model.



Here we address and correct some such assumptions. The following 8 points should serve as a minimal checklist for any theory or paradigm that comes in close proximity to interactivism.

2.1. Structures can dissolve into system processes

This is a generally unfamiliar idea for someone who is not accustomed to interactivism, but it is "just" a consequence of basic process models, whether machine theoretic or in terms of dynamic spaces. We are so used to thinking in terms of fixed structures and how we use these fixed structures that we have a tendency to structuralize processes in order to render them compatible with our intuitions.

The basic intuition is that data or structure cannot do anything other than influence process, and any such influences can always, in principle, be "built into" the organization of the processes themselves, without any data or structure. Thus structures can always be folded into system processes and disappear as distinct parts of the whole structure, yielding a purely functional theory.

Here we have one of the most important insights of interactivism: representing is more fundamental than representations. Consequently, a strict differentiation between function and representation begs the question. This is a common mistake that pops up in many places, including Gładziejewski's account.

The formalized version of "dissolving structures" comes from abstract machine theory and it is called the state-splitting theorem. Consider a state transition diagram with one register (i.e. a temporary memory) that can contain a 1 or 0. At some point, the system will store a 1 or 0 in the register, and (perhaps) at some later point it will read that register and transition in one way if there is a 1 and a different way (perhaps) if there is a 0. Now consider the diagram in-between the store and the read. Split the diagram into two copies, one of which would be entered instead of setting a 1 in the register and the other of which would be entered instead of setting a 0. Similarly, the "1" copy of the diagram will connect with whatever would have been switched to if the register had been set with a 1 and the "0" diagram will connect with whatever would have been switched to if the register had been set with a 0. At this point we have a pure transition diagram with no register that is functionally completely equivalent to the diagram with a register. The state-splitting theorem proves that such splitting, though it may become very complex, is always possible, thus, that registers (data, internal or functional structures, etc.) are never necessary.¹

A version of this is already clear in computationalism: any computer program can be realized in an unbounded range of possible computer architectures; the above point about process is "merely" a generalization of that. In particular, any data structure can be architecturally eliminated by, for example, moving to a pure machine table organization that realizes the (or a) relevant state transition diagram. Items, memory, data, structure, etc. can do nothing relevant except influence process flow, and those influences can, in principle, always be built directly into the process organization. The "intuition" to the opposite effect is "just" an error in intuition carried over from thinking about computers (without understanding that they too have unbounded possible architectures that would give you functionally the same computer).

Abstract machine theory, as distinct from computational models, is not here just to provide an example of state splitting. The interactivist account is fundamentally naturalistic and mechanistic in ways that standard computational accounts are not; e.g., the interactivist model does not presuppose representation in its "computations on data" (Bickhard & Richie, 1983). In general, unlike the standard computational approaches, interactivism situates itself in process and control theory, a level of thinking closer to actual machine dynamics than is the case for higher programming principles such as object oriented approaches.² We think this is an important advantage both for moving to lower levels (which makes it easier to accommodate lower level mechanisms), hence for naturalizing, and also for unearthing certain key processes underlying standard computational models.

The above point can be made from a functional perspective by realizing that data or structure in a system can only influence the functional³ processing of the system, and that such functional influence does not require such data or structure. The overall functional organization can always be rendered as a state transition diagram, or perhaps a machine table (Turing, 1950), and those organizations, in turn, can always be realized in multiple differing architectures. Any given system with data or structures, thus, does not require those data or structures in any semantic sense, nor in any functional sense: the functional consequences of data or structure can always be realized in a different architecture that does not have those data or structures.

This is not to preclude the fact that particular architectures may be useful for purposes of realization: we do not want to use 'simple' universal Turing machines rather than computers. The point, instead, is that any argument that assumes that data or structures are necessary, either semantically or functionally, is simply wrong.
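To make the state-splitting construction above concrete, here is a minimal sketch in Python. The tiny machine, its two-symbol alphabet, and all of the names are our own illustration rather than anything from the abstract machine theory literature; the point is only that the register's sole functional influence can be folded into duplicated states.

# A register-based controller: it stores a bit, does some intermediate work,
# and later branches on the stored bit.
def run_with_register(inputs):
    state, register, outputs = "store", None, []
    for symbol in inputs:
        if state == "store":            # write 1 or 0 into the register
            register = 1 if symbol == "a" else 0
            state = "work"
        elif state == "work":           # intermediate processing; register untouched
            state = "branch"
        elif state == "branch":         # read the register and branch on it
            outputs.append("A-path" if register == 1 else "B-path")
            state = "store"
    return outputs

# The state-split equivalent: no register at all.  The intermediate states are
# duplicated into a "1" copy and a "0" copy, so the register's only functional
# influence -- which way the later branch goes -- is carried by which copy of
# the diagram the system is currently in.
SPLIT_TABLE = {
    ("store", "a"): ("work_1", None),     ("store", "b"): ("work_0", None),
    ("work_1", "a"): ("branch_1", None),  ("work_1", "b"): ("branch_1", None),
    ("work_0", "a"): ("branch_0", None),  ("work_0", "b"): ("branch_0", None),
    ("branch_1", "a"): ("store", "A-path"), ("branch_1", "b"): ("store", "A-path"),
    ("branch_0", "a"): ("store", "B-path"), ("branch_0", "b"): ("store", "B-path"),
}

def run_split(inputs):
    state, outputs = "store", []
    for symbol in inputs:
        state, out = SPLIT_TABLE[(state, symbol)]
        if out is not None:
            outputs.append(out)
    return outputs

if __name__ == "__main__":
    stream = list("abaabbbaa")
    assert run_with_register(stream) == run_split(stream)
    print(run_split(stream))   # identical behaviour, no register anywhere

The duplication here is trivial; in general it can blow up combinatorially, but the theorem's point stands: the register is never necessary, only its influence on process flow is.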
2.2. No homomorphism to the external world needed for representations: the rubber band example

Powers (1973) gave a counterexample to our intuitions that there must be some sort of structural similarity with an entity or situation in order to interact properly with that entity or situation: tie two rubber bands together so that they are attached by a knot between them. The experimenter puts a finger through one of the rubber bands, and the subject puts a finger through the other one. The subject's goal is to keep the knot over a spot on the table, adjusting to motions by the experimenter. Clearly, if the experimenter moves too fast, the subject cannot keep up, but, if the motions are slow enough, there is no difficulty maintaining the goal of keeping the knot over the designated 'spot' on the table.

The point comes in noticing that a description for this task will involve concentric circles and spirals around the 'spot', and the elasticity of rubber bands, but that the cybernetic system that is competent to the task will not (necessarily) have any structure whatsoever that is homo- or isomorphic to that task structure. Moreover, this point holds in general for task structures and systems that are competent to those tasks.

Because of such considerations, interactivism does not begin with assumptions of task-competent processes being somehow iso- or homomorphic to task structures, but, instead, begins with general interaction-based differentiations of the environment. Therefore, interactivist representations, being grounded on (though not simply equivalent to) such differentiation processes, will not necessarily require any iso- or homomorphism with the actual structures in the external world. And that point introduces the next topic.

¹ Bickhard (1980) has an overview of abstract machine theory in Appendix A.
² Machine theory, however, does have inadequacies as a modeling language; for example, it can handle temporal sequence, but not timing (Bickhard & Richie, 1983). These can be transcended by moving to dynamic systems theory.
³ Note that "'functional' is a system-relational concept – an element cannot have a function except relative to something other than itself, relative to some system" (Bickhard & Terveen, 1995, p. 57).
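The rubber band task of section 2.2 can likewise be sketched in a few lines. The toy simulation below assumes, purely for illustration, two identical linear bands (so the knot sits at the midpoint of the two fingers) and a simple proportional correction; all parameter values are our own. The controller contains no model of elasticity, spirals, or the experimenter's trajectory, yet it keeps the knot near the spot.

# Minimal sketch of the Powers rubber-band task (one spatial dimension).
# Illustrative "physics": two identical linear bands, so the knot sits at the
# midpoint between the experimenter's finger and the subject's finger.
def knot_position(experimenter, subject):
    return (experimenter + subject) / 2.0

def run_trial(steps=200, gain=0.8, spot=0.0):
    experimenter, subject = 0.0, 0.0
    for t in range(steps):
        experimenter += 0.05 * (1 if (t // 40) % 2 == 0 else -1)   # slow drift
        error = knot_position(experimenter, subject) - spot
        subject -= gain * error   # pure negative feedback: no map, no model
    return abs(knot_position(experimenter, subject) - spot)

if __name__ == "__main__":
    # The residual error stays tiny compared with the experimenter's excursions.
    print(f"final distance from spot: {run_trial():.4f}")

Nothing in the controller is homo- or isomorphic to the circles, spirals, or elastic properties that a description of the task would mention; competence here is carried entirely by the organization of the ongoing interaction.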


2.3. Classifications of the environment: implicit definitions, not explicit definitions

One of the key notions in interactivism is differentiation. Interactivism distinguishes between differentiation and predication, and also between contact and content. In the following two sections we will explain these distinctions.

Consider the following scenario: upon detecting a fly, a frog's internal state will change and stabilize in a determinate end state. Interactivism asks the following question. What other environments produce the same end state? It is well known that frogs flick their tongues at any dark, small, and moving objects. Thus, any such environment, as long as the same end state has been reached, can be classified as a "tongue flicking" kind of environment even though such an environment might contain numerous non-fly objects, such as pebbles or dots on a screen. There is a lot of evolutionary tinkering needed starting from such a coarse differentiation of the environment to a finer-grained differentiation so that it can differentiate, say, between flies and pebbles, or between different types of flies.

The above approach to differentiating environments has the consequence that an end state that has been reached will implicitly define the class of environments that could 'cause' it. At this stage we cannot have explicit definitions because "there is no semantic information, no representational content, available" (Bickhard & Terveen, 1995, p. 60) to the organism itself although there is a factual correspondence with the class of environments. This is another instance where our intuitions can go awry: in interactivism we have states that constitute differentiating functions without any content! Differentiation is not in itself representation.

Note that, in a homomorphism-requiring paradigm, generalization is the problem which needs to be explained; e.g., how to construct a "chair" representation from encounters with particular chairs. In interactivism, on the other hand, the question is how to move from general differentiations to differentiations fine enough to support functionally useful indications of further action and interaction possibilities; so the aim is towards particularization and not generalization.⁴ This is indeed possible but sometimes requires complex constructions (Bickhard, 1998). Also note that, as a consequence of such coarse beginnings, all the frog can have is a differentiation of the form "a 'tongue flicking' kind of environment" — in other words, a classification based solely on the frog's internal states and their future possibilities. At this stage only an external observer can represent the correspondence between such internal processes and the external world. Although differentiation and predication might seem to imply each other, they are crucially not equivalent. This brings us to the next unintuitive property of interactivism.
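A toy version of the frog differentiator, with made-up features and thresholds of our own, may help fix the idea that an end state implicitly defines a class of environments without carrying any content about them.

# Toy differentiator for the frog example of section 2.3.  The end state
# "F_tongue" is reached by *any* environment that is small, dark and moving:
# flies, thrown pebbles, dots on a screen.  The end state thereby implicitly
# defines that whole class of environments; the system contains no description
# of flies and no content about what produced the state.
# (Feature names and thresholds are illustrative only.)
def differentiate(stimulus):
    small = stimulus["size"] < 1.0
    dark = stimulus["brightness"] < 0.3
    moving = stimulus["speed"] > 0.2
    return "F_tongue" if (small and dark and moving) else "F_rest"

environments = [
    {"label": "fly",           "size": 0.4, "brightness": 0.1, "speed": 1.5},
    {"label": "thrown pebble", "size": 0.6, "brightness": 0.2, "speed": 0.8},
    {"label": "dot on screen", "size": 0.2, "brightness": 0.1, "speed": 0.5},
    {"label": "leaf (still)",  "size": 0.5, "brightness": 0.2, "speed": 0.0},
]

for env in environments:
    # Only the external observer pairs the end state with the labels; the frog
    # has nothing but the end state and whatever interactions it indicates.
    print(env["label"], "->", differentiate(env))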
2.4. Differentiation is not the same as predication: contact vs. content

Interactivism makes a distinction between differentiation and predication, and argues that differentiation does not entail predication: in standard frameworks, if you predicate something (say, being red) of things then you differentiate all those things from other things (that is, non-red things), and conversely. Yet in interactivism we have to separate these. What distinguishes predication from differentiation is temporality.⁵ In interactivism differentiation happens when a certain process reaches its end state as described in the previous item — it differentiates the class of conditions that would support arriving at that end state. This is called contact and, as we have said above, there is no content yet. Content comes with predication: "as a predication, a procedure is indicated as a potentiality" (Bickhard, 1998, p. 192, emphasis added).⁶

Predications are constituted as indications of possible interactions. The predication is that "this" environment is appropriate for, will support, the indicated interaction. Such anticipations of potentiality have truth value: they can be true or false. The bearing of truth value is the fundamental characteristic of representation.

Such indications also have resources for accounting for more complex forms of representation. One of these is that such anticipatory indications can branch into multiple possibilities. The frog, for example, might have several tongue flicking opportunities in several directions — perhaps several flies and a worm. Another resource is that such indications can conditionally iterate: perhaps if the frog were to rotate its body a bit, other tongue flicking opportunities would become accessible: the new opportunities are conditional on the rotation, and the frog might have indications of these linked potentialities.

Such branching and iterating indications can link together to generate vast webs of anticipatory organizations, and do so in complex agents, especially humans. Special kinds of subwebs within such overall webs can constitute representations of objects, and representations of properties realized within those webs can constitute representations of more abstract kinds, such as of the number three (Bickhard, 2009).

Within even the simplest branching organization of anticipations, the organism must select what it will in fact attempt to do. The indications are of what is (indicated to be) available, while the selections within such possibilities will depend on other conditions (e.g., hunger) and goals (e.g., to return to a burrow). Such selection processes constitute the domain of motivation (Bickhard, 2000, 2003, 2009).

Note that this means that the indicative relationships cannot be simply causal, nor simply a switching relationship. They have to be indications of possibilities among which motivational selections can take place.

2.5. Content is NOT represented: what are functional presuppositions?

Functional presuppositions are almost trivial when one thinks about an ordinary machine, say, a pen.⁷ What conditions should there be in order for it to function properly? Some of them are the following: the temperature of the environment should be between certain limits (e.g., not higher than 1000°), the surface the pen will be used on should have resistance between certain limits (i.e., if it is too slippery the ink will not hold, if it is too high the pen will get stuck), the ink and the chemical composition of its container should not react, there should be enough downward force (e.g. gravity!), and so on. It is clear that although this list begins with trivial considerations, it can easily expand to include so many elements that one can reach all the way up to the whole universe. But a pen nevertheless manages to work quite effortlessly, so we shouldn't worry much about this theoretical issue at this stage.

Now, as compared to the standard accounts of representations, there is again an unintuitive turn in interactivism when it comes to functional presuppositions. Assume that a frog sees a fly or a worm. Usually the image thus produced (or some homomorphic structure of it) is taken to be the representation of the fly or worm, so that representation then causes a tongue flicking response, or supports an (unconscious) inference to a conclusion that such a tongue flicking is possible. In interactivism, on the other hand, there being a fly or a worm in an appropriate location is the "presupposed conditions for the proper functioning of the tongue flicking procedure" (Bickhard, 1998, p. 192, emphasis as in the original).⁸ Thus what many theories of representation regard as representational content is "just" a functional presupposition in interactivism.⁹

⁴ Generalization certainly also occurs, but this is much easier to account for if representation is a matter of differentiation in the first place, rather than, say, impressions of particulars from which universals need to be inferred or constructed.
⁵ This is another characteristic of interactivism in that, unlike Turing machine theory, temporality has to be intrinsic (Bickhard & Richie, 1983).
⁶ A similar point is that 'concepts' are somehow representations, but they apply or not: they do not have truth value per se — "chair" is neither true nor false. An attribution that a concept can be applied, however, can be true or false; can be a proposition, in standard terminology. Similarly, an indication that an interaction is possible, based on some differentiation of the environment, can be true or false, but, unlike concepts, the differentiations upon which such indications can be based are not themselves explicitly representational.
⁷ With human derived, artifactual, normativity.
⁸ Note that anticipating is a functional process, realized in certain kinds of agentive (e.g., biological) processes, whereas presuppositions are (relational) properties of those (anticipative) functional processes.
⁹ The example of the pen is of an artifactual, thus derivative, function, whereas the anticipative function in the interactive model is intrinsically emergent in the nature of the agent. Both involve presuppositions, nevertheless.


Dynamic (functional) presuppositions are not "represented" at all in this model — that would be circular — instead, such dynamic presuppositions constitute the content of indications that represent that the environment satisfies those presupposed conditions. The representational relationship is with the environment, not with the contents of the representation; to confuse the two is a serious misrendering of the interactivist model and, as we will see, Gładziejewski makes the confusion.

Since the content/vehicle distinction and causal role are the usual go-to notions when discussing various theories of representation, we will make a few remarks regarding their situation in interactivism. In its standard version, the vehicle is taken to be a physical structure such as a computer chip's semiconductor gates opening and closing, and the content is taken to be what that hardware (or event) is supposed to represent, such as a car or its movement. As we have seen, in interactivism structures dissolve into system processes so that vehicle as structure seems to be in jeopardy, and this is indeed the case. If "vehicle" is taken to mean entity, then no — that does not suffice.¹⁰ If "vehicle" is taken to mean some sort of functional indication of interactive potentiality, then that's fine. Note that any such function will be realized in some way, e.g., in the brain.

As for the causal role, interactive presuppositions are modal (future oriented possibilities) and counterfactual (if A holds and B happens, then …), so that they per se do not have a causal role. They are the class of sufficient conditions that would support the interaction if one or more of those sufficient conditions were the case. Presuppositions in this sense can explain why an interaction succeeds or fails, but they do not cause anything (certainly not in a sense of being efficient causes). But the biological processes that realize the functional processes — that is, anticipations — certainly do have a "causal" role. Similarly, content also does not have a causal role since presuppositions constitute content. Content is also intrinsically modal, counterfactual, and normative. Both presuppositions and content are implicit, not "available" to the system, but success or failure based on those "contents" are available to the system because if the interaction fails, then the presuppositions (whatever they are) are false. This internal, functional, detection of interaction failure, thus of presupposition failure, thus of representational falsity, is one of the advantages of being future oriented, not past oriented (Bickhard, 2009).

The normative approach to representations is a crucial point of interactivism and thus a more detailed exposition is presented in the next sub-sections.

2.6. Goals without representing environmental goal conditions: normativity and emerging representing

To sum up: representational predications are indications of potential interaction strategies or procedures. These are predicated of classes of environments which are themselves differentiated via end states of certain other processes — they implicitly predicate that this environment is one of those that would support the indicated interaction, including its indicated potential goals, if any. The indicated interaction procedures have presuppositions about the environment and such presuppositions constitute their contents — which are implicit, modal, counterfactual, and, therefore, have no causal role.

With such presuppositions we can see the beginnings of how error (and, crucially, system detectable error) is possible within interactivism. At this point in the discussion, this is error with respect to reaching, or failing to reach, indicated goals.¹¹ The core point is that indications can be true or false. In more detail, the "basic idea is that other subsystems in the overall system can use the differentiations in those final states to differentiate their own internal goal-directed processing" (Bickhard & Terveen, 1995, p. 61). To give a simple schematized example, consider a differentiator D8 ending up either in final state F22 or else in F44 after interacting with tokens from environment classes E22 or E44, respectively. These end states can be taken by the system organization to indicate the possible interaction strategies St75 or St451 for a process P117 with goal G38. We schematize all this in Fig. 1.

Fig. 1. Differentiation and indication.

If selection of St75 or St451 fails in achieving G38, then the system can detect that an error has happened somewhere along the line. Such normative functioning provides the ground for representing to emerge in the possible interaction strategies (St75 or St451).¹²
We have to emphasize that in the figure above there is no label pinpointing the exact location of anticipatory processes. The reason is that "there is a distributivity in interactive representations that is epistemically intrinsic – that could not be altered without destroying the fundamental character of the interactive representations" (Bickhard & Terveen, 1995, p. 305). This is another aspect of interactivism where one's intuitions can be misleading. Thus, a full quote is perhaps in order (Bickhard & Terveen, 1995, p. 305):

This is a functional distributivity, rather than just an implementational distributivity, involved in the webs of indications of further potential interactions … given an environmental differentiation. Representation is emergent precisely in those functional indications of further potentialities, and those indications are intrinsically distributed within and across the organization of such indications. That is, interactive representation is intrinsically relational, and, therefore, necessarily distributed over organizations of such relations.

¹⁰ Even if it is "just" a bit pattern "entity" in a computer! Even a bit pattern is, for example, a pointer only insofar as the (functional) program treats it as such (rather than treating it as, e.g., an integer).
¹¹ Recall that the fundamental criterion for being representational is possessing truth value — possessing the possibility of being true or false, of the indications being correct or in error.
¹² In this discussion, the normativity is relative to achieving or not achieving a goal. We will see below that there is a more basic form of representational normativity involved in 'just' anticipation per se.


This is in contrast to neural network and Parallel Distributed Processing approaches, where the distributivity can be taken as an implementational distributivity. It is important to keep in mind this divergence between neural network accounts and interactivism when we begin investigating Gładziejewski's account of Beer's virtual organisms in section 3.2.2.

As a final remark, note that if the concept of goal requires representations then the whole account will be circular. The goal subsystems in interactivism can be as simple as a switch (e.g., repeat procedure A with variations if the error is not below a threshold or else switch to procedure B). In other words, goals in interactivism are functional goals of the system and its subsystems, and "the logical function that goals serve … is to provide criteria for error" (Bickhard & Terveen, 1995, p. 63).

On the other hand, goals — as 'merely' "final common trajectories" — can be emergent, much as in Brooks' subsumption robotics, where local interaction rules generate global goals for the robot, such as picking up soda cans in an office (Brooks, 1991). We think that the emergence of goals in Brooks' robots renders the emergence of such final common trajectory outcomes from system processes especially clear. But because Brooks' robots do not have anticipatory processes, they lack interactivist representations properly understood.

It might seem at this stage that interactive representation requires goals, but this is not the case, and this brings us to the next item.

2.7. Representations without goals: anticipatory functions are all that is needed

In standard approaches to cognition, representation is considered to be an all or nothing phenomenon. In interactivism there is a proliferation of levels. Although we will not present the full hierarchy here (see Bickhard, 1998), it is easy to see that if we get rid of the goal subsystem in the figure above (G38), we can still have indicative anticipations with truth values and hence obtain a more primitive form of representation. Actually, the reason we first introduced error within a larger goal subsystem was for ease of understanding.

The key insight is that truth value does not require goals, only anticipations. The anticipatory indications may be selected toward the functional service of reaching a goal, and "reaching a goal" is another way in which the overall process can succeed or fail, but such "goal failure" is neither necessary nor sufficient for error per se. It is not necessary because the anticipation could fail, whether or not the goal is reached; e.g., a frog might anticipate jumping into the pond with the goal of escaping a predator without realizing that one of its legs is stuck in a crack. The rotational motion thus resulting from the attempt might nevertheless rescue it from the predator's attack. Conversely, the anticipation may succeed and nevertheless fail to achieve the goal (e.g., a frog in a laboratory might flick its tongue successfully at a dot on a screen without achieving its goal of satiating its hunger). Thus even with goals, it is still the anticipatory function that has presuppositions and truth values.

More formally, normative goal functionality and normative anticipatory functionality must be differentiated, and, thus, it must be noted that truth value does not depend on goal functional success, though it may involve such success (or failure). What is required is normative functionality, and anticipation is such a normative functionality that is more basic than goal functionality. (Meanwhile, "goals" need only be set-points, not representations themselves.)
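A minimal sketch of this point, with invented frog scenarios of our own: the error test below compares an anticipated course of interaction with what actually unfolds, and no goal appears anywhere in it.

def anticipate(action):
    # The indication tied to an output: what the ensuing interaction is anticipated to be.
    return {"jump": "lands in pond", "flick": "tongue contacts target"}[action]

def world(action, circumstances):
    if action == "jump" and "leg stuck" in circumstances:
        return "spins in place"
    if action == "flick" and "dot on screen" in circumstances:
        return "tongue contacts target"   # the contact happens, but nothing edible results
    return anticipate(action)

def anticipation_error(action, circumstances):
    return anticipate(action) != world(action, circumstances)

# Anticipation false even though the goal (escaping the predator) may still be met:
print(anticipation_error("jump", {"leg stuck"}))        # True -> representational error
# Anticipation true even though the goal (feeding) fails:
print(anticipation_error("flick", {"dot on screen"}))   # False -> no representational error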

2.8. Does a soap bubble learn during its adjustments to a wire loop? Neural networks?

Many dynamic systems settle into attractor points or trajectories.¹³ At times such attractors have been proposed as constituting representations of any pattern or start condition that is within the relevant attractor basin: that a particular kind of initial condition has been encountered is supposed to be "represented" by the fact that the dynamics have settled into that 'corresponding' attractor.

It should be clear, however, that, while such a dynamic system may differentiate among its various possible attractor basins, there is no representation of what is being differentiated. As constituting differentiators, such systems might be useful for a broader system, with the appropriate normativities, for setting up indications of further dynamic possibilities, in which case normative truth value, thus representing, might emerge.

But the differentiators do not themselves represent: what they differentiate is inherent in the dynamics, but the dynamics cannot be "true" or "false." The dynamics settle into a particular attractor or another as a matter of factual contact with the environment. There is no anticipation that could yield emergent truth value (though, again, such an anticipation might be set up conditionally on such differentiations). The settling may be a simple or a complex dynamical process, but it is not in itself normative. If it were, then, for example, a soap film settling into an "equal surface tension" form within a wire loop would constitute representation.

Neural and connectionist nets are dynamic systems that can settle into having certain attractor landscapes via training, which might involve feedback, or might involve "settling" in terms of the statistical properties of the input stream. Although they can be trained, what they ultimately learn is simply correspondences. The nets do not "learn that they are correspondences, nor what those correspondences are with" (Bickhard, 1993, p. 295). In this regard, they are no different from standard covariance approaches to representation, which require correspondences one way or the other. As we will see, this is also the case for Gładziejewski's two-factor approach. For interactivism, neural networks can be taken as emergent implicit differentiators. Because this is only part of what is required, they are not learners of emergent representation (Bickhard & Terveen, 1995, p. 301).

If all there is to "learning" is relaxation to an attractor, interactivism will not consider it to be genuine learning. Interactivism requires that the system learn for itself. In order for a system to learn for itself, it must have the capability for generating system detectable error — intrinsic normative error, not just 'error' as defined by an external designer or observer — and for that to happen its processes should be of the anticipating nature. It is crucial to note that such anticipations entail a reversal of emphasis from input-to-output dynamic causality to output-to-input potentialities (Bickhard & Terveen, 1995, p. 294). The system must anticipate its output, and the interaction of that output with other surrounding processes should be the new input. In other words, the flow must be inherently circular, not sequential. In this sense representation "is fundamentally a matter of anticipation, and not a matter of a system being a retrospective spectator back down its input sequence" (Bickhard & Terveen, 1995, p. 304). Note that approaching the issue of representation from the general issue of "how learning for a system itself is possible" again led us to render representations in terms of indications of potential interactions, as we were arguing was the case in the previous items.

¹³ Which can, in certain circumstances, be strange attractors.
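For contrast, here is a toy dynamical system of our own choosing (gradient descent on a double-well potential) that settles into one of two attractors. It differentiates its initial conditions, but nothing in the settling could be true or false; this is the sense in which, for interactivism, such systems are differentiators rather than representers.

def settle(x, steps=2000, dt=0.01):
    # Gradient descent on V(x) = (x^2 - 1)^2: two attractors, at x = +1 and x = -1.
    for _ in range(steps):
        x -= dt * 4 * x * (x * x - 1)
    return "attractor +1" if x > 0 else "attractor -1"

for x0 in (-1.7, -0.2, 0.3, 1.4):
    # Factual contact: which basin the start condition fell in.  Representing
    # would only emerge if a further system used this differentiation to set up
    # anticipations of interaction that could then succeed or fail.
    print(x0, "->", settle(x0))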
3. A critique of Gładziejewski's charge against interactivism

3.1. An outline of Gładziejewski's argument

Gładziejewski claims that representations as depicted in action oriented theories are too unconstrained because they do not meet Ramsey's job description challenge (Ramsey, 2007; we will abbreviate the challenge as JDC). Such accounts are too liberal in admitting certain structures or systems to be representational which should not be considered as representational according to other norms. Assuming this argument works, Gładziejewski goes on to offer a remedy, namely a two-factor theory of representation: something is a representation if it guides actions by exploiting its structural similarity to whatever it is representing (Gładziejewski, 2016, p. 23; subsequent page references are to this article, unless otherwise noted).


Gładziejewski's aim is not to determine "whether ACToRs [action guidance theories of representation] give us a good theory of how intentional content is determined" but rather to "concentrate on their explanatory value for cognitive science" (p. 17, emphasis as in the original). He draws on Ramsey's JDC: in a given theory of cognitive science, look for the functional roles of the purported representations; this is their job description. Then, evaluate whether this job description "can be classified—in an intuitive, natural and understandable manner—as playing the role of representation" (Gładziejewski, p. 17). Ramsey's own examples of what does not meet his job description challenge are receptors and Dretske's (1988) notion of representation. Here we have to be careful about the second step in JDC because the critique of any candidate depends on the criterial norms being applied. Gładziejewski does not give a clear enough account of them at the beginning of his paper, but at this stage his criteria include:

1. It should make "sense to describe these structures as standing-in for something" (p. 17, emphasis as in the original). The entities worthy of the name "representation" should function as the entities "we pretheoretically recognize as representations—like maps, fuel gauges, or sentences of natural language—function" (p. 18). In other words, they should be like external representations.
2. It should be indispensable; it should "enable us to understand phenomena in way that is impossible without it" (p. 17).

This second criterion will be the key reason for accusing ACToRs of being too liberal.

In section 5, where he outlines his two-factor theory, Gładziejewski is clear that structural similarity is what he has in mind (see especially his footnote 8 on page 23 for a clarification). He aims for a necessary and sufficient condition for something to be a representation: "action guidance is by itself insufficient for making something a representation, what other conditions need to be met?" (p. 22). Certain conclusions and certain steps in his arguments during the earlier parts of his paper can only be understood when one realizes that he has this agenda in mind.

Now, before moving on, we would like to anticipate a little about the general form of our critique. Notice a fork in his approach: if by "functional role" he means the functional role of "representation" as a concept used within cognitive science and philosophy of cognitive science (as a metatheoretical consideration), then the above intuitive criteria seem to us totally ad hoc. We will return to this issue in section 3. If, on the other hand, he means representation's functional role as a structure within a theory, then the first item in our list of interactivist ideas (structures dissolving into processes) comes in conflict with his account presented so far. According to his description of JDC, one has to highlight the purported representations and then look for their functions. But this assumes a differentiation between function and representation. Consequently he is already rejecting a central feature of interactivism; namely, that structures can dissolve into processes, and that representing emerges in a kind of functioning. He is begging the question against the interactivist model, whose claim is that representing is itself an emergent kind of functioning.

Gładziejewski agrees with the criticisms of purely correspondence-centric theories of representation as given by Bickhard (1993; Bickhard & Terveen, 1995). On the other hand, the reason he finds purely action-centric theories lacking is that they fail "to give us an idea about what distinguishes representational action-guidance from nonrepresentational action-guidance" (p. 23). We will respond to this criticism later. First, we should delve into his criticism of interactivism proper. We are not addressing his evaluation of Rosenberg and Anderson's action guidance theory of representation (Anderson & Rosenberg, 2004).

3.2. Gładziejewski's critique of interactivism

Gładziejewski presents two arguments for showing that interactivism does not meet the JDC for its concept of representation. The first one is a conceptual argument and the second one is empirical (Gładziejewski, 2016, p. 18). We will begin with the conceptual argument.

3.2.1. Gładziejewski's conceptual charge

Gładziejewski's main idea is that the interactivist definition of representation, as indications of interactive potentialities, is a strongly reductive one, which attempts to "explain representational normativity with the normativity of action" (p. 18). This reduction in turn renders representation talk explanatorily useless because everything "said using representational talk could be more economically said by reference to actual or possible (inter)actions and their conditions of success" (p. 18). One should take extreme caution here because one of the central points of interactivism is emergence. Hence, there is no straightforward way to replace representational talk with action talk in such a way that the former becomes explanatorily useless.

Gładziejewski is moving too fast here. One might as well say that since atoms are composed of elementary particles like quarks, it is explanatorily useless to keep "atoms" in chemistry. If the issue is merely pragmatic, there is great use of atomistic talk, even talk of Newtonian physics where it is appropriate. If Gładziejewski intends an in-principle argument, then substantial issues are either not addressed at all or only insufficiently handled. In Gładziejewski's view, Bickhard should be so ruthless a reductionist that his theory renders all talk of representations superfluous. In fact, the opposite is true. Representations (indications of interactive potentialities) are needed (for any complex agent), and they emerge whenever the internal control processes reach a certain kind of complexity, such as when more than one option for interaction becomes possible or when the interfaces between subsystems need to cope with the required complexity. When it comes to learning and memory, even stand-in types of representations are needed (Bickhard, 1998).¹⁴ It is quite ironic that a theory emphasizing emergence almost everywhere is accused of strong reductionism.

Gładziejewski's argument for reduction seems to be along the following lines:

According to interactivism, those indications are internal structures or processes that (1) are activated or arise before the organism engages in particular action (say, predator avoidance); (2) can, and sometime will, lead to the organism actually performing this action; (3) have semantic or representational contents that are determined by the conditions of success (dynamic presuppositions) of the action to which they can lead. (p. 18)

According to him (3) follows from (1) and (2) because "the representational status … is fully constituted by the role they play with respect to guiding or preselecting action" (p. 18). But here we have to ask: the representational status of what? This is another instance of the assumption that representation is structural, so the issue (for Gładziejewski) is how those structures are functionally used. There seems to be no recognition of the emergence of representing (and not representation) in certain kinds of functioning for interactivism.

Although these three properties do apply, they are highly insufficient and potentially misleading from the interactivist perspective. Note that properties (1) and (2) are true for computational symbol processing models too. Symbols are activated and processed, culminating in a final decision before they are sent to outputs which then lead the computer or the robot to actually perform an action. The first two properties are near-universal for any model that posits structure for some level of cognition. Only the third seems specific to interactivism.

¹⁴ But note that they are not quite "standard" representations because the contents are not explicit.


But then how can one infer that "indications have property (3) in virtue of having properties (1) and (2)" (p. 18)? At best (3) follows from (1) and (2) if the intrinsic normativity of action is recognized and taken into account, but Gładziejewski doesn't do that; instead he treats action as a simple motion or cause, and then smuggles normativity in as "success." If action is not taken as normative, then (3) does not follow from (1) and (2). There have to be other criteria that are missing in his account, which in turn lets him accuse interactivism of reductionism. We have presented some of these criteria in section 2 above.

Perhaps more fatally, the problem is that property (3) is stated so vaguely that it might be acceptable per se! For example, if "determined by" is taken to mean "caused by" or "inferable from," then the clause seriously misrepresents the interactivist model. If "determined by" is understood as "constituted by," then it might be correct. Gładziejewski ignores how dynamic functional presuppositions are intrinsic to interactive anticipations, and so misconstrues the interactivist account of content. He is missing its emergent core.

Now, focusing on the third property, Gładziejewski says that the "simple fact that some internal activity could eventually cause an action whose success depends on environmental conditions gives, by itself, no leverage to the idea that this activity represents those conditions" (p. 18, emphasis in the original). But for interactivism the relevant activities do not in any way represent the conditions they functionally presuppose. Rather, those 'activities' emergently bear those assumptions about the environment. So, the representational relationship is with the environment, not with the content about that environment. Unfortunately, as we mentioned in the previous section, he does not give clear enough criteria at this stage for what makes something represent something else. Thus, we have to go back to our list above.

If Gładziejewski is taking the first item in the list, then at least a necessary condition for him is that the representation be a stand-in. But this makes the charge vacuous because, other than Ramsey's job description challenge, he does not present any arguments as to why one should stick to such a norm for representations. Then, the question becomes why JDC requires pre-theoretical and intuitive notions of representation such as having structurally similar inner models, especially in the light of all the problems that beset such standard notions. For this reason it seems that he is begging the question. As mentioned above, Gładziejewski himself agrees with those charges. But the "charges" that Gładziejewski agrees to are in-principle, and thus his move is to a model that (he agrees, apparently) is in-principle impossible. That is, Gładziejewski seems to agree with the critique that correspondence, thus structural, models of mental representation cannot be correct, but then advocates precisely such a correspondence/structural model.

Finally, the accusation of offering an over-liberal account of representation follows as a consequence of the first argument. Once one accepts that everything has been reduced to causal-action talk by a species of ACToR, one might think that there are many types of non-normative action guidance.¹⁵ Being non-normative, such action guidance would "certainly" not be representational. So, if action guidance that is not representational seems possible on this construal of non-normative action, one ends up being too liberal in attributing representations to such "actions." But if the first argument is unsuccessful, as we have claimed, this consequence falls with it too.

To sum up, Gładziejewski's argument at this stage seems to be merely that a representation must be a stand-in because of JDC, and interactivism's representations are not stand-ins. He then goes on to offer two more arguments (again short ones) against a possible response to his accusation that interactivism is reductive, hence that there is "no reason for claiming that interactivism gives us an explanatorily valuable notion of representation" (p. 18). He begins by stating that "interactive representations are not explicit, but rather represent their contents only implicitly" (p. 18). He then presents an argument from Ramsey that implicit representation does not entail inner, structural representation, and, thus, does not satisfy the JDC.

First of all, note that, as mentioned more than once up to now, the contents are not represented. So, the locution "represent their contents" indicates a confusion on Gładziejewski's part. It is not the representing of contents that constitutes something as representational: it is the possessing of (presupposed) contents about something else (e.g., the environment) that makes something representational.

Gładziejewski mentions that presuppositional content is implicit, then references Ramsey's arguments concerning implicit representation, and then concludes that "this does not entail that there is anything even remotely functionally resembling a representation inside the system" (Gładziejewski, 2016, p. 19, emphasis in the original) — that is, there is no entailment that there is an isomorphic structure inside the system. It is not clear that Ramsey's notion of "implicit" is the same as the interactivist notion,¹⁶ but, setting that aside, it is certainly correct that interactivist representation does not require isomorphic structure inside the system. Citing that as a criticism of the interactivist model, however, thoroughly begs the question whether structural isomorphism should be criterial for representation at all. The interactivist model holds that the most basic criterion is that of bearing truth value, not of being interpretable as an isomorphic correspondence. So, we welcome the conclusion that implicit presuppositional truth value does not entail structural correspondence. The reason Gładziejewski has not found anything remotely resembling (structural) representations is because he is looking for something in a place where it is by definition not there.

His second argument takes the form of a reductio. Even if one accepts implicit representations (of the stand-in type? There also seems to be a confusion in that interactivist content is implicit, but representation in the sense of indications is not implicit), one still "needs to provide a good rationale for using representations talk at all" (p. 19). He then reconstructs a possible interactivist response as follows:

1. Each indicated action has dynamic presuppositions.
2. Those presuppositions constitute the content of representations in interactivism.
3. "There is nothing in the system that would explicitly represent this content" (p. 19). The representations are indications of potential interactions that "enable the organism to perform actions" and this "is the only way in which those indications are related to external conditions" (p. 19).
4. Hence, all the content must be implicit in interactivism.

For this argument to work as a reductio, implicit content must be unacceptable. But implicit content is unacceptable, in this argument, only if structural isomorphism is criterial for representation. As mentioned several times so far, we do not accept that criterion, so relying on it begs the question.

Furthermore, we can accept that this is a partially correct characterization of the interactivist position — content is most fundamentally constituted in implicit presupposition. Nevertheless, interactivism does allow for the existence of explicit representations. Second level representations that interact with the first level do render explicit properties that are only implicit in the level below. This requires further model construction within the theory and perhaps that's why it is missing in Gładziejewski's article. Because of time and space constraints we cannot provide further details here, but the relevant constructions are available in Bickhard (1998).

¹⁵ It's not clear what non-normative action guidance would be, unless the "action" at issue is itself non-normative, and, thus, not really action. For us a mere, say Newtonian, motion without normativity does not fall under the category of "action."
¹⁶ There are, in fact, several notions of "implicitness" within the interactivist model per se (Bickhard, 1998).


He continues:

1. Adding to this mechanism that "they also represent contents, even implicitly, simply adds nothing genuinely new or explanatorily valuable to the picture" (p. 19).
2. This "merely presupposes representations" (p. 19) so that the question is still being begged.

Once again, Gładziejewski assumes that representations represent their content. The interactivist will deny this. For that matter, it should be denied by any candidate model of representation: if representation is supposed to be constituted in representing content, no matter how this is supposed to be the case, the model will be circular — representation (of content) is presupposed in (supposedly) modeling representation. Thus, since content is not necessarily represented in interactivism, the above charges are the ones that are indeed begging the question. Moreover, given that the conclusion in 2 is the complaint made by the interactivists against standard approaches, we were surprised that Gładziejewski makes the same charge against the interactivists. We think that this is only possible because he assumes that the notions of representation in the two different paradigms can be compared and interchanged in a straightforward manner. For him representation requires structural similarity, whereas for us such a property (when it does exist) can be derived from more basic processes, so that it is not a necessity but a possibility; a possibility which could have evolutionary or functional advantages in some circumstances.

Before we move on to Gładziejewski's empirical charge, one final remark about the nature of structural similarities in interactivism. The sense in which the intuition of homomorphism — structural correspondence, or stand-in — can be most accurately realized within interactivism is a modal sense: the organization of the possibilities of interaction, given a system for engaging in such interaction, for successful (true) presuppositions (thus representation), is homomorphic to the organization of potentialities afforded by the environment. Thus modal trajectories creatable by the organism can correctly "trace" modal possibilities afforded by the organism-environment relationship. That is, there is a kind of modal homomorphism involved. But, as mentioned in item 1.2 of the first section, there never needs to be a functionally explicit structure that is homomorphic to any actual structure in the world.
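This modal reading can be given a concrete, if deliberately toy, illustration. The short Python sketch below is our own construction, not part of the interactivist literature, and every name in it (Agent, indications, environment, and so on) is a hypothetical label chosen for the example. The agent contains no structure that mirrors the environment; it only has functional indications of which further interactions are anticipated to be available given the outcome of a current interaction. Such an anticipation can be borne out or falsified for the agent itself.

    # Toy sketch of interaction indications without any explicit world model.
    # All names here are illustrative inventions, not interactivist terminology.

    class Agent:
        def __init__(self, indications):
            # indications: current interaction outcome -> set of interactions
            # anticipated to be available next (a functional disposition, not
            # a stored map of the environment).
            self.indications = indications

        def attempt(self, environment, outcome, interaction):
            # Try an interaction that is currently indicated from this outcome.
            anticipated = interaction in self.indications.get(outcome, set())
            succeeded = environment(outcome, interaction)
            # Error is detectable for the agent itself: an indicated interaction
            # that fails falsifies the anticipation.
            return anticipated, succeeded

    # Toy environment: which interactions actually succeed from which outcomes.
    def environment(outcome, interaction):
        afforded = {
            "at_door": {"open_door"},
            "door_open": {"walk_through"},
        }
        return interaction in afforded.get(outcome, set())

    agent = Agent(indications={
        "at_door": {"open_door", "walk_through"},  # the second is a false anticipation
        "door_open": {"walk_through"},
    })

    print(agent.attempt(environment, "at_door", "open_door"))     # (True, True): borne out
    print(agent.attempt(environment, "at_door", "walk_through"))  # (True, False): falsified

The point of the sketch is only that any "homomorphism" lives in which chains of anticipated interactions succeed, not in any explicit structure inside the agent.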
3.2.2. Gładziejewski's empirical charge

For his second charge against interactivism, Gładziejewski presents Randall Beer's (2003) anti-representational virtual organisms as meeting interactivist criteria for having representations, thus rendering interactivism too liberal to meet the JDC (Gładziejewski, 2016, p. 14). We will first delineate why he thinks Beer's organisms have no representations. Then we will consider why he thinks interactivism must endow Beer's organisms with them.

Beer's virtual organisms have a three-level neural network which lets them eventually classify two different kinds of objects and engage in appropriate 'actions': avoid diamond-shaped objects and catch circles (hybrids of these objects are also involved, but they are not important for our purposes here). Gładziejewski agrees with Beer in declaring these agents non-representational; so, they conclude, any representational talk in this setting is explanatorily useless. One reason for this is that "the virtual agent does not make use of any kind of model of its environment" (p. 19). Also, there are "no internal, causally relevant structures governing its actions that correspond to those categories" and "there is nothing inside it that would meet the JDC" (p. 19). Thus, we don't need to postulate representations to explain anything about these creatures. According to Gładziejewski representation necessarily requires a correspondence-based inner model. If there is no such inner structure, there is nothing representational. This rules out, by assumption, any other approach to representation, and in particular the interactivist variety of representation. So the question is being begged, independent of whether interactivism per se is right or wrong.17

Footnote 17: Reminder: the interactivist model argues that correspondence models are incoherent as fundamental models of representation (though there may be derivative forms of representation that can be understood in correspondence terms), and Gładziejewski seems to accept those arguments.

Must interactivism acknowledge representation in Beer's organisms because they satisfy the interactivist criteria for having representations? Gładziejewski has two criteria for his claim:

1. The ability to select from one of two possible actions.
2. The existence of internal states and processes at the middle layer that precede action. These processes decide whether the organism should catch or avoid the falling object. They can be thought of as indications of potential interactions. Gładziejewski cites Beer: "his results […] suggest that the 'decision' [about how to act] is repeatedly made and unmade as the agent and the object interact until the organism eventually "commits" and actually performs one of the actions" (p. 19).

To these we should perhaps add the following just to make his presuppositions clear:

3. This is a "goal"-directed system.
4. It interacts with its virtual environment and has "learning" capabilities (the reason for the scare quotes in these two items is that from an interactivist point of view these organisms have neither genuine goals nor learning capacities, as we will see below).

Because of these, Gładziejewski gives the following argument, whose conclusion is that interactivism should grant representations to Beer's organisms (p. 19):

It seems, then, that interactivism would have us think that in this case there are patterns of internal activity that (1) precede action and (2) could potentially lead to the agent performing a particular action; thus we should say that these patterns indicate interactive potentialities that therefore (3) represent the action's conditions of success. It follows that Beer's virtual agent is a non-representational system that is categorized as representational by Bickhard's theory.

Would these suffice for a minimal representational system according to interactivism? The answer is no. There are several reasons why they fail the interactivist criteria.

First, Gładziejewski yet again has representations representing their contents.

Second, Beer's robots lack the normativity required for genuine representations: their internal states are differentiations but not anticipations. What Gładziejewski has been describing of Beer's account holds for any dynamical system that relaxes into one of its attractors (see item 1.8 in section 1). If this is the case, then there is no representation in Beer's organisms in the interactivist sense. If interactivism granted anticipations to any dynamical system relaxing to one of its attractors, then Gładziejewski would be right in his criticism of it being too liberal. In fact, there have been many interactivist criticisms of connectionist and other network models for exactly this reason (e.g., Bickhard & Terveen, 1995).

In such neural networks mere input processing cannot detect genuine error for the system itself because any input is just like any other input. There cannot be any normative distinction between non-error sensory input and pure error input — the inputs influence dynamics in differing ways, but there is no anticipation. Here interactions are not "being epistemically essential to the constitution of the representations" (Bickhard & Terveen, 1995, p. 293). In other words, there is no genuine normativity here and hence no anticipatory processes as Gładziejewski concludes in (3) in the above quote.
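To see why relaxation into an attractor is not anticipation, consider the following toy Python sketch. It is our own illustration and is not Beer's controller; the double-well dynamics and the 'catch'/'avoid' labels are placeholder choices. The state simply settles into one of two basins depending on the input drive. Calling the outcome a 'decision', and calling one basin the 'correct' one, is entirely the observer's interpretation; nothing in the dynamics anticipates anything or could register having ended up in the 'wrong' basin.

    # Toy "decision" by relaxation: a one-dimensional state settles into one of
    # two attractors under a double-well dynamic. The labels are the observer's.

    def step(state, drive, dt=0.05):
        # Euler step of d(state)/dt = state - state**3 + drive
        return state + dt * (state - state**3 + drive)

    def relax(drive, state=0.01, steps=2000):
        for _ in range(steps):
            state = step(state, drive)
        return state

    print(round(relax(drive=0.2), 2))    # settles near +1: read by us as "catch"
    print(round(relax(drive=-0.2), 2))   # settles near -1: read by us as "avoid"

The system differentiates its inputs, in the sense that different drives leave it in different basins, but that differentiation carries no anticipation and thus no normativity for the system itself.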

The adjusting to 'error' — the 'learning' involved — is an abuse of language: there is 'error' only in the interpretation of the observer or designer. To claim that the adjustments in, say, a back-propagation system constitute genuine learning would require also saying that the dynamic adjustments of a soap film across a wire loop constitute 'learning' (see item 1.8 in the first section). To repeat, the issue, then, is "what constitutes learning for a system itself."
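The observer-relativity of such 'error' can be made explicit in a few lines of Python. The sketch below is our own; it is not Beer's network nor any model from the interactivist literature, and the task (OR) and the update rule (the classic perceptron rule) are arbitrary illustrative choices. The quantity named error exists only in the training script: it is computed from targets that the designer supplies and it drives weight adjustments imposed from outside, while the unit itself merely maps inputs to outputs, no differently before and after 'learning'.

    # Designer-supplied training data: the "norm" lives entirely in this script,
    # outside the system being trained (OR is our arbitrary choice of task).
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    w = [-0.4, 0.2]
    b = 0.0

    def unit(x):
        # The "system" itself: it just maps inputs to an output. From its own
        # point of view no input is an error input; inputs merely drive it.
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for _ in range(20):                  # training loop run by the designer
        for x, target in data:
            error = target - unit(x)     # "error" exists only for the designer
            w[0] += error * x[0]         # adjustments imposed from outside,
            w[1] += error * x[1]         # like the soap film relaxing under
            b += error                   # surface tension

    print([unit(x) for x, _ in data])    # [0, 1, 1, 1]: the designer's norm met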
Perhaps a general lure of the standard neural network architectures and their learning capacities is the following simplified analogy. In nature, brains resemble neural networks, and the designer norms provided to these networks might be claimed to be given to the brains via innateness or by some early developmental constraints provided by the environment. Since learning is genuine in nature, this analogy implies that neural network learning could be taken as genuine learning also. But as we have seen, such networks lack anticipations and thus normativity. According to interactivism, the lack of normativity in such standard architectures is an indication of the need for a radical shift in design principles.18 Furthermore, we find that the analogy to brain processes is not entirely accurate. Because of time and space constraints the reader is referred to Bickhard (2015a, b) for a model of brain processes which is compatible with interactivism.

Footnote 18: E.g., metanets, where the connections of a given neural network are themselves nodes of another network so that the connections can be terminated or added or otherwise modified by the metanet itself, might be a step in the right direction (though far from sufficient). This makes it possible to change the whole graph topology of the network in addition to the mere adjusting of weights. Note that metanets are a more accurate model of real brains, where new connections come and go depending on the learning involved (see Bickhard & Terveen, 1995, pp. 301–307 and the references therein). Here too, however, there is still no normativity.

Third, according to interactivist criteria, Beer's system is at most a basic differentiator of two environments. As mentioned in items 1.4 and 1.8 in section 1, this is not enough for representations although it is a required first step. Indeed, from the point of view of interactivism this is a general critique of all claims that attribute representational states to neural networks (Bickhard & Terveen, 1995). Although there are unsupervised nets also, such nets still relax into states depending on the statistics of the input flow, and there is no normativity. Moreover, they have to be designed so that the statistics that they pick up on are going to drive the relaxation dynamics in the designer-desired manner. So, for any neural network to be a candidate for having representations, it not only has to be more than an input classifier according to a designer-specified norm; its interactions with the environment must also be inherently normative and necessary to the constitution of representations.

Fourth, focusing on Beer's case now, Gładziejewski's discussion seems to us to violate item 1.1 in section 1: it presupposes that representations have to be structures or singular elements. The interactivist model does not require that indications are realized in structures, so Beer's models could be accepted in terms of their not having singular structures. That is, Beer's model could be accepted as a simulation of one aspect of the interactivist model — a kind of "minimal" simulation of differentiation — but it cannot be a realization of the model per se. So, it is not the case that "Beer's virtual agent is a non-representational system that is categorized as representational by Bickhard's theory" (p. 19). Beer and Gładziejewski both assume that representations have to be singular items of some sort; that is simply wrong and rejected insofar as representing is an anticipatory activity and is modeled as such. So, again, the charge against the interactivist model is question begging.

In general, although we find Beer's models interesting, questions as to whether they realize representing are multiply confused and confusing. They do not contain correspondence structures, but that is OK for interactivism. They also do not contain any indicative structures or elements, but that is also irrelevant: indication or anticipation is functional, not structural, and need not be realized in singular elements or structures at all. So Beer's models do not violate anything in the interactivist model on this point either. Nevertheless, it should be clear by now that they are missing many crucial aspects that they would need in order to have representations or genuine learning.

4. A critique of Gładziejewski's two-factor theory

After presenting his critique of ACToRs, Gładziejewski offers a two-factor theory of representation. He claims that since action guidance by itself is not sufficient, one needs action guidance that uses representations which have structural similarities to whatever they are representing (Gładziejewski, 2016, p. 25). Although, by now, it should be fairly clear how an interactivist critique will proceed against any theory with such a requirement of homomorphism, we will nevertheless proceed with it in the hopes of further illustrating certain aspects of interactivism.

One motivation for augmenting action guidance theories is the following (p. 22):

"What is missing from this picture is a good idea — or any idea — about the nature of differentia specifica that distinguishes representational action guidance from the kind of action guidance that is achieved without employing representations."

This illustrates a danger of using the notion of "action guidance" instead of "interaction indication" — 'guidance' does not carry the normative anticipatory meanings that "indication" does, so, unless guidance is further explicated in terms of anticipatory indications, it can easily look like there is nothing normative, thus nothing representational, involved.

Gładziejewski gives an example where the task is to navigate from point A to point B in a city, and he presents three different methods of accomplishing this task (p. 22). The first way is just by asking a local to guide you through the streets. The second way is by following a trail from A to B, such as by following small red balls. The third way is by using a map on which A and B are marked. Gładziejewski claims that only in this last instance can the required differentiation between representational and nonrepresentational action guidance be established (p. 22):

I propose that the difference-making factor here is the fact that in order to succeed, person 3 exploits what we might generally call a "correspondence" between the map and the terrain. More precisely, what I think makes the case of person 3 a case of representation-use is the fact that (1) this person (a representation user) uses a map (a representational vehicle) to guide her action with respect to the terrain (what is represented), and (2) she does it by employing a strategy whose navigational (action-guiding) success is non-accidentally dependent on whether a certain type of relation holds or holds to a sufficient degree between the map (representational vehicle) and terrain (what is represented).

It is unfortunate here that the example is explicitly one of using an external representation. Such external representations do exist, and they do provide a source for the 'resemblance' relation between representation and represented in the person making use of the representation (as well as the source for normativity in that person). But it is precisely the representing that is being done by that person that we are attempting to model. Inner representing cannot be of the same form as external representation: external representation requires interpretation while internal representing cannot, on pain of unbounded regress. This issue resurfaces at multiple points in the article when Gładziejewski makes such claims as "the lesson is that representations are things that succeed in playing their action guiding function by exploiting a certain relation between the representation itself (the vehicle) and what it represents" (p. 23). Our immediate reaction is to ask: exploited by whom? What is needed is for these neural (and other) activities to modulate other activities in anticipatory ways.


It is even more unfortunate for Gładziejewski to claim that a "map is a useful action-guider only if its structure matches or resembles — or to the degree it matches or resembles — the structure of that which it represents" (p. 23). There are at least three problems here:

1. Such "matching or resembling" can be determined only from the perspective of an external observer (unless it is rendered in terms of successful interaction indication/guidance, in which case it is either false qua structure or it amounts to the interactivist model).
2. Everything matches/resembles everything else if the 'appropriate' 'mappings' are used — that is, what counts as structure is not determinate without a readout or interpretive process, and that is what is at issue in the first place.
3. The only functional consequence that 'structure' can have is to influence the flow of process, and any such influence can always be incorporated into the organization of the system dynamics without any such structure at all. In other words, this aspect of his claim is against item 1.1 in section 1. Perhaps even more fatally, it is against item 1.3 too, the rubber band experiment: a goal-directed cybernetic feedback system can adequately "guide" task performance when the task has a particular structure that has no natural relationship with the organization of the cybernetic system at all (a toy sketch of such a system follows this list).
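The third point can be illustrated with a toy negative-feedback loop in Python. The sketch is our own and is not the rubber band experiment described in the first section; the proportional controller and the numbers are placeholder choices. The controller succeeds at reaching whatever target it is given, and at compensating for a disturbance, yet it contains nothing whose structure matches the path traversed, the disturbance, or the task.

    # Toy goal-directed feedback controller: adequate "guidance" of a task with
    # no internal structure corresponding to that task.

    def control(position, target, gain=0.5):
        # The whole internal organization: reduce the currently sensed discrepancy.
        return gain * (target - position)

    def run(target, position=0.0, disturbance=lambda t: 0.0, steps=40):
        for t in range(steps):
            position += control(position, target) + disturbance(t)
        return position

    print(round(run(10.0), 2))                              # ~10.0
    print(round(run(-3.0, disturbance=lambda t: 0.2), 2))   # ~-2.6: constant push resisted,
                                                            # with a small steady offset

Whatever 'structure' one might want to read into its successful trajectories is a feature of the task and the environment, not of anything inside the controller.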
Recall that the central point of the interactivist model is that interaction anticipation is what constitutes the emergence of representational normativity from functional (or pragmatic) normativity. If Gładziejewski recognized this central aspect of the interactivist account, he would, presumably, address how it is possible to have "practical error" regarding interaction anticipation without having truth value, and, therefore, without having representation. Instead, he seems to subsume the interactivist model under the "guidance" model with no normativity, and then claims that guidance per se is not enough — you need structural guidance. But (and again) the only support for this is his repeated invocation of the JDC, and, at least with regard to the interactivist model, that completely begs the question. Even on its own terms, this is a strikingly unsupported, ad hoc criterion. At best, it makes an appeal to our intuitions as to what we would want to call a(n external) representation (no 'representing' as primary here); at worst, it involves deep confusions about how structure can be defined and how structure can influence process. Our question at this level is: What if that "job description challenge" is itself based on false presuppositions? This would seem to be the core point.

One final remark before concluding this paper is that Gładziejewski mentions Grush's emulation theory of representation (Grush, 2004) as "a genuinely representational explanation that uses the notion of internal representations as mechanical or automated 'models' or 'maps'" (p. 24). Bickhard has published a critique explicitly directed at Grush's model (Stojanov & Bickhard, 2004). Moreover, as mentioned before, there is no problem with structures constituting representations so long as there is some normative dynamics to interpret those structures; this is as true for internal 'structures' as it is for external 'structures'. Also, Grush's model does not actually satisfy the JDC, because the "structures" have no (necessary) structural homomorphism with what they are used to 'guide' — they are anticipatory (though there is no account of normativity), they are useful for 'fast' anticipations, and such anticipation does not require structural correspondence.

5. Conclusion

Interactivism, as a model of representation, has many novel aspects that are inconsistent with the strongly ingrained intuitions of standard and long-standing theories of representation. However, these counterintuitive aspects of interactivism are needed to overcome the insufficiencies and incoherencies of the available models of representation. We believe that one fundamental lesson of the history of physics is that our most basic intuitions can be transformed or even abandoned if the available theories turn out to be inadequate. For these reasons, and in the hopes of making our responses to Gładziejewski more accessible, we have presented some of the key aspects of interactivism in the first section.

In the second section we have outlined Gładziejewski's argument, and his conceptual and empirical critiques against interactivism. There are four main issues that we find problematic. First, missing the emergent nature of representations in interactivism, he accuses it of being reductive to action only. According to him, interactivists must then grant representations too liberally to too many systems, obliterating the explanatory value of theories of representation. As is hopefully clear from our response, the emergent core of interactivism is necessary for its ambitions. Representations are not structures but emergent processes out of interactive processes, hence the title of this article: Representing is something that we do, not a structure that we "use."

This brings us to the second issue in his account. Gładziejewski is committed to the necessity of a homo- or isomorphic idea of representation as a stand-in. This in turn corners him into thinking that representations have to be structures, which means that they are "used" somehow. We not only find such a structural similarity requirement for representations ad hoc, but for many reasons insufficient and, ultimately, incoherent. In order to overcome such insufficiencies, interactivism has had to draw distinctions such as contact versus content or differentiation versus predication. We have delineated these in the first section, and some additional explications are scattered through the article. As we have outlined in section three, Gładziejewski finds an interactivist account insufficient and "enhances" it to a two-factor theory where there needs to be both structural similarity and action guidance that exploits this in a suitable manner. Besides the infinite regress that always arises with models based on analogies with external representations — just as an external representation needs a cognitive agent to interpret it, the cognitive agent's internal representation will need a homunculus, an internal agent within the agent, to interpret it, then the homunculus' internal representation will need a second homunculus, an agent within an agent within an agent, to interpret it, and so on ad infinitum — and the ad hoc nature of his account, we presented three further problems with such an approach.

Third, there seems to be a neglect on Gładziejewski's part regarding the importance of normativity and its naturalization in interactivism. This comes to the fore in his empirical charge against interactivism (second subsection of section two) via Beer's virtual organisms. According to interactivism, representations have to have a normative aspect, and the only way to naturalize normativity without obliterating it is via anticipations. We pointed out that if anticipations are reduced to a dynamical system's usual progression towards one of its attractors (and the same applies to neural networks), then one cannot speak of normativity. We argued that since, according to interactivism, Beer's neural networks are at most differentiators (albeit ones that "learn" to differentiate) with no anticipatory processes, Gładziejewski's empirical charge using this kind of neural network begs the question against interactivism.

Fourth, key properties and relations of content from an interactivist standpoint seem to be missing in Gładziejewski's account. In both sections two and three we have tried to emphasize that dynamic functional presuppositions are intrinsic to interactive anticipations and that the activities of the organism do not represent those conditions and, thus, content. The fact that the representational relation is not with the content but with the environment seems to be a general misunderstanding that has surfaced in multiple places.

We conclude that the interactivist model is not vulnerable to the criticisms posed by Gładziejewski, while his two-factor model is vulnerable to criticisms of correspondence models. We hope that our explications as to why that is so will promote better understanding of the interactivist model.


Acknowledgements

We would like to thank Lucas Thorpe for reading and commenting on our drafts. This project is partially supported by the Tübitak 1001 project titled "Concepts and Beliefs: From Perception to Action" with grant number 114K348.

References

Anderson, M. L., & Rosenberg, G. (2004). A brief introduction to action guidance theory of representation. Proceedings of the 26th Annual Conference of the Cognitive Science Society, 1185–1190.
Beer, R. D. (2003). The dynamics of active categorical perception in an evolved agent. Adaptive Behavior, 11, 209–243.
Bickhard, M. H. (1980). Cognition, convention, and communication. New York: Praeger.
Bickhard, M. H. (1993). Representational content in humans and machines. Journal of Experimental & Theoretical Artificial Intelligence, 5, 285–333.
Bickhard, M. H. (1998). Levels of representationality. Journal of Experimental & Theoretical Artificial Intelligence, 10, 179–215.
Bickhard, M. H. (2000). Motivation and emotion: An interactive process model. In R. D. Ellis, & N. Newton (Eds.), The caldron of consciousness (pp. 161–178). Amsterdam: J. Benjamins.
Bickhard, M. H. (2003). An integration of motivation and cognition. In L. Smith, C. Rogers, & P. Tomlinson (Eds.), Development and motivation: Joint perspectives (pp. 41–56). Leicester: British Psychological Society.
Bickhard, M. H. (2009). The interactivist model. Synthese, 166(3), 547–591.
Bickhard, M. H. (2015a). Toward a model of functional brain processes I: Central Nervous System functional micro-architecture. Axiomathes, 25(3), 217–238.
Bickhard, M. H. (2015b). Toward a model of functional brain processes II: Central Nervous System functional macro-architecture. Axiomathes, 25(4), 377–407.
Bickhard, M. H. (2015c). The metaphysics of emergence. Kairos, 12, 7–25.
Bickhard, M. H., & Richie, D. M. (1983). On the nature of representation: A case study of James Gibson's theory of perception. New York: Praeger.
Bickhard, M. H., & Terveen, L. (1995). Foundational issues in artificial intelligence and cognitive science: Impasse and solution. Amsterdam: North-Holland.
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.
Campbell, R. J. (2015). The metaphysics of emergence. New York: Palgrave Macmillan.
Dretske, F. I. (1988). Explaining behavior. Cambridge, MA: MIT Press.
Grush, R. (2004). The emulation theory of representation: Motor control, imagery and perception. Behavioral and Brain Sciences, 27, 377–442.
Gładziejewski, P. (2016). Action guidance is not enough, representations need correspondence too: A plea for a two-factor theory of representation. New Ideas in Psychology, 40, 13–25.
Powers, W. T. (1973). Behavior: The control of perception. Chicago: Aldine.
Ramsey, W. (2007). Representation reconsidered. Cambridge: Cambridge University Press.
Stojanov, G., & Bickhard, M. H. (2004). Representation: Emulation and anticipation. Behavioral and Brain Sciences, 27(3), 418.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
