New Ideas in Psychology — H. Oğuz Erdin, Mark H. Bickhard
1. Introduction

Interactivism has developed into a broad-range framework — from fundamental metaphysical issues to functional brain properties, from neural-glial "societies" to social ontologies, language, and the nature of persons (Bickhard, 2009, 2015a, 2015b; Campbell, 2015). But at its core it is a model of representation and cognition. The interactivist model of representation makes foundational criticisms of assumptions concerning representation that have been standard since the pre-Socratics and presents a positive model that differs from others on offer in several ways. The interactivist model of representation (or representing), consequently, does not fit well within standard categories (though it is closest to the general pragmatist framework), and, consequently, is often miscategorized and misunderstood.

A recent paper by Gładziejewski (2016) gives us an opportunity to address some of these issues. Gładziejewski presupposes a model of the nature of representation, in a version derived from Ramsey (2007), that is precisely of the sort that the interactivist model argues does not work and is not, in principle, possible. Gładziejewski argues that the interactivist model does not satisfy the desiderata of this kind of model of representation, not taking into account that the interactivist model argues directly against such models and their desiderata. It is, in fact, a "good thing" that the interactivist model does not satisfy such desiderata, because, if it did, it would be in direct contradiction with interactivism's own arguments that such models are incoherent. The central (though not the only) point is that Gładziejewski begs the question concerning the interactivist model by assuming precisely what the interactivist model argues is impossible, then charging interactivism with not satisfying these impossible desiderata.

It is, of course, possible that the interactivist in-principle arguments against standard models of representation are themselves somehow invalid or unsound, but Gładziejewski offers no arguments against them. By simply accepting the Ramsey framework for understanding and modeling representation, he ignores the arguments against such frameworks, and thereby renders his own arguments circular.

In what follows, we will first give an introductory overview of central interactivist ideas, tailored to misconceptions found in Gładziejewski's paper, and then track some of its details, showing how these issues are manifest in it, and attempting to understand how what we take to be errors are motivated by the presupposed (impossible!) framework and desiderata.

2. Central ideas of interactivism

One core principle of the interactivist model is that the metaphysical nature of representation is "being truth-value bearing"; that is, bearing the normative property of being true or false. The central idea is that forms of functional anticipation of potentialities of agentive interaction are the loci of such truth values: anticipations of potential interactions can be true or false. It makes intuitive sense that anticipations can be true or false, but much remains to be explicated, modeled, elaborated and defended.

The basic theoretical and philosophical tasks that this core model gives rise to can be parsed into four questions: 1) How does this model account for representation? Is it adequate to the multiform kinds of representation? 2) What are the consequences of the model for other mental phenomena, such as perception, cognition, language, and so on? 3) How are these crucial functions realized in the brain? And 4) What are the theoretical and philosophical challenges that might be encountered by this model, and how can they be met?

These questions have been addressed and elaborated several times: how the model can, for example, account for representation of objects or numbers (Bickhard, 2009). There are, in fact, a number of novel consequences of the model for perceiving, cognition, language, and other phenomena (Bickhard, 2009). The model also yields an account of brain processes that makes sense of phenomena like volume transmitters, non-zero baseline oscillatory frequencies, silent neurons, gap junctions, astrocytes, and so on (Bickhard, 2015a, 2015b). Finally, there is an account of normative emergence, based on an account of metaphysical emergence more generally, which, in turn, is based on an underlying process metaphysics (Bickhard, 2009, 2015c).

There are also multiple challenges that are not of the form "how can the model address X," but, instead, pose critical claims that the model fails in various ways.
These could, of course, turn out to be correct, but we consistently find these challenges to be themselves ill-formed. They often make false assumptions, or, in some cases, assumptions that have already been addressed and criticized within the overall interactivist model.

Here we address and correct some such assumptions. The following 8 points should serve as a minimal checklist for any theory or paradigm that comes in close proximity to interactivism.

2.1. Structures can dissolve into system processes

This is a generally unfamiliar idea for someone who is not accustomed to interactivism, but it is "just" a consequence of basic process models, whether machine theoretic or in terms of dynamic spaces. We are so used to thinking in terms of fixed structures and how we use these fixed structures that we have a tendency to structuralize processes in order to render them compatible with our intuitions.

The basic intuition is that data or structure cannot do anything other than influence process, and any such influences can always, in principle, be "built into" the organization of the processes themselves, without any data or structure. Thus structures can always be folded into system processes and disappear as distinct parts of the whole structure, yielding a purely functional theory.

Here we have one of the most important insights of interactivism: representing is more fundamental than representations. Consequently, a strict differentiation between function and representation begs the question. This is a common mistake that pops up in many places, including Gładziejewski's account.

The formalized version of "dissolving structures" comes from abstract machine theory and it is called the state-splitting theorem. Consider a state transition diagram with one register (i.e. a temporary memory) that can contain a 1 or 0. At some point, the system will store a 1 or 0 in the register, and (perhaps) at some later point it will read that register and transition in one way if there is a 1 and a different way (perhaps) if there is a 0. Now consider the diagram in-between the store and the read. Split the diagram into two copies, one of which would be entered instead of setting a 1 in the register and the other of which would be entered instead of setting a 0. Similarly, the "1" copy of the diagram will connect with whatever would have been switched to if the register had been set with a 1 and the "0" diagram will connect with whatever would have been switched to if the register had been set with a 0. At this point we have a pure transition diagram with no register that is functionally completely equivalent to the diagram with a register. The state-splitting theorem proves that such splitting, though it may become very complex, is always possible, thus, that registers (data, internal or functional structures, etc.) are never necessary.1

Footnote 1: Bickhard (1980) has an overview of abstract machine theory in Appendix A.

A version of this is already clear in computationalism: any computer program can be realized in an unbounded range of possible computer architectures; the above point about process is "merely" a generalization of that. In particular, any data structure can be architecturally eliminated by, for example, moving to a pure machine table organization that realizes the (or a) relevant state transition diagram. Items, memory, data, structure, etc. can do nothing relevant except influence process flow, and those influences can, in principle, always be built directly into the process organization. The "intuition" to the opposite effect is "just" an error in intuition carried over from thinking about computers (without understanding that they too have unbounded possible architectures that would give you functionally the same computer).

Abstract machine theory, as distinct from computational models, is not here just to provide an example of state splitting. The interactivist account is fundamentally naturalistic and mechanistic in ways that standard computational accounts are not; e.g., the interactivist model does not presuppose representation in its "computations on data" (Bickhard & Richie, 1983). In general, unlike the standard computational approaches, interactivism situates itself in process and control theory, a level of thinking closer to actual machine dynamics than is the case for higher programming principles such as object oriented approaches.2 We think this is an important advantage both for moving to lower levels (which makes it easier to accommodate lower level mechanisms), hence for naturalizing, and also for unearthing certain key processes underlying standard computational models.

Footnote 2: Machine theory, however, does have inadequacies as a modeling language; for example, it can handle temporal sequence, but not timing (Bickhard & Richie, 1983). These can be transcended by moving to dynamic systems theory.

The above point can be made from a functional perspective by realizing that data or structure in a system can only influence the functional3 processing of the system, and that such functional influence does not require such data or structure. The overall functional organization can always be rendered as a state transition diagram, or perhaps a machine table (Turing, 1950), and those organizations, in turn, can always be realized in multiple differing architectures. Any given system with data or structures, thus, does not require those data or structures in any semantic sense, nor in any functional sense: the functional consequences of data or structure can always be realized in a different architecture that does not have those data or structures.

Footnote 3: Note that "'functional' is a system-relational concept – an element cannot have a function except relative to something other than itself, relative to some system" (Bickhard & Terveen, 1995, p. 57).

This is not to preclude the fact that particular architectures may be useful for purposes of realization: we do not want to use 'simple' universal Turing machines rather than computers. The point, instead, is that any argument that assumes that data or structures are necessary, either semantically or functionally, is simply wrong.
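The state-splitting construction can be illustrated with a small sketch (our own illustrative code, not drawn from any of the cited sources; the machines, their inputs, and all names are invented for the example). The first controller stores a bit in a register and later branches on it; the second is the "split" version, in which the register has been folded into duplicated states, and the two are functionally equivalent.

    # Minimal sketch of the state-splitting idea (illustrative code only).
    # Machine A stores a bit in a register and later reads it; Machine B
    # eliminates the register by splitting the intermediate state into a
    # "saw 1" copy and a "saw 0" copy.

    def machine_with_register(bits):
        """Store a bit, then branch on the stored bit one step later."""
        register = None
        outputs = []
        state = "START"
        for bit in bits:
            if state == "START":
                register = bit                      # store 1 or 0 in the register
                state = "WAIT"
                outputs.append("stored")
            elif state == "WAIT":
                # read the register and branch on its content
                outputs.append("go-right" if register == 1 else "go-left")
                state = "START"
        return outputs

    def machine_split(bits):
        """Same behavior, but the register is folded into the states themselves."""
        outputs = []
        state = "START"
        for bit in bits:
            if state == "START":
                state = "WAIT_1" if bit == 1 else "WAIT_0"   # split states
                outputs.append("stored")
            elif state == "WAIT_1":
                outputs.append("go-right")
                state = "START"
            elif state == "WAIT_0":
                outputs.append("go-left")
                state = "START"
        return outputs

    if __name__ == "__main__":
        stream = [1, 0, 0, 0, 1, 1]
        assert machine_with_register(stream) == machine_split(stream)
        print(machine_split(stream))

The second machine has more states but no stored datum anywhere; as the theorem says, the register's functional contribution has been absorbed into the process organization itself.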
2.2. No homomorphism to the external world needed for representations: the rubber band example

Powers (1973) gave a counterexample to our intuitions that there must be some sort of structural similarity with an entity or situation in order to interact properly with that entity or situation: tie two rubber bands together so that they are attached by a knot between them. The experimenter puts a finger through one of the rubber bands, and the subject puts a finger through the other one. The subject's goal is to keep the knot over a spot on the table, adjusting to motions by the experimenter. Clearly, if the experimenter moves too fast, the subject cannot keep up, but, if the motions are slow enough, there is no difficulty maintaining the goal of keeping the knot over the designated 'spot' on the table.

The point comes in noticing that a description for this task will involve concentric circles and spirals around the 'spot', and the elasticity of rubber bands, but that the cybernetic system that is competent to the task will not (necessarily) have any structure whatsoever that is homo- or isomorphic to that task structure. Moreover, this point holds in general for task structures and systems that are competent to those tasks.
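A small feedback loop makes the point concrete (our own sketch; the midpoint model of the knot, the gain, and all names are simplifying assumptions rather than anything from Powers). The controller only senses the knot's deviation from the spot and nudges its own finger to reduce that deviation; nothing in it encodes circles, spirals, or rubber-band elasticity.

    # Sketch of a Powers-style tracking loop (illustrative code). The controller
    # is pure error correction; it contains no structure iso- or homomorphic to
    # the task geometry it nevertheless handles competently.

    def track_knot(experimenter_path, spot=(0.0, 0.0), gain=0.8, steps_per_move=5):
        subject = [0.0, 0.0]
        errors = []
        for ex, ey in experimenter_path:            # slow experimenter motions
            for _ in range(steps_per_move):
                # toy assumption: the knot sits midway between the two fingers
                knot = ((ex + subject[0]) / 2.0, (ey + subject[1]) / 2.0)
                err = (spot[0] - knot[0], spot[1] - knot[1])
                # move the subject's finger so as to reduce the sensed error
                subject[0] += gain * 2.0 * err[0]
                subject[1] += gain * 2.0 * err[1]
                errors.append((err[0] ** 2 + err[1] ** 2) ** 0.5)
        return errors

    if __name__ == "__main__":
        path = [(0.5, 0.0), (0.5, 0.5), (0.0, 1.0), (-0.5, 0.5)]
        errs = track_knot(path)
        print("final tracking error:", round(errs[-1], 4))

If the experimenter's moves are slow relative to the loop (several correction steps per move), the error stays small; if they are too fast, tracking degrades, just as in the original demonstration.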
Because of such considerations, interactivism does not begin with assumptions of task-competent processes being somehow iso- or homomorphic to task structures, but, instead, begins with general interaction-based differentiations of the environment. Therefore, interactivist representations, being grounded on (though not simply equivalent to) such differentiation processes, will not necessarily require any iso- or homomorphism with the actual structures in the external world. And that point introduces the next topic.

2.3. Classifications of the environment: implicit definitions, not explicit definitions

One of the key notions in interactivism is differentiation. Interactivism distinguishes between differentiation and predication, and also between contact and content. In the following two sections we will explain these distinctions.
Consider the following scenario: upon detecting a fly, a frog's internal state will change and stabilize in a determinate end state. Interactivism asks the following question: what other environments produce the same end state? It is well known that frogs flick their tongues at any dark, small, and moving objects. Thus, any such environment, as long as the same end state has been reached, can be classified as a "tongue flicking" kind of environment even though such an environment might contain numerous non-fly objects, such as pebbles or dots on a screen. There is a lot of evolutionary tinkering needed to get from such a coarse differentiation of the environment to a finer-grained differentiation, so that the frog can differentiate, say, between flies and pebbles, or between different types of flies.

The above approach to differentiating environments has the consequence that an end state that has been reached will implicitly define the class of environments that could 'cause' it. At this stage we cannot have explicit definitions because "there is no semantic information, no representational content, available" (Bickhard & Terveen, 1995, p. 60) to the organism itself, although there is a factual correspondence with the class of environments. This is another instance where our intuitions can go awry: in interactivism we have states that constitute differentiating functions without any content! Differentiation is not in itself representation.
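The frog case can be sketched as follows (an illustration of ours, with invented predicate names and thresholds): a detector settles into the same internal end state for any dark, small, moving input, and that end state thereby implicitly defines a class of environments without carrying any content about flies, pebbles, or anything else.

    # Sketch of differentiation without content: the detector's final state
    # implicitly defines the class of environments that can produce it, but the
    # state itself carries no representation of flies, pebbles, or dots.

    def detector_end_state(stimulus):
        """Settle into end state 'E1' for any dark, small, moving stimulus, else 'E0'."""
        dark = stimulus.get("brightness", 1.0) < 0.3
        small = stimulus.get("size", 10.0) < 2.0
        moving = stimulus.get("speed", 0.0) > 0.5
        return "E1" if (dark and small and moving) else "E0"

    if __name__ == "__main__":
        fly = {"brightness": 0.1, "size": 0.5, "speed": 2.0}
        thrown_pebble = {"brightness": 0.2, "size": 1.0, "speed": 1.5}
        dot_on_screen = {"brightness": 0.1, "size": 0.4, "speed": 1.0}
        resting_leaf = {"brightness": 0.2, "size": 1.5, "speed": 0.0}
        for name, s in [("fly", fly), ("thrown pebble", thrown_pebble),
                        ("dot on screen", dot_on_screen), ("resting leaf", resting_leaf)]:
            # E1 differentiates a "tongue flicking" kind of environment; it does
            # not say, to the frog, what in fact produced it.
            print(name, "->", detector_end_state(s))

E1 thereby functions as a differentiation of "tongue flicking" kinds of environments; only an external observer, looking at this sketch from the outside, can say what the differentiated class contains.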
Note that, in a homomorphism-requiring paradigm, generalization is the problem which needs to be explained; e.g., how to construct a "chair" representation from encounters with particular chairs. In interactivism, on the other hand, the question is how to move from general differentiations to differentiations fine enough to support functionally useful indications of further action and interaction possibilities; so the aim is towards particularization and not generalization.4 This is indeed possible but sometimes requires complex constructions (Bickhard, 1998). Also note that, as a consequence of such coarse beginnings, all the frog can have is a differentiation of the form "a 'tongue flicking' kind of environment" — in other words, a classification based solely on the frog's internal states and their future possibilities. At this stage only an external observer can represent the correspondence between such internal processes and the external world. Although differentiation and predication might seem to imply each other, they are crucially not equivalent. This brings us to the next unintuitive property of interactivism.

Footnote 4: Generalization certainly also occurs, but this is much easier to account for if representation is a matter of differentiation in the first place, rather than, say, impressions of particulars from which universals need to be inferred or constructed.

2.4. Differentiation is not the same as predication: contact vs. content

Interactivism makes a distinction between differentiation and predication, and argues that differentiation does not entail predication. In standard frameworks, if you predicate something (say, being red) of things then you differentiate all those things from other things (that is, non-red things), and conversely. Yet in interactivism we have to separate these. What distinguishes predication from differentiation is temporality.5 In interactivism differentiation happens when a certain process reaches its end state as described in the previous item — it differentiates the class of conditions that would support arriving at that end state. This is called contact and, as we have said above, there is no content yet. Content comes with predication: "as a predication, a procedure is indicated as a potentiality" (Bickhard, 1998, p. 192, emphasis added).6

Footnote 5: This is another characteristic of interactivism in that, unlike Turing machine theory, temporality has to be intrinsic (Bickhard & Richie, 1983).

Footnote 6: A similar point is that 'concepts' are somehow representations, but they apply or not: they do not have truth value per se — "chair" is neither true nor false. An attribution that a concept can be applied, however, can be true or false; can be a proposition, in standard terminology. Similarly, an indication that an interaction is possible, based on some differentiation of the environment, can be true or false, but, unlike concepts, the differentiations upon which such indications can be based are not themselves explicitly representational.

Predications are constituted as indications of possible interactions. The predication is that "this" environment is appropriate for, will support, the indicated interaction. Such anticipations of potentiality have truth value: they can be true or false. The bearing of truth value is the fundamental characteristic of representation.

Such indications also have resources for accounting for more complex forms of representation. One of these is that such anticipatory indications can branch into multiple possibilities. The frog, for example, might have several tongue flicking opportunities in several directions — perhaps several flies and a worm. Another resource is that such indications can conditionally iterate: perhaps if the frog were to rotate its body a bit, other tongue flicking opportunities would become accessible: the new opportunities are conditional on the rotation, and the frog might have indications of these linked potentialities.

Such branching and iterating indications can link together to generate vast webs of anticipatory organizations, and do so in complex agents, especially humans. Special kinds of subwebs within such overall webs can constitute representations of objects, and representations of properties realized within those webs can constitute representations of more abstract kinds, such as of the number three (Bickhard, 2009).

Within even the simplest branching organization of anticipations, the organism must select what it will in fact attempt to do. The indications are of what is (indicated to be) available, while the selections within such possibilities will depend on other conditions (e.g., hunger) and goals (e.g., to return to a burrow). Such selection processes constitute the domain of motivation (Bickhard, 2000, 2003, 2009).

Note that this means that the indicative relationships cannot be simply causal, nor simply a switching relationship. They have to be indications of possibilities among which motivational selections can take place.
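The branching and conditionally iterating character of such indications, and the sense in which they bear truth values, can be gestured at in code (our sketch; the toy environment and all names are invented). An indication of a potential interaction is true just in case its dynamically presupposed conditions actually hold, that is, just in case attempting the interaction in this environment would succeed.

    # Sketch of a small web of interaction indications. Indications can branch
    # (several opportunities at once) and conditionally iterate (rotating opens
    # up new opportunities). An indication is true or false depending on whether
    # its presupposed environmental conditions actually hold.

    INDICATION_WEB = {
        # current contact state -> list of (interaction, anticipated next state)
        "E1":         [("flick-tongue", "fed"), ("rotate", "E1-rotated")],
        "E1-rotated": [("flick-tongue", "fed")],
    }

    def attempt(interaction, environment):
        """Did the interaction's presupposed conditions actually hold?"""
        if interaction == "flick-tongue":
            return environment.get("edible-in-range", False)
        if interaction == "rotate":
            return True   # rotating is (almost) always supported
        return False

    def evaluate_indications(state, environment):
        results = {}
        for interaction, _anticipated in INDICATION_WEB.get(state, []):
            # the indication predicates of *this* environment that the
            # interaction will succeed; attempting it reveals its truth value
            results[interaction] = attempt(interaction, environment)
        return results

    if __name__ == "__main__":
        fly_environment = {"edible-in-range": True}
        dot_environment = {"edible-in-range": False}   # a dot on a screen
        print("fly:", evaluate_indications("E1", fly_environment))
        print("dot:", evaluate_indications("E1", dot_environment))

Note that whether an indication is true is settled by the environment, not by anything explicitly encoded in the web itself.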
2.5. Content is NOT represented: what are functional presuppositions?

Functional presuppositions are almost trivial when one thinks about an ordinary machine, say, a pen.7 What conditions should there be in order for it to function properly? Some of them are the following: the temperature of the environment should be between certain limits (e.g., not higher than 1000°), the surface the pen will be used on should have resistance between certain limits (i.e., if it is too slippery the ink will not hold, if it is too high the pen will get stuck), the ink and the chemical composition of its container should not react, there should be enough downward force (e.g. gravity!), and so on. It is clear that although this list begins with trivial considerations, it can easily expand to include so many elements that one can reach all the way up to the whole universe. But a pen nevertheless manages to work quite effortlessly, so we shouldn't worry much about this theoretical issue at this stage.

Footnote 7: With human derived, artifactual, normativity.

Now, as compared to the standard accounts of representations, there is again an unintuitive turn in interactivism when it comes to functional presuppositions. Assume that a frog sees a fly or a worm. Usually the image thus produced (or some homomorphic structure of it) is taken to be the representation of the fly or worm, so that the representation then causes a tongue flicking response, or supports an (unconscious) inference to a conclusion that such a tongue flicking is possible. In interactivism, on the other hand, there being a fly or a worm in an appropriate location is the "presupposed conditions for the proper functioning of the tongue flicking procedure" (Bickhard, 1998, p. 192, emphasis as in the original).8 Thus what many theories of

Footnote 8: Note that anticipating is a functional process, realized in certain kinds of agentive
This is in contrast to neural network and Parallel Distributed Processing approaches, where the distributivity can be taken as an implementational distributivity. It is important to keep in mind this divergence between neural network accounts and interactivism when we begin investigating Gładziejewski's account of Beer's virtual organisms in section 3.2.2.

As a final remark, note that if the concept of goal requires representations then the whole account will be circular. The goal subsystems in interactivism can be as simple as a switch (e.g., repeat procedure A with variations if the error is not below a threshold or else switch to procedure B). In other words, goals in interactivism are functional goals of the system and its subsystems, and "the logical function that goals serve … is to provide criteria for error" (Bickhard & Terveen, 1995, p. 63).
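The switch-like character of such goals can be sketched as follows (our code; the procedure and parameter names are invented): the "goal" is nothing more than an error criterion that keeps procedure A iterating with variations until the error falls below a threshold, and otherwise switches to procedure B.

    # Sketch of a goal as a simple error-criterion switch: repeat procedure A
    # with variations while the error stays above threshold, otherwise switch
    # to B. The goal is a set-point providing a criterion for error, not a
    # representation of anything.

    import random

    def procedure_a(parameter):
        """Some interaction whose outcome depends on a tunable parameter."""
        return abs(parameter - 3.2)   # the resulting 'error', unknown to A itself

    def goal_switch(threshold=0.1, max_tries=20, seed=0):
        rng = random.Random(seed)
        parameter = 0.0
        for _ in range(max_tries):
            error = procedure_a(parameter)
            if error < threshold:          # criterion met: switch to procedure B
                return "procedure_B", parameter
            parameter += rng.uniform(-1.0, 1.0) + 0.5   # retry A with a variation
        return "procedure_B", parameter    # give up and switch anyway

    if __name__ == "__main__":
        print(goal_switch())

Nothing in the switch represents the goal state; it merely supplies the criterion against which error is detected.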
On the other hand, goals — as 'merely' "final common trajectories" — can be emergent, much as in Brooks' subsumption robotics, where local interaction rules generate global goals for the robot, such as picking up soda cans in an office (Brooks, 1991). We think that Brooks' robots render the emergence of such final common trajectory outcomes from system processes especially clear. But because Brooks' robots do not have anticipatory processes, they lack interactivist representations properly understood.

It might seem at this stage that interactive representation requires goals, but this is not the case, and this brings us to the next item.

2.7. Representations without goals: anticipatory functions are all that is needed

In standard approaches to cognition, representation is considered to be an all or nothing phenomenon. In interactivism there is a proliferation of levels. Although we will not present the full hierarchy here (see Bickhard, 1998), it is easy to see that if we get rid of the goal subsystem in the figure above (G38), we can still have indicative anticipations with truth values and hence obtain a more primitive form of representation. Actually, the reason we first introduced error within a larger goal subsystem was for ease of understanding.

The key insight is that truth value does not require goals, only anticipations. The anticipatory indications may be selected toward the functional service of reaching a goal, and "reaching a goal" is another way in which the overall process can succeed or fail, but such "goal failure" is neither necessary nor sufficient for error per se. It is not necessary because the anticipation could fail whether or not the goal is reached; e.g., a frog might anticipate jumping into the pond with the goal of escaping a predator without realizing that one of its legs is stuck in a crack. The rotational motion resulting from the attempt might nevertheless rescue it from the predator's attack. Conversely, the anticipation may succeed and nevertheless fail to achieve the goal (e.g., a frog in a laboratory might flick its tongue successfully at a dot on a screen without achieving its goal of satiating its hunger). Thus even with goals, it is still the anticipatory function that has presuppositions and truth values.

More formally, normative goal functionality and normative anticipatory functionality must be differentiated, and, thus, it must be noted that truth value does not depend on goal functional success, though it may involve such success (or failure). What is required is normative functionality, and anticipation is such a normative functionality that is more basic than goal functionality. (Meanwhile, "goals" need only be set-points, not representations themselves.)
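The double dissociation just described can be put in schematic form (a toy illustration of ours; the scenario flags and outcome strings are invented) simply to make explicit that anticipatory truth value and goal success vary independently.

    # Toy illustration of the dissociation between anticipatory truth value and
    # goal success: an anticipation can be false while the goal is reached, and
    # true while the goal fails.

    def pond_escape(leg_stuck):
        """Frog anticipates landing in the pond while fleeing a predator."""
        anticipated = "in-pond"
        actual = "spinning-by-the-crack" if leg_stuck else "in-pond"
        anticipation_true = (actual == anticipated)
        goal_reached = True      # assume the predator is evaded either way
        return anticipation_true, goal_reached

    def lab_tongue_flick():
        """Frog anticipates tongue contact with a dot on a screen."""
        anticipated = "tongue-contacts-target"
        actual = "tongue-contacts-target"     # the flick lands as anticipated
        anticipation_true = (actual == anticipated)
        goal_reached = False     # a dot on a screen does not satiate hunger
        return anticipation_true, goal_reached

    if __name__ == "__main__":
        print("stuck leg, lucky escape:", pond_escape(True))    # (False, True)
        print("lab dot, hungry frog:  ", lab_tongue_flick())    # (True, False)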
At times such attractors have been proposed as constituting representations of any pattern or start condition that is within the relevant attractor basin: that a particular kind of initial condition has been encountered is supposed to be "represented" by the fact that the dynamics have settled into that 'corresponding' attractor.

It should be clear, however, that, while such a dynamic system may differentiate among its various possible attractor basins, there is no representation of what is being differentiated. As constituting differentiators, such systems might be useful for a broader system, with the appropriate normativities, for setting up indications of further dynamic possibilities, in which case normative truth value, thus representing, might emerge.

But the differentiators do not themselves represent: what they differentiate is inherent in the dynamics, but the dynamics cannot be "true" or "false." The dynamics settle into a particular attractor or another as a matter of factual contact with the environment. There is no anticipation that could yield emergent truth value (though, again, such an anticipation might be set up conditionally on such differentiations). The settling may be a simple or a complex dynamical process, but it is not in itself normative. If it were, then, for example, a soap film settling into an "equal surface tension" form within a wire loop would constitute representation.

Neural and connectionist nets are dynamic systems that can settle into having certain attractor landscapes via training, which might involve feedback, or might involve "settling" in terms of the statistical properties of the input stream. Although they can be trained, what they ultimately learn is simply correspondences. The nets do not "learn that they are correspondences, nor what those correspondences are with" (Bickhard, 1993, p. 295). In this regard, they are no different from standard covariance approaches to representation, which require correspondences one way or the other. As we will see, this is also the case for Gładziejewski's two-factor approach. For interactivism, neural networks can be taken as emergent implicit differentiators. Because this is only part of what is required, they are not learners of emergent representation (Bickhard & Terveen, 1995, p. 301).

If all there is to "learning" is relaxation to an attractor, interactivism will not consider it to be genuine learning. Interactivism requires that the system learn for itself. In order for a system to learn for itself, it must have the capability for generating system detectable error — intrinsic normative error, not just 'error' as defined by an external designer or observer — and for that to happen its processes should be of an anticipating nature. It is crucial to note that such anticipations entail a reversal of emphasis from input-to-output dynamic causality to output-to-input potentialities (Bickhard & Terveen, 1995, p. 294). The system must anticipate its output and the interaction of that output with other surrounding processes should be the new input. In other words, the flow must be inherently circular, not sequential. In this sense representation "is fundamentally a matter of anticipation, and not a matter of a system being a retrospective spectator back down its input sequence" (Bickhard & Terveen, 1995, p. 304). Note that approaching the issue of representation from the general issue of "how learning for a system itself is possible" again led us to render representations in terms of indications of potential interactions, as we were arguing was the case in the previous items.
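This output-to-input reversal can be sketched in code (ours; the toy environment and all names are assumptions): the system emits an interaction, anticipates the input that should come back, and treats a mismatch as error detectable by the system itself, which then drives adjustment of its anticipations; no external designer supplies the error signal.

    # Sketch of output-to-input anticipation: the system anticipates what its
    # own output will bring back as input; a mismatch is error detectable by
    # the system itself, and that error drives adjustment of the anticipations.

    def environment(output):
        """Toy environment: the interaction's return input, unknown to the system."""
        return {"push": "resists", "probe": "yields", "flick": "contact"}.get(output, "nothing")

    class Anticipator:
        def __init__(self):
            # anticipations: output -> anticipated return input (initially wrong)
            self.anticipations = {"push": "yields", "probe": "yields", "flick": "nothing"}

        def interact(self, output):
            anticipated = self.anticipations[output]
            returned = environment(output)            # output first, input second
            error = (returned != anticipated)         # system-detectable error
            if error:
                self.anticipations[output] = returned   # adjust the anticipation
            return anticipated, returned, error

    if __name__ == "__main__":
        agent = Anticipator()
        for out in ["push", "probe", "flick", "push", "flick"]:
            print(out, "->", agent.interact(out))

Error here is defined relative to the system's own anticipations rather than by a designer-supplied target, which is the sense in which such a system could begin to learn for itself.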
3. A critique of Gładziejewski's charge against interactivism

3.1. An outline of Gładziejewski's argument
some level of cognition. Only the third seems specific to interactivism. But then how can one infer that "indications have property (3) in virtue of having properties (1) and (2)" (p. 18)? At best (3) follows from (1) and (2) if the intrinsic normativity of action is recognized and taken into account, but Gładziejewski doesn't do that; instead he treats action as a simple motion or cause, and then smuggles normativity in as "success." If action is not taken as normative, then (3) does not follow from (1) and (2). There have to be other criteria that are missing in his account, which in turn lets him accuse interactivism of reductionism. We have presented some of these criteria in section 2 above.

Perhaps more fatally, the problem is that property (3) is stated so vaguely that it might be acceptable per se! For example, if "determined by" is taken to mean "caused by" or "inferable from," then the clause seriously misrepresents the interactivist model. If "determined by" is understood as "constituted by," then it might be correct. Gładziejewski ignores how dynamic functional presuppositions are intrinsic to interactive anticipations, and so misconstrues the interactivist account of content. He is missing its emergent core.

Now, focusing on the third property, Gładziejewski says that the "simple fact that some internal activity could eventually cause an action whose success depends on environmental conditions gives, by itself, no leverage to the idea that this activity represents those conditions" (p. 18, emphasis in the original). But for interactivism the relevant activities do not in any way represent the conditions they functionally presuppose. Rather, those 'activities' emergently bear those assumptions about the environment. So, the representational relationship is with the environment, not with the content about that environment. Unfortunately, as we mentioned in the previous section, he does not give clear enough criteria at this stage for what makes something represent something else. Thus, we have to go back to our list above.

If Gładziejewski is taking the first item in the list, then at least a necessary condition for him is that the representation be a stand-in. But this makes the charge vacuous because, other than Ramsey's job description challenge, he does not present any arguments as to why one should stick to such a norm for representations. Then, the question becomes why the JDC requires pre-theoretical and intuitive notions of representation, such as having structurally similar inner models, especially in the light of all the problems that beset such standard notions. For this reason it seems that he is begging the question. As mentioned above, Gładziejewski himself agrees with those charges. But the "charges" that Gładziejewski agrees to are in-principle, and thus his move is to a model that (he agrees, apparently) is in-principle impossible. That is, Gładziejewski seems to agree with the critique that correspondence, thus structural, models of mental representation cannot be correct, but then advocates precisely such a correspondence/structural model.

Finally, the accusation of offering an over-liberal account of representation follows as a consequence of the first argument. Once one accepts that everything has been reduced to causal-action talk by a species of ACToR, one might think that there are many types of non-normative action guidance.15 Being non-normative, such action guidance would "certainly" not be representational. So, if action guidance that is not representational seems possible on this construal of non-normative action, one ends up being too liberal in attributing representations to such "actions." But if the first argument is unsuccessful, as we have claimed, this consequence falls with it too.

Footnote 15: It's not clear what non-normative action guidance would be, unless the "action" at issue is itself non-normative, and, thus, not really action. For us a mere, say Newtonian, motion without normativity does not fall under the category of "action."

To sum up, Gładziejewski's argument at this stage seems to be merely that a representation must be a stand-in because of the JDC, and interactivism's representations are not stand-ins. He then goes on to offer two more arguments (again short ones) against a possible response to his accusation that interactivism is reductive and hence that there is "no reason for claiming that interactivism gives us an explanatorily valuable notion of representation" (p. 18). He begins by stating that "interactive representations are not explicit, but rather represent their contents only implicitly" (p. 18). He then presents an argument from Ramsey that implicit representation does not entail inner, structural representation, and, thus, does not satisfy the JDC.

First of all, note that, as mentioned more than once up to now, the contents are not represented. So, the locution "represent their contents" indicates a confusion on Gładziejewski's part. It is not the representing of contents that constitutes something as representational: it is the possessing of (presupposed) contents about something else (e.g., the environment) that makes something representational.

Gładziejewski mentions that presuppositional content is implicit, then references Ramsey's arguments concerning implicit representation, and then concludes that "this does not entail that there is anything even remotely functionally resembling a representation inside the system" (Gładziejewski, 2016, p. 19, emphasis in the original) — that is, there is no entailment that there is an isomorphic structure inside the system. It is not clear that Ramsey's notion of "implicit" is the same as the interactivist notion,16 but, setting that aside, it is certainly correct that interactivist representation does not require isomorphic structure inside the system. Citing that as a criticism of the interactivist model, however, thoroughly begs the question whether structural isomorphism should be criterial for representation at all. The interactivist model holds that the most basic criterion is that of bearing truth value, not of being interpretable as an isomorphic correspondence. So, we welcome the conclusion that implicit presuppositional truth value does not entail structural correspondence. The reason Gładziejewski has not found anything remotely resembling (structural) representations is because he is looking for something in a place where it is by definition not there.

Footnote 16: There are, in fact, several notions of "implicitness" within the interactivist model per se (Bickhard, 1998).

His second argument takes the form of a reductio. Even if one accepts implicit representations (of the stand-in type? there also seems to be a confusion in that interactivist content is implicit, but representation in the sense of indications is not implicit), one still "needs to provide a good rationale for using representations talk at all" (p. 19). He then reconstructs a possible interactivist response as follows:

1. Each indicated action has dynamic presuppositions.
2. Those presuppositions constitute the content of representations in interactivism.
3. "There is nothing in the system that would explicitly represent this content" (p. 19). The representations are indications of potential interactions that "enable the organism to perform actions" and this "is the only way in which those indications are related to external conditions" (p. 19).
4. Hence, all the content must be implicit in interactivism.

For this argument to work as a reductio, implicit content must be unacceptable. But implicit content is unacceptable, in this argument, only if structural isomorphism is criterial for representation. As mentioned several times so far, we do not accept that criterion, so relying on it begs the question.

Furthermore, we can accept that this is a partially correct characterization of the interactivist position — content is most fundamentally constituted in implicit presupposition. Nevertheless, interactivism does allow for the existence of explicit representations. Second level representations that interact with the first level do render explicit properties that are only implicit in the level below. This requires further model construction within the theory, and perhaps that's why it is missing in Gładziejewski's article. Because of time and space constraints we cannot provide further details here, but the relevant constructions are available in Bickhard (1998).
language—there is ‘error’ only in the interpretation of the observer or interactivist model on this point either. Although they do not violate
designer. To claim that the adjustments in, say, a back-propagation anything in the interactivist model, it should be clear by now that they
system constitute genuine learning would require also saying that the are missing many crucial aspects that they would need to have re-
dynamic adjustments of a soap film across a wire loop constitute presentations or genuine learning.
‘learning’ (see item 1.8 in the first section). To repeat, the issue, then, is
“what constitutes learning for a system itself.” 4. A critique of Gładziejewski's two-factor theory
Perhaps a general lure of the standard neural network architectures
and their learning capacities is the following simplified analogy. In After presenting his critique of ACToRs, Gładziejewski offers a two-
nature, brains resemble neural networks and the designer norms pro- factor theory of representation. He claims that since action guidance by
vided to these networks might be claimed to be given to the brains via itself is not sufficient, one needs action guidance that uses representa-
innateness or by some early developmental constraints provided by the tions which have structural similarities to whatever they are re-
environment. Since learning is genuine in nature, this analogy implies presenting (Gładziejewski, 2016, p. 25). Although, by now, it should be
that neural network learning could be taken as a genuine learning also. fairly clear how an interactivist critique will proceed against any theory
But as we have seen, such networks lack anticipations and thus nor- with such a requirement of homomorphism, we will nevertheless pro-
mativity. According to interactivism the lack of normativity in such ceed with it in the hopes of further illustrating certain aspects of in-
standard architectures is an indication for a need of a radical shift in teractivism.
design principles.18 Furthermore, we find that the analogy to brain One motivation for augmenting action guidance theories is the
processes is not entirely accurate. Because of time and space constraints following (p. 22):
the reader is referred to Bickhard (2015a, b) for a model of brain
“What is missing from this picture is a good idea — or any idea —
processes which is compatible with interactivism.
about the nature of differentia specifica that distinguishes re-
Third, according to interactivist criteria, Beer's system is at most a
presentational action guidance from the kind of action guidance that
basic differentiator of two environments. As mentioned in items 1.4 and
is achieved without employing representations.”
1.8 in section 1, this is not enough for representations although it is a
required first step. Indeed, this is a general critique of all claims that This illustrates a danger of using the notion of “action guidance”
attribute representational states to neural networks from the point of instead of “interaction indication” — ‘guidance’ does not carry the
view of interactivism (Bickhard & Terveen, 1995). Although there are normative anticipatory meanings that “indication” does, so, unless
unsupervised nets also, such nets still relax into states depending on the guidance is further explicated in terms of anticipatory indications, it
statistics of the input flow, and there is no normativity. Moreover, they can easily look like there is nothing normative, thus nothing re-
have to be designed so that the statistics that they pick up on are going presentational, involved.
to drive the relaxation dynamics in the designer-desired manner. So, for Gładziejewski gives an example where the task is to navigate from
any neural network to be a candidate for having representations, they point A to point B in a city and he presents three different methods of
not only have to be more than input classifiers according to a designer- accomplishing this task (p. 22). The first way is just by asking a local to
specified norm, but their interactions with the environment must also guide you through the streets. The second way is by following a trail
be inherently normative and necessary to the constitution of re- from A to B, such as by following small red balls. The third way is by
presentations. using a map where A and B are marked on it. Gładziejewski claims that
Fourth, focusing on Beer's case now, Gładziejewski's discussion only in this last instance can the required differentiation between re-
seems to us to be violating item 1.1 in section 1: it presupposes that presentational and nonrepresentational action guidance be established
representations have to be structures or singular elements. The inter- (p. 22):
activist model does not require that indications are realized in struc-
I propose that the difference-making factor here is the fact that in
tures, so Beer's models could be accepted in terms of their not having
order to succeed, person 3 exploits what we might generally call a
singular structures. That is, Beer's model could be accepted as a simu-
“correspondence” between the map and the terrain. More precisely,
lation of one aspect of the interactivist model — a kind of “minimal”
what I think makes the case of person 3 a case of representation-use
simulation of differentiation — but it cannot be a realization of the
is the fact that (1) this person (a representation user) uses a map (a
model per se. So, it is not the case that “Beer's virtual agent is a non-
representational vehicle) to guide her action with respect to the
representational system that is categorized as representational by Bic-
terrain (what is represented), and (2) she does it by employing a
khard's theory” (p. 19). Beer and Gładziejewski both assume that re-
strategy whose navigational (action-guiding) success is non-acci-
presentations have to be singular items of some sort; that is simply
dentally dependent on whether a certain type of relation holds or
wrong and rejected insofar as representing is an anticipatory activity
holds to a sufficient degreed between the map (representational
and is modeled as such. So, again, the charge against the interactivist
vehicle) and terrain (what is represented).
model is question begging.
In general, although we find Beer's models interesting, questions as It is unfortunate here that the example is explicitly one of using an
to whether they realize representing are multiply confused and con- external representation. Such external representations do exist, and
fusing. They do not contain correspondence structures, but that is OK they do provide a source for the ‘resemblance’ relation between re-
for interactivism. They also do not contain any indicative structures or presentation and represented in the person making use of the re-
elements, but that is also irrelevant: indication or anticipation is func- presentation (as well as the source for normativity in that person). But it
tional, not structural, and need not be realized in singular elements or is precisely the representing that is being done by that person that we are
structures at all. So Beer's models do not violate anything in the attempting to model. Inner representing cannot be of the same form as
external representation: external requires interpretation while internal
cannot, on pain of unbounded regress. This issue resurfaces at multiple
18
E.g., metanets where the connections of a given neural network are themselves points in the article when Gładziejewski makes such claims as “the
nodes of another network so that the connections can be terminated or added or other- lesson is that representations are things that succeed in playing their
wise modified by the metanet itself might be a step in the right direction (though far from action guiding function by exploiting a certain relation between the
sufficient). This makes it possible to change the whole graph topology of the network in
representation itself (the vehicle) and what it represents” (p. 23). Our
addition to mere adjusting of weights. Note that metanets are a more accurate model of
real brains where new connections come and go depending on the learning involved (see
immediate reaction is to ask, exploited by whom? What is needed is for
Bickhard & Terveen, 1995, pp. 301–307 and the references therein). Here too, however, these neural (and other) activities to modulate other activities in an-
there is still no normativity. ticipatory ways.
It is even more unfortunate for Gładziejewski to claim that a "map is a useful action-guider only if its structure matches or resembles — or to the degree it matches or resembles — the structure of that which it represents" (p. 23). There are at least three problems here:

1. Such "matching or resembling" can be determined only from the perspective of an external observer (unless it is rendered in terms of successful interaction indication/guidance, in which case it is either false qua structure or it amounts to the interactivist model).
2. Everything matches/resembles everything else if the 'appropriate' 'mappings' are used — that is, what counts as structure is not determinate without a readout or interpretive process, and that is what is at issue in the first place.
3. The only functional consequence that 'structure' can have is to influence the flow of process, and any such influence can always be incorporated into the organization of the system dynamics without any such structure at all. In other words, this aspect of his claim runs against section 2.1 above. Perhaps even more fatally, it runs against section 2.2 too, the rubber band experiment: a goal-directed cybernetic feedback system can adequately "guide" task performance when the task has a particular structure that has no natural relationship with the organization of the cybernetic system at all.

Recall that the central point of the interactivist model is that interaction anticipation is what constitutes the emergence of representational normativity from functional (or pragmatic) normativity. If Gładziejewski recognized this central aspect of the interactivist account, he would, presumably, address how it is possible to have "practical error" regarding interaction anticipation without having truth value, and, therefore, without having representation. Instead, he seems to subsume the interactivist model under the "guidance" model with no normativity, and then claims that guidance per se is not enough — you need structural guidance. But (and again) the only support for this is his repeated invocation of the JDC, and, at least with regard to the interactivist model, that completely begs the question. Even on its own terms, this is a strikingly unsupported, ad hoc, criterion. At best, it makes an appeal to our intuitions as to what we would want to call a(n external) representation (no 'representing' as primary here); at worst, it involves deep confusions about how structure can be defined, and how structure can influence process. Our question at this level is: What if that "job description challenge" is itself based on false presuppositions? This would seem to be the core point.

One final remark before concluding this paper is that Gładziejewski mentions Grush's emulation theory of representation (Grush, 2004) as "a genuinely representational explanation that uses the notion of internal representations as mechanical or automated 'models' or 'maps'" (p. 24). Bickhard has published a critique explicitly directed at Grush's model (Stojanov & Bickhard, 2004). Moreover, as mentioned before, there is no problem with structures constituting representations so long as there is some normative dynamics to interpret those structures; this is as true for internal 'structures' as it is for external 'structures'. Also, Grush's model does not actually satisfy the JDC because the "structures" have no (necessary) structural homomorphism with what they are used to 'guide' — they are anticipatory (though there is no account of normativity), they are useful for 'fast' anticipations, and such anticipation does not require structural correspondence.

5. Conclusion

Interactivism, as a model of representation, has many novel aspects that are inconsistent with the strongly ingrained intuitions of standard and long standing theories of representation. However, these counterintuitive aspects of interactivism are needed to overcome the insufficiencies and incoherencies of the available models of representation. We believe that one fundamental lesson of the history of physics is that our most basic intuitions can be transformed or even abandoned if the available theories turn out to be inadequate. For these reasons, and in the hopes of making our responses to Gładziejewski more accessible, we have presented some of the key aspects of interactivism in section 2.

In section 3 we have outlined Gładziejewski's argument, and his conceptual and empirical critiques against interactivism. There are four main issues that we find problematic. First, missing the emergent nature of representations in interactivism, he accuses it of being reductive to action only. According to him, interactivists must then grant representations too liberally to too many systems, obliterating the explanatory value of theories of representation. As is hopefully clear from our response, the emergent core of interactivism is necessary for its ambitions. Representations are not structures but processes emergent from interactive processes, hence the title of this article: Representing is something we do, not a structure that we "use."

This brings us to the second issue in his account. Gładziejewski is committed to the necessity of a homo- or isomorphic idea of representation as a stand-in. This in turn corners him into thinking that representations have to be structures, which means that they are "used" somehow. We not only find such a structural similarity requirement for representations ad hoc, but for many reasons insufficient, and, ultimately, incoherent. In order to overcome such insufficiencies, interactivism has had to draw distinctions such as contact versus content or differentiation versus predication. We have delineated these in section 2, and some additional explications are scattered through the article. As we have outlined in section 4, Gładziejewski finds an interactivist account insufficient and "enhances" it to a two-factor theory where there needs to be both structural similarity and action guidance that exploits this in a suitable manner. Besides the infinite regress that always arises with models based on analogies with external representations—just as an external representation needs a cognitive agent to interpret it, the cognitive agent's internal representation will need a homunculus, an internal agent within the agent, to interpret it, then the homunculus' internal representation will need a second homunculus, an agent within an agent within an agent, to interpret it, and so on ad infinitum—and the ad hoc nature of his account, we presented three further problems with such an approach.

Third, there seems to be a neglect on Gładziejewski's part regarding the importance of normativity and its naturalization in interactivism. This comes to the fore in his empirical charge against interactivism (section 3.2) via Beer's virtual organisms. According to interactivism, representations have to have a normative aspect, and the only way to naturalize normativity without obliterating it is via anticipations. We pointed out that if anticipations are reduced to a dynamical system's usual progression towards one of its attractors (and the same applies to neural networks), then one cannot speak of normativity. We argued that since, according to interactivism, Beer's neural networks are at most differentiators (albeit ones that "learn" to differentiate) with no anticipatory processes, Gładziejewski's empirical charge using this kind of neural network begs the question against interactivism.

Fourth, key properties and relations of content from an interactivist standpoint seem to be missing in Gładziejewski's account. In sections 3 and 4 we have tried to emphasize that dynamic functional presuppositions are intrinsic to interactive anticipations and that the activities of the organism do not represent those conditions and, thus, content. The fact that the representational relation is not with the content but with the environment seems to be a general misunderstanding that has surfaced in multiple places.

We conclude that the interactivist model is not vulnerable to the criticisms posed by Gładziejewski, while his two-factor model is vulnerable to criticisms of correspondence models. We hope that our explications as to why that is so will promote better understanding of the interactivist model.