(DOI 10.1002/(SICI)1097-4571(199106)42:5<372::AID-ASI7>3.0.CO;2-T)
Yves J. Khawam
École de bibliothéconomie et des sciences de l'information,
Université de Montréal,
Montréal (PQ), Canada H3C 3J7
This study addresses the problems involved in adapting cybernetic models to operational realities. More precisely, three epistemological views are investigated in turn so as to determine the problems regarding information transfer between a model and the real world. Of the three epistemologies under investigation (realism, a priorism, and phenomenology), the last shows the most promise in opening up operational possibilities for the model, but introduces problems involving the adaptation of the model to reality.
Preface
The purpose of the present work is to discuss concepts underpinning the building of truly intelligent machines: to offer meaning to biological systems in
psychological settings. Such an endeavor is ipso facto
within the scope of Artificial Intelligence (AI), yet it is
an area which remains largely ignored by AI researchers, efforts being instead concentrated on the development of faster and larger rule-based systems.
In tracing the emergence of AI-related paradigms
one finds that world views have shifted from the Newtonian mechanics model of explaining phenomena,
to information theory, to the information processing
level of modeling which presently characterizes cognitive science (McCorduck, 1979). Indeed with each shift
in paradigm, cybernetics/AI research has turned to
philosophy in order to secure a new paradigm. Once
secured, however, the research strangely ceased further
investigation into the philosophical grounds of the work
at hand. Even though not sustained by research, it is this author's contention that such alienation hinders research in AI.
The information community would benefit greatly from such exposure since it may lead to rethinking some of the basic aspects of intelligent systems, which in turn would bring a fresh perspective to areas of stagnation where progress is sought solely through the development of more efficient algorithms.

Received July 26, 1990; revised January 1991; accepted March 1991. © 1991 by John Wiley & Sons, Inc.
This study attempts to readdress philosophical
grounds for artificial intelligence, based heavily on the
work of the last school to have systematically investigated such issues: the genetic epistemologists.
Introduction
machine to conserve the same rules during a specified activity, nor can it distinguish between different
activities.
Friedberg, cited by Green (1963), has shown that a computer cannot resolve a problem of this type if it works randomly. His experiment consisted of a computer producing, from certain instructions, a combination which in turn permitted another computer to produce a unique output for every two or three inputs; for example, to invent the rules of binary addition. After 10,000 attempts, the appropriate program had still not been produced even though the combinations were free. Everything occurred as though the input did not exist for the machine. Its output being aleatory, the image it created of the universe was constantly changing and therefore bore no constant relation to the configurations of the inputs at a given time. As was seen with the empirical machine, meaning can only be established by considering symbols and objects in the same manner. This method cannot be applied here due to the nature of the initial hypothesis: it is a principle of conventionalism not to bring meaning to objects. For the machine, this means that all manipulations are performed on insignificant symbols: given an object (an input), the machine will invent an undefined quantity of names, without ever conserving one, during the sequence of operation.
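Friedberg's negative result can be sketched as a blind random search. The encoding below (a "program" as a lookup table for 2-bit addition) is a hypothetical simplification, not Friedberg's actual setup, but it shows how draws unconstrained by the input fail to converge on the correct rules:

```python
import random

# Toy version of Friedberg's experiment as reported by Green (1963):
# look for a program that performs binary addition by drawing candidate
# programs at random, with no feedback tying output to input.
# The lookup-table encoding is an assumption made for illustration.

def target(a, b):
    return a + b  # add two 2-bit numbers (3-bit result)

def random_program():
    # A "program" here is an arbitrary table mapping each input pair
    # to a random 3-bit output: the machine "invents names" freely.
    return {(a, b): random.randrange(8) for a in range(4) for b in range(4)}

def is_correct(prog):
    return all(prog[(a, b)] == target(a, b) for a in range(4) for b in range(4))

random.seed(0)
hits = sum(1 for _ in range(10_000) if is_correct(random_program()))
print(hits)  # 0
```

Each draw matches the 16-entry target table with probability (1/8)^16, so 10,000 free attempts are expected to produce nothing, mirroring the aleatory behavior described above.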
One could build a set of rules of stability into the machine. This is what the conventionalists suggest when they speak of constituents of structures, a structure being by definition resistant to transformations. The machine essentially produces combinations, and insofar as some are repeated or varied, it could produce rules for the combinations (programs). From these programs, it could produce one that would stabilize the rules during a certain activity. However, this stabilizing would itself be subject to the activity of the machine and consequently would have to be regulated by a superprogram, and so on. Here one finds again the same indefinite regression encountered with the empirical machine, and it arises from the same attempt at introducing into the system a property that the system itself has to build up in order to exist in that manner.
Another approach would be to make the conventionalistic machine reliable: to impose a structure upon it that would not allow it to transform an input into a random output. Depending on the conception of this structure, the transformation would make it either an a prioric or an empirical machine.
It has been indicated that in order to establish a link between the subject and the medium, empiricism has had to attribute a certain activity to the subject. In pragmatism (a mild approach to conventionalism, since the consequences of actions are considered), the problem is inverted: it is one of the subject regulating its own activity, the medium being unable to regulate it from the exterior. If one presupposes these internal difficulties resolved (which, in the case of psychology, would mean turning the problem over to biology), and if one imagines a machine capable of inventing and following rules, it becomes evident that one has still not dealt with the adequacy of these images with respect to the exterior reality. For a pragmatic machine that does not have to deal with the effects of its actions on reality, a criterion is a mere convention; but once a criterion is chosen, the machine will have to keep its outputs consistent with the inputs, either by intervention within the medium or by modifying its internal rules. One sees that even in its most attenuated form, conventionalism does not allow for a link between the signifier and the signified: the object does not exist independently from the will of the subject, since the idea of convenience is only part of the machine's internal conventions.
Prospects Based on Phenomenology
The last solution investigated is derived from phenomenology (the descriptive analysis of the subjective process). This relativistic view links subject and object together through an interactive process within a preestablished harmony, which removes the drawbacks postulated in realism and a priorism. The relativism or interactionism which Piaget (1970) discusses resolves these drawbacks by coupling the empirical and pragmatic machines, that is, by recognizing their complementary aspects and insisting upon the need for psychology to base itself on a machine which already contains an elementary structure with an internal dynamic (the reflexes tied to the needs of the organism), the cognitive construction constantly assimilating past and present actions. This coupling of the two machines, however, does not stipulate a fusion of the two preceding theories (the logical addition of two static systems), but the interactive coupling of two dynamic systems, which will involve new properties different from the ones previously mentioned. Intuitively, an example of this process would be the coupling of a motor and a regulator, which would result in a stability and control that did not preexist in either of the components. It follows that these properties depend on the type of transformations produced by the linked components and on the disposition of the communication channels between the elements of the system. But these questions involving the nature of the organic needs of a structure, and of the coordination between the reflex mechanism and the structuring of the circuits (permitting the elementary operations of the combinative part and of the sensory organs), will not be dealt with in this work since their essence is more of a biological nature.
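The motor-and-regulator intuition can be sketched as a minimal discrete-time feedback loop. The update rule and all constants below are illustrative assumptions, not drawn from the text; the point is only that the coupled system settles to a stability neither component has on its own:

```python
# A motor alone drifts away; a regulator alone does nothing; coupled,
# the pair converges to a stable operating point. All dynamics and
# constants here are hypothetical, chosen for illustration.

def step(speed, setpoint, gain):
    speed = speed * 1.05 + 1.0          # motor: unregulated upward drift
    speed -= gain * (speed - setpoint)  # regulator: push back toward setpoint
    return speed

unregulated = regulated = 0.0
for _ in range(200):
    unregulated = step(unregulated, setpoint=10.0, gain=0.0)  # motor only
    regulated = step(regulated, setpoint=10.0, gain=0.5)      # coupled pair

print(unregulated > 1e4)   # True: the motor alone runs away
print(round(regulated, 2)) # 11.58: the coupled system stabilizes
```

The stability is a property of the coupling: the coupled update is a contraction (slope 0.525 per step), so the system converges to a fixed point, while neither the drifting motor nor the passive regulator converges by itself.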
Since the physical links between the elements of the two machines are not analyzed here, one will make the broadest possible hypothesis: that each element is in interaction with all the others. This brings to light some of the consequences of this interactionism.
Epilogue
Due to the difficulty of addressing epistemological problems, present AI research has by and large opted to circumvent these first principles of knowledge. Some have even gone so far as to claim that AI is foremost a subbranch of engineering and can therefore not be a philosophy (Putnam, 1988). Others have pointed out that in order to build machines that are as intelligent as people, we must first establish a science of cognition, since presently "we have only fragments of the conception, and some of those are certainly incorrect" (Waltz, 1988). While Churchland (1986) contends that classical AI is much less likely to yield conscious machines than neurophilosophy, Searle (1990) argues that AI can never give rise to minds, since computer programs merely manipulate symbols whereas a brain attaches meaning to them. Nevertheless, it is only upon the systematic expounding of grounds for knowledge that the field of AI will realize, if not resolve, its proper limitations.
If one simply wants to build an expert system which will draw a few inferences from a knowledge base, such a system is executable in a few relatively simple procedural steps; but if the goal of AI is to create truly intelligent machines, one cannot simply leap over the barrier of epistemology. Instead, one has to deal with it, since it is that barrier which eventually dictates the future progress of the system. Creativity in approaches to the grounds for knowledge, such as Turkle's (1988) proposed alliance between psychoanalysis and AI, will be the determining factor regarding the feasibility of creating artificial intelligence. Since the mind does not behave in a series of definable symbols, it may well do to return to the branchpoint of placing what is presently known of the symbols within the context of an epistemological framework.
References
Ashby, W. R. (1963). An introduction to cybernetics. London: Chapman & Hall.
Ashby, W. R. (1960). Design for a brain. London: Chapman & Hall.
Campbell, J. (1982). Grammatical man. New York: Simon and Schuster.
Cellérier, G. (1968). Cybernétique et épistémologie. Paris: Presses Universitaires de France.
Churchland, P. S. (1986). Neurophilosophy: Toward a unified understanding of the mind/brain. Cambridge, MA: MIT Press.
Crosson, F. J., & Sayre, K. M. (Eds.) (1967). Philosophy and cybernetics. Notre Dame, IN: University of Notre Dame Press.
Evans, C. R. (Ed.) (1968). Key papers: Cybernetics. Baltimore: University Park Press.
Gallie, W. B. (1952). Peirce and pragmatism. Harmondsworth, Middlesex: Penguin Books.
George, F. H. (1961). The brain as a computer. Oxford; New York: Pergamon Press.
George, F. H. (1979). Philosophical foundations of cybernetics. Kent, England: Abacus Press.
Green, B. F. (1963). Digital computers in research. New York: McGraw-Hill.
Grey Walter, W. (1953). The living brain. London: Duckworth.
Helvey, T. C. (1971). The age of information: An interdisciplinary survey of cybernetics. Englewood Cliffs, NJ: Educational Technology Publications.
Machlup, F., & Mansfield, U. (1983). The study of information: Interdisciplinary messages. New York: John Wiley and Sons.
McCorduck, P. (1979). Machines who think: A personal inquiry into the history and prospects of artificial intelligence. San Francisco: W. H. Freeman and Company.
Parsegian, V. L. (1973). This cybernetic world. New York: Anchor Books.
Piaget, J. (1962). Introduction à l'épistémologie génétique. Paris: Presses Universitaires de France.
Piaget, J. (1970). L'épistémologie génétique. Paris: Presses Universitaires de France.
Pierce, W. H. (1964). Redundancy in computers. Scientific American, January.
Putnam, H. (1988). Much ado about not very much. Daedalus, 269-281.
Searle, J. R. (1990). Is the brain's mind a computer program? Scientific American, 26-31.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379-423, 623-656.
Tamine, J. (1970). La cybernétique. Bruxelles; Paris: Humanisme An 2000.
Turkle, S. (1988). Artificial intelligence and psychoanalysis: A new alliance. Daedalus, 241-268.
von Neumann, J. (1958). The computer and the brain. New Haven: Yale University Press.
Waltz, D. (1988). The prospects of building truly intelligent machines. Daedalus, 191-212.
Wiener, N. (1950). The human use of human beings. Cambridge, MA: The Riverside Press.
Winograd, S., & Cowan, J. D. (1963). Reliable computation in the presence of noise. Cambridge, MA: MIT Press.