A Large-Scale Model of the Functioning Brain

Chris Eliasmith,* Terrence C. Stewart, Xuan Choo, Trevor Bekolay, Travis DeWolf, Yichuan Tang, Daniel Rasmussen

Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2J 3G1, Canada.
*To whom correspondence should be addressed. E-mail: [email protected]

A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior
of animals to the equally complex activity of their brains. Recently described, large-scale neural models
have not bridged this gap between neural activity and biological function. In this work, we present a
2.5-million-neuron model of the brain (called Spaun) that bridges this gap by exhibiting many different
behaviors. The model is presented only with visual image sequences, and it draws all of its responses with
a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy,
neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks.
Large-scale neural simulations are becoming increasingly common [see (1) for a review]. These include the ambitious Blue Brain Project (2), which has simulated about 1 million neurons in cortical columns and includes considerable biological detail, accurately reflecting spatial structure, connectivity statistics, and other neural properties. More recent work has simulated many more neurons, such as the 1 billion neurons simulated in the Cognitive Computation Project (3), which has been hailed as a cat-scale simulation. A human-scale simulation of 100 billion neurons has also been reported (4).
Although impressive scaling has been achieved, no previous large-scale spiking neuron models have demonstrated how such simulations connect to a variety of specific observable behaviors. The focus of this past work has been on scaling to larger numbers of neurons and more detailed neuron models. Unfortunately, simulating a complex brain alone does not address one of the central challenges for neuroscience: explaining how complex brain activity generates complex behavior. In contrast, we present here a spiking neuron model of 2.5 million neurons that is centrally directed to bridging the brain-behavior gap. Our model embodies neuroanatomical and neurophysiological constraints, making it directly comparable to neural data at many levels of analysis. Critically, the model can perform a wide variety of behaviorally relevant functions. We show results on eight different tasks that are performed by the same model, without modification.
All inputs to the model are 28 by 28 images of
handwritten or typed characters. All outputs are
the movements of a physically modeled arm
that has mass, length, inertia, etc. For convenience,
we refer to the model as Spaun (Semantic Pointer Architecture Unified Network) (see Fig. 1 and
supplementary materials and methods section S1.1).
Many of the tasks we have chosen are the subject of
extensive modeling in their own right [e.g., image
recognition (5, 6), serial working memory (WM)
(7, 8), and reinforcement learning (RL) (9, 10)],
and others demonstrate abilities that are rare for

neural network research and have not yet been demonstrated in spiking networks (e.g., counting, question answering, rapid variable creation, and fluid reasoning). The eight tasks (termed A0 to A7) that Spaun performs are:
(A0) Copy drawing. Given a randomly chosen handwritten digit, Spaun should produce the same digit written in the same style as the handwriting (movie S1; all supplemental movies can be viewed at http://nengo.ca/build-a-brain/spaunvideos).
(A1) Image recognition. Given a randomly chosen handwritten digit, Spaun should produce the same digit written in its default writing (movie S2).
(A2) RL. Spaun should perform a three-armed bandit task, in which it must determine which of three possible choices generates the greatest stochastically generated reward. Reward contingencies can change from trial to trial (movie S3; see the sketch following this list).
(A3) Serial WM. Given a list of any length, Spaun should reproduce it (movie S4).
(A4) Counting. Given a starting value and a count value, Spaun should write the final value (that is, the sum of the two values) (movie S5).
(A5) Question answering. Given a list of numbers, Spaun should answer either one of two possible questions: (i) what is in a given position in the list? (a P question) or (ii) given a kind of number, at what position is this number in the list? (a K question) (movie S6).
(A6) Rapid variable creation. Given example syntactic input/output patterns (e.g., 0 0 7 4 → 7 4; 0 0 2 4 → 2 4; etc.), Spaun should complete a novel pattern given only the input (e.g., 0 0 1 4 → ?) (movie S7).
(A7) Fluid reasoning. Spaun should perform a syntactic or semantic reasoning task that is isomorphic to the induction problems from the Raven's Progressive Matrices (RPM) test for fluid intelligence (11). This task requires completing patterns of the form: 1 2 3; 5 6 7; 3 4 ? (movie S8).
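
To make the bandit task (A2) concrete, here is a minimal sketch of a delta-rule learner on a three-armed bandit whose reward contingencies change mid-session. The reward probabilities, exploration rate, learning rate, and trial count are illustrative assumptions, and this NumPy sketch is not Spaun's spiking implementation, which learns reward-dependent actions through dopamine-modulated plasticity in the striatum (fig. S5).

import numpy as np

rng = np.random.default_rng(0)

def run_bandit(n_trials=60, learning_rate=0.2, epsilon=0.1):
    # Illustrative reward probabilities for the three arms; halfway through,
    # the contingencies are reversed, as they can be in Spaun's task.
    reward_prob = np.array([0.12, 0.72, 0.12])
    values = np.zeros(3)                  # running estimate of each arm's value
    rewards = []
    for t in range(n_trials):
        if t == n_trials // 2:
            reward_prob = reward_prob[::-1].copy()
        # epsilon-greedy choice, a stand-in for utility-based action selection
        a = rng.integers(3) if rng.random() < epsilon else int(np.argmax(values))
        r = float(rng.random() < reward_prob[a])      # stochastic 0/1 reward
        values[a] += learning_rate * (r - values[a])  # delta-rule (prediction-error) update
        rewards.append(r)
    return np.array(rewards)

rewards = run_bandit()
half = len(rewards) // 2
print("mean reward before/after the contingency change:",
      rewards[:half].mean(), rewards[half:].mean())
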
Each input image is shown for 150 ms and separated
by a 150-ms blank (see table S2 for example inputs
for each task). The model is told what the task will
be by showing it an A and the number of the task
(0 to 7). The model is then shown input defining
the task (see Figs. 2 and 3 for examples). Spaun is
robust to invalid input (fig. S10) and performs
tasks in any order without modeler intervention.
Figure 1A shows the anatomical architecture
of the model. Connectivity and functional ascriptions to brain areas in Spaun are consistent with
current empirical evidence (table S1). In general,
we modeled neuron and synaptic response properties on the electrophysiology literature for the
relevant anatomical areas. For instance, the basal
ganglia have largely GABAergic neurons, with
dopamine modulating learning in the striatum,
and the cortex has largely N-methyl-D-aspartate and
AMPA synaptic connections (supplementary section S1.3). As a result, the dynamics in the model
are tightly constrained by underlying neural properties (see supplementary section S2.4).
The functional architecture of the model is described in Fig. 1B. The network implementing the Spaun model consists of three compression hierarchies, an action-selection mechanism, and five subsystems. Components of the model communicate using spiking neurons that implement neural representations that we call semantic pointers, using various firing patterns. Semantic pointers can be understood as being elements of a compressed neural vector space (supplementary sections S1.1 and S1.2). Compression is a natural way to understand much of neural processing. For instance, the number of cells in the visual hierarchy gradually decreases from the primary visual cortex (V1) to the inferior temporal cortex (IT) (12), meaning that the information has been compressed from a higher-dimensional (image-based) space into a lower-dimensional (feature) space (supplementary section S1.3). This same kind of operation maps well to the motor hierarchy (13), where lower-dimensional firing patterns are successively decompressed (for example, when a lower-dimensional motor representation in Euclidean space moves down the motor hierarchy to higher-dimensional muscle space). Compression is functionally important because low-dimensional representations can be more efficiently manipulated for a variety of neural computations. Consequently, learning or defining different compression/decompression operations provides a means of generating neural representations that are well suited to a variety of neural computations. The specific compression hierarchies in Spaun are (see Fig. 1B): (i) a visual hierarchy, which compresses image input into lower-dimensional firing patterns; (ii) a motor hierarchy, which decompresses firing patterns in a low-dimensional space to drive a simulated arm; and (iii) a WM, which constructs compressed firing patterns to store serial position information. The WM subsystem includes several subcomponents that provide stable representations of intermediate task states, task subgoals, and context.
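
As a rough illustration of the compression idea, the following sketch compresses toy 784-dimensional "images" into a 50-dimensional space with PCA and decompresses them again. The data, the dimensionalities, and the linear method are our own assumptions; Spaun's visual and motor hierarchies implement analogous (nonlinear, learned) mappings with layers of spiking neurons.

import numpy as np

rng = np.random.default_rng(1)

# Toy data: 200 "images" of 28 x 28 = 784 pixels with low-rank structure.
latent = rng.normal(size=(200, 10))
mixing = rng.normal(size=(10, 784))
images = latent @ mixing + 0.05 * rng.normal(size=(200, 784))

# A 50-dimensional linear compression found by PCA (SVD of the centered data).
mean = images.mean(axis=0)
_, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
encode = Vt[:50].T          # 784 -> 50: compression
decode = Vt[:50]            # 50 -> 784: decompression

pointers = (images - mean) @ encode            # low-dimensional representations
reconstructed = pointers @ decode + mean       # approximate images again

err = np.linalg.norm(images - reconstructed) / np.linalg.norm(images)
print("relative reconstruction error:", round(float(err), 3))

Because the 50-dimensional representation preserves most of the structure in the toy data, later computations can operate on far fewer dimensions, which is the functional benefit of compression described above.
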
Spaun's action-selection mechanism is based on a spiking basal ganglia model that we have developed in other work (14) but is here extended to process higher-dimensional neural representations. The basal ganglia determine which state the network should be in, switching as appropriate for the current task goals. Consequently, the model's functional states are not hardwired, as the basal ganglia are able to control the order of operations by changing information flow between subsystems of the architecture.
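
The sketch below is a functional caricature of this selection-and-routing scheme, under our own simplifying assumptions: every candidate operation has a scalar utility, only the highest-utility channel is released (a stand-in for disinhibition through GPi), and the released channel determines which subsystem receives the current cortical signal. The actual mechanism is the spiking basal ganglia circuit of (14).

import numpy as np

def select_action(utilities):
    # Winner-take-all stand-in for basal ganglia output: the channel with the
    # highest utility is released (gate = 1); all other channels stay suppressed.
    utilities = np.asarray(utilities, dtype=float)
    gate = np.zeros_like(utilities)
    gate[np.argmax(utilities)] = 1.0
    return gate

# Hypothetical utilities computed by cortex for three possible operations.
actions = ["store_in_WM", "recall_from_WM", "drive_arm"]
utilities = [0.2, 0.9, 0.4]
gate = select_action(utilities)

# Routing: the open gate decides which subsystem receives the cortical signal.
signal = 1.0
routed = {name: signal * g for name, g in zip(actions, gate)}
print(routed)   # {'store_in_WM': 0.0, 'recall_from_WM': 1.0, 'drive_arm': 0.0}
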
The five subsystems, from left to right in Fig. 1B, are used to: (i) map the visual hierarchy firing pattern to a conceptual firing pattern as needed (information encoding), (ii) extract relations between input elements (transformation calculation), (iii) evaluate the reward associated with the input (reward evaluation), (iv) decompress firing patterns from memory to conceptual firing patterns (information decoding), and (v) map conceptual firing patterns to motor firing patterns and control motor timing (motor processing). Supplementary materials section S1.3 includes a more detailed description of each element. It is critical to note that the elements of Spaun are not task-specific. That is, they are used in a variety of combinations to perform the chosen tasks, resulting in the same circuitry being used across tasks. This makes it straightforward to extend the model to some new tasks (supplementary section S2.4).

Fig. 1. Anatomical and functional architecture of Spaun. (A) The anatomical architecture of Spaun shows the major brain structures included in the model and their connectivity. Lines terminating in circles indicate GABAergic connections. Lines terminating in open squares indicate modulatory dopaminergic connections. Box styles and colors indicate the relationship with the functional architecture in (B). PPC, posterior parietal cortex; M1, primary motor cortex; SMA, supplementary motor area; PM, premotor cortex; VLPFC, ventrolateral prefrontal cortex; OFC, orbitofrontal cortex; AIT, anterior inferior temporal cortex; Str, striatum; vStr, ventral striatum; STN, subthalamic nucleus; GPe, globus pallidus externus; GPi, globus pallidus internus; SNr, substantia nigra pars reticulata; SNc, substantia nigra pars compacta; VTA, ventral tegmental area; V2, secondary visual cortex; V4, extrastriate visual cortex. (B) The functional architecture of Spaun. Thick black lines indicate communication between elements of the cortex; thin lines indicate communication between the action-selection mechanism (basal ganglia) and the cortex. Boxes with rounded edges indicate that the action-selection mechanism can use activity changes to manipulate the flow of information into a subsystem. The open-square end of the line connecting reward evaluation and action selection denotes that this connection modulates connection weights. See table S1 for more detailed definitions of abbreviations, a summary of the function-to-anatomy mapping, and references supporting Spaun's anatomical and functional assumptions.

The neural connection weights of these subsystems can be learned with a biologically plausible spike-based rule (15), although we use more
efficient optimization methods to determine the
synaptic weights (supplementary section S1.2).
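
As an illustration of what such offline optimization can look like, the sketch below samples rectified-linear tuning curves for a population representing a scalar and solves a regularized least-squares problem for linear decoders of that scalar. The tuning-curve model, regularization constant, and decoded function are our own illustrative choices, not the procedure detailed in supplementary section S1.2.

import numpy as np

rng = np.random.default_rng(2)

# Tuning curves: 100 rectified-linear "neurons" representing a scalar x in [-1, 1].
n_neurons, n_samples = 100, 500
x = np.linspace(-1, 1, n_samples)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
A = np.maximum(0, gains * (x[:, None] * encoders) + biases)   # activities, shape (500, 100)

# Decoders d minimize ||A d - f(x)||^2 + lam ||d||^2; here f(x) = x.
target = x
lam = 0.1
decoders = np.linalg.solve(A.T @ A + lam * np.eye(n_neurons), A.T @ target)

estimate = A @ decoders
rms = np.sqrt(np.mean((estimate - target) ** 2))
print("RMS decoding error:", round(float(rms), 4))

In frameworks of this kind (e.g., the Neural Engineering Framework underlying the authors' Nengo software), connection weights between two populations can then be assembled from such decoders and the receiving population's encoders, which is why an offline solve can stand in for online learning of the weights.
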
To help explain the functioning of the model, we consider the serial WM task. Figure 2A shows the information flow through the model for this task. The storage and recall states of the network are common to many tasks. For the WM task, these states occur immediately one after the other, although the delay is task-dependent. Initially, seeing the task identifier (A3) switches Spaun into the storage state. In the storage state, the network compresses the incoming image into a visually based firing pattern (FP in the figure) that encodes visual features, maps that firing pattern to another firing pattern that represents the related concept (e.g., TWO; see supplementary section S1.3), and then compresses that firing pattern into a memory trace that is stored in WM. The compression operator (i.e., ⊛) binds the concept firing pattern (e.g., TWO) to a position representation (e.g., P3) and adds the result (i.e., TWO ⊛ P3, as in Fig. 2C) to WM. As shown in Fig. 2C, this process is repeated as long as items are shown to the model.
Figure 2B shows a screen capture from a movie of the WM simulation. When the model sees the "?" input (as in Fig. 2B), the basal ganglia reroute cortical connectivity to allow Spaun to recall the input stored in the dorsolateral prefrontal cortex (DLPFC). Recall consists of decompressing an item from the stored representation of the full list, mapping the resulting concept vector to a known high-level motor command, and then decompressing that motor command to specific joint torques to move the arm. This process is repeated for each position in the WM, to generate Spaun's full written response. Figure 2C shows the entire process unfolding over time, including spike rasters, conceptual decodings of the contents of DLPFC, and the input and output.

Fig. 2. The serial WM task. (A) Information flow through Spaun during the WM task. Line style and color indicate the element of the functional architecture in Fig. 1B responsible for that function. FP, firing pattern. (B) A screen capture from the simulation movie of this task (supplementary section S2.1), taken at the 2.5-s mark of the time course plot in (C). The input image is on the right; the output is drawn on the surface below the arm. Spatially organized (neurons with similar tuning are near one another), low-pass filtered neuron activity is approximately mapped to relevant cortical areas and shown in color (red is high activity, blue is low). Thought bubbles show spike trains, and the results of decoding those spikes are in the overlaid text. For Str, the thought bubble shows decoded utilities of possible actions, and in GPi the selected action is darkest. (C) Time course of a single run of the serial WM task. The stimulus row shows input images. The arm row shows digits drawn by Spaun. Other rows are labeled by their anatomical area. Similarity plots (solid colored lines) show the dot product (i.e., similarity) between the decoded representation from the spike raster plot and concepts in Spaun's vocabulary. These plots provide a conceptual decoding of the spiking activity, but this decoding is not used by the model (supplementary section S1.1). Raster plots in this figure are generated by randomly selecting 2000 neurons from the relevant population and discarding any neurons with a variance of less than 10% over the run. ⊛ denotes the convolution compression operator.
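
A minimal vector-level sketch of this storage-and-recall cycle follows, using circular convolution as the binding operator ⊛ and dot products against a small vocabulary as the read-out, mirroring the similarity plots described in the Fig. 2 caption. The dimensionality, the three-item vocabulary, and the noise-free memory trace are simplifying assumptions; Spaun carries out the corresponding operations with populations of spiking neurons.

import numpy as np

rng = np.random.default_rng(3)
D = 512   # dimensionality of the semantic pointer space (illustrative)

def sp():
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)          # unit-length random semantic pointer

def bind(a, b):
    # Circular convolution: the compression (binding) operator.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inverse(a):
    # Approximate inverse under circular convolution (index reversal).
    return np.concatenate(([a[0]], a[1:][::-1]))

vocab = {name: sp() for name in ["TWO", "FIVE", "NINE"]}
positions = {name: sp() for name in ["P1", "P2", "P3"]}

# Storage: bind each concept to its list position and superimpose into one trace.
items = ["FIVE", "NINE", "TWO"]
trace = sum(bind(vocab[c], positions[p]) for c, p in zip(items, ["P1", "P2", "P3"]))

# Recall: unbind each position and pick the most similar vocabulary item.
for p in ["P1", "P2", "P3"]:
    noisy = bind(trace, inverse(positions[p]))
    best = max(vocab, key=lambda name: float(np.dot(noisy, vocab[name])))
    print(p, "->", best)    # expected: P1 -> FIVE, P2 -> NINE, P3 -> TWO
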
Critically, no single task captures the distinct features of this model. To highlight the diversity of tasks the model is able to perform, Fig. 3 shows the results of the model performing a low-level perceptual-motor task (the copy-drawing task), as well as a challenging pattern-induction task only performed by humans (the RPM task). Specifically, Fig. 3A demonstrates that the low-level perceptual features in the input are available to Spaun to drive its motor behavior. Figure 3B demonstrates the RPM task for one sample pattern (see fig. S6 for an additional example). In this task, Spaun is presented with two groups of three related items and must learn the relation between items in the groups. Spaun then uses its inferred relation to complete the pattern of a third set of items. Similarity plots for the DLPFC show conceptual decodings of neural activities. The model learns the relation between subsequent strings of numbers by comparing patterns in DLPFC1 and DLPFC2 (see supplementary section S1.3). Human participants average 89% correct (chance is 13%) on the matrices that include only an induction rule (5 of 36 matrices) (16). Spaun performs similarly, achieving a match-adjusted success rate of 88% (see supplementary section S2.3).

Fig. 3. Time-course plots for two Spaun tasks. (A) Results of the copy-drawing task. The input/output pairs for 20 additional runs are shown to the right. (B) Results of an example run of the RPM task, plotted using the same method as described in Fig. 2C. See text for details.
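
The actual induction mechanism is described in the supplementary materials. Purely as an illustration of how a relation between successive items can be estimated and reapplied in a vector space of this kind, the sketch below unbinds example pairs to estimate a transformation and then applies it to complete a new pattern; the vocabulary construction, the hidden successor relation, and the dot-product clean-up are all our own assumptions.

import numpy as np

rng = np.random.default_rng(4)
D = 512

def sp():
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)

def bind(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inverse(a):
    return np.concatenate(([a[0]], a[1:][::-1]))

# Toy vocabulary of digit pointers in which consecutive digits are related by a
# fixed (hidden) binding, so that there is a relation for induction to recover.
digits = {0: sp()}
hidden = sp()
for n in range(1, 10):
    nxt = bind(digits[n - 1], hidden)
    digits[n] = nxt / np.linalg.norm(nxt)

# Induce the relation from the example rows 1 2 3 and 5 6 7 by unbinding pairs.
pairs = [(1, 2), (2, 3), (5, 6), (6, 7)]
T = sum(bind(inverse(digits[a]), digits[b]) for a, b in pairs)
T /= np.linalg.norm(T)

# Apply the induced relation to complete the final row 3 4 ?
guess = bind(digits[4], T)
best = max(digits, key=lambda n: float(np.dot(guess, digits[n])))
print("3 4 ->", best)   # expected: 5
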
To demonstrate that Spaun captures general psychological features of behavior, it is critical to be able to simulate populations of participants. Every time a specific instance of Spaun is generated, the parameters of the neurons are picked from random distributions (supplementary section S1.4). Consequently, generating many instances allows for comparison with population-wide behavioral data. Figure 4 compares the recall accuracy of the model as a function of list length and position in a serial recall task to human population data. As with human data (17), Spaun produces distinct recency (items at the end are recalled with greater accuracy) and primacy (items at the beginning are recalled with greater accuracy) effects. A good match to human data from a rapid serial-memory task using digits and short presentation times (18) is also evident, with 17 of 22 human mean values within the 95% confidence interval of 40 instances of the model. Additional population comparisons are presented in fig. S8.

Fig. 4. Population-level behavioral data for the WM task. Accuracy is shown as a function of position and list length for the serial WM task. Error bars are 95% confidence intervals over 40 runs per list length. (A) Human data taken from (18) (only means were reported). (B) Model data showing similar primacy and recency effects.

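
To make the population protocol concrete, the sketch below generates 40 "participants", each with its own randomly drawn neuron parameters, runs a stand-in recall task for each, and summarizes accuracy by serial position with 95% confidence intervals, as in Fig. 4B. The parameter ranges and the stand-in task (which simply lowers accuracy toward the middle of the list and adds instance-specific noise) are invented for illustration and are not Spaun.

import numpy as np

rng = np.random.default_rng(5)

def make_instance():
    # Each instance gets its own neuron parameters drawn at random; Spaun draws
    # parameters such as firing rates from distributions (section S1.4).  The
    # ranges here are placeholders.
    return {"max_rates": rng.uniform(100, 200, size=1000),
            "intercepts": rng.uniform(-1, 1, size=1000),
            "noise": rng.uniform(0.02, 0.08)}

def recall_accuracy(instance, list_length):
    # Stand-in task: accuracy dips for middle positions (primacy/recency shape)
    # and is jittered by instance-specific noise.  This is not a simulation of Spaun.
    pos = np.arange(1, list_length + 1)
    base = 0.95 - 0.05 * np.minimum(pos - 1, list_length - pos)
    return np.clip(base + rng.normal(0, instance["noise"], size=list_length), 0, 1)

n_instances, list_length = 40, 6
runs = np.array([recall_accuracy(make_instance(), list_length) for _ in range(n_instances)])

mean = runs.mean(axis=0)
ci95 = 1.96 * runs.std(axis=0, ddof=1) / np.sqrt(n_instances)
for p, (m, c) in enumerate(zip(mean, ci95), start=1):
    print(f"position {p}: accuracy {m:.2f} +/- {c:.2f}")
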
To this point, we have only described performance
on three of the eight tasks that Spaun performs. The
tasks not yet discussed are: (i) image recognition,
for which the model achieves 94% accuracy on untrained data from the MNIST handwriting database
[human accuracy is ~98% (19)]; (ii) RL, for which
the model is able to learn reward-dependent actions
in a variable environment using known neural mechanisms (fig. S5); (iii) counting, for which the model
reproduces human reaction times and scaling of variability (fig. S8A); (iv) question answering, for
which the model generates a novel behavioral prediction (fig. S8B); and (v) rapid variable creation,
for which the model instantiates the first neural architecture able to solve this challenging task (fig. S11).
However, the central purpose of this work is not to explain any one of these tasks, but to propose a unified set of neural mechanisms able to perform them all. In a sense, the complex task solved by Spaun is one of coordination. That is, the rapid flexibility of biological systems is its target of explanation. The specific dynamics of Spaun's responses to this wide variety of tasks is governed by four parameters, each of which is set empirically (the time constants of the neurotransmitters; see supplementary section S2.4). Thus, without fitting, the model is consistent with dynamics from single cells and behavior (see figs. S8 and S11 and supplementary section S2.4), and is able to switch between a wide variety of tasks quickly and robustly (fig. S10).
Although Spaun's main contribution lies in its breadth, it also embodies new hypotheses regarding how specific tasks are solved. For instance, the proposed method of solving the rapid variable-creation task is distinct to Spaun [this task has been identified as one that no contemporary neural architecture could perform as quickly as humans (20)], as is the account of serial WM. Such hypotheses have resulted in new testable predictions (figs. S8 and S9). Still, Spaun's uniqueness lies in its being a platform for exploring the robust flexibility of biological cognition. Consider the example of learning: Learning in Spaun takes on many forms. Although learning takes place in the RPM, rapid variable-creation, and RL tasks, connection-weight changes only occur in the RL task (supplementary section S2.4). In most neural models, this kind of learning is often used as the main method of model construction, and it is possible to learn all of the elements of the Spaun model in this traditional sense (supplementary section S2.4). However, constructing models in this manner does not address a central, difficult challenge of learning in biological brains. That challenge consists of explaining how robust learning can occur in a continuously operating, complex, and multifunctional brain.

Spaun minimally demonstrates this kind of learning in the RL task, as connection-weight changes do not adversely affect the performance of the model on other tasks. Smaller-scale models cannot provide even this minimal demonstration, because they lack the variety of tasks necessary to demonstrate robustness. As such, Spaun provides a distinct opportunity to test learning algorithms in a challenging but biologically plausible setting. More generally, Spaun provides an opportunity to test any neural theory that may be affected by being embedded in a complex, dynamical context, reminiscent of a real neural system.

However, Spaun has little to say about how that complex, dynamical system develops from birth. Furthermore, Spaun has many other limitations that distinguish it from developed brains. For one, Spaun is not as adaptive as a real brain, as the model is unable to learn completely new tasks. In addition, both the attention and the eye position of the model are fixed, making Spaun unable to control its own input. Also, its perceptual and conceptual representations are largely limited to the space of digits from 0 to 9.


Anatomically, many areas of the brain are missing from the model. Those that are included have too few neurons and perform only a subset of functions found in their respective areas. Physiologically, the variability of spiking in the model is not always reflective of the variability observed in real brains (table S3). However, we believe that, as available computational power increases, many of these limitations can be overcome via the same methods as those used to construct Spaun (supplementary section S2.4).

Even in its current form, Spaun offers a distinctly functional view and set of hypotheses
regarding the neural mechanisms and organization that may underlie basic cognitive functions.
Consequently, Spaun opens new avenues for testing ideas about biological cognition under biologically plausible, more complex, and more
functional settings than previously available.

Acknowledgments: The development of Spaun was supported by the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chairs program, the Canadian Foundation for Innovation, and the Ontario Innovation Trust. We thank M. van der Meer, B. Tripp, M. Laubach, and three anonymous reviewers for insightful comments on previous drafts. All models, scripts, and videos are deposited online at http://nengo.ca/ and http://models.nengo.ca/.

References and Notes
1. H. de Garis, C. Shuo, B. Goertzel, L. Ruiting, Neurocomputing 74, 3 (2010).
2. H. Markram, Nat. Rev. Neurosci. 7, 153 (2006).
3. R. Ananthanarayanan, D. S. Modha, in Proceedings of the 2007 ACM/IEEE Conference on Supercomputing-SC '07 (Association for Computing Machinery Press, New York, 2007), p. 1.
4. E. M. Izhikevich, G. M. Edelman, Proc. Natl. Acad. Sci. U.S.A. 105, 3593 (2008).
5. M. Ranzato, Y. Boureau, Y. LeCun, Adv. Neural Inf. Process. Syst. 20, 1 (2007).
6. G. E. Hinton, R. R. Salakhutdinov, Science 313, 504 (2006).
7. T. Orlov, V. Yakovlev, D. Amit, S. Hochstein, E. Zohary, Cereb. Cortex 12, 306 (2002).
8. B. B. Murdock, Psychol. Rev. 100, 183 (1993).
9. W. Schultz, Nat. Rev. Neurosci. 1, 199 (2000).
10. E. Vasilaki, N. Frémaux, R. Urbanczik, W. Senn, W. Gerstner, PLOS Comput. Biol. 5, e1000586 (2009).
11. J. Raven, J. Court, Manual for Raven's Progressive Matrices and Vocabulary Scales (Harcourt Assessment, San Antonio, TX, 2004).
12. T. Pasternak, J. Bisley, D. Calkins, in Handbook of Psychology, Biological Psychology, M. Gallagher, R. J. Nelson, Eds. (Wiley, Hoboken, NJ, 2003), vol. 3, pp. 139–185.
13. E. Todorov, in The Cognitive Neurosciences, M. S. Gazzaniga, Ed. (MIT Press, Cambridge, MA, 2009).
14. T. Stewart, T. Bekolay, C. Eliasmith, Front. Decis. Neurosci. 6, article no. 00002 (2012); 10.3389/fnins.2012.00002.
15. D. MacNeil, C. Eliasmith, PLoS ONE 6, e22885 (2011).
16. A. R. Forbes, Br. J. Educ. Psychol. 34, 223 (1964).
17. J. C. Jahnke, J. Exp. Psychol. 76, 618 (1968).
18. B. A. Dosher, Int. J. Psychol. 34, 276 (1999).
19. I. Chaaban, M. R. Scheessele, Human performance on the USPS database (Technical Report, Indiana Univ., South Bend, IN, 2007).
20. R. F. Hadley, Neural Comput. 21, 510 (2009).



Supplementary Materials
www.sciencemag.org/cgi/content/full/338/6111/1202/DC1
Materials and Methods
Supplementary Text
Figs. S1 to S12
Tables S1 to S3
References (21–74)
Movies S1 to S8 (at http://nengo.ca/build-a-brain/spaunvideos)
28 May 2012; accepted 15 October 2012
10.1126/science.1225266
