
ORIGINAL RESEARCH
published: 19 December 2017
doi: 10.3389/fnins.2017.00714

Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems

Gabriel A. Fonseca Guerra* and Steve B. Furber

Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom

Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart.

Keywords: SpiNNaker, constraint satisfaction, spiking neural networks, stochastic search, spiking neurons

Edited by: Gert Cauwenberghs, University of California, San Diego, United States
Reviewed by: Hesham Mostafa, University of California, San Diego, United States; Brian La Cour, University of Texas at Austin, United States
*Correspondence: Gabriel A. Fonseca Guerra, gabriel.fonsecaguerra@manchester.ac.uk
Specialty section: This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience
Received: 28 August 2017; Accepted: 06 December 2017; Published: 19 December 2017
Citation: Fonseca Guerra GA and Furber SB (2017) Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems. Front. Neurosci. 11:714. doi: 10.3389/fnins.2017.00714

1. INTRODUCTION

Most practical problems and natural phenomena can be abstracted as systems composed of smaller elements interacting with each other, an element being able to assume one of many states and the global configuration of states being governed by the nature of the interactions. In practice, each interaction imposes a restriction on the behavior of the units (a constraint). Such a description allows the interpretation of the phenomena as a constraint satisfaction problem (CSP), which is defined by the tuple ⟨X, D, C⟩. Here, X = {x_1, ..., x_N} is a set of N variables defined over the respective set of non-empty domains D = {D_1, ..., D_N}; each x_i represents an element of the system which can take D_i possible states. The constraints C = {C_1, ..., C_m} are ⟨S_i, R_i⟩ tuples defined over m subsets S = {S_1, ..., S_m : S_i ⊆ X} and k relations R = {R_1, ..., R_k} (Russell and Norvig, 2009). In general, each R_i is a tuple defined over the Cartesian product of the variable domains; if, however,

Frontiers in Neuroscience | www.frontiersin.org 1 December 2017 | Volume 11 | Article 714


Fonseca Guerra and Furber Stochastic SNNs on SpiNNaker to Solve CSPs

all relations R_i are defined as 2-tuples, the CSP is called binary. With this definition, and without taking into account symmetry considerations, one has on the order of D^N possible evaluations for the values of the set X (here D is the average size of the domains). In the case of a Sudoku puzzle, for example, X represents the grid cells, the set D consists of the nine possible digits for each cell, and C defines the game rules. In this case one has 9^81 possible configurations which, after puzzle equivalency reduction, define ≈6.67 × 10^21 possible puzzles (Felgenhauer and Jarvis, 2005).

A solution to the CSP (if it exists) is an evaluation of X that is consistent (satisfies all the constraints c_i in C) and complete (includes all variables x_i in X). To find such a solution, one implements a search algorithm that explores the state space of all these configurations. The strategy of searching the whole state space, known as the brute-force algorithm, quickly becomes unfeasible as N increases (e.g., requiring more computing time than the age of the universe; Norvig, 2009), demanding the development of cleverer algorithms. The efficiency of such a computing algorithm is conventionally determined by its asymptotic time complexity T(n), expressed as a function of the input size of the problem n ∝ N for a particular encoding language (Gary and Johnson, 1979). Notice that for a given problem two different instances of the same size n could reveal different performance, so T refers to the worst-case complexity. According to Cobham's thesis, an algorithm is conventionally considered efficient if it admits worst-case polynomial-time solutions on a deterministic Turing machine (DTM). Such algorithms build up the P complexity class, corresponding to T(n) ∈ O(n^κ), where κ is determined by the nature of the problem (Cobham, 1965). A broader class, NP, contains all decision problems for which a proposed solution can be verified in polynomial time (Cook, 1971).

The problem of determining the existence of efficient algorithms for solving every NP problem, known as the P versus NP problem, has remained unsolved since its establishment by Cook (1971). When a problem does demand algorithms outside P, it is said to be intractable, and it is a widely held view that this is the case for a large subset of NP. Thus, instances of NP are recognized as very hard problems (Fortnow, 2009), the hardest of which are referred to as NP-complete: NP problems to which any other NP problem can be reduced in polynomial time, hence the completeness (Karp, 1972)¹. If P ≠ NP, NP-complete problems are tractable only by an ideal non-deterministic version of the Turing machine (NDTM) (Cook, 1971; Karp, 1972; Gary and Johnson, 1979). We can think of Turing machines as abstract devices endowed with a set of rules to act on a string of symbols, such actions depending on both the machine's internal state(s) and the input symbol(s). While at each computation node a DTM has a specific action to perform (thus defining a computation path), an NDTM can follow a whole family of actions (thus defining a computation tree; Hopcroft et al., 2006). At each computation step, either the NDTM takes an action biased toward configurations that lead to accepting states, or it branches, executing all of the allowed actions (Maruoka, 2011). In either case, an NDTM is guaranteed to find a solution if one exists. Although the biased-action description is unrealistic, the replicative interpretation is only limited by the available space and time resources (increasing resources are needed as the NDTM advances through the computation tree). Despite the apparent impracticability of manufacturing an NDTM, very recently, and based on the replicative properties of the deoxyribonucleic acid (DNA) molecule, Currin et al. (2017) reported the first physical design of the embodiment of an NDTM. The practicability of NDTMs remains, however, uncertain in the near future. Therefore, with a high possibility that P ≠ NP and no NDTMs available, NP problems remain hard to tackle. Importantly, determining the existence (or not) of solutions for a CSP constitutes an NP-complete problem. Therefore, (1) there are no known efficient algorithms that work for general CSPs, despite the fact that there are polynomial-time subcases; and (2) any other NP problem can be expressed as a CSP in polynomial time.

¹ A set of yet harder problems forms the NP-Hard class, of which P, NP, and NP-Complete problems are subsets, though NP-Hard problems are not necessarily in NP.

NP-complete problems find applications in a wide range of fields, from spin glass systems, resource allocation, and combinatorial mathematics, to Atari games and public-key cryptography (Gary and Johnson, 1979; Barahona, 1982; Fortnow, 2009; Aloupis et al., 2015). Thus, in the absence of known efficient algorithms for solving general NP problems, and given the need for at least an approximate solution, the standard strategy is to find either an adequate heuristic or an approximation algorithm for the particular instances of the given problem. The success of such non-neural strategies makes them ideal for some practical applications. Here, our interest is rather in the way in which biological organisms use neuronal networks to efficiently cope with CSPs; in this case even the limitations found are enlightening, i.e., it could be more convenient for an animal to prioritize a nearly-optimal but quick solution, especially if the system is unsolvable. Hopfield and Tank (1985) first proposed stochastic analog neural networks to solve decision and optimization problems, having realized the CSP nature of their previously implemented content-addressable memory (Hopfield, 1982) and of the optimization of perceptual inference by Hinton and Sejnowski (1983), both of which used networks of binary neurons. More recently, an alternative approach based on deterministic multistable neural oscillators and synaptic plasticity was proposed (Mostafa et al., 2013). All the neural models above are liable to get stuck in local minima; a clever solution was achieved by enhancing the model of Mostafa et al. (2013) with gamma-band rhythmic oscillations of incommensurable frequencies (not rational multiples of each other) (Mostafa et al., 2015b), which further allowed the network dynamics to stabilize when all constraints are satisfied. The latter gave rise to an event-driven, mixed analog/digital prototype chip of incommensurable oscillators which, bespoke to the distributed nature of CSPs, promises to yield state-of-the-art performance (Mostafa et al., 2015a).

In the middle of the 90s, more biologically plausible versions of neural networks, the SNNs, were demonstrated to present




computational capabilities equal or superior to those of analog neurons (Maass, 1995, 1996, 1997). Despite promising advantages, their implementation demands a high computational expense on conventional hardware. Regarding CSPs, Malaka and Buck (2000) achieved an SNN solution of an 8-city traveling salesman problem (TSP). More than a decade later, Habenschuss et al. (2013) demonstrated that the stationary distribution of a stochastic SNN visits the solution of a hard Sudoku puzzle on average 2% of the time once it acquires a performance where 90% of the constraints are satisfied, and finally Jonke et al. (2016) formalized the application of SNNs to general CSPs, postulating a methodology which allows the shaping of the energy landscape, using a modularity principle, controlling the network dynamics and causing it to visit the solution to the problem.

The models above suggest that the noisy, distributed and asynchronous nature of the brain's processes could be behind its computational properties, contrasting with the conventional trends in commercial computer architectures. The brain itself is constantly facing conflicting situations in which it should decide on actions that best satisfy a number of constraints (Churchland, 2008). Hence, we can take advantage of brain-inspired computers (neuromorphics) to design new strategies for solving CSPs and to gain understanding of which of such strategies are biologically plausible. Given the NP-complete nature of CSPs, it seems natural to consider research on SNN solvers to be at an early stage, with the need for an even deeper exploration of their dynamics. It is the aim of this work to provide a tool for the exploration of high-dimensional networks running in biological real time, facilitating the further evolution of SNN solvers for CSPs and allowing, for example, the study of the non-Boltzmann and non-Markovian dynamics of the network (Crair and Bialek, 1990; Clarke et al., 2015). For this, we use the Spiking Neural Network Architecture (SpiNNaker), a neuromorphic computer which presents a good balance between the very large number of neurons it is able to simulate, its energy efficiency, and the biological real-time feature of its simulations.

Neuromorphic computers are electronic devices emulating the working mechanisms of the brain in the search for alternative models of computation. They aim to overcome the limitations of conventional computational architectures, especially (but not only) with regard to brain simulations (Mead, 1990; Furber S., 2016; Furber S.B., 2016). Similarly to the prototype chip of incommensurable oscillators of Mostafa et al. (2015a), neuromorphics provide a distributed architecture that resembles that of CSPs. They also share the local nature of the constraint graph, in which a constraint generally relates only a few variables. SpiNNaker is a real-time, asynchronous, multicast, and event-driven machine (Furber et al., 2013, 2014), features that favor the implementation of stochastic computations. Furthermore, it is designed to compute with spiking neurons, overcoming the computational cost that historically limited implementations of SNNs compared with artificial neural networks. Through the following sections, we show how SpiNNaker is able to implement a stochastic search that solves constraint satisfaction problems (CSPs). Besides running in biological time, our approach improves on previous stochastic SNN implementations with the ability to converge to a stable (long-lasting) solution.

2. MATERIALS AND METHODS

2.1. From Constraint Satisfaction Problems to Spiking Neural Networks

In order to implement the stochastic search we first need to map our CSP into an SNN. Formally, a spiking neural network can be defined as a set of spiking neurons N, each one with a threshold function θ_i, and with connections between two arbitrary neurons N_i and N_j established by the set of synapses S ⊆ N × N. For each element S_{i,j} ∈ S there is a weight parameter w_{i,j} and a response function R_{i,j} : R⁺ → R (Maass, 1997). In our implementation each neuron N_i corresponds to a leaky integrate-and-fire (LIF) neuron (Stein, 1967). In this model the dynamics of the membrane potential u are given by:

    τ_m du/dt = −u(t) + R I(t).    (1)

Here, τ_m is the membrane time constant, R is the membrane resistance and I is an external input current. Each time u reaches a threshold value u_th a spike is elicited; such events are fully characterized by the firing times {t^f | u(t^f) = u_th and du/dt|_{t=t^f} > 0}. Immediately after a spike the potential is reset to a value u_r, such that lim_{t→t^f+} u(t) = u_r. In our network, synapses are uniquely characterized by ω_{ij}, and the inter-neural separation is introduced by means of a delay Δ_{ij}. In biological neurons each spike event generates an electrochemical response on the post-synaptic neurons, characterized by R_{i,j}. We use the same function for every pair (i, j), defined by the post-synaptic current:

    j(t) = (q/τ) e^{−(t−t_0)/τ} Θ(t − t_0),    (2)

where q is the total electric charge transferred through the synapse, τ is the characteristic decay time of the exponential function, t_0 = t^f + Δ_{ij} is the arrival time of the spike, and Θ represents the Heaviside step function. The choice of R_{i,j} potentially affects the network dynamics, and although there are more biologically realistic functions for the post-synaptic response, the use of the exponential function in Equation (2) constitutes one of our improvements over previous studies of CSPs through SNNs, which used a simple square function.

In an SNN each neuron is part of a large population. Thus, besides the background current I(t), it receives input from the other neurons, as well as stochastic stimulation from noisy neurons implementing a Poisson process. In this case, the temporal evolution of the membrane potential (Equation 1) generalizes to:

    τ_m du/dt = −u(t) + R [ I(t) + Σ_j Σ_f ω_j j(t − t_j^f) + Σ_k η_k j(t − T_k) ]    (3)

where the index f accounts for the spike times of principal neuron j in the SNN, η_k is the strength of the kth random spike, which occurs at time T_k, and j(·) is the response function of Equation (2). An SNN has the advantage that its microstate ψ_t = {n_1, n_2, ..., n_N} at any time t can be defined by the binary




firing state n_i ∈ {0, 1} of each neuron N_i, instead of their continuous membrane potential u_i ∈ R. Then, the set of firing times {t_i^f} for every neuron N_i, or equivalently the set of states {ψ_t}, corresponds to the trajectory (dynamics) of the network in the state space. The simulations in this work happen in discrete time (time step = 1 ms), so in practice ψ_t defines a discrete stochastic process (e.g., a random walk). If the next network state ψ_{t_{i+1}} depends on ψ_{t_i} but is conditionally independent of any ψ_{t_j} with j < i, the set {ψ_t} also corresponds to a Markov chain. Habenschuss et al. (2013) demonstrated that this is the case when using rectangular PSPs and a generalized definition of the network state; the validity of the Markov property for general SNNs could still depend on the dynamical regime and be affected by the presence of a non-zero probability current for the stationary distribution (Crair and Bialek, 1990). Each possible configuration of the system, a microstate ψ_i, happens with a certain probability p_i and, in general, it is possible to characterize the macroscopic state of the network with the Shannon entropy (in units of bits) (Shannon, 1948):

    S = −Σ_i p_i log_2 p_i    (4)

and the network activity:

    A(t) = (1/N) Σ_{j=1}^{N} Σ_f δ(t − t_j^f)    (5)

To compute p_i and hence Equation (4) we binned the spikes from each simulation with time windows of 200 ms. In this type of high-dimensional dynamical system, sometimes the particular behavior of a single unit is not as relevant as the collective behavior of the network, described for example by Equations (4, 5).

A constraint satisfaction problem ⟨X, D, C⟩ can now be expressed as an SNN as shown in the pseudo-code of Algorithm 1. We can do it in three basic steps: (a) create SNNs for each domain d_i of each variable; every neuron is then excited by its associated noise source, providing the necessary energy to begin exploration of the states {ψ}. (b) create lateral-inhibition circuits between all domains that belong to the same variable. (c) create lateral-inhibition circuits between equivalent domains of all variables appearing in a negative constraint, and lateral-excitation circuits for domains in a positive constraint. With these steps, the resulting network will be a dynamical-system representation of the original CSP. Different strategies can now be implemented to enforce the random process over the states ψ_t to find the configuration ψ_0 that satisfies all the constraints. The easiest way of implementing such strategies, and the one proposed here, is through the functional dependence of the noise intensity on time. The size of each domain population should be large enough to average out the stochastic spike activity; otherwise, the system will not be stable and will not represent quasi-equilibrium states. As will be shown, it is the size of the domain populations that allows the system to converge to a stable solution.

The ensemble of populations assigned to each CSP variable x_i works as a winner-take-all circuit through inhibitory synapses between domain populations, which tends to allow a single population to be active. However, this last restriction should not be over-imposed, because it could generate saturation and our network would be trapped in a local minimum. Instead, the network should constantly explore configurations in an unstable fashion, converging to equilibrium only when satisfiability is found. The random connections between populations, together with the noisy excitatory populations and the network topology, provide the necessary stochasticity that allows the system to search for satisfiable states. However, this same behavior traps some of the energy inside the network. For some problems, a dissipation population can be created to balance the input and output of energy, or to control the entropy level during the stochastic search. In general, there may be situations in which the input noise acquired through stimulation stays permanently in the SNN. Thus, the inclusion of more excitatory stimuli will saturate the dynamics at very high firing rates, potentially reaching the limit of the SpiNNaker communication fabric. In these cases, inhibitory noise is essential too, and allows us to include arbitrarily many stimulation pulses.

We demonstrate in the next section that the simple approach of controlling the dynamics with the stimulation intensities and times of the Poisson sources provides an efficient strategy for a stochastic search for solutions to the studied CSPs.

2.2. The Spiking Neural Network Architecture (SpiNNaker)

With large CSPs the equivalent SNN becomes computationally too expensive for conventional computers, so one of the important contributions of our work is the implementation of the SNN solver on a computer architecture especially designed for computations with spiking neurons. Conventional supercomputers physically embody a deterministic universal Turing machine and are designed to do computations transferring large quantities of data in deterministic, synchronous, repeatable and reliable ways. Although under specific circumstances neuromorphic computers can be described by a DTM, they are devices inspired by the working principles of the brain, which is rather asynchronous and unreliable and thus has additional features. Although conventional machines have achieved impressive performance in automatic computing tasks (in part due to the great progress in miniaturization), when facing the complex inference and cognitive tasks solved naturally by living organisms, biology outperforms them by several orders of magnitude, especially with regard to energy efficiency. We believe that such features can provide advantages in the solution of unsolved problems such as the ones in NP.

Neuromorphic computing was first introduced by Carver Mead in the 1980s, originally intended for analogue very-large-scale integration systems. Almost 30 years after Mead's work, and after a decade of parallel efforts, there are but a few very powerful, massively parallel neuromorphic computers: TrueNorth (Merolla et al., 2014), Neurogrid (Benjamin et al., 2014), BrainScaleS (Schemmel et al., 2010), and SpiNNaker (Painkras et al., 2013). The latter is endowed with the ability



Algorithm 1 | Translation of a CSP into an SNN

# define the CSP = <X, D, C> through a set of lists.
X = list(variables)
D = list(domains)
S = list(subsets_of(X))
R = list(relations_over(s_i in S))
C = list(constraints = tuple(s_i, r_i))

# a) create an SNN for each variable with sub-populations for each domain.
n = size_of_ensemble
for variable x_i in X:
    for domain d_i in D:
        population[x_i][d_i] = create an SNN with n neurons
        noise_exc[x_i][d_i] = create a set of noise stimulation populations
        apply_stimuli(noise_exc[x_i][d_i], population[x_i][d_i])
        noise_inh[x_i][d_i] = create a set of noise dissipation populations
        apply_dissipation(noise_inh[x_i][d_i], population[x_i][d_i])

# b) use inhibitory synapses to activate, on average, a single domain per variable.
for variable x_i in X:
    for domain d_i in D:
        for domain d_j in D with d_j != d_i:
            inhibitory(population[x_i][d_i], population[x_i][d_j])

# c) map each constraint to an inhibitory or excitatory synapse.
for constraint c_i in C:
    read subset s_i and relation r_i from c_i
    for variables x_i and x_j in s_i:
        for domain d_i in D:
            if constraint relation r_i < 0:
                inhibition(population[x_i][d_i], population[x_j][d_i])
            elif constraint relation r_i > 0:
                excitation(population[x_i][d_i], population[x_j][d_i])
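The three construction steps of Algorithm 1 can be sketched in plain Python. This is a minimal structural sketch under stated assumptions, not the paper's actual API: the function name `csp_to_snn`, the descriptor records, and the toy triangle-coloring instance are our own illustrative constructions. On SpiNNaker, each population record would become a LIF population with its Poisson noise sources, and each synapse record an inhibitory or excitatory projection.

```python
# Sketch of Algorithm 1: translate a binary CSP <X, D, C> into population
# and synapse descriptors. All names and record formats are illustrative.

def csp_to_snn(variables, domains, constraints, n=25):
    """Build descriptors for the CSP network.

    constraints: list of ((x_i, x_j), sign) pairs, with sign < 0 for a
    negative ("not equal", inhibitory) relation and sign > 0 for a
    positive ("equal", excitatory) relation. n is the ensemble size
    per domain population.
    """
    # a) one population of n neurons per (variable, domain) pair
    populations = {(x, d): n for x in variables for d in domains}

    synapses = []
    # b) lateral inhibition between different domains of the same
    #    variable (a soft winner-take-all circuit)
    for x in variables:
        for d_i in domains:
            for d_j in domains:
                if d_i != d_j:
                    synapses.append(((x, d_i), (x, d_j), "inhibitory"))

    # c) each constraint becomes inhibition (or excitation) between the
    #    equivalent domain populations of the two variables it relates
    for (x_i, x_j), sign in constraints:
        kind = "inhibitory" if sign < 0 else "excitatory"
        for d in domains:
            synapses.append(((x_i, d), (x_j, d), kind))
    return populations, synapses

# Toy instance: 3-coloring of a triangle graph (adjacent vertices differ).
variables = ["A", "B", "C"]
colors = ["red", "green", "blue"]
constraints = [(("A", "B"), -1), (("B", "C"), -1), (("A", "C"), -1)]
pops, syns = csp_to_snn(variables, colors, constraints)
print(len(pops))  # 9 populations: 3 variables x 3 colors
print(len(syns))  # 27 synapse groups: 18 winner-take-all + 9 constraint
```

In the full framework, each entry of `populations` would be instantiated on SpiNNaker with the LIF parameters of Section 2.1, and each entry of `synapses` realized as a projection with the appropriate weights, delays, and noise stimulation schedule.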

to model high-dimensional spiking neural networks, low energy requirements, and a multicast communication protocol. It is based on a globally asynchronous, locally synchronous (GALS) multi-core System-on-Chip, being event-driven and able to run in biological time. SpiNNaker is built using a million ARM968 processor cores (of which 60% are currently available). Each chip on the machine includes 18 processor cores connected by a network-on-chip (NoC) communication system (Grymel and Furber, 2011; Furber, 2012; Furber et al., 2013, 2014; Goodman et al., 2013; Painkras et al., 2013). This fundamentally different architecture paradigm, besides its bespoke design for neurobiology simulations, makes the SpiNNaker system interesting for exploring new implementations of stochastic searches. Here we explore the computing power of the machine for these more general computing problems, exploiting the neuromorphic ability to overcome the conventional difficulties of dealing with computationally expensive spiking neurons when implemented on conventional clusters and GPUs. In summary: (i) for SpiNNaker, spiking neurons are the fundamental modeling units, and (ii) it is a machine intrinsically able to implement stochastic computations on hardware. We will show in the next section how these two features bring new opportunities to solve hard CSPs.

3. RESULTS

In order to demonstrate the implementation of the SNN solver, we present solutions to some instances of NP problems. Among the NP-complete problems, we have chosen to showcase instances of graph coloring, Latin squares, and Ising spin glasses. Our aim is to offer a tool for the development of stochastic search algorithms in large SNNs. We are interested in CSPs to gain understanding of the dynamics of SNNs under constraints, how they choose a particular state, and their computational abilities. Ultimately, SNNs embedded in neuromorphic hardware are intended for the development of new technologies such as robotics and neuroprosthetics, constantly interacting with both external devices and the environment. In such applications the network needs to adapt itself to time-varying constraints, taking one or multiple decisions accordingly, making the




advancement in stochastic searches with SNNs a fundamental requirement for neuromorphics.

3.1. Graph Coloring

Considering a graph G defined by the ordered pair (V, E), with V a set of vertices and E the set of edges connecting them, the graph coloring problem consists of finding an assignment of k colors to the elements of the graph (either V, E, or both) such that certain conditions are satisfied (Dailey, 1980). In vertex coloring, for example, the colors are assigned to the elements of V in such a way that no adjacent nodes (those connected by an edge) have the same color. A particularly useful application of this problem is the process of register allocation in compiler optimization, which is isomorphic to graph coloring (Chaitin, 1982). Regarding time complexity, general graph coloring is NP-complete for k > 2. In the case of planar graphs, three-coloring is NP-complete and, thanks to the four color theorem proved by Kenneth Appel and Wolfgang Haken, four-coloring is in P (Appel and Haken, 1989). A division of a plane into several regions can be represented by a planar graph, familiar versions of which are geographic maps. In Figure 1A we show the SNN-solver result of a satisfying four-coloring of the map of the world, where colors are assigned to countries such that no bordering countries have the same color. We have followed the list of countries and borders from the United Nations available in Wolfram Mathematica (Wolfram Research, 2017). The corresponding connectivity graph of the world map in Figure 1A is shown in Figure 1B. The insets in Figure 1A show the results of our solver for three-coloring of the maps of the territories of Australia (bottom-right) and of Canada (top-left). Figures 1C,D show the time dependence of the entropy (top), firing rate (middle), and number of visited states (bottom) for the map of the world and of Australia, respectively. The color code we use in these and the following figures is as follows: red means that the state in the current time bin is different from the one just visited, green represents the network staying in the same state, and blue means that all constraints are satisfied. The dashed vertical lines mark the times at which noise stimulating (blue) or depressing (red) populations began to be active. The normalized spiking activity of the four color populations for four randomly selected countries of the world map is shown in Figure 1E, evidencing the competing behavior along the stochastic search. Interestingly,

3.2. Latin Squares

A Latin square is an n × n array filled with n different digits, arranged such that each digit appears only once in each row or column. The NP-completeness of completing a partially filled Latin square was demonstrated by Colbourn (1984), and among the useful applications of such a problem one can list authentication, error detection, and error correction in coding theory. Here we choose the Sudoku puzzle as an instance of a Latin square; in this case n = 9 and, in addition to the column and row constraints of Latin squares, Sudoku requires the uniqueness of the digits in each 3 × 3 sub-grid. We show in Figure 2 the solution to an easy puzzle (Ercsey-Ravasz and Toroczkai, 2012), to a hard Sudoku (Habenschuss et al., 2013), and to the AI Escargot puzzle, which has been claimed to be the world's hardest Sudoku. The temporal dependence of the network entropy H, firing rate ν, and states count is shown in Figures 2A–C, respectively, for the easy (Figure 2G), hard (Figure 2H), and AI Escargot (Figure 2I) puzzles. In Figure 2E we show a schematic representation of the dimensionality of the network for the easy puzzle (Figure 2G); each sphere represents a single neuron, and synaptic connections have been omitted for clarity. The layer for digit 5 is also represented, showing the inhibitory effect of a single cell in position (1,3) over its row, column, sub-grid, and other digits in the cell. In this case, the total number of neurons is ≈37 k and they form ≈86 M synapses.

One major improvement of our implementation with respect to the work of Habenschuss et al. (2013) is the convergence to a stable solution, arguably due to the use of subpopulations instead of single neurons to represent the domains of the CSP variables, as these populations were required to provide stability to the network. The use of the more realistic exponential post-synaptic potentials, instead of the rectangular ones used in Habenschuss et al. (2013), is also reflected in the good performance of the search, as shown in the bottom plots in Figures 2A–C, where the solution is found after visiting only 3, 12, and 26 different states and requiring 0.8, 2.8, and 6.6 s, respectively, correlating well with puzzle hardness. It is important to highlight that the measurement of the difficulty level of a Sudoku puzzle is still ambiguous, and our solver could need more complex strategies for different puzzles; for example, in the transient-chaos-based rating of Ercsey-Ravasz and Toroczkai (2012) the "platinum blonde" Sudoku is rated as one of the hardest to solve, and although we have been able to find a solution for it, it is not stable, which means one should control the noisy network dynamics in
although the network has converged to satisfaction during the order to survive the long escape rate of the model presented by
last 20 s (blue region in Figure 1C), the bottom right plot in Ercsey-Ravasz and Toroczkai (2012). We show in Figures 2D,F
Figure 1E reveals that due to the last stimulation the network has the competing activity of individual digit populations of some
swapped states preserving satisfaction, evidencing the stability of randomly chosen cell in both the easy and the AI escargot puzzles,
the convergence. Furthermore, it is noticeable in Figure 1D that the dynamic behavior resembles that of Figure 2 in Ercsey-Ravasz
new states are visited after convergence to satisfiability, this is and Toroczkai (2012) when comparing their dynamic solver for
due to the fact that, when multiple solutions exist, all satisfying this same easy puzzle and the platinum blonde. Further analysis
configurations have the same probability of happening. Although would bring insights into the chaotic dynamics of SNNs when
we choose planar graphs here, the SNN can implement any facing constraints.
general graph, hence more complicated P and NP examples could
be explored. 3.1.2. Ising Spin Systems
For each atom that constitutes a solid, it is possible to define a
3.1.1. Latin Squares net spin magnetic moment µ E which results from the intrinsic
A Latin square is defined as an array of n × n cells in which spin of the subatomic particles and the orbital motion of
n groups of n different symbols are distributed in such a way electrons around their atomic nucleus. Such magnetic moments

Frontiers in Neuroscience | www.frontiersin.org 6 December 2017 | Volume 11 | Article 714


Fonseca Guerra and Furber Stochastic SNNs on SpiNNaker to Solve CSPs

FIGURE 1 | (A) Solution to the map coloring problem of the world with four colors, and of Australia and Canada with three colors (insets). (B) shows the graph of bordering countries from (A). The plots of the entropy H (top), mean firing rate ν (middle), and states count (bottom) vs. simulation time are shown in (C,D) for the world and Australia maps, evidencing the convergence of the network to satisfying stationary distributions. In the entropy curve, red codes for changes of state between successive time bins, green for no change, and blue for the network satisfying the CSP. In the states count line, black dots mean exploration of new states; the dots are yellow if the network returns to states visited before. In (E) we have plotted the population activity for four randomly chosen CSP variables from (A); each line represents a color domain.
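The map-coloring task reduces to the (variables, domains, constraints) form used throughout the text. As a minimal, solver-independent sketch (the adjacency list of the mainland Australian territories is the standard one; the particular assignment is just one of many valid colorings):

```python
# Map coloring as a binary CSP: variables are regions, the domain is a set
# of colors, and each border is an inequality constraint.
BORDERS = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
           ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
           ("NSW", "V")]  # Tasmania has no land border

def satisfies(assignment, constraints):
    """True iff no two bordering regions share a color."""
    return all(assignment[a] != assignment[b] for a, b in constraints)

# one satisfying three-coloring of the Australian territories
three_coloring = {"WA": 0, "NT": 1, "SA": 2, "Q": 0, "NSW": 1, "V": 0, "T": 0}
assert satisfies(three_coloring, BORDERS)
```

The SNN solver searches this same state space stochastically; the check above is what "all constraints satisfied" (the blue regions of the plots) means for the map CSP.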

interact in complex ways giving rise to a range of microscopic and macroscopic phenomena. A simple description of such interactions is given by the Ising model, where each µ in a crystal is represented by a spin S taking values from {+1, −1} on a regular discrete grid of points {i, j, k}. Furthermore, the interaction of the spins {S_i} is considered only between nearest neighbors and is represented by a constant J_{i,j} which determines whether the two neighboring spins will tend to align parallel (J_{i,j} > 0) or anti-parallel (J_{i,j} < 0) with each other. Given a particular configuration of spin orientations Ψ, the energy of the system is then given by the Hamiltonian operator:

$$ \hat{H} = -\sum_{i,j} J_{i,j}\, S_i\, S_j - h \sum_i S_i \qquad (6) $$

where h is an external magnetic field which tends to align the spins in a preferential orientation (Barahona, 1982). In this form each J_{i,j} defines a constraint C_{i,j} between the values D = {+1, −1} taken by the variables S_i and S_j. It is easy to see that the more constraints are satisfied, the lower becomes the value of Ĥ in Equation (6). This simple model allows the study of phase transitions between disordered configurations at high temperature and ordered ones at low temperature. For ferromagnetic (J_{i,j} > 0) and antiferromagnetic (J_{i,j} < 0) interactions the configurations are similar to those in Figures 3D,E for 3D lattices, which correspond to the stable states of our SNN solver when the Ising models for J_{i,j} > 0 and J_{i,j} < 0 are mapped to an SNN using Algorithm 1 and a 3D grid of 1,000 spins. Figure 3G shows the result for a 1D antiferromagnetic spin chain. It is interesting to note that the statistical mechanics of spin systems has been extensively used to understand the firing dynamics of SNNs, presenting a striking correspondence between their behavior even in complex regimes. Our framework allows the inverse problem of mapping the SNN dynamics to spin interactions. This equivalence between dynamical systems and algorithms has largely been accepted, and we see an advantage in computing directly between equivalent dynamical systems. However, it is clear that the network parameters should be adequately chosen in order to keep the computation valid.

If, instead of fixing J_{i,j} to some value U for all spin pairs {(i, j)}, one allows it to take random values from {U, −U} with probabilities p_AF and p_FM, it will be found that certain interactions become frustrated (unsatisfiable constraints). Figure 3F illustrates the frustration with three antiferromagnetic




FIGURE 2 | Spiking neural network solution to Sudoku puzzles. (A–C) Show the temporal dependence of the network entropy H, firing rate ν and states count for the easy (G), hard (H), and AI Escargot (I) puzzles. The color code is the same as that of Figure 1. In (G–I) red is used for clues and blue for digits found by the solver. (D,F) Illustrate the activity for a randomly selected cell from (A,C), respectively, evidencing competition between the digits; the lines correspond to a smoothing spline fit. (E) Schematic representation of the network architecture for the puzzle in (A).
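Sudoku, as treated here, is an 81-variable CSP with domain {1, …, 9} and all-different constraints over rows, columns, and 3 × 3 subgrids. The satisfaction test the solver must meet can be sketched independently of the network (the grid below is a standard published solution, used only for illustration):

```python
def sudoku_satisfied(grid):
    """Check the 81-variable Sudoku CSP: every row, column and 3x3 subgrid
    must contain each of the digits 1..9 exactly once."""
    rows = [list(r) for r in grid]
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[3 * br + r][3 * bc + c] for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    return all(sorted(unit) == list(range(1, 10))
               for unit in rows + cols + boxes)

SOLVED = [[5, 3, 4, 6, 7, 8, 9, 1, 2],
          [6, 7, 2, 1, 9, 5, 3, 4, 8],
          [1, 9, 8, 3, 4, 2, 5, 6, 7],
          [8, 5, 9, 7, 6, 1, 4, 2, 3],
          [4, 2, 6, 8, 5, 3, 7, 9, 1],
          [7, 1, 3, 9, 2, 4, 8, 5, 6],
          [9, 6, 1, 5, 3, 7, 2, 8, 4],
          [2, 8, 7, 4, 1, 9, 6, 3, 5],
          [3, 4, 5, 2, 8, 6, 1, 7, 9]]
assert sudoku_satisfied(SOLVED)
```

In the SNN, each of the 81 cells is a variable whose nine domain values are competing subpopulations; a state in which this check passes corresponds to the blue (satisfied) regions of the entropy plots.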

interacting spins in a way that any choice of orientation for the third spin will conflict with one or the other. This extension of the Ising model, in which the grid of interactions is a random mixture of AF and FM interactions, was described by Edwards and Anderson (1975). The model is the representation of the spin glass systems found in nature; these are crystals with low concentrations of magnetic impurities which, due to the frustrated interactions, are quenched into a frozen random configuration when the temperature is lowered (at room or high temperatures the magnetic moments of a material are constantly and randomly precessing around their average orientation). The statistical analysis of those systems was fundamental for the evolution of artificial neural networks and machine learning. Furthermore, the optimization problem of finding the minimum energy configuration of a spin glass has been shown to be NP-complete by Barahona (1982). The quenching of the grid happens when it gets trapped in a local minimum of the state space of all possible configurations. In Figures 3A,B we show quenched states found by our SNN with p_AF = 0.5 and p_AF = 0.1, respectively. A spin glass in nature will often be trapped in local minima and will need specific temperature variations to approach a lower energy state; our SNNs replicate this behavior and allow for the study of thermal processes by controlling the time variation and intensity of the excitatory and inhibitory stimulations. If the underlying stochastic process of such stimulations is a good representative of heat in solids, they will correspond to increases and decreases of temperature, respectively, allowing, for example, the implementation of simulated annealing optimization. Figure 3C shows the time evolution of the entropy, firing rate and states count for the antiferromagnetic 3D lattice of Figure 3D; similar plots, but converging to unsatisfying states, are found for the spin glasses in Figures 3A,B. In the case of the ferromagnetic lattice in Figure 3E with a very low noise, the network immediately




FIGURE 3 | Spiking neural network simulation of Ising spin systems. (A,B) Show two 2-dimensional spin glass quenched states obtained with interaction probabilities p_AF = 0.5 and p_AF = 0.1. The results for the 3-dimensional lattices for CSPs of 1,000 spins with antiferromagnetic and ferromagnetic coupling constants are shown in (D,E), respectively. (C) Shows the temporal dependence of the network entropy, firing rate ν and states count during the stochastic search for the system in (D). (F) Illustrates the origin of frustrated interactions in spin glasses. (G) Depicts the result for the 1-dimensional chain. The parameters for the SNNs used are shown in Table 1.
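Equation (6) can be evaluated directly for small systems. The sketch below (illustrative parameters only: h = 0, a periodic six-spin chain, uniform coupling) brute-forces all configurations and confirms that, for an antiferromagnetic coupling J < 0, an alternating configuration minimizes Ĥ, i.e., satisfies all the anti-parallel constraints:

```python
from itertools import product

def ising_energy(spins, J, h=0.0):
    """Equation (6) on a periodic 1D chain with uniform nearest-neighbour
    coupling J:  H = -sum_i J*s_i*s_{i+1} - h*sum_i s_i."""
    n = len(spins)
    pair = sum(J * spins[i] * spins[(i + 1) % n] for i in range(n))
    return -pair - h * sum(spins)

J = -1.0  # antiferromagnetic coupling: J < 0
ground = min(ising_energy(s, J) for s in product((-1, 1), repeat=6))
# the alternating (fully anti-parallel) state reaches the ground energy
assert ground == ising_energy((1, -1, 1, -1, 1, -1), J) == -6.0
```

This is exactly the sense in which "the more constraints are satisfied, the lower becomes the value of Ĥ": every satisfied bond contributes −|J| to the energy.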

converges to a solution; if the noise is high, however, it is necessary to stimulate the network several times to achieve a perfect ordering. This is because more noise implies more energy to violate constraints; even in nature, magnetic ordering is lost at high temperatures.

4. DISCUSSION

The examples of the last section show the basic features of the stochastic search and the use of the entropy, firing rate and number of states to track the behavior of the network. In order to evaluate the performance of the search, we have performed a series of runs for each simulation until the network had been successful 100 times. The histograms of the corresponding convergence times for each example are shown in Figure 4, displaying also the mean µ, standard deviation σ, skewness γ1, success ratio ξ (defined as the number of times the simulation converged to satisfaction over the total number of runs) and the best convergence time tmin of each underlying distribution. The dimensions of the SNNs and simulation parameters for the three CSPs shown here are summarized, respectively, in Tables 1, 2.

The hard Sudoku puzzle of Figure 2 was previously solved using spiking (Habenschuss et al., 2013) and rate-based (Mostafa et al., 2015b) neural networks, with mean solving times of 29 and 153 s, respectively. The solver presented here reduces the mean solving time for this puzzle to 6.36 s, implying a considerable improvement in performance for Sudoku neural solvers. The same network parameters were used to solve the three Sudoku puzzles in order to show the relation between the stochastic search and the puzzle difficulty. Clearly, the average time for convergence increases with the difficulty, but more significant is the strong decrease of the success ratio. Thus, to avoid overfitting, a trade-off between exploratory and greedy behavior needs to be found for the problem at hand. The state-of-the-art Sudoku solvers (see, for example, Norvig, 2009; Dong, 2012) are able to solve puzzles in tens to hundreds of microseconds. Such solvers use backtracking together with deductive methods specific to Sudoku. Consequently, they are not general purpose like the one presented here; it is precisely this specificity that provides their speed-up.

The solution to the map of the territories of Canada, as defined in Figure 1, was presented by D-Wave Systems to demonstrate the applicability of their quantum computer. To find the solution they executed a quantum machine instruction which can return 10,000 samples/s, of which ≈25% solved the problem (Headquarters, 2013). This means an effective time to




FIGURE 4 | Histograms of the convergence time to a solution for the Sudoku, map coloring and spin system problems of Figures 1–3. For each histogram, data from 100 simulations were used. The mean µ, standard deviation σ, skewness γ1, success ratio ξ and the best convergence time tmin are indicated for each problem. The success ratio is defined as the number of times the simulation converged to satisfaction over the total number of simulations.
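The statistics reported in Figure 4 can be recovered directly from the raw convergence times of the runs. A small sketch with made-up times (not measurements from the paper), using population moments for the skewness:

```python
import statistics

def run_stats(times, n_runs):
    """mu, sigma, skewness gamma1, success ratio xi and t_min for the runs
    that converged (times) out of n_runs total, as reported in Figure 4."""
    mu = statistics.mean(times)
    sigma = statistics.pstdev(times)               # population std dev
    m2 = sum((t - mu) ** 2 for t in times) / len(times)
    m3 = sum((t - mu) ** 3 for t in times) / len(times)
    gamma1 = m3 / m2 ** 1.5                        # population skewness
    xi = len(times) / n_runs                       # success ratio
    return mu, sigma, gamma1, xi, min(times)

# illustrative numbers only: 5 successful runs out of 8
mu, sigma, gamma1, xi, tmin = run_stats([0.8, 1.1, 1.3, 2.0, 4.5], n_runs=8)
assert tmin == 0.8 and xi == 5 / 8
```

A long right tail of slow runs (as in the hard puzzles) shows up as a positive γ1.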

TABLE 1 | Network sizes of the SNN solvers of the CMP, Sudoku, and Spin Systems.

Network parameters

CSP Number of neurons Number of synapses Populations (number of variables) Sub-populations (domain size)

World CMP 212,400 14,422,300 193 4
Australia CMP 450 22,920 7 3
Canada CMP 810 39,480 13 3
Sudoku easy 36,675 86,154,125 81 9
Sudoku hard 36,675 86,154,125 81 9
AI Escargot 36,675 86,153,250 81 9
AF ring 1,050 975,500 10 2
Spin 2D lattices 10,050 2,160,000 100 2
Spin AF 3D lattices 100,050 31,050,000 1,000 2
Spin FM 3D lattices 100,050 31,050,000 1,000 2
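The neuron counts in Table 1 scale as (number of variables) × (domain size) × (subpopulation size), plus auxiliary noise populations. The subpopulation size is not stated in this excerpt, so the sketch below assumes a hypothetical 50 neurons per domain subpopulation; under that assumption the principal-network term accounts for the bulk of the listed counts:

```python
def principal_neurons(n_vars, domain_size, pop_size):
    # one subpopulation of pop_size neurons per (variable, value) pair;
    # stimulating/depressing noise populations add a small extra overhead
    return n_vars * domain_size * pop_size

POP_SIZE = 50  # hypothetical subpopulation size, not taken from the paper
assert principal_neurons(1000, 2, POP_SIZE) == 100_000  # cf. 100,050 (3D lattices)
assert principal_neurons(81, 9, POP_SIZE) == 36_450     # cf. 36,675 (Sudoku)
assert principal_neurons(100, 2, POP_SIZE) == 10_000    # cf. 10,050 (2D lattices)
```

The quadratic-looking synapse counts follow from the dense inhibitory connectivity between competing subpopulations.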

solution of 0.4 ms. The power consumption of the machine is 25 kW and it operates at a temperature of 0.015 K. For this same map, our solver uses three SpiNNaker chips, each one consuming at most 1 W of power, and it finds the solution with a mean time of 0.87 s. Additionally, classical techniques like simulated annealing (Chams et al., 1987), genetic algorithms (Gwee et al., 1993), and tabu search (Dorne and Hao, 1999), as well as the more elaborate state-of-the-art algorithms (Chams et al., 1987; Gwee et al., 1993; Dorne and Hao, 1999; Fotakis et al., 2001; Chiarandini and Stützle, 2002; Galinier and Hertz, 2006; Blöchliger and Zufferey,




TABLE 2 | Simulation parameters for the SNN solvers of the CMP, Sudoku, and Spin Systems.

Simulation parameters

CSP Noise populations stimulation (depression) Internal inhibition weights Constraints strength weights External current

World CMP 10 [−0.08, 0.0] [−0.08, 0.0] 0.3
Australia CMP 1 (1) [−1.2, −1.5] [1.2, 1.4] 0.2
Canada CMP 1 (1) [−1.2, −1.5] [1.2, 1.4] 0.17
Sudoku easy 1 (0) [−0.08, 0.0] [−0.08, 0.0] 0.3
Sudoku hard 1 (0) [−0.08, 0.0] [−0.08, 0.0] 0.3
AI Escargot 1 (0) [−0.03, −0.02] [−0.03, −0.02] 0.3
AF Ring 1 (0) [−0.2, 0.0] [−0.2, 0.0] 0.0
Spin 2D lattices 1 (1) [−0.2, 0.0] [−0.2, 0.0] 0.0
Spin AF 3D lattice 1 (0) [−0.2, 0.0] [−0.2, 0.0] 0.0
Spin FM 3D lattice 1 (0) [−0.2, 0.0] [−0.2, 0.0] 0.0
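The role that the noise populations in the table above play can be illustrated, at a very abstract level, by a plain random search over CSP states: noise re-randomizes conflicted variables until a satisfying configuration is found. This sketch shows only the search principle, not the SNN dynamics (names and the toy problem are illustrative):

```python
import random

def stochastic_search(variables, domain, conflicts, steps=10_000, seed=1):
    """Noise-driven search in the spirit of the stochastic SNN dynamics:
    repeatedly re-draw the value of a random conflicted variable until
    every constraint is satisfied (or the step budget runs out)."""
    rng = random.Random(seed)
    state = {v: rng.choice(domain) for v in variables}
    for _ in range(steps):
        bad = [v for v in variables if conflicts(v, state)]
        if not bad:                      # all constraints satisfied
            return state
        v = rng.choice(bad)              # noise picks a conflicted variable
        state[v] = rng.choice(domain)    # noise re-draws its value
    return None

# toy 3-coloring of a 4-cycle: adjacent nodes must differ
EDGES = {(0, 1), (1, 2), (2, 3), (3, 0)}
def conflicts(v, state):
    return any(state[a] == state[b] for a, b in EDGES if v in (a, b))

solution = stochastic_search(range(4), [0, 1, 2], conflicts)
assert solution is not None
```

In the SNN the same effect is obtained physically: Poisson stimulation injects the "energy" that lets conflicted subpopulations change state, while satisfied configurations remain stable.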

2008; Hertz et al., 2008; Ge et al., 2010; Lü and Hao, 2010; Titiloye and Crispin, 2011), solve coloring map problems on time scales ranging from tens of seconds to tens of thousands of seconds, and conventionally have a success ratio below 1 for the allocated time. As seen in Figure 4, this is the same order of magnitude as the time that our SNNs needed to solve the coloring map problems of Figure 1.

It is thus verified that the solution times of the SNNs on SpiNNaker are of the same order of magnitude as those of the systems of interest. Our performance is, however, not competitive with problem-specific solvers, which are able to find solutions in a few microseconds. Although such algorithms are extremely fast, they do not perform well if the problem is not solvable by the presumed strategies. If one still desires to find solutions on the order of microseconds, one could resort to accelerated hardware, e.g., BrainScaleS (Schemmel et al., 2010), which runs 10,000 times faster than real time (resolution of milliseconds). Unfortunately, these systems are still limited in the number of neurons and synapses they are able to handle. Better performance is also expected from the second generation of SpiNNaker, which is currently under development. It is also important to highlight that the NP feature of an algorithm refers to its increasing complexity with the size of the problem, and that the problems presented here correspond to instances of expressly modest sizes. Nevertheless, the number of variables for most problems in robotics and perception has an order of magnitude comparable to that of these CSPs.

The main advantage of stochastic search algorithms is that they are general purpose, able to find satisfactory solutions without needing much detail about the specific problem at hand. Moreover, the exploration of solutions to constraint satisfaction situations never seen before is the typical way in which organisms explore the environment and acquire knowledge about it. To build the solvers of the previous section, we have used only the number of variables, the domain size, and the constraints list; nevertheless, the network showed good performance. Thus, if a system of SNNs is able to collect this kind of information from its environment, it will easily take beneficial decisions.

Future work involves the extension of the framework to solve optimization problems where the constraints are defined by inequalities (e.g., to solve the traveling salesman problem or to find the minimum energy configuration of a spin glass), or other more general non-linear constraints. The main concern with such a class of problems is that the network is not able to recognize the best option among all the configurations that satisfy the constraints. This is a typical disadvantage of stochastic search algorithms. Thus, the network may visit the optimal solution but will not stay in it. To achieve convergence, more complex techniques or even non-stochastic strategies could be needed. The techniques from nonlinear programming could guide the improvement of SNN solvers in decision making under more complex constraints.

In summary, we have presented a neuromorphic implementation of SNNs stimulated with Poisson spike sources which solves CSPs. The network dynamics implements a stochastic search over the problem's space of states which, with an adequate choice of parameters, is able to converge to a stable configuration (or set of configurations) that satisfies all the constraints. A satisfactory performance was found, and further research is needed for CSPs defined by more complex constraints. Furthermore, we presented a software framework to explore new strategies for stochastic searches with SNNs. The code of the framework and the examples presented here is made available at https://github.com/GAFonsecaGuerra/SpiNNakerCSPs.

AUTHOR CONTRIBUTIONS

GF developed the SpiNNaker SNN-CSP solver, performed and analyzed the simulations, and wrote the manuscript. SF provided the initial scripts, supervised the experiments, discussed the results, and reviewed the manuscript.

FUNDING

The SpiNNaker system was constructed with funding from the UK Engineering and Physical Sciences Research Council (EPSRC), under grants EP/G015740/1 and EP/G015775/1, with support from the industry partners ARM Ltd and Silistix Ltd. This research has been funded by




the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement 320689, and the EU Flagship Human Brain Project (H2020). GF acknowledges the Brazilian National Council for Scientific and Technological Development (CNPq) for the Ph.D. scholarship no. 205570/2014-3.

ACKNOWLEDGMENTS

The authors would like to thank Andrew Rowley for helpful discussions, as well as Michael Hopkins and Mantas Mikaitis for proofreading and valuable suggestions to the manuscript.

REFERENCES

Aloupis, G., Demaine, E. D., Guo, A., and Viglietta, G. (2015). Classic Nintendo games are (computationally) hard. Theor. Comput. Sci. 586, 135–160. doi: 10.1016/j.tcs.2015.02.037
Appel, K. I., and Haken, W. (1989). Every Planar Map is Four Colorable, Vol. 98. Providence, RI: American Mathematical Society.
Barahona, F. (1982). On the computational complexity of Ising spin glass models. J. Phys. A Math. Gen. 15, 3241–3253. doi: 10.1088/0305-4470/15/10/028
Benjamin, B. V., Gao, P., McQuinn, E., Choudhary, S., Chandrasekaran, A. R., Bussat, J. M., et al. (2014). Neurogrid: a mixed-analog-digital multichip system for large-scale neural simulations. Proc. IEEE 102, 699–716. doi: 10.1109/JPROC.2014.2313565
Blöchliger, I., and Zufferey, N. (2008). A graph coloring heuristic using partial solutions and a reactive tabu scheme. Comput. Oper. Res. 35, 960–975. doi: 10.1016/j.cor.2006.05.014
Chaitin, G. J. (1982). Register allocation and spilling via graph coloring. SIGPLAN Not. 17, 98–101.
Chams, M., Hertz, A., and De Werra, D. (1987). Some experiments with simulated annealing for coloring graphs. Eur. J. Oper. Res. 32, 260–266.
Chiarandini, M., and Stützle, T. (2002). "An application of iterated local search to the graph coloring problem," in Proceedings of the Computational Symposium on Graph Coloring and its Generalizations (Ithaca, NY), 112–125.
Churchland, P. S. (2008). The impact of neuroscience on philosophy. Neuron 60, 409–411. doi: 10.1016/j.neuron.2008.10.023
Clarke, A. M., Friedrich, J., Tartaglia, E. M., Marchesotti, S., Senn, W., and Herzog, M. H. (2015). Human and machine learning in non-Markovian decision making. PLoS ONE 10:e0123105. doi: 10.1371/journal.pone.0123105
Cobham, A. (1965). "The intrinsic computational difficulty of functions," in Logic, Methodology and Philosophy of Science, Proceedings of the 1964 International Congress, Studies in Logic and the Foundations of Mathematics, ed B.-H. Yehoshua (Jerusalem: North-Holland Publishing Company), 24–30.
Colbourn, C. J. (1984). The complexity of completing partial Latin squares. Discrete Appl. Math. 8, 25–30.
Cook, S. A. (1971). "The complexity of theorem-proving procedures," in Proceedings of the Third Annual ACM Symposium on Theory of Computing (Shaker Heights, OH: ACM), 151–158.
Crair, M. C., and Bialek, W. (1990). "Non-Boltzmann dynamics in networks of spiking neurons," in Advances in Neural Information Processing Systems (Denver, CO), 109–116.
Currin, A., Korovin, K., Ababi, M., Roper, K., Kell, D. B., Day, P. J., et al. (2017). Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA. J. R. Soc. Interface 14:20160990. doi: 10.1098/rsif.2016.0990
Dailey, D. P. (1980). Uniqueness of colorability and colorability of planar 4-regular graphs are NP-complete. Discrete Math. 30, 289–293.
Dong, Z. Y. (2012). ZSolver. Available online at: http://forum.enjoysudoku.com/software/ZSolver1.0.zip
Dorne, R., and Hao, J.-K. (1999). "Tabu search for graph coloring, T-colorings and set T-colorings," in Meta-Heuristics, eds S. Voß, S. Martello, I. H. Osman, and C. Roucairol (New York, NY: Springer), 77–92.
Edwards, S. F., and Anderson, P. W. (1975). Theory of spin glasses. J. Phys. F Met. Phys. 5:965.
Ercsey-Ravasz, M., and Toroczkai, Z. (2012). The chaos within Sudoku. Sci. Rep. 2:725. doi: 10.1038/srep00725
Felgenhauer, B., and Jarvis, F. (2005). Enumerating Possible Sudoku Grids. Available online at: http://www.afjarvis.staff.shef.ac.uk/sudoku/sudoku.pdf
Fortnow, L. (2009). The status of the P versus NP problem. Commun. ACM 52, 78–86. doi: 10.1145/1562164.1562186
Fotakis, D., Likothanassis, S., and Stefanakos, S. (2001). "An evolutionary annealing approach to graph coloring," in Applications of Evolutionary Computing (Como), 120–129.
Furber, S. (2012). To build a brain. IEEE Spectr. 49, 44–49. doi: 10.1109/MSPEC.2012.6247562
Furber, S. (2016). Large-scale neuromorphic computing systems. J. Neural Eng. 13:051001. doi: 10.1088/1741-2560/13/5/051001
Furber, S. B. (2016). Brain-inspired computing. IET Comput. Digit. Tech. 10, 299–305. doi: 10.1049/iet-cdt.2015.0171
Furber, S. B., Galluppi, F., Temple, S., and Plana, L. A. (2014). The SpiNNaker project. Proc. IEEE 102, 652–665. doi: 10.1109/JPROC.2014.2304638
Furber, S. B., Lester, D. R., Plana, L. A., Garside, J. D., Painkras, E., Temple, S., et al. (2013). Overview of the SpiNNaker system architecture. IEEE Trans. Comput. 62, 2454–2467. doi: 10.1109/TC.2012.142
Galinier, P., and Hertz, A. (2006). A survey of local search methods for graph coloring. Comput. Oper. Res. 33, 2547–2562. doi: 10.1016/j.cor.2005.07.028
Garey, M. R., and Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. New York, NY: W. H. Freeman & Co.
Ge, F., Wei, Z., Tian, Y., and Huang, Z. (2010). "Chaotic ant swarm for graph coloring," in IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS), 2010, Vol. 1 (Xiamen), 512–516.
Goodman, D., Khan, B., Khan, S., Luján, M., and Watson, I. (2013). Software transactional memories for Scala. J. Parallel Distrib. Comput. 73, 150–163. doi: 10.1016/j.jpdc.2012.09.015
Grymel, M., and Furber, S. B. (2011). A novel programmable parallel CRC circuit. IEEE Trans. Very Large Scale Integr. Syst. 19, 1898–1902. doi: 10.1109/TVLSI.2010.2058872
Gwee, B.-H., Lim, M.-H., and Ho, J.-S. (1993). "Solving four-colouring map problem using genetic algorithm," in First New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems, 1993 (Dunedin), 332–333.
Habenschuss, S., Jonke, Z., and Maass, W. (2013). Stochastic computations in cortical microcircuit models. PLoS Comput. Biol. 9:e1003311. doi: 10.1371/journal.pcbi.1003311
Headquarters, C. (2013). Programming with D-Wave: Map Coloring Problem. Palo Alto, CA: D-Wave Systems, Inc.
Hertz, A., Plumettaz, M., and Zufferey, N. (2008). Variable space search for graph coloring. Discrete Appl. Math. 156, 2551–2560. doi: 10.1016/j.dam.2008.03.022
Hinton, G. E., and Sejnowski, T. J. (1983). "Optimal perceptual inference," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (New York, NY), 448–453.
Hopcroft, J. E., Motwani, R., and Ullman, J. D. (2006). Introduction to Automata Theory, Languages, and Computation, 3rd Edn. Harlow: Addison-Wesley.
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. U.S.A. 79, 2554–2558.
Hopfield, J. J., and Tank, D. W. (1985). "Neural" computation of decisions in optimization problems. Biol. Cybern. 52, 141–152.
Jonke, Z., Habenschuss, S., and Maass, W. (2016). Solving constraint satisfaction problems with networks of spiking neurons. Front. Neurosci. 10:118. doi: 10.3389/fnins.2016.00118
Karp, R. M. (1972). "Reducibility among combinatorial problems," in Complexity of Computer Computations, eds R. E. Miller et al. (New York, NY: Springer; Plenum Press), 85–103.
Lü, Z., and Hao, J.-K. (2010). A memetic algorithm for graph coloring. Eur. J. Oper. Res. 203, 241–250. doi: 10.1016/j.ejor.2009.07.016




Maass, W. (1995). "On the computational power of noisy spiking neurons," in Proceedings of the 8th International Conference on Neural Information Processing Systems (Denver, CO; Cambridge, MA: MIT Press), 211–217.
Maass, W. (1996). Lower bounds for the computational power of networks of spiking neurons. Neural Comput. 8, 1–40.
Maass, W. (1997). Networks of spiking neurons: the third generation of neural network models. Neural Netw. 10, 1659–1671.
Malaka, R., and Buck, S. (2000). "Solving nonlinear optimization problems using networks of spiking neurons," in Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, Vol. 6 (Como), 486–491.
Maruoka, A. (2011). Concise Guide to Computation Theory. London: Springer Science & Business Media.
Mead, C. (1990). Neuromorphic electronic systems. Proc. IEEE 78, 1629–1636.
Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., et al. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 668–673. doi: 10.1126/science.1254642
Mostafa, H., Müller, L. K., and Indiveri, G. (2013). "Recurrent networks of coupled winner-take-all oscillators for solving constraint satisfaction problems," in Advances in Neural Information Processing Systems (Lake Tahoe, NV), 719–727.
Mostafa, H., Müller, L. K., and Indiveri, G. (2015a). An event-based architecture for solving constraint satisfaction problems. Nat. Commun. 6:8941. doi: 10.1038/ncomms9941
Mostafa, H., Müller, L. K., and Indiveri, G. (2015b). Rhythmic inhibition allows neural networks to search for maximally consistent states. Neural Comput. 27, 2510–2547. doi: 10.1162/NECO_a_00785
Norvig, P. (2009). Solving Every Sudoku Puzzle. Available online at: http://norvig.com/sudoku.html
Painkras, E., Plana, L. A., Garside, J., Temple, S., Galluppi, F., Patterson, C., et al. (2013). SpiNNaker: a 1-W 18-core system-on-chip for massively-parallel neural network simulation. IEEE J. Solid-State Circ. 48, 1943–1953. doi: 10.1109/JSSC.2013.2259038
Russell, S., and Norvig, P. (2009). Artificial Intelligence: A Modern Approach, 3rd Edn. Upper Saddle River, NJ: Pearson.
Schemmel, J., Brüderle, D., Grübl, A., Hock, M., Meier, K., and Millner, S. (2010). "A wafer-scale neuromorphic hardware system for large-scale neural modeling," in Proceedings of 2010 IEEE International Symposium on Circuits and Systems (Paris), 1947–1950.
Shannon, C. E. (1948). A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423.
Stein, R. B. (1967). Some models of neuronal variability. Biophys. J. 7, 37–68.
Titiloye, O., and Crispin, A. (2011). Quantum annealing of the graph coloring problem. Discrete Optim. 8, 376–384. doi: 10.1016/j.disopt.2010.12.001
Wolfram Research, Inc. (2017). Mathematica, Version 11.1. Champaign, IL.

Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The reviewer HM and handling Editor declared their shared affiliation.

Copyright © 2017 Fonseca Guerra and Furber. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.


