Presentation JC
The Izhikevich neuron model:

du/dt = 0.04u² + 5u + 140 − w + I
dw/dt = a(bu − w)
if u ≥ 30 mV (spike threshold): u ← c, w ← w + d

Parameters: a, b, c, d
With code!
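A minimal sketch of the model above in Python (forward-Euler; the regular-spiking parameters a = 0.02, b = 0.2, c = −65, d = 8 and the constant drive I = 10 are assumed here for illustration, not taken from the talk):

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, dt=0.1, t_max=500.0):
    """Forward-Euler simulation of a single Izhikevich neuron.

    du/dt = 0.04 u^2 + 5 u + 140 - w + I
    dw/dt = a (b u - w)
    spike when u >= 30 mV: u <- c, w <- w + d
    """
    u, w = c, b * c                      # start at the reset point
    trace, spike_times = [], []
    for step in range(int(t_max / dt)):
        u += dt * (0.04 * u * u + 5.0 * u + 140.0 - w + I)
        w += dt * a * (b * u - w)
        if u >= 30.0:                    # spike: record time, then reset
            spike_times.append(step * dt)
            u, w = c, w + d
        trace.append(u)                  # recorded after any reset
    return np.array(trace), spike_times

trace, spike_times = izhikevich()        # tonic spiking for this drive
```

Changing a, b, c, d in this sketch reproduces the different firing patterns catalogued in Izhikevich (2003).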
Equivalence

[Figure: equivalence between the Izhikevich/QIF neuron, with variables vk(t), wk(t) and spike/reset at +∞/−∞, and the theta neuron with phase θ]

Izhikevich 2003, IEEE Trans. Neural Networks, 14(6). Adapted from Wikipedia.
Mean-Field formulation

The network of Izhikevich neurons:
• All-to-all coupled
• CA3 Izhikevich neurons…
• They use a variant of the standard Izhikevich neuron, with single-neuron variables vk(t), wk(t)

Goal: derive closed equations for the population, dr/dt = ? and d⟨v⟩/dt = ?

• Lorentzian distribution ansatz: the density ρ(v, w, η, t) is taken Lorentzian in v, with width x(η, t) and center y(η, t); the heterogeneity η is drawn from a Lorentzian L(η) with center η̄ and half-width Δη
• [Montbrió et al. 2015, Phys. Rev. X 5]
• [Ott and Antonsen, 2008, Chaos, 18(3)]
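For reference, a sketch of the Lorentzian ansatz and the rate equations it produces in the adaptation-free QIF case of Montbrió et al. (2015) (time constant τ = 1 and coupling strength J assumed here; the adaptation variable w adds further equations in the paper under discussion):

```latex
% Lorentzian ansatz: the voltage density is Lorentzian in v for each eta,
% with half-width x(eta,t) and center y(eta,t)
\rho(v \mid \eta, t) = \frac{1}{\pi}\,
  \frac{x(\eta,t)}{\left[v - y(\eta,t)\right]^2 + x(\eta,t)^2},
\qquad
L(\eta) = \frac{1}{\pi}\,
  \frac{\Delta_\eta}{(\eta-\bar{\eta})^2 + \Delta_\eta^2}.

% Closed firing-rate equations for the QIF network without adaptation:
\dot{r} = \frac{\Delta_\eta}{\pi} + 2\, r\, \langle v \rangle,
\qquad
\dot{\langle v \rangle} = \langle v \rangle^2 + \bar{\eta}
  + J\, r - \pi^2 r^2.
```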
Mean-field modelling ideas: d⟨w⟩/dt = ?

Moment closure assumption: ⟨w | v, η⟩ ≈ ⟨w | η⟩
• [Nicola & Campbell, 2013b]: the validity of this assumption at high firing rates is supported by numerical simulations of the full network
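A toy numerical illustration of this closure (synthetic data, not the paper's network): if w depends strongly on η but only weakly on v, then conditioning additionally on v barely moves the conditional mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: w is driven mainly by eta and only weakly by v (eps small),
# mimicking the regime where the moment closure is a good approximation.
n = 200_000
eta = rng.standard_normal(n)
v = rng.standard_normal(n)
eps = 0.05
w = 2.0 * eta + eps * v + 0.1 * rng.standard_normal(n)

# Fix a narrow eta-slice and compare E[w | v, eta] for extreme v
# against E[w | eta] over the whole slice.
sl = np.abs(eta - 0.5) < 0.05
w_eta = w[sl].mean()             # <w | eta>
lo = w[sl & (v < -1.0)].mean()   # <w | v < -1, eta>
hi = w[sl & (v > 1.0)].mean()    # <w | v > 1, eta>
# lo and hi deviate from w_eta only at order eps
```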
Mean-field modelling ideas: d⟨w⟩/dt = ?

• Perturbation theory: valid when ⟨w | η⟩ ≫ w_jump, i.e. the mean adaptation with parameter η is sufficiently greater than the after-spike jump size [Nicola & Campbell, 2013, JCN, 35(1)]
The mean-field model

ODEs:

[Figure: two-parameter bifurcation diagram (axes spanning 0.05 to 0.3) with regions EP−, EP+, PO−, PO+; weakly adapting (WA) regime]
Two-population model

What can we learn from the model?
• New discovery
• SA and WA regimes
Assumptions…
To assess the validity of the mean-field approximation, we examine all the assumptions imposed during the derivation.
Assumptions (1)
• All-to-all connectivity within the population and between different populations
• Reasonable for the application to the CA3 region of the hippocampus
• There are formalisms for sparse networks [Ferguson et al., 2015; Di Volo & Torcini, 2018; Bi et al., 2021; Lin et al., 2020]
Assumptions (2)
• N → ∞, the thermodynamic limit
• As the number of neurons increases, the spread of the network variables around the mean narrows, and the network dynamics approach those of the mean-field model
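A quick statistical illustration of this narrowing (generic i.i.d. variables standing in for the network variables, so a sketch of the principle only): the fluctuation of the population mean shrinks like 1/√N.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_spread(n_neurons, n_trials=2000):
    """Std. dev. of the population mean over many independent trials."""
    samples = rng.normal(0.0, 1.0, size=(n_trials, n_neurons))
    return samples.mean(axis=1).std()

# The spread of the population mean shrinks roughly like 1/sqrt(N)
spreads = {n: mean_spread(n) for n in (10, 100, 1000)}
```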
Assumptions (3)
• ⟨w|𝜂⟩ ≫ wjump: the mean adaptation variable with parameter 𝜂 is sufficiently greater than the homogeneous after-spike jump value
• This assumption is required to derive the differential equation for ⟨w⟩
• However, even when it is only marginally satisfied, the mean-field description still captures the essential shape and frequency of the firing activity of the network
• The accuracy could be improved by including higher-order terms in the Taylor expansion ⇒ an extra term in the equation for ⟨w⟩
Assumptions (4)
• ⟨w|v,𝜂⟩ ≈ ⟨w|𝜂⟩: the first-order moment closure assumption, also called the adiabatic approximation
• This assumption entails fast dynamics of the membrane potential relative to the adaptation variable
• We could employ a higher-order moment closure approximation, although we would need to weigh the cost of the added effort against the improvement in accuracy of the resulting mean-field model [Ly & Tranchina, 2007]
Assumptions (5)
• The Lorentzian ansatz on the conditional density function of the membrane potential v
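In practice the heterogeneity ηk is often drawn deterministically at evenly spaced Lorentzian quantiles (as in Montbrió et al. 2015); a sketch, with center and half-width values chosen purely for illustration:

```python
import numpy as np

def lorentzian_quantiles(n, center=0.0, width=1.0):
    """Deterministic Lorentzian (Cauchy) sample at evenly spaced quantiles."""
    k = np.arange(1, n + 1)
    return center + width * np.tan(0.5 * np.pi * (2 * k - n - 1) / (n + 1))

# Hypothetical center/half-width values, for illustration only
eta = lorentzian_quantiles(1001, center=-0.5, width=0.02)
```

Drawing ηk this way removes sampling noise from the heterogeneity, so finite-size deviations from the mean-field model come from the dynamics alone.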
Then… 2 ~ 10 neurons!
• From code…
• Spikes/Deltas/Convolutions…?
Ask the forums!
• Not much action… :-(
• ~ 1 week
• ~ 1 answer
• Not always useful
Personal conclusions…
• A full month and still going on…
• Very elegant platform
• New gold standard?
• Especially if planning to run thousands of neurons
• GPU/cluster capabilities
• However…
• Documentation not perfect (spikes!)
• Forums not very active, but they answer in the end, sort of…
• With my own code, only a couple of weeks, probably
• Not worth the effort… :-(
Final piece of advice
• Stay safe and far from the water!
CNS, JC talks
Exact mean-field models for
spiking neural networks with
adaptation
Liang Chen, Sue Ann Campbell
Journal of Computational Neuroscience (2022) 50:445–469
https://doi.org/10.1007/s10827-022-00825-9
Thanks!