Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons

Javier Baladron (javier.baladron@inria.fr), Diego Fasoli (diego.fasoli@inria.fr), Olivier Faugeras (olivier.faugeras@inria.fr), Jonathan Touboul (jonathan.touboul@inria.fr)

NeuroMathComp Laboratory, INRIA, Sophia-Antipolis Méditerranée, 06902, France

NeuroMathComp Laboratory, ENS, Paris, 75013, France

BANG Laboratory, INRIA, Paris, 75013, France

Mathematical Neuroscience Lab, Center of Interdisciplinary Research in Biology, Collège de France, Paris, 75005, France

CNRS/UMR 7241-INSERM U1050, Université Pierre et Marie Curie, ED 158, Paris, 75005, France

MEMOLIFE Laboratory of Excellence and Paris Science Lettre, 11, Place Marcelin Berthelot, Paris, 75005, France

The Journal of Mathematical Neuroscience 2012, 2:10. http://www.mathematical-neuroscience.com/content/2/1/10. doi:10.1186/2190-8567-2-10. PMID: 22657695.
Received: 19 October 2011. Accepted: 9 March 2012. Published: 31 May 2012. © 2012 Baladron et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Keywords: mean-field limits; propagation of chaos; stochastic differential equations; McKean-Vlasov equations; Fokker-Planck equations; neural networks; neural assemblies; Hodgkin-Huxley neurons; FitzHugh-Nagumo neurons

Abstract

We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons’ initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation of chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite-size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g., a bifurcation analysis.

Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80.

1 Introduction

Cortical activity displays highly complex behaviors which are often characterized by the presence of noise. Reliable responses to specific stimuli often arise at the level of population assemblies (cortical areas or cortical columns) featuring a very large number of neuronal cells, each presenting a highly nonlinear behavior, interconnected in a very intricate fashion. Understanding the global behavior of large-scale neural assemblies has been a great endeavor in the past decades. One of the main interests of large-scale modeling is the characterization of brain functions, which is what most imaging techniques record. Moreover, anatomical data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50 μm to 1 mm, containing on the order of 100 to 100,000 neurons belonging to a few different types. These columns have specific functions; for example, in the human visual area V1, they respond to preferential orientations of bar-shaped visual stimuli. In this case, information processing does not occur at the scale of individual neurons but rather corresponds to an activity integrating the individual dynamics of many interacting neurons, resulting in a mesoscopic signal that arises through averaging effects and effectively depends on a few control parameters. This vision, inherited from statistical physics, requires that the space scale be large enough to include sufficiently many neurons and small enough so that the region considered is homogeneous. This is, in effect, the case of the cortical columns.

In the field of mathematics, studying the limits of systems of interacting particles has been a long-standing problem that presents many technical difficulties. One of the questions addressed was to characterize the limit of the probability distribution of an infinite set of interacting diffusion processes and the fluctuations around this limit for a finite number of processes. The first breakthroughs on this question are due to Henry McKean (see, e.g. 1 2 ). It was then investigated in various contexts by a large number of authors such as Braun and Hepp 3 , Dawson 4 and Dobrushin 5 , and most of the theory was achieved by Tanaka and collaborators 6 7 8 9 and of course Sznitman 10 11 12 . When all particles (in our case, neurons) have independent, identically distributed initial conditions, it is mathematically proved, using tools from stochastic analysis (the Wasserstein distance, large deviation techniques), that in the limit where the number of particles tends to infinity, any finite number of particles behaves independently of the others, and they all have the same probability distribution, which satisfies a nonlinear Markov equation. Finite-size fluctuations around the limit are derived in a general case in 10 . Most of these models use a standard hypothesis of global Lipschitz continuity and a linear growth condition on the drift and diffusion coefficients, as well as the Lipschitz continuity of the interaction function. Extensions to discontinuous càdlàg processes including singular interactions (through a local time process) were developed in 11 . Problems involving singular interaction variables (e.g. nonsmooth functions) are also widely studied in the field but are not relevant in our case.

In the present article, we apply this mathematical approach to the problem of interacting neurons arising in neuroscience. To this end, we extend the theory to encompass a wider class of models. This implies the use of locally (instead of globally) Lipschitz coefficients and of a Lyapunov-like growth condition replacing the customary linear growth assumption for some of the functions appearing in the equations. The contributions of this article are fourfold:

1. We derive, in a rigorous manner, the mean-field equations resulting from the interaction of infinitely many neurons in the case of widely accepted models of spiking neurons and synapses.

2. We prove a propagation of chaos property which shows that in the mean-field limit, the neurons become independent, in agreement with some recent experimental work 13 and with the idea that the brain processes information in a somewhat optimal way.

3. We show, numerically, that the mean-field limit is a good approximation of the mean activity of the network even for fairly small sizes of neuronal populations.

4. We suggest, numerically, that the changes in the dynamics of the mean-field limit when varying parameters can be understood by studying the mean-field Fokker-Planck equation.

We start by reviewing such models in the ‘Spiking conductance-based models’ section to motivate the present study. It is in the ‘Mean-field equations for conductance-based models’ section that we provide the limit equations describing the behaviors of an infinite number of interacting neurons and state and prove the existence and uniqueness of solutions in the case of conductance-based models. The detailed proof of the second main theorem, that of the convergence of the network equations to the mean-field limit, is given in the Appendix. In the ‘Numerical simulations’ section, we begin to address the difficult problem of the numerical simulation of the mean-field equations and show some results indicating that they may be an efficient way of representing the mean activity of a finite-size network as well as to study the changes in the dynamics when varying biological parameters. The final ‘Discussion and conclusion’ section focuses on the conclusions of our mathematical and numerical results and raises some important questions for future work.

2 Spiking conductance-based models

This section sets the stage for our results. We review in the ‘Hodgkin-Huxley model’ section the Hodgkin-Huxley model equations in the case where both the membrane potential and the ion channel equations include noise. We then proceed in the ‘The FitzHugh-Nagumo model’ section with the FitzHugh-Nagumo equations in the case where the membrane potential equation includes noise. We next discuss in the ‘Models of synapses and maximum conductances’ section the connectivity models of networks of such neurons, starting with the synapses, electrical and chemical, and finishing with several stochastic models of the synaptic weights. In the ‘Putting everything together’ section, we write the network equations in the various cases considered in the previous section and express them in a general abstract mathematical form that is the one used for stating and proving the results about the mean-field limits in the ‘Mean-field equations for conductance-based models’ section. Before we jump into this, we conclude in the ‘Mean-field methods in computational neuroscience: a quick overview’ section with a brief overview of the mean-field methods popular in computational neuroscience.

From the mathematical point of view, each neuron is a complex system, whose dynamics is often described by a set of stochastic nonlinear differential equations. Such models aim at reproducing the biophysics of ion channels governing the membrane potential and therefore the spike emission. This is the case of the classical model of Hodgkin and Huxley 14 and of its reductions 15 16 17 . Simpler models use discontinuous processes mimicking the spike emission by modeling the membrane voltage and considering that spikes are emitted when it reaches a given threshold. These are called integrate-and-fire models 18 19 and will not be addressed here. The models of large networks we deal with here therefore consist of systems of coupled nonlinear diffusion processes.

2.1 Hodgkin-Huxley model

One of the most important models in computational neuroscience is the Hodgkin-Huxley model. Using pioneering experimental techniques of that time, Hodgkin and Huxley 14 determined that the activity of the giant squid axon is controlled by three major currents: a voltage-gated persistent K⁺ current with four activation gates, a voltage-gated transient Na⁺ current with three activation gates and one inactivation gate, and an Ohmic leak current, $I_L$, which is carried mostly by chloride ions (Cl⁻). In this paper, we only use the space-clamped Hodgkin-Huxley model which we slightly generalize to a stochastic setting in order to better take into account the variability of the parameters. The advantages of this model are numerous, and one of the most prominent aspects in its favor is its correspondence with the most widely accepted formalism to describe the dynamics of the nerve cell membrane. A very extensive literature can also be found about the mathematical properties of this system, and it is now quite well understood.

The basic electrical relation between the membrane potential and the currents is simply:

$$C\,\frac{dV}{dt} = I^{\mathrm{ext}}(t) - I_K - I_{Na} - I_L,$$

where $I^{\mathrm{ext}}(t)$ is an external current. The detailed expressions for $I_K$, $I_{Na}$ and $I_L$ can be found in several textbooks, e.g. 17 20 :

$$I_K = \bar g_K\, n^4\,(V - E_K), \qquad I_{Na} = \bar g_{Na}\, m^3 h\,(V - E_{Na}), \qquad I_L = g_L\,(V - E_L),$$

where $\bar g_K$ (respectively, $\bar g_{Na}$) is the maximum conductance of the potassium (respectively, sodium) channel; $g_L$ is the conductance of the Ohmic channel; and n (respectively, m) is the activation variable for K⁺ (respectively, Na⁺). There are four (respectively, three) activation gates for the K⁺ (respectively, Na⁺) current, which accounts for the power 4 (respectively, 3) in the expression of $I_K$ (respectively, $I_{Na}$). h is the inactivation variable for Na⁺. These activation/inactivation variables, denoted by $x \in \{n, m, h\}$ in what follows, represent proportions (they vary between 0 and 1) of open gates. The proportions of open channels are given by the functions $n^4$ and $m^3 h$. The proportions of open gates can be computed through a Markov chain model, assuming the gates open with rate $\rho_x(V)$ (the dependence on V accounts for the voltage-gating of the gate) and close with rate $\zeta_x(V)$. These processes can be shown to converge, under standard assumptions, towards the following ordinary differential equations:

$$\dot x = \rho_x(V)\,(1 - x) - \zeta_x(V)\,x, \qquad x \in \{n, m, h\}.$$

The functions $\rho_x(V)$ and $\zeta_x(V)$ are smooth functions whose exact expressions can be found in several textbooks such as the ones cited above. Note that half of these six functions are unbounded when the voltage goes to −∞, being of the form $k_1 e^{-k_2 V}$, with $k_1$ and $k_2$ two positive constants. Since these functions have been fitted to experimental data corresponding to values of the membrane potential roughly between −100 and 100 mV, it is clear that extremely large negative values of this variable do not have any physiological meaning. We can therefore safely and smoothly perturb these functions so that they are upper-bounded by some large (but finite) positive number for these values of the membrane potential. Hence, the functions $\rho_x$ and $\zeta_x$ are bounded and Lipschitz continuous for $x \in \{n, m, h\}$. A more precise model taking into account the finite number of channels through the Langevin approximation results in the stochastic differential equation

$$dx_t = \big(\rho_x(V)\,(1 - x) - \zeta_x(V)\,x\big)\,dt + \sqrt{\rho_x(V)\,(1 - x) + \zeta_x(V)\,x}\;\chi(x)\,dW_t^x,$$

where $W_t^x$, $x \in \{n, m, h\}$, are independent standard Brownian motions, and $\chi(x)$ is a function that vanishes outside $(0, 1)$. This guarantees that the solution remains a proportion, i.e. lies between 0 and 1, for all times. We define

$$\sigma_x(V, x) = \sqrt{\rho_x(V)\,(1 - x) + \zeta_x(V)\,x}\;\chi(x).$$

In order to complete our stochastic Hodgkin-Huxley neuron model, we assume that the external current $I^{\mathrm{ext}}(t)$ is the sum of a deterministic part, noted $I(t)$, and a stochastic part, a white noise with variance $\sigma_{ext}$ built from a standard Brownian motion $W_t$ independent of the $W_t^x$, $x \in \{n, m, h\}$. Considering the current produced by the influx of ions through these channels, we end up with the following system of stochastic differential equations:

$$\begin{cases}
C\,dV_t = \Big(I(t) - \bar g_K\, n^4\,(V - E_K) - \bar g_{Na}\, m^3 h\,(V - E_{Na}) - \bar g_L\,(V - E_L)\Big)\,dt + \sigma_{ext}\,dW_t,\\[4pt]
dx_t = \big(\rho_x(V)\,(1 - x) - \zeta_x(V)\,x\big)\,dt + \sigma_x(V, x)\,dW_t^x, \qquad x \in \{n, m, h\}.
\end{cases}$$

This is a stochastic version of the Hodgkin-Huxley model. The functions $\rho_x$ and $\zeta_x$ are bounded and Lipschitz continuous (see the discussion above). The functions n, m and h are bounded between 0 and 1; hence, the functions $n^4$ and $m^3 h$ are Lipschitz continuous.

To illustrate the model, we show in Figure 1 the time evolution of the three ion channel variables n, m and h as well as that of the membrane potential V for a constant input I = 20.0 . The system of ordinary differential equations (ODEs) has been solved using a Runge-Kutta scheme of order 4 with an integration time step Δ t = 0.01 . In Figure 2, we show the same time evolution when noise is added to the channel variables and the membrane potential.

Figure 1. Solution of the noiseless Hodgkin-Huxley model. Left: time evolution of the three ion channel variables n, m and h. Right: corresponding time evolution of the membrane potential. Parameters are given in the text.

Figure 2. Noisy Hodgkin-Huxley model. Left: time evolution of the three ion channel variables n, m and h. Right: corresponding time evolution of the membrane potential. Parameters are given in the text.

For the membrane potential, we have used $\sigma_{ext} = 3.0$ (see Equation 2), while for the noise in the ion channels, we have used the following χ function (see Equation 1):

$$\chi(x) = \begin{cases}
\Gamma\, e^{-\Lambda/(1 - (2x - 1)^2)} & \text{if } 0 < x < 1,\\
0 & \text{if } x \le 0 \text{ or } x \ge 1,
\end{cases}$$

with Γ = 0.1 and Λ = 0.5 for all the ion channels. The system of SDEs has been integrated using the Euler-Maruyama scheme with Δ t = 0.01 .
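For concreteness, the following minimal Python sketch integrates the stochastic Hodgkin-Huxley system (Equation 2) with the Euler-Maruyama scheme. The values $I = 20.0$, $\sigma_{ext} = 3.0$, $\Gamma = 0.1$, $\Lambda = 0.5$ and $\Delta t = 0.01$ are those quoted above; the rate functions $\rho_x$, $\zeta_x$ and the remaining constants (capacitance, conductances, reversal potentials) are standard textbook values and should be read as assumptions, since the text does not list them.

```python
import numpy as np

# Standard textbook parameters (assumed): C in uF/cm^2, conductances in
# mS/cm^2, potentials in mV.
C, gK, gNa, gL = 1.0, 36.0, 120.0, 0.3
EK, ENa, EL = -77.0, 50.0, -54.4
I, sigma_ext = 20.0, 3.0                 # input and noise level from the text
Gamma, Lam, dt = 0.1, 0.5, 0.01          # chi parameters and step from the text

# Standard opening/closing rates rho_x, zeta_x (assumed fits).
rho = {'n': lambda V: 0.01*(V + 55)/(1 - np.exp(-(V + 55)/10)),
       'm': lambda V: 0.1*(V + 40)/(1 - np.exp(-(V + 40)/10)),
       'h': lambda V: 0.07*np.exp(-(V + 65)/20)}
zeta = {'n': lambda V: 0.125*np.exp(-(V + 65)/80),
        'm': lambda V: 4.0*np.exp(-(V + 65)/18),
        'h': lambda V: 1.0/(1 + np.exp(-(V + 35)/10))}

def chi(x):
    # Vanishes outside (0, 1), keeping the gating variables in [0, 1].
    return Gamma*np.exp(-Lam/(1 - (2*x - 1)**2)) if 0 < x < 1 else 0.0

rng = np.random.default_rng(0)
V, x = -65.0, {'n': 0.32, 'm': 0.05, 'h': 0.60}
for _ in range(int(100.0/dt)):           # 100 ms of activity
    dV = (I - gK*x['n']**4*(V - EK) - gNa*x['m']**3*x['h']*(V - ENa)
          - gL*(V - EL))/C
    V += dV*dt + (sigma_ext/C)*np.sqrt(dt)*rng.standard_normal()
    for k in x:
        drift = rho[k](V)*(1 - x[k]) - zeta[k](V)*x[k]
        diffusion = np.sqrt(rho[k](V)*(1 - x[k]) + zeta[k](V)*x[k])*chi(x[k])
        x[k] = float(np.clip(x[k] + drift*dt
                             + diffusion*np.sqrt(dt)*rng.standard_normal(),
                             0.0, 1.0))
```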

Because the Hodgkin-Huxley model is rather complicated and high-dimensional, many reductions have been proposed, in particular to two dimensions instead of four. These reduced models include the famous FitzHugh-Nagumo and Morris-Lecar models. These two models are two-dimensional approximations of the original Hodgkin-Huxley model based on quantitative observations of the time scales of the dynamics of the variables and on the identification of variables evolving on similar time scales. Most reduced models still comply with the Lipschitz and linear growth conditions ensuring the existence and uniqueness of a solution, except for the FitzHugh-Nagumo model, which we now introduce.

2.2 The FitzHugh-Nagumo model

In order to reduce the dimension of the Hodgkin-Huxley model, FitzHugh 15 16 21 introduced a simplified two-dimensional model. The motivation was to isolate conceptually essential mathematical features yielding excitation and transmission properties from the analysis of the biophysics of sodium and potassium flows. Nagumo and collaborators 22 followed up with an electrical system reproducing the dynamics of this model and studied its properties. The model consists of two equations, one governing a voltage-like variable V with a cubic nonlinearity, the other a slower recovery variable w. It can be written as:

$$\begin{cases}
\dot V = f(V) - w + I^{\mathrm{ext}},\\
\dot w = c\,(V + a - b\,w),
\end{cases}$$

where $f(V)$ is a cubic polynomial in V which we choose, without loss of generality, to be $f(V) = V - V^3/3$. The parameter $I^{\mathrm{ext}}$ models the input current the neuron receives; the parameters a, b > 0 and c > 0 describe the kinetics of the recovery variable w. As in the case of the Hodgkin-Huxley model, the current $I^{\mathrm{ext}}$ is assumed to be the sum of a deterministic part, noted I, and a stochastic white noise accounting for the randomness of the environment. The stochastic FitzHugh-Nagumo equation is deduced from Equation 4 and reads:

$$\begin{cases}
dV_t = \Big(V_t - \dfrac{V_t^3}{3} - w_t + I\Big)\,dt + \sigma_{ext}\,dW_t,\\
dw_t = c\,(V_t + a - b\,w_t)\,dt.
\end{cases}$$

Note that because the function f ( V ) is not globally Lipschitz continuous (only locally), the well-posedness of the stochastic differential equation (Equation 5) does not follow immediately from the standard theorem which assumes the global Lipschitz continuity of the drift and diffusion coefficients. This question is settled below by Proposition 1.

We show in Figure 3 the time evolution of the adaptation variable and the membrane potential in the case where the input I is constant and equal to 0.7. The left-hand side of the figure shows the case with no noise while the right-hand side shows the case where noise of intensity $\sigma_{ext} = 0.25$ (see Equation 5) has been added.

Figure 3. Time evolution of the membrane potential and the adaptation variable in the FitzHugh-Nagumo model. Left: without noise. Right: with noise. See text.

The deterministic model has been solved with a Runge-Kutta method of order 4, while the stochastic model has been integrated with the Euler-Maruyama scheme. In both cases, we have used an integration time step Δt = 0.01.
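A minimal Euler-Maruyama sketch of Equation 5 follows; $I = 0.7$, $\sigma_{ext} = 0.25$ and $\Delta t = 0.01$ are the values used above, while a, b and c are illustrative values, since the text does not list them.

```python
import numpy as np

a, b, c = 0.7, 0.8, 0.08                 # assumed kinetics of w
I, sigma_ext, dt = 0.7, 0.25, 0.01       # values from the text
rng = np.random.default_rng(1)

V, w = -1.0, -0.5                        # arbitrary initial condition
for _ in range(int(200.0/dt)):           # 200 time units
    dV = (V - V**3/3 - w + I)*dt + sigma_ext*np.sqrt(dt)*rng.standard_normal()
    dw = c*(V + a - b*w)*dt
    V, w = V + dV, w + dw
```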

2.3 Partial conclusion

We have reviewed two main models of space-clamped single neurons: the Hodgkin-Huxley and FitzHugh-Nagumo models. These models are stochastic, including various sources of noise: external and internal. The noise sources are supposed to be independent Brownian processes. We have shown that the resulting stochastic differential Equations 2 and 5 are well-posed. As pointed out above, this analysis extends to a large number of reduced versions of the Hodgkin-Huxley model such as those that can be found in the book 17 .

2.4 Models of synapses and maximum conductances

We now study the situation in which several of these neurons are connected to one another forming a network, which we will assume to be fully connected. Let N be the total number of neurons. These neurons belong to P populations, e.g. pyramidal cells or interneurons. If the index of a neuron is i, $1 \le i \le N$, we note $p(i) = \alpha$, $1 \le \alpha \le P$, the population it belongs to. We note $N_{p(i)}$ the number of neurons in population p(i). Since we want to be as close to biology as possible while keeping the possibility of a mathematical analysis of the resulting model, we consider two types of simplified, but realistic, synapses: chemical and electrical (or gap junctions). The following material concerning synapses is standard and can be found in textbooks 20 . The new, and we think important, twist is to add noise to our models. To unify notations, in what follows, i is the index of a postsynaptic neuron belonging to population $\alpha = p(i)$, and j is the index of a presynaptic neuron to neuron i belonging to population $\gamma = p(j)$.

2.4.1 Chemical synapses

The principle of functioning of chemical synapses is based on the release of a neurotransmitter in the presynaptic neuron's synaptic bouton, which binds to specific receptors on the postsynaptic cell. This process, similar to the currents described in the Hodgkin and Huxley model, is governed by the value of the cell membrane potential. We use the model described in 20 23 , which features a quite realistic biophysical representation of the processes at work in spike transmission and is consistent with the formalism used above to describe the conductances of other ion channels. The model emulates the fact that, following the arrival of an action potential at the presynaptic terminal, a neurotransmitter is released in the synaptic cleft and binds to the postsynaptic receptor with a first-order kinetic scheme. Let j be a presynaptic neuron to the postsynaptic neuron i. The synaptic current induced by the synapse from j to i can be modelled as the product of a conductance $g_{ij}$ with a voltage difference:

$$I_{ij}^{syn} = g_{ij}(t)\,\big(V_i - V_{rev}^{ij}\big).$$

The synaptic reversal potentials $V_{rev}^{ij}$ are approximately constant within each population: $V_{rev}^{ij} := V_{rev}^{\alpha\gamma}$. The conductance $g_{ij}$ is the product of the maximum conductance $J_{ij}(t)$ with a function $y_j(t)$ that denotes the fraction of open channels and depends only upon the presynaptic neuron j:

$$g_{ij}(t) = J_{ij}(t)\,y_j(t).$$

The function $y_j(t)$ is often modelled 20 as satisfying the following ordinary differential equation:

$$\dot y_j(t) = a_r^j\, S_j(V_j)\,\big(1 - y_j(t)\big) - a_d^j\, y_j(t).$$

The positive constants $a_r^j$ and $a_d^j$ characterize the rise and decay rates, respectively, of the synaptic conductance. Their values depend only on the population of the presynaptic neuron j, i.e. $a_r^j := a_r^\gamma$ and $a_d^j := a_d^\gamma$, but may vary significantly from one population to the next. For example, gamma-aminobutyric acid (GABA)B synapses are slow to activate and slow to turn off while the reverse is true for GABAA and AMPA synapses 20 . $S_j(V_j)$ denotes the concentration of the transmitter released into the synaptic cleft by a presynaptic spike. We assume that the function $S_j$ is sigmoidal and that its exact form depends only upon the population of the neuron j. Its expression is given by (see, e.g. 20 ):

$$S_\gamma(V_j) = \frac{T_{max}^\gamma}{1 + e^{-\lambda_\gamma (V_j - V_T^\gamma)}}.$$

Destexhe et al. 23 give some typical values of the parameters: $T_{max} = 1$ mM, $V_T = 2$ mV and $1/\lambda = 5$ mV.

Because of the dynamics of ion channels and of their finite number, and similarly to the channel noise models derived through the Langevin approximation in the Hodgkin-Huxley model (Equation 2), we assume that the proportion of active channels is actually governed by a stochastic differential equation, with a diffusion coefficient $\sigma_\gamma^y(V, y)$ depending only on the population γ of j, of the form of Equation 1:

$$dy_t^j = \Big(a_r^\gamma\, S_\gamma(V_j)\,\big(1 - y_j(t)\big) - a_d^\gamma\, y_j(t)\Big)\,dt + \sigma_\gamma^y(V_j, y_j)\,dW_t^{j,y}.$$

In detail, we have

$$\sigma_\gamma^y(V_j, y_j) = \sqrt{a_r^\gamma\, S_\gamma(V_j)\,(1 - y_j) + a_d^\gamma\, y_j}\;\chi(y_j).$$

Remember that the form of the diffusion term guarantees that the solutions to this equation with appropriate initial conditions stay between 0 and 1. The Brownian motions $W^{j,y}$ are assumed to be independent from one neuron to the next.
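As an illustration, the sketch below integrates the noisy synaptic variable $y_j$ of Equations 7 to 9 for a single synapse driven by a prescribed presynaptic voltage trace. The values $T_{max} = 1$, $V_T = 2$ and $1/\lambda = 5$ are those quoted above; the rise and decay rates, the noise parameters and the spike train are illustrative assumptions.

```python
import numpy as np

Tmax, VT, lam = 1.0, 2.0, 1/5.0          # values from Destexhe et al.
ar, ad = 1.1, 0.19                       # assumed rise/decay rates
Gamma, Lam, dt = 0.1, 0.5, 0.01          # assumed noise parameters and step
rng = np.random.default_rng(2)

def S(V):
    return Tmax/(1 + np.exp(-lam*(V - VT)))

def chi(y):                              # vanishes outside (0, 1)
    return Gamma*np.exp(-Lam/(1 - (2*y - 1)**2)) if 0 < y < 1 else 0.0

# Assumed presynaptic voltage: -65 mV baseline with 1 ms spikes to +30 mV
# every 25 ms.
t_axis = np.arange(0.0, 100.0, dt)
Vpre = np.where((t_axis % 25.0) < 1.0, 30.0, -65.0)

y = 0.0
for V in Vpre:
    drift = ar*S(V)*(1 - y) - ad*y
    diffusion = np.sqrt(ar*S(V)*(1 - y) + ad*y)*chi(y)
    y = float(np.clip(y + drift*dt
                      + diffusion*np.sqrt(dt)*rng.standard_normal(), 0.0, 1.0))
```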

2.4.2 Electrical synapses

The electrical synapse transmission is rapid and stereotyped and is mainly used to send simple depolarizing signals for systems requiring the fastest possible response. At the location of an electrical synapse, the separation between two neurons is very small (≈3.5 nm). This narrow gap is bridged by the gap junction channels, specialized protein structures that conduct the flow of ionic current from the presynaptic to the postsynaptic cell (see, e.g. 24 ).

Electrical synapses thus work by allowing ionic current to flow passively through the gap junction pores from one neuron to another. The usual source of this current is the potential difference generated locally by the action potential. Without the need for receptors to recognize chemical messengers, signaling at electrical synapses is more rapid than that which occurs across chemical synapses, the predominant kind of junctions between neurons. The relative speed of electrical synapses also allows for many neurons to fire synchronously.

We model the current for this type of synapse as

$$I_{ij}^{elec} = J_{ij}(t)\,(V_i - V_j),$$

where J i j ( t ) is the maximum conductance.

2.4.3 The maximum conductances

As shown in Equations 6, 7 and 10, we model the current going through the synapse connecting neuron j to neuron i as being proportional to the maximum conductance J i j . Because the synaptic transmission through a synapse is affected by the nature of the environment, the maximum conductances are affected by dynamical random variations (we do not take into account such phenomena as plasticity). What kind of models can we consider for these random variations?

The simplest idea is to assume that the maximum conductances are independent diffusion processes with mean $\bar J_{\alpha\gamma}/N_\gamma$ and standard deviation $\sigma_{\alpha\gamma}^J/N_\gamma$, i.e. that they depend only on the populations. The quantities $\bar J_{\alpha\gamma}$, being conductances, are positive. We write the following equation:

$$J_{i\gamma}(t) = \frac{\bar J_{\alpha\gamma}}{N_\gamma} + \frac{\sigma_{\alpha\gamma}^J}{N_\gamma}\,\xi_{i,\gamma}(t),$$

where the $\xi_{i,\gamma}(t)$, $i = 1, \dots, N$, $\gamma = 1, \dots, P$, are NP independent zero mean unit variance white noise processes derived from NP independent standard Brownian motions $B_{i,\gamma}(t)$, i.e. $\xi_{i,\gamma}(t) = \frac{dB_{i,\gamma}(t)}{dt}$, which we also assume to be independent of all the previously defined Brownian motions. The main advantage of this dynamics is its simplicity. Its main disadvantage is that if we increase the noise level $\sigma_{\alpha\gamma}^J$, the probability that $J_{ij}(t)$ becomes negative also increases: this would result in a negative conductance!

One way to alleviate this problem is to modify the dynamics (Equation 11) to a slightly more complicated one whose solutions do not change sign, such as, for instance, the Cox-Ingersoll-Ross model 25 given by:

$$dJ_{ij}(t) = \theta_{\alpha\gamma}\Big(\frac{\bar J_{\alpha\gamma}}{N_\gamma} - J_{ij}(t)\Big)\,dt + \frac{\sigma_{\alpha\gamma}^J}{N_\gamma}\sqrt{J_{ij}(t)}\,dB_{i,\gamma}(t).$$

Note that the right-hand side only depends upon the population $\gamma = p(j)$. Let $J_{ij}(0)$ be the initial condition; it is known 25 that

$$\begin{aligned}
\mathbb{E}\big[J_{ij}(t)\big] &= J_{ij}(0)\,e^{-\theta_{\alpha\gamma} t} + \frac{\bar J_{\alpha\gamma}}{N_\gamma}\,\big(1 - e^{-\theta_{\alpha\gamma} t}\big),\\
\operatorname{Var}\big(J_{ij}(t)\big) &= J_{ij}(0)\,\frac{(\sigma_{\alpha\gamma}^J)^2}{N_\gamma^2\,\theta_{\alpha\gamma}}\,\big(e^{-\theta_{\alpha\gamma} t} - e^{-2\theta_{\alpha\gamma} t}\big) + \frac{\bar J_{\alpha\gamma}\,(\sigma_{\alpha\gamma}^J)^2}{2\,N_\gamma^3\,\theta_{\alpha\gamma}}\,\big(1 - e^{-\theta_{\alpha\gamma} t}\big)^2.
\end{aligned}$$

This shows that if the initial condition $J_{ij}(0)$ is equal to the mean $\bar J_{\alpha\gamma}/N_\gamma$, the mean of the process is constant over time and equal to $\bar J_{\alpha\gamma}/N_\gamma$. Otherwise, if the initial condition $J_{ij}(0)$ is of the same sign as $\bar J_{\alpha\gamma}$, i.e. positive, then the long-term mean is $\bar J_{\alpha\gamma}/N_\gamma$, and the process is guaranteed not to touch 0 if the condition $2 N_\gamma\,\theta_{\alpha\gamma}\,\bar J_{\alpha\gamma} \ge (\sigma_{\alpha\gamma}^J)^2$ holds 25 . Note that the long-term variance is $\frac{\bar J_{\alpha\gamma}\,(\sigma_{\alpha\gamma}^J)^2}{2 N_\gamma^3\,\theta_{\alpha\gamma}}$.
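The following sketch simulates one maximum conductance under Equation 12, first checking the positivity condition above; a 'full truncation' Euler step keeps the discretized process nonnegative (the continuous-time process never touches 0, but a naive discretization can). All parameter values are assumptions chosen for the example.

```python
import numpy as np

N_gamma, theta, Jbar, sigmaJ = 100, 0.5, 1.0, 0.2   # assumed values
mu = Jbar/N_gamma                                   # long-term mean
assert 2*N_gamma*theta*Jbar >= sigmaJ**2            # positivity condition

dt, rng = 0.01, np.random.default_rng(3)
J = mu                                              # start at the mean
for _ in range(int(50.0/dt)):
    dB = np.sqrt(dt)*rng.standard_normal()
    J += theta*(mu - J)*dt + (sigmaJ/N_gamma)*np.sqrt(max(J, 0.0))*dB
    J = max(J, 0.0)                                 # full truncation
```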

2.5 Putting everything together

We are ready to write the equations of a network of Hodgkin-Huxley or FitzHugh-Nagumo neurons and study their properties and their limit, if any, when the number of neurons becomes large. The external current for neuron i has been modelled as the sum of a deterministic part and a stochastic part:

$$I_i^{ext}(t) = I_i(t) + \sigma_{ext}^i\,\frac{dW_t^i}{dt}.$$

We will assume that the deterministic part is the same for all neurons in the same population, $I_i := I_\alpha$, and that the same is true for the variance, $\sigma_{ext}^i := \sigma_{ext}^\alpha$. We further assume that the $W_t^i$ are N independent Brownian motions, independent of all the other Brownian motions defined in the model. In other words,

$$I_i^{ext}(t) = I_\alpha(t) + \sigma_{ext}^\alpha\,\frac{dW_t^i}{dt}, \qquad \alpha = p(i),\ i = 1, \dots, N.$$

We only cover the case of chemical synapses and leave it to the reader to derive the equations in the simpler case of gap junctions.

2.5.1 Network of FitzHugh-Nagumo neurons

We assume that the parameters $a_i$, $b_i$ and $c_i$ in Equation 5 of the adaptation variable $w_i$ of neuron i are only functions of the population $\alpha = p(i)$.

Simple maximum conductance variation. If we assume that the maximum conductances fluctuate according to Equation 11, the state of the ith neuron in a fully connected network of FitzHugh-Nagumo neurons with chemical synapses is determined by the variables $(V^i, w^i, y^i)$ that satisfy the following set of 3N stochastic differential equations:

$$\begin{cases}
dV_t^i = \Big(V_t^i - \dfrac{(V_t^i)^3}{3} - w_t^i + I_\alpha(t)\Big)\,dt - \Big(\displaystyle\sum_{\gamma=1}^P \frac{1}{N_\gamma}\sum_{j,\,p(j)=\gamma} \bar J_{\alpha\gamma}\,(V_t^i - V_{rev}^{\alpha\gamma})\,y_t^j\Big)\,dt\\
\phantom{dV_t^i =} - \displaystyle\sum_{\gamma=1}^P \frac{1}{N_\gamma}\Big(\sum_{j,\,p(j)=\gamma} \sigma_{\alpha\gamma}^J\,(V_t^i - V_{rev}^{\alpha\gamma})\,y_t^j\Big)\,dB_t^{i,\gamma} + \sigma_{ext}^\alpha\,dW_t^i,\\
dw_t^i = c_\alpha\,(V_t^i + a_\alpha - b_\alpha\,w_t^i)\,dt,\\
dy_t^i = \big(a_r^\alpha\,S_\alpha(V_t^i)\,(1 - y_t^i) - a_d^\alpha\,y_t^i\big)\,dt + \sigma_\alpha^y(V_t^i, y_t^i)\,dW_t^{i,y}.
\end{cases}$$

$S_\alpha(V_t^i)$ is given by Equation 8; $\sigma_\alpha^y$ by Equation 9; and $W_t^{i,y}$, $i = 1, \dots, N$, are N independent Brownian processes that model noise in the process of transmitter release into the synaptic clefts.
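To make the structure of Equation 14 concrete, here is a sketch of an Euler-Maruyama integration of the network for a single population (P = 1), in which case each interaction sum reduces to the empirical mean of the $y_t^j$. All parameter values are illustrative assumptions.

```python
import numpy as np

N = 200                                      # network size
a, b, c, I = 0.7, 0.8, 0.08, 0.7             # assumed FitzHugh-Nagumo values
Jbar, sigmaJ, Vrev = 1.0, 0.2, 1.0           # assumed synaptic values
ar, ad, Tmax, lam, VT = 1.0, 0.5, 1.0, 0.2, 2.0
sigma_ext, Gamma, Lam, dt = 0.25, 0.1, 0.5, 0.01
rng = np.random.default_rng(4)

V, w, y = rng.normal(-1.0, 0.1, N), np.zeros(N), np.full(N, 0.1)

def S(V):
    return Tmax/(1 + np.exp(-lam*(V - VT)))

def chi(y):                                  # vectorized, vanishes outside (0,1)
    inside = (y > 0) & (y < 1)
    denom = np.where(inside, 1 - (2*y - 1)**2, 1.0)
    return np.where(inside, Gamma*np.exp(-Lam/denom), 0.0)

for _ in range(20000):                       # 200 time units
    ybar = y.mean()                          # (1/N) * sum over j of y_t^j
    dW = np.sqrt(dt)*rng.standard_normal(N)      # W^i
    dWy = np.sqrt(dt)*rng.standard_normal(N)     # W^{i,y}
    dB = np.sqrt(dt)*rng.standard_normal(N)      # B^{i,gamma}, one per neuron
    dV = (V - V**3/3 - w + I - Jbar*(V - Vrev)*ybar)*dt \
         - sigmaJ*(V - Vrev)*ybar*dB + sigma_ext*dW
    dw = c*(V + a - b*w)*dt
    dy = (ar*S(V)*(1 - y) - ad*y)*dt \
         + np.sqrt(ar*S(V)*(1 - y) + ad*y)*chi(y)*dWy
    V, w, y = V + dV, w + dw, np.clip(y + dy, 0.0, 1.0)
```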

Sign-preserving maximum conductance variation. If we assume that the maximum conductances fluctuate according to Equation 12, the situation is slightly more complicated. In effect, the state space of the neuron i has to be augmented by the P maximum conductances $J_{i\gamma}$, $\gamma = 1, \dots, P$. We obtain

$$\begin{cases}
dV_t^i = \Big(V_t^i - \dfrac{(V_t^i)^3}{3} - w_t^i + I_\alpha(t)\Big)\,dt - \Big(\displaystyle\sum_{\gamma=1}^P \frac{1}{N_\gamma}\sum_{j,\,p(j)=\gamma} J_{ij}(t)\,(V_t^i - V_{rev}^{\alpha\gamma})\,y_t^j\Big)\,dt + \sigma_{ext}^\alpha\,dW_t^i,\\
dw_t^i = c_\alpha\,(V_t^i + a_\alpha - b_\alpha\,w_t^i)\,dt,\\
dy_t^i = \big(a_r^\alpha\,S_\alpha(V_t^i)\,(1 - y_t^i) - a_d^\alpha\,y_t^i\big)\,dt + \sigma_\alpha^y(V_t^i, y_t^i)\,dW_t^{i,y},\\
dJ_{i\gamma}(t) = \theta_{\alpha\gamma}\Big(\dfrac{\bar J_{\alpha\gamma}}{N_\gamma} - J_{i\gamma}(t)\Big)\,dt + \dfrac{\sigma_{\alpha\gamma}^J}{N_\gamma}\sqrt{J_{i\gamma}(t)}\,dB_t^{i,\gamma}, \qquad \gamma = 1, \dots, P,
\end{cases}$$

which is a set of $N(P + 3)$ stochastic differential equations.

2.5.2 Network of Hodgkin-Huxley neurons

We provide a similar description in the case of the Hodgkin-Huxley neurons. We assume that the functions $\rho_x^i$ and $\zeta_x^i$, $x \in \{n, m, h\}$, that appear in Equation 2 only depend upon $\alpha = p(i)$.

Simple maximum conductance variation. If we assume that the maximum conductances fluctuate according to Equation 11, the state of the ith neuron in a fully connected network of Hodgkin-Huxley neurons with chemical synapses is therefore determined by the variables $(V^i, n^i, m^i, h^i, y^i)$ that satisfy the following set of 5N stochastic differential equations:

$$\begin{cases}
C\,dV_t^i = \Big(I_\alpha(t) - \bar g_K\,n_i^4\,(V_t^i - E_K) - \bar g_{Na}\,m_i^3 h_i\,(V_t^i - E_{Na}) - \bar g_L\,(V_t^i - E_L)\Big)\,dt\\
\phantom{C\,dV_t^i =} - \Big(\displaystyle\sum_{\gamma=1}^P \frac{1}{N_\gamma}\sum_{j,\,p(j)=\gamma} \bar J_{\alpha\gamma}\,(V_t^i - V_{rev}^{\alpha\gamma})\,y_t^j\Big)\,dt - \displaystyle\sum_{\gamma=1}^P \frac{1}{N_\gamma}\Big(\sum_{j,\,p(j)=\gamma} \sigma_{\alpha\gamma}^J\,(V_t^i - V_{rev}^{\alpha\gamma})\,y_t^j\Big)\,dB_t^{i,\gamma} + \sigma_{ext}^\alpha\,dW_t^i,\\
dx_t^i = \big(\rho_x^\alpha(V_t^i)\,(1 - x_t^i) - \zeta_x^\alpha(V_t^i)\,x_t^i\big)\,dt + \sigma_x(V_t^i, x_t^i)\,dW_t^{x,i}, \qquad x \in \{n, m, h\},\\
dy_t^i = \big(a_r^\alpha\,S_\alpha(V_t^i)\,(1 - y_t^i) - a_d^\alpha\,y_t^i\big)\,dt + \sigma_\alpha^y(V_t^i, y_t^i)\,dW_t^{i,y}.
\end{cases}$$

Sign-preserving maximum conductance variation. If we assume that the maximum conductances fluctuate according to Equation 12, we use the same idea as in the FitzHugh-Nagumo case of augmenting the state space of each individual neuron and obtain the following set of $(5 + P)N$ stochastic differential equations:

$$\begin{cases}
C\,dV_t^i = \Big(I_\alpha(t) - \bar g_K\,n_i^4\,(V_t^i - E_K) - \bar g_{Na}\,m_i^3 h_i\,(V_t^i - E_{Na}) - \bar g_L\,(V_t^i - E_L)\Big)\,dt\\
\phantom{C\,dV_t^i =} - \Big(\displaystyle\sum_{\gamma=1}^P \frac{1}{N_\gamma}\sum_{j,\,p(j)=\gamma} J_{ij}(t)\,(V_t^i - V_{rev}^{\alpha\gamma})\,y_t^j\Big)\,dt + \sigma_{ext}^\alpha\,dW_t^i,\\
dx_t^i = \big(\rho_x^\alpha(V_t^i)\,(1 - x_t^i) - \zeta_x^\alpha(V_t^i)\,x_t^i\big)\,dt + \sigma_x(V_t^i, x_t^i)\,dW_t^{x,i}, \qquad x \in \{n, m, h\},\\
dy_t^i = \big(a_r^\alpha\,S_\alpha(V_t^i)\,(1 - y_t^i) - a_d^\alpha\,y_t^i\big)\,dt + \sigma_\alpha^y(V_t^i, y_t^i)\,dW_t^{i,y},\\
dJ_{i\gamma}(t) = \theta_{\alpha\gamma}\Big(\dfrac{\bar J_{\alpha\gamma}}{N_\gamma} - J_{i\gamma}(t)\Big)\,dt + \dfrac{\sigma_{\alpha\gamma}^J}{N_\gamma}\sqrt{J_{i\gamma}(t)}\,dB_t^{i,\gamma}, \qquad \gamma = 1, \dots, P.
\end{cases}$$

2.5.3 Partial conclusion

Equations 14 to 17 have a quite similar structure. They are well-posed: given any initial condition and any time T > 0, they have a unique solution on [0, T] which is square-integrable. A little care has to be taken when choosing the initial conditions of some of the variables, i.e. n, m and h, which take values between 0 and 1, and the maximum conductances when one wants to preserve their signs.

In order to prepare the ground for the ‘Mean-field equations for conductance-based models’ section, we explore this common structure a bit more. Let us first consider the case of the simple maximum conductance variations for the FitzHugh-Nagumo network. Looking at Equation 14, we define the three-dimensional state vector of neuron i to be $X_t^i = (V_t^i, w_t^i, y_t^i)$. Let us now define $f_\alpha : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$, $\alpha = 1, \dots, P$, by

$$f_\alpha(t, X_t^i) = \begin{bmatrix}
V_t^i - \frac{(V_t^i)^3}{3} - w_t^i + I_\alpha(t)\\
c_\alpha\,(V_t^i + a_\alpha - b_\alpha\,w_t^i)\\
a_r^\alpha\,S_\alpha(V_t^i)\,(1 - y_t^i) - a_d^\alpha\,y_t^i
\end{bmatrix}.$$

Let us next define $g_\alpha : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^{3 \times 2}$ by

$$g_\alpha(t, X_t^i) = \begin{bmatrix}
\sigma_{ext}^\alpha & 0\\
0 & 0\\
0 & \sigma_\alpha^y(V_t^i, y_t^i)
\end{bmatrix}.$$

It appears that the intrinsic dynamics of the neuron i is conveniently described by the equation

$$dX_t^i = f_\alpha(t, X_t^i)\,dt + g_\alpha(t, X_t^i)\begin{bmatrix} dW_t^i\\ dW_t^{i,y}\end{bmatrix}.$$

We next define the functions $b_{\alpha\gamma} : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^3$, for $\alpha, \gamma = 1, \dots, P$, by

$$b_{\alpha\gamma}(X_t^i, X_t^j) = \begin{bmatrix} -\bar J_{\alpha\gamma}\,(V_t^i - V_{rev}^{\alpha\gamma})\,y_t^j\\ 0\\ 0\end{bmatrix}$$

and the function $\beta_{\alpha\gamma} : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^{3 \times 1}$ by

$$\beta_{\alpha\gamma}(X_t^i, X_t^j) = \begin{bmatrix} -\sigma_{\alpha\gamma}^J\,(V_t^i - V_{rev}^{\alpha\gamma})\,y_t^j\\ 0\\ 0\end{bmatrix}.$$

It appears that the full dynamics of the neuron i, corresponding to Equation 14, can be described compactly by

$$dX_t^i = f_\alpha(t, X_t^i)\,dt + g_\alpha(t, X_t^i)\begin{bmatrix} dW_t^i\\ dW_t^{i,y}\end{bmatrix} + \sum_{\gamma=1}^P \frac{1}{N_\gamma}\sum_{j,\,p(j)=\gamma} b_{\alpha\gamma}(X_t^i, X_t^j)\,dt + \sum_{\gamma=1}^P \frac{1}{N_\gamma}\sum_{j,\,p(j)=\gamma} \beta_{\alpha\gamma}(X_t^i, X_t^j)\,dB_t^{i,\gamma}.$$

Let us now move to the case of the sign-preserving variation of the maximum conductances, still for the FitzHugh-Nagumo neurons. The state of each neuron is now (P+3)-dimensional: we define $X_t^i = (V_t^i, w_t^i, y_t^i, J_{i1}(t), \dots, J_{iP}(t))$. We then define the functions $f_\alpha : \mathbb{R} \times \mathbb{R}^{P+3} \to \mathbb{R}^{P+3}$, $\alpha = 1, \dots, P$, by

$$f_\alpha(t, X_t^i) = \begin{bmatrix}
V_t^i - \frac{(V_t^i)^3}{3} - w_t^i + I_\alpha(t)\\
c_\alpha\,(V_t^i + a_\alpha - b_\alpha\,w_t^i)\\
a_r^\alpha\,S_\alpha(V_t^i)\,(1 - y_t^i) - a_d^\alpha\,y_t^i\\
\theta_{\alpha\gamma}\Big(\frac{\bar J_{\alpha\gamma}}{N_\gamma} - J_{i\gamma}(t)\Big), \quad \gamma = 1, \dots, P
\end{bmatrix}$$

and the functions $g_\alpha : \mathbb{R} \times \mathbb{R}^{P+3} \to \mathbb{R}^{(P+3) \times (P+2)}$ by

$$g_\alpha(t, X_t^i) = \begin{bmatrix}
\sigma_{ext}^\alpha & 0 & 0 & \cdots & 0\\
0 & 0 & 0 & \cdots & 0\\
0 & \sigma_\alpha^y(V_t^i, y_t^i) & 0 & \cdots & 0\\
0 & 0 & \frac{\sigma_{\alpha 1}^J}{N_1}\sqrt{J_{i1}(t)} & \cdots & 0\\
\vdots & \vdots & & \ddots & \vdots\\
0 & 0 & 0 & \cdots & \frac{\sigma_{\alpha P}^J}{N_P}\sqrt{J_{iP}(t)}
\end{bmatrix}.$$

It appears that the intrinsic dynamics of the neuron i isolated from the other neurons is conveniently described by the equation

$$dX_t^i = f_\alpha(t, X_t^i)\,dt + g_\alpha(t, X_t^i)\begin{bmatrix} dW_t^i\\ dW_t^{i,y}\\ dB_t^{i,1}\\ \vdots\\ dB_t^{i,P}\end{bmatrix}.$$

Let us finally define the functions $b_{\alpha\gamma} : \mathbb{R}^{P+3} \times \mathbb{R}^{P+3} \to \mathbb{R}^{P+3}$, for $\alpha, \gamma = 1, \dots, P$, by

$$b_{\alpha\gamma}(X_t^i, X_t^j) = \begin{bmatrix} -J_{i\gamma}(t)\,(V_t^i - V_{rev}^{\alpha\gamma})\,y_t^j\\ 0\\ \vdots\\ 0\end{bmatrix}.$$

It appears that the full dynamics of the neuron i, corresponding to Equation 15, can be described compactly by

$$dX_t^i = f_\alpha(t, X_t^i)\,dt + g_\alpha(t, X_t^i)\begin{bmatrix} dW_t^i\\ dW_t^{i,y}\\ dB_t^{i,1}\\ \vdots\\ dB_t^{i,P}\end{bmatrix} + \sum_{\gamma=1}^P \frac{1}{N_\gamma}\sum_{j,\,p(j)=\gamma} b_{\alpha\gamma}(X_t^i, X_t^j)\,dt.$$

We let the reader apply the same machinery to the network of Hodgkin-Huxley neurons.

Let d be the positive integer equal to the dimension of the state space in Equation 18 (d = 3) or Equation 19 (d = 3 + P), or in the corresponding cases for the Hodgkin-Huxley model (d = 5 and d = 5 + P). The reader will easily check that the following four assumptions hold for both models:

(H1) Locally Lipschitz dynamics: For all $\alpha \in \{1, \dots, P\}$, the functions $f_\alpha$ and $g_\alpha$ are uniformly locally Lipschitz continuous with respect to the second variable. In detail, for all U > 0, there exists $K_U > 0$ independent of $t \in [0, T]$ such that for all $x, y \in B_U^d$, the ball of $\mathbb{R}^d$ of radius U:

$$\|f_\alpha(t, x) - f_\alpha(t, y)\| + \|g_\alpha(t, x) - g_\alpha(t, y)\| \le K_U\,\|x - y\|, \qquad \alpha = 1, \dots, P.$$

(H2) Locally Lipschitz interactions: For all $\alpha, \gamma \in \{1, \dots, P\}$, the functions $b_{\alpha\gamma}$ and $\beta_{\alpha\gamma}$ are locally Lipschitz continuous. In detail, for all U > 0, there exists $L_U > 0$ such that for all $x, y, x', y' \in B_U^d$, we have:

$$\|b_{\alpha\gamma}(x, y) - b_{\alpha\gamma}(x', y')\| + \|\beta_{\alpha\gamma}(x, y) - \beta_{\alpha\gamma}(x', y')\| \le L_U\,\big(\|x - x'\| + \|y - y'\|\big).$$

(H3) Linear growth of the interactions: There exists a $\tilde K > 0$ such that

$$\max\big(\|b_{\alpha\gamma}(x, z)\|^2, \|\beta_{\alpha\gamma}(x, z)\|^2\big) \le \tilde K\,\big(1 + \|x\|^2\big).$$

(H4) Monotone growth of the dynamics: We assume that $f_\alpha$ and $g_\alpha$ satisfy the following monotone growth condition for all $\alpha = 1, \dots, P$:

$$x^T f_\alpha(t, x) + \frac{1}{2}\,\|g_\alpha(t, x)\|^2 \le K\,\big(1 + \|x\|^2\big).$$

These assumptions are central to the proofs of Theorems 2 and 4.

They imply the following proposition stating that the system of stochastic differential equations (Equation 19) is well-posed:

Proposition 1 Let T > 0 be a fixed time. If $|I_\alpha(t)| \le I_m$ on $[0, T]$ for $\alpha = 1, \dots, P$, then Equations 18 and 19, together with an initial condition $X_0^i \in L^2(\mathbb{R}^d)$, $i = 1, \dots, N$, of square-integrable random variables, have a unique strong solution which belongs to $L^2([0, T]; \mathbb{R}^{dN})$.

Proof The proof uses Theorem 3.5 in chapter 2 of 26 , whose conditions are easily shown to follow from hypotheses (H1) to (H4). □

The case N = 1 implies that Equations 2 and 5, describing the stochastic FitzHugh-Nagumo and Hodgkin-Huxley neurons, are well-posed.

We are interested in the behavior of the solutions of these equations as the number of neurons tends to infinity. This problem has been long-standing in neuroscience, arousing the interest of many researchers in different domains. We discuss the different approaches developed in the field in the next subsection.

2.6 Mean-field methods in computational neuroscience: a quick overview

Obtaining the equations of evolution of the effective mean-field from microscopic dynamics is a very complex problem. Many approximate solutions have been provided, mostly based on the statistical physics literature.

Many models describing the emergent behavior arising from the interaction of neurons in large-scale networks have relied on continuum limits ever since the seminal work of Amari, and Wilson and Cowan 27 28 29 30 . Such models represent the activity of the network by macroscopic variables, e.g. the population-averaged firing rate, which are generally assumed to be deterministic. When the spatial dimension is not taken into account in the equations, they are referred to as neural masses, otherwise as neural fields. The equations that relate these variables are ordinary differential equations for neural masses and integrodifferential equations for neural fields. In the second case, they fall in a category studied in 31 or can be seen as ordinary differential equations defined on specific functional spaces 32 . Many analytical and numerical results have been derived from these equations and related to cortical phenomena, for instance, for the problem of spatio-temporal pattern formation in spatially extended models (see, e.g. 33 34 35 36 ). The use of bifurcation theory has also proven to be quite powerful 37 38 . Despite all its qualities, this approach implicitly makes the assumption that the effect of noise vanishes at the mesoscopic and macroscopic scales and hence that the behavior of such populations of neurons is deterministic.

A different approach has been to study regimes where the activity is uncorrelated. A number of computational studies on the integrate-and-fire neuron showed that under certain conditions, neurons in large assemblies end up firing asynchronously, producing null correlations 39 40 41 . In these regimes, the correlations in the firing activity decrease towards zero in the limit where the number of neurons tends to infinity. The emergent global activity of the population in this limit is deterministic and evolves according to a mean-field firing rate equation. However, according to the theory, these states only exist in the limit where the number of neurons is infinite, thereby raising the question of how the finiteness of the number of neurons impacts the existence and behavior of asynchronous states. The study of finite-size effects for asynchronous states is generally not reduced to the study of mean firing rates and can include higher order moments of firing activity 42 43 44 . In order to go beyond asynchronous states, take into account the stochastic nature of the firing and understand how this activity scales as the network size increases, different approaches have been developed, such as the population density method and related approaches 45 . Most of these approaches involve expansions in terms of the moments of the corresponding random variables, and the moment hierarchy needs to be truncated, which is not a simple task and can raise a number of difficult technical issues (see, e.g. 46 ).

However, increasingly many researchers now believe that the different intrinsic or extrinsic noise sources are part of the neuronal signal: rather than being a pure disturbing effect related to the intrinsically noisy biological substrate of the neural system, noise may convey information and be an important principle of brain function 47 . At the level of a single cell, various studies have shown that the firing statistics are highly stochastic, with probability distributions close to Poisson distributions 48 , and several computational studies confirmed the stochastic nature of single-cell firing 49 50 51 . How the variability at the single-neuron level affects the dynamics of cortical networks is less well established. Theoretically, the interaction of a large number of neurons that fire stochastic spike trains can naturally produce correlations in the firing activity of the population. For instance, power laws in the scaling of avalanche-size distributions have been studied both via models and experiments 52 53 54 55 . In these regimes, randomness plays a central role.

In order to study the effect of the stochastic nature of the firing in large networks, many authors strived to introduce randomness in a tractable form. Some of the models proposed in the area are based on the definition of a Markov chain governing the firing dynamics of the neurons in the network, where the transition probability satisfies a differential equation, the master equation. Seminal works applying such models to neuroscience date back to the early 1990s, and they have recently been developed further by several authors 43 56 57 58 59 . Most of these approaches are proved correct in some parameter regions using statistical physics tools such as path integrals and Van Kampen expansions, and their analysis often involves a moment expansion and truncation. Using a different approach, a static mean-field study of multi-population network activity was developed by Treves in 60 . This author did not consider external inputs but incorporated dynamical synaptic currents and adaptation effects. His analysis was completed in 39 , where the authors proved, using a Fokker-Planck formalism, the stability of an asynchronous state in the network. Later on, Gerstner in 61 built a new approach to characterize the mean-field dynamics for the spike response model, via the introduction of suitable kernels propagating the collective activity of a neural population in time. Another approach is based on the use of large deviation techniques to study large networks of neurons 62 . This approach is inspired by the work on spin-glass dynamics, e.g. 63 . It takes into account the randomness of the maximum conductances and the noise at various levels. The individual neuron models are rate models, hence already mean-field models. The mean-field equations are not rigorously derived from the network equations in the limit of an infinite number of neurons, but they are shown to have a unique, non-Markov solution, i.e. with infinite memory, for each initial condition.

Brunel and Hakim considered a network of integrate-and-fire neurons connected with constant maximum conductances 41 . In the case of sparse connectivity, stationarity, and a regime where individual neurons emit spikes at a low rate, they were able to analytically study the dynamics of the network and to show that it exhibits a sharp transition between a stationary regime and a regime of fast, weakly synchronized collective oscillations. Their approach was based on a perturbative analysis of the Fokker-Planck equation. A similar formalism was used in 44 which, when complemented with self-consistency equations, resulted in a dynamical description of the mean-field equations of the network and was extended to a multi-population network. Finally, Chizhov and Graham 64 have recently proposed a new method based on a population density approach allowing one to characterize the mesoscopic behavior of neuron populations in conductance-based models.

Let us finish this very short and incomplete survey by mentioning the work of Sompolinsky and colleagues. Assuming a linear intrinsic dynamics for the individual neurons described by a rate model and random centered maximum conductances for the connections, they showed 65 66 that the system undergoes a phase transition between two different stationary regimes: a ‘trivial’ regime where the system has a unique null and uncorrelated solution, and a ‘chaotic’ regime in which the firing rate converges towards a non-zero value and correlations stabilize on a specific curve which they were able to characterize.

All these approaches have in common that they are not based on the most widely accepted microscopic dynamics (such as the ones represented by the Hodgkin-Huxley equations or some of their simplifications) and/or involve approximations or moment closures. Our approach is distinct in that it aims at deriving rigorously and without approximations the mean-field equations of populations of neurons whose individual neurons are described by biological, if not correct at least plausible, representations. The price to pay is the complexity of the resulting mean-field equations. The specific study of their solutions is therefore a crucial step, which will be developed in forthcoming papers.

3 Mean-field equations for conductance-based models

In this section, we give a general formulation of the neural network models introduced in the previous section and use it in a probabilistic framework to address the problem of the asymptotic behavior of the networks as the number of neurons N goes to infinity. In other words, we derive the limit in law of N interacting neurons, each of which satisfies a nonlinear stochastic differential equation of the type described in the ‘Spiking conductance-based models’ section. In the remainder of this section, we work in a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ satisfying the usual conditions and endowed with a filtration $(\mathcal{F}_t)_t$.

3.1 Setting of the problem

We recall that the neurons in the network fall into P different populations. The populations differ through the intrinsic properties of their neurons and the input they receive. We assume that the number of neurons in each population $\alpha \in \{1, \dots, P\}$, denoted by $N_\alpha$, increases as the network size increases, and moreover that the asymptotic proportion of neurons in population α is nontrivial, i.e. $N_\alpha / N \to \lambda_\alpha \in (0, 1)$ as N goes to infinity.

We use the notations introduced in the ‘Partial conclusion’ section, and the reader should refer to this section to give a concrete meaning to the rather abstract (but required by the mathematics) setting that we now establish.

Each neuron i in population α is described by a state vector noted $X_t^{i,N} \in \mathbb{R}^d$ and has an intrinsic dynamics governed by a drift function $f_\alpha : \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^d$ and a diffusion matrix $g_\alpha : \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$, assumed uniformly locally Lipschitz continuous with respect to the second variable. For a neuron i in population α, the dynamics of the d-dimensional process $(X_t^i)$ governing the evolution of the membrane potential and additional variables (adaptation, ionic concentrations), when there is no interaction, is governed by the equation:

$$dX_t^{i,N} = f_\alpha(t, X_t^{i,N})\,dt + g_\alpha(t, X_t^{i,N})\,dW_t^i.$$

Moreover, we assume, as it is the case for all the models described in the ‘Spiking conductance-based models’ section, that the solutions of this stochastic differential equation exist for all time.

When included in the network, these processes interact with those of all the other neurons through a set of continuous functions that only depend on the population $\alpha = p(i)$ that neuron i belongs to and on the populations γ of the presynaptic neurons. These functions, $b_{\alpha\gamma}(x, y) : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$, are scaled by the coefficients $1/N_\gamma$, so the maximal interaction is independent of the size of the network (in particular, neither diverging nor vanishing as N goes to infinity).

As discussed in the ‘Spiking conductance-based models’ section, due to the stochastic nature of ionic currents and the noise effects linked with the discrete nature of charge carriers, the maximum conductances are perturbed dynamically through the $N \times P$ independent Brownian motions $B_t^{i,\alpha}$ of dimension δ that were previously introduced. The interaction between the neurons and the noise term is represented by the function $\beta_{\alpha\gamma} : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^{d \times \delta}$.

In order to introduce the stochastic current and stochastic maximum conductances, we define two independent sequences of independent m- and δ-dimensional Brownian motions, noted $(W_t^i)_{i \in \mathbb{N}}$ and $(B_t^{i\alpha})_{i \in \mathbb{N},\, \alpha \in \{1, \dots, P\}}$, which are adapted to the filtration $(\mathcal{F}_t)_t$.

The resulting equation for the ith neuron, including the noisy interactions, reads:

$$dX_t^{i,N} = f_\alpha(t, X_t^{i,N})\,dt + \sum_{\gamma=1}^P \frac{1}{N_\gamma}\sum_{j,\,p(j)=\gamma} b_{\alpha\gamma}(X_t^{i,N}, X_t^{j,N})\,dt + g_\alpha(t, X_t^{i,N})\,dW_t^i + \sum_{\gamma=1}^P \frac{1}{N_\gamma}\sum_{j,\,p(j)=\gamma} \beta_{\alpha\gamma}(X_t^{i,N}, X_t^{j,N})\,dB_t^{i\gamma}.$$

Note that this implies that X i , N and X j , N have the same law whenever p ( i ) = p ( j ) , given identically distributed initial conditions.

These equations are similar to the equations studied in another context by a number of mathematicians, among whom are McKean, Tanaka and Sznitman (see the ‘Introduction’ section), in that they involve a very large number of particles (here, particles are neurons) in interaction. Motivated by the study of the McKean-Vlasov equations, these authors studied special cases of equations (Equation 21). This theory, referred to as kinetic theory, is chiefly concerned with thermodynamic questions. It shows that in the limit where the number of particles tends to infinity, provided that the initial state of each particle is drawn independently from the same law, each particle behaves independently and has the same law, which is given by an implicit stochastic equation. These authors also evaluated the fluctuations around this limit under diverse conditions 1 2 6 7 9 10 11 . Some extensions to biological problems where the drift term is not globally Lipschitz but satisfies the monotone growth condition (Equation 20) were studied in 67 . This is the approach we undertake here.

3.2 Convergence of the network equations to the mean-field equations and properties of those equations

We now show that the same type of phenomena that were predicted for systems of interacting particles happen in networks of neurons. In detail, we prove that in the limit of large populations, the network displays the property of propagation of chaos. This means that any finite number of diffusion processes become independent, and all neurons belonging to a given population α have asymptotically the same probability distribution, which is the solution of the following mean-field equation:

$$d\bar X_t^\alpha = f_\alpha(t, \bar X_t^\alpha)\,dt + \sum_{\gamma=1}^P \mathbb{E}_{\bar Z}\big[b_{\alpha\gamma}(\bar X_t^\alpha, \bar Z_t^\gamma)\big]\,dt + g_\alpha(t, \bar X_t^\alpha)\,dW_t^\alpha + \sum_{\gamma=1}^P \mathbb{E}_{\bar Z}\big[\beta_{\alpha\gamma}(\bar X_t^\alpha, \bar Z_t^\gamma)\big]\,dB_t^{\alpha\gamma}, \qquad \alpha = 1, \dots, P,$$

where $\bar Z$ is a process independent of $\bar X$ that has the same law, and $\mathbb{E}_{\bar Z}$ denotes the expectation under the law of $\bar Z$. In other words, the mean-field equation can be written, denoting by $dm_t^\gamma(z)$ the law of $\bar Z_t^\gamma$ (hence, also of $\bar X_t^\gamma$):

$$d\bar X_t^\alpha = f_\alpha(t, \bar X_t^\alpha)\,dt + \sum_{\gamma=1}^P \Big(\int_{\mathbb{R}^d} b_{\alpha\gamma}(\bar X_t^\alpha, z)\,dm_t^\gamma(z)\Big)\,dt + g_\alpha(t, \bar X_t^\alpha)\,dW_t^\alpha + \sum_{\gamma=1}^P \Big(\int_{\mathbb{R}^d} \beta_{\alpha\gamma}(\bar X_t^\alpha, z)\,dm_t^\gamma(z)\Big)\,dB_t^{\alpha\gamma}.$$

In these equations, $W_t^\alpha$, $\alpha = 1, \dots, P$, are independent, standard, m-dimensional Brownian motions. Let us point out that the right-hand sides of Equations 22 and 23 depend on the law of the solution; this fact is sometimes referred to as ‘the process $\bar X$ is attracted by its own law’. This equation is also classically written as the McKean-Vlasov-Fokker-Planck equation on the probability distribution p of the solution. This equation, which we use in the ‘Numerical simulations’ section, can easily be derived from Equation 22. Let $p_\alpha(t, z)$, $z = (z_1, \dots, z_d)$, be the probability density at time t of the solution $\bar X_t^\alpha$ to Equation 22 (this is equivalent to $dm_t^\alpha(z) = p_\alpha(t, z)\,dz$); then we have:

$$\partial_t p_\alpha(t, z) = -\operatorname{div}_z\Big(\Big(f_\alpha(t, z) + \sum_{\gamma=1}^P \int b_{\alpha\gamma}(z, y)\,p_\gamma(t, y)\,dy\Big)\,p_\alpha(t, z)\Big) + \frac{1}{2}\sum_{i,j=1}^d \frac{\partial^2}{\partial z_i\,\partial z_j}\big(D_{ij}^\alpha(z)\,p_\alpha(t, z)\big), \qquad \alpha = 1, \dots, P,$$

where the $d \times d$ matrix $D^\alpha$ is given by

$$D^\alpha(z) = \sum_{\gamma=1}^P \mathbb{E}_Z\big[\beta_{\alpha\gamma}(z, Z)\big]\,\mathbb{E}_Z\big[\beta_{\alpha\gamma}(z, Z)\big]^T + g_\alpha(t, z)\,g_\alpha(t, z)^T$$

with

$$\mathbb{E}_Z\big[\beta_{\alpha\gamma}(z, Z)\big] = \int \beta_{\alpha\gamma}(z, y)\,p_\gamma(t, y)\,dy.$$

The P equations (Equation 24) yield the probability densities of the solutions $\bar X_t^\alpha$ of the mean-field equations (Equation 22). Because of the propagation of chaos result, the $\bar X_t^\alpha$ are statistically independent, but their probability functions are clearly functionally dependent.
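As an illustration of how Equation 24 can be handled numerically, the sketch below solves a toy one-dimensional, one-population instance (scalar state, constant diffusion coefficient, no noisy-interaction contribution to D) with an explicit conservative finite-difference scheme. The drift f, interaction kernel b and diffusion coefficient D are assumptions chosen for the example, not the models of the previous sections.

```python
import numpy as np

nz, dt = 401, 1e-4
z = np.linspace(-5.0, 5.0, nz)
dz = z[1] - z[0]

def f(t, x): return x - x**3/3           # toy intrinsic drift
def b(x, y): return -(x - y)             # toy interaction kernel
D = 0.1                                  # toy (constant) diffusion coefficient

p = np.exp(-z**2/2)
p /= p.sum()*dz                          # normalized Gaussian initial density
for k in range(2000):
    t = k*dt
    # nonlocal drift: f(t, z) + integral of b(z, y) p(t, y) dy
    drift = f(t, z) + (b(z[:, None], z[None, :])*p[None, :]).sum(axis=1)*dz
    flux = drift*p - 0.5*D*np.gradient(p, dz)   # advective + diffusive flux
    flux[0] = flux[-1] = 0.0                    # reflecting boundaries
    p -= dt*np.gradient(flux, dz)
    p = np.maximum(p, 0.0)
    p /= p.sum()*dz                             # keep p a probability density
```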

We now spend some time on notations in order to obtain a somewhat more compact form of Equation 22. We define $\bar X_t$ to be the dP-dimensional process $\bar X_t = (\bar X_t^\alpha;\ \alpha = 1, \dots, P)$. We similarly define f, g, b and β as the concatenations of the functions $f_\alpha$, $g_\alpha$, $b_{\alpha\gamma}$ and $\beta_{\alpha\gamma}$, respectively. In detail, $f(t, \bar X_t) = (f_\alpha(t, \bar X_t^\alpha);\ \alpha = 1, \dots, P)$, $b(X, Y) = (\sum_{\gamma=1}^P b_{\alpha\gamma}(X^\alpha, Y^\gamma);\ \alpha = 1, \dots, P)$ and $W = (W^\alpha;\ \alpha = 1, \dots, P)$. The term of noisy synaptic interactions requires a more careful treatment. We define $\beta = (\beta_{\alpha\gamma};\ \alpha, \gamma = 1, \dots, P) \in (\mathbb{R}^{d \times \delta})^{P \times P}$ and $B = (B^{\alpha\gamma};\ \alpha, \gamma = 1, \dots, P) \in (\mathbb{R}^\delta)^{P \times P}$, and the product ⊙ of an element $M \in (\mathbb{R}^{d \times \delta})^{P \times P}$ and an element $X \in (\mathbb{R}^\delta)^{P \times P}$ as the element of $(\mathbb{R}^d)^P$:

$$(M \odot X)^\alpha = \sum_\gamma M^{\alpha\gamma}\,X^{\alpha\gamma}.$$

We obtain the equivalent compact mean-field equation:

$$d\bar X_t = \big(f(t, \bar X_t) + \mathbb{E}_{\bar Z}\big[b(\bar X_t, \bar Z_t)\big]\big)\,dt + g(t, \bar X_t)\,dW_t + \mathbb{E}_{\bar Z}\big[\beta(\bar X_t, \bar Z_t)\big] \odot dB_t.$$

Equations 22 and 24 are implicit equations on the law of $\bar X_t$.
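Because these equations are implicit in the law of the solution, a natural numerical strategy is to replace the expectations $\mathbb{E}_{\bar Z}[\cdot]$ by empirical averages over a large number of i.i.d. particles. The sketch below does this for a toy one-dimensional, one-population case, omitting the noisy interaction term β for brevity; f, g and b are assumptions chosen for the example.

```python
import numpy as np

def f(t, x): return x - x**3/3           # toy intrinsic drift
def g(t, x): return 0.25                 # toy intrinsic diffusion
def b(x, z): return -(x - z)             # toy interaction kernel

M, dt = 1000, 0.01                       # number of particles, time step
rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, M)              # i.i.d. initial conditions

for k in range(int(10.0/dt)):
    t = k*dt
    # empirical counterpart of E_Z[b(X_i, Z_t)], one value per particle
    interaction = b(X[:, None], X[None, :]).mean(axis=1)
    X += (f(t, X) + interaction)*dt + g(t, X)*np.sqrt(dt)*rng.standard_normal(M)
# The empirical distribution of X now approximates the mean-field law.
```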

We now state the main theoretical results of the paper as two theorems. The first theorem is about the well-posedness of the mean-field equation (Equation 22). The second is about the convergence of the solutions of the network equations to those of the mean-field equations. Since the proof of the second theorem involves similar ideas to those used in the proof of the first, it is given in the Appendix.

Theorem 2 Under assumptions (H1) to (H4), there exists a unique solution to the mean-field equation (Equation 22) on [ 0 , T ] for any T > 0 .

Let us denote by M(C) the set of probability distributions on C, the set of continuous functions [0, T] → (ℝ^d)^P, and by M_2(C) the space of square-integrable processes. Let (W^α; α = 1, …, P) (respectively, (B^{αγ}; α, γ = 1, …, P)) also be a family of P (respectively, P²) independent, m-dimensional (respectively, δ-dimensional), adapted standard Brownian motions on (Ω, F, P). Let us also denote by X_0 ∈ M((ℝ^d)^P) the (random) initial condition of the mean-field equation. We introduce the map Φ acting on stochastic processes and defined by:

$$ \Phi : \begin{cases} \mathcal{M}(\mathcal{C}) \to \mathcal{M}(\mathcal{C}) \\ X \mapsto Y = \bigl( Y_t = \{ Y^{\alpha}_t,\ \alpha = 1,\ldots,P \} \bigr)_t \end{cases} $$
with
$$ Y^{\alpha}_t = X^{\alpha}_0 + \int_0^t \biggl( f_{\alpha}(s,X^{\alpha}_s) + \sum_{\gamma=1}^{P} \mathbb{E}_{Z}\bigl[b_{\alpha\gamma}(X^{\alpha}_s,Z^{\gamma}_s)\bigr] \biggr)\,ds + \int_0^t g_{\alpha}(s,X^{\alpha}_s)\,dW^{\alpha}_s + \sum_{\gamma=1}^{P} \int_0^t \mathbb{E}_{Z}\bigl[\beta_{\alpha\gamma}(X^{\alpha}_s,Z^{\gamma}_s)\bigr]\,dB^{\alpha\gamma}_s, \quad \alpha = 1,\ldots,P. $$

In the previous formula, we have introduced a process Z with the same law as, and independent of, X. There is a trivial identification between the solutions of the mean-field equation (Equation 22) and the fixed points of the map Φ: any fixed point of Φ provides a solution of Equation 22 and, conversely, any solution of Equation 22 is a fixed point of Φ.

The following lemma is useful to prove the theorem:

Lemma 3 Let X_0 ∈ L²((ℝ^d)^P) be a square-integrable random variable, and let X be a solution of the mean-field equation (Equation 22) with initial condition X_0. Under assumptions (H3) and (H4), there exists a constant C(T) > 0, depending on the parameters of the system and on the horizon T, such that:

$$ \mathbb{E}\bigl[\|X_t\|^2\bigr] \le C(T), \quad \forall t \in [0,T]. $$

Proof Using the Itô formula for ‖X_t‖², we have:

$$ \|X_t\|^2 = \|X_0\|^2 + 2\int_0^t \Bigl( X_s^{T} f(s,X_s) + \tfrac{1}{2}\|g(s,X_s)\|^2 + X_s^{T}\,\mathbb{E}_{Z}\bigl[b(X_s,Z_s)\bigr] + \tfrac{1}{2}\bigl\|\mathbb{E}_{Z}\bigl[\beta(X_s,Z_s)\bigr]\bigr\|^2 \Bigr)\,ds + N_t, $$

where N t is a stochastic integral, hence with a null expectation, E [ N t ] = 0 .

This expression involves the term x^T b(x, z). Because of assumption (H3), we clearly have:

$$ \bigl|x^{T} b(x,z)\bigr| \le \|x\|\,\bigl\|b(x,z)\bigr\| \le \|x\|\sqrt{\tilde{K}\bigl(1+\|x\|^2\bigr)} \le \sqrt{\tilde{K}}\,\bigl(1+\|x\|^2\bigr). $$

It also involves the term x^T f(t, x) + ½‖g(t, x)‖², which, because of assumption (H4), is upper-bounded by K(1 + ‖x‖²). Finally, assumption (H3) again allows us to upper-bound the term ½‖E_Z[β(X_s, Z_s)]‖² by (K̃/2)(1 + ‖X_s‖²).

Putting these bounds together and taking expectations, we obtain

$$ \mathbb{E}\bigl[1+\|X_t\|^2\bigr] \le \mathbb{E}\bigl[1+\|X_0\|^2\bigr] + 2\Bigl(K + \tfrac{\tilde{K}}{2} + \sqrt{\tilde{K}}\Bigr)\int_0^t \mathbb{E}\bigl[1+\|X_s\|^2\bigr]\,ds. $$

Using Gronwall’s inequality, we deduce the L² boundedness of the solutions of the mean-field equations. □

This lemma puts us in a position to prove the existence and uniqueness theorem:

Proof We start by showing the existence of solutions and then prove the uniqueness property. We recall that, by Lemma 3, the solutions all have a bounded second-order moment.

Existence. Let X^0 = (X_t^0 = {X_t^{0,α}, α = 1, …, P}) ∈ M(C) be a given stochastic process, and define by induction the sequence (X^k)_{k≥0} of processes with law in M(C) through X^{k+1} = Φ(X^k). Define also a sequence of processes Z^k, k ≥ 0, independent of the sequence of processes X^k and having the same law; we refer to this as ‘X and Z i.i.d.’ below. We stop the processes at the time τ_U^k, the first hitting time of the norm of X^k of the constant value U. For convenience, we commit an abuse of notation in the proof and denote X_t^k = X_{t∧τ_U^k}^k. This implies that X_t^k belongs to B_U^d, the ball of radius U centered at the origin in ℝ^d, for all times t ∈ [0, T].

Using the notations introduced for Equation 25, we decompose the difference X_t^{k+1} − X_t^k as follows:

$$ X^{k+1}_t - X^{k}_t = \underbrace{\int_0^t \bigl( f(s,X^{k}_s) - f(s,X^{k-1}_s) \bigr)\,ds}_{A_t} + \underbrace{\int_0^t \mathbb{E}_{Z}\bigl[ b(X^{k}_s,Z^{k}_s) - b(X^{k-1}_s,Z^{k-1}_s) \bigr]\,ds}_{B_t} + \underbrace{\int_0^t \bigl( g(s,X^{k}_s) - g(s,X^{k-1}_s) \bigr)\,dW_s}_{C_t} + \underbrace{\int_0^t \mathbb{E}_{Z}\bigl[ \beta(X^{k}_s,Z^{k}_s) - \beta(X^{k-1}_s,Z^{k-1}_s) \bigr] \odot dB_s}_{D_t} $$

and find an upper bound for M_t^k := E[sup_{s≤t} ‖X_s^{k+1} − X_s^k‖²] by finding upper bounds for the corresponding norms of the four terms A_t, B_t, C_t and D_t. Applying the discrete Cauchy-Schwarz inequality, we have:

$$ \|X^{k+1}_t - X^{k}_t\|^2 \le 4\bigl( \|A_t\|^2 + \|B_t\|^2 + \|C_t\|^2 + \|D_t\|^2 \bigr) $$

and treat each term separately. The upper bounds for the first two terms are obtained using the Cauchy-Schwarz inequality, and those of the last two using the Burkholder-Davis-Gundy martingale moment inequality.

The term A_t is easily controlled using the Cauchy-Schwarz inequality and assumption (H1):

$$ \|A_s\|^2 \le K_U^2\,T \int_0^s \|X^{k}_u - X^{k-1}_u\|^2\,du. $$

Taking the sup of both sides of the last inequality, we obtain

$$ \sup_{s\le t}\|A_s\|^2 \le K_U^2\,T \int_0^t \|X^{k}_s - X^{k-1}_s\|^2\,ds \le K_U^2\,T \int_0^t \sup_{u\le s}\|X^{k}_u - X^{k-1}_u\|^2\,ds, $$

from which it follows that

$$ \mathbb{E}\Bigl[\sup_{s\le t}\|A_s\|^2\Bigr] \le K_U^2\,T \int_0^t \mathbb{E}\Bigl[\sup_{u\le s}\|X^{k}_u - X^{k-1}_u\|^2\Bigr]\,ds. $$

The term B_t is controlled using the Cauchy-Schwarz inequality, assumption (H2), and the fact that the processes X and Z are independent with the same law:

$$ \|B_s\|^2 \le 2\,T L_U^2 \int_0^s \Bigl( \|X^{k}_u - X^{k-1}_u\|^2 + \mathbb{E}\bigl[\|X^{k}_u - X^{k-1}_u\|^2\bigr] \Bigr)\,du. $$

Taking the sup of both sides of the last inequality, we obtain

$$ \sup_{s\le t}\|B_s\|^2 \le 2\,T L_U^2 \int_0^t \Bigl( \sup_{u\le s}\|X^{k}_u - X^{k-1}_u\|^2 + \mathbb{E}\Bigl[\sup_{u\le s}\|X^{k}_u - X^{k-1}_u\|^2\Bigr] \Bigr)\,ds, $$

from which it follows that

$$ \mathbb{E}\Bigl[\sup_{s\le t}\|B_s\|^2\Bigr] \le 4\,T L_U^2 \int_0^t \mathbb{E}\Bigl[\sup_{u\le s}\|X^{k}_u - X^{k-1}_u\|^2\Bigr]\,ds. $$

The term C t is controlled using the fact that it is a martingale and applying the Burkholder-Davis-Gundy martingale moment inequality and assumption (H1):

$$ \mathbb{E}\Bigl[\sup_{s\le t}\|C_s\|^2\Bigr] \le 4\,K_U^2 \int_0^t \mathbb{E}\Bigl[\sup_{u\le s}\|X^{k}_u - X^{k-1}_u\|^2\Bigr]\,ds. $$

The term D t is also controlled using the fact that it is a martingale and applying the Burkholder-Davis-Gundy martingale moment inequality and assumption (H2):

$$ \mathbb{E}\Bigl[\sup_{s\le t}\|D_s\|^2\Bigr] \le 16\,L_U^2 \int_0^t \mathbb{E}\Bigl[\sup_{u\le s}\|X^{k}_u - X^{k-1}_u\|^2\Bigr]\,ds. $$

Putting all of these together, we get:

$$ \mathbb{E}\Bigl[\sup_{s\le t}\|X^{k+1}_s - X^{k}_s\|^2\Bigr] \le 4(T+4)\bigl(K_U^2 + 4L_U^2\bigr) \int_0^t \mathbb{E}\Bigl[\sup_{u\le s}\|X^{k}_u - X^{k-1}_u\|^2\Bigr]\,ds. \tag{26} $$

From the relation M_t^k ≤ K′ ∫_0^t M_s^{k−1} ds, with K′ = 4(T + 4)(K_U² + 4L_U²), we get by an immediate recursion:

$$ M^{k}_t \le (K')^{k} \int_0^t \int_0^{s_1} \cdots \int_0^{s_{k-1}} M^{0}_{s_k}\,ds_1\cdots ds_k \le \frac{(K')^{k}\,t^{k}}{k!}\,M^{0}_T \tag{27} $$

and M_T^0 is finite because the processes are bounded. The Bienaymé-Tchebychev inequality and Equation 27 now give

$$ \mathbb{P}\Bigl( \sup_{s\le t}\|X^{k+1}_s - X^{k}_s\|^2 > \frac{1}{2^{2(k+1)}} \Bigr) \le \frac{4\,(4K' t)^{k}}{k!}\,M^{0}_T $$

and this upper bound is the term of a convergent series. The Borel-Cantelli lemma then implies that for almost any ω ∈ Ω, there exists a positive integer k_0(ω) (ω denotes an element of the probability space Ω) such that

$$ \sup_{s\le t}\|X^{k+1}_s - X^{k}_s\|^2 \le \frac{1}{2^{2(k+1)}}, \quad \forall k \ge k_0(\omega) $$

and hence

$$ \sup_{s\le t}\|X^{k+1}_s - X^{k}_s\| \le \frac{1}{2^{k+1}}, \quad \forall k \ge k_0(\omega). $$

It follows that with probability 1, the partial sums:

$$ X^{0}_t + \sum_{k=0}^{n-1} \bigl( X^{k+1}_t - X^{k}_t \bigr) = X^{n}_t $$

are uniformly (in t ∈ [0, T]) convergent. Denote the limit thus defined by X̄_t. It is clearly continuous and F_t-adapted. On the other hand, the inequality (Equation 27) shows that for every fixed t, the sequence {X_t^n}_{n≥1} is a Cauchy sequence in L². Lemma 3 then shows that X̄ ∈ M_2(C).

It is easy to show using routine methods that X ¯ indeed satisfies Equation 22.

To complete the proof, we use a standard truncation argument. It replaces the function f by the truncated function:

$$ f_U(t,x) = \begin{cases} f(t,x), & \|x\| \le U \\ f\bigl(t, U x/\|x\|\bigr), & \|x\| > U, \end{cases} $$

and similarly for g. The functions f_U and g_U are globally Lipschitz continuous; hence, the previous argument shows that there exists a unique solution X̄_U to Equation 22 associated with the truncated functions. This solution satisfies the equation

$$ \bar{X}_U(t) = X_0 + \int_0^t \bigl( f_U(s,\bar{X}_U(s)) + \mathbb{E}_{\bar{Z}}\bigl[b(\bar{X}_U(s),\bar{Z}_s)\bigr] \bigr)\,ds + \int_0^t g_U(s,\bar{X}_U(s))\,dW_s + \int_0^t \mathbb{E}_{\bar{Z}}\bigl[\beta(\bar{X}_U(s),\bar{Z}_s)\bigr] \odot dB_s, \quad t \in [0,T]. \tag{28} $$

Let us now define the stopping time as

$$ \tau_U = \inf\bigl\{ t \in [0,T] : \|\bar{X}_U(t)\| \ge U \bigr\}. $$

It is easy to show that

$$ \bar{X}_{U'}(t) = \bar{X}_U(t) \quad \text{if } 0 \le t \le \tau_U,\ U' \ge U, \tag{29} $$

implying that the sequence of stopping times τ_U is increasing. Using Lemma 3, which implies that the solution to Equation 22 is almost surely bounded, we see that for almost all ω ∈ Ω, there exists U_0(ω) such that τ_U = T for all U ≥ U_0. Now, define X̄(t) = X̄_{U_0}(t), t ∈ [0, T]. Because of Equation 29, we have X̄(t ∧ τ_U) = X̄_U(t ∧ τ_U), and it follows from Equation 28 that

$$ \begin{aligned} \bar{X}(t\wedge\tau_U) &= X_0 + \int_0^{t\wedge\tau_U} \bigl( f_U(s,\bar{X}_s) + \mathbb{E}_{\bar{Z}}\bigl[b(\bar{X}_s,\bar{Z}_s)\bigr] \bigr)\,ds + \int_0^{t\wedge\tau_U} g_U(s,\bar{X}_s)\,dW_s + \int_0^{t\wedge\tau_U} \mathbb{E}_{\bar{Z}}\bigl[\beta(\bar{X}_s,\bar{Z}_s)\bigr] \odot dB_s \\ &= X_0 + \int_0^{t\wedge\tau_U} \bigl( f(s,\bar{X}_s) + \mathbb{E}_{\bar{Z}}\bigl[b(\bar{X}_s,\bar{Z}_s)\bigr] \bigr)\,ds + \int_0^{t\wedge\tau_U} g(s,\bar{X}_s)\,dW_s + \int_0^{t\wedge\tau_U} \mathbb{E}_{\bar{Z}}\bigl[\beta(\bar{X}_s,\bar{Z}_s)\bigr] \odot dB_s, \end{aligned} $$

and letting U → ∞, we have shown the existence of a solution to Equation 22 which, by Lemma 3, is square-integrable.

Uniqueness. Assume that X and Y are two solutions of the mean-field equations (Equation 22). From Lemma 3, we know that both solutions are in M_2(C). Moreover, using the bound (Equation 26), we directly obtain the inequality:

$$ \mathbb{E}\Bigl[\sup_{s\le t}\|X_s - Y_s\|^2\Bigr] \le K' \int_0^t \mathbb{E}\Bigl[\sup_{u\le s}\|X_u - Y_u\|^2\Bigr]\,ds $$

which, by Gronwall’s theorem, directly implies that

$$ \mathbb{E}\Bigl[\sup_{s\le t}\|X_s - Y_s\|^2\Bigr] = 0 $$

which ends the proof. □

We have proved the well-posedness of the mean-field equations. It remains to show that the solutions to the network equations converge to the solutions of the mean-field equations. This is what is achieved in the next theorem.

Theorem 4 Under assumptions (H1) to (H4), the following holds true:

• Convergence (see note c): For each neuron i of population α, the law of the multidimensional process X^{i,N} converges towards the law of the solution of the mean-field equation related to population α, namely X̄^α.

• Propagation of chaos: For any k ∈ ℕ and any k-tuple (i_1, …, i_k), the law of the process (X_t^{i_1,N}, …, X_t^{i_k,N}, t ≤ T) converges towards m_t^{p(i_1)} ⊗ ⋯ ⊗ m_t^{p(i_k)} (see note d); i.e. the asymptotic processes have the law of the solution of the mean-field equations and are all independent.

This theorem has important implications in neuroscience that we discuss in the ‘Discussion and conclusion’ section. Its proof is given in the Appendix.

4 Numerical simulations

At this point, we have provided a compact description of the activity of the network when the number of neurons tends to infinity. However, the structure of the solutions is difficult to understand from the implicit mean-field equations (Equation 22) or from their variants, such as the McKean-Vlasov-Fokker-Planck equations (Equation 24). In this section, we present some classical ways to numerically approximate the solutions to these equations and give some indications about the rate of convergence and the accuracy of the simulation. These numerical schemes allow us to compute and visualize the solutions. We then compare the results of the two schemes for a network of FitzHugh-Nagumo neurons belonging to a single population and show their good agreement.

The main difficulty one faces when developing numerical schemes for Equations 22 and 24 is that they are non-local. By this, we mean that the McKean-Vlasov equations contain the expectation of a certain function under the law of the solution to the equations (see Equation 22), while the corresponding Fokker-Planck equation contains integrals of the probability density function which is a solution to the equation (see Equation 24).

4.1 Numerical simulations of the McKean-Vlasov equations

The fact that the McKean-Vlasov equations involve an expectation of a certain function under the law of the solution of the equation makes them particularly hard to simulate directly. One is often reduced to using Monte Carlo simulations to compute this expectation, which amounts to simulating the solution of the network equations themselves (see [68]). This is the method we used. In its simplest form, it consists of a Monte Carlo simulation where one numerically solves the N network equations (Equation 21) with the classical Euler-Maruyama method a number of times with different initial conditions, and averages the trajectories of the solutions over the number of simulations.

In detail, let Δt > 0 and N ∈ ℕ. The discrete-time dynamics implemented in the stochastic numerical simulations consists of simulating M times a P-population discrete-time process (X̃_n^{i,r}, n ≤ T/Δt, i = 1, …, N), solution of the following recursion for a neuron i in population α:

$$ \begin{aligned} \tilde{X}^{i,r}_{n+1} = \tilde{X}^{i,r}_n &+ \Delta t \Biggl\{ f_{\alpha}\bigl(t_n,\tilde{X}^{i,r}_n\bigr) + \sum_{\gamma=1}^{P} \frac{1}{N_{\gamma}} \sum_{\substack{j=1\\ p(j)=\gamma}}^{N_{\gamma}} b_{\alpha\gamma}\bigl(\tilde{X}^{i,r}_n,\tilde{X}^{j,r}_n\bigr) \Biggr\} \\ &+ \sqrt{\Delta t}\, \Biggl\{ g_{\alpha}\bigl(t_n,\tilde{X}^{i,r}_n\bigr)\,\xi^{i,r}_{n+1} + \sum_{\gamma=1}^{P} \frac{1}{N_{\gamma}} \sum_{\substack{j=1\\ p(j)=\gamma}}^{N_{\gamma}} \beta_{\alpha\gamma}\bigl(\tilde{X}^{i,r}_n,\tilde{X}^{j,r}_n\bigr)\,\zeta^{i\gamma,r}_{n+1} \Biggr\}, \end{aligned} \tag{30} $$

where the ξ_n^{i,r} and ζ_n^{iγ,r} are independent d- and δ-dimensional standard normal random variables. The initial conditions X̃_1^{i,r}, i = 1, …, N, are drawn independently from the same law within each population for each Monte Carlo simulation r = 1, …, M. One then chooses one neuron i_α in each population α = 1, …, P. If the size N of the population is large enough, Theorem 4 states that the law, noted p_α(t, X), of X^{i_α} should be close to that of the solution X̄^α of the mean-field equations for α = 1, …, P. Hence, in effect, simulating the network is a good approximation (see below) of the simulation of the mean-field or McKean-Vlasov equations [68,69]. An approximation of p_α(t, X) can be obtained from the Monte Carlo simulations by quantizing the phase space and incrementing the count of each bin whenever the trajectory of the neuron i_α at time t falls into that particular bin. The resulting histogram can then be compared to the solution of the McKean-Vlasov-Fokker-Planck equation (Equation 24) corresponding to population α, whose numerical solution is described next.

The mean square error between the solution X̃_n^i of the numerical recursion (Equation 30) and the solution X̄_{nΔt} of the mean-field equations (Equation 22) is of order O(√Δt + 1/√N), the first term being related to the error made by approximating the solution X_{nΔt}^{i,N} of the network of size N by an Euler-Maruyama method, and the second to the convergence of X_{nΔt}^{i,N} towards the solution of the mean-field equation when considering globally Lipschitz continuous dynamics (see the proof of Theorem 4 in the Appendix). In our case, as shown before, the dynamics is only locally Lipschitz continuous. Finding efficient and provably convergent numerical schemes to approximate the solutions of such stochastic differential equations is an area of active research. There exist proofs that some schemes are divergent [70] or convergent [71] for some types of drift and diffusion coefficients. Since our equations are not covered by either case, we conjecture convergence, as we did not observe any divergence, and leave the proof for future work.
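To make the scheme concrete, here is a minimal Python sketch of the recursion (Equation 30) for a single population (P = 1) of FitzHugh-Nagumo neurons; the drift, diffusion and coupling terms are simplified transcriptions of Equation 14 with the Table 1 parameter values, so the exact form of the coupling and all names are illustrative assumptions rather than the code used for the figures.

```python
import numpy as np

def simulate_fhn_network(N=100, T=2.2, dt=0.01, I=0.4, seed=0):
    """One Monte Carlo run of the Euler-Maruyama recursion (Equation 30)
    for a single population of FitzHugh-Nagumo neurons (a sketch)."""
    rng = np.random.default_rng(seed)
    a, b, c = 0.7, 0.8, 0.08                             # FitzHugh-Nagumo
    J_bar, sigma_J, V_rev = 1.0, 0.2, 1.0                # synaptic weights
    a_r, a_d, T_max, lam, V_T = 1.0, 1.0, 1.0, 0.2, 2.0  # synapse
    sigma_ext = 0.0

    def S(V):  # sigmoidal activation of the presynaptic voltage
        return T_max / (1.0 + np.exp(-lam * (V - V_T)))

    # i.i.d. Gaussian initial conditions, as in Equation 31.
    V = rng.normal(0.0, 0.4, N)
    w = rng.normal(0.5, 0.4, N)
    y = rng.normal(0.3, 0.05, N)

    n_steps = int(round(T / dt))
    traj = np.empty((n_steps + 1, N, 3))
    traj[0] = np.stack([V, w, y], axis=-1)
    for n in range(1, n_steps + 1):
        y_mean = y.mean()  # empirical version of the mean-field term
        dV = V - V**3 / 3.0 - w + I - J_bar * (V - V_rev) * y_mean
        dw = c * (V + a - b * w)
        dy = a_r * S(V) * (1.0 - y) - a_d * y
        # Independent extrinsic and synaptic-weight noise on V.
        noise_V = (sigma_ext * rng.standard_normal(N)
                   + sigma_J * (V - V_rev) * y_mean * rng.standard_normal(N))
        V = V + dt * dV + np.sqrt(dt) * noise_V
        w = w + dt * dw
        y = y + dt * dy
        traj[n] = np.stack([V, w, y], axis=-1)
    return traj

# Histograms of one tagged neuron over many runs approximate p(t, V, w, y).
final_states = np.array([simulate_fhn_network(seed=r)[-1, 0]
                         for r in range(100)])
```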

4.2 Numerical simulations of the McKean-Vlasov-Fokker-Planck equation

For solving the McKean-Vlasov-Fokker-Planck equation (Equation 24), we have used the method of lines [72,73]. Its basic idea is to discretize the phase space and to keep time continuous. In this way, the values p_α(t, X), α = 1, …, P, of the probability density function of population α at each sample point X of the phase space are the solutions of ODEs in which the independent variable is time. Each sample point in the phase space generates P such ODEs, resulting in a large system of coupled ODEs. The solutions to this system yield the values of the probability density functions p_α solution of Equation 24 at the sample points. The computation of the integral terms that appear in the McKean-Vlasov-Fokker-Planck equation is achieved through a composite scheme, the Newton-Cotes method of order 6 [74]. The dimensionality of the space being large, and numerical errors increasing with the dimensionality of the integrand, such precise integration schemes are necessary. For an arbitrary real function f to be integrated between the values x_1 and x_2, this numerical scheme reads:

$$ \int_{x_1}^{x_2} f(x)\,dx \approx \frac{5}{288}\,\Delta x \sum_{i=1}^{M/5} \Bigl[ 19 f\bigl(x_1+(5i-5)\Delta x\bigr) + 75 f\bigl(x_1+(5i-4)\Delta x\bigr) + 50 f\bigl(x_1+(5i-3)\Delta x\bigr) + 50 f\bigl(x_1+(5i-2)\Delta x\bigr) + 75 f\bigl(x_1+(5i-1)\Delta x\bigr) + 19 f\bigl(x_1+5i\,\Delta x\bigr) \Bigr], $$

where Δx is the integration step, and M = ( x 2 x 1 ) / Δ x is chosen to be an integer multiple of 5.
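The composite rule is straightforward to implement; the following Python sketch (function and test values are ours, purely illustrative) applies the six-point weights blockwise, exactly as in the formula above:

```python
import numpy as np

def newton_cotes_6(f_values, dx):
    """Composite 6-point (order 6) Newton-Cotes rule. `f_values` holds f
    sampled at x_1, x_1 + dx, ..., x_2, so its length minus one (the
    number M of intervals) must be a multiple of 5."""
    M = len(f_values) - 1
    assert M % 5 == 0, "number of intervals must be a multiple of 5"
    w = np.array([19.0, 75.0, 50.0, 50.0, 75.0, 19.0])
    total = 0.0
    for i in range(M // 5):
        total += np.dot(w, f_values[5 * i : 5 * i + 6])
    return 5.0 * dx / 288.0 * total

# Sanity check on a smooth function: integral of cos over [0, pi/2] is 1.
x = np.linspace(0.0, np.pi / 2, 51)            # 50 intervals, multiple of 5
print(newton_cotes_6(np.cos(x), x[1] - x[0]))  # ~1.0
```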

The discretization of the derivatives with respect to the phase-space variables is done through the following fourth-order central difference schemes:

$$ \frac{df(x)}{dx} \approx \frac{ f(x-2\Delta x) - 8f(x-\Delta x) + 8f(x+\Delta x) - f(x+2\Delta x) }{ 12\,\Delta x }, $$

for the first-order derivatives, and

$$ \frac{d^2 f(x)}{dx^2} \approx \frac{ -f(x-2\Delta x) + 16f(x-\Delta x) - 30f(x) + 16f(x+\Delta x) - f(x+2\Delta x) }{ 12\,\Delta x^2 } $$

for the second-order derivatives (see [75]).
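These stencils translate directly into vectorized array operations. The sketch below is our own illustrative code; it leaves the two points nearest each edge untouched, in line with the zero Dirichlet boundary values described later:

```python
import numpy as np

def d1_central4(f, dx):
    """Fourth-order first derivative at interior points of a uniform grid."""
    df = np.zeros_like(f)
    df[2:-2] = (f[:-4] - 8 * f[1:-3] + 8 * f[3:-1] - f[4:]) / (12 * dx)
    return df

def d2_central4(f, dx):
    """Fourth-order second derivative (same stencil family)."""
    d2f = np.zeros_like(f)
    d2f[2:-2] = (-f[:-4] + 16 * f[1:-3] - 30 * f[2:-2]
                 + 16 * f[3:-1] - f[4:]) / (12 * dx**2)
    return d2f
```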

Finally, we have used a Runge-Kutta method of order 2 (RK2) for the numerical integration of the resulting system of ODEs. This is an explicit two-stage method for ordinary differential equations, described by its Butcher tableau.
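For reference, one standard explicit RK2 scheme is the midpoint rule; assuming this variant, the Butcher tableau reads:

$$ \begin{array}{c|cc} 0 & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \hline & 0 & 1 \end{array} $$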

4.3 Comparison between the solutions to the network and the mean-field equations

We illustrate these ideas with the example of a network of 100 FitzHugh-Nagumo neurons belonging to one excitatory population. We also use chemical synapses with the variation of the weights described by Equation 11. We choose a finite volume, outside of which we assume that the probability density function (p.d.f.) is zero. We then discretize this volume with n_V n_w n_y points defined by

$$ n_V \stackrel{\mathrm{def}}{=} \frac{V_{\max}-V_{\min}}{\Delta V}, \qquad n_w \stackrel{\mathrm{def}}{=} \frac{w_{\max}-w_{\min}}{\Delta w}, \qquad n_y \stackrel{\mathrm{def}}{=} \frac{y_{\max}-y_{\min}}{\Delta y}, $$

where V_min, V_max, w_min, w_max, y_min and y_max define the volume in which we solve the network equations and estimate the histogram defined in the ‘Numerical simulations of the McKean-Vlasov equations’ section, while ΔV, Δw and Δy are the quantization steps in each dimension of the phase space. For the simulation of the McKean-Vlasov-Fokker-Planck equation, instead, we use Dirichlet boundary conditions, assuming the probability and its partial derivatives to be 0 on the boundary and outside the volume.

In general, the total number of coupled ODEs that we have to solve for the McKean-Vlasov-Fokker-Planck equation with the method of lines is the product P n_V n_w n_y (in our case, we chose P = 1). This can become fairly large if we increase the precision of the phase-space discretization. Moreover, increasing this precision, in order to ensure the numerical stability of the method of lines, requires decreasing the time step Δt used in the RK2 scheme. This can strongly impact the efficiency of the numerical method (see the ‘Numerical simulations with GPUs’ section).
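To give an order of magnitude, the following short computation (a sketch using the Table 1 quantization) counts the ODEs generated by the method of lines:

```python
# Size of the method-of-lines system for the Table 1 quantization (P = 1).
V_min, V_max, dV = -3.0, 3.0, 0.1
w_min, w_max, dw = -2.0, 2.0, 0.1
y_min, y_max, dy = 0.0, 1.0, 0.06

n_V = round((V_max - V_min) / dV)   # 60
n_w = round((w_max - w_min) / dw)   # 40
n_y = round((y_max - y_min) / dy)   # 17
print(n_V * n_w * n_y)              # ~40,000 coupled ODEs
```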

In the simulations shown in the left-hand parts of Figures 4 and 5, we have used one population of 100 excitatory FitzHugh-Nagumo neurons connected with chemical synapses. We performed 10,000 Monte Carlo simulations of the network equations (Equation 14) with the Euler-Maruyama method in order to approximate the probability density. The model for the time variation of the synaptic weights is the simple model. The p.d.f. p(0, V, w, y) of the initial condition is Gaussian and reads

$$ p(0,V,w,y) = \frac{1}{(2\pi)^{3/2}\,\sigma_{V_0}\sigma_{w_0}\sigma_{y_0}} \exp\Bigl( -\frac{(V-\bar{V}_0)^2}{2\sigma_{V_0}^2} - \frac{(w-\bar{w}_0)^2}{2\sigma_{w_0}^2} - \frac{(y-\bar{y}_0)^2}{2\sigma_{y_0}^2} \Bigr). \tag{31} $$

Figure 4. Joint probability distribution (V, w) computed with the Monte Carlo algorithm for the network equations (Equation 14) (left) compared with the solution of the McKean-Vlasov-Fokker-Planck equation (Equation 24) (right), sampled at four times t_fin. Parameters are given in Table 1, with a current I = 0.4 corresponding to a stable limit cycle. Initial conditions (first column of Table 1) are concentrated inside this limit cycle. The two distributions are similar and centered around the limit cycle with two peaks (see text).

Figure 5. Joint probability distribution (V, y) computed with the Monte Carlo algorithm for the network equations (Equation 14) (left) compared with the solution of the McKean-Vlasov-Fokker-Planck equation (Equation 24) (right), sampled at four times t_fin. Parameters are given in Table 1, with a current I = 0.4 corresponding to a stable limit cycle. Initial conditions (first column of Table 1) are concentrated inside this limit cycle. The two distributions are similar and centered around the limit cycle with two peaks (see text).

The parameters are given in the first column of Table 1. In this table, the parameter t fin is the time at which we stop the computation of the trajectories in the case of the network equations and the computation of the solution of the McKean-Vlasov-Fokker-Planck equation in the case of the mean-field equations. The sequence [ 0.5 , 1.2 , 1.5 , 2.2 ] indicates that we compute the solutions at those four time instants corresponding to the four rows of Figures 4 and 5. The phase space has been quantized with the parameters shown in the second column of the same table to solve the McKean-Vlasov-Fokker-Planck equation. This quantization has also been used to build the histograms that represent the marginal probability densities with respect to the pairs ( V , w ) and ( V , y ) of coordinates of the state vector of a particular neuron. These histograms have then been interpolated to build the surfaces shown in the left-hand side of Figures 4 and 5. The parameters of the FitzHugh-Nagumo model are the same for each neuron of the population: they are shown in the third column of Table 1.

Table 1. Parameters used in the simulations of the neural network and for solving the McKean-Vlasov-Fokker-Planck equation. Results are shown in Figures 4 and 5 (see text).

Initial condition: t_fin = [0.5, 1.2, 1.5, 2.2]; Δt = 0.01 (mean field), 0.1 (network); V̄_0 = 0.0; σ_{V_0} = 0.4; w̄_0 = 0.5; σ_{w_0} = 0.4; ȳ_0 = 0.3; σ_{y_0} = 0.05.

Phase space: V_min = −3; V_max = 3; ΔV = 0.1; w_min = −2; w_max = 2; Δw = 0.1; y_min = 0; y_max = 1; Δy = 0.06.

FitzHugh-Nagumo: a = 0.7; b = 0.8; c = 0.08; I = 0.4; σ_ext = 0.

Synaptic weights: J̄ = 1; σ_J = 0.2.

Synapse: V_rev = 1; a_r = 1; a_d = 1; T_max = 1; λ = 0.2; V_T = 2; Γ = 0.1; Λ = 0.5.

The parameters for the noisy model of maximum conductances of Equation 11 are shown in the fourth column of the table. For these values of J̄ and σ_J, the probability that the maximum conductances change sign is very small. Finally, the parameters of the chemical synapses are shown in the fifth column; the parameters Γ and Λ are those of the χ function (Equation 3). The solutions are computed up to the times t_fin = 0.5, 1.2, 1.5, 2.2 with a time step of Δt = 0.1 for the network and Δt = 0.01 for the McKean-Vlasov-Fokker-Planck equation. The rest of the parameters are the typical values for the FitzHugh-Nagumo equations.

The marginals estimated from the trajectories of the network solutions are then compared to those obtained from the numerical solution of the McKean-Vlasov-Fokker-Planck equation (see Figures 4 and 5 right), using the method of lines explained above and starting from the same initial conditions (Equation 31) as the neural network.

We have used the value I = 0.4 for the external current (this value corresponds to the existence of a stable limit cycle for the isolated FitzHugh-Nagumo neuron), and the initial conditions have the values V ¯ 0 = 0 , w ¯ 0 = 0.5 and y ¯ 0 = 0.3 ; therefore, the initial points of the trajectories in the phase space are concentrated inside the limit cycle. We therefore expect that the solutions of the neural network and the McKean-Vlasov-Fokker-Planck equation will concentrate their mass around the limit cycle. This is what is observed in Figures 4 and 5, where the simulation of the neural network (left-hand side) is in very good agreement with the results of the simulation of the McKean-Vlasov-Fokker-Planck equation (right-hand side). Note that the densities display two peaks. These two peaks correspond to the fact that depending upon the position of the initial condition with respect to the nullclines of the FitzHugh-Nagumo equations, the points in the phase space follow two different classes of trajectories, as shown in Figure 6. The two peaks then rotate along the limit cycle in the ( V , w ) space (see also the ‘Numerical simulations with GPUs’ section).

Figure 6. Projection of 100 trajectories in the (V, w) (top left), (V, y) (top right) and (w, y) (bottom) planes. The limit cycle is especially visible in the (V, w) projection (red curves). The initial conditions split the trajectories into two classes corresponding to the two peaks shown in Figures 4 and 5. The parameters are the same as those used to generate these two pictures.

Figures 4 and 5 show a qualitative similarity between the marginal probability density functions obtained by simulating the network and those obtained by solving the Fokker-Planck equation corresponding to the mean-field equations. To make this comparison quantitative, we computed the Kullback-Leibler divergence D_KL(p_Network ‖ p_MVFP) between the two distributions.

We performed 10,000 Monte Carlo simulations of the network equations up to t_fin = 10 for increasing values of the network size N. As shown in Figure 7, the Kullback-Leibler divergence does decrease with increasing values of N, thereby confirming that, even for relatively small values of N, the average behavior of the network is well represented by the mean-field system described by the McKean-Vlasov-Fokker-Planck equation.
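Concretely, the divergence can be estimated from the two discretized marginals; a minimal sketch, assuming both are given on the same (V, w) grid:

```python
import numpy as np

def kl_divergence(p_network, p_mvfp, eps=1e-12):
    """Discrete Kullback-Leibler divergence D_KL(p_network || p_MVFP)
    between two histograms/densities sampled on the same grid. Both
    arrays are renormalized to sum to one; `eps` avoids log(0)."""
    p = p_network / p_network.sum()
    q = p_mvfp / p_mvfp.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))
```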

Figure 7. Variation of the Kullback-Leibler divergence between the marginal probability density function p(t, V, w) estimated from the network equations and computed from the McKean-Vlasov-Fokker-Planck equation, as a function of the network size. We have performed 10,000 Monte Carlo simulations of the network equations up to time t_fin = 10.0.

4.4 Numerical simulations with GPUs

Unfortunately, the algorithm for solving the McKean-Vlasov-Fokker-Planck equation described in the previous section is computationally very expensive. Indeed, when the number of points in the discretized grid of the (V, w, y) phase space is large, i.e. when the discretization steps ΔV, Δw and Δy are small, we also need to keep Δt small enough to guarantee the stability of the algorithm. This implies that a large number of equations must be solved and, moreover, that they must be solved with a small time step if we want to keep the numerical errors small. This inevitably slows down the simulations. We have dealt with this problem by using more powerful hardware, namely graphics processing units (GPUs).

We have changed the Runge-Kutta scheme of order 2 used for the simulations shown in the ‘Numerical simulations of the McKean-Vlasov-Fokker-Planck equation’ section and adopted a more accurate Runge-Kutta scheme of order 4. This was done because, with the more powerful machine, each evaluation of the right-hand side of the equation is faster, making it possible to use four calls per time step instead of the two of the previous method. Hence, the parallel hardware allowed us to use a more accurate method.
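For reference, the classical four-stage RK4 update for the method-of-lines system dp/dt = rhs(t, p) is sketched below; this is the textbook scheme, not a transcription of the GPU code, which additionally parallelizes the evaluation of rhs over the grid points:

```python
def rk4_step(rhs, p, t, dt):
    """One classical fourth-order Runge-Kutta step: four evaluations of
    the right-hand side per step, versus two for RK2."""
    k1 = rhs(t, p)
    k2 = rhs(t + dt / 2, p + dt / 2 * k1)
    k3 = rhs(t + dt / 2, p + dt / 2 * k2)
    k4 = rhs(t + dt, p + dt * k3)
    return p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```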

One of the purposes of the numerical study is to get a feeling for how the different parameters, in particular those related to the sources of noise, influence the solutions of the McKean-Vlasov-Fokker-Planck equation. This is meant to prepare the ground for the study of the bifurcations of these solutions with respect to these parameters, as was done in [76] in a different context. For this preliminary study, we varied the input current I and the parameter σ_ext controlling the intensity of the noise on the membrane potential in Equation 14. The McKean-Vlasov-Fokker-Planck equation reads in this case (see note e):

$$ \begin{aligned} \partial_t p(t,V,w,y) ={}& -\partial_V \Bigl\{ \Bigl[ V - \frac{V^3}{3} - w + I - \bar{J}\,(V-V_{\mathrm{rev}}) \int_{\mathbb{R}^3} y'\,p(t,V',w',y')\,dV'\,dw'\,dy' \Bigr]\, p(t,V,w,y) \Bigr\} \\ & - \partial_w \bigl[ c\,(V + a - b\,w)\, p(t,V,w,y) \bigr] - \partial_y \bigl\{ \bigl[ a_r S(V)(1-y) - a_d\,y \bigr]\, p(t,V,w,y) \bigr\} \\ & + \frac{1}{2}\,\partial^2_{VV} \Bigl\{ \Bigl[ \sigma_{\mathrm{ext}}^2 + \sigma_J^2\,(V-V_{\mathrm{rev}})^2 \Bigl( \int_{\mathbb{R}^3} y'\,p(t,V',w',y')\,dV'\,dw'\,dy' \Bigr)^{\!2} \Bigr]\, p(t,V,w,y) \Bigr\} \\ & + \frac{1}{2}\,\sigma_w^2\,\partial^2_{ww}\, p(t,V,w,y) + \frac{1}{2}\,\partial^2_{yy} \bigl\{ \bigl[ a_r S(V)(1-y) + a_d\,y \bigr]\,\chi^2(y)\, p(t,V,w,y) \bigr\}. \end{aligned} $$

The simulations were run with the χ function (Equation 3), the initial condition described by Equation 31 and the parameters shown in Table 2. These parameters are similar to those used in the previous numerical simulations, but they differ in the size of the grid, which is larger in this case.

Table 2. Parameters used in the simulations of the McKean-Vlasov-Fokker-Planck equation on GPUs. The simulations are shown in Figures 8 and 9 and in Additional files 1, 2, 3 and 4.

Initial condition: Δt = 0.0025, 0.0012; V̄_0 = 0.0; σ_{V_0} = 0.2; w̄_0 = 0.5; σ_{w_0} = 0.2; ȳ_0 = 0.3; σ_{y_0} = 0.05.

Phase space: V_min = −4; V_max = 4; ΔV = 0.027; w_min = −3; w_max = 3; Δw = 0.02; y_min = 0; y_max = 1; Δy = 0.003.

Stochastic FN neuron: a = 0.7; b = 0.8; c = 0.08; I = 0.4, 0.7; σ_ext = 0.27, 0.45; σ_w = 0.0007.

Synaptic weights: J̄ = 1; σ_J = 0.01.

Four snapshots of the solution are shown in Figure 8 (corresponding to the values I = 0.4 and σ_ext = 0.27 of the external input current and of the standard deviation of the noise on the membrane potential), and three are shown in Figure 9 (corresponding to the values I = 0.7 and σ_ext = 0.45). In the figures, the left column shows the values of the marginal p(t, V, w), and the right column the values of the marginal p(t, V, y); both are necessary to get an idea of the shape of the full distribution p(t, V, w, y). The first row of Figure 8 shows the initial conditions; they are the same for the results shown in Figure 9. The second, third and fourth rows of Figure 8 show the solution at the time instants t = 30.0 and t = 50.0 and at convergence (the time units differ from those of the previous section, but this is irrelevant to this discussion). The three rows of Figure 9 show the solution at the time instants t = 30.0 and t = 50.0 and at convergence. In both cases, the solution appears to converge to a stationary distribution whose mass is distributed over a ‘blurred’ version of the limit cycle of the isolated neuron. The ‘blurriness’ increases with the variance of the noise. The four movies for these two cases are available as Additional files 1, 2, 3 and 4.

Figure 8. Marginals of the solutions to the McKean-Vlasov-Fokker-Planck equation with respect to the V and w variables (left) and to the V and y variables (right). The first row shows the initial condition; the second, the marginals at time 30.0; the third, the marginals at time 50.0; and the fourth, the stationary (large time) solutions. The input current I is equal to 0.4 and σ_ext = 0.27. These are screenshots at different times of movies available as Additional files 1 and 2.

Figure 9. Marginals of the solutions to the McKean-Vlasov-Fokker-Planck equation with respect to the V and w variables (left) and to the V and y variables (right). The first row shows the marginals at time 30.0; the second, the marginals at time 50.0; and the third, the stationary (large time) solutions. The input current I is equal to 0.7 and σ_ext = 0.45. These are screenshots at different times of movies available as Additional files 3 and 4.

The results shown in Figures 8 and 9 and in Additional files 1, 2, 3 and 4 were obtained using two machines, each with seven nVidia Tesla C2050 cards, six 2.66 GHz dual-Xeon X5650 processors and 72 GB of RAM. The communication inside each machine was done using the pthreads library, and between machines using MPI calls. The mean execution time per time step, using the parameters already described, is 0.05 s.

The reader interested in more details on the numerical implementation and on the gains that can be achieved by the use of GPUs can consult [77].

In Figure 10, we show a solution to the McKean-Vlasov-Fokker-Planck equation which is qualitatively quite different from the solutions shown in Figures 8 and 9: The stationary solution is concentrated at a point in ( V , w , y ) space. This is an indication that perhaps, between the values −0.8 and 0.4 of the input current, the solutions to the McKean-Vlasov-Fokker-Planck equation have bifurcated. The numerical tools we have developed may be a way to build an intuition to guide a rigorous analysis of these phenomena.

Figure 10. Marginals of the solutions to the McKean-Vlasov-Fokker-Planck equation at convergence, with respect to the V and w variables (left) and to the V and y variables (right). The parameters are those in Table 1 except for the input current I, which is equal to −0.8, σ_ext = 0.45 and t_fin = 2.2. Compare with the last row of Figure 9 (see text).

5 Discussion and conclusion

In this article, we addressed the problem of the limit in law of networks of biologically inspired neurons as the number of neurons tends to infinity. We emphasized the necessity of dealing with biologically inspired models and discussed at length the types of models relevant to this study. We chose to address the case of conductance-based network models, which are a relevant description of neuronal activity. The mathematical analysis of these interacting diffusion processes resulted in the replacement, in the limit of large N, of a set of NP d-dimensional coupled equations (the network equations) by P d-dimensional mean-field equations describing the global behavior of the network. However, the price to pay for this reduction is that the resulting mean-field equations are nonstandard stochastic differential equations, similar to the McKean-Vlasov equations. These can be expressed either as implicit equations on the law of the solution or, in terms of the probability density function through the McKean-Vlasov-Fokker-Planck equations, as a nonlinear, non-local partial differential equation. These equations are, in general, hard to study theoretically.

Besides the fact that we explicitly model real spiking neurons, the mathematical part of our work differs from that of previous authors such as McKean, Tanaka and Sznitman (see the ‘Introduction’ section) in that we consider several populations, with the effect that the analysis is significantly more complicated. Our hypotheses are also more general; e.g. the drift and diffusion functions are nontrivial and satisfy the growth condition (H4), which is more general than the usual linear growth condition. Also, they are only assumed locally (and not globally) Lipschitz continuous, in order to be able to deal, for example, with the FitzHugh-Nagumo model. A locally Lipschitz continuous case was recently addressed in a different context for a model of swarming in [67].

Proofs of our results, under somewhat stronger hypotheses than ours and in special cases, are scattered in the literature, as briefly reviewed in the ‘Introduction’ and ‘Setting of the problem’ sections. Our main contribution is a complete, self-contained proof in a fairly general case, gathering all the ingredients required for our neuroscience applications. In particular, the case of the FitzHugh-Nagumo model, where the drift function does not satisfy the linear growth condition, involves a generalization of previous works using the more general growth condition (H4).

The simulation of these equations can itself be very costly. We hence addressed, in the ‘Numerical simulations’ section, numerical methods to compute the solutions of these equations: in the probabilistic framework, using the convergence result of the network equations to the mean-field limit together with standard integration methods for stochastic differential equations, and in the Fokker-Planck framework. The simulations performed for different values of the external input current parameter and of one of the parameters controlling the noise allowed us to show that the spatio-temporal shape of the probability density function describing the solution of the McKean-Vlasov-Fokker-Planck equation is sensitive to the variations of these parameters, as shown in Figures 8 and 9. However, we did not address the full characterization of the dynamics of the solutions in the present article. This appears to be a complex question that will be the subject of future work. It is known that for different McKean-Vlasov equations, stationary solutions do not necessarily exist and, when they do, are not necessarily unique (see [78]). A very particular case of these equations was treated in [76], where the authors consider that the function f_α is linear, g_α is constant and b_{αβ}(x, y) = S_β(y). This model, known as the firing-rate model, is shown in that paper to have Gaussian solutions when the initial data is Gaussian, and the dynamics of the solutions can be exactly reduced to a set of 2P coupled ordinary differential equations governing the mean and the standard deviation of the solution. Under these assumptions, a complete study of the solutions is possible, and the dependence upon the parameters can be understood through bifurcation analysis. The authors show that intrinsic noise levels govern the dynamics, creating or destroying fixed points and periodic orbits.

The mean-field description also has deep theoretical implications in neuroscience. Indeed, it points towards the fact that neurons encode their responses to stimuli through probability distributions. This type of coding was evoked by several authors [47], and the mean-field approach shows that under some mild conditions, this phenomenon arises: all neurons belonging to a particular population can be seen as independent realizations of the same process, governed by the mean-field equation. The relevance of this phenomenon is reinforced by the recent experimental observation that neurons have correlation levels significantly below what had been previously reported [13]. This independence has deep implications for the efficiency of neural coding, which the propagation of chaos theory accounts for. To illustrate this phenomenon, we have performed the following simulations. Considering networks of 2, 10 and 100 FitzHugh-Nagumo neurons, we have simulated the network equations 2,000 times over the time interval [0, 100]. We have picked at random a pair of neurons and computed the time variation of the cross-correlation of the values of their state variables, as sketched below. The results are shown in Figure 11. It appears that the propagation of chaos is observable for relatively small numbers of neurons in the network, thus indicating once more that the theory developed in this paper in the limit case of an infinite number of neurons is quite robust to finite-size effects (see note f).
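A minimal sketch of this estimate (our own illustrative code): stacking, for each of the two tagged neurons, one scalar state variable (e.g. V) across Monte Carlo runs, the correlation at each time step is

```python
import numpy as np

def pairwise_crosscorr(runs_i, runs_j):
    """Monte Carlo estimate of the time-varying cross-correlation between
    the state variables of two tagged neurons i and j. `runs_i` and
    `runs_j` have shape (n_runs, n_steps). Returns one value per step."""
    mi, mj = runs_i.mean(axis=0), runs_j.mean(axis=0)
    si, sj = runs_i.std(axis=0), runs_j.std(axis=0)
    cov = ((runs_i - mi) * (runs_j - mj)).mean(axis=0)
    return cov / (si * sj)
```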

Figure 11. Variations over time of the cross-correlation of the (V, w, y) variables of several FitzHugh-Nagumo neurons in a network. Top left: 2 neurons. Top right: 10 neurons. Bottom: 100 neurons. The cross-correlation decreases steadily with the number of neurons in the network.

The present study develops theoretical arguments to derive the mean-field equations resulting from the activity of large neuron ensembles. However, the rigorous and formal approach developed here does not allow a direct characterization of brain states. The paper nevertheless opens the way to a rigorous analysis of the dynamics of large neuron ensembles through the derivation of different quantities that may be relevant. A first approach could be to derive the equations of the successive moments of the solutions. Truncating this expansion would yield systems of ordinary differential equations that could give approximate information on the solution. However, the choice of the number of moments taken into account is still an open question that raises several deep issues [46].

Electronic Supplementary Material

Additional file 1. Time evolution of the (V, w) marginal of the solution to the McKean-Vlasov-Fokker-Planck equation. The four images in the left part of Figure 8 are four snapshots of this movie taken at time 0 (initial condition), time 30, time 50 and at a large enough time for the solution to be stationary. The input current is equal to 0.4, and the standard deviation of the membrane potential noise, to 0.27. (AVI 2.0 MB)

Additional file 2. Time evolution of the (V, y) marginal of the solution to the McKean-Vlasov-Fokker-Planck equation. The four images in the right part of Figure 8 are four snapshots of this movie taken at time 0 (initial condition), time 30, time 50 and at a large enough time for the solution to be stationary. The input current is equal to 0.4, and the standard deviation of the membrane potential noise, to 0.27. (AVI 1.5 MB)

Additional file 3. Time evolution of the (V, w) marginal of the solution to the McKean-Vlasov-Fokker-Planck equation. The three images in the left part of Figure 9 are three snapshots of this movie taken at time 30, time 50 and at a large enough time for the solution to be stationary. The input current is equal to 0.7, and the standard deviation of the membrane potential noise, to 0.45. (AVI 3.0 MB)

Additional file 4. Time evolution of the (V, y) marginal of the solution to the McKean-Vlasov-Fokker-Planck equation. The three images in the right part of Figure 9 are three snapshots of this movie taken at time 30, time 50 and at a large enough time for the solution to be stationary. The input current is equal to 0.7, and the standard deviation of the membrane potential noise, to 0.45. (AVI 2.2 MB)

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JB and DF developed the code for solving the stochastic differential equations, the McKean-Vlasov equations and the McKean-Vlasov-Fokker-Planck equations. They ran the numerical experiments and generated all the figures. DF derived some of the McKean-Vlasov equations in a heuristic fashion. OF and JT developed the models, proved the theorems and wrote the paper. All authors read and approved the final manuscript.

Acknowledgements

This work was partially supported by the ERC grant #227747 NerVi, the FACETS-ITN Marie-Curie Initial Training Network #237955 and the IP project BrainScaleS #269921.

a. More precisely, as shown in [79,80], the convergence is to a larger, 13-dimensional, system with an invariant four-dimensional manifold on which the solution lives given appropriate initial conditions. See also [81].

b. As we will see in the proof, most properties are valid as soon as N_α tends to infinity as N goes to infinity for all α ∈ {1, …, P}; the previous assumption allows quantifying the speed of convergence towards the asymptotic regime.

c. The type of convergence is specified in the proof given in the Appendix.

d. The notation m_t^α was introduced right after Equation 22.

e. We have included a small noise (controlled by the parameter σ_w) on the adaptation variable w. This does not change the previous analysis, in particular Proposition 1, but makes the McKean-Vlasov-Fokker-Planck equation well-posed in a cube of the state space with 0 boundary value; see e.g. [82].

f. Note that we did not estimate the correlation within larger networks since, as predicted by Theorem 4, it would be smaller and smaller, requiring an increasingly large number of Monte Carlo simulations.

g. Note that i ≠ j and i ≠ k as soon as p(i) ≠ p(j) = p(k) = γ. In the case where p(i) = γ, it is easy to check that when j (respectively, k) is equal to i, all terms such that k ≠ j (respectively, j ≠ k) are equal to 0.

References

1. McKean H: A class of Markov processes associated with nonlinear parabolic equations. Proc Natl Acad Sci USA 1966, 56(6):1907-1911.
2. McKean H: Propagation of chaos for a class of non-linear parabolic equations. In Stochastic Differential Equations. Lecture Series in Differential Equations 7. Air Force Office Sci. Res., Arlington; 1967:41-57.
3. Braun W, Hepp K: The Vlasov dynamics and its fluctuations in the 1/n limit of interacting classical particles. Commun Math Phys 1977, 56(2):101-113.
4. Dawson D: Critical dynamics and fluctuations for a mean-field model of cooperative behavior. J Stat Phys 1983, 31:29-85.
5. Dobrushin RL: Prescribing a system of random variables by conditional distributions. Theory Probab Appl 1970, 15:458-486.
6. Tanaka H: Probabilistic treatment of the Boltzmann equation of Maxwellian molecules. Probab Theory Relat Fields 1978, 46:67-105.
7. Tanaka H, Hitsuda M: Central limit theorem for a simple diffusion model of interacting particles. Hiroshima Math J 1981, 11(2):415-423.
8. Tanaka H: Some probabilistic problems in the spatially homogeneous Boltzmann equation. In Theory and Application of Random Fields. Lecture Notes in Control and Information Sciences. Edited by Kallianpur G. Springer, Berlin; 1983:258-267.
9. Tanaka H: Limit theorems for certain diffusion processes with interaction. In Stochastic Analysis. North-Holland Mathematical Library 32. North-Holland, Amsterdam; 1984:469-488.
10. Sznitman A: Nonlinear reflecting diffusion process, and the propagation of chaos and fluctuations associated. J Funct Anal 1984, 56(3):311-336.
11. Sznitman A: A propagation of chaos result for Burgers’ equation. Probab Theory Relat Fields 1986, 71(4):581-613.
12. Sznitman AS: Topics in propagation of chaos. In Ecole d’Eté de Probabilités de Saint-Flour XIX 1989. Lecture Notes in Math. 1464. Edited by Burkholder D, Pardoux E, Sznitman AS. Springer, Berlin; 1991:165-251.
13. Ecker A, Berens P, Keliris G, Bethge M, Logothetis N, Tolias A: Decorrelated neuronal firing in cortical microcircuits. Science 2010, 327(5965):584.
14. Hodgkin A, Huxley A: A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 1952, 117:500-544.
15. Fitzhugh R: Theoretical effect of temperature on threshold in the Hodgkin-Huxley nerve model. J Gen Physiol 1966, 49(5):989-1005.
16. FitzHugh R: Mathematical models of excitation and propagation in nerve. In Biological Engineering. Edited by Schwan HP. McGraw-Hill Book Co., New York; 1969:1-85.
17. Izhikevich EM: Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, Cambridge; 2007.
18. Lapicque L: Recherches quantitatifs sur l’excitation des nerfs traitee comme une polarisation. J Physiol Paris 1907, 9:620-635.
19. Tuckwell HC: Introduction to Theoretical Neurobiology. Cambridge University Press, Cambridge; 1988.
20. Ermentrout GB, Terman D: Foundations of Mathematical Neuroscience. Interdisciplinary Applied Mathematics. Springer, Berlin; 2010.
21. FitzHugh R: Mathematical models of threshold phenomena in the nerve membrane. Bull Math Biol 1955, 17(4):257-278.
22. Nagumo J, Arimoto S, Yoshizawa S: An active pulse transmission line simulating nerve axon. Proc IRE 1962, 50:2061-2070.
23. Destexhe A, Mainen Z, Sejnowski T: Synthesis of models for excitable membranes, synaptic transmission and neuromodulation using a common kinetic formalism. J Comput Neurosci 1994, 1(3):195-230.
24. Kandel ER, Schwartz JH, Jessel TM: Principles of Neural Science. 4th edition. McGraw-Hill, New York; 2000.
25. Cox JC, Ingersoll JC Jr, Ross SA: A theory of the term structure of interest rates. Econometrica 1985, 53(2):385-407.
26. Mao X: Stochastic Differential Equations and Applications. 2nd edition. Horwood, Chichester; 2008.
27. Amari SI: Characteristics of random nets of analog neuron-like elements. IEEE Trans Syst Man Cybern 1972, 2(5):643-657.
28. Amari S: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 1977, 27(2):77-87.
29. Wilson H, Cowan J: Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 1972, 12:1-24.
30. Wilson H, Cowan J: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biol Cybern 1973, 13(2):55-80.
31. Hammerstein A: Nichtlineare Integralgleichungen nebst Anwendungen. Acta Math 1930, 54:117-176.
32. Faugeras O, Grimbert F, Slotine JJ: Absolute stability and complete synchronization in a class of neural fields models. SIAM J Appl Math 2008, 61:205-250.
33. Coombes S, Owen MR: Bumps, breathers, and waves in a neural network with spike frequency adaptation. Phys Rev Lett 2005, 94(14): Article ID 148102.
34. Ermentrout B: Neural networks as spatio-temporal pattern-forming systems. Rep Prog Phys 1998, 61:353-430.
35. Ermentrout G, Cowan J: Temporal oscillations in neuronal nets. J Math Biol 1979, 7(3):265-280.
36. Laing C, Troy W, Gutkin B, Ermentrout G: Multiple bumps in a neuronal model of working memory. SIAM J Appl Math 2002, 63:62-97.
37. Chossat P, Faugeras O: Hyperbolic planforms in relation to visual edges and textures perception. PLoS Comput Biol 2009, 5(12): Article ID e1000625.
38. Veltz R, Faugeras O: Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dyn Syst 2010, 9(3):954-998.
39. Abbott L, Van Vreeswijk C: Asynchronous states in networks of pulse-coupled neurons. Phys Rev 1993, 48:1483-1490.
40. Amit D, Brunel N: Model of global spontaneous activity and local structured delay activity during delay periods in the cerebral cortex. Cereb Cortex 1997, 7:237-252.
41. Brunel N, Hakim V: Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput 1999, 11:1621-1671.
42. Brunel N: Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 2000, 8:183-208.
43. El Boustani S, Destexhe A: A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural Comput 2009, 21:46-100.
44. Mattia M, Del Giudice P: Population dynamics of interacting spiking neurons. Phys Rev E, Stat Nonlinear Soft Matter Phys 2002, 66(5): Article ID 51917.
45. Cai D, Tao L, Shelley M, McLaughlin DW: An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. Proc Natl Acad Sci USA 2004, 101(20):7757-7762.
46. Ly C, Tranchina D: Critical analysis of dimension reduction by a moment closure method in a population density approach to neural network modeling. Neural Comput 2007, 19(8):2032-2092.
47. Rolls ET, Deco G: The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function. Oxford University Press, Oxford; 2010.
48. Softky WR, Koch C: The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J Neurosci 1993, 13:334-350.
49. Brunel N, Latham P: Firing rate of noisy quadratic integrate-and-fire neurons. Neural Comput 2003, 15:2281-2306.
50. Plesser HE: Aspects of signal processing in noisy neurons. PhD thesis. Georg-August-Universität; 1999.
51. Touboul J, Faugeras O: First hitting time of double integral processes to curved boundaries. Adv Appl Probab 2008, 40(2):501-528.
52. Beggs JM, Plenz D: Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. J Neurosci 2004, 24(22):5216-5229.
53. Benayoun M, Cowan JD, van Drongelen W, Wallace E: Avalanches in a stochastic model of spiking neurons. PLoS Comput Biol 2010, 6(7): Article ID e1000846.
54. Levina A, Herrmann JM, Geisel T: Phase transitions towards criticality in a neural system with adaptive interactions. Phys Rev Lett 2009, 102(11): Article ID 118110.
55. Touboul J, Destexhe A: Can power-law scaling and neuronal avalanches arise from stochastic dynamics? PLoS ONE 2010, 5(2): Article ID e8982.
56. Bressloff P: Stochastic neural field theory and the system-size expansion. SIAM J Appl Math 2009, 70:1488-1521.
57. Buice MA, Cowan JD: Field-theoretic approach to fluctuation effects in neural networks. Phys Rev E, Stat Nonlinear Soft Matter Phys 2007, 75(5): Article ID 051919.
58. Buice M, Cowan J, Chow C: Systematic fluctuation expansion for neural network activity equations. Neural Comput 2010, 22(2):377-426.
59. Ohira T, Cowan J: Master-equation approach to stochastic neurodynamics. Phys Rev E, Stat Nonlinear Soft Matter Phys 1993, 48(3):2259-2266.
60. Treves A: Mean-field analysis of neuronal spike dynamics. Network 1993, 4(3):259-284.
61. Gerstner W: Time structure of the activity in neural network models. Phys Rev E, Stat Nonlinear Soft Matter Phys 1995, 51:738-758.
62. Faugeras O, Touboul J, Cessac B: A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs. Front Comput Neurosci 2009. doi:10.3389/neuro.10.001.2009.
63. Guionnet A: Averaged and quenched propagation of chaos for spin glass dynamics. Probab Theory Relat Fields 1997, 109(2):183-215.
64. Chizhov AV, Graham LJ: Population model of hippocampal pyramidal neurons, linking to refractory density approach to conductance-based neurons. Phys Rev E, Stat Nonlinear Soft Matter Phys 2007, 75: Article ID 011924.
65. Sompolinsky H, Crisanti A, Sommers H: Chaos in random neural networks. Phys Rev Lett 1988, 61(3):259-262.
66. Sompolinsky H, Zippelius A: Relaxational dynamics of the Edwards-Anderson model and the mean-field theory of spin-glasses. Phys Rev B, Condens Matter Mater Phys 1982, 25(11):6860-6875.
67. Bolley F, Cañizo JA, Carrillo JA: Stochastic mean-field limit: non-Lipschitz forces and swarming. Math Models Methods Appl Sci 2011, 21(11):2179-2210.
68. Talay D, Vaillant O: A stochastic particle method with random weights for the computation of statistical solutions of McKean-Vlasov equations. Ann Appl Probab 2003, 13:140-180.
69. Bossy M, Talay D: A stochastic particle method for the McKean-Vlasov and the Burgers equation. Math Comput 1997, 66(217):157-192.
70. Hutzenthaler M, Jentzen A, Kloeden P: Strong and weak divergence in finite time of Euler’s method for stochastic differential equations with non-globally Lipschitz continuous coefficients. Proc R Soc, Math Phys Eng Sci 2011, 467(2130):1563-1576.
71. Hutzenthaler M, Jentzen A: Convergence of the stochastic Euler scheme for locally Lipschitz coefficients. Found Comput Math 2011, 11(6):657-706.
72. Schiesser W: The Numerical Method of Lines: Integration of Partial Differential Equations. Academic Press, San Diego; 1991.
73. Schiesser WE, Griffiths GW: A Compendium of Partial Differential Equation Models: Method of Lines Analysis with Matlab. 1st edition. Cambridge University Press, New York; 2009.
74. Ueberhuber CW: Numerical Computation 2: Methods, Software, and Analysis. Springer, Berlin; 1997.
75. Morton KW, Mayers DF: Numerical Solution of Partial Differential Equations: An Introduction. Cambridge University Press, Cambridge; 2005.
76. Touboul J, Hermann G, Faugeras O: Noise-induced behaviors in neural mean field dynamics. SIAM J Appl Dyn Syst 2012, 11(1):49-81.
77. Baladron J, Fasoli D, Faugeras O: Three applications of GPU computing in neuroscience. Comput Sci Eng 2012, 14:40-47.
78. Herrmann S, Tugaut J: Non-uniqueness of stationary measures for self-stabilizing processes. Stoch Process Appl 2010, 120(7):1215-1246.
79. Pakdaman K, Thieullen M, Wainrib G: Fluid limit theorems for stochastic hybrid systems with application to neuron models. Adv Appl Probab 2010, 42(3):761-794.
80. Wainrib G: Randomness in neurons: a multiscale probabilistic analysis. PhD thesis. Ecole Polytechnique; 2010.
81. Goldwyn JH, Imennov NS, Famulare M, Shea-Brown E: Stochastic differential equation models for ion channel noise in Hodgkin-Huxley neurons. Phys Rev E, Stat Nonlinear Soft Matter Phys 2011, 83(4): Article ID 041908.
82. Evans LC: Partial Differential Equations. Graduate Studies in Mathematics 19. American Mathematical Society, Providence; 1998.