NeuroMathComp Laboratory, INRIA, Sophia-Antipolis Méditerranée, 06902, France

NeuroMathComp Laboratory, ENS, Paris, 75013, France

BANG Laboratory, INRIA, Paris, 75013, France

Mathematical Neuroscience Lab, Center of Interdisciplinary Research in Biology, Collège de France, Paris, 75005, France

CNRS/UMR 7241-INSERM U1050, Université Pierre et Marie Curie, ED 158, Paris, 75005, France

MEMOLIFE Laboratory of Excellence and Paris Science Lettre, 11, Place Marcelin Berthelot, Paris, 75005, France

Abstract

We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons’ initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation of chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g., a bifurcation analysis.

**Mathematics Subject Classification (2000):** 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80.

1 Introduction

Cortical activity displays highly complex behaviors which are often characterized by the presence of noise. Reliable responses to specific stimuli often arise at the level of population assemblies (cortical areas or cortical columns) featuring a very large number of neuronal cells, each of them presenting a highly nonlinear behavior, that are interconnected in a very intricate fashion. Understanding the global behavior of large-scale neural assemblies has been a great endeavor in the past decades. One of the main interests of large-scale modeling is characterizing brain functions, which is what most imaging techniques record. Moreover, anatomical data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50 μm to 1 mm, containing on the order of 100 to 100,000 neurons belonging to a few different types. These columns have specific functions; for example, in the human visual area V1, they respond to preferential orientations of bar-shaped visual stimuli. In this case, information processing does not occur at the scale of individual neurons but rather corresponds to an activity integrating the individual dynamics of many interacting neurons, resulting in a mesoscopic signal that arises through averaging effects and effectively depends on a few control parameters. This vision, inherited from statistical physics, requires that the space scale be large enough to include sufficiently many neurons and small enough so that the region considered is homogeneous. This is, in effect, the case of the cortical columns.

In the field of mathematics, studying the limits of systems of interacting particles has been a long-standing problem that presents many technical difficulties. One of the questions addressed was to characterize the limit of the probability distribution of an infinite set of interacting diffusion processes, and the fluctuations around this limit for a finite number of processes. The first breakthroughs in answering this question are due to Henry McKean (see, e.g.

In the present article, we apply this mathematical approach to the problem of interacting neurons arising in neuroscience. To this end, we extend the theory to encompass a wider class of models. This implies the use of locally (instead of globally) Lipschitz coefficients and of a Lyapunov-like growth condition replacing the customary linear growth assumption for some of the functions appearing in the equations. The contributions of this article are fourfold:

1. We derive, in a rigorous manner, the mean-field equations resulting from the interaction of infinitely many neurons in the case of widely accepted models of spiking neurons and synapses.

2. We prove a propagation of chaos property which shows that in the mean-field limit, the neurons become independent, in agreement with some recent experimental work

3. We show, numerically, that the mean-field limit is a good approximation of the mean activity of the network even for fairly small sizes of neuronal populations.

4. We suggest, numerically, that the changes in the dynamics of the mean-field limit when varying parameters can be understood by studying the mean-field Fokker-Planck equation.

We start by reviewing such models in the ‘Spiking conductance-based models’ section to motivate the present study. It is in the ‘Mean-field equations for conductance-based models’ section that we provide the limit equations describing the behaviors of an infinite number of interacting neurons and state and prove the existence and uniqueness of solutions in the case of conductance-based models. The detailed proof of the second main theorem, that of the convergence of the network equations to the mean-field limit, is given in the Appendix. In the ‘Numerical simulations’ section, we begin to address the difficult problem of the numerical simulation of the mean-field equations and show some results indicating that they may be an efficient way of representing the mean activity of a finite-size network as well as to study the changes in the dynamics when varying biological parameters. The final ‘Discussion and conclusion’ section focuses on the conclusions of our mathematical and numerical results and raises some important questions for future work.

2 Spiking conductance-based models

This section sets the stage for our results. We review in the ‘Hodgkin-Huxley model’ section the Hodgkin-Huxley model equations in the case where both the membrane potential and the ion channel equations include noise. We then proceed in the ‘The FitzHugh-Nagumo model’ section with the FitzHugh-Nagumo equations in the case where the membrane potential equation includes noise. We next discuss in the ‘Models of synapses and maximum conductances’ section the connectivity models of networks of such neurons, starting with the synapses, electrical and chemical, and finishing with several stochastic models of the synaptic weights. In the ‘Putting everything together’ section, we write the network equations in the various cases considered in the previous section and express them in a general abstract mathematical form that is the one used for stating and proving the results about the mean-field limits in the ‘Mean-field equations for conductance-based models’ section. Before we jump into this, we conclude in the ‘Mean-field methods in computational neuroscience: a quick overview’ section with a brief overview of the mean-field methods popular in computational neuroscience.

From the mathematical point of view, each neuron is a complex system, whose dynamics is often described by a set of stochastic nonlinear differential equations. Such models aim at reproducing the biophysics of ion channels governing the membrane potential and therefore the spike emission. This is the case of the classical model of Hodgkin and Huxley

2.1 Hodgkin-Huxley model

One of the most important models in computational neuroscience is the Hodgkin-Huxley model. Using pioneering experimental techniques of that time, Hodgkin and Huxley

The basic electrical relation between the membrane potential and the currents is simply:

where

where

The functions ^{a}

where

In order to complete our stochastic Hodgkin-Huxley neuron model, we assume that the external current

This is a stochastic version of the Hodgkin-Huxley model. The functions

To illustrate the model, we show in Figure


**Solution of the noiseless Hodgkin-Huxley model**.


**Noisy Hodgkin-Huxley model**.

For the membrane potential, we have used

with
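As an illustration, the following is a minimal Euler-Maruyama sketch of a noisy Hodgkin-Huxley neuron, using the classical rate functions and parameters of the original model. The noise model here (additive current noise only, no channel noise) and all numerical values are our own choices for this sketch, not those used to produce the figures.

```python
import numpy as np

# Standard Hodgkin-Huxley rate functions (voltages in mV, rest shifted to 0).
def alpha_n(V): return 0.01 * (10.0 - V) / (np.exp((10.0 - V) / 10.0) - 1.0)
def beta_n(V):  return 0.125 * np.exp(-V / 80.0)
def alpha_m(V): return 0.1 * (25.0 - V) / (np.exp((25.0 - V) / 10.0) - 1.0)
def beta_m(V):  return 4.0 * np.exp(-V / 18.0)
def alpha_h(V): return 0.07 * np.exp(-V / 20.0)
def beta_h(V):  return 1.0 / (np.exp((30.0 - V) / 10.0) + 1.0)

def simulate_hh(T=50.0, dt=0.01, I_mean=10.0, sigma_ext=1.0, seed=0):
    """Euler-Maruyama integration of a Hodgkin-Huxley neuron with a noisy
    external current; the conductances and reversal potentials are the
    classical values."""
    rng = np.random.default_rng(seed)
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 115.0, -12.0, 10.6
    n_steps = int(T / dt)
    V, n, m, h = 0.0, 0.3, 0.05, 0.6
    Vs = np.empty(n_steps)
    for k in range(n_steps):
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        dW = rng.normal(0.0, np.sqrt(dt))
        V += dt * (I_mean - I_ion) / C + sigma_ext * dW / C
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        # keep the gating variables in [0, 1] despite discretization error
        n, m, h = np.clip([n, m, h], 0.0, 1.0)
        Vs[k] = V
    return Vs
```

With a constant mean input of 10 μA/cm², this parameter set is in the tonic spiking regime, so the trajectory shows repetitive action potentials perturbed by the noise.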

Because the Hodgkin-Huxley model is rather complicated and high-dimensional, many reductions have been proposed, in particular to two dimensions instead of four. These reduced models include the famous FitzHugh-Nagumo and Morris-Lecar models. These two models are two-dimensional approximations of the original Hodgkin-Huxley model, based on quantitative observations of the time scale of the dynamics of each variable and on the identification of variables with similar time scales. Most reduced models still comply with the Lipschitz and linear growth conditions ensuring the existence and uniqueness of a solution, except for the FitzHugh-Nagumo model which we now introduce.

2.2 The FitzHugh-Nagumo model

In order to reduce the dimension of the Hodgkin-Huxley model, FitzHugh

where

Note that because the function

We show in Figure


**Time evolution of the membrane potential and the adaptation variable in the FitzHugh-Nagumo model**.

The deterministic model was solved with a Runge-Kutta method of order 4, the stochastic model with the Euler-Maruyama scheme. In both cases, we used an integration time step
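A minimal sketch of the Euler-Maruyama scheme for a stochastic FitzHugh-Nagumo neuron might look as follows; the cubic drift and linear adaptation are the standard FitzHugh-Nagumo form, but the parameter values, initial conditions and noise intensity are illustrative, not those of the figures.

```python
import numpy as np

def simulate_fhn(T=100.0, dt=0.01, a=0.7, b=0.8, c=0.08, I=0.5,
                 sigma=0.1, seed=1):
    """Euler-Maruyama scheme for the stochastic FitzHugh-Nagumo system
        dV = (V - V^3/3 - w + I) dt + sigma dW
        dw = c (V + a - b w) dt
    (noise on the membrane potential only)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    V, w = -1.0, -0.5
    traj = np.empty((n_steps, 2))
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        # both updates use the state at the beginning of the step
        V, w = (V + dt * (V - V**3 / 3.0 - w + I) + sigma * dW,
                w + dt * c * (V + a - b * w))
        traj[k] = V, w
    return traj
```

Note that the cubic term, which violates the global Lipschitz and linear growth conditions, still confines the trajectories in practice: the drift is strongly restoring for large |V|.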

2.3 Partial conclusion

We have reviewed two main models of space-clamped single neurons: the Hodgkin-Huxley and FitzHugh-Nagumo models. These models are stochastic, including various sources of noise: external and internal. The noise sources are supposed to be independent Brownian processes. We have shown that the resulting stochastic differential Equations 2 and 5 were well-posed. As pointed out above, this analysis extends to a large number of reduced versions of the Hodgkin-Huxley such as those that can be found in the book

2.4 Models of synapses and maximum conductances

We now study the situation in which several of these neurons are connected to one another forming a network, which we will assume to be fully connected. Let

2.4.1 Chemical synapses

The functioning of chemical synapses is based on the release of a neurotransmitter from the presynaptic neuron's synaptic bouton, which binds to specific receptors on the postsynaptic cell. This process, similar to the currents described in the Hodgkin and Huxley model, is governed by the value of the cell membrane potential. We use the model described in

The synaptic reversal potentials

The function

The positive constants

Destexhe et al.

Because of the dynamics of ion channels and of their finite number, similar to the channel noise models derived through the Langevin approximation in the Hodgkin-Huxley model (Equation 2), we assume that the proportion of active channels is actually governed by a stochastic differential equation with diffusion coefficient

In detail, we have

Remember that the form of the diffusion term guarantees that the solutions to this equation with appropriate initial conditions stay between 0 and 1. The Brownian motions
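To illustrate why such a diffusion coefficient confines the process, here is a hedged sketch of a noisy synaptic activation variable whose diffusion coefficient χ(y) = √(y(1 − y)) vanishes at 0 and 1. The drift is a Destexhe-type kinetic form with the presynaptic activation frozen at a constant value for simplicity; all names and numerical values are illustrative assumptions, not the article's exact model.

```python
import numpy as np

def simulate_gating(T=50.0, dt=0.001, a_r=1.0, a_d=0.5, S=0.6,
                    sigma=0.3, y0=0.2, seed=3):
    """Euler-Maruyama scheme for a noisy synaptic activation variable
        dy = (a_r S (1 - y) - a_d y) dt + sigma chi(y) dW,
    where chi(y) = sqrt(y (1 - y)) vanishes at the boundaries 0 and 1,
    which keeps the proportion of open channels inside [0, 1].
    S (presynaptic activation) is frozen here for illustration."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    y = y0
    ys = np.empty(n_steps)
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        chi = np.sqrt(max(y * (1.0 - y), 0.0))
        y += dt * (a_r * S * (1.0 - y) - a_d * y) + sigma * chi * dW
        y = min(max(y, 0.0), 1.0)  # guard against discretization overshoot
        ys[k] = y
    return ys
```

The clipping step only corrects the small overshoots introduced by the discrete scheme; the continuous-time process itself never leaves [0, 1] because the noise switches off at the boundaries while the drift points inward there.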

2.4.2 Electrical synapses

The electrical synapse transmission is rapid and stereotyped and is mainly used to send simple depolarizing signals for systems requiring the fastest possible response. At the location of an electrical synapse, the separation between two neurons is very small (≈3.5 nm). This narrow gap is bridged by the

Electrical synapses thus work by allowing ionic current to flow passively through the gap junction pores from one neuron to another. The usual source of this current is the potential difference generated locally by the action potential. Without the need for receptors to recognize chemical messengers, signaling at electrical synapses is more rapid than that which occurs across chemical synapses, the predominant kind of junctions between neurons. The relative speed of electrical synapses also allows for many neurons to fire synchronously.

We model the current for this type of synapse as

where

2.4.3 The maximum conductances

As shown in Equations 6, 7 and 10, we model the current going through the synapse connecting neuron

The simplest idea is to assume that the maximum conductances are independent diffusion processes with mean

where the

One way to alleviate this problem is to modify the dynamics (Equation 11) to a slightly more complicated one whose solutions do not change sign, such as for instance, the Cox-Ingersoll-Ross model

Note that the right-hand side only depends upon the population

This shows that if the initial condition
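A sketch of such sign-preserving Cox-Ingersoll-Ross dynamics for a maximum conductance might look as follows; the parameter names and values are illustrative choices, not those of the article.

```python
import numpy as np

def simulate_cir(J_bar=1.0, theta=0.5, sigma=0.2, J0=1.0,
                 T=200.0, dt=0.01, seed=2):
    """Euler scheme for a Cox-Ingersoll-Ross conductance process
        dJ = theta (J_bar - J) dt + sigma sqrt(J) dW.
    The square root is evaluated on max(J, 0) so that discretization
    noise cannot produce a complex diffusion coefficient."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    J = J0
    Js = np.empty(n_steps)
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        J = J + dt * theta * (J_bar - J) + sigma * np.sqrt(max(J, 0.0)) * dW
        J = max(J, 0.0)  # reflect tiny negative excursions of the scheme
        Js[k] = J
    return Js
```

With these values the Feller condition 2θJ̄ > σ² holds, so the continuous-time process started at a positive value stays strictly positive, and its long-run mean is J̄.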

2.5 Putting everything together

We are ready to write the equations of a network of Hodgkin-Huxley or FitzHugh-Nagumo neurons and study their properties and their limit, if any, when the number of neurons becomes large. The external current for neuron

We will assume that the deterministic part is the same for all neurons in the same population,

We only cover the case of chemical synapses and leave it to the reader to derive the equations in the simpler case of gap junctions.

2.5.1 Network of FitzHugh-Nagumo neurons

We assume that the parameters

which is a set of

2.5.2 Network of Hodgkin-Huxley neurons

We provide a similar description in the case of the Hodgkin-Huxley neurons. We assume that the functions

2.5.3 Partial conclusion

Equations 14 to 17 have a quite similar structure. They are well-posed, i.e. given any initial condition and any time

In order to prepare the grounds for the ‘Mean-field equations for conductance-based models’ section, we explore a bit more the aforementioned common structure. Let us first consider the case of the simple maximum conductance variations for the FitzHugh-Nagumo network. Looking at Equation 14, we define the three-dimensional state vector of neuron

Let us next define

It appears that the intrinsic dynamics of the neuron

We next define the functions

and the function

It appears that the full dynamics of the neuron

Let us now move to the case of the sign-preserving variation of the maximum conductances, still for the FitzHugh-Nagumo neurons. The state of each neuron is now

and the functions

It appears that the intrinsic dynamics of the neuron

Let us finally define the functions

It appears that the full dynamics of the neuron

We let the reader apply the same machinery to the network of Hodgkin-Huxley neurons.

Let us note

(H1)

(H2)

(H3)

(H4)

These assumptions are central to the proofs of Theorems 2 and 4.

They imply the following proposition stating that the system of stochastic differential equations (Equation 19) is well-posed:

**Proposition 1**

The case

We are interested in the behavior of the solutions of these equations as the number of neurons tends to infinity. This problem has been long-standing in neuroscience, arousing the interest of many researchers in different domains. We discuss the different approaches developed in the field in the next subsection.

2.6 Mean-field methods in computational neuroscience: a quick overview

Obtaining the equations of evolution of the effective mean-field from microscopic dynamics is a very complex problem. Many approximate solutions have been provided, mostly based on the statistical physics literature.

Many models describing the emergent behavior arising from the interaction of neurons in large-scale networks have relied on continuum limits ever since the seminal work of Amari, and Wilson and Cowan

A different approach has been to study regimes where the activity is uncorrelated. A number of computational studies on the integrate-and-fire neuron showed that under certain conditions, neurons in large assemblies end up firing asynchronously, producing null correlations

However, increasingly many researchers now believe that the different intrinsic or extrinsic noise sources are part of the neuronal signal, and rather than being a pure disturbing effect related to the intrinsically noisy biological substrate of the neural system, they suggest that noise conveys information that can be an important principle of brain function

In order to study the effect of the stochastic nature of the firing in large networks, many authors strived to introduce randomness in a tractable form. Some of the models proposed in the area are based on the definition of a Markov chain governing the firing dynamics of the neurons in the network, where the transition probability satisfies a differential equation, the

Brunel and Hakim considered a network of integrate-and-fire neurons connected with constant maximum conductances

Let us finish this very short and incomplete survey by mentioning the work of Sompolinsky and colleagues. Assuming a linear intrinsic dynamics for the individual neurons described by a rate model and random centered maximum conductances for the connections, they showed

All these approaches have in common that they are not based on the most widely accepted microscopic dynamics (such as the ones represented by the Hodgkin-Huxley equations or some of their simplifications) and/or involve approximations or moment closures. Our approach is distinct in that it aims at deriving rigorously and without approximations the mean-field equations of populations of neurons whose individual members are described by biologically plausible, if not exact, representations. The price to pay is the complexity of the resulting mean-field equations. The specific study of their solutions is therefore a crucial step, which will be developed in forthcoming papers.

3 Mean-field equations for conductance-based models

In this section, we give a general formulation of the neural network models introduced in the previous section and use it in a probabilistic framework to address the problem of the asymptotic behavior of the networks, as the number of neurons

3.1 Setting of the problem

We recall that the neurons in the network fall into different populations ^{b}.

We use the notations introduced in the ‘Partial conclusion’ section, and the reader should refer to this section to give a concrete meaning to the rather abstract (but required by the mathematics) setting that we now establish.

Each neuron

Moreover, we assume, as it is the case for all the models described in the ‘Spiking conductance-based models’ section, that the solutions of this stochastic differential equation exist for all time.

When included in the network, these processes interact with those of all the other neurons through a set of continuous functions that only depend on the population

As discussed in the ‘Spiking conductance-based models’ section, due to the stochastic nature of ionic currents and the noise effects linked with the discrete nature of charge carriers, the maximum conductances are perturbed dynamically through the

In order to introduce the stochastic current and stochastic maximum conductances, we define two independent sequences of independent

The resulting equation for the

Note that this implies that

These equations are similar to those studied in another context by a number of mathematicians, among whom are McKean, Tanaka and Sznitman (see the ‘Introduction’ section), in that they involve a very large number of particles (here, particles are neurons) in interaction. Motivated by the study of the McKean-Vlasov equations, these authors studied special cases of equations (Equation 21). This theory, referred to as kinetic theory, is chiefly concerned with thermodynamic questions. It establishes that in the limit where the number of particles tends to infinity, provided that the initial state of each particle is drawn independently from the same law, each particle behaves independently and has the same law, which is given by an implicit stochastic equation. These authors also evaluate the fluctuations around this limit under diverse conditions

3.2 Convergence of the network equations to the mean-field equations and properties of those equations

We now show that the same type of phenomena that were predicted for systems of interacting particles happen in networks of neurons. In detail, we prove that in the limit of large populations, the network displays the property of propagation of chaos. This means that any finite number of diffusion processes become independent, and all neurons belonging to a given population

where

In these equations,

where the

with

The

We now spend some time on notations in order to obtain a somewhat more compact form of Equation 22. We define

We obtain the equivalent compact mean-field equation:

Equations 22 and 24 are implicit equations on the law of

We now state the main theoretical results of the paper as two theorems. The first theorem is about the well-posedness of the mean-field equation (Equation 22). The second is about the convergence of the solutions of the network equations to those of the mean-field equations. Since the proof of the second theorem involves similar ideas to those used in the proof of the first, it is given in the Appendix.

**Theorem 2**

Let us denote by

We have introduced in the previous formula the process

The following lemma is useful to prove the theorem:

**Lemma 3**

where

This expression involves the term

It also involves the term

Finally, we obtain

Using Gronwall’s inequality, we deduce the

This lemma puts us in a position to prove the existence and uniqueness theorem:

Using the notations introduced for Equation 25, we decompose the difference

and find an upper bound for

and treat each term separately. The upper bounds for the first two terms are obtained using the Cauchy-Schwarz inequality, those of the last two terms using the Burkholder-Davis-Gundy martingale moment inequality.

The term

Taking the sup of both sides of the last inequality, we obtain

from which follows the fact that

The term

Taking the sup of both sides of the last inequality, we obtain

from which follows the fact that

The term

The term

Putting all of these together, we get:

From the relation

and

and this upper bound is the term of a convergent series. The Borel-Cantelli lemma then implies that for almost any

and hence

It follows that with probability 1, the partial sums:

are uniformly (in

It is easy to show using routine methods that

To complete the proof, we use a standard truncation argument. This method replaces the function

and similarly for

Let us now define the stopping time as

It is easy to show that

implying that the sequence of stopping times

and letting

which, by Gronwall’s theorem, directly implies that

which ends the proof. □

We have proved the well-posedness of the mean-field equations. It remains to show that the solutions to the network equations converge to the solutions of the mean-field equations. This is what is achieved in the next theorem.

**Theorem 4**

• Convergence^{c}:

• Propagation of chaos: ^{d}

This theorem has important implications in neuroscience that we discuss in the ‘Discussion and conclusion’ section. Its proof is given in the Appendix.

4 Numerical simulations

At this point, we have provided a compact description of the activity of the network when the number of neurons tends to infinity. However, the structure of the solutions is difficult to understand from the implicit mean-field equations (Equation 22) and their variants (such as the McKean-Vlasov-Fokker-Planck equations (Equation 24)). In this section, we present some classical ways to numerically approximate the solutions to these equations and give some indications about the rate of convergence and the accuracy of the simulation. These numerical schemes allow us to compute and visualize the solutions. We then compare the results of the two schemes for a network of FitzHugh-Nagumo neurons belonging to a single population and show their good agreement.

The main difficulty one faces when developing numerical schemes for Equations 22 and 24 is that they are non-local. By this, we mean that the McKean-Vlasov equations contain the expectation of a certain function under the law of the solution to the equations (see Equation 22), while the corresponding Fokker-Planck equation contains integrals of the probability density function that solves it (see Equation 24).

4.1 Numerical simulations of the McKean-Vlasov equations

The fact that the McKean-Vlasov equations involve an expectation of a certain function under the law of the solution of the equation makes them particularly hard to simulate directly. One is often reduced to using Monte Carlo simulations to compute this expectation, which amounts to simulating the solution of the network equations themselves (see

In detail, let

where

The mean square error between the solution of the numerical recursion (Equation 30)
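The Monte Carlo strategy can be sketched as follows for a single population of FitzHugh-Nagumo neurons: simulate the N coupled network equations with an Euler-Maruyama scheme and use the empirical distribution of the particles as an approximation of the mean-field law. The simple averaged coupling term used here is a stand-in for the chemical-synapse interaction of the text, and all parameters are illustrative.

```python
import numpy as np

def fhn_network(N=200, T=40.0, dt=0.01, I=0.4, J=0.5, sigma=0.2,
                a=0.7, b=0.8, c=0.08, seed=4):
    """Euler-Maruyama simulation of N all-to-all coupled FitzHugh-Nagumo
    neurons with a 1/N-scaled coupling. The empirical distribution of
    (V_i, w_i) at time T is the Monte Carlo approximation of the law of
    the mean-field limit."""
    rng = np.random.default_rng(seed)
    # i.i.d. initial conditions, as required by the propagation-of-chaos result
    V = rng.normal(0.0, 0.5, N)
    w = rng.normal(0.0, 0.5, N)
    for _ in range(int(T / dt)):
        coupling = J * (V.mean() - V)   # (1/N) * sum_j J (V_j - V_i)
        dW = rng.normal(0.0, np.sqrt(dt), N)
        V, w = (V + dt * (V - V**3 / 3.0 - w + I + coupling) + sigma * dW,
                w + dt * c * (V + a - b * w))
    return V, w
```

Averaging any test function over the N particles then estimates the corresponding expectation under the mean-field law, with the usual Monte Carlo error decreasing like 1/√N on top of the time-discretization error.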

4.2 Numerical simulations of the McKean-Vlasov-Fokker-Planck equation

For solving the McKean-Vlasov-Fokker-Planck equation (Equation 24), we have used the

where Δ

The discretization of the derivatives with respect to the phase space parameters is done through the following fourth-order central difference scheme:

for the first-order derivatives, and

for the second-order derivatives (see

Finally, we have used a Runge-Kutta method of order 2 (RK2) for the numerical integration of the resulting system of ODEs. This method is of the explicit kind for ordinary differential equations, and it is described by the following
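The two fourth-order stencils and an explicit second-order Runge-Kutta step can be sketched as follows on a uniform grid; the helper names are ours, and the midpoint variant of RK2 is one of several equivalent choices.

```python
import numpy as np

def d1_central4(f, h):
    """Fourth-order central difference for f' at the interior points:
    (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 h)."""
    return (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * h)

def d2_central4(f, h):
    """Fourth-order central difference for f'' at the interior points:
    (-f[i+2] + 16 f[i+1] - 30 f[i] + 16 f[i-1] - f[i-2]) / (12 h^2)."""
    return (-f[4:] + 16.0 * f[3:-1] - 30.0 * f[2:-2]
            + 16.0 * f[1:-3] - f[:-4]) / (12.0 * h**2)

def rk2_step(rhs, y, t, dt):
    """Explicit midpoint (second-order Runge-Kutta) step for the system
    of ODEs obtained after discretizing the phase-space derivatives."""
    k1 = rhs(t, y)
    k2 = rhs(t + 0.5 * dt, y + 0.5 * dt * k1)
    return y + dt * k2
```

On a smooth test function, both stencils converge at rate h⁴, which is what makes them attractive for the method of lines: the spatial error can be kept below the time-discretization error with relatively coarse grids.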

4.3 Comparison between the solutions to the network and the mean-field equations

We illustrate these ideas with the example of a network of 100 FitzHugh-Nagumo neurons belonging to one, excitatory, population. We also use chemical synapses with the variation of the weights described by (Equation 11). We choose a finite volume, outside of which we assume that the probability density function (p.d.f.) is zero. We then discretize this volume with

where

In general, the total number of coupled ODEs that we have to solve for the McKean-Vlasov-Fokker-Planck equation with the method of lines is the product

In the simulations shown in the left-hand parts of Figures


**Joint probability distribution**.


**Joint probability distribution**.

The parameters are given in the first column of Table

• Initial condition

• Phase space

• FitzHugh-Nagumo

• Synaptic weights

• Synapse

Results are shown in Figures

• Δ: 0.1 (network)

• Γ = 0.1

• Λ = 0.5

The parameters for the noisy model of maximum conductances of Equation 11 are shown in the fourth column of the table. For these values of

The marginals estimated from the trajectories of the network solutions are then compared to those obtained from the numerical solution of the McKean-Vlasov-Fokker-Planck equation (see Figures

We have used the value


**Projection of 100 trajectories in the** (top left),

Figures

We performed


**Variation of the Kullback-Leibler divergence**. Variation of the Kullback-Leibler divergence between the marginal probability density function
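The Kullback-Leibler divergence between two estimated marginals can be computed from histograms over a common set of bins, e.g. as in the following sketch; the Gaussian samples here merely stand in for the network-based and Fokker-Planck-based marginals, and the bin layout is an arbitrary illustrative choice.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) between two
    histograms over the same bins (both are renormalized first)."""
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

# Hypothetical usage: compare the marginal estimated from network
# trajectories with the one from the Fokker-Planck solver.
rng = np.random.default_rng(5)
bins = np.linspace(-3.0, 3.0, 61)
p_net, _ = np.histogram(rng.normal(0.0, 1.0, 10000), bins=bins)
q_fp, _ = np.histogram(rng.normal(0.0, 1.0, 10000), bins=bins)
d = kl_divergence(p_net.astype(float), q_fp.astype(float))
```

For two samples of the same underlying law, this estimate is close to zero and shrinks as the sample size grows, which is what makes it a convenient scalar summary of the agreement between the two methods.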

4.4 Numerical simulations with GPUs

Unfortunately, the algorithm for solving the McKean-Vlasov-Fokker-Planck equation described in the previous section is computationally very expensive. In fact, when the number of points in the discretized grid of the

We have changed the Runge-Kutta scheme of order 2 used for the simulations shown in the ‘Numerical simulations of the McKean-Vlasov-Fokker-Planck equation’ section and adopted a more accurate Runge-Kutta scheme of order 4. This was done because with the more powerful machine, each computation of the right-hand side of the equation is faster, making it possible to use four calls per time step instead of two in the previous method. Hence, the parallel hardware allowed us to use a more accurate method.

One of the purposes of the numerical study is to get a feeling for how the different parameters, in particular those related to the sources of noise, influence the solutions of the McKean-Vlasov-Fokker-Planck equation. This is meant to prepare the ground for the study of the bifurcation of these solutions with respect to these parameters, as was done in ^{e}

The simulations were run with the

• Initial condition

• Phase space

• Stochastic FN neuron

• Synaptic weights

The simulations are shown in Figures


Four snapshots of the solution are shown in Figure


**Marginals of the solutions to the McKean-Vlasov-Fokker-Planck equation**. Marginals with respect to the


**Marginals of the solutions to the McKean-Vlasov-Fokker-Planck equation**. Marginals with respect to the

The results shown in Figures

The reader interested in more details in the numerical implementations and in the gains that can be achieved by the use of GPUs can consult

In Figure


**Marginals of the solutions to the McKean-Vlasov-Fokker-Planck equation at convergence**. Marginals with respect to the

5 Discussion and conclusion

In this article, we addressed the problem of the limit in law of networks of biologically inspired neurons as the number of neurons tends to infinity. We emphasized the necessity of dealing with biologically inspired models and discussed at length the type of models relevant to this study. We chose to address the case of conductance-based network models, which are a relevant description of neuronal activity. Mathematical results on the analysis of these interacting diffusion processes resulted in the replacement of a set of

Besides the fact that we explicitly model real spiking neurons, the mathematical part of our work differs from that of previous authors such as McKean, Tanaka and Sznitman (see the ‘Introduction’ section) because we are considering several populations with the effect that the analysis is significantly more complicated. Our hypotheses are also more general, e.g. the drift and diffusion functions are nontrivial and satisfy the general condition (H4) which is more general than the usual linear growth condition. Also, they are only assumed locally (and not globally) Lipschitz continuous to be able to deal, for example, with the FitzHugh-Nagumo model. A locally Lipschitz continuous case was recently addressed in a different context for a model of swarming in

Proofs of our results, for somewhat stronger hypotheses than ours and in special cases, are scattered in the literature, as briefly reviewed in the ‘Introduction’ and ‘Setting of the problem’ sections. Our main contribution is that we provide a complete, self-contained proof in a fairly general case by gathering all the ingredients that are required for our neuroscience applications. In particular, the case of the FitzHugh-Nagumo model, where the drift function does not satisfy the linear growth condition, involves a generalization of previous works using the more general growth condition (H4).

The simulation of these equations can itself be very costly. We hence addressed, in the ‘Numerical simulations’ section, numerical methods to compute the solutions of these equations, either in the probabilistic framework, using the convergence result of the network equations to the mean-field limit together with standard integration methods for differential equations, or in the Fokker-Planck framework. The simulations performed for different values of the external input current parameter and of one of the parameters controlling the noise allowed us to show that the spatio-temporal shape of the probability density function describing the solution of the McKean-Vlasov-Fokker-Planck equation is sensitive to the variations of these parameters, as shown in Figures

The mean-field description also has deep theoretical implications in neuroscience. Indeed, it points towards the fact that neurons encode their responses to stimuli through probability distributions. This type of coding was suggested by several authors ^{f}

**Variations over time of the cross-correlation of** **variables of several FitzHugh-Nagumo neurons in a network**.

The present study develops theoretical arguments to derive the mean-field equations resulting from the activity of large neuron ensembles. However, the rigorous and formal approach developed here does not allow direct characterization of brain states. The paper, however, opens the way to rigorous analysis of the dynamics of large neuron ensembles through derivations of different quantities that may be relevant. A first approach could be to derive the equations of the successive moments of the solutions. Truncating this expansion would yield systems of ordinary differential equations that can give approximate information on the solution. However, the choice of the number of moments taken into account is still an open question that can raise several deep questions

Electronic Supplementary Material

Four additional files showing the time evolution of the solutions are available with the online version of the article.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JB and DF developed the code for solving the stochastic differential equations, the McKean-Vlasov equations and the McKean-Vlasov-Fokker-Planck equations. They ran the numerical experiments and generated all the figures. DF derived some of the McKean-Vlasov equations in a heuristic fashion. OF and JT developed the models, proved the theorems and wrote the paper. All authors read and approved the final manuscript.

Acknowledgements

This work was partially supported by the ERC grant #227747 NerVi, the FACETS-ITN Marie-Curie Initial Training Network #237955 and the IP project BrainScaleS #269921.

^{a}More precisely, as shown in

^{b}As we will see in the proof, most properties are valid as soon as

^{c}The type of convergence is specified in the proof given in the Appendix.

^{d}The notation

^{e}We have included a small noise (controlled by the parameter

^{f}Note that we did not estimate the correlation within larger networks since, as predicted by Theorem 4, it will be smaller and smaller, requiring an increasingly large number of Monte Carlo simulations.

^{g}Note that