# Effective generators for superpositions of non-Poissonian spike trains

Filed under:
Computational neuroscience

Moritz Deger (Bernstein Center Freiburg, University of Freiburg), Moritz Helias (Inst of Neuroscience and Medicine (INM-6), Research Center Juelich), Stefan Rotter (Bernstein Center Freiburg & Faculty of Biology, University of Freiburg)

Networks of spiking neurons are widely studied in computational neuroscience. Simulations typically represent only a part of the brain in a network model. To compensate for the missing excitatory and inhibitory inputs from neurons external to the represented part, randomly generated spike trains are often injected into the simulated neurons.

If all external spike trains are Poisson processes (PPs), their superposition is again a PP, with a rate equal to the sum of the individual rates. To represent the sum of all external inputs, it is therefore sufficient to generate a single spike train with a correspondingly higher rate. In most areas of the neocortex, however, neural spike trains are either more regular or more irregular than a PP [1]. In this case, the superposition (pooled input) is no longer a PP [2]. Indeed, our analyses of the statistical properties of superpositions of non-Poissonian (NPP) processes, and of the dynamics of leaky integrate-and-fire neurons driven by such inputs, showed that NPP superpositions differ profoundly from the PP, and that neurons are sensitive to these differences [2].
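The first claim can be checked numerically: pooling many independent Poisson trains yields a train whose inter-spike intervals (ISIs) match a single PP with the summed rate, i.e. mean ISI 1/(N·rate) and coefficient of variation near 1. The following minimal sketch (all names and parameter values are illustrative) demonstrates this with the Python standard library:

```python
import random
from statistics import mean, stdev

random.seed(42)

def poisson_train(rate, t_max):
    """Homogeneous Poisson spike train on [0, t_max)."""
    t, spikes = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t >= t_max:
            return spikes
        spikes.append(t)

N, rate, t_max = 100, 5.0, 200.0

# Pool N independent PP realizations into one sorted spike train.
pooled = sorted(s for _ in range(N) for s in poisson_train(rate, t_max))
isis = [b - a for a, b in zip(pooled, pooled[1:])]

# The pooled train behaves like a single PP with rate N*rate:
print(mean(isis))                 # close to 1 / (N * rate) = 0.002
print(stdev(isis) / mean(isis))   # close to 1 (exponential ISIs)
```

For non-Poissonian component processes, by contrast, the pooled train retains statistical structure (e.g. serial interval correlations) that a rate-matched PP lacks, which is the effect analyzed in [2].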

Suppose we can model the external input as N independent and identically distributed renewal processes. To generate the superposition, the naive approach is to generate N realizations of the renewal process and then collect all spikes into a pooled spike train. Since this has to be repeated for each of M simulated neurons, the procedure incurs computational costs proportional to M*N.
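The naive approach can be sketched as follows, here for gamma renewal processes (function names and parameters are illustrative, not part of the original work). Note that the inner loop runs over all N component processes, and would have to be repeated per target neuron:

```python
import random

random.seed(1)

def gamma_renewal_train(shape, rate, t_max):
    """Renewal process with gamma-distributed ISIs (mean ISI = 1/rate)."""
    t, spikes = 0.0, []
    while True:
        # gammavariate(alpha, beta) has mean alpha*beta = 1/rate here.
        t += random.gammavariate(shape, 1.0 / (shape * rate))
        if t >= t_max:
            return spikes
        spikes.append(t)

def naive_superposition(n, shape, rate, t_max):
    """Naive pooling: generate all n realizations, then merge and sort.

    Cost is proportional to n, and this must be redone for every one
    of the M simulated neurons, giving the O(M*n) total cost.
    """
    return sorted(s for _ in range(n) for s in
                  gamma_renewal_train(shape, rate, t_max))

pooled = naive_superposition(1000, 4, 5.0, 10.0)
print(len(pooled))  # roughly n * rate * t_max = 50000 spikes
```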

Depending on the details of the modeled system, N can be on the order of 1000. In contrast, in the case of external PP inputs, it suffices to generate a single PP. Using NPP external inputs can thus slow down a simulation by a factor of N, which is why PPs are commonly used.

Here, we present two optimised algorithms that generate superpositions of NPP spike trains directly [2]: one for gamma processes with integer shape parameter, and one for PPs with dead time. Both generators have a computational cost that is independent of N. They exploit a population description of the superimposed processes, require time-discrete simulation, and have been implemented in NEST [3].
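To illustrate the population idea for the dead-time case, the following sketch tracks only how many of the N processes currently sit in each dead-time bin, rather than simulating each process individually. This is a simplified reconstruction under stated assumptions, not the NEST implementation; all names and the discretization scheme are illustrative:

```python
import math
import random
from collections import deque

random.seed(7)

def superposed_pp_dead_time(n, rate, dead_time, h, t_max):
    """Spike counts per time step for n pooled PPs with dead time.

    Population state: a count of processes per remaining-dead-time bin.
    Memory and (given a constant-time binomial sampler) per-step cost
    are independent of n.
    """
    r_bins = max(1, round(dead_time / h))  # dead time in time steps
    refractory = deque([0] * r_bins)       # processes per dead-time bin
    n_free = n                             # processes eligible to fire
    p = 1.0 - math.exp(-rate * h)          # firing prob. per step
    counts = []
    for _ in range(round(t_max / h)):
        n_free += refractory.popleft()     # dead time expired -> free
        # Binomial draw written naively here for clarity; a
        # constant-time sampler makes the step cost independent of n.
        k = sum(random.random() < p for _ in range(n_free))
        n_free -= k
        refractory.append(k)               # fired -> enter dead time
        counts.append(k)
    return counts

counts = superposed_pp_dead_time(n=1000, rate=10.0, dead_time=2e-3,
                                 h=1e-3, t_max=5.0)
# Effective rate per process is rate / (1 + rate * dead_time), so the
# total count is roughly 1000 * 5 * 10 / 1.02, i.e. about 49,000.
print(sum(counts))
```

The gamma case admits a similar population description, since a gamma process with integer shape k fires on every k-th event of an underlying PP; there the state is the number of processes in each of the k phases.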

[1] Shinomoto et al. (2003), Neural Comput, http://dx.doi.org/10.1162/089976603322518759

[2] Deger et al. (2011), J Comput Neurosci, http://dx.doi.org/10.1007/s10827-011-0362-8

[3] Gewaltig & Diesmann (2007), Scholarpedia, http://dx.doi.org/10.4249/scholarpedia.1430

**Preferred presentation format**: Poster

**Topic**: Computational neuroscience