Summer term 2017
Aspects of randomness in neural activity and information processing can be successfully analyzed in terms of stochastic models. This course gives an introduction to models and measures of neural noise (or 'variability', as it is more often called) and should enable students to follow the current literature on the subject on their own. To this end, some key concepts from nonlinear dynamics, stochastic processes, and information theory are outlined. A number of basic problems (see below) are then addressed; the main emphasis is on analytically tractable models, but simulation techniques are explained as well. As an outlook, some more involved problems are sketched at the end of the course: interspike-interval (ISI) statistics under correlated ('colored') noise, with subthreshold oscillations, or with adaptation, and stimulus-induced correlations.
Contents include:
- Key concepts from nonlinear dynamics (bifurcations, fixed points, manifolds, limit cycles), stochastic processes (Langevin and Fokker-Planck equations, master equation, linear response theory), information theory (mutual information and its lower and upper bounds), and point processes (Poisson process; renewal vs. nonrenewal point processes)
- Neural noise sources and how they enter different neuron models; the diffusion approximation of synaptic input or channel fluctuations by Gaussian noise
- Measures of spike-train and interval variability and their interrelation
- The Poisson spike train: entropy and information content
- One-dimensional stochastic integrate-and-fire (IF) neurons: spontaneous activity, response to weak stimuli, and information transfer
- Different forms of stochastic resonance in single neurons and neuronal populations
- Multidimensional IF models: subthreshold resonances, synaptic filtering, and spike-frequency adaptation
- Effect of nonrenewal behavior of the spontaneous activity on information transfer
- Outlook: stimulus-driven correlations; networks of stochastic neurons
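The flavor of the simulation techniques covered in the course can be conveyed by a minimal sketch: a leaky integrate-and-fire neuron driven by Gaussian white noise, integrated with the Euler-Maruyama scheme, from which the coefficient of variation (CV) of the interspike intervals is estimated. All parameter values (mu, D, dt, thresholds) are illustrative choices, not taken from the course material.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

dt = 1e-3        # time step (in units of the membrane time constant)
T = 200.0        # total simulation time
mu = 1.2         # constant input (suprathreshold, since mu > v_th)
D = 0.1          # noise intensity
v_reset, v_th = 0.0, 1.0

v = v_reset
spike_times = []
t = 0.0
for _ in range(int(T / dt)):
    # Langevin equation of the leaky IF model:
    # dv = (-v + mu) dt + sqrt(2 D) dW, discretized with Euler-Maruyama
    v += (-v + mu) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    t += dt
    if v >= v_th:            # threshold crossing: register a spike and reset
        spike_times.append(t)
        v = v_reset

isi = np.diff(spike_times)             # interspike intervals
cv = isi.std() / isi.mean()            # CV: a basic measure of ISI variability
print(f"{len(spike_times)} spikes, CV = {cv:.2f}")
```

A perfectly regular spike train has CV = 0 and a Poisson process has CV = 1, so the printed value indicates where this noisy IF neuron sits between the two limits.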
last updated 01/27/2017