Projects and Papers

 

Dynamics of neural circuits

This project, with current students Søren Enemark, Lana Bosanac (Copenhagen University), Anna-Gabriella Tempesta (KVL), Masako Kusano (Cambridge University) and Pär Björklund (Stockholm University), and former students Hans Christian Mulvad, Mandana Ahmadi, Alex Lerchner (now at NIMH), Gustaf Sterner (now at the University of Rochester), and Pauline Ruffiot (Univ Grenoble), deals with the dynamical properties of local networks (corresponding to cortical columns). 

 

We model the generic dynamics of networks of coupled excitatory and inhibitory spiking neurons.  We are particularly interested in states with irregular firing as found widely experimentally in neocortical networks.  We are also studying the effects of synaptic adaptation (depression and facilitation) and learning in these circuits.  We make extensive use of mean field theory, which is a good approximation for neocortical circuits because of their high connectivity.
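The flavor of a mean-field calculation can be conveyed with a toy self-consistency computation: the stationary rates of one excitatory and one inhibitory population are the fixed point of coupled rate equations. The couplings, external drive, and threshold-linear transfer function below are illustrative placeholders, not the spiking mean-field theory of the papers listed further down.

```python
import numpy as np

# Toy mean-field fixed point for one excitatory (E) and one
# inhibitory (I) population.  All numbers are illustrative.
def transfer(x):
    return np.maximum(x, 0.0)          # threshold-linear transfer function

# J[a, b] = effect of population b (E, I) on population a
J = np.array([[1.0, -2.0],
              [1.5, -1.8]])
ext = np.array([1.2, 1.0])             # external drive to E and I

rates = np.zeros(2)
for _ in range(2000):                  # damped fixed-point iteration
    rates = 0.9 * rates + 0.1 * transfer(J @ rates + ext)

print(rates)                           # self-consistent (E, I) rates
```

At the fixed point the rates reproduce themselves under the transfer function, which is the defining self-consistency condition of any mean-field description.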

            In our first investigations we used current-based synaptic models, i.e., we modeled the effect of a presynaptic spike simply as the injection of a current pulse of a magnitude fixed for a given synapse.  More recently we have used a more biologically realistic model in which the presynaptic spike causes a short-duration increase in the conductance of the postsynaptic cell’s membrane.  We have also extended our models to describe the network dynamics underlying orientation selectivity in the primary visual cortex.  We have explored in particular the sensitivity of the irregularity of neuronal firing to synaptic, neuronal and network properties (such as the mean synaptic time, the firing threshold and the connectivity).  In current work we are using more detailed (Hodgkin-Huxley-like) neurons in place of integrate-and-fire ones. 
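The conductance-based synapse can be seen in a minimal single-neuron sketch: each presynaptic spike steps up a synaptic conductance that decays exponentially and pulls the membrane toward the synaptic reversal potential. All parameters here are illustrative, and spike threshold/reset are omitted, so this shows only the subthreshold response.

```python
import numpy as np

# Leaky membrane with a conductance-based synapse (illustrative
# parameters): a presynaptic spike transiently increases the synaptic
# conductance g, which then decays with time constant tau_s.
dt, tau_m, tau_s = 0.1, 20.0, 5.0              # ms
V_rest, V_th, E_syn = -70.0, -54.0, 0.0        # mV (threshold not reached here)
g_jump = 0.05                                  # conductance step per spike (rel. to leak)

pre_spikes = {500, 510, 520}                   # presynaptic spike arrival steps
V, g = V_rest, 0.0
V_trace = []
for step in range(2000):
    if step in pre_spikes:
        g += g_jump                            # spike opens synaptic channels
    V += dt * (-(V - V_rest) - g * (V - E_syn)) / tau_m
    g += dt * (-g / tau_s)                     # conductance decays between spikes
    V_trace.append(V)

print(max(V_trace))   # peak depolarization, a few mV above rest
```

In a current-based model the same spikes would inject a fixed current pulse regardless of V; here the effective strength shrinks as V approaches E_syn, which is what makes conductance-based inputs state-dependent.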

            The left-hand figure below (from Barry Richmond’s lab) shows examples of experimentally observed cortical neuronal firing irregularity.  The variance, over many trials, of the spike count in a fixed time interval is plotted against the mean spike count.  If the neuronal firing were a Poisson process, the mean count and variance would be equal.  The figure shows that the firing is often “even more irregular than Poisson”, i.e., the variance is greater than the mean. 

            The right-hand figure shows an example of what we find in one of our mean field model calculations.  Further investigation shows that getting results like the experiments depends on the size of the synaptic strengths relative to the firing threshold.
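The variance-to-mean comparison can be illustrated with synthetic spike counts: a Poisson process gives a Fano factor (variance/mean) of 1, while trial-to-trial rate fluctuations, modeled here as a gamma-distributed rate, push the count variance above the mean. The numbers are made up for illustration.

```python
import numpy as np

# Fano factor (spike-count variance / mean) for synthetic counts.
# Poisson counts give a Fano factor of 1; letting the rate itself
# fluctuate from trial to trial inflates the variance above the mean.
rng = np.random.default_rng(0)
n_trials = 5000

poisson_counts = rng.poisson(lam=8.0, size=n_trials)

# doubly stochastic counts: gamma-distributed rate with mean 8
rates_per_trial = rng.gamma(shape=4.0, scale=2.0, size=n_trials)
modulated_counts = rng.poisson(lam=rates_per_trial)

def fano(counts):
    return counts.var() / counts.mean()

print(fano(poisson_counts))     # close to 1
print(fano(modulated_counts))   # well above 1: "more irregular than Poisson"
```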

[Figures: left, spike-count variance vs. mean from experiment; right, the corresponding mean field model result.]
            We have also studied the dynamical consequences of possible hidden, layer-like structure in the connectivity.  It has been conjectured that such structure might permit the propagation of precisely-timed spiking sequences (“synfire chains”).  We find that the intrinsic noise of background neuronal activity severely limits the dynamical stability of such activity patterns, but that synaptic depression makes stable propagation possible in a sizeable part of the system’s parameter space.  The figure below shows an example of a successfully propagating synfire chain, as found in the simulations done by Søren Enemark in his thesis.  The chain consists of six pools of 200 neurons each, with feed-forward connections from pool to pool (including last-to-first).  The traces are of the membrane potentials of six neurons, one in each of the pools.
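A caricature of the pool-to-pool propagation, with binary neurons in place of the integrate-and-fire neurons of the actual simulations and entirely made-up parameters, looks like this:

```python
import numpy as np

# Schematic synfire-chain propagation: six pools of 200 binary
# neurons with feed-forward coupling (including last-to-first).  Each
# neuron fires if the fraction of active neurons in the previous
# pool, plus noise, exceeds a threshold.  Parameters are illustrative.
rng = np.random.default_rng(1)
n_pools, n_per_pool, n_steps = 6, 200, 12
w, theta, noise = 1.0, 0.5, 0.15

active = np.zeros((n_steps, n_pools))   # fraction of firing neurons
active[0, 0] = 1.0                      # synchronous volley in pool 0
for t in range(1, n_steps):
    for k in range(n_pools):
        drive = w * active[t - 1, (k - 1) % n_pools]   # pool k-1 drives pool k
        inputs = drive + noise * rng.standard_normal(n_per_pool)
        active[t, k] = np.mean(inputs > theta)

print(active.round(2))   # the volley hops from pool to pool each step
```

With weaker coupling or stronger background noise, the volley dies out after a few hops, which is the stability problem described above.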

[Figure: membrane potentials of six neurons, one per pool, during propagation of the synfire chain.]
Some references:

 

For an introduction to mean-field modeling of cortical networks, see our tutorial article, Mean Field Methods for Cortical Network Dynamics (Lecture Notes, Erice, 2003)

 

Other papers on mean field theory for cortical networks:

 

S Solinas and J Hertz, Stability of asynchronous firing states in networks with synaptic adaptation, Neurocomputing 38-40 915-920 (2001)

 

J Hertz, K Nilsen and B J Richmond, Anomalous response variability in a balanced cortical network model, Neurocomputing 52-54 787-792 (2003) (cond-mat/0202145) [1-population model]

 

A Lerchner, C Ursta, J Hertz, M Ahmadi and P Ruffiot, Response variability in cortical networks, q-bio.NC/0402022

             

A Lerchner, M Ahmadi and J Hertz, High-conductance states in a mean-field cortical model, Neurocomputing 58-60 935-940 (2004) (q-bio.NC/0402026) [conductance-based synapses]

 

A Lerchner, G Sterner, J Hertz and M Ahmadi, Mean field theory for a balanced hypercolumn model of orientation selectivity, q-bio.NC/0403037 [orientation hypercolumn model]

 

 

About synaptic depression and synfire chains: Søren Enemark’s thesis (Niels Bohr Institut, 2004)

 

 

------------------------------------------------------------------------------------------------------------------------------------------------

 

 

Network dynamics of gene regulation

 

In the fundamental process of biology, genes are transcribed to mRNA, which is then translated to make proteins.  The proteins can influence the transcription or translation of other genes, either directly as transcription factors or indirectly, through intracellular or extracellular signals.  Thus, the entire genome can be viewed as a network of interacting genes, and the different possible stable cycles of this network can be identified with different cell types.

            The technology of modern molecular biology now permits the parallel measurement of the expression of thousands of different genes, providing a partial record of the dynamics of the gene network in, for example, the development of an embryo.  The motivation for making such measurements is to try to understand the interactions between the different genes, so that one can predict the effects of genetic manipulation on the cell or organism.  We have successfully modeled the networks governing development of the spinal cord and hippocampus in rats at a coarse-grained level, identifying the important families of genes involved and their main interactions.

            Because there are so many more free parameters in these network models than there are measurements (even after the parameter space has been drastically compressed with the help of data clustering and the resulting identification of the major collective variables), and because the data are generally noisy, it is not in general possible to determine the parameters exactly.  Thus the result of the model-fitting procedure is, formally, an estimated joint distribution over all the parameters.  By looking at the single-parameter marginal distributions (left-hand figure below), we can see that some of the parameters are rather well determined, while others are not.  Further information is contained in the joint distributions of particular pairs of parameters (marginalized with respect to all other parameters), an example of which is shown in the right-hand figure.
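The point about ill-determined parameters can be illustrated on a deliberately tiny example: fit a one-link gene model to noisy synthetic data by accept/reject sampling and compare the spreads of the resulting marginals. The model form, priors, and tolerance below are illustrative choices, not the procedure of the papers listed below.

```python
import numpy as np

# Tiny "reverse engineering" example with synthetic data.  Gene 1
# (held at level 1) activates gene 2 through a sigmoid with weight
# w12; gene 2 decays at rate `decay`.  All values are illustrative.
rng = np.random.default_rng(2)

def simulate(w12, decay, x1=1.0, steps=40, dt=0.1):
    x2, traj = 0.0, []
    for _ in range(steps):
        x2 += dt * (1.0 / (1.0 + np.exp(-w12 * x1)) - decay * x2)
        traj.append(x2)
    return np.array(traj)

data = simulate(2.0, 0.5) + 0.05 * rng.standard_normal(40)  # noisy "measurements"

# Crude accept/reject sampling of the parameter posterior: keep
# parameter draws whose trajectory stays close to the data.
accepted = []
for _ in range(10000):
    w12, decay = rng.uniform(0.0, 6.0), rng.uniform(0.1, 2.0)
    if np.mean((simulate(w12, decay) - data) ** 2) < 0.01:
        accepted.append((w12, decay))
accepted = np.array(accepted)

# The sigmoid saturates, so w12 is poorly pinned down while decay is
# comparatively well determined: its marginal is much narrower.
print(accepted.std(axis=0))
```

Histograms of the two columns of `accepted` give exactly the kind of marginal distributions described above, one broad and one narrow.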

[Figures: left, single-parameter marginal distributions; right, a joint distribution of a pair of parameters.]
            In current work we are focusing on the activity of transcription factors, which can be estimated from the expression levels of the genes they are known to regulate.   These activities are basic dynamical variables in the genetic network of an organism.  Based on this idea, we are trying to characterize the dynamics of the regulatory networks in yeast and, eventually, in higher organisms.
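The idea of estimating transcription-factor activities from their targets can be sketched as a linear inverse problem: if expression is approximately a known connection matrix times the hidden activities, least squares recovers the activity time courses. The connection matrix, regulatory strengths (all set to 1), and data below are entirely synthetic.

```python
import numpy as np

# Synthetic sketch: target-gene expression ≈ (known regulatory
# connections) x (hidden TF activities).  Sizes, sparsity, and the
# noise level are arbitrary illustrative choices.
rng = np.random.default_rng(3)
n_genes, n_tfs, n_times = 30, 3, 10

C = (rng.random((n_genes, n_tfs)) < 0.3).astype(float)   # who regulates whom
true_act = rng.standard_normal((n_tfs, n_times))         # hidden TF activities
expr = C @ true_act + 0.1 * rng.standard_normal((n_genes, n_times))

# Each TF's activity is averaged over its ~9 targets, so the noise
# largely cancels in the least-squares estimate.
est_act, *_ = np.linalg.lstsq(C, expr, rcond=None)
corr = np.corrcoef(est_act.ravel(), true_act.ravel())[0, 1]
print(corr)   # should be close to 1
```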

            This work is in collaboration with Mattias Wahde (former Nordita fellow, now at Chalmers University) and Michael Hörnqvist (formerly at Nordita, now at Linköping University, Norrköping).

 

 

 

Some papers:

 

M Wahde and J Hertz, Coarse-grained reverse engineering of genetic regulatory networks, Biosystems 55 129-139 (2000)

 

M Wahde and J Hertz, Modeling genetic regulatory networks in neural development, J Comput Biol 8 (4) 429-42 (2001)

 

M Wahde, J Hertz and M L Andersson, Reverse engineering of sparsely-connected genetic regulatory networks, (Proc 2nd Workshop on Computation of Biochemical Pathways and Genetic Networks, Heidelberg, June 2001)

 

M Hörnqvist, J Hertz and M Wahde, Effective dimensionality of large-scale expression data using principal component analysis, Biosystems 65 147-156 (2002)

 

M Wahde and J Hertz, Genetic network inference using joint weight distributions (draft)

 

 

-------------------------------------------------------------------------------------------------------------------------------------------------

 

 

Glassy Dynamics in Heteropolymer Models for Proteins

 

Random heteropolymers have been proposed as a generic model to describe protein folding and structure.  In our work we are using methods from spin-glass dynamics, both analytic and numerical, to study the non-equilibrium properties of heteropolymers.  They are expected to exhibit aging properties very similar to those found in spin glasses, including dependence on the time since the sample was prepared and breakdown of the fluctuation-response relation.  The folding of a protein involves competition between glassiness, due to the inhomogeneity of the chain, and the energetic favorability of its native state.  We have investigated, in mean field theory, this interplay in a model with a tunable tendency to form a particular native state.  This model has an interesting phase diagram (below) that appears to shed light on the conditions under which real protein dynamics may exhibit glassy features.  At a strong enough tendency toward the native state (b0), the native state is stable, and, at high enough temperature, folding is fast.  However, lowering the temperature produces a rough free energy landscape and glassy phases which can trap the system and hinder folding.  At still lower temperature, the free energy landscape around the native state itself becomes rough and all dynamical properties are glassy.  This seems to be an inescapable consequence of the inhomogeneity of the system.
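Aging of the kind mentioned above can be illustrated with Bouchaud's trap model, a standard toy model of glassy dynamics (not the heteropolymer model itself): with heavy-tailed trap lifetimes, the probability that the system has not moved between times t_w and t_w + t grows with the waiting time t_w, so older systems respond more slowly.

```python
import numpy as np

# Bouchaud trap model: lifetimes are heavy-tailed (tail exponent
# alpha < 1), so ever-deeper traps dominate and the dynamics never
# equilibrates.  We estimate the probability of staying in the same
# trap over a window [t_w, t_w + t] as a function of the age t_w.
rng = np.random.default_rng(4)

def stay_probability(t_w, t, n_runs=4000, alpha=0.5):
    stuck = 0
    for _ in range(n_runs):
        clock = 0.0
        while True:
            tau = rng.pareto(alpha) + 1.0        # heavy-tailed trap lifetime
            if clock <= t_w < clock + tau:       # trap occupied at time t_w
                stuck += clock + tau >= t_w + t  # ...and still occupied at t_w + t
                break
            clock += tau
    return stuck / n_runs

for t_w in (10.0, 100.0, 1000.0):
    print(t_w, stay_probability(t_w, t=10.0))    # grows with the age t_w
```

The dependence of a two-time quantity on t_w, and not just on the interval t, is exactly the signature of aging referred to above.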

[Figure: phase diagram of the heteropolymer model.]

Papers:

 

Z Konkoli, J Hertz and S Franz, Random Heteropolymer Dynamics, Phys Rev E 64 051910 (2001)

 

Z Konkoli and J Hertz, Embedding a native state into a random heteropolymer model: the dynamic approach, Phys Rev E 67 051915 (2003)

 


Last updated 4 October 2004 by hertz@nordita.dk