My research is in the fields of theoretical neuroscience and biophysics, with a focus on learning and memory. I study how the structure of biological processes relates to how well they perform their functions, particularly through models of the processes that implement synaptic plasticity.
Rather than focusing on specific models, my preferred approach is to analyse a broad space of models. In this way we can find trade-offs between different aspects of performance, and the properties of models that enable those trade-offs to be managed. Through this process we can understand how these structures can be adapted for different purposes, both in the brain and in artificial neural networks. I believe that this approach is necessary for understanding the mechanisms used in the actual biological processes, and how they may be affected by therapeutic interventions.
Complex synaptic plasticity
Computational neuroscientists often describe synapses by a single number: the synaptic weight. Learning is then implemented by shifting this number. In contrast, biological synapses are complex systems with their own internal dynamics. This structure has profound consequences for their ability to learn and store memories, as shown by Amit and Fusi (1992 and 1994).
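To see why, consider the simplest case: binary synapses that switch their weight with probability q at each candidate plasticity event. Schematically, and with order-one factors dropped (this is my summary of the standard result, not a formula quoted from those papers), the signal-to-noise ratio of a memory stored across N such synapses decays with the number t of subsequently stored memories as

\[ \mathrm{SNR}(t) \;\sim\; q\,\sqrt{N}\,(1-q)^{t} \;\approx\; q\,\sqrt{N}\,e^{-q t}, \]

so a large q gives strong initial encoding but rapid forgetting, while a small q gives slow forgetting but weak encoding. A single scalar weight cannot escape this rigid trade-off; internal synaptic states can soften it.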
In one project, we analyse the memory timescales of an entire space of models of synaptic plasticity, including the cascade model (Fusi, Drew & Abbott, 2005) and others of that class. We find trade-offs between rapid learning and slow forgetting, and identify the models that navigate these trade-offs optimally. This yields predictions for the different synaptic structures found in different brain regions, which store memories over different timescales.
In another project, we investigate genetic and pharmacological interventions intended to speed up learning. Experiments with enhanced plasticity have shown mixed results: in some cases learning is enhanced, in others it is impaired. We uncover the rules that determine whether such interventions succeed or fail. The outcome depends on both the structure of the synapse and prior neural activity, which explains the mixed experimental results.
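As a concrete illustration of the kind of quantity this analysis tracks, the sketch below simulates a pool of multi-state synapses, stores one memory as a pattern of potentiation and depression events, and measures how the overlap between the synaptic weights and that pattern decays under ongoing plasticity. The linear-chain transition rule, the number of internal states and the binary weight readout are assumptions of this toy example, not one of the models analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 5000, 8, 200  # synapses, internal states per synapse, plasticity events to simulate

def apply_plasticity(states, potentiate):
    """Apply one candidate event per synapse; `potentiate` is a boolean array."""
    up = np.minimum(states + 1, K - 1)    # step toward the 'strong' end of the chain
    down = np.maximum(states - 1, 0)      # step toward the 'weak' end
    return np.where(potentiate, up, down)

def weights(states):
    # Binary weight read out from the internal state (an assumption of this sketch)
    return np.where(states >= K // 2, 1.0, -1.0)

# Equilibrate the pool under ongoing random plasticity
states = rng.integers(0, K, size=N)
for _ in range(10 * K):
    states = apply_plasticity(states, rng.random(N) < 0.5)

# Store one memory: a random pattern of desired potentiations and depressions
pattern = np.where(rng.random(N) < 0.5, 1.0, -1.0)
states = apply_plasticity(states, pattern > 0)

# Memory signal: overlap of the current weights with the stored pattern,
# decaying as later (random) plasticity events overwrite it
signal = [np.mean(weights(states) * pattern)]
for _ in range(T):
    states = apply_plasticity(states, rng.random(N) < 0.5)
    signal.append(np.mean(weights(states) * pattern))

print(f"memory signal: {signal[0]:.3f} just after storage, {signal[-1]:.3f} after {T} further events")
```

Changing the number of internal states or the transition rule changes how quickly this overlap decays; characterising that dependence across the whole space of models is what yields the trade-offs and predictions described above.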
Energy efficiency of biological processes
The brain shows remarkable energy efficiency, consuming only about 20 watts compared to the megawatts consumed by supercomputers. This suggests that minimizing energy use may have been an important factor in the evolution of brains. Efficient coding theory has provided valuable insight into sensory systems, guided by the limits set by information theory. To gain similar insight into energy consumption, we first need to know what the corresponding limits are.
I have been studying the limits on the energy efficiency of communication, using non-equilibrium thermodynamics. We found a trade-off between the energy required to send a signal, the precision with which the signal can be decoded, and the rate of transmission. In keeping with my general approach, this trade-off applies to an entire space of models. By understanding which principles allow a system to approach these limits, we expect to gain insight into how biological systems achieve such efficiency, and thus to identify an important design principle. These considerations should also be useful in the design of electronic and other artificial computing devices.
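For orientation, the textbook equilibrium reference point is Landauer's principle, which bounds the heat dissipated in erasing one bit of information:

\[ E \;\ge\; k_B T \ln 2 \;\approx\; 3 \times 10^{-21}\ \mathrm{J} \quad \text{at } T \approx 300\ \mathrm{K}. \]

This is a single-number bound; the non-equilibrium trade-off described above differs in tying the energy cost to the precision of decoding and the rate of transmission.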
The project above concerned the energy cost of sending a signal; in ongoing work we are studying the energy cost of receiving one. Building on studies of bacterial chemoreception, starting with Berg and Purcell (1977), we have found a trade-off between the energy, accuracy and time required to estimate a chemical concentration with a physical receptor. We also showed how the effect of energy consumption on the uncertainty of this estimate depends on the level of detail with which downstream processes can observe the receptor.
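The classical starting point here is the Berg–Purcell limit: a receptor of linear size a, observing a ligand with diffusion constant D at mean concentration c̄ for an integration time T, cannot estimate that concentration with relative error better than roughly

\[ \left(\frac{\delta c}{\bar{c}}\right)^{2} \;\gtrsim\; \frac{1}{D\,a\,\bar{c}\,T} \]

(up to an order-unity prefactor). This limit involves no energetic cost; the question addressed here is how it changes once the energy consumed, and the detail with which downstream processes can observe the receptor, are taken into account.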
High-dimensional statistics
One of the perils of high-dimensional data analysis is that it can create illusions of structure out of pure noise. The methods of statistical physics can be used to analyse these effects, in particular the behaviour of null models.
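A minimal illustration of the point (the sample sizes and dimensions here are arbitrary choices, not taken from the papers): pure i.i.d. noise already produces a broad spread of sample-covariance eigenvalues when the number of variables is comparable to the number of samples, and the Marchenko–Pastur law is the null model that describes this spread.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 500                         # samples and variables (gamma = p/n = 0.5)
X = rng.standard_normal((n, p))          # pure i.i.d. noise: the true covariance is the identity
evals = np.linalg.eigvalsh(X.T @ X / n)  # eigenvalues of the sample covariance matrix

gamma = p / n
mp_low, mp_high = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2

print(f"empirical eigenvalues span [{evals.min():.2f}, {evals.max():.2f}]")
print(f"Marchenko-Pastur null model predicts [{mp_low:.2f}, {mp_high:.2f}]")
# Every true eigenvalue is 1, yet the leading sample eigenvalue is close to 2.9:
# an apparent dominant 'component' that is entirely consistent with noise.
```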
One way of alleviating the “curse of dimensionality” is dimensionality reduction. As demonstrated in the field of compressed sensing, random dimensionality reduction can be just as effective, without the risk of generating illusions of structure. Previous theoretical work on random projections has produced very loose bounds on their efficacy, possibly because those bounds were required to hold for arbitrarily ill-conditioned data models. In our work, we performed this analysis on an ensemble of random manifolds. By focusing on the manifolds that occur with high probability, we were able to produce much tighter bounds.
In another project we applied this line of reasoning to neural recordings. Because we usually record only a tiny fraction of the neurons of interest (typically hundreds out of millions or billions), the recorded activity can be thought of as a projection of the full population activity. In this paper, we use random-projection ideas to argue that, for questions about population dynamics, the onerous task of spike sorting is not necessary.
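As a toy illustration of why random projections work at all, the sketch below projects a generic cloud of high-dimensional points (not the random manifolds analysed in our work; all sizes are arbitrary choices) down to a tenth of the original dimensions with a random Gaussian matrix, and checks that pairwise distances are approximately preserved, in the spirit of the Johnson–Lindenstrauss lemma.

```python
import numpy as np

rng = np.random.default_rng(1)
p, k, m = 2000, 200, 50                       # ambient dim, projected dim, number of points
X = rng.standard_normal((m, p))               # a generic cloud of high-dimensional points
P = rng.standard_normal((k, p)) / np.sqrt(k)  # random Gaussian projection matrix
Y = X @ P.T                                   # the projected points

def pairwise_distances(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))[np.triu_indices(len(Z), k=1)]

ratios = pairwise_distances(Y) / pairwise_distances(X)
print(f"distance distortion across all pairs: {ratios.min():.2f} to {ratios.max():.2f}")
# Distances typically land within ~10-20% of their original values,
# even though only 10% of the dimensions were kept.
```

Our bounds sharpen this picture by asking how small the projected dimension can be made for the manifolds that typically occur, rather than for worst-case data.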