Biography
I develop computational models to understand information processing in neural networks. I strongly believe that neural networks and computational neuroscience models should "compute"; the challenge is to develop insights from neuroscience into usefully computing neural networks, and to bring machine learning insights into models of how neurons in the brain compute. My work focuses in particular on continuous-time information processing, where time is an explicit dimension of the problem domain. This includes networks of spiking neurons, predictive coding, interactive neural cognition, supervised neural learning, and (biologically plausible) deep reinforcement learning methods. I am also a part-time full professor of Computational Neuroscience at the University of Amsterdam, with the Swammerdam Institute for Life Sciences, and an honorary full professor of Bio-inspired Neural Networks at the Rijksuniversiteit Groningen.

Research
- A key research interest is work on neural adaptation and predictive coding for optimal spiking information processing. An example is the notion of multiplicative adaptation for adaptive spike coding, which allows spiking neurons to efficiently encode analog signals over vastly different and rapidly changing dynamic ranges. Current work focuses on framing such adaptation in terms of predictive coding, and on applying this paradigm to standard learning theory.
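The core idea of adaptive spike coding can be sketched in a few lines: a neuron spikes whenever the residual between the signal and its running estimate exceeds a threshold, and each spike multiplies the threshold, which then decays back to rest. This is an illustrative toy, not the published model; all parameter values (`theta0`, `mult`, `tau`) and the simple leaky reconstruction are assumptions.

```python
import numpy as np

def encode(signal, dt=1e-3, theta0=0.1, mult=2.0, tau=0.05):
    """Toy adaptive spike coder with a multiplicatively adapting
    threshold (illustrative parameters, not a published model)."""
    theta = theta0      # spiking threshold
    recon = 0.0         # running estimate of the signal
    spikes, recon_trace = [], []
    for t, s in enumerate(signal):
        theta += dt / tau * (theta0 - theta)   # threshold decays to rest
        recon += dt / tau * (-recon)           # estimate leaks between spikes
        if s - recon > theta:                  # residual exceeds threshold
            spikes.append(t)
            recon += theta                     # each spike adds the current threshold
            theta *= mult                      # multiplicative adaptation
        recon_trace.append(recon)
    return spikes, np.array(recon_trace)

# A signal spanning two very different amplitude ranges:
t = np.arange(0.0, 1.0, 1e-3)
signal = np.where(t < 0.5, 0.5, 5.0) * (1 + 0.2 * np.sin(20 * t))
spikes, recon = encode(signal)
```

Because the threshold multiplies up after each spike, the neuron rapidly rescales its coding range when the signal amplitude jumps by an order of magnitude, instead of firing proportionally more spikes.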
- We also work on biologically plausible policy gradient reinforcement learning of working memory, for so-called Semi-Markov Decision Processes. In AuGMEnT, we show how synaptic tags combined with integrating neurons allow neural networks to learn sequences of tasks, closely mimicking the way monkeys learn these tasks. Recent work has shown how we can formulate learning and processing in RNNs like AuGMEnT in continuous time, and implement this in spiking neural networks.
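The tag-based learning principle can be illustrated with a toy sketch in the spirit of AuGMEnT: synapses carry eligibility "tags" set by recent activity, and a single global reward-prediction-error signal converts those tags into weight changes. This is a deliberately simplified two-armed bandit, not the published model; the network size, softmax exploration, and all parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_act = 4, 2
W = rng.normal(0, 0.1, (n_act, n_in))  # input-to-action weights
tags = np.zeros_like(W)                # synaptic eligibility tags
alpha, lam = 0.1, 0.5                  # learning rate, tag decay

x = np.eye(n_in)[0]                    # a single fixed stimulus
p_reward = np.array([0.2, 0.9])        # action 1 pays off more often

for trial in range(2000):
    q = W @ x                          # action values
    p = np.exp(q - q.max()); p /= p.sum()   # softmax exploration
    a = rng.choice(n_act, p=p)
    r = float(rng.random() < p_reward[a])
    delta = r - q[a]                   # global reward-prediction error
    tags *= lam                        # old tags decay
    tags[a] += x                       # tag the synapses that drove the action
    W += alpha * delta * tags          # global error signal gates plasticity
```

The point of the sketch is that no per-synapse error signal is needed: locally set tags plus one broadcast scalar are enough to steer the weights toward the better action.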
- Other research relates to neural models of early vision and audition, where the dynamic properties of real neurons are crucial to understanding the relationship between spiking and neural information processing.
- Active collaborations involve Matthias Brucklacher, Kwangjun Lee and Cyriel Pennartz at SILS, UvA; Lieke Ceton, Sami Mollard, Paolo Papale and Pieter Roelfsema, at NIN Amsterdam; and Lynn Soerensen and Steven Scholte at B&C, UvA.
- In more applied machine learning efforts, we work on AI-based physics with Nikolaj Mucke, Ruud Henkes and Kees Oosterlee.
Talented students are always welcome to come and do their MSc-thesis work at CWI on Bio-inspired Deep Learning, either based on their own proposed ideas or on ready-made potential MSc thesis projects. We particularly welcome projects that link (spiking) neural networks to biology, work that explores (probabilistic) learning in spiking neural networks (SNNs), for example in connection with reservoir computing, and projects on efficiently simulating large-scale SNNs, for example on GPUs. Feel free to contact me for more information.