Overview
The brain is a deeply complex dynamical system continuously interacting with its environment. How can experimental and computational neuroscientists make sense of this complexity in a principled way?
This workshop introduces a dynamical systems perspective on neural computation using recurrent neural networks (RNNs) as a central modeling tool. We will show how abstract RNN models can capture meaningful neural interactions, sharpen intuitions, and provide non-trivial insights into the computational mechanisms underlying behavior.
Instructors
Doris Voina, PhD is a Postdoctoral Scholar in the SNAIL laboratory at Université de Montréal with Dr. Shahab Bakhtiari. She earned her PhD in Applied Mathematics from the University of Washington (2022) and has broad interests in computational neuroscience. Her work focuses on developing computational and AI methods for data analysis to better understand the neural mechanisms underlying learning and perception.
Ladan Shahshahani, PhD is a Postdoctoral Scholar in the SNAIL laboratory at Université de Montréal, where she works with Dr. Shahab Bakhtiari to investigate how the brain carries out mental simulation, using psychophysics and MEG/fMRI. She is particularly interested in how the dynamics of these neural transformations can guide the development of modern AI systems, inspiring models that capture the brain’s flexible, simulation-driven computations. Her background includes research on visual working memory, cerebellar and subcortical contributions to cognition, and computational approaches to neuroimaging; this work now shapes her broader goal of understanding and modeling how the brain builds and manipulates internal representations.
Objectives
This session is divided into two parts:
Part 1 — Introduction: A Dynamical Systems Framework Using RNNs
How to do computational neuroscience by training artificial neural networks
Participants will:
Learn the basics of recurrent neural networks
Understand the dynamical systems perspective applied to RNNs
Work through simple hands-on exercises to build intuition for what RNNs are and what kinds of computations they can perform
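To ground the objectives above, here is a minimal sketch of a vanilla rate RNN viewed as a discrete-time dynamical system: the hidden state is a point in a high-dimensional state space, and each update moves it along a trajectory shaped by the recurrent weights and the input. The dimensions, weight scaling, and random input stream below are illustrative assumptions, not the workshop's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the workshop's actual network)
n_units, n_inputs, n_steps = 50, 2, 100

# Recurrent weights W, input weights U, bias b
W = rng.normal(scale=1.0 / np.sqrt(n_units), size=(n_units, n_units))
U = rng.normal(size=(n_units, n_inputs))
b = np.zeros(n_units)

def step(h, x):
    """One update of a vanilla rate RNN: h_next = tanh(W h + U x + b)."""
    return np.tanh(W @ h + U @ x + b)

# Iterating the update from an initial state traces a trajectory
# through the network's state space.
h = np.zeros(n_units)
trajectory = []
for t in range(n_steps):
    x = rng.normal(size=n_inputs)  # a noisy input stream
    h = step(h, x)
    trajectory.append(h.copy())

trajectory = np.array(trajectory)  # shape (n_steps, n_units)
print(trajectory.shape)            # (100, 50)
```

Viewing the rows of `trajectory` as points in a 50-dimensional state space is the starting point for the dynamical systems perspective developed in the session.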
Part 2 — A Case Study: Modeling a Context-Integration Task with RNNs
In this more involved section, participants will:
Train an RNN to perform a context-integration behavioral task
Perform dimensionality reduction beyond standard PCA
Characterize the RNN’s dynamics via neural manifolds (e.g., line attractors)
Compare RNN trajectories with neural population data from Mante et al. (2013)
Extract computational insights about how the brain may solve the task based on the RNN’s inferred dynamics
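As a rough illustration of the dimensionality-reduction step, the sketch below runs PCA (via SVD) on synthetic population activity constructed to have low-dimensional latent structure, then projects the trajectory into the top principal-component subspace. The data-generation details are stand-in assumptions, not the Mante et al. dataset or the tutorial's exact analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "population activity": 100 time points x 50 units whose
# variance is concentrated in a few latent dimensions (a stand-in
# for trained-RNN or recorded trajectories).
n_steps, n_units, n_latent = 100, 50, 3
latents = np.cumsum(rng.normal(size=(n_steps, n_latent)), axis=0)  # smooth drift
mixing = rng.normal(size=(n_latent, n_units))
activity = latents @ mixing + 0.1 * rng.normal(size=(n_steps, n_units))

# PCA via SVD of the mean-centered activity matrix
centered = activity - activity.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ vt[:n_latent].T  # trajectory in the top-3 PC space

explained = (s**2 / np.sum(s**2))[:n_latent]
print(projection.shape)                   # (100, 3)
print(round(explained.sum(), 3))          # fraction of variance in the top 3 PCs
```

If most of the variance falls in a few components, the population trajectory lives near a low-dimensional neural manifold; the tutorial builds on this idea with methods beyond standard PCA, such as the targeted regression used in the case study.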
Bonus / Homework
Additional optional exercises for participants who want to explore further
Material
The first hands-on tutorial focuses on modeling context-dependent decision-making by reproducing analyses from Mante et al. (2013):
- Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474), 78–84. https://doi.org/10.1038/nature12742