
Joao Barbosa
PI (INSERM) @ Neuromodulation Institute & NeuroSpin.
Neuroscientist in the making. Chronic emigrant.
News & Events
Oct 2025 Joao will give two talks at Bernstein Workshops 2025: Machine learning for constraining interpretable models and Top-down control of neural dynamics
Sept 2025 Joao taught at the Summer School on Computational Biology, Coimbra
Aug 2025 Lubna gave a talk at CCN 2025, Amsterdam
Aug 2025 Joao participated in the Neuroscience Leadership Training in Kingston, Canada
July 2025 Joao taught at Cognitive and Computational Neuroscience Summer School, Suzhou, China
July 2025 Joao taught at Simons-BioRTC Computational Neuroscience Summer School, Damaturu, Nigeria
May 2025 Philipp and Joao gave talks at Mathematics of Neuroscience and AI in Split. Kaleab presented a poster.
Open Positions
If you are interested in computational neuroscience, machine learning, and working at the intersection of theory and experiments, please contact me with your CV and research interests. Check our publications to understand the type of research we do.
Research Interests
We combine machine learning with multiregional theories of decision-making and working memory. Through data-driven approaches and biophysical modeling, we aim to understand how the computations necessary to solve complex tasks can be distributed across segregated brain regions.
Extracting Dynamical Systems from Large-Scale Recordings
Motivation
Neural computations supporting complex behaviors involve multiple brain regions, and large-scale recordings from animals engaged in complex tasks are increasingly common. A central challenge in analysing these data is to identify which part of the information contained within a brain region is shared with others. For instance, linear decoding might show that a given area encodes all task-related variables, but decoding alone cannot reveal which of those variables are actually communicated to a specific downstream area. This becomes particularly challenging when more than two interconnected areas are considered.
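For reference, the linear decoding mentioned above can be sketched in a few lines; the sketch below uses random placeholder data and illustrative variable names rather than any specific dataset.

```python
# Cross-validated linear decoding of a binary task variable (e.g., color)
# from one region's firing rates. Data are random placeholders; in practice
# X would hold trial-by-neuron responses from a recorded area.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
X = rng.normal(size=(n_trials, n_neurons))   # trials x neurons
y = rng.integers(0, 2, size=n_trials)        # task-variable label per trial

decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5)   # 5-fold cross-validation
print(f"decoding accuracy: {accuracy.mean():.2f} +/- {accuracy.std():.2f}")
```

High decodability only shows that a variable is present in an area, not that it is transmitted downstream, which is precisely the gap the approach described next is meant to close.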
Approach & Findings
To address this limitation, we train multi-region recurrent neural network (RNN) models to reproduce the dynamics of large-scale recordings. These recordings can in principle come from different modalities (single units, fMRI, M/EEG, etc.). For instance, we showcase this approach by reproducing the dynamics of more than 6000 neurons across 7 cortical areas from monkeys engaged in a two-dimensional (color and motion direction) context-dependent decision-making task. After fitting, we partition the activity of each area, separating recurrent inputs from those originating in other areas. Decoding analyses show that all areas encode both stimuli (color and direction) but selectively project different dimensions of their activity. Sensory areas (V4, MT and IT) project only one variable (color or direction) while compressing the other, irrespective of context or downstream area. In contrast, the prefrontal cortex (PFC) and frontal eye fields (FEF) project different aspects of the stimulus depending on the downstream area and context. In the model, PFC/FEF strongly compress the irrelevant stimulus dimension in their projections to fronto-parietal areas, but much less so in their projections to sensory areas. These preliminary results motivate a novel approach to studying how different regions coordinate their activity to solve context-dependent tasks.
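A minimal sketch of the multi-region RNN idea described above, reduced to two areas with hypothetical sizes (the class name, region labels and dimensions are illustrative, not our published model): the key design choice is that each area's input current is split into a within-area recurrent term and a cross-area term, so that decoding can later be applied to the cross-area currents alone.

```python
# Two-region rate RNN that keeps recurrent and cross-area currents separate.
# Region labels, sizes and the reduction to two areas are simplifications.
import torch
import torch.nn as nn

class TwoRegionRNN(nn.Module):
    def __init__(self, n_a=100, n_b=100, n_in=4, dt=0.02, tau=0.1):
        super().__init__()
        self.W_aa = nn.Linear(n_a, n_a, bias=False)   # within area A
        self.W_bb = nn.Linear(n_b, n_b, bias=False)   # within area B
        self.W_ab = nn.Linear(n_b, n_a, bias=False)   # projection B -> A
        self.W_ba = nn.Linear(n_a, n_b, bias=False)   # projection A -> B
        self.W_in = nn.Linear(n_in, n_a, bias=False)  # external input enters area A
        self.alpha = dt / tau

    def step(self, u, x_a, x_b):
        r_a, r_b = torch.tanh(x_a), torch.tanh(x_b)
        cross_a, cross_b = self.W_ab(r_b), self.W_ba(r_a)   # cross-area currents
        x_a = x_a + self.alpha * (-x_a + self.W_aa(r_a) + cross_a + self.W_in(u))
        x_b = x_b + self.alpha * (-x_b + self.W_bb(r_b) + cross_b)
        return x_a, x_b, cross_a, cross_b
```

After fitting such a model to recorded activity, decoding task variables from the cross-area currents, rather than from the full population activity, asks which variables are actually communicated between areas.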
Related Publications
The Neural Basis of Working Memory
Motivation
Working memory (WM) is a core function of cognition. This is reflected in the strong correlation between WM performance and other cognitive abilities, notably IQ. Importantly, WM is impaired in major neuropsychiatric disorders, including schizophrenia. Sustained activity in prefrontal cortex neurons has long been regarded as the neural substrate of working memory, but this view has come under intense debate in recent years. Alternative proposals suggest that short-term synaptic plasticity also plays a role in supporting working memory.
Findings
During his graduate research, Joao contributed substantially to this debate by showing that both mechanisms co-exist and interact in the monkey PFC. Moreover, we bridged insights from biophysical modeling and monkey research directly to experiments with clinical populations, namely people with schizophrenia (PSZ) and patients with anti-NMDAR encephalitis.
Approach
We combine biophysical modeling with analyses of spiking activity from the monkey PFC, human EEG, and TMS experiments to tackle open questions in working memory research. Data analysis, computational modeling, and human experiments are performed by our team, with EEG experiments designed and collected through direct collaborations.
Related Publications
- Pinging the brain with visual impulses reveals electrically active, not activity-silent working memories (PLoS Biology, 2021)
- Interplay between persistent activity and activity-silent dynamics in prefrontal cortex underlies serial biases in working memory (Nature Neuroscience, 2020)
- Synaptic basis of reduced serial dependence in anti-NMDAR encephalitis and schizophrenia (Nature Communications, 2020)
Neural Basis of Context-Dependent Decision Making
Motivation
The field of computational neuroscience originates from physics, where there is a strong tradition of modeling nature from first principles. For instance, we have modeled working memory using ring networks, whose connectivity structure is designed by hand so that the required attractor dynamics emerge. This approach has been extremely successful for simple tasks, but has proven limited when modeling more complex ones. This has motivated a new approach, originating instead in machine learning, in which the connectivity of highly flexible neural networks is trained to perform arbitrarily complex tasks. We rely heavily on this approach to understand how the computations necessary to solve complex tasks can be distributed across segregated brain regions.
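For concreteness, a minimal ring network of the kind mentioned above can be written in a few lines. The parameters below are illustrative choices rather than values from any particular paper, but they exhibit the designed-in attractor dynamics: a localized bump of activity that persists after the cue is removed and stores a continuous angle.

```python
# Rate-model ring network with hand-designed, distance-dependent connectivity.
# A transient cue at angle pi leaves behind a self-sustained activity bump
# whose peak location stores the cued angle. All values are illustrative.
import numpy as np

N = 256
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
J0, J1 = -2.0, 11.0                          # broad inhibition, local excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

f = lambda x: np.maximum(0.0, np.tanh(x))    # rectified, saturating rate function
dt, tau = 1.0, 20.0                          # ms
r = np.zeros(N)

for t in range(3000):
    cue = 2.0 * np.exp(np.cos(theta - np.pi) - 1.0) if t < 200 else 0.0
    r += dt / tau * (-r + f(W @ r + cue))

# the bump outlives the cue; its peak encodes the remembered angle
print("remembered angle:", theta[np.argmax(r)])
```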
Approach
We combine machine learning with multiregional theories of decision-making. Specifically, we extended the framework of low-rank recurrent neural networks (RNNs) to model multi-region computations underlying context-dependent behavior. We approach this question in two complementary ways: i) developing statistical methods to fit recurrent neural networks to large-scale recordings (from rats and monkeys) acquired through collaborations, and then reverse engineering the dynamical system extracted through gradient descent; or ii) using our theoretical intuitions to directly build toy models that explain the neural data.
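As a hedged illustration of the low-rank framework referred to above (a generic sketch, not the specific models in our papers), the snippet below builds a rank-2 connectivity matrix and tracks the few latent variables that summarize the recurrent dynamics; the network size, rank, and absence of region structure are simplifying assumptions.

```python
# Low-rank RNN sketch: connectivity is a sum of rank-one terms,
# W = (1/N) * sum_k m_k n_k^T, so the recurrent dynamics are summarized by a
# few latent variables (the overlaps n_k . phi(x) / N).
import numpy as np

rng = np.random.default_rng(1)
N, rank, steps = 500, 2, 400
m = rng.normal(size=(N, rank))             # left (output) connectivity vectors
n = 1.5 * m + rng.normal(size=(N, rank))   # correlated right vectors give structured dynamics
W = m @ n.T / N                            # rank-2 recurrent connectivity

dt, tau = 0.02, 0.1
x = 0.1 * rng.normal(size=N)
latents = np.zeros((steps, rank))
for t in range(steps):
    r = np.tanh(x)
    latents[t] = n.T @ r / N               # low-dimensional summary of the dynamics
    x = x + dt / tau * (-x + W @ r)

print("final latent variables:", latents[-1])
```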
Related Publications
Normative Modelling with Recurrent Neural Networks
Motivation
Traditional approaches to understanding brain function often start with the neural data and try to infer the computation. Normative modelling takes the opposite approach: we start with the computational goal and ask what neural implementation would be optimal. By training recurrent neural networks (RNNs) to perform cognitive tasks under biologically inspired constraints, we can generate testable predictions about how the brain might solve these problems.
Approach
We develop and apply normative theories using task-optimized RNNs that incorporate biological constraints such as Dale's law, sparse connectivity, and metabolic costs. By systematically varying these constraints and comparing the resulting network solutions to neural data, we can identify which computational principles are fundamental to brain function and which implementation details vary across individuals or brain regions.
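To make these constraints concrete, the sketch below shows one common way to implement them (an assumed, generic recipe, not the exact setup of the publications listed below): Dale's law via a fixed sign matrix applied to non-negative weights, and sparsity and metabolic pressure as penalty terms added to the task loss.

```python
# RNN cell with Dale's law (fixed excitatory/inhibitory signs) plus penalties
# for sparse connectivity and metabolic cost. The 80/20 E/I split and the
# penalty weights are illustrative assumptions.
import torch
import torch.nn as nn

class DaleRNNCell(nn.Module):
    def __init__(self, n=200, frac_exc=0.8, dt_over_tau=0.2):
        super().__init__()
        sign = torch.ones(n)
        sign[int(n * frac_exc):] = -1.0           # last 20% of units are inhibitory
        self.register_buffer("sign", torch.diag(sign))
        self.W_raw = nn.Parameter(0.1 * torch.rand(n, n))
        self.alpha = dt_over_tau

    def weights(self):
        # Dale's law: all outgoing weights of a unit share that unit's sign
        return torch.relu(self.W_raw) @ self.sign

    def forward(self, r):
        # leaky rate update (external task input omitted for brevity)
        return (1 - self.alpha) * r + self.alpha * torch.tanh(r @ self.weights().T)

def constraint_penalties(cell, rates, l1_weights=1e-4, l2_rates=1e-3):
    W = cell.weights()
    sparsity = l1_weights * W.abs().sum()         # pushes connectivity toward sparsity
    metabolic = l2_rates * (rates ** 2).mean()    # penalizes high firing rates
    return sparsity + metabolic                   # added to the task loss during training
```

Varying the E/I fraction, penalty weights, or injected noise and comparing the resulting solutions to neural data implements the systematic comparison described above.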
Key Questions
- What are the optimal neural solutions for cognitive tasks under biological constraints?
- How do different constraints (e.g., energy, connectivity, noise) shape neural computation?
- Can normative models predict individual differences in neural dynamics and behavior?
- How do normative solutions change during development and learning?
Related Publications
- Early selection of task-relevant features through population gating (Nature Communications, 2023)
- NeuroGym: An open resource for developing and sharing neuroscience tasks (PsyArXiv, 2022)