Organizers 
Matthew G. Perich
Postdoc
Dept. of Fundamental Neuroscience
University of Geneva, Switzerland

Sara A. Solla
Professor
Dept. of Physics and Astronomy
Dept. of Physiology
Northwestern University, USA

Juan A. Gallego
Postdoc
Dept. of Robotics and Automation
CSIC, Spain

Stable manifold dynamics underlie the consistent execution of learned behavior
Matthew Perich
For learned actions to be executed reliably, the cortex must integrate sensory information, establish a motor plan, and generate
appropriate motor outputs to muscles. Animals, including humans, perform such behaviors with remarkable consistency for years after
acquiring a skill. How does the brain achieve this stability? Is the process of integration and planning as stable as the behavior
itself? We explore these fundamental questions from the perspective of neural populations. Recent work suggests that the building
blocks of neural function may be the activation of population-wide activity patterns, the neural modes, rather than the independent
modulation of individual neurons. These neural modes, the dominant covariation patterns of population activity, define a low-dimensional
neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent
activation of the neural modes as their latent dynamics. We hypothesize that the ability to perform a given behavior in a
consistent manner requires that the latent dynamics underlying the behavior also be stable. A dynamic alignment method allows
us to examine the long-term stability of the latent dynamics despite unavoidable changes in the set of neurons recorded via
chronically implanted microelectrode arrays. We use the sensorimotor system as a model of cortical processing, and find remarkably
stable latent dynamics for up to two years across three distinct cortical regions, despite ongoing turnover of the recorded neurons.
The stable latent dynamics, once identified, allow for the prediction of various behavioral features via mapping models whose
parameters remain fixed throughout these long timespans. These results are upheld by an adversarial domain adaptation approach
that aligns latent spaces based on data statistics rather than dynamics. We conclude that latent cortical dynamics within the
task manifold are the fundamental and stable building blocks underlying consistent behavioral execution.
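The two ingredients of this analysis, extracting a low-dimensional manifold from population activity and aligning latent trajectories across recording sessions, can be sketched compactly. The sketch below is an illustration, not the authors' exact pipeline: it assumes PCA for the neural modes and canonical correlation analysis (CCA) for the alignment, and the function names are hypothetical.

```python
import numpy as np

def latent_dynamics(spikes, n_modes):
    """Project neural activity (time x neurons) onto its top principal
    components -- the dominant covariation patterns ("neural modes")."""
    centered = spikes - spikes.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_modes].T          # time x n_modes latents

def align(latent_a, latent_b):
    """CCA-style alignment of latent trajectories from two sessions.

    Returns both trajectories mapped into a shared space in which
    corresponding dimensions are maximally correlated."""
    a = latent_a - latent_a.mean(axis=0)
    b = latent_b - latent_b.mean(axis=0)
    qa, ra = np.linalg.qr(a)
    qb, rb = np.linalg.qr(b)
    u, _, vt = np.linalg.svd(qa.T @ qb)
    return a @ np.linalg.solve(ra, u), b @ np.linalg.solve(rb, vt.T)
```

If the second session's latents are, say, a rotated version of the first session's (as when electrode turnover reshuffles the recorded neurons), the aligned dimensions become near-perfectly correlated, which is what makes fixed decoding models viable over long timespans.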
Bayesian timing shaped by curvature in cortical manifolds
Devika Narain
Past experiences impress statistical regularities of the environment upon neural circuits. Bayesian theory offers a principled
framework to study how prior beliefs shape perception, cognition, and sensorimotor function. There is, however, a fundamental gap
in our understanding of how populations of neurons exploit statistical regularities to represent past experiences. Recent studies
have provided a deeper understanding of how neural circuits perform behaviorally relevant computations through an analysis of
geometrical manifolds represented by in vivo and in silico population dynamics. Using this emerging multidisciplinary approach
within the context of a Bayesian timing task in monkeys, we investigated how neural circuits in frontal cortical areas might
encode prior statistics and how the dynamic patterns of activity they generate could support Bayesian integration. Our results
indicate that prior statistics establish curved manifolds of neural activity that warp underlying representations and create biases
in accordance with Bayes-optimal behavior. This finding uncovers a simple and general principle for how prior beliefs may be embedded
in the nervous system and how they might exert their influence on behavior.
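The behavioral signature of Bayes-optimal timing can be illustrated with a generic Bayes least-squares estimator. The sketch below is an assumption-laden toy, not the study's model: it uses a uniform prior and scalar (Weber-law) Gaussian measurement noise, and the function name and parameter values are hypothetical.

```python
import numpy as np

def bls_estimate(t_measured, prior_support, weber=0.1):
    """Bayes least-squares estimate of an elapsed interval.

    A flat prior over `prior_support` is combined with a scalar
    (Weber-law) Gaussian likelihood; the posterior mean is pulled
    toward the middle of the prior, producing the regression-to-the-mean
    biases characteristic of Bayes-optimal timing behavior."""
    ts = np.asarray(prior_support, dtype=float)
    sigma = weber * ts                         # scalar variability
    like = np.exp(-0.5 * ((t_measured - ts) / sigma) ** 2) / sigma
    post = like / like.sum()                   # flat prior cancels out
    return float((post * ts).sum())            # posterior mean (BLS)
```

With a prior over, say, 600-1000 ms, short measured intervals are over-estimated and long ones under-estimated, i.e., estimates are biased toward the prior mean, which is the behavioral pattern a curved, warped neural representation would support.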
Shaping slow activity manifolds in recurrent neural networks
Srdjan Ostojic
To process information and produce adaptive behavior, the brain represents the external world in terms of abstract quantities
such as value, position, or orientation. Increasing experimental evidence suggests that neural circuits encode such continuous,
topologically organized quantities by means of the collective organization of neural activity along nonlinear, low-dimensional
manifolds in the space of possible network states. In higher-order brain areas, these manifolds persist in the absence of sensory stimuli,
and are therefore presumably generated by intrinsic recurrent interactions. How recurrent connectivity gives rise to and shapes activity
manifolds is, however, not fully understood. The most prominent models of recurrently generated manifolds are continuous attractor
networks. In these models, the emergence of activity manifolds typically relies on strong and highly ordered structure in the synaptic
connectivity. For instance, in the classical bump attractor model, a ring-like manifold of fixed points relies on a distance-dependent,
bell-shaped connectivity, which is itself ring-like. While such tightly structured connectivity has recently been identified in the fly
brain, it remains challenging to reconcile classical attractor networks with circuits in the mammalian cortex, where low-dimensional
activity organization coexists with highly heterogeneous connectivity and single-cell activity. In this work, we asked how much structure
is required and expected in the connectivity and activity of a recurrent neural network that generates low-dimensional activity manifolds.
We considered a large class of recurrent networks in which the connectivity can be expanded in terms of rank-one components. By studying
the emergent dynamics analytically, we found that hidden statistical symmetries in the distribution of connectivity weights generate a
fundamental degeneracy in the dynamics that leads to the appearance of slow activity manifolds in the neural state space. In the specific
case of classical ring models, the connectivity is fully ordered and specified by the symmetry itself; more generally, though, the
connectivity can include strong additional variance along irrelevant directions that are orthogonal to the symmetry. Statistical symmetries
can arise in the absence of precise constraints, as in the example of spherical symmetry that emerges from i.i.d. Gaussian variables, and therefore
require very little fine-tuning. We found that connectivity symmetries fully specify the shape and topology of activity manifolds in the
high-dimensional neural state space. The intrinsic dimensionality of the manifold is determined by the number of parameters defining the
symmetry, while the embedding dimensionality is determined by the matrix representation of the symmetry. Importantly, the variance of the connectivity
distribution along irrelevant directions introduces significant heterogeneity in population activity and tuning curves. As a result, the
symmetry that generates the manifold manifests itself prominently neither in the synaptic connectivity nor in the single-unit activity.
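A minimal instance of such a symmetric low-rank network can be simulated directly. The sketch below, with all parameter choices as illustrative assumptions, builds a rank-two connectivity with rotational symmetry (the classical ring case mentioned above) and integrates rate dynamics from random initial states; endpoints share a common radius in the plane of the two symmetry directions but differ in angle, i.e., they lie on a ring-shaped set of slow states.

```python
import numpy as np

def ring_network(n=300, gain=2.0):
    """Rank-two connectivity with rotational symmetry (illustrative)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    m = np.stack([np.cos(theta), np.sin(theta)])   # 2 x n symmetry directions
    j = (2.0 * gain / n) * m.T @ m                 # symmetric rank-two coupling
    return j, m

def simulate(j, x0, steps=2000, dt=0.1):
    """Euler integration of the rate dynamics x' = -x + J tanh(x)."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + j @ np.tanh(x))
    return x

rng = np.random.default_rng(0)
j, m = ring_network()
finals = [simulate(j, rng.normal(size=j.shape[0])) for _ in range(5)]
pts = np.array([m @ x / m.shape[1] for x in finals])  # overlap with symmetry plane
radii = np.linalg.norm(pts, axis=1)
angles = np.arctan2(pts[:, 1], pts[:, 0])
# Endpoints share (nearly) one radius but different angles: a degenerate
# ring of slow states generated by the rotational symmetry alone.
```

Adding extra connectivity variance along directions orthogonal to `m` would leave this ring intact while making individual weights and tuning curves look heterogeneous, which is the main point of the abstract.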
Why is it difficult to get off the intrinsic manifolds of brain activity?
Arvind Kumar
Several recent studies have shown that neural activity in vivo tends to be constrained to a low-dimensional manifold. Such activity does
not arise in simulated neural networks with random homogeneous connectivity, and such a low-dimensional structure is indicative of a
specific connectivity pattern in neuronal networks. In particular, this connectivity pattern appears to constrain learning, so that
only neural activity patterns falling within the intrinsic manifold can be learned and elicited. Curiously, animals find it hard (if not
impossible) to generate activity that lies orthogonal to the intrinsic manifold. In my talk I will present biologically plausible mechanisms
for constructing neuronal networks whose activity is confined to a low-dimensional manifold. Using these models, I will show that
learning neural activity patterns that lie outside the intrinsic manifold requires much larger changes in synaptic weights than learning
patterns that lie within it. Assuming that larger changes in synaptic weights require more extensive learning, this observation provides
an explanation of why learning is easier when it does not require the neural activity to leave its intrinsic manifold. Finally, I will
discuss other possible perturbations of the neuronal activity manifold that are easier or harder to learn.
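The intuition that off-manifold patterns demand larger synaptic changes can be made concrete in a toy linear network, which is a hypothetical sketch and not the speaker's model: if the connectivity has eigenvalues near one along a manifold subspace and zero elsewhere, the minimal-norm rank-one weight change that turns a target pattern into a fixed point is small for on-manifold targets and an order of magnitude larger for orthogonal ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 3

# Toy linear network embedding a d-dimensional slow manifold:
# eigenvalue 0.95 along span(m), eigenvalue 0 in orthogonal directions.
m, _ = np.linalg.qr(rng.normal(size=(n, d)))
w = 0.95 * m @ m.T

def min_weight_change(w, target):
    """Frobenius norm of the smallest rank-one dW making `target` a
    fixed point of x -> (w + dW) @ x; equals |target - w @ target| / |target|."""
    resid = target - w @ target
    return np.linalg.norm(resid) / np.linalg.norm(target)

on_manifold = m @ rng.normal(size=d)        # pattern inside the manifold
off_manifold = rng.normal(size=n)
off_manifold -= m @ (m.T @ off_manifold)    # projected orthogonal to it

# min_weight_change(w, on_manifold)  -> 0.05 (a small tweak suffices)
# min_weight_change(w, off_manifold) -> 1.0  (20x larger change needed)
```

The 20-fold gap here follows directly from how close the manifold eigenvalues sit to one; the talk's claim is that an analogous gap appears in biologically plausible nonlinear networks.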
New neural activity patterns emerge with long-term learning
Emily Oby
Learning has been associated with changes in the brain at every level of organization. However,
it remains difficult to establish a causal link between specific changes in the brain and new
behavioral abilities. We use a brain-computer interface (BCI) to establish a causal link from
changes in neural activity patterns to changes in behavior. Previously, we have shown that the
structure of neural population activity limits the learning that can occur within a single day.
Here, we use a manifold framework to repeatedly and reliably construct novel BCI mappings
that encourage the formation of new patterns of neural activity, and ask whether the mappings
are learnable. We establish that new neural activity patterns emerge with learning. We
demonstrate that these new neural activity patterns cause the new behavior. Thus, the formation
of new patterns of neural population activity can underlie the learning of new skills.
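How such a novel BCI mapping can be constructed may be sketched as follows, with names and dimensions as illustrative assumptions rather than the authors' procedure: starting from a manifold basis estimated from baseline activity, an "outside-manifold" decoder reads out directions orthogonal to that basis, so baseline-style activity produces no cursor movement and new activity patterns must be formed for the mapping to be controlled.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 50, 2                                # neurons, latent dimensions

# Manifold basis estimated from (hypothetical) baseline activity, and an
# "intuitive" decoder that reads cursor velocity out of that manifold.
m, _ = np.linalg.qr(rng.normal(size=(n, d)))
intuitive = m.T

def outside_manifold_mapping(m, rng):
    """A novel decoder whose readout directions lie outside span(m),
    so controlling it requires forming new activity patterns."""
    n, d = m.shape
    q, _ = np.linalg.qr(np.hstack([m, rng.normal(size=(n, d))]))
    return q[:, d:].T                       # orthogonal to the old manifold

novel = outside_manifold_mapping(m, rng)
x = m @ rng.normal(size=d)                  # baseline-style activity
# The intuitive decoder responds to x; the novel one reads out zero,
# so improvement under the novel mapping implies new activity patterns.
```

Because the novel readout is orthogonal to the baseline manifold by construction, any eventual control of the cursor is attributable to newly formed population activity, which is the causal logic of the experiment.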
Neural manifold contributions do not reflect the network communication structure in monkey frontoparietal areas
Benjamin Dann
There is general agreement that complex cognition and behavior in primates is generated by the activity of networked populations of neurons
in the brain. Recent technical and analytical developments allow the simultaneous recording of large numbers of neurons and the separation of the
population response into its cognition- and behavior-related building blocks. These building blocks, often referred to as subspaces, are
composed of the activity of many neurons and it is hypothesized that they are shaped and constrained by the brain network structure. However,
it is unclear whether neural subspace contributions directly reflect, or are indirectly shaped by the network structure. To examine this
question, we recorded simultaneously from 48–90 neurons in the frontoparietal grasping network while two macaque monkeys performed a mixed
free-choice and instructed delayed grasping task. The population response of both areas was surprisingly simply structured and occupied just
three subspaces for visual, intention-, and movement-related activity, which explained ~80% of single-trial activity. Unfortunately, it is
currently impossible to measure the structural connectivity of a recorded neuronal population. However, condition-independent co-fluctuations
in spiking with high temporal precision can be assumed to approximate structural connectivity. The connectivity structure
identified by this method was dominated by a strongly interconnected group of hub neurons from both areas, which were exclusively
oscillatorily synchronized. Nevertheless, connectivity strength decreased with distance, in accordance with anatomical connectivity.
population response corresponds to the network communication structure, we simply correlated neural contributions to both and found that both
structures were completely uncorrelated (R² < 0.02 for all subspaces, datasets, and monkeys). Together, these results suggest that neurons
contributing to the same cognition- and behavior-related computation are not necessarily connected, whereas oscillatorily synchronized hub neurons
shape or even coordinate the population response.
TBD
Ila Fiete
(coming soon)