Stable manifold dynamics underlie the consistent execution of learned behavior

Matthew Perich

For learned actions to be executed reliably, the cortex must integrate sensory information, establish a motor plan, and generate
appropriate motor outputs to muscles. Animals, including humans, perform such behaviors with remarkable consistency for years after
acquiring a skill. How does the brain achieve this stability? Is the process of integration and planning as stable as the behavior
itself? We explore these fundamental questions from the perspective of neural populations. Recent work suggests that the building
blocks of neural function may be the activation of population-wide activity patterns, the neural modes, rather than the independent
modulation of individual neurons. These neural modes, the dominant co-variation patterns of population activity, define a low-dimensional
neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent
activation of the neural modes as their latent dynamics. We hypothesize that the ability to perform a given behavior in a
consistent manner requires that the latent dynamics underlying the behavior also be stable. A dynamic alignment method allows
us to examine the long-term stability of the latent dynamics despite unavoidable changes in the set of neurons recorded via
chronically implanted microelectrode arrays. We use the sensorimotor system as a model of cortical processing, and find remarkably
stable latent dynamics for up to two years across three distinct cortical regions, despite ongoing turnover of the recorded neurons.
The stable latent dynamics, once identified, allow for the prediction of various behavioral features via mapping models whose
parameters remain fixed throughout these long timespans. These results are upheld by an adversarial domain adaptation approach
that aligns latent spaces based on data statistics rather than dynamics. We conclude that latent cortical dynamics within the
task manifold are the fundamental and stable building blocks underlying consistent behavioral execution.
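The alignment logic can be illustrated with a toy sketch (illustrative code, not the authors' pipeline; all sizes, seeds, and noise levels are assumptions of this sketch): two recording "sessions" observe different neuron sets driven by the same 3-D latent dynamics, and canonical correlation analysis (CCA) on each session's PCA latents reveals that the underlying dynamics are shared.

```python
import numpy as np

# Shared 3-D latent dynamics (the time-dependent activation of the neural modes).
T, d = 500, 3
t = np.linspace(0, 8 * np.pi, T)
latent = np.column_stack([np.sin(t), np.cos(t), np.sin(0.5 * t)])

def session(n_neurons, seed):
    """One 'recording session': a different random neuron set, same latents."""
    rng = np.random.default_rng(seed)
    mix = rng.normal(size=(d, n_neurons))
    return latent @ mix + 0.1 * rng.normal(size=(T, n_neurons))

X1, X2 = session(60, 1), session(80, 2)

def pca_latents(X, k):
    """Project activity onto its top-k principal axes."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

L1, L2 = pca_latents(X1, d), pca_latents(X2, d)

# CCA via QR: the canonical correlations are the singular values of Q1.T @ Q2.
Q1, _ = np.linalg.qr(L1 - L1.mean(0))
Q2, _ = np.linalg.qr(L2 - L2.mean(0))
ccorr = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
print(ccorr)  # all near 1: the sessions share the same latent dynamics
```

Canonical correlations near one indicate that, after a linear alignment, the two sessions' latent trajectories are essentially identical, which is the signature of stability the abstract describes.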

Bayesian timing shaped by curvature in cortical manifolds

Devika Narain

Past experiences impress statistical regularities of the environment upon neural circuits. Bayesian theory offers a principled
framework to study how prior beliefs shape perception, cognition, and sensorimotor function. There is, however, a fundamental gap
in our understanding of how populations of neurons exploit statistical regularities to represent past experiences. Recent studies
have provided a deeper understanding of how neural circuits perform behaviorally relevant computations through an analysis of
geometrical manifolds represented by in-vivo and in-silico population dynamics. Using this emerging multidisciplinary approach
within the context of a Bayesian timing task in monkeys, we investigated how neural circuits in frontal cortical areas might
encode prior statistics and how the dynamic patterns of activity they generate could support Bayesian integration. Our results
indicate that prior statistics establish curved manifolds of neural activity that warp underlying representations and create biases
in accordance with Bayes-optimal behavior. This finding uncovers a simple and general principle for how prior beliefs may be embedded
in the nervous system and how they might exert their influence on behavior.
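For the simplest Gaussian case, the Bayes-optimal bias can be written in two lines (the numerical values below are illustrative, not the actual interval distribution of the monkey task): the posterior-mean estimate is a reliability-weighted average of the noisy measurement and the prior mean, so short intervals are overestimated and long ones underestimated.

```python
# Bayesian least-squares estimate of a timed interval (Gaussian prior and likelihood).
mu_p, sig_p = 800.0, 120.0   # prior mean and sd over intervals, in ms (illustrative)
sig_m = 100.0                # sd of the noisy measurement (illustrative)

def blse(x_m):
    """Posterior mean: reliability-weighted average of measurement and prior."""
    w = sig_p**2 / (sig_p**2 + sig_m**2)   # weight on the measurement
    return w * x_m + (1 - w) * mu_p

for x in (600.0, 800.0, 1000.0):
    print(x, "->", round(blse(x), 1))  # estimates regress toward the prior mean
```

The weight on the measurement grows with prior variance and shrinks with measurement noise; the resulting regression toward the prior mean is the bias pattern that the curved neural manifolds are proposed to implement.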

Shaping slow activity manifolds in recurrent neural networks

Srdjan Ostojic

To process information and produce adaptive behavior, the brain represents the external world in terms of abstract quantities
such as value, position, or orientation. Increasing experimental evidence suggests that neural circuits encode such continuous,
topologically-organized quantities by means of the collective organization of neural activity along non-linear, low-dimensional
manifolds in the space of possible network states. In higher order brain areas, these manifolds persist in the absence of sensory stimuli,
and are therefore presumably generated by intrinsic recurrent interactions. How recurrent connectivity gives rise to and shapes activity
manifolds is, however, not fully understood. The most prominent models of recurrently-generated manifolds are continuous attractor
networks. In these models, the emergence of activity manifolds typically relies on strong and highly ordered structure in the synaptic
connectivity. For instance, in the classical bump attractor model a ring-like manifold of fixed points relies on a distance-dependent,
bell-shaped connectivity, which is itself ring-like. While such tightly structured connectivity has recently been identified in the fly
brain, it remains challenging to reconcile classical attractor networks with circuits in the mammalian cortex, where low-dimensional
activity organization co-exists with highly heterogeneous connectivity and single-cell activity. In this work, we asked how much structure
is required and expected in the connectivity and in the activity of a recurrent neural network which generates low-dimensional activity manifolds.
We considered a large class of recurrent networks in which the connectivity can be expanded in terms of rank-one components. By studying
analytically the emergent dynamics, we found that hidden statistical symmetries in the distribution of connectivity weights generate a
fundamental degeneracy in the dynamics that leads to the appearance of slow activity manifolds in the neural state space. In the specific
case of classical ring models, the connectivity is fully ordered and specified by the symmetry itself; more generally, though, the
connectivity can include strong additional variance along irrelevant directions orthogonal to the symmetry. Statistical symmetries
can arise in the absence of precise constraints, as in the example of the spherical symmetry that emerges from i.i.d. Gaussian variables, and therefore
require very little fine-tuning. We found that connectivity symmetries fully specify the shape and the topology of activity manifolds in the
high-dimensional neural state space. The intrinsic dimensionality of the manifold is determined by the number of parameters defining the
symmetry, while the embedding dimensionality is determined by the symmetry matrix representation. Importantly, the variance of the connectivity
distribution along irrelevant directions introduces significant heterogeneity in population activity and tuning curves. As a result, the
symmetry that generates the manifold manifests itself prominently neither in the synaptic connectivity nor in the single-unit activity.
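A minimal concrete instance of this picture (a simulation sketch under idealized assumptions, not the analytical theory of the talk): a rank-two recurrent network whose connectivity carries an exact rotational symmetry. Every bump position is equally good, so different initial conditions relax onto different points of the same slow ring manifold, exposing the degeneracy.

```python
import numpy as np

N = 400
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
c, s = np.cos(theta), np.sin(theta)

# Rank-two connectivity with exact rotational symmetry: J = (2g/N)(c c^T + s s^T).
g = 2.0
J = (2 * g / N) * (np.outer(c, c) + np.outer(s, s))

def run(phi0, steps=2000, dt=0.1):
    """Euler-integrate dx/dt = -x + J tanh(x) from a small bump at angle phi0."""
    x = 0.5 * np.cos(theta - phi0)
    for _ in range(steps):
        x += dt * (-x + J @ np.tanh(x))
    return x

xa, xb = run(0.3), run(2.0)
ka = np.array([c @ np.tanh(xa), s @ np.tanh(xa)]) / N  # order parameters
kb = np.array([c @ np.tanh(xb), s @ np.tanh(xb)]) / N
print(np.linalg.norm(ka), np.linalg.norm(kb))              # equal radii: degeneracy
print(np.arctan2(ka[1], ka[0]), np.arctan2(kb[1], kb[0]))  # different angles
```

The two final states have the same amplitude but different angles: the symmetry of the weight distribution, not any visible order in individual weights, pins the dynamics to a ring of equivalent states.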

Why is it difficult to get off the intrinsic manifolds of brain activity?

Arvind Kumar

Several recent studies have shown that neural activity in vivo tends to be constrained to a low-dimensional manifold. Such activity does
not arise in simulated neural networks with random homogeneous connectivity; such low-dimensional structure is therefore indicative of a
specific connectivity pattern in neuronal networks. In particular, this connectivity pattern appears to constrain learning, so that
only neural activity patterns falling within the intrinsic manifold can be learned and elicited. Curiously, animals find it hard (if not
impossible) to generate activity that lies orthogonal to the intrinsic manifold. In my talk I present mechanisms to construct neuronal
networks whose activity is confined to a low-dimensional manifold, in a biologically plausible manner. Using these models I will show that
learning neural activity patterns that lie outside the intrinsic manifold requires much larger changes in synaptic weights than patterns
that lie within the intrinsic manifold. Assuming that larger changes in synaptic weights require more extensive learning, this observation
explains why learning is easier when it does not require the neural activity to leave its intrinsic manifold. Finally, I will
discuss other possible perturbations of the neuronal activity manifold that are easier or harder to learn.
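The core weight-change argument can be quantified in a linear toy network (an illustrative sketch, not the biologically detailed models of the talk): for a target activity pattern to become a fixed point, the minimal Frobenius-norm weight change equals the residual dynamics at the target divided by the target's norm, which vanishes for within-manifold targets and is large for orthogonal ones.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100

# Orthonormal basis; the first two directions span the intrinsic manifold.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))
U_man, U_out = Q[:, :2], Q[:, 2:]

# Linear recurrent network dx/dt = -x + J x with unit gain on the manifold,
# so every pattern in span(U_man) is already a fixed point.
J = U_man @ U_man.T

def min_weight_change(x_t):
    """Frobenius norm of the smallest dJ making x_t a fixed point of J + dJ."""
    r = x_t - J @ x_t                       # residual dynamics at the target
    return np.linalg.norm(r) / np.linalg.norm(x_t)

x_in = U_man @ rng.normal(size=2)           # target inside the intrinsic manifold
x_out = U_out @ rng.normal(size=N - 2)      # target orthogonal to it
print(min_weight_change(x_in))              # ~0: no weight change needed
print(min_weight_change(x_out))             # ~1: a large weight change is required
```

The minimal dJ is the rank-one update dJ = r x_t^T / ||x_t||^2; its norm is zero for targets the network already sustains and maximal for targets orthogonal to the manifold, mirroring the easy/hard learning asymmetry described above.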

New neural activity patterns emerge with long-term learning

Emily Oby

Learning has been associated with changes in the brain at every level of organization. However,
it remains difficult to establish a causal link between specific changes in the brain and new
behavioral abilities. We use a brain-computer interface (BCI) to establish a causal link from
changes in neural activity patterns to changes in behavior. Previously, we have shown that the
structure of neural population activity limits the learning that can occur within a single day.
Here, we use a manifold framework to repeatedly and reliably construct novel BCI mappings
that encourage the formation of new patterns of neural activity, and ask whether the mappings
are learnable. We establish that new neural activity patterns emerge with learning. We
demonstrate that these new neural activity patterns cause the new behavior. Thus, the formation
of new patterns of neural population activity can underlie the learning of new skills.

Manifold learning for unsupervised analysis of neuronal activity tensors

Gal Mishne

In machine learning, the manifold assumption is that high-dimensional data in fact lies on (or close to) a manifold of intrinsic
low dimensionality embedded in the high-dimensional space, where manifold learning aims to uncover the underlying low-dimensional
parametrization of the data. Recently, such manifold representations have been playing an increasing role in the analysis of large-scale
measurements of neural populations, enabling unsupervised and unbiased data exploration and visualization. I will discuss the
properties of manifold learning and its application to neuroscience, where one important component of these approaches is defining
pairwise distances between data points. I will present a new metric that takes into account the coupled multi-scale structure of
multi-trial experiments, when modeling the data as a rank-3 tensor of neurons, time-frames and trials. In analyzing neuronal activity
from the motor cortex we identify in an unsupervised manner: functional subsets of neurons, activity patterns associated with
particular behaviors, and long-term temporal trends.
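The distance-based pipeline underlying such approaches can be sketched with a plain diffusion-map embedding (illustrative only; it uses a standard Euclidean metric on toy data rather than the proposed multi-scale tensor metric): pairwise distances define a Gaussian affinity, its row-normalized Markov matrix is diagonalized, and the leading non-trivial eigenvectors serve as low-dimensional coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 population vectors tracing a noisy 1-D ring in 50-D space.
n_pts, n_dim = 200, 50
ang = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
W = rng.normal(size=(2, n_dim))
X = np.column_stack([np.cos(ang), np.sin(ang)]) @ W \
    + 0.05 * rng.normal(size=(n_pts, n_dim))

# Diffusion map: Gaussian affinity -> Markov matrix -> leading eigenvectors.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
eps = np.median(D) ** 2 / 4                 # kernel bandwidth (a common heuristic)
P = np.exp(-D**2 / eps)
P /= P.sum(axis=1, keepdims=True)
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
psi = evecs.real[:, order[1:3]]             # top two non-trivial coordinates

# The embedding recovers the ring: neighbors stay close, antipodes stay far.
d_near = np.linalg.norm(psi - np.roll(psi, -1, axis=0), axis=1)
d_far = np.linalg.norm(psi - np.roll(psi, -n_pts // 2, axis=0), axis=1)
print(d_near.max() < d_far.min())
```

The whole method is driven by the pairwise-distance matrix D, which is exactly the component the abstract proposes to replace with a metric respecting the neuron x time x trial tensor structure.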

Hebbian learning of manifolds

Cengiz Pehlevan

An influential account of invariant object recognition hypothesizes that sensory cortices learn to disentangle object manifolds
into a linearly separable representation in an unsupervised manner; however, a biologically plausible implementation of such a
computation is missing. Starting with a minimal biologically plausible unsupervised learning network, a single-layer neural
network with simple nonlinearities and Hebbian/anti-Hebbian plasticity, and building up in network depth, I will explore how
manifold disentangling and learning can be achieved with biological mechanisms.
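One published construction along these lines is the linear similarity-matching circuit, sketched below as an assumed illustration (the dimensions, noise level, and learning-rate schedule are choices of this sketch): feedforward weights follow a Hebbian rule, lateral weights an anti-Hebbian rule, and together they learn the principal subspace of the inputs without supervision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs: 20-D vectors concentrated near a 2-D subspace.
n, k = 20, 2
basis = np.linalg.qr(rng.normal(size=(n, k)))[0]

# Similarity-matching network: feedforward W (Hebbian), lateral M (anti-Hebbian);
# y is the steady state of the fast recurrent output dynamics, y = M^{-1} W x.
W = rng.normal(size=(k, n)) / np.sqrt(n)
M = np.eye(k)
for t in range(20000):
    x = basis @ rng.normal(size=k) + 0.05 * rng.normal(size=n)
    y = np.linalg.solve(M, W @ x)
    eta = 0.1 / (1 + t / 100)            # decaying learning rate
    W += eta * (np.outer(y, x) - W)      # Hebbian update
    M += eta * (np.outer(y, y) - M)      # anti-Hebbian update

F = np.linalg.solve(M, W)                # effective input-output map
print(np.linalg.svd(F @ basis, compute_uv=False))  # both ~1: subspace learned
```

Both plasticity rules are local (each update uses only the activities of the two connected units), which is what makes this family of networks a candidate biologically plausible substrate for the manifold computations discussed in the talk.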

Neural manifold contributions do not reflect the network communication structure in monkey frontoparietal areas

Benjamin Dann

There is general agreement that complex cognition and behavior in primates are generated by the activity of networked populations of neurons
in the brain. Recent technical and analytical developments allow us to record simultaneously from large numbers of neurons and to separate the
population response into its cognition- and behavior-related building blocks. These building blocks, often referred to as subspaces, are
composed of the activity of many neurons and it is hypothesized that they are shaped and constrained by the brain network structure. However,
it is unclear whether neural subspace contributions directly reflect, or are indirectly shaped by the network structure. To examine this
question, we recorded simultaneously from 48-90 neurons in the fronto-parietal grasping network while two macaque monkeys performed a mixed
free-choice and instructed delayed grasping task. The population response of both areas was surprisingly simply structured, occupying just
three subspaces for visual-, intention-, and movement-related activity, which together explained ~80% of single-trial activity. Unfortunately, it is
currently impossible to measure the structural connectivity of a recorded neuronal population. However, condition independent co-fluctuations
in spiking with high temporal precision can be assumed to reflect structural connectivity as an approximation. The connectivity structure
identified by this method was dominated by a strongly interconnected group of hub neurons from both areas, which were synchronized
exclusively in an oscillatory manner. Nevertheless, connectivity strength decreased with distance, in accordance with anatomical connectivity. To test whether the
population response corresponds to the network communication structure, we simply correlated neural contributions to both and found that both
structures were completely uncorrelated (R² < 0.02 for all subspaces, datasets, and monkeys). Together, these results suggest that neurons
contributing to the same cognition- and behavior-related computation are not necessarily connected, whereas oscillatorily synchronized hub neurons
shape or even coordinate the population response.

Why higher order principal components may be irrelevant

Allan Mancoo

Large-scale recordings of neural activity are now widely carried out in many experimental labs, leading to the question of how to capture
the essential structure of the recorded activities. One popular way of doing so is through the use of dimensionality reduction methods.
However, interpretation of the results of these tools can be fraught with difficulties. Most commonly, linear methods such as Principal
Component Analysis are used despite the fact that these methods do not explicitly take into account that individual neuronal activity is
constrained to be non-negative. While this simplest form of nonlinearity is well-known, its specific effect or importance for linear
dimensionality reduction methods is less clear. Here, we study these effects under the assumptions that linear readouts of population
activity should be low-dimensional and that the overall firing should be limited for reasons of efficiency. These assumptions also
underlie the literature on efficient, balanced networks (Denève and Machens, Nat. Neurosci., 2016). We show that these simple assumptions
lead to population trajectories that move on specific, non-linear surfaces in the neural space. In turn, methods such as principal
component analysis extract not only the low-dimensional linear readouts, but also a tail of higher-order components, caused by the
non-linearities in the population trajectories. We explain these findings geometrically and show that such higher-order components
often appear in real data. We sketch a set of methods that would allow the non-negativity constraints to be incorporated in a meaningful
way into dimensionality reduction methods.
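A toy model reproduces the effect (a sketch under simple tuning-curve assumptions, not the paper's derivation): a 2-D latent variable is encoded by rectified, hence non-negative, cosine-tuned rates. A fixed linear readout recovers the latent almost exactly, yet PCA on the rates reports extra variance in dimensions three and beyond, generated purely by the rectification.

```python
import numpy as np

# 2-D latent trajectory (a circle) encoded by non-negative firing rates.
T, N = 1000, 40
phase = np.linspace(0, 2 * np.pi, T, endpoint=False)
z = np.column_stack([np.cos(phase), np.sin(phase)])   # latent to be read out

pref = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred directions
W = np.column_stack([np.cos(pref), np.sin(pref)])     # N x 2 tuning weights
rates = np.maximum(z @ W.T, 0)                        # rectified: non-negative

# A fixed linear readout still recovers the 2-D latent (half-wave rectification
# halves the gain, hence the factor 4/N instead of 2/N).
readout = (4.0 / N) * (rates @ W)
print(np.abs(readout - z).max())                      # tiny readout error

# ...but PCA on the rates shows a tail of higher-order variance.
Xc = rates - rates.mean(0)
var = np.linalg.svd(Xc, compute_uv=False) ** 2
frac = var / var.sum()
print(frac[:6])  # dims 3+ carry real variance despite the 2-D readout
```

The higher-order components come from the even harmonics that rectification adds to each tuning curve; they are a signature of the non-negativity constraint, not of additional latent dimensions.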

The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep

Ila Fiete

Though neural circuits consist of thousands of neurons and thus can potentially occupy a several-thousand-dimensional
activity space, we propose that, in order to perform computation, representation, and error correction, the states are
intrinsically restricted to a much smaller subspace, corresponding to the dimension and topology of the set of represented
variables, and that excursions of the state away from this smaller subspace are driven back onto it by intrinsic dynamics.
This manifold perspective enables blind discovery and decoding of the represented variable using only neural population activity
(without knowledge of input, output, behavior, or topography). I will describe how we characterize and directly visualize manifold
structure in the mammalian head direction circuit, revealing that the states form a topologically non-trivial 1D ring, which suggests
that the thousands of neurons in the network encode only a 1D variable. The ring exhibits isometry, and is invariant across waking
and REM sleep, directly demonstrating continuous attractor dynamics and enabling powerful inference about mechanism. Finally, I will
show that external rather than internal noise limits memory fidelity, and the manifold approach reveals new dynamical trajectories
during sleep.