Research

SELF-ORGANIZATION WITHIN BRAINS AND BEYOND

The human brain has remarkable information processing capacities, which rely on the coordinated activity of about 80 billion neurons, each interacting with thousands of other neurons. I want to understand how these neurons jointly form a functional network for information processing. Given that trillions of neural connections (synapses) have to be tuned and coordinated to perform meaningful information processing, this “infogenesis” must be driven primarily by self-organizing principles. I see a key role for self-organization through local learning rules, i.e., principles that update the connection strength between neurons based only on locally available information. My aim is to derive and understand a priori how such local learning rules can optimize a network for information processing, independently of, and potentially even before, any exposure to specific stimuli from the external world. If identified, such a self-organization mechanism can, on the one hand, improve the initialization and effectiveness of artificial neural networks, and on the other hand shed light on the riddle of how our complex brains cope so seemingly effortlessly with a complex environment.
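The notion of a local learning rule can be made concrete in a few lines. The following is a deliberately minimal illustrative sketch (not a model from our publications): a rate network whose weights are shaped by a Hebbian term plus homeostatic normalization, both of which use only information available at the pre- and postsynaptic neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50                            # number of rate neurons
W = rng.normal(0.0, 0.1, (n, n))  # synaptic weights; W[i, j]: j -> i
np.fill_diagonal(W, 0.0)          # no self-connections
eta = 0.01                        # learning rate

x = rng.random(n)                 # initial firing rates

for step in range(1000):
    # network update: neuron i sees only its own inputs W[i] @ x
    x = np.tanh(W @ x + 0.1 * rng.standard_normal(n))
    # Hebbian term: depends only on pre- and postsynaptic activity
    W += eta * np.outer(x, x)
    np.fill_diagonal(W, 0.0)
    # homeostatic normalization: each neuron rescales its incoming
    # weights, again a purely local operation
    W /= np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-12)
```

No teacher signal and no external stimulus enters; the weights organize purely through activity generated by the network itself.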

PROJECT DETAILS

I. Inference and Forecast of Disease Spread. With the outbreak of COVID-19, it turned out that our work on spreading dynamics in the brain under subsampling is highly valuable for investigating the spread of SARS-CoV-2. We investigated how the reproduction number R changes as a consequence of a given intervention, such as closing schools or imposing a contact ban. We then continued to investigate the challenges of mitigating COVID-19 spread via test-trace-isolate strategies. These strategies are inherently imperfect, and their capacity is limited. Hence, we identified an important tipping point: if case numbers are so high that a local health authority can no longer trace contacts and break the infection chains, increasingly many carriers remain unaware of their infection. These unaware carriers are the drivers of a self-accelerating spread. The conclusion is straightforward: if case numbers are low, controlling the spread is much easier. Beyond the tipping point, regaining control becomes increasingly difficult.
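The tipping point can be illustrated with a deliberately minimal toy model (all parameter names and values are illustrative, not those of our published analyses): per generation, at most a fixed number of cases can be traced and isolated early, spreading with a reduced reproduction number, while the remainder go unnoticed and spread with the full one.

```python
import numpy as np

def simulate(cases0, R_hidden=1.8, R_traced=0.8, capacity=50, steps=60):
    """Toy test-trace-isolate model (illustrative parameters).
    Per generation, at most `capacity` cases are traced and isolated
    early (reduced R_traced); the rest remain unnoticed and spread
    with the full R_hidden."""
    cases = [float(cases0)]
    for _ in range(steps):
        traced = min(cases[-1], capacity)
        hidden = cases[-1] - traced
        cases.append(traced * R_traced + hidden * R_hidden)
    return np.array(cases)

low = simulate(40)   # below tracing capacity: the outbreak dies out
high = simulate(80)  # above capacity: unaware carriers drive growth
```

With these numbers the unstable equilibrium sits at 62.5 cases per generation: below it, tracing pushes the effective reproduction number under 1 and case numbers decay; above it, the untraced fraction grows and the spread self-accelerates.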
In parallel, we are developing a hierarchical Bayesian model to investigate the impact of non-pharmaceutical interventions (NPIs) using European as well as federal data. We also critically revisit current case reports and the effects of increased testing.
Importantly, together with great colleagues we published a number of statements on COVID-19 mitigation and control strategies. Two of these statements were signed by all four presidents of the Fraunhofer, Helmholtz, Leibniz and Max Planck Societies.
If you are interested in this line of research, feel welcome to join our team!

References:

Alwan et al., The Lancet (2020)
Contreras et al., arXiv
Dehning et al., Science (2020) / arXiv
Dehning et al., medRxiv
Linden et al., arXiv

Github: https://github.com/Priesemann-Group/covid19_inference_forecast

Meyer-Hermann*, Pigeot*, Priesemann*, Schöbel*: Adaptive Strategien zur Eindämmung der COVID-19 Epidemie (Adaptive strategies for containing the COVID-19 epidemic)
Meyer-Hermann*, Pigeot*, Priesemann*, Schöbel*: Gemeinsam können wir es schaffen: Jeder einzelne Beitrag schützt Gesundheit, Gesellschaft und Wirtschaft (Together we can do it: every single contribution protects health, society and the economy)


II. Subsampling. In most large networks, it is impossible to sample the activity of all nodes in parallel. For example, the human brain comprises about 80 billion neurons, but current techniques allow sampling the activity of only a few hundred neurons at a time. I showed that subsampling can severely impede inferences about the collective states and properties of the full system. In particular, subsampling in critical models can distort the expected power-law relations, so that a critical system can be misinterpreted as sub- or supercritical. I am developing approaches to overcome subsampling effects by extending methods from finite-size scaling theory and harnessing the theory of non-equilibrium phase transitions.
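The distortion is easy to reproduce in a minimal sketch (a generic critical branching process, not a model from the referenced papers): simulate avalanches at criticality, then observe each unit only with a small probability.

```python
import numpy as np

rng = np.random.default_rng(1)

def avalanche_sizes(m=1.0, n_avalanches=10000, max_size=10**5):
    """Galton-Watson branching process: each active unit triggers on
    average m offspring. At m = 1 (critical), avalanche sizes follow
    a power law P(S) ~ S**(-3/2)."""
    sizes = np.empty(n_avalanches, dtype=np.int64)
    for i in range(n_avalanches):
        active, total = 1, 1
        while active > 0 and total < max_size:
            active = rng.poisson(m * active)
            total += active
        sizes[i] = total
    return sizes

full = avalanche_sizes()

# subsampling: each unit is observed with probability p; avalanches
# with no observed unit are missed entirely
p = 0.01
observed = rng.binomial(full, p)
observed = observed[observed > 0]
```

Comparing histograms of `full` and `observed` shows how the subsampled size distribution deviates from the true power law: most avalanches are missed or truncated, so the critical system can look subcritical.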

Selected References:
VP et al., Frontiers in Systems Neuroscience (2014)
Levina & VP, Nature Communications (2017)
Wilting & VP, Nature Communications (2018)
Neto, Spitzner & VP, arXiv

In experiments only a tiny fraction (right) of all neurons can be sampled in parallel (figure from Wilting & VP, 2018, generated with TREES).

III. Reverberating, Subcritical Dynamics in vivo. In neuroscience, a popular hypothesis states that collective neural dynamics should self-organize to a critical state, because at criticality simple models maximize their information processing capacities. However, criticality also comes with the risk of spontaneous runaway activity (epilepsy). I obtained the first evidence that neural dynamics maintains a distance to criticality, and thus keeps a safety margin to runaway activity. This distance to criticality is consistent across different species, but changes from wakefulness to deep sleep. Most recently, I succeeded in taming the out-of-equilibrium effects imposed by ongoing external stimulation, and as a result developed a very effective method to quantify the distance to criticality from very little data. This opens novel avenues to study how networks tune their distance to criticality – and hence their information processing – from one moment to the next, depending on task requirements and context.
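The core idea behind such an estimator can be sketched as follows (a simplified illustration of multistep regression in the spirit of Wilting & VP 2018, with arbitrary parameters): for a driven branching process, the regression slope between activity at times t and t + k decays as m**k. Because subsampling rescales all slopes by a common factor but preserves this exponential decay, m – and with it the distance to criticality 1 − m – can be read off from slopes at several lags.

```python
import numpy as np

rng = np.random.default_rng(2)

# driven branching process: A[t+1] ~ Poisson(m * A[t] + h)
m_true, h, T = 0.95, 10.0, 50000
A = np.empty(T)
A[0] = h / (1.0 - m_true)        # start near the stationary mean
for t in range(T - 1):
    A[t + 1] = rng.poisson(m_true * A[t] + h)

# multistep regression: the slope of A[t+k] versus A[t] decays as m**k,
# so fitting log(slope) against k yields the branching parameter m
ks = np.arange(1, 21)
slopes = [np.polyfit(A[:-k], A[k:], 1)[0] for k in ks]
m_est = np.exp(np.polyfit(ks, np.log(slopes), 1)[0])
# distance to criticality: 1 - m_est
```

Here m close to 1 corresponds to the reverberating regime: activity decays slowly but does not run away.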

Selected References:
Cramer et al. & VP, Nature Communications (2020)
VP et al., PLOS Computational Biology (2013)
Wilting & VP, Cerebral Cortex (2019)
Wilting & VP, Current Opinion Neurobiol. (2019)
Zierenberg et al., Physical Review X (2018)

IV. Information Theory. I harness information theory in two complementary ways: to infer the coding principles of living neural networks, which were developed, tested and selected over millions of years of evolution, and as a guiding principle to design networks for optimized processing. Here, information theory provides the ideal language, because it is independent of semantics. It thereby enables a comparison of coding principles across very diverse architectures and implementations. This is useful, e.g., when comparing coding across species and brain areas, and has been key for us to unravel a clear hierarchy of processing across cortex. Our ongoing and future work addresses the role of phase transitions in information processing; local learning as a form of symmetry breaking; architectures to optimize Gibbs sampling despite strong correlations; and the invariance of information flow under coarse-graining (contact me if any of this catches your interest!).
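A workhorse of this approach is transfer entropy, which quantifies directed information flow without reference to semantics. A minimal plug-in estimator for binary sequences (a textbook sketch with history length 1, not our published estimators) looks like this:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy TE(X -> Y) in bits for
    binary sequences, with history length 1:
    TE = I(Y_t ; X_{t-1} | Y_{t-1})."""
    joint = Counter(zip(y[1:], x[:-1], y[:-1]))
    n = len(y) - 1
    te = 0.0
    for (yt, xp, yp), c in joint.items():
        p = c / n
        p_yp = sum(v for (a, b, g), v in joint.items() if g == yp) / n
        p_yt_yp = sum(v for (a, b, g), v in joint.items()
                      if a == yt and g == yp) / n
        p_xp_yp = sum(v for (a, b, g), v in joint.items()
                      if b == xp and g == yp) / n
        te += p * np.log2(p * p_yp / (p_yt_yp * p_xp_yp))
    return te

x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)               # y is a delayed copy of x
te_xy = transfer_entropy(x, y)  # close to 1 bit: X determines Y
te_yx = transfer_entropy(y, x)  # close to 0 bits: no flow back
```

The asymmetry between the two directions is what makes transfer entropy useful for mapping a hierarchy of processing: it reveals who drives whom, regardless of what the signals mean.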

Selected References:
Wibral, Lizier & VP, Frontiers in Robotics and AI (2015)
Wibral, Finn, Wollstadt, Lizier, VP, Entropy (2018)
Wollstadt et al., PLOS Computational Biology (2017)


V. Music. Long-range correlations dominate neural activity. Therefore music, which may be considered a mirror of the soul – or of the brain – should also reflect these correlations. In collaboration with Theo Geisel, we investigate how long-range correlations and information-theoretic quantities change across genres, and whether long-range correlations play a central role in making Swing swing.
References:
Sogorski, Geisel & VP, PLOS ONE (2018)
Datseris et al., arXiv / in press (2019)

To analyze long-range correlations in the tempo fluctuations of music performances, the beat of each piece must be extracted with millisecond precision [workflow from Sogorski, Geisel & Priesemann, 2018].

RECENT HIGHLIGHTS (10/2018)

* Operating in a reverberating regime enables rapid tuning of network states to task requirements: Wilting et al., Front. Syst. Neurosci., (2018)

* What makes in vitro and in vivo dynamics so different? – We derive the central role that the strength of external input plays in shaping collective dynamics: Zierenberg et al., PRX (2018)

* We developed two analytical solutions to overcome subsampling-induced bias,
one for time series: Wilting & Priesemann, arXiv / Nature Comm. (2018),
and one for spatial structures, such as degree distributions in a graph or avalanches: Levina & Priesemann, Nature Comm. (2017)