Principles of Cognition in Brains and Machines
Computational neuroscience, empirical brain research and physical constraints of intelligent systems
I am an interdisciplinary researcher working at the intersection of physics, neuroscience, cognitive science, and artificial intelligence, currently leading research groups at the Mannheim Center for Neuromodulation and Neuroprosthetics (MCNN), University of Heidelberg, and at Friedrich-Alexander-University Erlangen-Nuremberg (FAU).
My research aims to identify substrate-independent principles of cognition that govern information processing in biological and artificial systems. To this end, I work at the intersection of computational auditory and cognitive neuroscience, physics-inspired modeling, and artificial intelligence, combining empirical neuroimaging with theory-driven computational approaches.
A central goal of my work is to understand how cognitive functions emerge from neural dynamics and how these dynamical principles generalize across physical substrates, from the human brain to artificial neural systems. I treat cognition as a lawful dynamical process, shaped by constraints such as network architecture, noise, and energy landscapes, rather than as a collection of task-specific mechanisms.
My empirical work focuses on auditory and linguistic cognition under naturalistic conditions. Using MEG, EEG, and intracranial recordings (iEEG), I study neural responses to continuous real-world stimuli (e.g., audiobooks) to uncover the temporal dynamics and representational structure of speech and auditory processing. This approach extends the study of cognition beyond trial-based paradigms and links neural dynamics directly to real-world perception and comprehension.
I study altered auditory perception, including tinnitus and hyperacusis, as model systems for understanding dysfunctional brain dynamics. By relating pathological states to normal auditory processing, my work contributes to a mechanistic understanding of how stable and maladaptive neural attractor states emerge, with direct relevance for clinical neuroscience.
A major theoretical focus of my research is the question of how the brain organizes complex information. I model cognitive maps and internal representations of space, meaning, and linguistic structure using neural network–based successor representations and related frameworks. These models formalize cognition as movement within structured state spaces shaped by experience and prediction.
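To make the successor-representation idea concrete: for a fixed policy with transition matrix T and discount factor γ, the successor representation has the closed form M = Σ_t γ^t T^t = (I − γT)⁻¹, where M[s, s′] is the discounted expected future occupancy of state s′ starting from s. A minimal sketch on a hypothetical 5-state ring world (the environment, policy, and parameters are illustrative choices, not taken from my models):

```python
import numpy as np

# Hypothetical 5-state ring world with a fixed "step right" policy
# (all names and numbers here are illustrative only).
n_states = 5
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, (s + 1) % n_states] = 1.0  # deterministic transition s -> s+1

gamma = 0.9  # discount factor

# Successor representation: M = sum_t gamma^t T^t = (I - gamma T)^(-1).
# Entry M[s, s'] is the discounted expected future occupancy of s' from s.
M = np.linalg.inv(np.eye(n_states) - gamma * T)
```

Row s of M acts as a predictive map of where the agent is likely to be in the future; learning reward weights on top of M then separates the structure of the state space from the values assigned to it, which is what makes the framework attractive as a model of cognitive maps.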
Across projects, I integrate hierarchical representations, Bayesian inference, and predictive coding to study cognition across multiple temporal and spatial scales. This multi-scale perspective allows me to connect sensory processing, learning, and abstract reasoning within a unified dynamical framework.
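The core predictive-coding loop can be sketched in a few lines: a generative model predicts sensory input from latent causes, and inference proceeds by descending the prediction-error energy. A toy linear example with hypothetical weights and learning rate (a generic illustration of the scheme, not a specific model from my work):

```python
import numpy as np

# Toy linear generative model: latent causes -> sensory data.
# W, true_cause, and the learning rate are illustrative choices only.
W = 0.5 * np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0],
                    [1.0, -1.0]])
true_cause = np.array([1.0, -0.5])
x = W @ true_cause          # noiseless "sensory" observation

mu = np.zeros(2)            # inferred cause, initialized at a flat prior mean
lr = 0.1
for _ in range(200):
    err = x - W @ mu        # prediction error at the sensory layer
    mu += lr * (W.T @ err)  # gradient descent on the squared prediction error
```

Inference converges to the cause that best explains the input; stacking such layers, with each level predicting the activity of the level below, yields the hierarchical predictive-coding architecture referred to above.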
In parallel, I develop brain-constrained deep neural networks to simulate neural computation and to test hypotheses about cognitive dynamics. By comparing biological and artificial systems—including deep learning models and large language models (LLMs)—I investigate which computational principles are shared across substrates and which depend on specific biological constraints.
A key aim is to advance interpretable and explainable AI. By analyzing and reverse-engineering neural networks through methods inspired by neuroscience and physics, I address the black-box problem and explore how stable representations, attractor dynamics, and inductive biases contribute to robust cognition.
Conceptually, my work is guided by the view that cognition can be understood in terms of dynamical systems, attractor landscapes, and information-theoretic constraints. Drawing on methods from statistical physics, dynamical systems theory, and information theory, I study how noise, recurrence, and network structure shape neural computation. This perspective provides a principled bridge between empirical neuroscience and the design of cognitively grounded artificial systems.
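The attractor-landscape picture can be illustrated with a textbook Hopfield network (a generic example, not one of my brain-constrained models): Hebbian weights store patterns as minima of an energy function, and the recurrent dynamics pull a corrupted input back into the nearest stored pattern.

```python
import numpy as np

# Two orthogonal binary patterns stored as attractors via a Hebbian rule
# (a standard Hopfield network, used here purely as an illustration).
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, -1, -1, 1, 1, -1, -1])
W = np.outer(p1, p1) + np.outer(p2, p2)
np.fill_diagonal(W, 0)      # no self-connections

state = p1.copy()
state[0] = -1               # corrupt one bit of the stored pattern
for _ in range(5):
    state = np.sign(W @ state)  # each synchronous update lowers the energy
# The dynamics restore the corrupted input to the p1 attractor.
```

This is the sense in which stable representations correspond to basins in an energy landscape; the clinical work above asks, analogously, how maladaptive states such as tinnitus can become deep attractors of cortical dynamics.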
Overall, my research seeks to establish a physics-of-cognition perspective that is firmly anchored in computational auditory and cognitive neuroscience, while extending toward a general theory of cognition across brains and machines. By integrating empirical data, computational models, and theoretical analysis, I aim to contribute to a unified understanding of cognition that is both biologically grounded and substrate-independent.