Principles of Cognition in Brains and Machines
Computational neuroscience, empirical brain research and physical constraints of intelligent systems
I am a physicist, neuroscientist, cognitive scientist, and AI researcher, currently leading research groups at the Mannheim Center for Neuromodulation and Neuroprosthetics (MCNN), University of Heidelberg, and at Friedrich-Alexander-University Erlangen-Nuremberg (FAU).
My interdisciplinary research focuses on principles of cognition in brains and machines, bridging empirical neuroscience, computational modeling, and artificial intelligence. I investigate how cognitive functions emerge from neural systems and how insights from brain research can inform the development of biologically inspired and explainable AI models.
Bridging Brains and Machines
I study how cognitive functions, particularly in the auditory and linguistic domains, are represented and processed in the human brain and in artificial neural systems, including deep learning models and large language models (LLMs).
Naturalistic Auditory and Speech Processing
Using neuroimaging and electrophysiological techniques such as MEG, EEG, and intracranial EEG (iEEG), I study neural responses to continuous, real-world stimuli (e.g., audiobooks) to uncover the dynamic neural mechanisms underlying perception and cognition.
Auditory Phantom Perceptions
I study altered auditory perception, including tinnitus and hyperacusis, to better understand pathological brain dynamics and their relation to normal sensory and cognitive processing.
Neural Representations of Structure and Meaning
I model how the brain organizes and navigates complex information—such as spatial environments, abstract concepts, or linguistic structures—using neural network–based successor representations and related frameworks.
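As a concrete illustration of the successor-representation idea, the following is a minimal sketch for a tabular toy environment; the ring-shaped random walk, parameter values, and function names are illustrative and not drawn from a specific study.

```python
import numpy as np

# Minimal, illustrative sketch of a tabular successor representation (SR):
# M = sum_t gamma^t T^t = (I - gamma * T)^{-1}, where T is the state
# transition matrix under a fixed policy.

def successor_representation(T: np.ndarray, gamma: float = 0.9) -> np.ndarray:
    """Closed-form SR for transition matrix T under discount gamma."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Toy example: an unbiased random walk on a ring of 5 states.
n_states = 5
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, (s - 1) % n_states] = 0.5
    T[s, (s + 1) % n_states] = 0.5

M = successor_representation(T, gamma=0.9)
print(M.round(2))  # row s: discounted expected future occupancy of each state
```

Row s of M summarizes where the agent expects to spend time when starting from state s; neural-network variants replace the tabular matrix with learned state features.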
Multi-Scale Cognitive Modeling
My work integrates hierarchical representations, Bayesian inference, and predictive coding to study cognition across multiple temporal and spatial scales, from perception to abstract reasoning.
Biologically Inspired Neural Networks
I develop brain-constrained deep neural network models to simulate neural computation, aiming to advance robust, interpretable, and neuroscience-informed AI systems.
Explainable and Interpretable AI
By analyzing and reverse-engineering neural networks—including LLMs—I address the black-box problem in AI and explore how principles from neuroscience can guide the design of transparent and reliable architectures.
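One standard tool in this line of work is a linear probe, which tests whether a property of the input is linearly decodable from a network's hidden activations. The sketch below uses synthetic activations as a stand-in; in practice they would be extracted from a trained network or LLM, and all names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative linear probe on synthetic "hidden activations":
# can a binary input property be read out linearly from the representation?
rng = np.random.default_rng(4)
n_examples, hidden_dim = 500, 256

labels = rng.integers(0, 2, size=n_examples)
activations = rng.normal(size=(n_examples, hidden_dim))
activations[:, 0] += 2.0 * labels        # encode the label weakly along one direction

probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, activations, labels, cv=5)
print(scores.mean())  # accuracy well above 0.5 suggests the property is represented
```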
Recurrent Neural Networks and Dynamics
I investigate the structural and dynamical properties of recurrent neural networks using methods from dynamical systems theory, information theory, and statistical physics, including the role of noise and stochasticity in neural computation.
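A minimal sketch of the kind of model this involves, assuming a generic continuous-time rate network with random recurrent weights and additive noise (parameter values are illustrative):

```python
import numpy as np

# Illustrative stochastic rate RNN, tau * dx/dt = -x + g * W @ tanh(x) + noise,
# integrated with the Euler-Maruyama scheme. The gain g controls the transition
# to chaotic dynamics; sigma sets the strength of the stochastic drive.
rng = np.random.default_rng(0)
n, g, tau, dt, sigma, steps = 200, 1.5, 0.01, 0.001, 0.05, 2000

W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # random recurrent weights
x = rng.normal(size=n)                              # initial state
trajectory = np.empty((steps, n))

for t in range(steps):
    noise = sigma * np.sqrt(dt) * rng.normal(size=n)
    x = x + (dt / tau) * (-x + g * W @ np.tanh(x)) + noise
    trajectory[t] = x

print(trajectory.std(axis=0).mean())  # crude proxy for fluctuation amplitude
```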
Bayesian Brain and Predictive Coding
My research examines how perception emerges from interactions between top-down predictions and bottom-up sensory signals, extending Bayesian brain frameworks to both healthy and pathological auditory and speech processing.
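To make the scheme concrete, here is a minimal single-layer predictive-coding sketch: a latent cause generates the sensory input through a top-down mapping, and perception is cast as gradient descent on precision-weighted prediction errors. The setup and all names are illustrative, not a specific published model.

```python
import numpy as np

# Single-layer predictive coding: infer latent cause mu from noisy input y
# by minimizing precision-weighted prediction errors plus a prior term.
rng = np.random.default_rng(1)
n_sensory, n_latent = 20, 5

G = rng.normal(size=(n_sensory, n_latent))   # generative (top-down) mapping
mu_prior = np.zeros(n_latent)                # prior expectation on the cause
pi_sensory, pi_prior = 1.0, 0.1              # precisions (inverse variances)

true_cause = rng.normal(size=n_latent)
y = G @ true_cause + 0.1 * rng.normal(size=n_sensory)  # noisy sensory input

mu = mu_prior.copy()
lr = 0.01
for _ in range(500):
    eps_y = y - G @ mu            # bottom-up sensory prediction error
    eps_mu = mu - mu_prior        # deviation from the top-down prior
    mu += lr * (pi_sensory * G.T @ eps_y - pi_prior * eps_mu)

print(np.corrcoef(mu, true_cause)[0, 1])  # recovered vs. true latent cause
```

The same update, stacked across layers with each level predicting the one below, yields the hierarchical predictive-coding architectures used to model healthy and pathological auditory processing.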
Natural Language Processing for Neuroscience
I use NLP methods to align continuous speech stimuli with neural recordings, enabling fine-grained analyses of linguistic structure and neural dynamics under naturalistic conditions.
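A typical analysis built on such alignments is a lagged linear encoding model (temporal response function): time-aligned linguistic features are expanded with temporal lags and regressed onto the neural signal with ridge regularization. The sketch below uses synthetic data and illustrative names.

```python
import numpy as np

# Lagged ridge-regression encoding model (TRF) on synthetic data.
rng = np.random.default_rng(2)
fs = 100                      # sampling rate (Hz)
n_samples, n_features = 6000, 3

X = rng.normal(size=(n_samples, n_features))   # stand-in for aligned stimulus features

def lag_matrix(X, max_lag):
    """Stack copies of X delayed by 0..max_lag samples (zero-padded)."""
    lagged = [np.vstack([np.zeros((lag, X.shape[1])), X[:X.shape[0] - lag]])
              for lag in range(max_lag + 1)]
    return np.hstack(lagged)

max_lag = int(0.4 * fs)                        # model lags up to 400 ms
XL = lag_matrix(X, max_lag)

true_w = rng.normal(size=XL.shape[1])
y = XL @ true_w + rng.normal(scale=5.0, size=n_samples)  # synthetic neural channel

lam = 10.0                                     # ridge penalty
w = np.linalg.solve(XL.T @ XL + lam * np.eye(XL.shape[1]), XL.T @ y)
print(np.corrcoef(XL @ w, y)[0, 1])            # in-sample predictive accuracy
```

In practice the features would come from forced alignment and NLP-derived annotations (e.g., word onsets, surprisal, embeddings), and accuracy would be assessed on held-out data.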
Advanced Neuroimaging Analysis
I develop machine learning and clustering approaches to analyze high-dimensional, multimodal neuroimaging data, combining deep learning with theory-driven model interpretation.
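One common pattern in this workflow, sketched below on synthetic data with illustrative names, is dimensionality reduction followed by clustering of subjects or recording sites.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Reduce high-dimensional features (e.g., per-subject connectivity or spectral
# measures) with PCA, then cluster in the low-dimensional embedding.
rng = np.random.default_rng(3)
n_subjects, n_features = 120, 5000
X = rng.normal(size=(n_subjects, n_features))
X[:60, :50] += 1.5          # plant a weak group difference in a feature subset

Z = PCA(n_components=10, random_state=0).fit_transform(X)   # low-dim embedding
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)

print(labels[:60].mean(), labels[60:].mean())  # cluster assignments by planted group
```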
Through this work, I aim to identify general principles of cognition that govern information processing across brains and machines, while translating insights from neuroscience into AI systems that are robust, interpretable, and aligned with human cognition.