This line of research investigates how humans develop, organise, and use visual and multimodal representations. It connects infant cognition, adult object recognition, computational modelling, and artificial intelligence to understand how perceptual systems extract stable meaning from highly variable sensory input.
In infancy, the research focuses on how infants learn to recognise, classify, and link new experiences to prior encounters. A central question is how infants develop the ability to categorise other individuals by integrating visual face information with auditory voice information. These projects examine when face and voice categorisation emerge, whether they become integrated into multimodal representations, and how these abilities develop through increasing social interaction.
In adults, the research focuses on object and face recognition. Humans can recognise familiar objects rapidly and flexibly despite changes in viewpoint, scale, illumination, occlusion, texture, noise, and background clutter. At the same time, they can discriminate very subtle differences between visually similar entities. This tension between invariance and specificity makes object and face recognition a central model system for studying visual cognition.
Current projects combine behavioural experiments, psychophysics, computational modelling, artificial neural networks, and neurosymbolic architectures. They ask how classical object-recognition models can be extended by symbolic or top-down components, whether such models better capture human robustness under degraded or transformed viewing conditions, and whether symbolic layers can improve interpretability without reducing performance.
Together, this research line studies how perceptual systems develop stable categories, integrate information across sensory modalities, and support flexible recognition across changing environments.
This line of research investigates how dogs interpret human communicative signals and how dog–human interaction can be quantified experimentally. A central question is whether dogs treat human gestures, such as pointing, as referential signals to specific objects or whether their behaviour is better explained by attentional and spatial mechanisms.
Current work includes a multi-cup pointing study using symmetric and asymmetric spatial layouts to separate coarse side-following from more precise object-directed choice. This design tests whether dogs follow pointing as a target-specific communicative cue or whether performance is mainly driven by attention to the indicated side of space. Recent work from this project has been reported in:
Jennifer D. Mugleston, Shin-Miau Huang, and Christoph D. Dahl. Referential and attentional accounts of dog point-following in an asymmetric multi-cup design. bioRxiv, 2026.
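The logic of this dissociation can be sketched with a small simulation: under pure side-following, a dog chooses at random among the cups on the indicated side, whereas a target-specific (referential) strategy selects the pointed-at cup itself. The cup counts and choice rules below are illustrative assumptions for exposition, not the published design.

```python
import random

def simulate_trials(n_trials, cups_on_pointed_side, strategy, rng):
    """Simulate choices in a hypothetical multi-cup pointing task.

    strategy "target": always choose the pointed-at cup (referential account).
    strategy "side":   choose uniformly among cups on the indicated side
                       (attentional/spatial account).
    Returns the proportion of choices landing on the target cup.
    """
    hits = 0
    for _ in range(n_trials):
        if strategy == "target":
            hits += 1
        elif strategy == "side":
            # Any cup on the indicated side is equally likely; index 0 is the target.
            hits += rng.randrange(cups_on_pointed_side) == 0
    return hits / n_trials

rng = random.Random(0)
# With 3 cups on the pointed side, side-following predicts ~1/3 target hits,
# while a target-specific strategy predicts 1.0, so the two accounts separate.
p_side = simulate_trials(10_000, 3, "side", rng)
p_target = simulate_trials(10_000, 3, "target", rng)
```

Observed choice rates between these two predictions would then indicate a mixture of, or a third, mechanism.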
More broadly, this research line examines dog–human communication, shared attention, behavioural synchronisation, and the co-evolution of dog and human social cognition. It combines controlled behavioural experiments with computational analysis to clarify which mechanisms underlie dogs’ responses to human communicative cues.
This line of research uses simulations, machine learning, artificial neural networks, and artificial agents to model cognitive mechanisms. The goal is to test how cognitive capacities can emerge from implementable computational systems.
Current projects include models of visual object recognition, face recognition, perceptual abstraction, perceptual narrowing, category learning, symbol grounding, minimal cognition, cooperation, and robot-based approaches to communication and agency. These projects examine how artificial systems acquire useful representations, generalise across transformations, act on learned information, and connect perception with behaviour.
The broader aim is to build computational models that link behavioural experiments, neuroscience, comparative cognition, and artificial intelligence.
Research project funding: General Research Project, National Science and Technology Council (NSTC; formerly MOST), Taiwan. Identification number: 112-2410-H-038-027. Title: Computational modeling of adaptation in the visual system.
This line of research develops formal descriptions of cognition, behaviour, and social interaction. The goal is to express cognitive processes in quantitative terms that allow precise comparison across species, tasks, and systems.
Current projects use information theory, entropy, mutual information, network analysis, dynamical systems, and formal modelling to study category formation, abstraction, social interaction, dominance, synchronisation, cooperation, and the flow of information within groups. These approaches ask how much information is preserved, lost, compressed, or transformed when organisms perceive, categorise, decide, communicate, or interact.
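Two of these quantities, entropy and mutual information, can be sketched with a naive plug-in estimator on discrete behavioural labels. The toy stimulus and response sequences below are hypothetical, and a real analysis would need bias correction for small samples.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H(X) in bits of a sequence of discrete labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for paired discrete sequences."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Hypothetical data: stimulus categories and an animal's binary responses.
stimuli   = ["A", "A", "B", "B", "A", "B", "A", "B"]
responses = ["L", "L", "R", "R", "L", "R", "L", "R"]
# Here the responses track the category perfectly, so the full 1 bit of
# category information is preserved in behaviour.
mi = mutual_information(stimuli, responses)
```

Lower values of `mi` would quantify how much category information is lost or compressed between perception and response.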
The broader aim is to provide mathematical tools for comparative cognition: not only asking whether a species can solve a task, but also describing the structure of the information that makes the solution possible.
Research output:
Christoph D. Dahl. Information funnels and multiscale gap-space dynamics in Kaprekar's routine. arXiv preprint arXiv:2512.05124, 2025.
Christoph D. Dahl. Coarse-grained drift fields and attractor-basin entropy in Kaprekar’s routine. Entropy, 2026.
Christoph D. Dahl. An Information-Theoretic Analysis of Category Maps and Target Preservation. bioRxiv, 2026.
Christoph D. Dahl and Timothy J. Lane. Competing democratic and autocratic long-run regimes in a minimal norm–institution model. SSRN, 2026.
This line of research investigates how fish perceive, decide, learn, and interact in social contexts. Fish provide an important comparative model for studying cognition because they combine rich behavioural flexibility with experimentally accessible neural and social systems.
Current work focuses on two main areas. The first concerns quantity perception and numerosity-based decision-making. Using zebrafish and related species, we examine how fish evaluate number, size, spatial extent, food quantity, and threat-related cues when making adaptive choices. Rather than treating numerosity as a simple preference for “more,” this work asks how quantity information is integrated with ecological context, reward value, risk, and competing perceptual dimensions.
The second area concerns social interaction and group dynamics. We use tracking, machine learning, and information-theoretic analyses to quantify how individuals move, coordinate, and exchange information within groups. These projects examine social learning, group-level behavioural structure, dominance or influence patterns, and the flow of information from informed to uninformed individuals during collective behaviour.
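One standard way to quantify group-level coordination from tracking data is the polarization order parameter: the length of the mean heading unit vector across individuals. The headings below are made-up values for illustration; this is a sketch of the measure, not the lab's actual analysis pipeline.

```python
from math import cos, sin, hypot

def polarization(headings):
    """Group polarization: length of the mean heading unit vector.

    headings: individual movement directions in radians.
    Returns a value in [0, 1]: 1.0 means all individuals are aligned,
    values near 0 mean headings are scattered around the circle.
    """
    n = len(headings)
    mean_x = sum(cos(h) for h in headings) / n
    mean_y = sum(sin(h) for h in headings) / n
    return hypot(mean_x, mean_y)

# Hypothetical snapshots of a four-fish group:
aligned   = [0.10, 0.00, -0.05, 0.08]   # near-identical headings
scattered = [0.00, 1.57, 3.14, 4.71]    # roughly orthogonal headings
```

Tracking polarization over time, together with who changes heading first, is one route to identifying the flow of information from informed to uninformed individuals.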
Together, this research line uses fish as a model system for understanding the evolution of cognition, the computational principles of decision-making, and the link between individual perception and group-level behaviour.
Research project funding: Research Project for Newly-recruited Personnel, Ministry of Science and Technology, Taiwan. Identification number: 110-2311-B-038-002. Title: Quantifying the effect of multiple neurotransmitter systems on group-level animal behaviour through machine learning.
Research output:
Hsin-En Cheng and Christoph D. Dahl. Ecological context gates numerosity-based affiliation decision in zebrafish. bioRxiv, 2026.
This line of research uses computational ethology to quantify behaviour in freely moving animals. Computational ethology combines ethology with mathematics, computer science, artificial intelligence, and machine learning to detect predefined behaviours and discover previously unrecognised behavioural patterns.
A central focus is higher-order cognition in small-brained animals. These projects ask how animals with comparatively small nervous systems can perceive, learn, recognise individuals, navigate environments, make decisions, and interact socially. Rather than treating small brains as simple systems, this work examines which neural and computational principles allow complex behaviour to emerge from limited biological resources.
The broader goal is to develop a minimal-brain approach to cognition. This asks what the smallest set of mechanisms is that can support internal representation, category formation, adaptive decision-making, social recognition, or simple forms of agency. The research combines behavioural experiments, computational modelling, virtual environments, and real-world animal observations to compare cognition across biological and artificial systems.
Together, this programme uses small-brained animals as model systems for identifying the minimal neural, behavioural, and computational requirements for cognition.
Research output:
Christoph D. Dahl and Yaling Cheng. Individual recognition in a jumping spider (Phidippus regius). eLife, 2025.
Christoph D. Dahl and Yaling Cheng. Individual recognition in a jumping spider (Phidippus regius). bioRxiv, 2023.