Research

We want to understand how sensory signals are processed by neural devices (e.g. human, insect or fish brain) to control and drive behaviour. By 'understand' we mean account for all measurable aspects of sensory behaviour via simple models consisting of a few elements embedded within a physiologically plausible circuit.

We are not interested in processes that cannot be measured; for this reason, the first step in our enquiry consists of a thorough and extensive empirical characterization of the specific process under investigation. Data are then used to constrain fully specified computational models which we assess using mathematics or computer simulations.

The two steps of 1) characterizing the process experimentally on the one hand, and 2) accounting for the empirical results via computational models on the other, are not in our view separable. We strive to integrate the two approaches as closely as possible for each project and each researcher. We believe it is critical that the same individual understands and handles both steps. Outsourcing either step belittles the complexity of both: the two mutually inform each other in ways that can only be fully exploited by someone who grasps that complexity at both levels at once.

We approach the sensory process by first describing it as a tractable mapping between the input stimulus and the behavioural decision. We then focus our efforts on two specific sensory systems, vision and audition. We attempt to estimate the perceptual operator underlying the sensory process used by the observer to detect a specified signal; for this purpose we rely on recent technical developments in psychophysics, although some of our projects use more classic threshold-based approaches.
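One widely used psychophysical technique for estimating a perceptual operator is reverse correlation: present noisy stimuli, sort the noise samples by the observer's response, and average. The sketch below simulates a hypothetical template-matching observer (the template, trial counts, and criterion are assumptions for illustration, not a description of any specific study) and recovers its classification image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "stimulus" of 32 elements; the simulated observer's
# internal template weights the first half positively (an assumption
# chosen purely for illustration).
n_elements, n_trials = 32, 20000
template = np.zeros(n_elements)
template[: n_elements // 2] = 1.0

# Noise-only trials: the observer says "yes" when the template-weighted
# stimulus noise plus internal noise exceeds a fixed criterion of zero.
noise = rng.normal(0.0, 1.0, size=(n_trials, n_elements))
decision_var = noise @ template + rng.normal(0.0, 1.0, n_trials)
says_yes = decision_var > 0.0

# Classification image: mean noise on "yes" trials minus mean noise on
# "no" trials. This recovers a scaled, noisy estimate of the template.
cimg = noise[says_yes].mean(axis=0) - noise[~says_yes].mean(axis=0)

# With enough trials the estimate correlates strongly with the
# generating template.
r = np.corrcoef(cimg, template)[0, 1]
```

With real observers the same subtraction is applied to their yes/no responses; the number of trials needed is what drives the "realistic constraints" on data collection mentioned below.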

The resulting characterization is often as detailed as is feasible given realistic constraints on the number of participants and the amount of data collected from a given participant. This characterization is used to guide the implementation of computational models, typically consisting of a physiologically plausible front-end circuit feeding into a standard signal detection theory decision model. We have used this approach to study a number of phenomena in human vision and audition, ranging from low-level feature detection to natural images and sounds; please refer to our list of publications for details. For an overview of our current work, see this presentation at IRCAM (Paris) and/or refer to the topical subsections in the drop-down menu on the left or down below.
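The decision stage of such models is usually summarized by standard signal detection theory quantities. As a minimal sketch (the function name, the example counts, and the log-linear correction are our own choices, not taken from any specific publication), the equal-variance Gaussian model reduces a yes/no detection experiment to a sensitivity index d' and a criterion c:

```python
from statistics import NormalDist

def sdt_summary(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT for a yes/no detection task.

    Returns sensitivity d' = z(H) - z(F) and criterion
    c = -(z(H) + z(F)) / 2, where H and F are hit and false-alarm
    rates. A log-linear correction (add 0.5 to each count, 1 to each
    total) guards against rates of exactly 0 or 1.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(h) - z(f)
    criterion = -0.5 * (z(h) + z(f))
    return d_prime, criterion

# Illustrative counts: 80 hits / 20 misses on signal trials,
# 30 false alarms / 70 correct rejections on noise trials.
dp, c = sdt_summary(80, 20, 30, 70)
```

In the full models, d' is not free but is predicted by the front-end circuit's response to the stimuli, which is what ties the physiology to the behavioural data.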

What does it mean to "understand" a visual operation? A gallery of algorithms and circuits, and a brief explanation of why they work.

How do nets compare with humans when extracting features from natural scenes, detecting signals in noise, and looking at abstract art?

The remarkable noisiness of human behaviour, its statistical distribution, and its surprising relationship with the calculus of variations and zebrafish.

From single spikes to circuit models.

Feature conjunction in zebrafish? Yes. Complex visual analysis in the fighting fish? Again, yes!

How do we combine local motion signals to make out the complex movements generated by others? What does this have to do with mirror neurons, or autism?

Recording signals from the human brain, striking a balance between spatial resolution (fMRI) and temporal resolution (EEG).