Cracking the Code

We recognize objects effortlessly every day, yet object recognition is in fact a very difficult problem: even today's leading computer algorithms do not match human performance. Object recognition is not easy for the brain either; a series of cortical areas, taking up roughly 40% of the brain, is dedicated to vision. Yet we know very little about the code in which the brain represents objects for perception, or about how the brain transforms what we see into what we perceive. How do we crack the code for objects? What are its features, and what are its rules?


Our approach to this problem is best understood through an analogy to color. We see millions of colors, but it is well known that color perception is three-dimensional: any color we perceive can be represented using just three numbers. Can we do likewise for the millions of shapes we see? Do shapes also reside in a low-dimensional space?
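As a concrete illustration of the color analogy (a standard fact about display color encoding, not a result from our lab): a typical display can produce roughly 16.7 million distinct colors, yet every one of them is just a point in a three-dimensional space.

```python
# Illustration: millions of colors, but only three dimensions.
# Each displayable color is a triple of 8-bit values (R, G, B),
# so the entire gamut is a point cloud in a 3-D space.

def n_colors(bits_per_channel=8, channels=3):
    """Number of distinct colors representable at the given bit depth."""
    return (2 ** bits_per_channel) ** channels

def color_to_coords(r, g, b):
    """Map an 8-bit color to its coordinates in the unit 3-D color cube."""
    return (r / 255.0, g / 255.0, b / 255.0)

print(n_colors())                    # 16777216 distinct colors...
print(color_to_coords(255, 128, 0))  # ...each described by just 3 numbers
```

The open question in the text is whether an analogous low-dimensional coordinate system exists for shape.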

To gain insight into these questions, we perform three types of experiments in our lab: (1) behavioral experiments, brain imaging (fMRI), and perturbations (TMS) in humans; (2) recordings from single neurons in the monkey visual cortex; and (3) computer vision.

In the human experiments, we probe the perceptual representation using behavioral tasks such as visual search or categorization, and explore its neural correlates using brain imaging. In the monkey experiments, we probe the representation at the level of single neurons in the inferotemporal cortex, an area critical for object recognition. We use the behavioral and neuronal data to test and validate computational models of object recognition. Finally, we use computer vision to understand why vision is such a hard problem, and use our brain data to improve computer vision algorithms.
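To make the behavioral probe concrete, here is a minimal sketch of one common way visual-search data can be turned into a perceptual dissimilarity measure. The specific rule (dissimilarity as the reciprocal of average search time) and all the numbers below are illustrative assumptions, not data or methods confirmed by this text.

```python
# Sketch: estimating perceptual dissimilarity from visual search.
# Assumed rule (illustrative): the harder two objects are to tell
# apart, the longer it takes to find one among copies of the other,
# so the reciprocal of the mean search time can serve as a
# dissimilarity index for that pair of objects.

def dissimilarity(search_times_seconds):
    """Reciprocal of mean search time: faster search -> more dissimilar."""
    mean_rt = sum(search_times_seconds) / len(search_times_seconds)
    return 1.0 / mean_rt

# Hypothetical reaction times (seconds) for finding a target object
# among distractor objects:
similar_pair = [2.0, 2.4, 2.2]      # slow search -> similar objects
dissimilar_pair = [0.5, 0.6, 0.55]  # fast search -> dissimilar objects

assert dissimilarity(dissimilar_pair) > dissimilarity(similar_pair)
```

Collecting such pairwise dissimilarities across many object pairs is one way a perceptual "space" of objects could be mapped out behaviorally and then compared against neural recordings or computational models.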