Optic Flow and Wayfinding Lab

We build neural models and perform experiments that link perception, action, and behavior with the brain. The focus is on human navigation: what strategies and mechanisms in the primate brain make humans so capable when moving through dynamic, complex environments, such as when driving through traffic or walking through Grand Central Terminal? Our work aims to uncover neural mechanisms that inform the development of biologically derived algorithms for perception and control.

Neural networks, vision, optic flow, navigation, dynamical systems, perception and action, neuroscience, temporal dynamics

Robustness and Stability of Heading Perception

How do humans reliably perceive self-motion in dynamic environments?

Everyday situations demand that humans perceive their self-motion (heading) despite rain, snow, smoke, blowing leaves, and other globally discrepant motion. Moving objects may occupy large portions of the visual field for extended periods of time, yet our perception of self-motion remains robust and stable. This project aims to uncover the mechanisms in the visual system that support such robust heading perception, with experiments designed to characterize how humans perceive heading in dynamic environments.
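To make the computational problem concrete, the Python sketch below (an illustration, not our model) estimates heading as the focus of expansion of a radial flow field by simple least squares, then shows how globally discrepant, snow-like motion biases such a naive pooled estimate. The scene geometry, noise levels, and the 30% snow fraction are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Observer translation toward a frontoparallel surface produces radial flow
# expanding from the focus of expansion (FOE), which coincides with heading.
true_foe = np.array([0.2, -0.1])                  # heading in image coordinates
pts = rng.uniform(-1.0, 1.0, size=(500, 2))       # sampled image locations
flow = 0.5 * (pts - true_foe)                     # idealized translational flow

# Corrupt 30% of vectors with downward, snow-like motion unrelated to heading.
snow = rng.random(len(pts)) < 0.3
flow[snow] = np.array([0.0, -0.8]) + 0.05 * rng.standard_normal((snow.sum(), 2))

# Each uncorrupted vector constrains the FOE to lie along its line of motion:
# v_y * foe_x - v_x * foe_y = v_y * p_x - v_x * p_y.  Solve in least squares.
A = np.column_stack([flow[:, 1], -flow[:, 0]])
b = flow[:, 1] * pts[:, 0] - flow[:, 0] * pts[:, 1]
foe_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print("true heading:", true_foe, " naive estimate:", foe_hat)

The naive estimate is pulled away from the true heading by the discrepant vectors; characterizing when and how human observers avoid this kind of bias is the focus of the experiments.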

Competitive Dynamics Model

The Competitive Dynamics model is a dynamical systems model of brain areas LGN, V1, MT, and MSTd that simultaneously generates human-like heading estimates and recovers world-relative object motion from optic flow. Data collected from experiments in our lab and in other labs are used to iteratively test and refine the model. The model leverages its biological design to dramatically improve computational efficiency. Recent progress addresses curvilinear path perception and integrates disparity to improve the robustness of self-motion and object-motion estimates. Ongoing work is adapting the model to simulate navigation scenarios in virtual environments and in video collected from flying drones.
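The toy sketch below illustrates one ingredient common to template-style heading models: a bank of MSTd-like radial-flow templates whose responses compete through simple leaky, divisive dynamics. It is a heavily simplified stand-in, not the Competitive Dynamics model itself; the template layout, noise level, and dynamics parameters are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(400, 2))       # sampled image locations

def radial_flow(foe):
    # Unit flow directions for pure translation, expanding from the FOE.
    v = pts - foe
    return v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)

# A bank of candidate headings along the horizon and their flow templates.
candidates = np.linspace(-0.8, 0.8, 33)
templates = np.stack([radial_flow(np.array([c, 0.0])) for c in candidates])

# Input flow for a true heading of 0.25, with additive noise.
true_heading = 0.25
flow = radial_flow(np.array([true_heading, 0.0])) + 0.05 * rng.standard_normal(pts.shape)

# Feedforward drive: how well each template matches the input flow field.
drive = np.einsum('kij,ij->k', templates, flow) / len(pts)

# Toy recurrent competition: each unit is excited in proportion to its drive
# and divisively inhibited by the pooled activity of the whole population.
act = np.full(candidates.shape, 0.01)
for _ in range(300):
    act += 0.1 * (-act + drive * act / (0.1 + act.sum()))
    act = np.clip(act, 0.0, None)

print("true heading:", true_heading, " winning template:", candidates[np.argmax(act)])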

Object Motion Perception during Self-Motion

How do we accurately perceive the motion of objects that move through the world?

One reason that humans interact so effectively with moving objects during self-motion is that we perceive their trajectories in much the same way whether we are stationary or moving. This is remarkable considering that our self-motion may radically alter an object's motion on the retina. A soccer ball that flies up and to the left may generate upward motion on the retina of a moving observer, yet we do not notice a discrepancy because the brain factors out our self-motion. This project addresses the principles by which the visual system integrates the multiple signals available during self-motion (feedback and predictions, optic flow, vestibular signals, motor efference copies, etc.) to recover the motion of objects relative to the stationary world.
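The sketch below illustrates the underlying decomposition, sometimes called flow parsing, under strong simplifying assumptions: given a known depth and an estimate of self-motion, the self-motion component of retinal flow predicted by a standard pinhole motion-field formulation is subtracted to recover an object's world-relative motion. The self-motion, depth, and object values are illustrative, and the example shows how error in the self-motion estimate propagates into the recovered object motion.

import numpy as np

def self_motion_flow(x, y, Z, T, w):
    # Retinal flow at normalized image point (x, y) and depth Z for observer
    # translation T = (Tx, Ty, Tz) and rotation w = (wx, wy, wz), using a
    # standard pinhole motion-field formulation.
    Tx, Ty, Tz = T
    wx, wy, wz = w
    u = (-Tx + x * Tz) / Z + (x * y * wx - (1 + x**2) * wy + y * wz)
    v = (-Ty + y * Tz) / Z + ((1 + y**2) * wx - x * y * wy - x * wz)
    return np.array([u, v])

T = np.array([0.0, 0.0, 1.5])            # forward walking speed (illustrative)
w = np.array([0.0, 0.05, 0.0])           # slow eye rotation (illustrative)
x, y, Z = 0.2, -0.1, 4.0                 # object's image position and depth

object_motion = np.array([0.3, 0.1])     # object's own, world-relative image motion

# The retinal motion of the object mixes its own motion with self-motion flow.
retinal = object_motion + self_motion_flow(x, y, Z, T, w)

# Flow parsing: subtract the flow predicted from the *estimated* self-motion.
T_est = T + np.array([0.0, 0.0, 0.2])    # imperfect self-motion estimate
recovered = retinal - self_motion_flow(x, y, Z, T_est, w)

print("true object motion:     ", object_motion)
print("recovered object motion:", recovered)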

Detection of Moving Objects

How do we differentiate between stationary and independently moving objects during self-motion?

While luminance, color, and texture contrast may be used to differentiate objects from their surroundings (figure-ground segregation), these properties can fail to be informative, as when an animal is camouflaged. In such scenarios, motion provides a powerful means of figure-ground segregation: humans have little difficulty noticing an animal when sudden movement breaks its camouflage. Motion aids the detection of moving objects even during self-motion, when motion pervades the entire visual field. For example, we effortlessly differentiate between moving and parked cars while driving. This project concerns the identification of neural mechanisms that enable humans to readily distinguish earth-fixed objects from those that move independently of the observer during self-motion.
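As a minimal illustration of the computational problem (not our proposed mechanism), the sketch below predicts the flow field produced by the observer's estimated heading and flags flow vectors whose directions are inconsistent with that prediction as independently moving. The scene, noise level, and threshold are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, size=(300, 2))       # sampled image locations
foe = np.array([0.0, 0.0])                        # estimated heading: straight ahead
flow = 0.5 * (pts - foe) + 0.01 * rng.standard_normal(pts.shape)

# An independently moving object occupies the first 20 samples and drifts left.
flow[:20] += np.array([-0.6, 0.0])

# Self-motion predicts flow directed radially away from the FOE; compare the
# observed flow direction against that prediction at each location.
radial = pts - foe
radial /= np.linalg.norm(radial, axis=1, keepdims=True)
unit_flow = flow / np.linalg.norm(flow, axis=1, keepdims=True)
alignment = np.sum(radial * unit_flow, axis=1)    # cosine of the angular residual

moving = alignment < 0.9                          # poorly aligned => independent motion
print("samples flagged as independently moving:", np.flatnonzero(moving))

A scheme like this misses objects whose independent motion happens to align with the local radial direction and is unreliable near the focus of expansion, where background flow is weak; such failure modes are part of what makes the human ability so interesting.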

Object Boundary and Surface Coding

Why does a boundary belong to a figure and not its background?

Figure-ground segregation is a fundamental competency of the visual system: perceiving distinct objects at different depths from their background. This may occur through two complementary processes, one relating to object boundaries and the other to surfaces. First, the visual system must group the boundaries that belong to the object but not to its background. Modeling work in this project has so far leveraged border-ownership (BO) coding, a mechanism observed in primate visual cortex. BO is more informative than traditional approaches because, in addition to detecting edges, it assigns them to the objects to which they belong. Second, the visual system must determine whether a region belongs to the interior or exterior of a shape. Work thus far has harnessed “skeleton” or medial axis representations of shapes at multiple spatial scales.
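The snippet below shows what a medial-axis ("skeleton") representation of a simple binary figure looks like, using an off-the-shelf skeletonization (it assumes scikit-image is installed) rather than our own multi-scale model.

import numpy as np
from skimage.morphology import medial_axis

# A simple rectangular "figure" on a blank background.
figure = np.zeros((60, 80), dtype=bool)
figure[20:40, 15:65] = True

skeleton, distance = medial_axis(figure, return_distance=True)

# Skeleton pixels are interior points equidistant from two or more boundary
# points; the distance map gives the local half-width of the figure there.
print("skeleton pixels:", int(skeleton.sum()))
print("maximum half-width along the skeleton:", distance[skeleton].max())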