How do we represent the outside world inside of our heads?

My research explores how our knowledge of abstract, complex information (the people, places, and things that comprise our world) is represented in the mind and brain. How is this knowledge stored and organized? How does it manifest in neural activity?

How does experience shape the mind and brain?

And importantly, how does experience affect how, where, and what information is stored and represented? Our brains are as unique as our fingerprints. How does who we are (our unique combination of preferences, capabilities, and memories) shape how we see the world? How does it change our brain?

I pursue these questions with several approaches. Most recently, I have leveraged brain-to-brain neuroimaging analyses, coupled with naturalistic experimental paradigms, to study how brain signals dynamically converge and diverge across people. In particular, I examine how these brain-to-brain couplings change as a function of one's experience. Across studies, my research investigates the effect of experience on neurocognitive representations at different timescales, ranging from moments to decades.

How do brains differ across people who have had visual experiences, versus those who have not?

In a recent study, I compared brain activity across individuals while the study participants listened to sound clips from popular, live-action movies. These sound clips featured conversations between characters, suspenseful music, complicated plotlines, and so on. For example, the brain activity shown above was recorded while people listened to a six-minute audio excerpt from the movie "Taken." One group of study participants was sighted, and the other group had been blind since birth. I measured spatially distributed, multi-voxel brain activity patterns (the colored specks seen here on a patch at the back of the brain, in visual cortex) at each timepoint in the sound clip, and quantified the similarity of these patterns across different people.

For this animated demo, I took each group (sighted and blind, each n=18), split it in half, and calculated an average pattern for each moment in the sound clip. I then computed the correlation between these dynamic activity patterns for Sighted-Half 1 versus Sighted-Half 2 (left) and Blind-Half 1 versus Blind-Half 2 (right). The resulting similarity value for each timepoint is shown by the height of each blue bar.
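The split-half computation described above can be sketched in a few lines of NumPy. Everything here is illustrative: the array sizes, the function name, and the random numbers standing in for real fMRI data are all hypothetical, not the study's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 18 participants per group, 100 voxels in the
# visual-cortex patch, 180 timepoints covering the ~6-minute sound clip.
# Random numbers stand in for real multi-voxel fMRI activity patterns.
n_subj, n_vox, n_time = 18, 100, 180
group = rng.standard_normal((n_subj, n_vox, n_time))

def split_half_pattern_similarity(data):
    """Split a group in half, average each half's activity across
    participants, and correlate the two halves' spatial patterns
    at every timepoint."""
    half = data.shape[0] // 2
    mean1 = data[:half].mean(axis=0)   # shape: (voxels, timepoints)
    mean2 = data[half:].mean(axis=0)
    sims = np.empty(data.shape[2])
    for t in range(data.shape[2]):
        # Pearson correlation between the two multi-voxel patterns
        sims[t] = np.corrcoef(mean1[:, t], mean2[:, t])[0, 1]
    return sims  # one value per timepoint: the height of each "blue bar"

similarities = split_half_pattern_similarity(group)
```

Running this once per group (sighted and blind) would yield the two timecourses of bar heights shown in the demo.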

As you can see, across the timepoints in the movie sound clip, the blue bars are much higher for the blind group. That is, in this patch of visual cortex, the two blind half-groups had much more similar patterns to each other than the two sighted half-groups did! Remember, for this experiment, both the sighted and blind participants are just listening to the movie sound clips; they don't see anything. This result suggests that blind individuals, who have never used their visual cortex for visual processing (unlike sighted people), show consistent visual cortex activity across people while processing real-world, meaningful sounds.

Experience (in this case, lifelong access to or absence of vision) has made a considerable difference in how the brain responds to naturalistic input. This finding is a dramatic example of how people with drastically different life experiences (on the timescale of decades) show diverging brain activity in response to realistic stimuli. This work was performed in collaboration with Marina Bedny and Janice Chen at Johns Hopkins.

What happens in the visual cortices of individuals who were sighted from birth through adolescence, but then lost their vision as adults? To investigate whether the visual cortex plasticity observed in the blind group is limited by sensitive developmental periods, we ran another experiment. Do you think that, while listening to these movie clips, adult-onset blind participants would show visual cortex responses similar to those of people born blind, or would they look more like the sighted control group? You can find the answer here!

When two people look at the same work of art, do they "see" the same thing?

Even among sighted people, who presumably have a lot of shared experiences, there is so much rich variation in our thoughts. For example, viewing art is a highly subjective experience. When we view a complex and evocative image (like the picture on the left), even though it's a static picture, our understanding and interpretation of the image unfolds over time, the longer we look. How does this dynamically evolving interpretation of an aesthetic experience manifest in brain activity? How does this idiosyncratic process differ across people?

To study these questions, my colleagues and I designed an open-ended fMRI experiment: once study participants entered the MRI machine, we showed them various artworks (like the image above on the left), and then we asked them to describe each piece of art aloud, in their own words and at their own pace while we measured their brain activity and recorded their responses. To analyze this data, we transcribed their responses; broke up each artwork into segments (the labeled version of the image on the right); and then identified the link between their brain activity at each moment and the particular image segments that they were describing. This enables us to compare brain activity patterns between individuals while they describe the same image segment, and to measure how these patterns change over time and across people.
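One core step of that pipeline, linking each person's brain activity to the image segment they were describing and then comparing those segment-specific patterns across people, could be sketched roughly as follows. The segment labels, voxel count, and random data are all invented for illustration; this is not the lab's actual analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each person's brain activity, averaged over the
# moments they spent describing a given labeled image segment.
# Segment names and voxel count are made up for this sketch.
segments = ["sky", "central figure", "horizon"]
n_vox = 50
person_a = {seg: rng.standard_normal(n_vox) for seg in segments}
person_b = {seg: rng.standard_normal(n_vox) for seg in segments}

def across_person_segment_similarity(patterns_a, patterns_b):
    """For every image segment that both people described, correlate
    their segment-specific multi-voxel activity patterns."""
    shared = sorted(set(patterns_a) & set(patterns_b))
    return {
        seg: float(np.corrcoef(patterns_a[seg], patterns_b[seg])[0, 1])
        for seg in shared
    }

segment_similarities = across_person_segment_similarity(person_a, person_b)
```

Because participants describe the artwork in their own order and at their own pace, matching on segment labels (rather than on raw time) is what makes the across-person comparison possible.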

What happens in the brains of different people while they view and describe artwork? How does this vary across people?

In the case of movie viewing, we know quite a bit about what neural activity looks like across people. Several fMRI studies have found that in areas of the brain that process narratives and other high-level content (e.g., the default mode network), the activity timecourses of different people are synchronized over time. Consider the abstract schematic above, starting with the gray box on the left. Let's say the triangle is the beginning of the viewing experience, and the blue square is the end. The thick black line represents the "veridical path" of the movie across the viewing experience (i.e., the presentation), and the different colored lines are the paths of individual people who watched the movie (i.e., each person's re-presentation). As we can see, each person's trajectory is pretty similar across the viewing experience, and they all align pretty closely with the movie's path. The extant fMRI data are consistent with this abstract schematic, for the case of movies. Some filmmakers would probably be pretty pleased about this: brain activity across different viewers aligns over time, suggesting that everyone has a similar interpretation of the unfolding events! (Although one could imagine that a more avant-garde creator might lament such consensus among viewers.)

... But for artwork in particular, there is no single "path" through the viewing experience (right gray box schematic). How do these time-evolving trajectories converge and diverge across people? Can we use each person's dynamic verbal descriptions as a clue to how their subjective interpretations evolve over time? One possibility is that, for each possible pairing of two people in the experiment, the more similar the two people's interpretations of the artwork, the more similar their brain patterns become over the course of their minutes-long viewing. Alternatively, maybe everyone's interpretation/neural trajectory starts out different and then converges at the end. Or perhaps vice versa. This is currently a work in progress in collaboration with the Chen Lab at Johns Hopkins. Stay tuned!

This study is just one example of the different ways that my colleagues and I have explored how brain activity patterns change across people and across viewings of naturalistic stimuli. In a recent experiment, Janice Chen and I used a similar approach to explore how brain activity patterns are transformed when people watch a movie, versus when they later recount the movie aloud from memory. More details on this work are available here.

How do our learning experiences shape our resulting knowledge?

There are many things in the world that we will never be able to directly access, experience, or discover for ourselves. Instead, we rely on the testimony of others. We learn by reading their books, listening to their stories and their lectures, etc. Using language, we can powerfully transmit knowledge from one mind to another. How does our knowledge differ for things that we learn via firsthand experience versus indirectly (e.g., through language)?

One way to study this question is to find groups of people who can only access a given type of knowledge through language, and then compare them to groups who have learned that same knowledge through direct, firsthand experience. For example, sighted people can acquire typically visual information through direct sensory access, while people who are born blind must learn this information through other means (e.g., verbal communication with sighted people).

Working in collaboration with the Bedny Lab at Johns Hopkins and members of the blindness community, I designed a series of behavioral studies to investigate how blind and sighted people think about information that is acquired via vision versus audition. There is compelling evidence that blind people have a very rich understanding of sight verbs (e.g., the difference between "glance" and "stare," or between "shimmer" and "glow") that aligns with sighted people's knowledge. To further examine how each group thinks about these concepts, I tested how blind and sighted people put this knowledge into action. We posed questions like: If a sighted person stared at someone from across the street, would they be able to tell whether the other person's eyes were brown or blue? What if they were right next to the person? What if the sighted person just glanced at the other person instead of staring?

We found some subtle and intriguing differences in how blind and sighted people judged these scenarios, suggesting that acquiring knowledge through direct sensory experience versus through language shapes how one reasons about other people's perception. To learn more about the study and the preliminary results, you can check out the video below.

For more information on my research, check out my Google Scholar profile and my Open Science Repository profile.

I welcome any questions or comments about my work!