This piece, by Onno Berkan, was published on 04/29/25. The original text, by Dale Purves, was published in the Journal of Cognitive Neuroscience on 04/01/25.
This study by Dale Purves explores how our brains interpret what we see, challenging traditional beliefs about visual perception. Purves finds that our visual perception isn't based on directly measuring the physical world, but rather on our accumulated experiences with visual patterns.
It's your brain's world; you're just living in it. Think of it this way: while self-driving cars use precise instruments like cameras and laser sensors to measure the world around them, our eyes and brain work differently, relying on patterns we've encountered throughout our lives and through evolution.
You've probably seen this image before: the checker shadow illusion. The two gray squares are physically identical, yet they appear different to the human eye. According to Purves, this happens because our brain interprets the scene based on how frequently we've encountered similar patterns of light and shadow in nature.
Purves' study also explores how we perceive lines and angles. Interestingly, horizontal lines appear shorter than vertical lines of the same length, and we tend to overestimate acute angles while underestimating obtuse ones. These "errors" in our perception aren't really errors at all; they reflect how our brain has learned to interpret such patterns based on how often they occur in the natural world.
Motion perception is another fascinating case the study examines. When we see a moving object and a flash of light at the same position, we perceive the flash as lagging behind the moving object. This "flash-lag effect" grows with the speed of the moving object, showing that our brain processes motion based on accumulated experience rather than exact physical measurements.
The work behind these claims uses a range of methods, including computer simulations and statistical analyses of how surfaces, lines, and moving objects appear in natural scenes. Across these studies, our perceptions consistently track the statistical patterns of how things appear in our environment rather than their measured physical properties.
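To get a feel for what "ranking based on frequency" means, here is a minimal sketch in Python. It is not Purves' actual model or data; the two "experience" distributions below are invented stand-ins for accumulated encounters with shadowed and directly lit surfaces. The point is simply that the same physical luminance can rank near the top of one context's history and near the bottom of another's, which is roughly why the identical squares in the checker shadow illusion look different.

```python
# A toy illustration (not Purves' code or data) of frequency-based ranking:
# the lightness we assign to a surface tracks how its luminance ranks among
# luminances previously encountered in similar contexts, not the luminance itself.

import random

random.seed(0)

# Hypothetical accumulated "experience": luminance samples (0-255 scale)
# from two contexts. Shadowed surfaces tend to be dark; directly lit
# surfaces tend to be bright.
experience_in_shadow = [random.gauss(60, 25) for _ in range(10_000)]
experience_in_light = [random.gauss(180, 25) for _ in range(10_000)]

def perceived_rank(luminance, context_samples):
    """Fraction of remembered samples darker than this luminance.

    This percentile rank stands in for perceived lightness: the same
    physical value can rank high in one context and low in another.
    """
    darker = sum(1 for s in context_samples if s < luminance)
    return darker / len(context_samples)

# The two checker-shadow squares: identical luminance, different contexts.
target_luminance = 120
print(perceived_rank(target_luminance, experience_in_shadow))  # ~0.99 -> looks light
print(perceived_rank(target_luminance, experience_in_light))   # ~0.01 -> looks dark
```

Nothing here measures the physical world; the "percept" is just a rank within past experience, which is the spirit of the argument.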
This approach to understanding vision parallels deep learning, the method modern artificial intelligence systems use to learn from experience. Just as AI programs like AlphaZero learn to play games through repeated trial and error rather than following preset rules, our visual system learns to interpret the world through accumulated experience.
The study concludes that our visual perception isn't about measuring the physical world directly, something our eyes and brains can't do anyway, but about ranking and interpreting visual information according to how frequently we've encountered similar patterns in our experience. This explains why what we see sometimes differs from what instruments measure, yet still lets us interact successfully with our environment.
Want to submit a piece? Or trying to write a piece and struggling? Check out the guides here!
Thank you for reading. Reminder: Byte Sized is open to everyone! Feel free to submit your piece. Please read the guides first though.
Please send all submissions to berkan@usc.edu as a Word doc, with the subject line “Byte Sized Submission.” Thank you!