This piece, by Onno Berkan, was published on 12/03/24. The original text, by Loke et al., was published in the Journal of Cognitive Neuroscience on 03/01/24.
This University of Amsterdam study investigated how humans and artificial intelligence (specifically deep neural networks) process visual information when looking at objects in different backgrounds. The study revealed some fascinating insights about how our brains and AI systems handle visual information in surprisingly similar ways.
The researchers conducted experiments with 62 human participants aged 18-35, recording their brain activity with EEG while they viewed various images. Participants were shown objects against backgrounds ranging from simple to highly complex. Deep convolutional neural networks (DCNNs) were shown the same images.
One of the most striking findings was that humans categorized objects almost perfectly even when images were shown for just 32 milliseconds, although the background's complexity affected how accurately people could identify objects. Some DCNNs approximated human performance closely, but very few surpassed it.
The study made two major discoveries about how visual processing works:

- Early EEG responses correlated strongly with representations of scene segmentation and background complexity, indicating that early visual processing is modulated more by an object's background than by its category.
- The brain processes background information before it processes the object itself.
Perhaps most surprisingly, the researchers found that artificial neural networks (particularly deeper, more complex ones) processed visual information much as human brains do: both early EEG responses and early DCNN layers represented object backgrounds rather than object categories. Neither humans nor the networks were explicitly trained to distinguish objects from backgrounds, yet both developed this ability naturally as a solution to the challenge of object categorization.
This research has significant implications for understanding human vision and artificial intelligence. The fact that biological and artificial systems arrived at similar solutions suggests that this approach to visual processing might be fundamentally important for understanding objects in our environment.
The findings challenge earlier accounts of visual processing, suggesting that an object's background plays a far more crucial role than previously thought. They also offer valuable guidance for designing future AI systems, highlighting the importance of how objects interact with their surroundings.
Want to submit a piece? Or trying to write a piece and struggling? Check out the guides here!
Thank you for reading. Reminder: Byte Sized is open to everyone! Feel free to submit your piece. Please read the guides first, though.
Please send all submissions to berkan@usc.edu as a Word document, with the subject line “Byte Sized Submission”. Thank you!