This Meta (Facebook) study aimed to predict brain activity from sensory input. Participants watched TV episodes in an fMRI scanner while the researchers correlated the scans with what they were watching. The resulting model could then predict neural activity from the TV show alone, across multiple modalities: it predicted neural responses to the audio, the video, and the actual script (story) simultaneously. Their model won the 2025 Algonauts competition.
This Stanford study tackled one of the most pressing topics in brain-computer interfaces today: decoding speech for individuals with paralysis. The researchers recorded motor cortex activity from an ALS patient who could both speak sentences aloud and imagine speaking them, and tried to predict both the mouth movements and the phonemes the patient was attempting to produce. This let them break spoken words down into mouth and face movements as well as individual sounds, and from these they predicted the words the participant was trying to say with greater accuracy than recent models.
This Nature Research Briefing summarizes new research on how we perceive simultaneous visual events even though the sensory signals they generate reach the visual cortex through neurons of differing lengths. If those signals take different paths to the visual cortex, how does the brain still register the events as happening at the same time?
This week's spotlight is on Professor Dong Song, whose Neural Modeling and Interface lab is developing a hippocampal prosthesis: a device that takes the hippocampus's inputs and approximates its outputs, built on a complex model of the computations the hippocampus performs.
Links: Read this Viterbi article
Letter Three: Secrets of Memory and Dying (10/01/24-10/07/24)
Letter Seven: The World Through Your Brain’s Eyes (10/29/24-11/04/24)
Letter Eleven: How AI & Neuroscience Propel Each Other Forward (12/03/24-12/09/24)
Letter Thirteen: No Memory Means No Imagination (01/21/25-01/27/25)
Letter Sixteen: DeepSeek & the ChatGPT for Genomes (02/11/25-02/17/25)
Letter Seventeen: Noise Cancellation for the Brain (02/18/25-02/24/25)
Letter Twenty-One: Giving Voice to the Voiceless (03/25/25-04/07/25)
Letter Twenty-Two: The Neuroscience of Chess (04/08/25-04/14/25)
Letter Twenty-Four: One Step Closer to a Connectome (04/29/25-05/25)
Want to submit a piece? Fill out the submission form here: https://forms.gle/tEaoXMHpsHtQFoeL9
Trying to write a piece and struggling? Check out the guides here!
Thank you for reading. Reminder: Byte Sized is open to everyone! Feel free to submit a piece, but please read the guides first.
Please send all submissions via the form above, or email berkan@usc.edu with the subject line “Byte Sized Submission” and your piece attached as a Word document. Thank you!