Click on any of the titles to read the full piece!
Researchers at Caltech are in the early stages of creating a BCI (brain-computer interface) that is meant to “decode internal speech”, identifying what the user is thinking without them ever having to say it aloud.
Many AI systems, especially LLMs (Large Language Models), are black boxes: we know their inputs and we know their outputs, but we don’t quite know what happens in between. This is the problem the field of Explainable AI (XAI) attempts to solve by reverse-engineering these models.
One cubic millimeter is about as large as a grain of pink Himalayan salt. Within that tiny volume sit around 1.4 petabytes’ worth of data, neurons that make up to 50 connections with each other, and even “neurons with tendrils that formed knots around themselves”.
Neuroscience puts the intelligence in Artificial Intelligence. Its impact on AI, from artificial “neurons” in neural networks to activation functions, cannot be overstated. Yet AI seems to diverge more and more from the brain as it moves forward, creating fundamental differences between the two. While some claim that AI has gotten all it can from neuroscience, others disagree, stating that we are entering a “new era” of computing and AI, one which may require us to go back to the basics, to go back to the brain.
An opinion piece by Armin Bazarjani.
The Piray Lab studies how people make decisions in noisy environments through computational models built on reinforcement learning and Bayesian machine learning.
Starting Monday, September 16th, we'll have:
Opinion & News pieces every week!
Article analyses!
Articles written by faculty!
Advice pieces written by people in the industry!
Want to submit a piece? Or trying to write a piece and struggling? Check out the guides here!
Thank you for reading. Reminder: Byte Sized is open to everyone! Feel free to submit your piece. Please read the guides first, though.
Please send all submissions to berkan@usc.edu as a Word doc with the subject line “Byte Sized Submission”. Thank you!