About Me
Formally trained in computer science at MIT and Stanford, I specialize in artificial intelligence and machine learning.
During my decade-long journey in computer science, I have had the opportunity to innovate across areas such as AI explainability, biotechnology, robotics, automotive software, augmented reality, and game development.
Recently, I had the privilege of leading a team of six scientists to win the $17,000 award for "Most Promising Approach to Auditing Large Language Models (LLMs)" at the Stanford HAI AI Audit Challenge.
I take pride in translating complex concepts into clear, accessible language, and my fluency in five languages further enables me to connect with diverse audiences.
Stanford (2021-2024)
I pursued a PhD in Computer Science (Artificial Intelligence), advised by Carlos Guestrin, and am currently transitioning to industry.
My latest research revolves around developing tiny language models that fit on phones and watches and replicate the robust capabilities of large language models (LLMs) within specific business domains. These tiny LLM replicas should improve global AI accessibility, reduce AI energy consumption, and enable free or low-cost AI-equipped applications.
As a Machine Learning Research Intern at Apple in 2023, I diagnosed hallucination issues in Apple's internal large language models (LLMs) through a novel counterfactual diagnostic strategy that asks: "When an LLM outputs x, is it internally more confident in x than in statements y and z that contradict x?" Our tool detected hallucinations in Apple's LLMs with up to 90% accuracy on tasks like App Store review summarization.
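The counterfactual idea can be illustrated with a minimal sketch (not the actual Apple tool): score the model's output and each contradicting statement with a log-likelihood function, and flag a possible hallucination whenever a contradiction scores at least as high. The `toy_scores` values below are hypothetical stand-ins for a real model's sequence log-likelihoods.

```python
def detect_hallucination(logprob, output, contradictions):
    """Flag a possible hallucination: return True if any contradicting
    statement is at least as likely as the model's own output under
    the given log-likelihood function `logprob`."""
    own = logprob(output)
    return any(logprob(c) >= own for c in contradictions)

# Hypothetical log-likelihoods standing in for a real LLM's scores.
toy_scores = {
    "The app is praised for speed.": -4.2,     # model's output x
    "The app is criticized for speed.": -3.1,  # contradiction y (more likely!)
    "Reviews do not mention speed.": -6.0,     # contradiction z
}

flagged = detect_hallucination(
    toy_scores.get,
    "The app is praised for speed.",
    ["The app is criticized for speed.", "Reviews do not mention speed."],
)
print(flagged)  # True: the model internally prefers a contradicting claim
```

In this toy case the model emitted x while assigning more probability to y, so the check fires; with a faithful model, x would dominate every contradiction and the check stays silent.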
In 2023, I led a team of six scientists to develop Ceteris Paribus [video], which won the award for "most promising approach to audit large language models" (like ChatGPT) in the global AI Audit Challenge held by the Stanford Institute for Human-Centered AI (HAI). Ceteris Paribus empowers policymakers and regulators to audit large language models (LLMs) for illegal discrimination across protected traits such as race and gender, without requiring pre-existing data (dynamic benchmark) or computational expertise.
I was a finalist for the Hertz Foundation Fellowship and Open Philanthropy AI Fellowship.
I am fortunate to have been supported by the Stanford School of Engineering First Year Doctoral Fellowship (2021) and the Stanford EDGE Fellowship (2021).
Metabolomics (2020-2021)
I enjoyed my role as Deep Learning Scientist at ReviveMed from 2020 to 2021, where I patented a metabolomics signal detection pipeline. Our work led to the discovery of potential kidney cancer biomarkers, in collaboration with MIT Biological Engineering and the Broad Institute.
MIT (2014-2020)
I graduated from MIT with B.S. and M.Eng. degrees in Computer Science in 2018 and 2020, respectively. With the guidance of Leslie Kaelbling, I developed new machine learning and inference techniques for probabilistic graphical models. I also applied graph neural networks to problems involving 3D geometry.
I enjoy teaching and was a TA for 4 semesters at MIT. I helped run probabilistic inference and machine learning classes.
Also at MIT, Patrick Winston introduced me to research in artificial intelligence, Polina Golland and Gregory Wornell taught me how to teach, and Ferran Alet helped hone my scientific thinking.
I had the opportunity to develop the campus shuttle mobile app for Ford Motor Company during the summer after my junior year. My main contribution was deploying a map interface that tracks shuttle positions in real time using GPS data.
In the winter of 2016, as an Augmented Reality Game Developer at Brain Power, LLC, I engineered augmented reality games on Google Glass to help autistic children with emotional awareness and making eye contact during conversations.
Mauritius (1995-2014)
Before joining MIT at the age of 18, I grew up on the island of Mauritius. As a friend likes to point out, with a population of 1.3 million, that makes me "one in a million".
I am fluent in five languages: French, Spanish, English, Hindi, and Creole. I am using geopolitics to help me decide on a sixth.
I enjoy creative writing: poetry as a means to process the universe around us, and blogging to document my backpacking trips.
AI Conference Publications
1. Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models
Adarsh K. Jeewajee, Leslie P. Kaelbling
Neural Information Processing Systems (NeurIPS), 2020
Published and invited for poster presentation - Paper - Video - Code - Slides
2. Robotic Gripper Design with Evolutionary Strategies and Graph Element Networks
Adarsh K. Jeewajee*, Ferran Alet*, Maria Bauza*, Max Thomsen*, Alberto Rodriguez, Leslie P. Kaelbling, Tomás Lozano-Pérez
(* equal contributions)
NeurIPS Workshop on Machine Learning for Engineering Modeling, Simulation, and Design (NeurIPS ML4Eng), 2020
Published and invited for poster presentation - Paper
3. Graph Element Networks: Adaptive, Structured Computation and Memory
Ferran Alet, Adarsh K. Jeewajee, Maria Bauza, Alberto Rodriguez, Tomás Lozano-Pérez, Leslie P. Kaelbling
International Conference on Machine Learning (ICML), 2019
Published and invited for oral presentation (4.5% of all submissions) - Paper - Code - Slides