Confirmed Speakers (in alphabetical order)
Chelsea Finn is an Assistant Professor of Computer Science and Electrical Engineering at Stanford University, where she heads the IRIS Lab. Her group studies how robots and other embodied agents acquire versatile skills through large-scale interaction and data, and she is also a co-founder of Physical Intelligence (Pi), a startup focused on learning-enabled robot intelligence.
Prof. Finn’s work on language-conditioned goals and rapid adaptation speaks directly to LAW 2025’s mission of marrying language models with agent and world models: her methods show how high-level instructions can be grounded in physical control and updated on the fly.
Danijar Hafner is a Research Scientist at Google DeepMind, best known for the Dreamer family of reinforcement-learning agents, which learn compact latent-space world models and use “imagination” roll-outs for long-horizon planning.
Dreamer illustrates how explicit learned simulators can make agents far more sample-efficient—an insight central to LAW 2025’s agenda of integrating world models with language-guided reasoning and action.
Sendhil Mullainathan is a Professor of Economics and Electrical Engineering & Computer Science at MIT. His interdisciplinary research applies machine learning to behavioral economics, social policy, and medicine, with an emphasis on fairness and causal inference.
By probing the causal structure latent in large language models and their societal impacts, Prof. Mullainathan offers LAW 2025 a critical perspective on the “implicit world models” inside LLMs and on aligning them with human values.
Tim Rocktäschel is a Director & Principal Scientist at Google DeepMind and Professor of Artificial Intelligence at University College London. He leads DeepMind’s Open-Endedness team and recently spearheaded Genie 2, a foundation world model that generates playable 3-D environments for training agents.
Genie 2 exemplifies the workshop’s goal of uniting rich, learned simulators with language-driven reasoning, making Rocktäschel’s experience pivotal for discussions on scalable agent–world-model integration.
Dorsa Sadigh is an Associate Professor of Computer Science at Stanford University whose research lies at the intersection of robot learning and human-robot interaction; she develops algorithms that allow adaptive agents to learn from and collaborate with people.
Prof. Sadigh’s work on inferring human intent and ensuring safe, interactive autonomy aligns with LAW 2025’s interest in grounding language-conditioned agents in realistic physical and social world models.
Joshua Tenenbaum is a Professor of Cognitive Science and Computer Science at MIT, known for probabilistic-program models of intuitive physics, causal reasoning, and concept learning.
Prof. Tenenbaum’s theories of “languages of thought” provide a cognitive blueprint for the workshop’s exploration of how LLMs can interface with structured, probabilistic world models to support rapid, human-like generalization.
Eric Xing is the President of the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and a Professor of Computer Science at Carnegie Mellon University. His research spans statistical machine learning, distributed systems, and large-scale frameworks such as the Parameter Server.
Prof. Xing’s expertise in building scalable ML infrastructure and structured probabilistic models informs LAW 2025’s need for computational backbones that can host tightly coupled language, agent, and world models.
Sherry Yang is a Staff Research Scientist at Google DeepMind and will soon join New York University as an Assistant Professor. Her recent work focuses on scaling and aligning large language models, on multimodal reasoning, and on evaluating multi-agent interactions.
Dr. Yang’s studies on reinforcement-learning fine-tuning and emergent agent behavior supply practical guidance for LAW 2025’s central challenge: aligning language agents with explicit world models to achieve coherent planning and action.
Contact: law2025@googlegroups.com