Robin Burke is a Professor and Chair of the Department of Information Science at the University of Colorado, Boulder. He conducts research in personalized recommender systems, a field he helped found and develop. His most recent projects explore fairness, accountability, and transparency in recommendation through the integration of objectives from diverse stakeholders. Professor Burke is the author of more than 150 peer-reviewed articles in various areas of artificial intelligence, including recommender systems, machine learning, and information retrieval. His work has received support from the National Science Foundation, the National Endowment for the Humanities, the Fulbright Commission, and the MacArthur Foundation, among others.
Chara is an Assistant Professor of OR/Stat at MIT. Her research interests are in incentive-aware machine learning, social aspects of computing, online learning, and mechanism design. Recently, Chara has started thinking about policy questions related to AI and recommendation systems. Before MIT, she was a FODSI postdoctoral fellow at UC Berkeley and received her PhD in EconCS from Harvard, advised by Yiling Chen. Outside of research, she spends her time adventuring with her pup, Terra.
Haifeng Xu is an assistant professor in the Department of Computer Science and the Data Science Institute at UChicago. He directs the Sigma Lab (Strategic IntelliGence for Machine Agents), which focuses on designing intelligent AI systems that can effectively learn and act in informationally complex multi-agent settings. His research has been recognized by several awards, including an IJCAI Early Career Spotlight, a Google Faculty Research Award, the ACM SIGecom Doctoral Dissertation Award, and the IFAAMAS Victor Lesser Distinguished Dissertation Award.
Dylan is the Bonnie and Marty (1964) Tenenbaum Career Development Assistant Professor of EECS on the faculty of Artificial Intelligence and Decision-Making and a Schmidt Futures AI2050 Early Career Fellow. He runs the Algorithmic Alignment Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). His research focuses on the problem of agent alignment: the challenge of identifying behaviors that are consistent with the goals of another actor or group of actors. His research group works to identify solutions to alignment problems that arise from groups of AI systems, principal-agent pairs (e.g., human-robot teams), and societal oversight of ML systems.
Leqi Liu is a postdoctoral researcher at Princeton Language and Intelligence, currently working on understanding and building large language models. In Fall 2024, she will join the McCombs School of Business as an assistant professor and be a core member of the Machine Learning Laboratory at the University of Texas at Austin. Her research centers on the foundations of human-centered machine learning, using insights from the behavioral sciences and tools from learning theory to design machine learning algorithms that account for complex human preferences.