Format: In-person at The University of Texas at Austin on September 16, 2024
Trust is a cornerstone topic in efforts to develop ethical autonomous systems. But what exactly does it take to build trustworthy AI? Conversely, what makes an AI system untrustworthy? How can we ensure different disciplinary perspectives are represented in the process, and what metrics matter in the pursuit of trustworthy AI? Through a guided discussion and a series of activities, this workshop aims to foster interactive conversations about how to address trustworthiness in autonomous AI systems through an interdisciplinary lens. We strive to find a common language with which to discuss AI and its ethical implications.
Workshop participants will hear insights from guest speakers Josh Hoffman, an AI researcher at UT Austin, and Dr. Steve Kramer, Chief Scientist at KUNGFU.AI, an Austin-based AI consulting firm providing interdisciplinary AI expertise. Following the presentations, participants will be encouraged to approach AI through a diverse set of lenses, with the goal of reaching consensus on key ethical issues through a case study challenge. The event is designed to facilitate information exchange and to appeal broadly to researchers and practitioners from various backgrounds, both technical and non-technical, working on or interested in issues at the intersection of AI and ethics.
Interested in attending? Fill out our participant interest form.
It is important to understand what we are willing to entrust to AI as society integrates these systems into more facets of public, private, and commercial life. Trust is a key concept for autonomous systems, as outlined in AI ethics guidelines, frameworks, and regulations such as the NIST AI Risk Management Framework (Tabassi, 2023), the OECD AI Principles, and the EU AI Act (European Commission, 2021). Attributes commonly connected to trust include transparency, explainability, safety, security, privacy, and bias. Yet while there is broad agreement on the importance of trust and on common key principles, there is no agreement on what defines trust or on how to design, develop, and deploy trustworthy systems.
Furthermore, different disciplines approach trust in different ways; there is not even agreement on whether AI can be trusted at all. Some humanities scholars define trust as a uniquely human concept and therefore conclude that AI cannot be trusted, only relied upon (Ryan, 2020). Conversely, in control theory, a system can be trusted if its stability can be proven. Most disciplines do allow for trust in AI but differ on what constitutes a trustworthy autonomous system. Researchers and practitioners alike are grappling with these questions. Through interactive activities, we aim to identify emerging issues, bring together researchers and practitioners from various backgrounds, and work toward productive solutions to these challenges.
This half-day event will be held in person at the Julius Glickman Conference Center at The University of Texas at Austin on Monday, September 16, 2024. The schedule reflects the local time zone of the event, Central Daylight Time (CDT). Please note that conference registration is required to attend the workshop.
09:00 - Welcome and Introductions
09:10 - Guest Speaker + Q&A
10:00 - Interactive Activity
10:30 - Break, Networking
10:45 - Guest Speaker + Q&A
11:10 - Case Study Challenge
11:50 - Final Discussions and Conclusions
12:15 - Closing Remarks
Guest Speaker: Josh Hoffman, UT Austin
Josh Hoffman is a graduate student at The University of Texas at Austin, where he researches neuro-symbolic programming, program synthesis, and reinforcement learning, focusing on agent cooperation and economics. He and Swarat Chaudhuri are the co-creators of the Trustworthy ML course at UT Austin, which Josh now teaches. The course teaches students how to approach technical challenges mathematically and critically as AI enters all aspects of our lives.
Guest Speaker: Dr. Steve Kramer, KUNGFU.AI
Dr. Steve Kramer, Chief Scientist of KUNGFU.AI, is a computational physicist and data science entrepreneur with 31 years of post-Ph.D. experience in AI, data science, research, software, and business management. He earned a Ph.D. in physics from the Center for Nonlinear Dynamics at The University of Texas at Austin. He has served as Principal Investigator on multiple subcontracts for DARPA's Information Innovation Office and on multiple contracts for the Defense Innovation Unit (DIU) and the CDAO. He is proud to serve on the Board of the Austin Forum on Technology and Society and as a member of the Board of Technical Advisors for data.world.
Organizers
Jessica Needle is a PhD student in the School of Information (iSchool) at The University of Texas at Austin, where she studies critical perspectives on information technology, ethical AI, and surveillance. Her research focuses on how different communities engage with socio-technical systems to preserve democratic ideals, including issues like autonomy, privacy, and free speech.
Tina Lassiter is a PhD student in the School of Information at The University of Texas at Austin. Her research focuses on interdisciplinary approaches to algorithmic auditing and trust in AI. As a former German IT and IP lawyer, she brings a legal background that helps her navigate emerging AI regulation. She is also a fellow at the Center for AI and Digital Policy (caidp.org).
Max Rudolph is a PhD student in the Department of Computer Science at The University of Texas at Austin. His research focuses on building efficient algorithms for learning sequential decision-making in the real world. Max has published in top-tier robotics and machine learning conferences on topics ranging from multi-agent systems to representation learning for control.