AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC) - Keynotes
AAAI 2025 Fall Symposium
Westin Arlington Gateway, Arlington, VA USA
November 6-8, 2025
Keynote Speakers
Natalia Globus
Scientist, Information Technology Laboratory
NIST
Topic
NIST AI Consortium - Working together to lay the foundation for AI innovation
Bio
Natalia Globus is a scientist at the National Institute of Standards and Technology (NIST), where she serves as Deputy Consortium Manager and head of membership and operations for the NIST Artificial Intelligence (AI) Consortium. The Consortium brings together AI creators and users; academic, government, and industry researchers; and non-profit organizations to collaboratively establish a new measurement science that identifies proven, scalable, and interoperable techniques and metrics, promoting innovation, economic competitiveness, and national security for AI systems. Prior to joining NIST, Natalia served in advisory roles at the Food and Drug Administration (FDA) and the National Institutes of Health (NIH). She holds Master's and Bachelor's degrees in Electro-Mechanical Engineering and Metrology from North-West State Technical University, St. Petersburg, Russia.
Patrick Shafto
Program Manager, Information Innovation Office
DARPA
Topic
TBD
Bio
Dr. Patrick Shafto joined DARPA in September 2023 to develop, execute, and transition programs in artificial intelligence (AI), mathematics, machine learning, and human-machine symbiosis. He is a professor of mathematics and computer science at Rutgers University, and for the two years before joining DARPA he was a member of the School of Mathematics at the Institute for Advanced Study in Princeton. His research focuses on the mathematical foundations of learning agents, bridging mathematics, machine learning, AI, and cognitive science, and has appeared in more than 100 papers spanning mathematical, computational, and empirical perspectives on learning. He also co-founded Redpoll, a startup focused on human-centered AI, and served as its chief scientist from 2019 to 2023.
Matthew Johnson
Chief of Responsible AI, U.S. Department of Defense
CDAO
Topic
Government Perspectives on Operationalizing and Harnessing AI Assurance
Bio
Dr. Matthew Johnson serves as the Chief of Responsible AI (RAI) for the U.S. Department of War (DoW), where his team is based within the Chief Digital and Artificial Intelligence Office (CDAO). As the Department's lead for Responsible AI and AI Assurance, his Division builds the technical tools, best practices, and policies to assess and assure the Department's AI-enabled capabilities. His Division developed the Department's Responsible AI Toolkit and Web App (a focal point of the 'Secure-by-Design' portion of America's AI Action Plan), issued the Department's policy on Generative AI, and leads Frontier AI Red Teaming for the Department. Dr. Johnson also chairs the White House CAIO Council's AI Assurance Working Group, which is tasked with developing resources to enable compliance with the Federal Government's AI risk requirements. His background is in philosophy and cognitive science: he earned a PhD in Philosophy from the University of Cambridge, as well as degrees in Cognitive Science from the University of Cambridge (MPhil) and Yale University (BA). He previously consulted for Google AI and was a Research Fellow at the University of Oxford. His areas of specialty include AI-enabled autonomy, human-agent interaction, AI red teaming, and AI ethics.
David Sadek
VP Research, Technology & Innovation
Thales
Topic
Trustworthy AI for Critical Systems: New Challenges
Bio
David Sadek is VP Research, Technology & Innovation at Thales, notably in charge of Artificial Intelligence and Quantum Computing. He holds a doctorate in Computer Science and is an expert in Artificial Intelligence and Cognitive Science. He chaired the Executive Committee of the French national industrial program on AI (Confiance.ai) and was previously SVP Research at IMT (Institut Mines-Télécom) and VP R&D at Orange. For more than fifteen years, he created and ran R&D teams at Orange Labs working on intelligent agents and natural human-machine dialogue. His research led to the design and implementation of some of the first conversational agent technologies worldwide, as well as to the ACL inter-agent communication language standard. He has also directed several industrial transfer and innovative service deployment programmes.
Stefan Buijsman
Associate Professor
Delft University of Technology
Topic
Justifying metric choices in AI trustworthiness assessments
Bio
Stefan Buijsman is an associate professor in philosophy at Delft University of Technology, where he leads the Delft Digital Ethics Centre and the WHO Collaborating Centre on AI for Health Governance, including ethics. His research focuses on connecting ethical principles to design and governance requirements for AI systems, primarily in healthcare and the public sector. In addition to his research, he has written three popular science books on mathematics and AI.