2022 Keynote Speakers

Tuesday, October 11th

Dr. Matt Turek

Deputy Director of the Information Innovation Office, DARPA

Dr. Matt Turek assumed the role of deputy office director for DARPA's Information Innovation Office (I2O) in May 2022. In this position, he provides technical leadership and works with program managers to envision, create, and transition capabilities that ensure enduring information advantage for the United States and its allies.

Dr. Turek joined DARPA in July 2018 as an I2O program manager and served as acting deputy director of I2O from June 2021 to October 2021. He previously managed the Media Forensics (MediFor), Semantic Forensics (SemaFor), Machine Common Sense (MCS), and Explainable AI (XAI) programs, as well as the Reverse Engineering of Deception (RED) Artificial Intelligence Exploration (AIE) program. His research interests include computer vision, machine learning, artificial intelligence, and their application to problems with significant societal impact.

Prior to his position at DARPA, Turek was at Kitware, Inc., where he led a team developing computer vision technologies. His research focused on multiple areas, including large-scale behavior recognition and modeling; object detection and tracking; activity recognition; normalcy modeling and anomaly detection; and image indexing and retrieval. Turek has made significant contributions to multiple DARPA and Air Force Research Laboratory (AFRL) efforts and has transitioned large-scale systems for operational use. Before joining Kitware, Turek worked for GE Global Research, conducting research in medical imaging and industrial inspection.

Dr. Turek holds a Doctor of Philosophy in computer science from Rensselaer Polytechnic Institute, a Master of Science in electrical engineering from Marquette University, and a Bachelor of Science in electrical engineering from Clarkson University. His doctoral work focused on combinatorial optimization techniques for computer vision problems. Turek is a co-inventor on several patents and co-author of multiple publications, primarily in computer vision.

Prof. Tara Javidi

Jacobs Family Scholar, HDSI Fellow, and Professor

Electrical and Computer Engineering and Halicioglu Data Science Institute

University of California San Diego (UCSD)

Tara Javidi received her BS in electrical engineering from Sharif University of Technology, Tehran, Iran. She received her MS degrees in electrical engineering (systems) and in applied mathematics (stochastic analysis) from the University of Michigan, Ann Arbor, and her Ph.D. in electrical engineering and computer science from the University of Michigan, Ann Arbor, in 2002. From 2002 to 2004, she was an assistant professor in the Electrical Engineering Department at the University of Washington, Seattle. In 2005, she joined the University of California, San Diego, where she is currently a Jacobs Family Scholar, Halicioglu Data Science Fellow, and Professor of Electrical and Computer Engineering. In 2012-2013, she spent her sabbatical at Stanford University as a visiting faculty member.

Tara Javidi’s research interests are in the theory of active learning, information acquisition and statistical inference, information theory with feedback, stochastic control theory, and wireless communications and communication networks.

At the University of California, San Diego, Tara Javidi is a co-PI of The Institute for Learning-enabled Optimization at Scale (TILOS), a founding co-director of the Center for Machine-Integrated Computing and Security, and the principal investigator of the DetecDrone Project. She is a faculty member of the Center for Information Theory and Applications (ITA), the Center for Wireless Communications (CWC), the Contextual Robotics Institute (CRI), and the Center for Networked Systems (CNS). She is also a founding faculty member of the Halicioglu Data Science Institute (HDSI) and an affiliate faculty member in the departments of Computer Science and Engineering as well as Ethnic Studies. She is currently the Chair of the UCSD Division of the Academic Senate (2021/22), having previously served as its Vice Chair (2020/21). She received the 2021 University of California Academic Council Chairs Award for Mid-Career Leadership, which honors a UC faculty member who has demonstrated outstanding and creative contributions that impact faculty governance, as well as exceptional promise in serving the Academic Senate and working across the university’s many and diverse stakeholders.

Wednesday, October 12th

Mr. Charles Howell

Senior Director for Research and Analysis, Special Competitive Studies Project, Society Implications Panel

Chuck Howell is a Senior Director of Research and Analysis at the Special Competitive Studies Project (SCSP), where he addresses the impacts of emerging technologies on society. Prior to SCSP, Chuck was Chief Scientist for Responsible Artificial Intelligence at the MITRE Corporation. In that role he supported the National Security Commission on Artificial Intelligence's line of effort on Responsible AI and Ethics. Chuck focuses on adapting tools and techniques from high-assurance systems engineering, risk management frameworks, and stakeholder participatory design for consequential AI (particularly machine learning) systems. These tools and techniques can help policymakers, acquirers, developers, and those affected by an AI system anticipate sociotechnical opportunities and challenges.

He has over 30 years of experience in high-assurance systems engineering and AI. He was a member of the Institute of Electrical and Electronics Engineers (IEEE) Software Engineering Body of Knowledge Industrial Advisory Board, chaired the First Annual Assurance Case Workshop in Florence, Italy, and co-chaired the Fall 2015, 2016, 2017, and 2018 Association for the Advancement of Artificial Intelligence (AAAI) workshops on Cognitive Assistance in Government and Public Sector Applications. Chuck is co-author of the book Solid Software (Prentice Hall). He is a Senior Member of the IEEE and a member of the AAAI and the Association for Computing Machinery.

https://www.linkedin.com/in/chuckhowellcch/

https://www.scsp.ai/

Dr. Michelle Quirk

Senior Technical Advisor at the Advanced Research Projects Agency – Energy (ARPA-E)

Dr. Michelle Quirk is currently a senior technical advisor at the Advanced Research Projects Agency – Energy. She provides technical insights to program directors on artificial intelligence (AI), scientific computing, and uncertainty quantification. Her professional career as a scientist began at Los Alamos National Laboratory, where she specialized in machine learning (ML) and scientific computing and was seconded to the Defense Threat Reduction Agency to work on nuclear weapons effects. Michelle subsequently became a civil servant in the Research Directorate of the National Geospatial-Intelligence Agency. Following a stint as an AI consultant for the Department of Defense, she joined the National Nuclear Security Administration as a physical scientist in the Office of Advanced Simulation and Computing, and Institutional Research and Development Programs.

Dr. Quirk obtained a B.S. in mathematics-mechanics with a specialization in rock mechanics from the University of Bucharest, Romania. This was followed by an M.S. in computational and applied mathematics and a Ph.D. in mathematics, both from the University of Texas at Austin. During her M.S. work, she co-authored the first papers on the infinite element method for solving Maxwell's equations for exterior wave propagation. Her Ph.D. research was on multi-component imagery compression, during which she served on the International Organization for Standardization (ISO) JPEG-2000 standards committee. Michelle developed ML optimizers for preserving features in images compressed at very high rates, which led to a lossy-to-lossless compression scheme for raw LiDAR data using adaptive multi-rate filter banks. Her research in AI spans decades and covers computer vision techniques, cognitive assistants for national security problems, and reinforcement learning for asynchronous networks.


Thursday, October 13th

Dr. Yevgeniya (Jane) Pinelis

Chief of AI Assurance at the Chief Digital and Artificial Intelligence Office (CDAO)

Dr. Jane Pinelis is the Chief of AI Assurance at the Chief Digital and Artificial Intelligence Office (CDAO). In this role, she leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) for CDAO capabilities, as well as development of T&E-specific products and standards that will support testing of AI-enabled systems across the DoD. She also leads the team that is responsible for instantiating Responsible AI principles into DoD practices.

Prior to joining the CDAO, Dr. Pinelis served as the Director of Test and Evaluation for USD(I)'s Algorithmic Warfare Cross-Functional Team, better known as Project Maven. She directed the developmental testing of the AI models, including computer vision, machine translation, facial recognition, and natural language processing.

Dr. Pinelis also led the design and analysis of the widely publicized study on the effects of integrating women into combat roles in the Marine Corps. Based on this experience, she co-authored the book “The Experiment of a Lifetime: Doing Science in the Wild for the United States Marine Corps.”

Dr. Pinelis holds a BS in Statistics, Economics, and Mathematics, an MA in Statistics, and a PhD in Statistics, all from the University of Michigan, Ann Arbor.

Dr. Nicholas Carlini

Research Scientist at Google Brain

Nicholas Carlini is a research scientist at Google Brain working at the intersection of machine learning and computer security. His most recent line of work studies properties of neural networks from an adversarial perspective. He received his Ph.D. from UC Berkeley in 2018 and a B.A. in computer science and mathematics, also from UC Berkeley, in 2013.

He is interested in attacks on machine learning systems; most of his work develops attacks that demonstrate the security and privacy risks of these systems. He has received best paper awards at USENIX Security, IEEE S&P, and ICML, and his work has been featured in the New York Times, the BBC, Nature Magazine, Science Magazine, Wired, and Popular Science.

Previously, he interned at Google Brain, evaluating the privacy of machine learning; at Intel, evaluating Control-Flow Enforcement Technology (CET); and at Matasano Security, doing security testing and designing an embedded security CTF.

https://nicholas.carlini.com


Previous AIPR Speakers

2021