Sunday, July 10th, 2022

Workshop on Design Automation for the Certification of Autonomous Systems (DAC-AS)

59th Design Automation Conference (DAC) in San Francisco, CA, USA

The integration of increasingly rich software functions into complex, autonomous systems raises concerns about their trustworthiness with respect to safety, security, and other dependability measures. While its definition varies by industry, trust is often achieved through a process of certification, in which the residual risks associated with the deployment of a system in a specified environment are evaluated and deemed acceptable. However, current certification processes depend heavily on human judgment: a certification or regulatory authority is expected to determine whether a system is trustworthy by analyzing large amounts of evidence about a product and its development. Such a lengthy process can result in superficial, incomplete, biased, and costly evaluations.

The situation is exacerbated by the emergence of artificial intelligence (AI) and machine learning (ML) solutions in consumer applications, which have revolutionized the industry by enabling features that were not possible with traditional methods. Safety-critical and mission-critical industries such as aerospace, automotive, medical, and nuclear are eager to leverage AI-enabled software in their products as well, but there is no consensus on how to ensure that such software is trustworthy. Certification standards such as DO-178C and IEC 62304 provide no explicit guidance for certifying software containing AI components. Without a clear pathway to certification, the risks of developing AI-enabled high-integrity systems remain a barrier to adoption.

The DAC-AS Workshop investigates the potential of design automation to mitigate these risks. Design automation concepts can help streamline the certification process by aiding the construction of comprehensive and defensible arguments for system correctness, for example, in the form of assurance cases. In turn, new design methods and tools can facilitate the analysis of AI-enhanced components and the generation of evidence to support correctness claims. The workshop aims to bring together the certification, design automation, and artificial intelligence communities in both academia and industry to discuss promising methods for increasing trust in autonomous systems.

Speakers

Zamira Daw

Senior Manager, AI Systems Engineering Team Lead, Raytheon Technologies Research Center

Pierluigi Nuzzo

Assistant Professor, Department of Electrical and Computer Engineering and Computer Science, University of Southern California

Timothy Wang

Principal Research Engineer, Raytheon Technologies Research Center

Marco Pavone

Associate Professor, Department of Aeronautics and Astronautics, Stanford University

Eric Feron

Professor, Division of Computer, Electrical and Mathematical Sciences and Engineering, King Abdullah University of Science and Technology (KAUST)

Jean-Baptiste Jeannin

Assistant Professor, Department of Aerospace Engineering, University of Michigan, Ann Arbor

Yasser Shoukry

Assistant Professor, Department of Electrical Engineering and Computer Science, University of California, Irvine

Panelists

George Romanski

Chief Scientific and Technical Advisor, FAA

Eric Feron

Professor, Division of Computer, Electrical and Mathematical Sciences and Engineering, King Abdullah University of Science and Technology (KAUST)

Michael Holloway

Senior Research Engineer, NASA Langley Research Center

Zamira Daw

Senior Manager, AI Systems Engineering Team Lead, Raytheon Technologies Research Center

Yasser Shoukry

Assistant Professor, Department of Electrical Engineering and Computer Science, University of California, Irvine

Venue

Moscone Center, San Francisco, CA, USA