Tutorials

We have accepted three tutorials for conference attendees. The tutorial sessions will be held on the afternoon of Sunday, May 15th. Each session is scheduled for approximately 90 minutes.

An Introduction to Graph Neural Networks
Sunday, May 15, 2022, 14:00 – 15:30, Leatherback

Alina Lazar, PhD, Youngstown State University

Graph-structured data exist everywhere in the real world; almost any problem can be modeled using a graph representation. From social networks to molecular structures, the range of practical applications is vast. In this context, it is increasingly important to design and evaluate advanced learning methods for graph-structured data. Graph neural networks (GNNs), which extend well-known deep neural network models to graph representations, offer researchers a new way to learn representations at the node, edge, and graph levels. Each of these levels poses different challenges, so specific algorithms must be designed for each. This tutorial will cover relevant GNN-related topics, including the basics of learning on graph-structured data, graph embeddings, attention networks, aggregation functions, and example applications (node classification, missing-link prediction, community detection, and graph matching). For these applications, GNNs have achieved impressive performance on relatively small graph datasets. Unfortunately, most real-world problems involve large graphs that do not fit into the GPU memory of current hardware. We will also discuss ways to design, evaluate, and scale GNN training and inference methods.
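As a minimal sketch of the aggregation idea the tutorial covers, the following NumPy snippet implements a single mean-aggregation GNN layer (in the style of a graph convolutional layer). The toy graph, feature dimensions, and random weights are invented for illustration and are not from the tutorial:

```python
import numpy as np

# Toy undirected graph on 4 nodes with edges 0-1, 1-2, 2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))  # node feature matrix

# One GNN layer: each node averages its neighbours' features
# (mean aggregation with self-loops), then applies a learned
# linear map followed by a ReLU nonlinearity.
A_hat = A + np.eye(4)                      # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # normalise by node degree
W = np.random.default_rng(1).normal(size=(8, 8))

H = np.maximum(D_inv @ A_hat @ X @ W, 0.0)  # updated node embeddings
print(H.shape)  # one embedding row per node
```

Stacking such layers lets information propagate across multi-hop neighbourhoods, which is what enables the node-, edge-, and graph-level tasks mentioned above.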

Enabling AI Adoption through Assurance
Sunday, May 15, 2022, 16:00 – 17:30, Loggerhead A/B/C

  • Jaganmohan Chandrasekaran, Virginia Tech

  • Feras A. Batarseh, Virginia Tech

  • Laura Freeman, Virginia Tech

  • D. Richard Kuhn, National Institute of Standards and Technology (NIST)

  • M S Raunak, National Institute of Standards and Technology (NIST)

  • Raghu N. Kacker, National Institute of Standards and Technology (NIST)

The wide-scale adoption of AI will require that AI engineers and developers can provide assurances to the user base that an algorithm will perform as intended and without failure. Assurance is the safety valve for reliable, dependable, explainable, and fair intelligent systems. AI assurance provides the tools necessary to enable AI adoption in applications, software, hardware, and complex systems. It involves quantifying capabilities and assessing risks across deployments, including data quality (and inherent biases), algorithm performance, statistical errors, and algorithm trustworthiness and security. Data, algorithmic, and context/domain-specific factors may change over time and affect the ability of AI systems to deliver accurate outcomes. In this tutorial, we discuss the importance and the different angles of AI assurance, and present a general framework that addresses its challenges.

This tutorial covers the major aspects of AI assurance and will be organized into three parts. First, we will introduce and present the challenges in testing and evaluating AI systems. The remainder of the session (parts 2 and 3) will be interactive. In part 2, we will introduce new assurance metrics for AI systems, such as explainability, fairness, and trustworthiness, followed by a discussion with the audience. In the third part, attendees will be encouraged to participate in an open discussion to exchange ideas and challenges in AI assurance based on their own experiences. We aim to cover different angles of assurance: for instance, those that arise from the domain in which an AI system is deployed, from the architecture of the AI system itself, and from government and policy regulations. We will conclude the tutorial with a discussion of the future of AI assurance and a feedback poll (lessons learned) from the participants.
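As one concrete illustration of the kind of assurance metric discussed in part 2, the sketch below computes a simple group-fairness quantity, the demographic parity gap (the difference in positive-prediction rates between two groups). The predictions and group labels are made-up toy data, not material from the tutorial:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # binary model decisions (toy)
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # protected-attribute membership (toy)
print(demographic_parity_gap(preds, groups))  # |0.75 - 0.25| = 0.5
```

A gap near zero suggests the two groups receive positive decisions at similar rates; large gaps flag a potential fairness risk that an assurance process would investigate further.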

Sparse Predictive Hierarchies: An Alternative to Deep Learning
Sunday, May 15, 2022, 16:00 – 17:30, Leatherback

Eric Laukien, Ogma Intelligent Systems Corp

Deep Learning, now synonymous with large hierarchical networks trained by reverse-mode automatic differentiation, has become the standard method for addressing most problems in Artificial Intelligence. However, many have started to notice problems inherent to Deep Learning, such as catastrophic forgetting and very high computational cost. This tutorial describes an alternative paradigm, called Sparse Predictive Hierarchies, whose methods avoid backpropagation, i.i.d. sampling, batches, and dense representations in favor of biologically inspired, sparse, online/incremental learning with entirely local operations. It seeks to show that such methods can work in practice by applying them to compute-constrained and robotics tasks. We hope to inspire others to seek further alternatives to Deep Learning, and to discuss how alternative paradigms can overcome many of its limitations, such as catastrophic interference and high compute costs.
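To make the ingredients named above concrete, here is a generic sketch of sparse, online, local learning: top-k sparse codes, one sample at a time, with update rules that touch only the weights on active connections. This is an invented toy example of those ingredients, not Ogma's actual Sparse Predictive Hierarchies algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, k, lr = 16, 64, 4, 0.05   # only k of 64 units fire: sparse code

W_enc = rng.normal(scale=0.1, size=(n_hidden, n_in))  # encoder (competitive)
W_dec = np.zeros((n_in, n_hidden))                    # decoder (delta rule)

def step(x):
    """One online update on a single sample: no batches, no backprop."""
    top = np.argsort(W_enc @ x)[-k:]      # top-k winners form a binary code
    recon = W_dec[:, top].sum(axis=1)     # predict the input from the code
    err = x - recon
    # Local rules: each update uses only activity on its own connections.
    W_dec[:, top] += lr * err[:, None]    # delta rule on the active columns
    W_enc[top] += lr * (x - W_enc[top])   # move winning units toward the input
    return float(np.mean(err ** 2))

patterns = rng.normal(size=(8, n_in))     # a small repeating input stream
errs = [step(patterns[t % 8]) for t in range(400)]
print(errs[0], errs[-1])                  # reconstruction error drops
```

Because every update is incremental and local, such a learner can run on the kind of compute-constrained hardware (e.g., a Raspberry Pi) featured in the tutorial's demonstrations.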

The tutorial plans to include demonstrations of real robots, including the “smallest self-driving car”, the Lorcan Mini quadruped robot (both are small and easy to transport), and the “Atari Pong on a Raspberry Pi” example.