Invited Speakers

Harbir Antil

Optimization Based Deep Neural Networks with Memory

This talk will introduce novel deep neural networks (DNNs) endowed with memory, which helps overcome the vanishing-gradient challenge in deep learning. Approximation properties of these DNNs are established, and the learning problem is cast as a constrained optimization problem. The DNNs are shown to be excellent surrogates for parameterized partial differential equations (PDEs) and Bayesian inverse problems, with multiple advantages over traditional approaches.

The proposed approach is also applied to chemically reacting flow problems, which require solving a system of stiff ordinary differential equations coupled with fluid flow equations. These problems are highly challenging: in combustion, for instance, the number of reactions can exceed 100, and because the chemical reactions account for the bulk of the CPU time (over 99%), many flow and combustion problems remain beyond the capabilities of even the largest supercomputers. We apply the proposed DNNs to problems with multiple species and reactions. Experimental results show that accounting for the physical properties of the species when designing the DNNs is helpful, and the proposed approach generalizes well.
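The cost driver above is the stiffness of the chemical kinetics. As background (a generic illustration, not the authors' DNN surrogate), the sketch below shows why stiffness makes conventional integration expensive: on the model problem dy/dt = -1000(y - cos t), an explicit Euler step is stable only for dt < 2/1000, while an implicit step tolerates much larger dt at the price of a solve per step.

```python
import numpy as np

# Model stiff ODE: dy/dt = -1000 * (y - cos t). The fast decay rate
# (1000) forces explicit methods to use dt < 2/1000, even though the
# interesting solution varies on the slow O(1) timescale of cos(t).

def explicit_euler(y0, dt, steps):
    y, t = y0, 0.0
    for _ in range(steps):
        y = y + dt * (-1000.0 * (y - np.cos(t)))
        t += dt
    return y

def implicit_euler(y0, dt, steps):
    y, t = y0, 0.0
    for _ in range(steps):
        t += dt
        # the implicit update is linear in y here, so it solves exactly
        y = (y + dt * 1000.0 * np.cos(t)) / (1.0 + dt * 1000.0)
    return y

dt = 0.01                                               # 5x above the explicit stability limit
print(abs(explicit_euler(1.0, dt, 100)))                # diverges
print(abs(implicit_euler(1.0, dt, 100) - np.cos(1.0)))  # small: tracks the slow solution
```

Surrogates of the kind described in the talk aim to replace exactly this per-step implicit-solve cost for large reaction networks.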

BIO: Harbir Antil is the director of the Center for Mathematics and Artificial Intelligence (CMAI) and Professor of Mathematics at George Mason University (GMU), Fairfax VA. His areas of interest include optimization, calculus of variations, partial differential equations, numerical analysis, and scientific computing with applications in optimal control, shape optimization, free boundary problems, dimensional reduction, inverse problems, and deep learning.

He has received several awards, including a research fellowship at Brown University, an affiliate associate professorship at the University of Delaware, and the Career Connection Faculty Award at GMU. He has given numerous plenary talks at national and international meetings. His research is supported by NSF, DOE, AFOSR, and the Department of the Navy.

At GMU, Harbir has advised more than 25 students and postdocs, all of whom have been placed at top institutions. He is a strong promoter of bringing together academia, national labs, and industry, and has launched multiple successful initiatives toward this end, including CMAI, the East Coast Optimization Meeting (ECOM), and the CMAI Meets Industry Symposium.


John Baras

From Copernicus-Brahe-Kepler to Swarms: Learning Composable Laws from Observed Trajectories

A novel approach is described, rooted in the port-Hamiltonian formalism on multi-layered graphs, for modeling, learning, and analysis of the governing laws of multi-agent systems. The problem is inspired by Kepler's discovery of the laws governing planetary motion from the data collected by Tycho Brahe. We focus on learning the coordination laws of ensembles of autonomous multi-agent systems from empirical indexed trajectory data (position, velocity, etc.); natural collectives such as flocking birds, insects, and fish are included. We describe our results validating the applicability of universal port-Hamiltonian models (single- and multi-agent) as an efficient learning framework for physics- and biology-related processes. We employ methods and techniques from mathematical physics for efficient and scalable learning (symmetries, invariants, conservation laws, Noether's theorems, sparse learning, model reduction). We describe the modeling and software implementation of the methodology via deep learning platforms and efficient numerical schemes. We validate the performance on simulated ensemble data, generated by multiple potentials (various forms of Cucker-Smale models) and Boids models, with complex behaviors and maneuvers of autonomous swarms. We investigate the identification of leaders and sub-swarm motions. We apply mean-field theory to derive macroscopic (PDE) models of the ensemble that explain certain coordination laws observed in bird flocks. Finally, we apply these methods to the control of several physics-based distributed parameter systems (DPS) described by PDEs.
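For concreteness, one of the generative models mentioned above, the Cucker-Smale flocking model, can be simulated in a few lines (a standard form of the model; the kernel parameters here are illustrative):

```python
import numpy as np

def cucker_smale_step(x, v, dt=0.05, K=1.0, beta=0.5):
    """One explicit-Euler step of Cucker-Smale alignment dynamics:
    dv_i/dt = (1/N) * sum_j psi(|x_j - x_i|) * (v_j - v_i),
    with communication kernel psi(r) = K / (1 + r^2)**beta."""
    n = len(x)
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    psi = K / (1.0 + d2) ** beta                                # interaction weights
    dv = (psi[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / n
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 2))       # 20 agents in the plane
v = rng.normal(size=(20, 2))
spread0 = np.var(v, axis=0).sum()  # initial velocity disagreement
for _ in range(600):
    x, v = cucker_smale_step(x, v)
print(np.var(v, axis=0).sum() / spread0)  # disagreement shrinks: flocking emerges
```

With beta <= 1/2 this kernel yields unconditional flocking, which is why trajectories generated from such models make clean test beds for learning coordination laws from data.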

BIO: John S. Baras is a Distinguished University Professor, holding the Lockheed Martin Chair in Systems Engineering with the Institute for Systems Research (ISR) and the ECE Department at the University of Maryland, College Park (UMD). He received his Ph.D. degree in Applied Mathematics from Harvard University in 1973 and has been with UMD since then. From 1985 to 1991, he was the Founding Director of the ISR. Since 1992, he has been the Director of the Maryland Center for Hybrid Networks (HYNET), which he co-founded. He is a Life Fellow of IEEE and a Fellow of SIAM, AAAS, IFAC, AMS, and AIAA, a Member of the National Academy of Inventors (NAI), and a Foreign Member of the Royal Swedish Academy of Engineering Sciences (IVA). Major honors and awards include the 1980 George Axelby Award from the IEEE Control Systems Society, the 2006 Leonard Abraham Prize from the IEEE Communications Society, the 2017 IEEE Simon Ramo Medal, the 2017 AACC Richard E. Bellman Control Heritage Award, and the 2018 AIAA Aerospace Communications Award. In 2016 he was inducted into the University of Maryland A. J. Clark School of Engineering Innovation Hall of Fame. In June 2018 he was awarded a Doctorate Honoris Causa by his alma mater, the National Technical University of Athens, Greece. His research interests include systems, control, optimization, autonomy, communication networks, applied mathematics, signal processing and understanding, robotics, computing systems, formal methods and logic, network security and trust, systems biology, healthcare management, and model-based systems engineering. He has been awarded twenty patents and honored with many awards as an innovator and leader of economic development.

Andrea Cavallaro

Multi-modal learning for robot perception

The audio-visual analysis of the environment surrounding a robot is important for the recognition of activities, objects, interactions and intentions. In this talk I will discuss methods that enable robots to understand a dynamic scene using only their on-board sensors. These methods include a multi-modal training strategy that leverages complementary information across observation modalities to improve the testing performance of a uni-modal system; a multi-channel technique for acoustic sensing with a small microphone array mounted on a drone; an audio-visual tracker that exploits visual observations to guide the acoustic processing to localise people in 3D from a compact multi-sensor platform; and the estimation of the physical properties of unknown containers manipulated by humans to inform the control of a robot during a dynamic human-to-robot handover. I will show several examples of multi-modal dynamic scene understanding and discuss open research directions.

BIO: Andrea Cavallaro is Professor of Multimedia Signal Processing and the founding Director of the Centre for Intelligent Sensing at Queen Mary University of London, UK. He is a Fellow of the International Association for Pattern Recognition (IAPR) and a Turing Fellow at the Alan Turing Institute, the UK National Institute for Data Science and Artificial Intelligence. He received his Ph.D. in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne. He was a Research Fellow with British Telecommunications (BT) in 2004/2005 and was awarded the Royal Academy of Engineering Teaching Prize in 2007; three student paper awards on target tracking and perceptually sensitive coding at IEEE ICASSP in 2005, 2007, and 2009; and the best paper award at IEEE AVSS 2009. Prof. Cavallaro is vice chair of the IEEE Signal Processing Society Image, Video, and Multidimensional Signal Processing Technical Committee and an elected member of the IEEE Video Signal Processing and Communication Technical Committee. He is Senior Area Editor for the IEEE Transactions on Image Processing and Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology and IEEE Multimedia. He is a past Area Editor for the IEEE Signal Processing Magazine (2012-2014) and past Associate Editor for the IEEE Transactions on Image Processing (2011-2015), IEEE Transactions on Signal Processing (2009-2011), IEEE Transactions on Multimedia (2009-2010), and IEEE Signal Processing Magazine (2008-2011). He is a past elected member of the IEEE Multimedia Signal Processing Technical Committee and of the IEEE Signal Processing Society Image, Video, and Multidimensional Signal Processing Technical Committee, and a past chair of its Awards committee. Prof. Cavallaro has published over 250 journal and conference papers, one monograph, Video Tracking (2011, Wiley), and three edited books: Multi-Camera Networks (2009, Elsevier); Analysis, Retrieval and Delivery of Multimedia Content (2012, Springer); and Intelligent Multimedia Surveillance (2013, Springer).

Guillermo Gallego

Event-based vision in multi-agent scenarios

Event-based cameras are bio-inspired vision sensors whose pixels work independently from each other and respond asynchronously to brightness changes, with microsecond resolution. These advantages make it possible to tackle challenging scenarios in robotics, such as high-speed and high-dynamic-range scenes.

In this talk I will go over the characteristics of these sensors and provide examples of what they can offer in multi-agent scenarios, such as autonomous driving, SLAM and monitoring / surveillance. I will highlight the associated challenges and how to overcome some of them with tailored algorithms.

BIO: Guillermo Gallego is Associate Professor at Technische Universität Berlin, Germany, in the Department of Electrical Engineering and Computer Science. He received the PhD degree in Electrical and Computer Engineering from the Georgia Institute of Technology, USA, in 2011. From 2011 to 2014 he was a Marie Curie researcher with Universidad Politécnica de Madrid, Spain, and from 2014 to 2019 he was a postdoctoral researcher at the University of Zurich, Switzerland.

Stephanie Gil

Situational Awareness and Secure Coordination for Multi-Robot Teams

Multi-robot systems are becoming more pervasive all around us, in the form of fleets of autonomous vehicles, future delivery drones, and robotic teammates for search and rescue. As a result, it becomes increasingly critical to question the robustness of their coordination algorithms to unreliable information exchange, security threats, and corrupted data. This talk will focus on the role of controlled mobility and information exchange in enhancing the situational awareness and security of these systems. Specifically, we will discuss our work on using robot mobility to realize reliable and adaptive information exchange that supports coordination objectives, the role of communication in quantifying trust in several important multi-robot algorithms, and the use of information exchange to divulge new information about the environment. We will discuss the vulnerabilities of important multi-robot algorithms such as consensus and coverage to malicious or erroneous data, and demonstrate the potential of communication to thwart certain attacks on these algorithms, for example the Sybil attack. We will present both a theoretical framework and experimental results for enhancing multi-robot distributed algorithms through careful use of communication.
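As a toy illustration of the consensus vulnerability (a sketch of the general phenomenon, not the speaker's algorithms): in average consensus on a complete graph, a single agent that never updates and keeps broadcasting a spoofed value drags every honest agent to that value, while down-weighting it by a trust score — the kind of side information a physical communication channel can provide — restores the honest average.

```python
import numpy as np

def consensus(x0, trust, steps=200):
    """Trust-weighted average consensus on a complete graph.
    The last agent is malicious: it never updates and keeps
    broadcasting its (spoofed) value to the others."""
    x = x0.astype(float)                    # work on a copy
    w = trust / trust.sum()                 # normalized trust weights
    for _ in range(steps):
        mean = w @ x                        # trust-weighted network mean
        x[:-1] += 0.5 * (mean - x[:-1])     # honest agents move toward it
    return x

x0 = np.array([0.0, 1.0, 2.0, 3.0, 100.0])  # last value is spoofed
naive = consensus(x0, trust=np.ones(5))                    # equal trust for everyone
aware = consensus(x0, trust=np.array([1.0, 1, 1, 1, 0]))   # spoofer's trust zeroed
print(naive[0])   # honest agents are dragged all the way to the spoofed value
print(aware[0])   # the honest average (0+1+2+3)/4 = 1.5 is recovered
```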

BIO: Stephanie Gil is an Assistant Professor in the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University. Her work centers on trust and coordination in multi-robot systems, for which she has been granted an Office of Naval Research Young Investigator award (2021) and the National Science Foundation CAREER award (2019). She was also selected as a 2020 Sloan Research Fellow and is a 2021 Amazon Research Award recipient for her work at the intersection of robotics and communication. She held a Visiting Assistant Professor position at Stanford University during the summer of 2019 and an Assistant Professorship at Arizona State University from 2018 to 2020. She completed her Ph.D. work (2014) on multi-robot coordination and control and her M.S. work (2009) on system identification and model learning. At MIT she collaborated extensively with the wireless communications group NetMIT, which resulted in two recently awarded U.S. patents on adaptive heterogeneous networks for multi-robot systems and on accurate indoor positioning using Wi-Fi. She completed her B.S. at Cornell University in 2006.

Jonathan How

SLAM & RL Solutions for Multiagent Systems

This talk will cover recent work on collaborative simultaneous localization and mapping (TRO’21) and reinforcement learning (ICML’21) for multiagent systems. The first part of the talk presents certifiably correct distributed pose graph optimization (PGO), the backbone of modern collaborative simultaneous localization and mapping (SLAM). Our method is based upon a sparse semidefinite relaxation that provably provides globally optimal PGO solutions under moderate measurement noise, matching the guarantees enjoyed by the state-of-the-art centralized methods. To solve the semidefinite relaxation, we propose a low-rank optimization approach that is inherently decentralized: it requires only local communication, provides privacy protection, and is easily parallelizable. Utilizing our distributed PGO, we develop Kimera-Multi, the first distributed multi-robot system for dense metric-semantic SLAM. Our system is robust against incorrect loop closures, and builds a globally consistent metric-semantic 3D mesh model of the environment in real-time. We demonstrate the accuracy and efficiency of Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots. In the second part of the talk, we address a key challenge in multiagent reinforcement learning: How can agents learn beneficial behaviors in a multiagent setting? Here, all agents are constantly learning, leading to inherent non-stationarity in the environment and unstable learning behavior. To address this challenge, we consider non-stationary policy dynamics of all agents in the environment. We show that our theoretically grounded approach provides a general solution to the multiagent learning problem, which inherently comprises all key aspects of previous state-of-the-art approaches on this topic. We evaluate our method on a diverse suite of multiagent benchmarks and demonstrate its ability to adapt to new agents efficiently as they learn across the full spectrum of mixed incentive, competitive, and cooperative domains.
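The distributed, certifiably optimal machinery is far beyond a snippet, but the problem PGO solves is easy to show in one dimension, where it reduces to linear least squares (a toy centralized sketch, not Kimera-Multi; the real problem is nonconvex because rotations are involved):

```python
import numpy as np

# 1D pose graph: poses p0..p3 on a line. Odometry edges measure
# p_j - p_i between consecutive poses; one loop closure ties p3 back
# to p0. The measurements are mutually inconsistent (noisy), and
# least squares distributes the error over the whole graph.
edges = [
    (0, 1, 1.1),   # odometry: p1 - p0 ≈ 1.1
    (1, 2, 1.0),   # odometry: p2 - p1 ≈ 1.0
    (2, 3, 0.9),   # odometry: p3 - p2 ≈ 0.9
    (0, 3, 3.3),   # loop closure: p3 - p0 ≈ 3.3
]
n = 4
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for k, (i, j, z) in enumerate(edges):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, z
A[-1, 0] = 1.0                      # gauge fixing: anchor p0 at 0
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)                            # the 0.3 of loop-closure disagreement is
                                    # spread evenly: |residual| = 0.075 per edge
```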

BIO: Jonathan P. How is the Richard C. Maclaurin Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. He received a B.A.Sc. (Aerospace option) from the University of Toronto in 1987, and his S.M. and Ph.D. in Aeronautics and Astronautics from MIT in 1990 and 1993, respectively. Prior to joining MIT in 2000, he was an assistant professor in the Department of Aeronautics and Astronautics at Stanford University. He was the editor-in-chief of the IEEE Control Systems Magazine (2015-19) and was elected to the Board of Governors of the IEEE Control System Society (CSS) in 2019. His research focuses on robust planning and learning under uncertainty with an emphasis on multiagent systems. His work has been recognized with multiple awards, including the 2020 IEEE CSS Distinguished Member Award, the 2020 AIAA Intelligent Systems Award, the 2015 AeroLion Technologies Outstanding Paper Award for Unmanned Systems, the 2015 IEEE CSS Video Clip Contest, the 2011 IFAC Automatica award for best applications paper, and the 2002 Institute of Navigation Burka Award. He also received the Air Force Commander's Public Service Award in 2017. He is a Fellow of IEEE and AIAA and was elected to the National Academy of Engineering in 2021.

M. Ani Hsieh

Learning to Swarm Using Knowledge-based Neural Ordinary Differential Equations

Swarm dynamics govern collective swarm behaviors and are key to understanding artificial and natural swarms. However, the complexity of agent interactions and the decentralized nature of most swarms pose a significant challenge to determining single-agent behaviors from observations of the swarm. In this work, we consider learning swarm dynamics from observations of individual agents' trajectories. Using knowledge-based neural ordinary differential equations, we incorporate simple assumptions on interaction rules and on the graphical structure of decentralized agent networks into artificial neural networks as knowledge. We apply our learning scheme to a flocking swarm and demonstrate that the learnt dynamics can reproduce flocking behavior not only in the original swarm but also in larger swarms.
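A stripped-down sketch of the idea (mine, not the authors' implementation): hard-code the alignment structure of the dynamics as knowledge, learn only the unknown pairwise interaction kernel from observed trajectories, and check that the learned kernel transfers to a larger swarm. A linear basis stands in for the neural network here, with the true kernel deliberately inside the basis so the least-squares fit is exact.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_phi(r2):                  # ground-truth kernel, unknown to the learner
    return 1.0 / (1.0 + r2)

def accelerations(x, v, phi):
    """Knowledge: dv_i/dt = (1/N) * sum_j phi(|x_j - x_i|^2) * (v_j - v_i)."""
    n = len(x)
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    w = phi(d2)
    return (w[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / n

# learn the kernel as a linear combination of basis functions
bases = [lambda r2: 1.0 / (1.0 + r2), lambda r2: np.ones_like(r2), lambda r2: r2]
rows, targets = [], []
for _ in range(5):                 # five observed snapshots of a 10-agent swarm
    x, v = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))
    rows.append(np.stack([accelerations(x, v, b).ravel() for b in bases], axis=1))
    targets.append(accelerations(x, v, true_phi).ravel())
c, *_ = np.linalg.lstsq(np.concatenate(rows), np.concatenate(targets), rcond=None)

def learned_phi(r2):
    return sum(ck * b(r2) for ck, b in zip(c, bases))

# generalization: the learned kernel also predicts a *larger* (30-agent) swarm
x, v = rng.normal(size=(30, 2)), rng.normal(size=(30, 2))
err = np.max(np.abs(accelerations(x, v, learned_phi) - accelerations(x, v, true_phi)))
print(err)
```

Replacing the basis expansion with a small neural network inside an ODE solver gives the neural-ODE version of the same construction; the key design choice, shared with the abstract, is that only the interaction rule is learned while the swarm structure is imposed.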

BIO: M. Ani Hsieh is a Research Associate Professor in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. She is also the Deputy Director of the General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory. Her research interests lie at the intersection of robotics, multi-agent systems, and dynamical systems theory. Hsieh and her team design algorithms for estimation, control, and planning for multi-agent robotic systems with applications in environmental monitoring, estimation and prediction of complex dynamics, and design of collective behaviors. She received her B.S. in Engineering and B.A. in Economics from Swarthmore College and her PhD in Mechanical Engineering from the University of Pennsylvania. Prior to Penn, she was an Associate Professor in the Department of Mechanical Engineering and Mechanics at Drexel University. Hsieh is the recipient of a 2012 Office of Naval Research (ONR) Young Investigator Award and a 2013 National Science Foundation (NSF) CAREER Award.

Isaac Kaminer

Abe Clark

Modeling Large-Scale Adversarial Swarm Engagements using Direct Methods of Optimal Control

We theoretically and numerically study the problem of optimal control of large-scale autonomous systems under explicitly adversarial conditions, including probabilistic destruction of agents during the simulation. Large-scale autonomous systems often include an adversarial component, where different agents or groups of agents explicitly compete with one another. An important component of these systems that is not included in current theory or modeling frameworks is the random destruction of agents in time. In this case, the modeling and optimal control framework should consider the attrition of agents as well as their positions. We propose and test three numerical modeling schemes in which the survival probabilities of all agents are smoothly and continuously decreased in time, based on the relative positions of all agents during the simulation. In particular, we apply these schemes to the case of agents defending a high-value unit from an attacking swarm. We show that these schemes successfully capture this situation, provided that attrition and spatial dynamics are coupled. Our results are relevant to an entire class of adversarial autonomy problems in which the positions of agents and their survival probabilities are both important.
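One way such a scheme can look (an assumed functional form for illustration, not the authors' exact model): each attacker's survival probability decays continuously at a rate set by its distance to the defenders, so attrition and spatial dynamics are coupled.

```python
import numpy as np

def simulate(attackers, defenders, hvu, dt=0.02, steps=150, lam0=1.0, sigma=1.0):
    """Attackers move at unit speed toward the high-value unit (HVU);
    survival probabilities decay as  dP_i/dt = -P_i * sum_d lam(r_id),
    with engagement rate lam(r) = lam0 * exp(-r^2 / sigma^2)."""
    x = attackers.astype(float)                      # work on a copy
    P = np.ones(len(x))                              # survival probabilities
    for _ in range(steps):
        to_hvu = hvu - x
        x = x + dt * to_hvu / np.linalg.norm(to_hvu, axis=1, keepdims=True)
        d2 = np.sum((x[:, None, :] - defenders[None, :, :]) ** 2, axis=-1)
        rate = lam0 * np.exp(-d2 / sigma**2).sum(axis=1)
        P *= np.exp(-dt * rate)                      # exact decay over one step
    return P

hvu = np.array([0.0, 0.0])
defenders = np.array([[1.0, 0.0], [-1.0, 0.0]])      # screen around the HVU
attackers = np.array([[4.0, 0.0], [0.0, 4.0]])       # head-on vs. flanking approach
P = simulate(attackers, defenders, hvu)
print(P)   # the head-on attacker, which passes a defender, suffers more attrition
```

Because the survival probabilities are smooth functions of the trajectories, an objective built from them stays differentiable, which is what makes direct methods of optimal control applicable.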

BIOS: Isaac Kaminer received his PhD in Electrical Engineering from the University of Michigan in 1992. Before that, he spent four years at Boeing Commercial, first as a control engineer in the 757/767/747-400 Flight Management Computer Group and then as an engineer in the Flight Control Research Group. Since 1992 he has been with the Naval Postgraduate School, first in the Department of Aeronautics and Astronautics and currently in the Department of Mechanical and Aerospace Engineering, where he is a Professor. He has over 35 years of experience in the development and flight testing of guidance, navigation, and control algorithms for both manned and unmanned aircraft. His more recent efforts have focused on the development of coordinated control strategies for multiple UAVs, vision-based guidance laws for multiple UAVs, and swarming and counter-swarming strategies for unmanned systems. Professor Kaminer has co-authored more than 150 refereed journal and conference publications.

Abe Clark is an Assistant Professor in the Department of Physics at the Naval Postgraduate School. Before NPS, he worked at Yale University for three years as a postdoctoral associate. He received a PhD in Physics from Duke University in 2014 and, before that, BS and MS degrees in Electrical and Computer Engineering from Texas Tech University. His previous research has been on soft matter systems, which are disordered systems of macroscopic objects (like sand grains or particles in a dense suspension). He is an expert in molecular dynamics (MD) simulations, which are used to model the dynamics of many-body systems like particulate materials or swarms of autonomous vehicles. Since 2013, he has authored over 20 refereed journal publications and a book chapter.

Aljosa Osep

Tracking Every Pixel and Object

Spatio-temporal interpretation of raw sensory data is vital for intelligent agents to understand how to interact with the environment and perceive how trajectories of moving agents evolve in the 4D continuum, i.e., 3D space and time. In this talk, I will discuss two open challenges: first, I will talk about a holistic, dynamic scene understanding and present our recent work on segmenting and tracking every point and pixel. Then, I will move on to the challenging problem of tracking every object, i.e., object tracking in the open world, in which the set of target classes that need to be detected and tracked is unbounded. In such scenarios, intelligent agents encounter unknown dynamic objects that were not observed during the model training.

BIO: Aljosa Osep is currently a postdoc at the Robotics Institute of Carnegie Mellon University in Pittsburgh and the Dynamic Vision and Learning Group at the Technical University of Munich. When not exploring the world, he works on research problems that lie at the intersection of computer vision, robotics, and machine learning. His current research focus is on scaling object detection, segmentation, and tracking methods to the open world, in which future robots will need to operate.