Schedule

This workshop will be held virtually. It consists of invited talks, contributed talks, and a panel discussion.

Panel Discussions

Panel discussions took place on May 27, 2020. The video is accessible using the following link:

https://vimeo.com/423695028

Slack Channel Discussions:

Join the ICRA 2020 Slack workspace and its #ws16 channel for discussions from May 31 to June 30. Here is the link:

https://icra20.slack.com/app_redirect?channel=ws16

We will have a live Slack discussion session at 12 PM UTC (i.e., 8 AM EDT) on June 4.

Invited Talks

Title: Zero Trust Architecture in Robotics

Speaker: Víctor Mayoral Vilches (Alias Robotics)

Video URL: https://www.loom.com/share/9c0e2cd4f9da4ac9b9f973452f0249b5

Abstract: Security is a prerequisite for ensuring safety in robotics and must be ensured in both inter- and intra-robot communications. Traditional cyber security is often based on the castle-and-moat approach, also known as perimeter security, to protect systems from malicious attacks. Like medieval castles protected by stone walls and moats, organizations and their cyber-physical systems fortify their network perimeters with firewalls, proxy servers, honeypots, and other intrusion prevention and detection systems (IPS/IDS). The problem with this approach is that once attackers gain access to the network, they have free rein over everything inside.

In this talk, Víctor Mayoral Vilches will introduce a security architecture for robots that makes no trust assumptions and demands strict identity verification for every person, device, or sub-component trying to access resources on a robot network (internal or external), regardless of whether it sits inside or outside the network perimeter.
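To make the contrast with perimeter security concrete, the following is a minimal sketch of a zero-trust access gate: every request must present a verifiable identity credential bound to the requested resource, and network location is never consulted. All names (the device IDs, resources, and helper functions) are hypothetical illustrations, not part of the architecture presented in the talk.

```python
# Minimal zero-trust gate sketch (hypothetical names): every request to a
# robot resource must present a verifiable identity credential, whether it
# originates inside or outside the robot network.
import hashlib
import hmac

SECRET = b"per-device-provisioned-key"  # provisioned per device at enrollment

def sign(device_id: str, resource: str) -> str:
    """Issue an HMAC credential binding a device identity to one resource."""
    msg = f"{device_id}:{resource}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def authorize(device_id: str, resource: str, token: str, acl: dict) -> bool:
    """Verify identity first, then check the ACL; network origin is never trusted."""
    expected = sign(device_id, resource)
    if not hmac.compare_digest(expected, token):
        return False  # identity not verified: deny, even for "internal" callers
    return resource in acl.get(device_id, set())

acl = {"lidar-driver": {"/scan"}}
token = sign("lidar-driver", "/scan")
print(authorize("lidar-driver", "/scan", token, acl))      # True
print(authorize("lidar-driver", "/cmd_vel", token, acl))   # False: token is bound to /scan
```

Note how the credential is scoped to a single resource, so even a verified device cannot reuse it elsewhere; this is the "strict identity verification for every access" idea in miniature.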


Title: ARSimplex: A Unified Framework against Coordinated Cyber-Physical Attacks

Speaker: Xiaofeng Wang (University of South Carolina)

Video URL: https://drive.google.com/file/d/1hyIj4v6fET2qlADm9yylABZU9y-wNxNf/view

Abstract: As the complexity of robotic systems increases, it becomes more and more challenging to ensure system resilience. Historically, robust fault-tolerant control (RFTC) theory, software fault tolerance, and security technologies were developed independently, with different assumptions and models. For instance, the Simplex architecture has been shown to be an efficient tool for addressing software failures and attacks in control systems. When physical damage or faults exist, however, Simplex may not function correctly, because failures in physical components can change the system dynamics, so the original Simplex design may no longer be appropriate for the new faulty dynamics. On the other hand, RFTC tools always require software correctness. To ensure proper operation against coordinated cyber-physical attacks (CCPA), we present a unified control software architecture, called ARSimplex (Attack-Resilient Simplex), that seamlessly integrates RFTC techniques into the Simplex architecture. It includes an uncertainty monitor, a high-performance controller (HPC), a robust high-assurance controller (RHAC), and the decision logic that triggers the switch between controllers. ARSimplex guarantees system resilience against cyber attacks by switching control authority from the HPC to the RHAC, taking advantage of the RHAC's adaptivity against physical damage and failures; this leads to verifiable and certifiable software architectures with a higher level of system resilience.
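The Simplex-style monitor-and-switch pattern described above can be sketched in a few lines. This toy example (scalar dynamics, an arbitrary deviation threshold, and simple gain controllers are all my assumptions, not the ARSimplex design) shows the decision logic handing control authority from the HPC to the RHAC when the observed state deviates from the model prediction.

```python
# Simplex-style switching sketch (illustrative dynamics and thresholds, not
# the ARSimplex design itself): an uncertainty monitor hands control to the
# robust high-assurance controller when the state deviates from prediction.

def hpc(x):    # high-performance controller: aggressive, unverified
    return -2.0 * x

def rhac(x):   # robust high-assurance controller: conservative, verified
    return -0.5 * x

def uncertainty_monitor(x_observed, x_predicted, tol=0.2):
    """Flag an anomaly when observation deviates from the model prediction."""
    return abs(x_observed - x_predicted) > tol

def step(x, attack=0.0, dt=0.1):
    x_pred = x + dt * hpc(x)             # what the nominal model expects
    x_next = x + dt * hpc(x) + attack    # what actually happens (possibly attacked)
    if uncertainty_monitor(x_next, x_pred):
        return x + dt * rhac(x), "RHAC"  # decision logic switches authority
    return x_next, "HPC"

print(step(1.0))               # nominal step: HPC keeps control
print(step(1.0, attack=0.5))   # injected deviation: switch to RHAC
```

The first step stays with the HPC because observation matches prediction; the attacked step trips the monitor, and the conservative RHAC drives the next state instead.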

Short Bio: Xiaofeng Wang is an associate professor in the Department of Electrical Engineering at the University of South Carolina (UofSC), Columbia. He earned his B.S. degree in Applied Mathematics and M.S. in Operations Research and Control Theory from East China Normal University, China, in 2000 and 2003, respectively, and obtained his PhD degree in Electrical Engineering at the University of Notre Dame in 2009. After that, he worked as a postdoctoral research associate at the University of Illinois at Urbana-Champaign before joining UofSC. His research interests include robotics and control, cyber-physical systems, and autonomous systems. He serves as an associate editor for the IEEE CSS Conference Editorial Board and the Journal of The Franklin Institute. He received the best paper award at the Annual Conference of the Prognostics and Health Management Society in 2014.


Title: A Theory of Hypergames on Graphs for Security with Cyber Deception

Speaker: Jie Fu (Worcester Polytechnic Institute)

Video URL: https://www.dropbox.com/s/hysc5d92agddbhh/Security_workshop_cyberdeception_ICRA2020.mov?dl=0

Abstract: With increasingly sophisticated attacks on cyber-physical systems, deception has been developed to improve system security and safety. The means of deception is obfuscating information or providing misleading information. In this talk, a class of hypergames is introduced for modeling the dynamic, sequential interaction between an attacker and a defender with temporal logic objectives. To mitigate the disadvantages of the defender, we introduce two deceptive mechanisms: action deception and payoff deception. Using the solution concepts of hypergames, we show how to synthesize provably secure and stealthy deceptive defense strategies against intelligent and adaptive adversaries. We will also discuss fundamental limits of deception in games with qualitative temporal logic objectives. This preliminary analysis can be leveraged to synthesize provably secure communication networks for multi-robot systems and to assure security in network-controlled robots against adversarial attacks.

Short Bio: Jie Fu is an assistant professor with the Dept. of Electrical and Computer Engineering, Robotics Engineering Program, at the Worcester Polytechnic Institute. Her research interests are in probabilistic planning and control for stochastic robotic systems and game-theoretic synthesis of reactive systems, with a particular focus on graph games with asymmetric, incomplete information for security and defense applications. Prior to joining WPI, she was a postdoctoral researcher at the University of Pennsylvania, where she conducted research in reinforcement learning with temporal logic specifications and human-robot shared autonomy. She received her PhD in Mechanical Engineering from the University of Delaware in 2013. Dr. Fu led the WPI team that won first place in the ICRA 2016 Formal Methods for Robotics Challenge.


Title: Towards Resilience Against Non-Random Attacks

Speaker: Nicola Bezzo (University of Virginia)

Video URL: https://www.loom.com/share/a59bad09935d4faa8dc587d59478a8bb

Abstract: In this presentation I will discuss some of the work my group is doing in the area of CPS cyber-security applied to autonomous systems, focusing primarily on the detection of non-random attacks that aim to hijack a vehicle or a swarm of vehicles toward undesired states.

Short Bio: Nicola Bezzo is an Assistant Professor with the Department of Engineering Systems and Environment and the Department of Electrical and Computer Engineering at the University of Virginia (UVA). Prior to joining UVA in 2016, he was a Postdoctoral Researcher at the PRECISE Center, in the Department of Computer and Information Science at the University of Pennsylvania (UPenn) where he worked on topics related to robotics and cyber-physical systems security. He received a Ph.D. degree in Electrical and Computer Engineering from the University of New Mexico where he focused on the development of theories for motion planning of heterogeneous aerial and ground robotic systems under communication constraints. Prior to his Ph.D., he received both M.S. and B.S. degrees in Electrical Engineering with honors (summa cum laude) from Politecnico di Milano, Italy. At UVA he leads the Autonomous Mobile Robots Lab with research focused on safe and resilient motion planning and control of autonomous vehicles under uncertainties. He is also part of the Link Lab.


Contributed Talks and Posters:

Poster #1: A Dynamic Game Framework for Robot Deception with an Application to Deceptive Pursuit-Evasion

Authors: Linan Huang and Quanyan Zhu (New York University)

Video URL: https://www.loom.com/share/21b3177b1a1a4cdb83c32af9ff65b267

Abstract: Recent advances in automation and adaptive control strategies in multi-agent systems enable robots to use deception to accomplish their objectives. We study rational and persistent deception among intelligent robots to enhance the security and operation efficiency of autonomous vehicles. We present an N-person K-stage nonzero-sum game with an asymmetric information structure where each robot's private information is modeled as a random variable, or its type. The deception is persistent as each robot's private type remains unknown to other robots for all stages. The deception is rational as robots aim to achieve their deception goals at minimum cost. Each robot forms a belief on others' types based on state observations and updates it via the Bayesian rule. The level-t perfect Bayesian Nash equilibrium is a natural solution concept of the dynamic game. It demonstrates the sequential rationality of the agents, maintains belief consistency with the observations and strategies, and provides a reliable prediction of the outcome of the deception game. In particular, in the linear-quadratic setting, we derive a set of extended Riccati equations, obtain the explicit form of the affine state-feedback control, and develop an online computational algorithm. We define the concepts of deceivability and the price of deception to evaluate the strategy design and assess the deception outcome. The proposed model has wide applications including cooperative robots, pursuit and evasion, and human-robot teaming. Pursuit-evasion games are used as case studies, where the evader aims to deceptively reach the target and the pursuer keeps her maneuverability as private information. The pursuer achieves a lower cumulative cost under the proposed policy than under the direct-following and conservative policies.
We have proposed multi-dimensional metrics, such as the stage of truth revelation, the endpoint distance, and the cumulative cost, to measure the deception impact throughout the stages. We have concluded that Bayesian learning can largely reduce the impact of initial belief manipulation and sometimes results in a win-win situation. Increasing the pursuer's maneuverability can also reduce the endpoint distance and her cumulative cost, though with only a marginal effect. A robot is more deceivable, i.e., less learnable, when its potential types are less distinguishable. Finally, we have found that the idea of using deception to counter deception is not always effective. In particular, it is beneficial for the low-maneuverability pursuer to disguise as a high-maneuverability pursuer, but not vice versa.
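The belief update driving this game can be illustrated with a toy Bayes step over the opponent's private type. The type labels and the observation likelihoods below are illustrative assumptions, not the paper's linear-quadratic model; the sketch just shows why less distinguishable types (likelihoods that are nearly equal across types) leave the belief, and hence the deception, largely intact.

```python
# Bayesian belief update sketch for type uncertainty (toy observation model,
# not the paper's linear-quadratic derivation): the evader maintains a belief
# over the pursuer's private maneuverability type and updates it via Bayes' rule.

def bayes_update(belief, likelihoods):
    """belief: P(type); likelihoods: P(observation | type). Returns the posterior."""
    posterior = {t: belief[t] * likelihoods[t] for t in belief}
    z = sum(posterior.values())  # normalizing constant
    return {t: p / z for t, p in posterior.items()}

belief = {"high-maneuver": 0.5, "low-maneuver": 0.5}

# A sharp observed turn is far more likely under the high-maneuverability type:
# the belief shifts decisively, so this pursuer is easy to "learn".
belief = bayes_update(belief, {"high-maneuver": 0.9, "low-maneuver": 0.1})
print(belief)  # {'high-maneuver': 0.9, 'low-maneuver': 0.1}

# When the types are barely distinguishable, the same update moves the belief
# very little: the robot is "more deceivable" in the sense described above.
flat = bayes_update({"high-maneuver": 0.5, "low-maneuver": 0.5},
                    {"high-maneuver": 0.55, "low-maneuver": 0.45})
print(flat)  # {'high-maneuver': 0.55, 'low-maneuver': 0.45}
```

With uniform priors the posterior is just the normalized likelihoods, which makes the distinguishability effect easy to read off.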


Poster #2: Modeling a Honeypot Architecture for a Robotic Scenario

Authors: Francisco J. Rodriguez Lera, Angel Manuel Guerrero Higueras, Camino Fernandez Llamas, and Vicente Matellan Olivera (Escuela de Ingenierías Industrial and SCAYLE, Spain)

Abstract: https://drive.google.com/file/d/1L-lXefbo8lplOQSX_MrjR-VNpDhBuIMe/view?usp=sharing

Video URL: https://www.loom.com/share/3fa57223791a490ab0b7cf63425de3e1


Poster #3: Static Information Flow Control for Robotics

Authors: Ruffin White and Henrik I. Christensen (UC San Diego, USA); Gianluca Caiazza, Pietro Ferrara, and Agostino Cortesi (Ca’ Foscari University of Venice, Italy)

Abstract: As robotics and networked infrastructures become further integrated, whether to augment onboard computational power and memory or to extend environmental awareness, as with cloud-based robotics, the additional connectivity bears additional risks, broadening the attack surface and threatening data privacy. To regulate the propagation of data acquired by cyber-physical systems within sensitive environments, e.g. domestic, healthcare, or home service robots, conventional measures such as authenticated encryption and Access Control (AC) are commonly applied. Dually, static Information Flow Control (IFC) can complement these measures to ensure the AC policy is enforced correctly and prevents any disclosure of data to subjects of insufficient security levels. Additionally, the IFC modeling process can help guide design choices, as in allocating trust to subjects and auditing privileges. In this work, we explore the use of static IFC as a means to formally verify the soundness of AC policies within robotic Industrial Internet of Things (IIoT) middlewares such as ROS2 and Secure Data Distribution Service.

Video URL: https://www.loom.com/share/51f571d9f40b42cbbd5e8b998421466f
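The core static-IFC check behind this poster's approach can be sketched as a lattice comparison over declared flows. The security levels, node names, and flow list below are hypothetical examples of mine, not the ROS2/SROS2 policy format: the sketch only shows how a static pass over an AC policy can flag flows from a higher confidentiality level to a lower one before the system ever runs.

```python
# Static IFC check sketch (hypothetical levels and node names, not the ROS2
# policy format): verify that an access-control policy permits no data flow
# from a higher confidentiality level to a lower one.

LEVELS = {"public": 0, "internal": 1, "sensitive": 2}

# Each subject (node) has a clearance; each permitted flow is (source, destination).
clearance = {
    "camera_node": "sensitive",
    "logger": "internal",
    "cloud_uploader": "public",
}
allowed_flows = [("camera_node", "logger"), ("logger", "cloud_uploader")]

def violations(clearance, flows):
    """A flow src -> dst leaks whenever the source's level exceeds the destination's."""
    return [(src, dst) for src, dst in flows
            if LEVELS[clearance[src]] > LEVELS[clearance[dst]]]

print(violations(clearance, allowed_flows))
# Both declared flows move data downward in the lattice, so both are flagged
# before deployment, even though the AC policy itself permits them.
```

This is the complementarity the abstract describes: the AC policy answers "who may talk to whom", while the IFC pass checks whether those permissions, composed together, ever leak high-level data downward.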