Topic Area

CDS19: Controlling Dynamical Systems

Topic area leaders



Prof. Tobi Delbruck <tobi@ini.uzh.ch> (Univ. of Zurich and ETH Zurich)

Prof. Aaron Ames <ames@caltech.edu> (Caltech)

Invited guests

(all confirmed)

Dr. Matthew Cook (UZH-ETH Zurich)

Dr. Aditya Nair (U Washington)

Background

Control of dynamical systems has a long history at Telluride, dating back some 15 years to the bipedal walkers of Ralph Etienne-Cummings’ and Tony Lewis’ RedBot robot, and to postural measurement and control with John Tapson, Bruce Mortimer, John Jeka, and Mark Tilden. Apart from the implementation of adaptive controllers in spiking networks using the NEF framework, however, the topic has been largely absent over the last decade.

Main Aim

CDS19 will explore new ways to couple nonlinear control with deep learning. Given recent progress in the field, we will revisit control with the aim of marrying it with deep learning and with hardware accelerators for recurrent neural networks.

Basic research questions

The main research questions addressed in this topic area, in the context of Telluride, are:

  1. Can we bring practical work on advanced control engineering to Telluride to better bridge the understanding gap between technology and biology? In particular, the hierarchy present in the human motor system suggests a natural structure for combining learning and control, especially the central pattern generators in the spinal cord that play an essential role in locomotion. Biological systems seem able to quickly assemble control prototypes from prior primitives, although mastery of any particular skill is acquired by reinforcement learning that can take hundreds of thousands of trials. In locomotion, this learning typically occurs within the spinal cord, which mirrors the proposed approach: MPC will play the role of the nominal patterns in the spinal cord, and these will be modulated in a safe manner through episodic learning (see the sketch after this list).
  2. Can we use deep learning technology to bring learning more deeply into control? In particular, can we learn, from small amounts of training data, models with sufficient generality and stability to address real control problems? This strategy will require an understanding of how the learning and control feedback loops are coupled and how they interact. Special attention must be given to learning with limited data in a way that preserves the beneficial properties and behaviors of the system. We will therefore need to explore the exploration vs. exploitation tradeoff in systems that are not inherently stable.
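As a concrete illustration of question 1, here is a minimal sketch of a nominal model-predictive controller modulated by a learned residual, assuming a damped-pendulum plant and a random-shooting NMPC loop. All function names (pendulum_step, shooting_mpc, residual) and parameters are hypothetical, not part of an existing codebase.

import numpy as np

def pendulum_step(state, u, dt=0.02, g=9.81, l=1.0, b=0.1):
    # Nominal damped-pendulum model; state = (theta, theta_dot), theta = 0 hanging down.
    theta, omega = state
    domega = -(g / l) * np.sin(theta) - b * omega + u
    return np.array([theta + dt * omega, omega + dt * domega])

def shooting_mpc(state, residual, horizon=20, n_samples=256, u_max=2.0):
    # Random-shooting NMPC: sample action sequences, roll out the nominal
    # model plus the learned residual correction, and return the first action
    # of the cheapest rollout. The cost drives the pendulum to the upright
    # position (theta = pi) while penalizing control effort.
    U = np.random.uniform(-u_max, u_max, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for i in range(n_samples):
        s = state.copy()
        for t in range(horizon):
            s = pendulum_step(s, U[i, t]) + residual(s, U[i, t])
            costs[i] += (np.cos(s[0]) + 1.0) ** 2 + 0.01 * U[i, t] ** 2
    return U[np.argmin(costs), 0]

# Before any episodic data is collected the residual is zero, i.e. pure MPC;
# episodic learning would replace this with a model fitted to tracking errors.
residual = lambda s, u: np.zeros(2)
u0 = shooting_mpc(np.array([0.1, 0.0]), residual)

The point of the structure is that the nominal model keeps the controller safe and predictable, while the residual term is the only part that learning is allowed to touch, echoing how spinal pattern generators are modulated rather than replaced.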

Potential projects

  1. Controlling a powered prosthesis, specifically AMPRO: sensing and learning of the human model and intent as the device operates.
  2. Controlling a dual-DVS pencil-balancer robot so that it achieves more stable, efficient, and human-like balancing.
  3. Controlling the Georgia Tech or Slasher robot car for minimum lap time.
  4. Training an RNN to model the dynamics of AMPRO using recorded control and sensor data.
  5. Training an RNN to model a damped pendulum and an inverted pendulum, and controlling them via NMPC (see the sketch after this list).
  6. Controlling an inexpensive inverted pendulum (purchased from AliExpress) via NMPC, using an RNN to model the pendulum.
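For projects 4-6, the following is a minimal sketch of the RNN modeling step, assuming PyTorch and recorded (state, action, next-state) rollouts. The DynamicsRNN class, tensor shapes, and hyperparameters are illustrative assumptions, not the actual AMPRO or pendulum data format.

import torch
import torch.nn as nn

class DynamicsRNN(nn.Module):
    # Small GRU that predicts the next state from state/action history.
    def __init__(self, state_dim=2, action_dim=1, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(state_dim + action_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, states, actions, h=None):
        x = torch.cat([states, actions], dim=-1)   # (batch, time, state+action)
        out, h = self.rnn(x, h)
        return self.head(out), h                   # predicted next states

model = DynamicsRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# states, actions, next_states: tensors from recorded rollouts with shapes
# (batch, time, 2), (batch, time, 1), and (batch, time, 2) respectively.
def train_step(states, actions, next_states):
    pred, _ = model(states, actions)
    loss = loss_fn(pred, next_states)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

Once trained, a model of this kind could stand in for the nominal plant inside the NMPC rollout loop sketched earlier, which is the coupling between learning and control that these projects target.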

Introductory hands-on tutorial syllabus (1-2h slots)

  1. Basics of model predictive control with software exercises - (Evangelos Theodorou, Georgia Tech)
  2. Basics of PyTorch to train a small RNN for prediction - Chang Gao (UZH-ETH Zurich)
  3. Basics of walkers, role of nonlinear dynamics and control, application to assistive devices, etc - Rachel Gehlhar (Caltech)
  4. Using the DeltaRNN (DRNN) accelerator developed at INI - Chang Gao (UZH-ETH Zurich)
  5. Using DVS/DAVIS event cameras in jAER, ROS, cAER, and pAER - Tobi Delbruck (UZH-ETH Zurich)