The Digest

Explorations in Dynamics

Discussion by Ben Gravell, August 5, 2020

We came across these interesting works that are somewhat tangential to our interests, but fascinating all the same.

Physics in N dimensions

Marc ten Bosch developed a formulation of rigid body dynamics, expressed in geometric algebra, that is independent of the dimension of the space. The paper also solves the problem of collision resolution, which is handled elegantly for convex polytopes; these suffice to represent typical objects to a high level of accuracy. The paper was accepted to SIGGRAPH 2020, and the method has been implemented in the commercially available 4D Toys as well as an upcoming computer game. Two Minute Papers also covered this game/research.

Reaction-diffusion dynamics

Reaction-diffusion dynamics generalize the diffusion-only dynamics readers may be familiar with from the field of networked systems, which can be thought of as a discretization of the diffusion (heat) partial differential equation. When the reaction term is added, rather interesting behavior emerges due to the nonlinearities in the dynamics.
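As a concrete sketch, one explicit-Euler step of the Gray-Scott reaction-diffusion model in 1D fits in a few lines of Python; the parameter values below are illustrative assumptions, not taken from any particular regime:

```python
# Minimal 1D Gray-Scott reaction-diffusion step (explicit Euler).
# u is the "feed" chemical, v the "catalyst"; all parameters are illustrative.
def laplacian(w):
    n = len(w)
    return [w[(i - 1) % n] - 2 * w[i] + w[(i + 1) % n] for i in range(n)]

def step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    Lu, Lv = laplacian(u), laplacian(v)
    u_new = [u[i] + dt * (Du * Lu[i] - u[i] * v[i] ** 2 + F * (1.0 - u[i]))
             for i in range(len(u))]
    v_new = [v[i] + dt * (Dv * Lv[i] + u[i] * v[i] ** 2 - (F + k) * v[i])
             for i in range(len(v))]
    return u_new, v_new

# The uniform state u = 1, v = 0 is a fixed point; perturbing v locally
# switches on the nonlinear reaction term u*v^2, and patterns can emerge.
n = 64
u, v = [1.0] * n, [0.0] * n
for i in range(28, 36):
    v[i] = 0.5
for _ in range(100):
    u, v = step(u, v)
```

Dropping the reaction terms leaves just the discretized diffusion equation, i.e. the linear networked-systems dynamics mentioned above.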

This page shows Robert Munafo's exploration of different regimes of the parameter space of the reaction-diffusion dynamics.

One particularly interesting setting is called the "U-skate world," which exhibits emergent patterns somewhat similar to Conway's Game of Life, e.g. the appearance of "still lifes," small portions of the state space that retain their structure over time, and "gliders," small portions of the state space that both retain their structure over time and move across the state space through a delicate balance of the forces over their bodies.

SmoothLife

Conway's Game of Life was a seminal development in computer science that spurred thought about information theory. It is a cellular automaton: a simple set of rules, meant to mimic population pressures and incentives among living creatures, played on a grid where each cell has a binary state updated at discrete time steps. Stephan Rafler generalized the mathematical formulation of this game to continuous state and time, producing behavior qualitatively similar to Conway's Game but with much more organic and arguably richer phenomena, including self-propelled gliders and chain-like structures. Check out the paper, the code, and some YouTube videos. Lex Fridman also interviewed Stephen Wolfram, another giant of computing, wherein they discussed cellular automata.
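For reference, the discrete rules being generalized fit in a few lines; this sketch plays the standard game on a toroidal grid, whereas SmoothLife replaces the binary states and integer neighbor counts with continuous fields integrated over disk-shaped neighborhoods:

```python
# One step of Conway's Game of Life on a toroidal (wrap-around) grid.
def life_step(grid):
    rows, cols = len(grid), len(grid[0])

    def neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    # A live cell survives with 2 or 3 neighbors; a dead cell is born with 3.
    return [[1 if (grid[r][c] and neighbors(r, c) in (2, 3))
             or (not grid[r][c] and neighbors(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

# The "blinker" is a period-2 oscillator: two steps return the original grid.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
```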


Respect the Unstable

Inaugural Bode Lecture by Dr. Gunter Stein, CDC, 1989

Discussion by Ben Gravell, April 23, 2020

Keywords: Stability, robustness, fundamental limitations, Bode integral, sensitivity, waterbed effect

Summary

In this classic and entertaining lecture, Dr. Gunter Stein addresses fundamental limitations in control pertaining to stability, performance, and robustness. In particular, the Bode integral is advocated as a fundamental "conservation law" of control, despite its relative obscurity at the time, which arguably persists to this day. Other highlights include demonstrations of the effect of unstable poles on both the ubiquitous inverted pendulum and the then-state-of-the-art X-29 aircraft with forward-swept wings; these two systems are more similar than a passing glance would suggest! Dr. Stein also made prescient observations on two twin trends: ever more dangerous systems coming under automatic control, and the increasing detachment of control theoreticians from those systems, whether through over-emphasis on formal mathematics or through automatic "optimal" control synthesis. These observations are perhaps more sobering than ever today in light of recent developments in autonomous driving systems, power networks, and data-driven learning control.
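The "conservation law" in question is the Bode sensitivity integral; in its standard form (paraphrased here), for an open-loop transfer function L(s) with at least two more poles than zeros and right-half-plane poles p_k, the sensitivity function S = 1/(1 + L) satisfies

```latex
\int_0^\infty \ln \lvert S(j\omega) \rvert \, d\omega \;=\; \pi \sum_k \operatorname{Re}(p_k)
```

Pushing |S| below 1 over some frequency band must be paid for by |S| above 1 elsewhere (the "waterbed effect"), and unstable open-loop poles make the right-hand side positive, tightening the tradeoff; hence the lecture's title.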

Watch the video here and read the paper here.

Multi-Sensor Fusion in Automated Driving: A Survey

Work by Zhangjing Wang, Yu Wu, and Qingqing Niu, IEEE Access, Volume 8, 2020

Discussion by Sleiman Safaoui, April 18, 2020

Keywords: Multi-Sensor Fusion, Autonomous Driving, Tracking, Data Association

Summary

The authors present a survey of multi-source, heterogeneous information fusion for autonomous driving vehicles. They discuss three main topics:

  1. Sensors and communications: identifying the most popular sensors and communication schemes for autonomous driving and their advantages and disadvantages
  2. Fusion: dividing fusion approaches into four main strategies: discernible units, feature complementarity, target attributes of different sensors, and decision making of different sensors.
  3. Data association: describing methods for data association for single or multiple sensors detecting sparse or multiple targets.

Read the paper on IEEE here.

Intro to Reinforcement Learning

Tutorial by Ben Gravell, February 2020

Summary

In this tutorial we walk through a basic introduction to reinforcement learning.

Intro to RL.pdf

Optimal Inequalities in Probability Theory: A Convex Optimization Approach

Work by Dimitris Bertsimas & Ioana Popescu, SIAM Journal on Optimization, Volume 15, Number 3, Pages 780-804, 2005

Discussion by Venkatraman Renganathan, February 27, 2020

Keywords: Bounds in Probability Theory, Higher-Order Moment-Based SDP

Summary

The authors investigate the problem of obtaining the best possible bounds on the probability that a random variable belongs to a given set, given information on some of its moments (up to kth order), known as the (n, k, Ω)-bound problem. They formulate it as an optimization problem and attack it with modern optimization theory, in particular convex and semidefinite programming, providing concrete answers to three key questions:

  1. Are such bounds “best possible”; that is, do there exist distributions that match them?
  2. Can such bounds be generalized in multivariate settings, and in what circumstances can they be explicitly and/or algorithmically computed?
  3. Is there a general theory based on optimization methods to address in a unified manner moment-inequality problems in probability theory?


Read the paper on SIAM here.

Metrics for Signal Temporal Logic Formulae

Work by Curtis Madsen, Prashant Vaidyanathan, Sadra Sadraddini, Cristian-Ioan Vasile, Nicholas A. DeLateur, Ron Weiss, Douglas Densmore, and Calin Belta, IEEE CDC 2018

Discussion by Sleiman Safaoui, February 24, 2020

Keywords: Signal Temporal Logic, Metric Spaces

Summary

The authors discuss how STL formulae can admit a metric space under mild assumptions. They present two metrics: the Pompeiu-Hausdorff (PH) and Symmetric Difference (SD) metrics. The PH distance measures how much the language of one formula must be enlarged to include the other. The SD distance measures the mismatch between the languages of the two formulae. The PH distance can be formulated as a Mixed-Integer Linear Program (MILP), which can be solved fairly quickly (although complexity grows exponentially with the number of predicates and the formula horizon). The SD distance is computed using a recursive algorithm based on the area of satisfaction; its complexity depends on the complexity of the norm operation.

Read the paper on arXiv here.

A Survey of Distributed Optimization

Work by Tao Yang, et al., Annual Reviews in Control 2020

Discussion by Yi Guo, February 18, 2020

Keywords: Control review, distributed optimization, algorithm design

Summary

In distributed optimization of multi-agent systems, agents cooperate to minimize a global function which is a sum of local objective functions. Motivated by applications including power systems, sensor networks, smart buildings, and smart manufacturing, various distributed optimization algorithms have been developed. In these algorithms, each agent performs local computation based on its own information and information received from its neighboring agents through the underlying communication network, so that the optimization problem can be solved in a distributed manner.
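A minimal sketch of that pattern, under assumed ring topology, quadratic local costs, and a constant step size (this is plain decentralized gradient descent, not any specific algorithm from the survey):

```python
# Decentralized gradient descent on a ring of agents, each holding a local
# cost f_i(x) = (x - a_i)^2; the network minimizes sum_i f_i, whose
# minimizer is the mean of the a_i.
def dgd(a, steps=2000, alpha=0.01):
    n = len(a)
    x = list(a)  # each agent starts from its own local data
    for _ in range(steps):
        # mix with ring neighbors using doubly stochastic weights [1/4, 1/2, 1/4]
        mixed = [(x[(i - 1) % n] + 2.0 * x[i] + x[(i + 1) % n]) / 4.0
                 for i in range(n)]
        # then take a gradient step using only locally available information
        x = [mixed[i] - alpha * 2.0 * (x[i] - a[i]) for i in range(n)]
    return x

# With a constant step size, every agent ends within O(alpha) of the
# global minimizer mean(a) = 3.0, despite never seeing the other a_i.
estimates = dgd([1.0, 2.0, 3.0, 6.0])
```

Each agent touches only its own a_i and its two neighbors' iterates, which is exactly the local-computation, neighbor-communication structure described above.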

This survey paper aims to offer a detailed overview of existing distributed optimization algorithms and their applications in power systems. More specifically, the authors first review discrete-time and continuous-time distributed optimization algorithms for undirected graphs. The authors then discuss how to extend these algorithms in various directions to handle more realistic scenarios. Finally, the authors focus on the application of distributed optimization in the optimal coordination of distributed energy resources.

Read the paper on Elsevier here.

Shrinking Horizon Model Predictive Control With Signal Temporal Logic Constraints Under Stochastic Disturbances

Work by Samira S. Farahani, Rupak Majumdar, Vinayak S. Prabhu, and Sadegh Soudjani, IEEE Transactions on Automatic Control August 2019

Discussion by Sleiman Safaoui, February 13, 2020

Keywords: Signal temporal logic, model predictive control, stochastic disturbances

Summary

The authors discuss a shrinking horizon model predictive control (SH-MPC) problem to generate control inputs for a discrete-time linear system under additive stochastic disturbances (either Gaussian or with bounded support). The system specifications are given as a signal temporal logic (STL) formula and encoded as a chance constraint in the SH-MPC problem, which is solved for minimum input effort and maximum robustness.

The authors approximate the system robustness in the objective function using the min-max and max-min canonical forms of min-max-plus-scaling (MMPS) functions. They under-approximate the chance constraint by showing that any chance constraint on a formula can be transformed into chance constraints on atomic propositions, which are in turn transformed into linear constraints.

Read the paper on arXiv here or on IEEE Xplore here.

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Lojasiewicz Condition

Work by Hamed Karimi, Julie Nutini, and Mark Schmidt, ECML PKDD 2016

Discussion by Ben Gravell, February 11, 2020

Keywords: Nonconvex, optimization, Polyak-Lojasiewicz inequality, gradient domination, convergence, global

Summary

The authors re-explore the Polyak-Lojasiewicz inequality first analyzed by Polyak in 1963 by deriving convergence results under various descent methods for various modern machine learning tasks and establishing equivalences and sufficiency-necessity relationships with several other nonconvex function classes. A highlight of the paper is a beautifully simple proof of linear convergence using gradient descent on PL functions, which we walk through in our discussion.
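That proof is short enough to sketch here. For an L-smooth function f satisfying the PL inequality, gradient descent with step size 1/L contracts the suboptimality at a linear rate:

```latex
% L-smoothness with x_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k) gives sufficient decrease:
f(x_{k+1}) \le f(x_k) - \tfrac{1}{2L}\lVert \nabla f(x_k) \rVert^2
% The PL inequality \tfrac{1}{2}\lVert \nabla f(x) \rVert^2 \ge \mu\,(f(x) - f^*) then yields
f(x_{k+1}) - f^* \le \Bigl(1 - \tfrac{\mu}{L}\Bigr)\bigl(f(x_k) - f^*\bigr)
```

so the suboptimality gap shrinks geometrically with rate (1 − μ/L), with no convexity assumption anywhere.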


Read the paper on arXiv here.

Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem

Work by Hesameddin Mohammadi, Armin Zare, Mahdi Soltanolkotabi, and Mihailo R. Jovanovic, IEEE TAC 2020 (under review) / CDC 2019

Discussion by Ben Gravell, February 6, 2020

Keywords: Data-driven control, gradient descent, gradient-flow dynamics, linear quadratic regulator, model-free control, nonconvex optimization, Polyak-Lojasiewicz inequality, random search method, reinforcement learning, sample complexity

Summary

This work extends the results of Fazel et al. on convergence of policy gradient methods for discrete-time systems to the case of continuous-time linear dynamics, while also significantly improving the bounds on the number of cost function evaluations and the simulation time required. These improvements were made possible by novel proof techniques, which included 1) relating the gradient-flow dynamics associated with the nonconvex formulation to that of a convex reparameterization, and 2) relaxing strict bounds on the gradient estimation error to probabilistic guarantees of high correlation between the gradient and its estimate. This echoes the notion that indeed "policy gradient is nothing more than random search," albeit a random search with compelling convergence properties.

Dr. Zare recently joined UT Dallas as a faculty member and we look forward to working with him!

Read the paper on arXiv here.

Sparse LQR Synthesis via Information Regularization

Work by Jeb Stefan and Takashi Tanaka, CDC 2019

Discussion by Ben Gravell, January 31, 2020

Keywords: Linear quadratic regulator, information theory, regularization, matrix inequality, iterative

Summary

Researchers from UT Austin formulate a problem of jointly optimizing the quadratic cost of a linear system and an information-theoretic communication cost that accommodates limited channel capacity. They demonstrate empirically that this optimization can be solved with an iterative semidefinite program (SDP), and that the communication cost acts as a regularizer on the control gains, in some cases promoting sparsity. This is similar to our own work on learning sparse control; we would love to see how data-driven approaches could augment information-theoretic notions in the case of unknown dynamics.

Paper link is forthcoming once the CDC proceedings have been published. Author website is here.

MakeSense: Automated Sensor Design for Proprioceptive Soft Robots

Work by Javier Tapia, Espen Knoop, Mojmir Mutny, Miguel A. Otaduy, and Moritz Bacher, 2020

Discussion by Ben Gravell, January 28, 2020

Keywords: Convex, optimization, feasible, design

Summary

Researchers at Disney have created a method for optimized sensor selection for soft robots. A large number of presumptive fabricable sensors are virtually generated and then culled down to a small set that gives good pose estimation in simulation. They then verify the method experimentally by fabricating physical soft robots with highly elastic strain sensors embedded in a flexible polymer. Although our own research is geared toward theoretical analysis of sensor and actuator selection for well-defined networked linear systems, we found this study a fascinating tangent.

Watch the video and read the paper here.

Automatic Repair of Convex Optimization Problems

Work by Shane Barratt, Guillermo Angeris, Stephen Boyd, 2020

Discussion by Ben Gravell, January 27, 2020

Keywords: Convex, optimization, feasible, design

Summary

This work looks at the meta-optimization setting where the original convex problem is infeasible, unbounded, or pathological (bad) and the problem is changed by a minimal amount to become feasible, bounded, and nonpathological (good). The perturbation of the problem parameters is itself minimized by the meta-optimization. The authors give examples from control and economics, showing that their method can be used as a design tool, e.g. slightly changing the mass properties of an aerospace vehicle so that a constrained trajectory planning problem becomes feasible.

Read the paper on arXiv here.

Guaranteed Margins for LQG Regulators

Work by John C. Doyle, IEEE TAC 1978

Discussion by Ben Gravell, January 24, 2020

Keywords: Robust control, optimal control, robustness margins, linear quadratic Gaussian, linear quadratic regulator, Kalman filter

Summary

In this classic paper with the famously short three-word abstract "There are none.", it is shown by counterexample that linear quadratic Gaussian (LQG) regulators possess no guaranteed gain or phase stability margins. This stands in stark contrast to linear quadratic regulators (LQR), which were shown a year prior by Safonov and Athans to possess impressive guaranteed margins of 6 dB in gain and 60 degrees in phase.
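The LQR margins follow from Kalman's return difference inequality, which for a single-input loop transfer function L(jω) = K(jωI − A)⁻¹B with scalar control weight reads:

```latex
\lvert 1 + L(j\omega) \rvert \ge 1 \quad \text{for all } \omega
```

Geometrically, the Nyquist plot of L never enters the open unit disk centered at −1, which immediately yields a gain margin of [1/2, ∞) (tolerating 6 dB of gain reduction and arbitrary gain increase) and a ±60 degree phase margin; Doyle's counterexample shows that no analogous inequality survives once a Kalman filter closes the loop.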

Read the paper here or from IEEE Xplore.

See Stephen Tu's nice walkthrough of this paper.

Control Performance Guarantees via Concentration Bounds

Tutorial by Ben Gravell, December 2019

Summary

In this short tutorial we walk through some recent applications of concentration bounds from the field of high-dimensional statistics to the field of adaptive optimal control.

Control performance guarantees via concentration bounds.pdf

Adaptive Kalman Filter for Detectable Linear Time-Invariant Systems

Work by Rahul Moghe, Renato Zanetti, & Maruthi R. Akella, AIAA JGCD 2019

Discussion by Ben Gravell and Venkatraman Renganathan, November 15, 2019

Keywords: Adaptive, Kalman filter, linear system, learning, optimal, estimation

Summary

Unlike the classic Kalman filter, which requires knowledge of the measurement and process noise covariance matrices, the adaptive Kalman filter proposed by the authors does not. It is shown that under mild assumptions, the online data-driven estimates of the noise covariances asymptotically converge to the true values, rendering the proposed adaptive Kalman filter asymptotically optimal.
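For contrast, here is a minimal scalar Kalman filter step in which the noise variances Q and R are assumed known; this is a sketch of the classic filter, not the authors' algorithm, which would additionally maintain online estimates of Q and R:

```python
# One predict-update cycle of a scalar Kalman filter with known noise
# variances Q (process) and R (measurement); all parameters illustrative.
def kf_step(x, P, z, A=1.0, H=1.0, Q=0.01, R=0.25):
    # predict: propagate the state estimate and its variance through the model
    x_pred = A * x
    P_pred = A * P * A + Q
    # update: blend prediction and measurement z via the Kalman gain
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

The gain K weighs the innovation z − H·x_pred by how much the filter trusts the model versus the measurement; misspecified Q and R skew this weighting, which is precisely what the adaptive filter corrects for.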

Read the paper on AIAA here.

Play with our demo code on Binder, view the Jupyter Notebook, or clone the GitHub repo.

Presentation + Code

We discuss and walk through the paper in a presentation and Python code.

Adaptive_Kalman_Filter_Presentation.pdf