The Confluence of Vision and Control

IEEE Conference on Control Technology and Applications (CCTA) Workshop

Online on Sunday, August 8, 2021

Workshop Scope

The use of visual sensors in feedback control has been an active topic of research for decades. As hardware costs fall and computational capabilities increase, vision-based control is reaching new levels of capability and application. Recent innovations in computer vision have influenced control applications in autonomous vehicles and robotics. At the same time, control-theoretic solutions, such as nonlinear and adaptive control, have been used to solve open problems in computer vision.

This workshop presents twelve discussions on recent works in vision-based control, the application of control in computer vision, and topics in which vision and control are uniquely intertwined. The workshop seeks to highlight recent developments and open problems that exist at the intersection of vision and control and spur further research and development in the community.

Several speakers have agreed to post their presentations below. If you are interested in the outcomes or would like more information, feel free to reach out to the speakers or the organizers listed below.

Registration

To attend the workshop, attendees must register by following the CCTA registration guidelines. Workshop fees are $40 for IEEE members, students, and retirees, and $60 for non-IEEE members. Registration link:


Schedule

*Times are US Pacific*

8:30 AM – 8:45 AM: Welcome & Introductions

8:45 AM – 9:15 AM: Warren Dixon
(University of Florida)

Description: Image feedback can be used to estimate Euclidean distances between features in an image and/or the relative motion between an image feature and the camera. Typical current methods assume that the image feature is continuously observed; yet, in most practical scenarios, the image feature can be occluded. Image occlusion/intermittency segregates the image dynamics into two subsystems: when the feature is visible, feedback is available and the estimator/observer is stabilizable; otherwise, it is unstable. This talk discusses the use of switched/hybrid systems methods, including the development of sufficient dwell-time conditions, to ensure the stability of such estimators/observers.
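
As a toy illustration of the dwell-time reasoning (not Prof. Dixon's estimator design), the Python sketch below propagates a scalar estimation-error envelope that contracts while a feature is visible and grows while it is occluded; the rates and the visibility schedule are made-up values.

```python
import math

# Hypothetical contraction/growth rates for the stable (feature visible)
# and unstable (feature occluded) subsystems.
lam_s, lam_u = 2.0, 0.5

# Made-up visibility schedule: (visible?, dwell time in seconds).
schedule = [(True, 1.0), (False, 0.8), (True, 1.2), (False, 0.5), (True, 1.0)]

e = 1.0  # initial error-norm envelope
for visible, dwell in schedule:
    rate = -lam_s if visible else lam_u
    e *= math.exp(rate * dwell)  # envelope over one dwell interval

t_on = sum(d for v, d in schedule if v)
t_off = sum(d for v, d in schedule if not v)
print(f"final error envelope: {e:.4f}")
# A simple sufficient condition for overall decay: stable dwell time
# outweighs unstable dwell time, i.e., lam_s * t_on > lam_u * t_off.
print("dwell-time condition satisfied:", lam_s * t_on > lam_u * t_off)
```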

Biosketch: Prof. Warren Dixon received his Ph.D. in 2000 from the Department of Electrical and Computer Engineering at Clemson University. He worked as a research staff member and Eugene P. Wigner Fellow at Oak Ridge National Laboratory (ORNL) until 2004, when he joined the Department of Mechanical and Aerospace Engineering at the University of Florida. His main research interest has been the development and application of Lyapunov-based control techniques for uncertain nonlinear systems. He is an ASME Fellow and IEEE Fellow, and formerly an IEEE Control Systems Society (CSS) Distinguished Lecturer.

9:15 AM – 9:45 AM: Ashwin Dani
(University of Connecticut)

Title: Shape Estimation of Deformable Objects

Description: This talk presents new developments in deformable object shape estimation. The shape estimation method for elongated deformable objects is based on a B-spline chained multiple random matrix model (RMM) representation, where the RMMs model the geometric characteristics of the object. The hyper-degrees-of-freedom structure of an elongated deformable object makes its shape estimation challenging. An expectation-maximization (EM) algorithm that estimates the model parameters using the B-spline chained multiple-RMM likelihood will be presented. The shape estimation method is useful for estimating the shapes of soft objects and for their control.
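
A highly simplified stand-in for the B-spline backbone of that representation (the EM-based RMM estimation itself is not reproduced here) is fitting a smoothing B-spline to noisy samples along an elongated object's centerline, e.g. with SciPy:

```python
import numpy as np
from scipy.interpolate import splprep, splev

np.random.seed(0)

# Synthetic noisy 2-D samples along an elongated object's centerline.
t = np.linspace(0, 2 * np.pi, 60)
x = t + 0.05 * np.random.randn(60)
y = np.sin(t) + 0.05 * np.random.randn(60)

# Fit a smoothing cubic B-spline; s trades data fidelity for smoothness.
tck, u = splprep([x, y], s=0.5)

# Evaluate the fitted spline densely to obtain a smooth shape estimate.
u_new = np.linspace(0, 1, 200)
x_s, y_s = splev(u_new, tck)
```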

Biosketch: Dr. Ashwin Dani received M.S. and Ph.D. degrees from the University of Florida (UF), Gainesville. He is currently an Associate Professor in the Department of Electrical and Computer Engineering at the University of Connecticut, Storrs. He was a post-doctoral research associate in the Aerospace Engineering Department at the University of Illinois at Urbana-Champaign. His research interests are in the areas of estimation, machine learning for control, human-robot collaboration, and vision-based control and autonomous navigation. He is a senior member of IEEE, and a member of the Conference Editorial Board of the IEEE Control Systems Society.

9:45 AM – 10:15 AM: Kaveh Fathian
(Massachusetts Institute of Technology)

Title: Graph-Theoretic Frameworks for Robust Data Association

Description: Data association is concerned with matching elements of two (or more) sets of data that are measured by sensors or known from prior knowledge. As such, it arises in a broad range of control and robotics applications, including estimating camera motion, loop closure in SLAM, and map merging based on feature correspondences in images. The main challenge in data association is the existence of wrong matches, which occur due to noise, outliers, or similar-looking features and, if not corrected, can drastically affect the results of these applications. In this talk, we review classical and recent techniques for robust data association, ranging from model-based methods between two sets (e.g., RANSAC) to model-free techniques based on the notion of cycle consistency across multiple sets of data. We show that these techniques can considerably improve the accuracy of existing pipelines such as SLAM.
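
To make the cycle-consistency idea concrete, the toy sketch below (illustrative only, not the algorithms from the talk) encodes matches between three sets as permutation matrices; composing them around the cycle A→B→C→A should give the identity, and any zero on the diagonal flags an inconsistent match.

```python
import numpy as np

# Matches A->B, B->C, C->A encoded as permutation matrices.
P_AB = np.eye(4)[[1, 0, 2, 3]]   # A->B, correct
P_BC = np.eye(4)[[0, 2, 1, 3]]   # B->C, correct
P_CA = np.eye(4)[[2, 0, 3, 1]]   # C->A, deliberately corrupted

# Composing around the cycle A->B->C->A should give the identity.
cycle = P_CA @ P_BC @ P_AB
bad = np.where(np.diag(cycle) == 0)[0]
print("cycle-inconsistent elements of A:", bad)  # -> [2 3]
```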

Biosketch: Kaveh Fathian received the M.S. degree and the Ph.D. degree from the University of Texas at Dallas, TX, USA, in 2013 and 2018, respectively, both in electrical engineering. He also received the M.S. degree in mathematics from the University of Texas at Dallas, in 2018. He is a research scientist at the Massachusetts Institute of Technology’s Aerospace Controls Laboratory, Cambridge, MA, USA. His current research interests include topics in control theory, distributed and multi-robot systems, vision-based estimation, and spectral graph theory. Dr. Fathian is a Member of the IEEE Control Systems Society and the Robotics and Automation Society.

10:15 AM – 10:45 AM: Coffee Break

10:45 AM – 11:15 AM: Nicholas Gans
(University of Texas at Arlington)

Title: Estimating Pose and Velocity from Five Visual Feature Points

Description: We present our latest results in vision-based motion estimation: recovering the rotation and translation, as well as the linear and angular velocity, of a moving camera or moving object from five or more feature points tracked through a sequence of images. Translation and rotation are commonly recovered using the epipolar constraint or the homography matrix, but these approaches can require a specific 3D structure of the tracked points or can fail for certain classes of motions. Velocity estimation is an often-overlooked problem, typically addressed by backwards differencing of pose estimates and/or filtering. We recently presented a new formulation for pose estimation based on the quaternion representation of rotation from five matched feature points. We then extended the methodology to estimate angular and linear velocity from the optical flow of five points. Our current work focuses on fusing these estimates using an extended Kalman filter and a partial Kalman filter. Experimental results for several ego-motion and target-tracking scenarios will be presented.
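
For reference, a classical epipolar baseline of the kind the talk improves upon can be written in a few lines with OpenCV's five-point solver; this is a sketch of the standard pipeline, not the quaternion-based formulation presented here (note that the translation is recovered only up to scale):

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Recover relative rotation R and translation direction t from
    matched pixel coordinates pts1, pts2 (Nx2 float arrays, N >= 5)
    given the camera intrinsic matrix K."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # t has unit norm: scale is unobservable from two views
```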

Biosketch: Nicholas Gans earned his Ph.D. in Systems and Entrepreneurial Engineering from the University of Illinois Urbana-Champaign in 2005. He is currently Division Head of Automation and Intelligent Systems at the University of Texas at Arlington Research Institute. Prior to this position, he was a professor in the Department of Electrical and Computer Engineering at The University of Texas at Dallas. His research interests are in the fields of robotics, nonlinear and adaptive control, machine vision, and autonomous vehicles. He is a senior member of IEEE, and an Associate Editor for the IEEE Transactions on Robotics.

11:15 AM – 11:45 AM: Guoqiang Hu
(Nanyang Technological University)

Title: Robust Coordination of Networked Multi-Robot Systems

Description: Man-made multi-robot systems have been advancing apace with the help of high-performance hardware and computational technologies. Despite the high-performance computing, communication, sensing, and power devices used in these systems, their effectiveness in uncertain environments still appears to fall behind natural systems such as a swarm of ants, a flock of birds, or a pack of wolves. One of the challenges in multi-robot coordination is the lack of effective distributed algorithms and designs that enable the robots to work cooperatively and safely in uncertain environments. This talk will present recent research results on distributed algorithms and robust control methods for multi-robot coordination.
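
As a minimal example of the distributed flavor of such algorithms (purely illustrative; the talk's methods are far more sophisticated), the sketch below runs a standard consensus iteration in which each robot updates its state using only its neighbors' states:

```python
import numpy as np

positions = np.array([0.0, 2.0, 5.0, 9.0])          # 1-D robot states
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path-graph topology
eps = 0.2                                           # consensus step size

for _ in range(200):
    updated = positions.copy()
    for i, nbrs in neighbors.items():
        # Each robot moves toward the average of its neighbors.
        updated[i] += eps * sum(positions[j] - positions[i] for j in nbrs)
    positions = updated

print(positions)  # all states converge to the initial average, 4.0
```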

Biosketch: Guoqiang Hu received his Ph.D. degree in Mechanical Engineering from the University of Florida in 2007. He is currently with the School of Electrical and Electronic Engineering at Nanyang Technological University, Singapore. His research focuses on analysis, control, design and optimization of distributed intelligent systems. More specifically, he works on distributed control, distributed optimization and games, with applications to multi-robot systems and smart city systems. He serves as Associate Editor for IEEE Transactions on Automatic Control and IEEE Transactions on Control Systems Technology.

11:45 AM – 12:15 PM: Roberto Tron
(Boston University)

Title: Designing Feedback-Based Navigation

Description: We propose a novel approach for navigating environments by synthesizing linear feedback controllers that take as input relative displacements with respect to a set of visual landmarks. We first provide a basic formulation based on a polygonal cell decomposition of the environment and the solution of a sequence of robust min-max Linear Programming problems on its cells. The optimization problems are formulated using linear Control Lyapunov Function (CLF) constraints for stability and Control Barrier Function (CBF) constraints for safety. We then discuss how to handle realistic cases involving high-order linear agent dynamics, inaccurate map representations, transitory occlusions of landmarks, noise in the measurements, and non-polygonal environments. Throughout the presentation, we show via simulations and experiments that the resulting controllers are very robust with respect to the aforementioned challenges.
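
The sketch below shows the general shape of such a synthesis problem on a single cell, under heavy simplification (single-integrator dynamics, one hand-picked CLF and CBF, made-up numbers): a linear program selects the smallest control that decreases the CLF while respecting the CBF half-plane constraint.

```python
import numpy as np
from scipy.optimize import linprog

# State, goal, and hand-picked certificates (illustrative values).
x, goal = np.array([2.0, 1.0]), np.zeros(2)
V, gradV = np.sum((x - goal) ** 2), 2 * (x - goal)  # CLF and gradient
h, gradh = x[1] + 0.5, np.array([0.0, 1.0])         # CBF: keep x2 >= -0.5
gamma, alpha = 1.0, 1.0

# Decision variables z = [u1, u2, s], with s bounding the inf-norm of u.
A_ub = [list(gradV) + [0],        # CLF decrease: gradV . u <= -gamma * V
        list(-gradh) + [0],       # CBF safety:   gradh . u >= -alpha * h
        [1, 0, -1], [-1, 0, -1],  # |u1| <= s
        [0, 1, -1], [0, -1, -1]]  # |u2| <= s
b_ub = [-gamma * V, alpha * h, 0, 0, 0, 0]
res = linprog(c=[0, 0, 1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
print("u =", res.x[:2])  # smallest control satisfying both certificates
```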

Biosketch: Roberto Tron is an Assistant Professor in the Mechanical Engineering department at Boston University. He received his B.Sc. (2004) and M.Sc. (2007) degrees (highest honors) from the Politecnico di Torino, Italy. He received a Diplôme d'Ingénieur from the Eurecom Institute and a DEA degree from the Université de Nice Sophia-Antipolis in 2006. He received his Ph.D. in Electrical and Computer Engineering from The Johns Hopkins University in 2012, and was a post-doctoral researcher with the GRASP Lab at the University of Pennsylvania until 2015.

His research spans automatic control, robotics, and computer vision, with particular interest in applications of Riemannian geometry and optimization, and in distributed perception, control, and planning for teams of multiple agents. He was recognized at the IEEE Conference on Decision and Control with the “General Chair’s Interactive Presentation Recognition Award” (2009), the “Best Student Paper Runner-up” (2011), and the “Best Student Paper Award” (2012).

12:15 PM – 1:30 PM: Lunch Break

1:30 PM – 2:00 PM: Patricio Vela
(Georgia Institute of Technology)

Title: Perception Space as a Critical Representation for Navigating Unknown Worlds

Description: The talk will cover two aspects of navigating unknown environments and the role of visual information in permitting safe and accurate trajectory generation for goal attainment. Navigating an environment under partial map information requires establishing a free-space map at the same time as safely maneuvering through the environment to arrive at the goal state. In this setting, many of the theoretical guarantees of existing path planning methods fail to apply. Furthermore, real-time constraints prevent the use of a single technique. Instead, hierarchical navigation strategies prevail, with a slow global approach for hypothesizing the best possible path given all information and a fast local approach for identifying the best local goal option aligned with the global information. Theoretical support for hierarchical navigation is less frequently studied. The first part of this talk will describe how Marr's visual hierarchy, together with Gibson's notion of affordances, provides a framework for deriving a hierarchical navigation scheme with provable properties. These properties rely on classical artificial potential field methods together with ideas from contemporary control theory to guarantee safe navigation of an idealized robot, plus a means to translate the solution to non-ideal robots. The second part of the talk will follow up on Marr's visual representation and explore how trajectory tracking in the absence of absolute position information (i.e., GPS-denied settings) can be done purely from image-based measurements. Such a capability is critical to realizing planned paths in the absence of externally derived position estimates. The technique, called trajectory servoing, short-circuits the traditional pose-based feedback derived from SLAM systems under these circumstances and replaces it with image-based feedback. With trajectory servoing, accurate tracking of planned paths is possible without external position measurements. Both of these aspects are essential to navigation through unknown, GPS-denied environments.
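
For readers unfamiliar with the potential-field building block mentioned above, here is a minimal classical sketch (attractive pull toward the goal, repulsive push within an obstacle's influence radius); the gains, geometry, and step size are arbitrary, and the talk's hierarchical scheme goes well beyond this:

```python
import numpy as np

goal, obstacle = np.array([5.0, 5.0]), np.array([2.5, 2.6])
k_att, k_rep, rho0 = 1.0, 0.5, 1.0  # gains and obstacle influence radius

def force(x):
    f = -k_att * (x - goal)                 # attractive term
    d = np.linalg.norm(x - obstacle)
    if d < rho0:                            # repulsive term near the obstacle
        f += k_rep * (1 / d - 1 / rho0) / d**2 * (x - obstacle) / d
    return f

x = np.zeros(2)
for _ in range(500):                        # simple gradient-descent steps
    x = x + 0.01 * force(x)
print("final position:", x)                 # ends near the goal
```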

Biosketch: Patricio A. Vela is an associate professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. Dr. Vela's research focuses on geometric perspectives on control theory and computer vision, particularly how concepts from control and dynamical systems theory can serve to improve computer vision algorithms used in the decision loop. These efforts are part of a broad program to understand research challenges associated with autonomous robotic operation in uncertain environments. Dr. Vela received a B.S. (1998) and a Ph.D. (2003) from the California Institute of Technology. He was a post-doctoral researcher at the Georgia Institute of Technology from 2003 to 2005.

2:00 PM – 2:30 PM: Randy Beard
(Brigham Young University)

Title: Visual Multiple Target Tracking Over Lie Groups

Description: When an unmanned air vehicle tracks a ground or air target using visual cues, it is usually tracking another vehicle whose motion is best modeled on the Lie groups SE(2) and SE(3). However, most target tracking algorithms are developed by assuming that the targets satisfy linear time-invariant models such as constant velocity, constant acceleration, or constant jerk. In this talk we will describe how visual multiple target tracking on matrix Lie groups like SE(2) and SE(3) can be formulated and solved efficiently.
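
As a small illustration of what a group-valued motion model looks like (a generic constant-twist propagation, not the talk's tracking algorithm), the sketch below advances a target pose on SE(2) with the exponential map:

```python
import numpy as np

def se2_exp(v, w, dt):
    """Exponential map of a constant body-frame twist (forward speed v,
    turn rate w) over time dt, returned as a 3x3 homogeneous transform."""
    th = w * dt
    if abs(th) < 1e-9:                      # straight-line limit
        px, py = v * dt, 0.0
    else:
        px = v / w * np.sin(th)
        py = v / w * (1 - np.cos(th))
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, px],
                     [s,  c, py],
                     [0,  0, 1.0]])

T = np.eye(3)                               # initial target pose
for _ in range(100):                        # 10 s of circular motion
    T = T @ se2_exp(v=1.0, w=0.2, dt=0.1)
print(T)                                    # propagated pose on SE(2)
```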

Biosketch: Prof. Randal W. Beard received his Ph.D. in 1995 from the Department of Electrical, Computer, and Systems Engineering at Rensselaer Polytechnic Institute. Since 1996, he has been with the Electrical and Computer Engineering Department at Brigham Young University, Provo, UT, where he is currently a professor. His primary research focus is autonomous control of small air vehicles and multivehicle coordination and control. He is a past associate editor for the IEEE Transactions on Automatic Control, IEEE Control Systems Magazine, and the Journal of Intelligent and Robotic Systems. He is a fellow of the IEEE, and an associate fellow of AIAA.

2:30 PM – 3:00 PM: Zachary Bell
(Air Force Research Laboratory)

Title: Simultaneous Estimation of Euclidean Distances to a Stationary Object’s Features and the Euclidean Trajectory of a Monocular Camera

Description: This talk will focus on the development of online, data-based, exponentially converging observers for a monocular camera that estimate the Euclidean distance (and hence accurately scaled coordinates) to features on a stationary object, as well as the Euclidean trajectory taken by the camera while tracking the object, without requiring the typical positive depth constraint. Lyapunov-based stability theory is used to show that the developed observers converge exponentially without requiring persistence of excitation, through the use of a data-based learning method. An experimental study comparing the developed Euclidean distance observer to previous observers demonstrates the effectiveness of this result.
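
A far simpler two-view relative of this problem (plain least-squares triangulation with known camera motion, not the talk's Lyapunov-based learning observer) conveys what recovering Euclidean distance from a monocular camera means:

```python
import numpy as np

def triangulate(b1, b2, R, p):
    """Given unit bearing vectors b1, b2 to the same stationary feature
    from two views, and the rotation R and translation p between views,
    solve d1*b1 - d2*(R @ b2) = p for the feature depths (d1, d2)."""
    A = np.column_stack((b1, -R @ b2))
    depths, *_ = np.linalg.lstsq(A, p, rcond=None)
    return depths

# Example: feature at (1, 0, 5); camera translates 1 m along x.
feat = np.array([1.0, 0.0, 5.0])
b1 = feat / np.linalg.norm(feat)
f2 = feat - np.array([1.0, 0.0, 0.0])      # feature in the second frame
b2 = f2 / np.linalg.norm(f2)
print(triangulate(b1, b2, np.eye(3), np.array([1.0, 0.0, 0.0])))
```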

Biosketch: Zachary I. Bell received his Ph.D. from the University of Florida in 2019 and is a researcher for the Air Force Research Lab. His research focuses on cooperative guidance and control, computer vision, adaptive control, and reinforcement learning.

3:00 PM – 3:30 PM: Coffee Break

Participants to Be Determined

3:30 PM – 4:00 PM: Riku Funada
(Tokyo Institute of Technology)

Title: Visual Environmental Monitoring for Teams of Quadcopters

Description: Visual sensor networks built from aerial robot teams are rapidly emerging in environmental monitoring applications, such as gathering information about human activities, terrain data, and natural phenomena. In these tasks, coordinated team behavior that efficiently monitors important regions without overlooking essential events is often desirable. In this talk, we present visual coverage control for quadcopters that prevents unmonitored areas from appearing between the team's fields of view while maximizing coverage quality as much as possible. First, we apply coverage control to a team of quadcopters aiming to monitor important areas with high sensing quality. Then, we present a control method, based on a novel control barrier function approach, that prevents coverage holes from appearing among trios of agents. We show that the proposed algorithm does not overly restrict the quadcopters' movements in maximizing coverage performance. Finally, the performance of the proposed method is demonstrated through simulations and experiments.
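
The control barrier function machinery can be previewed with a one-constraint safety filter (a generic closed-form CBF projection with invented numbers, not the talk's multi-agent barrier construction): a nominal coverage-maximizing command is minimally corrected whenever it would let the barrier h, e.g. an overlap margin between neighboring fields of view, decay too fast.

```python
import numpy as np

def cbf_filter(u_nom, grad_h, h, alpha=1.0):
    """Minimum-norm correction of u_nom enforcing grad_h . u >= -alpha*h."""
    margin = grad_h @ u_nom + alpha * h
    if margin >= 0:
        return u_nom                         # nominal command already safe
    # Project onto the boundary of the safe half-space of controls.
    return u_nom - margin * grad_h / (grad_h @ grad_h)

u = cbf_filter(u_nom=np.array([1.0, -2.0]),
               grad_h=np.array([0.0, 1.0]), h=0.5)
print(u)  # -> [1. -0.5]: the unsafe component is clipped
```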

Biosketch: Riku Funada is an assistant professor in the Department of Systems and Control Engineering at the Tokyo Institute of Technology. He received the B.Eng., M.Eng., and Ph.D. degrees from the Tokyo Institute of Technology in 2014, 2016, and 2019, respectively. He was a postdoctoral researcher at Waseda University in 2019 and the University of Texas at Austin in 2020. His research interests include cooperative control and strategic sensing for networked robotics.

4:00 PM – 5:00 PM: Panel Discussion and Concluding Remarks

Prerequisite skills for participants:

A basic understanding of vision-based control/estimation and nonlinear and adaptive control is beneficial. For registrants who do not have sufficient background in these topics, basic tutorial material will be provided prior to the workshop.

You can request access to the prerequisite material using the following link: https://uta.box.com/s/iwqm2bns5d1sxezheukp185g7scejbpa

Workshop Organizers

Nicholas Gans

Division Head of Automation and Intelligent Systems

University of Texas at Arlington Research Institute

E-mail: nick.gans@uta.edu

Kaveh Fathian

Research Scientist, AeroAstro

Massachusetts Institute of Technology

E-mail: kavehf@mit.edu