Talks

Dr. Alan Fern - Don't Get Fooled by Explanations

Abstract:

In recent years, many approaches have been developed for producing different types of explanations for AI and machine learning systems. In most cases, however, the explanations are not attached to a sound semantics and leave much open to the interpretation of the explanation consumer. As a result, such explanations can be highly misleading and counterproductive. In this talk, we will give an overview of some of our recent efforts to develop explanation approaches for reinforcement learning and image classification that are attached to different notions of soundness. We will end by discussing some of the challenges in furthering the development of sound explanations.

Bio:

Alan Fern is a Professor of Computer Science at Oregon State University. His general research area is artificial intelligence with an emphasis on machine learning, automated planning, and the intersection of those areas. He is particularly interested in developing model-based planning systems that can learn from experience and humans, as well as explain their decisions to humans. He is an associate editor of Machine Learning and the Journal of Artificial Intelligence Research and is regularly an area chair for ICML, NeurIPS, and AAAI.

Dr. Ufuk Topcu - Efficient Data Processing and Trustworthy Decision Making through Structured Task Representation

Abstract:

The recent breakthroughs in the design of efficient data processing techniques indicate the possibility of integrating potentially large-scale and heterogeneous data into decision making. Nevertheless, such efficiency is frequently put at odds with the interpretability and the trustworthiness of the decision making process. Therefore, it is challenging to design autonomous agents that can efficiently process data and still provide performance guarantees. In this talk, I focus on simultaneous perception and planning for autonomous agents operating with uncertain dynamics and in partially known environments. In this setting, I argue that a pivotal factor in overcoming this challenge is the use of structured task representations. In particular, I present our recent results on utilizing temporal logic task specifications as a bridge between perception and planning. We exploit the rich structure of temporal logic specifications to develop task-oriented active perception strategies. Furthermore, by taking the dynamics uncertainties and the evolving perception uncertainties into account, we establish high-probability performance guarantees that hold at runtime.

Joint work with Mahsa Ghasemi
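
To give a flavor of such specifications, here is a purely illustrative linear temporal logic (LTL) formula of the kind a surveillance-style task might use; it is an assumed example, not one from the talk:

    \varphi \;=\; \square\,\lnot\mathit{unsafe} \;\wedge\; \square\lozenge\,\mathit{survey} \;\wedge\; \lozenge\,\mathit{base}

Read: always avoid unsafe states, visit the survey region infinitely often, and eventually return to base. It is this kind of structure, decomposing the task into recurring and one-shot obligations, that a planner can exploit to decide what the perception system must resolve, and when.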

Bio:

Ufuk Topcu joined the Department of Aerospace Engineering at the University of Texas at Austin as an assistant professor in Fall 2015. He received his Ph.D. degree from the University of California at Berkeley in 2008. He held research positions at the University of Pennsylvania and California Institute of Technology. His research focuses on the theoretical, algorithmic, and computational aspects of design and verification of autonomous systems through novel connections between formal methods, learning theory, and controls.

Dr. Yogesh Girdhar - Enabling Vision Guided Interactive Exploration in Bandwidth Limited Environments

Abstract:

Robotic image data gathering tasks often take place in environments that have not been previously explored, and hence are best done with humans in the loop. This, however, is challenging when there are strong communication bottlenecks, which is often the case due to the surrounding medium (underwater environments), distance (planetary missions), or limited availability of human attention. In this talk, I will present ongoing work at WARPLab that combines unsupervised scene representation learning, active reward learning, and informative path planning approaches to enable interactive vision-guided exploration in bandwidth-constrained environments.
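
To make the bandwidth constraint concrete, here is a minimal sketch of one generic strategy a robot could use to decide which images to transmit: greedy farthest-point selection over image feature vectors, so the downlinked set is maximally diverse. The feature representation, scoring rule, and budget are illustrative assumptions, not the WARPLab approach.

    import numpy as np

    def select_for_transmission(features, budget):
        """Greedily pick the images whose features are least redundant
        with what has already been chosen, until the budget is spent.

        features: (n, d) array, one feature row per candidate image.
        budget: number of images the link can afford to send.
        """
        # Seed with the image farthest from the average scene appearance.
        chosen = [int(np.argmax(np.linalg.norm(features - features.mean(0), axis=1)))]
        while len(chosen) < min(budget, len(features)):
            # Distance from each candidate to its nearest already-chosen image.
            dists = np.min(
                np.linalg.norm(features[:, None, :] - features[chosen][None, :, :], axis=2),
                axis=1,
            )
            chosen.append(int(np.argmax(dists)))  # most novel candidate next
        return chosen

    # Example: 100 images with 8-dimensional descriptors, room to send 5.
    rng = np.random.default_rng(0)
    print(select_for_transmission(rng.normal(size=(100, 8)), budget=5))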

Bio:

Yogesh Girdhar is a computer scientist, the PI of WARPLab (http://warp.whoi.edu) at Woods Hole Oceanographic Institution (WHOI), and an Associate Scientist (without Tenure) in the Applied Ocean Physics & Engineering department. He received his BS and MS from Rensselaer Polytechnic Institute in Troy, NY, and his Ph.D. from McGill University in Montreal, Canada. During his Ph.D., Girdhar developed an interest in ocean exploration using autonomous underwater vehicles, which motivated him to come to WHOI, initially as a postdoc and later as a scientist, to start WARPLab. Girdhar’s research has since focused on developing smarter autonomous exploration robots that can accelerate the scientific discovery process in extreme and challenging environments, such as the deep sea. Notable recognition of his work includes the Best Paper Award in Service Robotics at ICRA 2020, a Best Paper Award finalist selection at IROS 2018, and an honorable mention for the 2014 CIPPRS Doctoral Dissertation Award.

Dr. Kiri L. Wagstaff - Explainable Autonomous Data Collection and Prioritization by Planetary Rovers and Orbiters

Abstract:

Robotic spacecraft function as remote explorers of new environments. Often they must operate so far from the Earth that direct human control is not possible. Autonomy allows spacecraft to navigate, explore, and collect data independently, given information about mission goals and operational constraints. The Mars Science Laboratory rover has employed autonomous data collection since 2016 using the AEGIS system to decide which rocks to target with the ChemCam spectrometer. We are developing an extension that can rank targets by their novelty as well. We have also developed onboard science analysis methods for the upcoming Europa Clipper mission which will operate even further from the Earth. In all three cases, the system must be able to answer "why did it do that?" It is therefore important that the downlinked products are accompanied by sufficient traceability information so that a visualization of the autonomous decision process can be reconstructed for human operators. These explanations increase human understanding of and trust in the autonomous data collection system.
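
As a hedged illustration of novelty ranking with a traceable justification (the data layout and scoring rule are generic assumptions, not the AEGIS implementation), one could rank candidate targets by their distance to the nearest previously observed target and report that neighbor as part of the explanation:

    import numpy as np

    def rank_by_novelty(candidates, prior_observations):
        """Rank candidate targets by distance to the nearest prior observation,
        keeping the nearest neighbor as a traceable justification.

        candidates: (n, d) feature vectors for new candidate targets.
        prior_observations: (m, d) feature vectors already collected.
        Returns (candidate_index, novelty_score, nearest_prior_index) tuples,
        most novel first.
        """
        # Pairwise distances between every candidate and every prior observation.
        dists = np.linalg.norm(
            candidates[:, None, :] - prior_observations[None, :, :], axis=2
        )
        nearest = dists.argmin(axis=1)           # most similar prior target
        scores = dists[np.arange(len(candidates)), nearest]
        order = np.argsort(-scores)              # most distant, i.e. most novel, first
        return [(int(i), float(scores[i]), int(nearest[i])) for i in order]

    rng = np.random.default_rng(1)
    ranking = rank_by_novelty(rng.normal(size=(6, 4)), rng.normal(size=(20, 4)))
    for idx, score, justification in ranking:
        print(f"target {idx}: novelty {score:.2f} (nearest prior: {justification})")

Keeping the nearest prior target alongside the score is one way downlinked products can carry the traceability information the abstract calls for: an operator can see not just that a target was ranked novel, but relative to what.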

Bio:

Dr. Kiri L. Wagstaff is a principal researcher in machine learning at NASA's Jet Propulsion Laboratory and an associate research professor at Oregon State University. Her research focuses on developing new machine learning and data analysis methods for use onboard spacecraft and in data archives for planetary science, astronomy, cosmology, and more. She holds a Ph.D. in Computer Science from Cornell University, followed by an M.S. in Geological Sciences from the University of Southern California and a Master of Library and Information Science (MLIS) from San Jose State University. She received a 2008 Lew Allen Award for Excellence in Research for work on the sensitivity of machine learning methods to high-radiation space environments and a 2012 NASA Exceptional Technology Achievement award for work on transient signal detection methods in radio astronomy data. She also served as a Tactical Uplink Lead (operational planning) for the Mars Opportunity rover. She is passionate about keeping machine learning relevant to real-world problems.

Dr. Bogdan Strimbu - Unmanned Forest Inventory

Abstract:

Accurate and precise inventory is the foundation of almost any management enterprise. Man-made environments have reduced complexity, in terms of variability and similarity, compared to natural environments, particularly forests. Therefore, inventorying built environments with remote sensing techniques is less challenging than describing natural conditions. Active forest management, such as thinning, requires repeated qualitative choices that lead to quantitative benefits, such as revenue. Repeating decisions based on experience alone is suboptimal, which reduces the desired outcome. Therefore, significant effort has gone into developing systems that diminish or even eliminate the need for subjective choices, namely the selection of which trees to harvest. To identify harvestable trees, several methods have been developed that use information acquired with sensors installed above or below the canopy. While above-canopy data are useful for some forest metrics, they provide a limited view of the forest. Therefore, current forest inventory work focuses on sensors moving below the canopy. Devices located under the tree crowns face two challenges in supplying the data needed for optimal harvesting decisions: navigation throughout the forest stand, and acquisition of information from which tree dimensions and species can be inferred. The most promising results have been obtained using lidar, but the expense of laser devices has prohibited its application beyond research. An avenue that is less expensive than lidar, but more challenging with respect to data acquisition and navigation, is based on red-green-blue images. If UAVs are used, then at least two cameras, if not a swarm, are required to map the forest stand (i.e., one camera installed on one UAV), as evidence suggests the data are unusable when only one camera is used. Structure from motion using two cameras, combined with a pre-existing georeferencing algorithm, proved to be an appropriate method of supplying the information needed for a precise and accurate forest inventory. However, the procedure is slow, which poses challenges in operationalizing it for real-world forest inventory. Therefore, current forest inventory research is focused on using stereo cameras in combination with SLAM algorithms.
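
For readers unfamiliar with the stereo processing the abstract alludes to, here is a minimal sketch of recovering per-pixel depth from a rectified stereo pair with OpenCV. The calibration values and file names are placeholders, and the block is generic textbook stereo matching, not the inventory system described above.

    import cv2
    import numpy as np

    # Placeholder calibration; a real rig supplies these from stereo calibration.
    FOCAL_PX = 1200.0   # focal length in pixels (assumed value)
    BASELINE_M = 0.30   # camera separation in meters (assumed value)

    # Placeholder file names for a rectified grayscale stereo pair.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching produces a fixed-point disparity map.
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,  # search range; must be divisible by 16
        blockSize=7,
    )
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # Depth from disparity: Z = f * B / d, valid only where disparity > 0.
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    print("median scene depth (m):", float(np.median(depth_m[valid])))

From such a depth map, stem positions and diameters can in principle be measured; the navigation side of the problem is what the SLAM algorithms mentioned above address.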

Dr. Joshua Peschel - Field Robotics in Cyber-Agricultural Systems

Abstract:

In this talk I will present a suite of new assistive technologies that leverage robotics and computer vision to enable sensing and sensemaking across different types of agricultural environments. Case studies will be presented on robot-assisted phenotyping of row crops, hydrologic data gathering, applications for aerial telemanipulation, and automated visual sensemaking of livestock. The material will illustrate how user-focused design of robotics and automated systems for new kinds of data collection can also enable better-informed decision-making and trust. This talk will be of interest to researchers and practitioners in fields that include the agricultural sciences, engineering, and computer science.

Bio:

Dr. Joshua Peschel is an Assistant Professor of Agricultural and Biosystems Engineering and Black & Veatch Faculty Fellow at Iowa State University; he also holds courtesy appointments in the departments of Electrical and Computer Engineering and Civil, Construction and Environmental Engineering. Dr. Peschel conducts research in the area of cyber-agricultural systems where he creates new technologies, data sets, and computational models for sensing and sensemaking. His research has been supported by the National Science Foundation, U.S. Departments of Agriculture, Defense and Energy, the Bill & Melinda Gates Foundation, and a number of commodity groups and private industry partners.


Dr. Eric W. Frew - Deploying a Trustworthy Aerial Robot for Studying Severe Local Storms

Abstract:

This talk will describe the design and deployment of trustworthy aerial robots for studying severe local storms. In particular, the Robust Autonomous Aerial Vehicle - Endurant and Nimble (RAAVEN) unmanned aircraft system was designed for studying tornado formation in supercell thunderstorms. The RAAVEN is the result of 10+ years of collaboration between meteorologists, aerospace engineers, and computer scientists. The concept of operations for RAAVEN relies on a semi-autonomous supervisory control system that lets the science team guide the behavior of the autonomous aircraft. Main components of the system design will be presented along with results from deployments in the U.S. Great Plains during the severe storm season in Spring 2019.

Bio:

Dr. Eric W. Frew is a professor in the Ann and H.J. Smead Aerospace Engineering Sciences Department and Director of the Autonomous Systems Interdisciplinary Research Theme (ASIRT) at the University of Colorado Boulder (CU). He received his B.S. in mechanical engineering from Cornell University in 1995 and his M.S. and Ph.D. in aeronautics and astronautics from Stanford University in 1996 and 2003, respectively. Dr. Frew has been designing and deploying unmanned aircraft systems for over twenty years. His research efforts focus on autonomous flight of heterogeneous unmanned aircraft systems; distributed information-gathering by mobile robots; miniature self-deploying systems; and guidance and control of unmanned aircraft in complex atmospheric phenomena. Dr. Frew was co-leader of the team that performed the first-ever sampling of a severe supercell thunderstorm by an unmanned aircraft. He is currently the CU Site Director for the National Science Foundation Industry/University Cooperative Research Center (IUCRC) for Unmanned Aircraft Systems. He received the NSF Faculty Early Career Development (CAREER) Award in 2009 and was selected for the 2010 DARPA Computer Science Study Group.

Dr. Julie A. Adams - Transparency: Why does it matter?

Abstract:

Transparency for robotics has been defined as a measure of how well an interface supports a human’s understanding of the robot’s intent, performance, future plans, and reasoning; however, transparency is truly a notion that impacts the overall robot system design, control capabilities, and intelligent algorithms. While this workshop focuses on Explainability and Trust, transparency is broader than those notions, and it is necessary when developing systems that will be adopted by novice and expert users alike. An overview of the factors that impact or are impacted by transparency will be presented, with specific emphasis placed on Explainability and Trust. Further, individual robot, multiple robot, and swarm robotic system considerations will be discussed with regard to how these fundamental system differences impact transparency.

Bio:

Dr. Julie A. Adams is a Professor and Associate Director of the Collaborative Robotics and Intelligent Systems Institute at Oregon State University. Dr. Adams founded the Human-Machine Teaming Laboratory at Vanderbilt University prior to moving the laboratory to Oregon State. Adams has worked in the area of human-machine teaming for thirty years. Throughout her career she has focused on human interaction with unmanned systems, but has also worked on manned civilian and military aircraft at Honeywell, Inc. and on commercial, consumer, and industrial systems at the Eastman Kodak Company. Her research, which is grounded in robotics applications for domains such as first response, archaeology, oceanography, the national airspace, and the U.S. military, focuses on distributed artificial intelligence, swarms, robotics, and human-machine teaming. Dr. Adams received her M.S. and Ph.D. degrees in Computer and Information Sciences from the University of Pennsylvania and her B.S. in Computer Science and B.B.E. in Accounting from Siena College.