Projects

Current Projects

AIFARMS


Agriculture is currently facing a labor crisis, and automating large equipment only partially addresses the problem. We focus on deploying small, low-cost robots that operate beneath the crop canopy and coordinate to create more sustainable agroecosystems. Before agbots can be used ubiquitously at scale, they need to reach high levels of autonomy and be easy to use for growers who manage large acreage with little time to spare. We are developing tools for resilient autonomy and methods that facilitate interaction with the agbots. Through intelligent interaction, we aim to enable natural collaboration with the robots in the field and effective remote supervision.

[Paper][Video][Gallery]

Past Projects

A Data-Efficient Visual-Audio Representation with Intuitive Fine-tuning for Voice-Controlled Robots


Peixin Chang, Shuijing Liu, Tianchen Ji, Neeloy Chakraborty, Kaiwen Hong, Katherine Driggs-Campbell

Conference on Robot Learning (CoRL), 2023.

A command-following robot that serves people in everyday life must continually improve itself in its deployment domains with minimal help from its end users rather than from engineers. Previous methods are either difficult to improve continually after deployment or require a large number of new labels during fine-tuning. We propose a novel representation that generates an intrinsic reward function for command-following robot tasks by associating images with sound commands.
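
As a minimal sketch of the intrinsic-reward idea, assuming the representation embeds images and sound commands into a shared space (the function name and the cosine-similarity form below are illustrative assumptions, not the paper's implementation):

import numpy as np

def intrinsic_reward(image_embedding, sound_embedding):
    # Hypothetical reward: cosine similarity between the embedding of the
    # current camera image and the embedding of the sound command.
    image_embedding = image_embedding / np.linalg.norm(image_embedding)
    sound_embedding = sound_embedding / np.linalg.norm(sound_embedding)
    return float(image_embedding @ sound_embedding)

# A high reward indicates that what the robot currently sees agrees with the
# commanded goal, so no hand-designed task reward or new labels are needed.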

[Paper][Video]

Learning Visual-Audio Representations for Voice-Controlled Robots


Peixin Chang, Shuijing Liu, D. Livingston McPherson, and Katherine Driggs-Campbell

IEEE International Conference on Robotics and Automation (ICRA), 2023.

Building on recent advances in representation learning, we propose a novel pipeline for task-oriented voice-controlled robots with raw sensor inputs. Our pipeline first learns a visual-audio representation (VAR) that associates images and sound commands. The robot then learns to fulfill the sound command via reinforcement learning, using the intrinsic reward generated by the VAR.
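
A rough sketch of the first stage under assumed details (small encoders and a generic contrastive objective; the paper's actual networks and training loss may differ): two encoders map images and sound commands into a shared embedding space, and matching pairs are pulled together.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VARSketch(nn.Module):
    # Hypothetical visual-audio representation: two small encoders map an
    # image and a sound feature vector into a shared embedding space.
    def __init__(self, sound_dim=128, embed_dim=64):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim))
        self.sound_encoder = nn.Linear(sound_dim, embed_dim)

    def forward(self, image, sound):
        img_z = F.normalize(self.image_encoder(image), dim=-1)
        snd_z = F.normalize(self.sound_encoder(sound), dim=-1)
        return img_z, snd_z

def alignment_loss(img_z, snd_z, temperature=0.1):
    # A generic contrastive objective that pulls matching image-sound pairs
    # together; the embeddings are later reused to score how well the
    # current observation matches the command during RL.
    logits = img_z @ snd_z.t() / temperature
    targets = torch.arange(img_z.size(0))
    return F.cross_entropy(logits, targets)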

[Paper][Code][Video]

 

Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph


Shuijing Liu, Peixin Chang, Zhe Huang, Neeloy Chakraborty, Kaiwen Hong, Weihang Liang, D. Livingston McPherson, Junyi Geng, and Katherine Driggs-Campbell

IEEE International Conference on Robotics and Automation (ICRA), 2023.


We study the problem of safe and intention-aware robot navigation in dense and interactive crowds. Most previous reinforcement learning (RL) based methods fail to consider different types of interactions among all agents or ignore the intentions of people, which results in performance degradation. In this paper, we propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents through space and time.
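
As a toy illustration of the attention component only (a single head, dot-product scores, and the feature shapes are assumptions for this sketch, not the paper's architecture), the robot's feature vector attends over per-human features to produce a weighted crowd summary:

import numpy as np

def crowd_attention(robot_feat, human_feats):
    # robot_feat:  (d,)   feature vector for the robot node
    # human_feats: (n, d) feature vectors for the n human nodes
    d = robot_feat.shape[-1]
    scores = human_feats @ robot_feat / np.sqrt(d)   # one score per human
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ human_feats                     # attention-weighted summary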


[Paper][Video][Code]

 

Learning to Navigate Intersections with Unsupervised Driver Trait Inference


Shuijing Liu, Peixin Chang, Haonan Chen, Neeloy Chakraborty, and Katherine Driggs-Campbell

IEEE International Conference on Robotics and Automation (ICRA), 2022.


Navigation through uncontrolled intersections is one of the key challenges for autonomous vehicles. Identifying the subtle differences in hidden traits of other drivers can bring significant benefits when navigating in such environments. We propose an unsupervised method for inferring driver traits, such as driving styles, from observed vehicle trajectories.
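
A minimal sketch of one way to frame this, with the GRU encoder and the dimensions as assumptions for illustration: observed trajectories are compressed into a low-dimensional latent "trait" vector that a downstream navigation policy could condition on.

import torch
import torch.nn as nn

class TrajTraitEncoder(nn.Module):
    # Hypothetical sketch: compress an observed vehicle trajectory into a
    # latent "trait" vector with a GRU; architecture details are assumptions.
    def __init__(self, obs_dim=4, hidden_dim=64, trait_dim=2):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, trait_dim)

    def forward(self, traj):              # traj: (batch, T, obs_dim)
        _, h = self.gru(traj)             # h: (1, batch, hidden_dim)
        return self.head(h.squeeze(0))    # latent trait: (batch, trait_dim)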


[Paper][Code][Video]

Decentralized Structural-RNN for Robot Crowd Navigation 


Shuijing Liu*, Peixin Chang*, Weihang Liang, Neeloy Chakraborty, and Katherine Driggs-Campbell 

IEEE International Conference on Robotics and Automation (ICRA), 2021. 

(* denotes equal contribution)

Previous works on robot crowd navigation assume that the dynamics of all agents are known and well defined. In addition, the performance of previous methods deteriorates in partially observable environments and in dense crowds. To tackle these problems, we propose a decentralized structural recurrent neural network (DS-RNN).
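
A heavily simplified sketch, not the DS-RNN architecture itself (the mean pooling, layer sizes, and single recurrent state are assumptions): per-step robot-human interaction features feed a recurrent robot state, from which a policy head predicts the next action.

import torch
import torch.nn as nn

class CrowdNavPolicySketch(nn.Module):
    # Hypothetical per-step policy: embed robot-human interaction features,
    # pool them, update a recurrent robot state, and output an action.
    def __init__(self, edge_dim=2, hidden_dim=64, action_dim=2):
        super().__init__()
        self.edge_embed = nn.Linear(edge_dim, hidden_dim)
        self.node_rnn = nn.GRUCell(hidden_dim, hidden_dim)
        self.policy = nn.Linear(hidden_dim, action_dim)

    def step(self, edge_feats, node_h):
        # edge_feats: (num_humans, edge_dim) relative human states
        # node_h:     (1, hidden_dim) the robot's recurrent state
        pooled = self.edge_embed(edge_feats).mean(dim=0, keepdim=True)
        node_h = self.node_rnn(pooled, node_h)
        return self.policy(node_h), node_h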


[Paper][Video][Code]

Robot Sound Interpretation: Combining Sight and Sound in Learning-Based Control 


Peixin Chang, Shuijing Liu, Haonan Chen, Katherine Driggs-Campbell

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.

We explore the interpretation of sound for robot decision-making, inspired by human speech comprehension. While previous methods use natural language processing to translate sound to text, we propose an end-to-end deep neural network that directly learns control policies from images and sound signals.
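
A hedged sketch of what such an end-to-end network could look like (the layer sizes, the spectrogram-style sound vector, and the module names are assumptions, not the paper's model):

import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    # Hypothetical end-to-end policy: a small CNN encodes the image, an MLP
    # encodes a sound feature vector, and a linear head maps the fused
    # features directly to an action.
    def __init__(self, sound_dim=128, action_dim=4):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.audio = nn.Sequential(nn.Linear(sound_dim, 64), nn.ReLU())
        self.policy = nn.Linear(32 + 64, action_dim)

    def forward(self, image, sound):       # image: (B,3,H,W), sound: (B,sound_dim)
        fused = torch.cat([self.vision(image), self.audio(sound)], dim=-1)
        return self.policy(fused)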

[Paper][Video][Gallery]

iCub Tries to Play an Electronic Keyboard


Peixin Chang, Stephen E. Levinson

Senior Thesis, May 2017. Recipient of the Highest Honor Award.

Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.

This senior thesis project aims to enable an iCub robot to play an electronic keyboard. The robot computes the 3D location of each keyboard key it can see, listens to a sequence of musical notes a human plays on the keyboard, and presses the same notes in the same order.
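
The note-listening step can be approximated with a simple FFT peak picker; the helper below is an illustrative assumption, not the thesis code:

import numpy as np

def dominant_midi_note(frame, sample_rate=16000):
    # Hypothetical helper: pick the strongest frequency in a short mono audio
    # frame and convert it to the nearest MIDI note number.
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    peak_hz = freqs[spectrum[1:].argmax() + 1]   # skip the DC bin
    return int(round(69 + 12 * np.log2(peak_hz / 440.0)))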

[Paper][Video][Gallery]