NPC25: Neural Perception and Control
Michael Furlong (National Research Council of Canada, University of Waterloo), michael.furlong@uwaterloo.ca
Renaldas Zioma (Independent Researcher), rziom@acm.org
Davide Scaramuzza (UZH)
Amanda Prorok (Cambridge)
Tobi Delbruck (UZH-ETH)
Matthias Kampa (ZHAW)
Giulia D’Angelo (CTU)
Jens E. Pedersen (KTH)
Rika Antonova (Cambridge)
Lisa Li (U Michigan)
Chris Eliasmith (U Waterloo)
The NPC25 topic area focuses on the perception, planning, and control loop, with projects on physical robots as well as simulated environments that let remote participants take part. We will prepare software environments so that participants are ready to go from day one. We have invited experts in control theory, planning, perception, and robotics to give lectures and prepare tutorials on adaptive, feedback, and feedforward control, with experiments that participants can replicate. All materials, including the preparatory material for the workshop, will be made available to remote participants.
How can state-of-the-art perception, planning, and control be improved through neuro-inspired approaches?
How can adaptive and scalable decision-making be achieved through hierarchical approaches?
The adoption of learning methods, especially reinforcement learning, has been a watershed moment for robotics. While control theory remains an important part of the field, modern conferences (ICRA, IROS, CoRL, RSS) are largely unintelligible without an understanding of machine learning. We aim to bridge theoretical models and neuro-inspired approaches with practical machine learning on autonomous, power-constrained robotic systems, integrating perception, learning, control, and planning together with Telluride participants interested in neural control and neuromorphic AI hardware.
This topic is a natural continuation of previous Telluride workshops: “CDS19: Controlling Dynamical Systems”, “L2RACE21: Learn to Race”, “LTC21: Learning to Control”, “L2S22: Lifelong Learning at Scale, from Theory to Robotic Applications”, and “NC23: Neural Learning for Control”, with potential to attract industry and academic interest.
Arm (Soft grasper) We provide a robotic arm with a soft grasping end effector for learning to pick and place objects using object motion segmentation from high-speed event camera data. This project area focuses on adaptive and reinforcement learning algorithms.
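As a toy illustration of event-based motion segmentation, the sketch below accumulates synthetic (x, y, t) events into an exponentially decayed time surface and thresholds it to mark recently active pixels. The event format, decay constant, and threshold are illustrative assumptions, not the pipeline that will run on the arm.

```python
import numpy as np

def motion_mask(events, shape, t_now, tau=0.05, thresh=0.5):
    """Accumulate (x, y, t) events into an exponentially decayed time
    surface and threshold it: pixels with recent events count as moving."""
    surface = np.zeros(shape)
    for x, y, t in events:
        # Each event deposits activity that decays with its age.
        surface[int(y), int(x)] = np.exp(-(t_now - t) / tau)
    return surface > thresh

# Synthetic events: a small object sweeping along row 20.
events = np.array([[10 + i, 20, 0.01 * i] for i in range(5)], dtype=float)
mask = motion_mask(events, shape=(64, 64), t_now=0.05)
```

Real event-camera pipelines additionally handle polarity, sensor noise, and clustering of the mask into objects; this only shows the time-surface idea.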
Vehicle (F1Tenth / Lego Mindstorms) We provide robot cars as testbeds for developing adaptive controllers.
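A minimal feedback-control sketch in the spirit of this testbed: a PI cruise controller on a first-order longitudinal vehicle model. The mass, drag, and gain values are illustrative assumptions, not parameters of any F1Tenth or Mindstorms platform.

```python
def simulate_speed_control(v_target=2.0, kp=1.5, ki=0.8, dt=0.02, steps=1000):
    """PI cruise control on a first-order longitudinal model:
    mass * dv/dt = u - drag * v."""
    mass, drag = 3.0, 0.5
    v, integral = 0.0, 0.0
    for _ in range(steps):
        error = v_target - v
        integral += error * dt               # integral action removes steady-state bias
        u = kp * error + ki * integral       # PI control effort
        v += dt * (u - drag * v) / mass      # Euler step of the vehicle dynamics
    return v

v_final = simulate_speed_control()           # converges to v_target
```

An adaptive variant would estimate `drag` and `mass` online and retune the gains; the fixed-gain loop above is the baseline such projects start from.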
Quadruped (Open source platform) We provide a quadruped robot that can be controlled using a variety of control schemes. Specifically, we focus on planning in the space of smooth and energy-efficient trajectories to extend the usable life of the robot.
Brio Labyrinth Running the labyrinth demands quick perception and nonlinear control, with planning under dynamic constraints. It is a good testbed for exploring State Space Models and predictive control algorithms.
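To illustrate the kind of predictive control the labyrinth invites, the sketch below runs a receding-horizon (MPC) loop on a double-integrator model of the ball along one axis, treating plate tilt as a commanded acceleration and solving each unconstrained horizon by regularized least squares. The model, horizon, and weights are illustrative assumptions; the real game adds walls, holes, and tilt limits.

```python
import numpy as np

def mpc_step(x, target, H=20, dt=0.05, rho=1e-3):
    """One receding-horizon step for a double integrator (position, velocity)
    with tilt as acceleration: minimize tracking error plus rho * ||u||^2."""
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    F = np.zeros((H, 2))                     # free response of positions
    G = np.zeros((H, H))                     # forced response of positions
    Ak = np.eye(2)
    for k in range(H):
        Ak = A @ Ak                          # Ak = A^(k+1)
        F[k] = Ak[0]                         # position row of A^(k+1)
        for j in range(k + 1):
            G[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0, 0]
    # min ||F x + G u - target||^2 + rho ||u||^2  (normal equations)
    u = np.linalg.solve(G.T @ G + rho * np.eye(H), G.T @ (target - F @ x))
    return u[0]                              # apply only the first control

# Closed loop: drive the ball from rest at p = 0 to p = 0.3.
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
x = np.array([0.0, 0.0])
for _ in range(100):
    u = mpc_step(x, target=0.3, dt=dt)
    x = A @ x + B.ravel() * u
```

Re-solving the horizon at every step is what makes the scheme robust to the disturbances a real tilting plate introduces.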
Simulated environments (Cartpole, Drone swarm / F1Tenth / Chemical plant control / Robotic arm) We provide simulated environments for the above platforms in pre-configured virtual machines. We will also include simulated models of small quadrotors for participants interested in exploring multi-timescale or multi-agent problems and swarm behavior, and a simulated chemical plant, a standard test case for Model Predictive Control.
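For the cartpole environment, a minimal self-contained sketch: Euler integration of the standard small-angle linearization (pole as a point mass on a massless rod) with PD feedback on the pole angle. The cart position is left uncontrolled to keep the example short, and all constants and gains are illustrative, not taken from the workshop's prepared environments.

```python
import numpy as np

def simulate_cartpole(k_theta=30.0, k_omega=3.0, steps=1000, dt=0.01):
    """Balance a linearized cartpole with PD feedback on the pole angle.
    State: [cart position, cart velocity, pole angle, pole angular rate]."""
    g, m_c, m_p, l = 9.8, 1.0, 0.1, 0.5
    x = np.array([0.0, 0.0, 0.1, 0.0])           # start with the pole tipped
    for _ in range(steps):
        pos, vel, th, om = x
        u = k_theta * th + k_omega * om          # push the cart under the falling pole
        a_cart = (u - m_p * g * th) / m_c                 # linearized cart accel
        a_pole = ((m_c + m_p) * g * th - u) / (l * m_c)   # linearized pole accel
        x = x + dt * np.array([vel, a_cart, om, a_pole])
    return x

x_final = simulate_cartpole()                    # pole angle decays to zero
```

This is the kind of baseline participants can replace with a learned or neuromorphic controller and compare against in the pre-configured virtual machines.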