Robotic manipulation sits at the heart of embodied intelligence, demanding adaptability, safety, and trustworthiness in unstructured, contact-rich environments. Recent progress in large language models (LLMs) and Learning from Demonstrations (LfD) has enabled unprecedented generalization, from zero-shot skill acquisition to fine-tuning for complex tasks. Yet, when applied to manipulation, such learned policies often inherit unsafe biases from demonstrations or fail under dynamic human interaction.
The goal of this workshop is to build a forum for interdisciplinary exchange—connecting LfD and AI researchers with experts in control theory and formal verification—through the lens of robotic manipulation. Manipulation tasks, from dexterous grasping to collaborative assembly, demand both adaptability and rigorous safety guarantees. Through a series of talks, panels, poster sessions, and interactive demos, we will discuss the opportunities and limitations of large reasoning models (LRMs) in manipulation, strategies for achieving robustness and stability in contact-rich dynamics, and the role of formal specifications in ensuring scalable and verifiable deployment of manipulation skills.
By fostering collaborations across fields, this workshop seeks to inspire innovative research directions in manipulation learning, identify emerging trends, and lay the groundwork for a new generation of robotic manipulation systems that are not only capable and adaptive but also verifiably safe. Participants will help shape an evolving research agenda focused on integrating demonstration-driven learning, reasoning with large models, and formal guarantees to tackle the challenges of reliable manipulation in human-centered environments.