11:00 - 11:10
by Prof. Byungkyu Kim
11:10 - 12:00
by Prof. Seonggun Joe
Abstract (click to show):
Soft robotics has opened new successful approaches to implement known or new robotic tasks such as grasping, locomotion, climbing, growing, and outperforming rigid body robotics. At the heart of this success is the possibility of exploring a plethora of new bioinspired designs because of the inherent compliance and deformability of robot bodies, which much like living organisms, can reach the versatility needed to adapt and merge in the real-world1. Herein, natural continuum systems that can deform in different directions are a source of inspiration, which is a natural continuum universal manipulator, being an agile organ without joints capable of delicate and precise, yet strong, grasping and manipulation. The intricated composition of muscle fibers, fat, and connective tissues, blood vessels, nerves, etc., presents a mechanical continuity that is likely to disobey the well-known kinematic principles (i.e., based on articulated skeletons). These natural structures inspire innovative design principles, technologies, and materials, for developing new versatile manipulators with no distinction between arm and gripper. In this vision, a major unmet challenge in soft robotics consists in designing and building actuated jointless structures having high deformability, and reliability, with no sharp distinction in stiffness, in which different movements can be programmed by both the constituting materials and design of the core structure.
Meanwhile, metamaterials have shown strong potential in addressing the aforementioned challenges, including enabling complex, multidimensional motion and computational capabilities in soft machines. Owing to these promising characteristics, metamaterials can mimic key features of the elephant trunk, such as continuity, compressibility, compliance, and a high force-to-weight ratio. From a mechanical perspective, these properties are driven primarily by architectural design rather than material composition: the metamaterials consist of periodic and/or aperiodic unit cells hierarchically arranged throughout the architecture. The main challenges in exploiting them for soft robotics are: (i) addressing topological transformations to determine not only the unit cell dimensions but also the overall design, i.e., to achieve programmable and optimized deformation; and (ii) exploiting modern additive manufacturing techniques to obtain highly deformable structures that overcome the shortcomings of soft materials (i.e., low viscosity and stability, long curing times, and high density)2.
This work presents a continuum architecture with no sudden changes in stiffness, combined with the capability of providing reliable, large deformations in different directions. Based on the extensibility and compressibility of volumetrically tessellated structures, monolithic soft architectures are fabricated using 3D printing technology. These monolithic structures are printed in a single process and exhibit multidimensional movements.
In conclusion, we present completely new methodologies for developing programmable, pneumatically driven and tendon-driven 3D architectures that scale up to meter-scale dimensions, like the real elephant trunk found in nature.
1 Laschi, Cecilia, and Barbara Mazzolai. "Bioinspired materials and approaches for soft robotics." MRS Bulletin 46.4 (2021): 345-349.
2 Joe, Seonggun, et al. "Jointless bioinspired soft robotics by harnessing micro and macroporosity." Advanced Science 10.23 (2023): 2302080.
by Prof. Van Anh Ho
Abstract (click to show):
My research philosophy revolves around elucidating the fundamental physics behind intriguing natural phenomena, leveraging cutting-edge technologies to engineer these principles, and applying them to develop innovative mechanisms that foster the safe, intelligent, and resilient coexistence of robotic systems with humans. This approach encompasses both the scientific and technological dimensions of robotics, emphasizing translational research.
In this presentation, I will concentrate on our endeavors to design systems with adaptive morphology and embodied intelligence. These systems are adept at swiftly adapting to ever-changing environmental conditions without imposing excessive computational demands on a central control unit. This also implies that such systems can decentralize certain calculations, entrusting them to the body itself. Additionally, I will introduce various topics within soft robotics, including soft underwater robots1, soft flying2 and locomotive robots3, morphological designs for soft haptic interfaces encompassing haptic sensing and haptic display, as well as safety measures and controls for drones and robot arms that rely on adaptive morphology and multimodal sensing.
1 Dinh Quang Nguyen and Van Anh Ho, Anguilliform Swimming Performance of an Eel-Inspired Soft Robot, Soft Robotics 2022 9:3, 425-439
2 S. T. Bui, Q. K. Luu, D. Q. Nguyen, N. D. M. Le, G. Loianno and V. A. Ho, "Tombo Propeller: Bioinspired Deformable Structure Toward Collision-Accommodated Control for Drones," in IEEE Transactions on Robotics, vol. 39, no. 1, pp. 521-538, Feb. 2023, doi: 10.1109/TRO.2022.3198494.
3 L. V. Nguyen, K. T. Nguyen and V. A. Ho, "Terradynamics of Monolithic Soft Robot Driven by Vibration Mechanism," in IEEE Transactions on Robotics, doi: 10.1109/TRO.2025.3532499.
12:00 - 13:30
13:30 - 14:30
Unmanned Mobile Robotics for Real-world Applications in Extreme Environments
by Prof. Yonghoon Ji
Abstract (click to show):
In our laboratory, we conduct research to solve various problems for real-world applications through unmanned mobile robots and various sensor data processing technologies. Specifically, we realize advanced intelligent recognition and motion control technologies by measuring physical information distributed in various extreme environments1,2.
First, a robotic exploration system in a disaster area is introduced. We develop a semi-autonomous mobile robot system that builds a wide-area survey map that includes semantic information to carry out damage monitoring of the environment inside reactor buildings. To this end, sensor fusion technology is developed for multiple types of sensors (e.g., thermography, hyperspectral cameras, and LiDAR) mounted on the robot to collect damage information at disaster sites including various physical properties of the environment.
Second, an autonomous robotic system for snow removal is introduced. We develop fundamental technologies for an autonomous snow-removal robot operating in heavy-snowfall environments. To improve the perception of the surrounding environment by the camera installed on the robotic system, we focus on image-generative AI technology, which has seen remarkable innovation in recent years. By training on the relationship between summer and snow-covered winter environments in advance using image data, we can generate corresponding synthetic summer images with high accuracy even in the winter season and accurately detect the snow-covered pavement areas that are the target of snow removal.
Finally, underwater sensing technology using acoustic cameras for unmanned construction is introduced. We focus on generating a 3-D material distribution map using an acoustic camera in waterfront development environments that humans cannot directly access, such as construction and reclamation projects for airports, ports, and submarine tunnels. By providing the underwater map information to the relevant decision-making organizations, we expect it to be utilized in planning future waterfront development.
1 Ji, Yonghoon, Tanaka, Yusuke, Tamura, Yusuke, Kimura, Mai, Umemura, Atsushi, Kaneshima, Yoshiharu, Murakami, Hiroki, Yamashita, Atsushi, and Asama, Hajime. "Adaptive motion planning based on vehicle characteristics and regulations for off-road UGVs." IEEE Transactions on Industrial Informatics 15.1 (2019): 599-611.
2 Wang, Yusheng, Ji, Yonghoon, Woo, Hanwool, Tamura, Yusuke, Tsuchiya, Hiroshi, Yamashita, Atsushi, and Asama, Hajime. "Acoustic camera-based pose graph SLAM for dense 3-D mapping in underwater environments." IEEE Journal of Oceanic Engineering 46.3 (2021): 829-847.
by Prof. Hae-Sung Yoon
Abstract (click to show):
This presentation introduces grippers developed at the Smart Hybrid Manufacturing Laboratory of Korea Aerospace University, focusing on energy-efficient soft grippers with intelligent posturing capabilities. Extensive efforts have been made to enhance secure grasping and maintain stable gripping under varying conditions. This study explores two key aspects: energy-efficient structural designs and adaptive grasp-posture control. Beginning with basic Fin Ray soft fingers, various approaches have been investigated to further improve gripper performance. This research is anticipated to contribute to the development of advanced robotic end-effectors for industries requiring high speed and precision, such as aerial robotics.
by Prof. Haoran Xie
Abstract (click to show):
As robotics evolves to augment human capabilities beyond traditional physical limitations, this talk presents recent pioneering projects from our research group, the Human-Centered AI Laboratory at the Japan Advanced Institute of Science and Technology (JAIST), showcasing the fusion of human-centered design and cutting-edge robotics technology.
First, we introduce scalable, origami-inspired supernumerary robotic limbs designed to enhance daily living1. This work proposes lightweight, foldable devices that leverage continuous soft robotic mechanisms and polypropylene-based origami modules to adapt to the wearer’s body (e.g., hands, torso) while ensuring safety, durability, and minimal movement restriction. Through material experiments and fatigue tests, we demonstrate how this technology balances mechanical resilience with ergonomic comfort, offering a paradigm shift in wearable assistive systems.
Second, we present a semi-automatic robotic framework for creating oriental ink paintings from 3D models2, blending AI-driven stroke simplification, user-guided aesthetic refinement, and robotic precision to emulate traditional brushwork effects such as shading and scratchiness. By translating 3D geometries into expressive and physically consistent strokes, the proposed system empowers artists and novices alike to collaborate with machines in producing stylized artworks, bridging computational creativity and human intuition.
Together, these projects illustrate how human-centered robotic design can expand human potential: physically, through adaptive wearable systems, and creatively, through collaborative artistic tools. We conclude by discussing the broader implications of this work for the augmented-human era, in which robotics serves not as a replacement for humans but as a seamless extension of our bodies, creativity, and daily lives.
1 M. Kusunoki, L. V. Nguyen, H. Tsai, V. Ho and H. Xie, "Scalable and Foldable Origami-Inspired Supernumerary Robotic Limbs for Daily Tasks," in IEEE Access, vol. 12, pp. 53436-53447, 2024, doi: 10.1109/ACCESS.2024.3387485.
2 H. Jin et al., "A Semi-Automatic Oriental Ink Painting Framework for Robotic Drawing From 3D Models," in IEEE Robotics and Automation Letters, vol. 8, no. 10, pp. 6667-6674, Oct. 2023, doi: 10.1109/LRA.2023.3311364.
14:30 - 16:00
16:00 - 17:00
Query-based Object-aware Mapping for On-device Visual Language Mapping and Navigation
by Prof. Pileun Kim
Abstract (click to show):
As robotic applications expand, there is an increasing need for mapping frameworks that incorporate situational awareness and semantic understanding. This paper presents a novel approach to object-aware and knowledge-driven mapping by leveraging visual language models (VLMs) and large language models (LLMs). We introduce a unified framework that enables robots to generate lightweight, semantically enriched maps that support language-driven navigation with minimal computational overhead. By encoding keyframes and observed voxels, the system efficiently reduces map storage and processing requirements, making it suitable for real-time deployment on low-powered edge devices such as the Nvidia Jetson Orin Nano. Additionally, our method facilitates goal generation based on general knowledge, achieving a 58.7% success rate even with ambiguous language queries. This capability enhances zero-shot navigation and object-aware exploration, significantly improving robotic autonomy in unstructured environments.
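The core lookup the abstract describes, matching a language query against object embeddings stored alongside the voxels where the objects were observed, can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: `ObjectAwareMap`, the 3-D hand-made embeddings, and the labels are invented stand-ins for real VLM feature vectors and keyframe/voxel bookkeeping.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ObjectAwareMap:
    """Toy object-aware map: each entry pairs an object embedding
    (as a VLM would produce) with the voxel cell where it was seen."""
    def __init__(self):
        self.entries = []  # list of (label, embedding, voxel)

    def add_observation(self, label, embedding, voxel):
        self.entries.append((label, embedding, voxel))

    def query(self, query_embedding):
        # A language query, embedded with the same model as the
        # observations, selects the best-matching stored object.
        best = max(self.entries, key=lambda e: cosine(e[1], query_embedding))
        return best[0], best[2]

# Tiny 3-D vectors standing in for real embedding vectors.
m = ObjectAwareMap()
m.add_observation("chair", [0.9, 0.1, 0.0], (2, 3, 0))
m.add_observation("plant", [0.0, 0.2, 0.9], (5, 1, 0))
label, voxel = m.query([0.8, 0.2, 0.1])  # query vector near "chair"
print(label, voxel)
```

Storing only compact per-object entries rather than dense feature maps is what makes this kind of lookup cheap enough for edge devices; the real system's keyframe and voxel encoding serves the same storage-reduction goal.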
by Dr. Bohyun Hwang
Abstract (click to show):
Colonoscopy is the gold standard for diagnosing colorectal cancer and related diseases. However, colonoscope insertion techniques are difficult to master, and the rigorous insertion process can be painful for the patient. To address this issue, we develop an autonomously manipulated colonoscopy system that steers toward the lumen detected by a navigation architecture. To enhance safety during autonomous operation, it is essential to incorporate knowledge and expertise comparable to that of a skilled colonoscopist for accurate situational awareness. In this research, we employ a supervised learning-based network that incorporates the expertise of medical professionals and develop a navigation system that suggests a clinically safe exploration path in the colon environment, where mathematical modeling is challenging. The network is based on ResNet34 and is trained to predict the target steering point and the collision probability. Based on these predictions, the endoscope tip's steering direction and whether to insert or withdraw are determined.
The tip is steered not toward the deepest point, but toward the direction in which the appropriate technique can be applied for a highly safe colonoscopy procedure. In detail, the datasets are prepared with the goal of implementing techniques such as slalom and slide-by, which colonoscopists use during colonoscopy. The prediction performance is quantitatively evaluated with RMSE and F1-score, yielding 0.208 and 0.868, respectively. In addition, we verified that the navigation network operates with high accuracy throughout the entire colonoscopy procedure by evaluating its accuracy over the full procedure1.
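The two metrics quoted above follow their standard definitions: RMSE for the regressed steering point and F1-score for the binary collision prediction. A minimal sketch of both, on toy numbers (not the study's data), might look like this:

```python
import math

def rmse(y_true, y_pred):
    # Root-mean-square error over paired regression targets.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def f1_score(y_true, y_pred):
    # F1 = harmonic mean of precision and recall for binary labels.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy values only: steering-point coordinates and collision labels.
steer_true = [0.1, 0.4, 0.8]
steer_pred = [0.2, 0.5, 0.6]
collide_true = [1, 0, 1, 1]
collide_pred = [1, 0, 0, 1]
print(round(rmse(steer_true, steer_pred), 3))        # steering-point error
print(round(f1_score(collide_true, collide_pred), 3))  # collision prediction
```

A lower RMSE means the predicted steering point sits closer to the expert-labeled one, while F1 balances missed collisions (recall) against false alarms (precision).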
Through this talk, we aim to present case studies on the application of supervised learning-based networks and discuss methods for replicating the expertise of medical specialists, which is acquired through years of training and practice in the medical field.
1 B. Hwang, C. Shim, S. Joe and B. Kim, "A Preliminary Study on Autonomous Robotic Colonoscopy via Deep Neural Network," 2024 13th International Conference on Control, Automation and Information Sciences (ICCAIS), IEEE, 2024.
by Prof. Nhan Huu Nguyen
Abstract (click to show):
The integration of multimodal sensory systems is essential for advanced robotic systems, which are expected to operate in increasingly challenging terrains or work closely with humans in unstructured environments. Rich information from various sources significantly enhances a robot’s autonomy, efficiency in decision-making, and adaptation to disturbances. Alongside this trend, combining several sensing abilities (particularly vision and tactile sensation) in a single type of sensing component has become preferable. In this landscape, vision-based tactile sensory systems stand out as a potential solution. Despite their potential, the literature contains few successful case studies of such devices. The bottleneck is the lack of effective mechanisms to separate the visual ranges of the two modalities, which causes overlapping perceptive fields.
In this talk, two solutions to this problem will be introduced: one for large-scale1 and one for small-scale soft sensors2. The talk will then extend to the next multimodal sensing paradigm, in which one modality not only plays its own role but also complements the others to achieve high-level perception as well as intelligent robot behaviors. Finally, promising scenarios and applications will be presented as an open invitation to potential collaborators.
1 Q. K. Luu, D. Q. Nguyen, N. H. Nguyen and V. A. Ho, "Soft Robotic Link with Controllable Transparency for Vision-based Tactile and Proximity Sensing," 2023 IEEE International Conference on Soft Robotics (RoboSoft), Singapore, Singapore, 2023, pp. 1-6, doi: 10.1109/RoboSoft55895.2023.10122059.
2 N. H. Nguyen, N. D. M. Le, Q. K. Luu, T. T. Nguyen and V. A. Ho, “Vi2TaP: A Cross-Polarization Based Mechanism for Perception Transition in Tactile-Proximity Sensing with Applications to Soft Grippers,” IEEE Robotics and Automation Letters, 2025. (Under Review)
17:00 - 17:30
18:00