Demo videos

Drosophibot II, a biomimetic fruit fly robot for studying the neuromechanics of legged locomotion

Clarus A. Goldsmith (1), Moritz Haustein (2), Ansgar Büschges (2), Nicholas S. Szczecinski (1)

(1) Department of Mechanical, Materials and Aerospace Engineering, West Virginia University, Morgantown, WV, USA

(2) Institute of Zoology, University of Cologne, Köln, NRW, Germany

For decades, the field of biologically inspired robotics has leveraged insights from animal locomotion to improve the walking ability of legged robots. Recently, “biomimetic” robots have been developed to model how specific animals walk. By prioritizing biological accuracy to the target organism rather than the application of general principles from biology, these robots can be used to develop detailed biological hypotheses for animal experiments, ultimately deepening our understanding of the biological control of legs while improving technical solutions. In this work, we report the development and validation of the robot Drosophibot II, a meso-scale robotic model of an adult fruit fly, Drosophila melanogaster. This robot is novel for its close attention to the kinematics and dynamics of Drosophila, an increasingly important model of legged locomotion. Each leg’s proportions and degrees of freedom have been modeled after Drosophila 3D pose estimation data. We developed a program to automatically solve the inverse kinematics necessary for walking and the inverse dynamics necessary for mechatronic design. By applying this solver to a fly-scale body structure, we demonstrate that the robot’s dynamics fit those modeled for the fly. We validate the robot’s ability to walk forward and backward via open-loop straight-line walking with biologically inspired foot trajectories. This robot will be used to test biologically inspired walking controllers informed by the morphology and dynamics of the insect nervous system, which will increase our understanding of how the nervous system controls legged locomotion.
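As a rough illustration of what such a kinematics solver does (a generic sketch, not the authors' program; the segment lengths and foot trajectory below are placeholder values), a damped least-squares inverse kinematics routine for a simplified two-segment planar leg could look like this:

```python
# Illustrative only: numerical IK for a simplified 2-joint planar leg,
# not the solver described in the abstract. Segment lengths and the
# foot trajectory are placeholder values.
import numpy as np

L1, L2 = 0.8, 1.0  # "femur" and "tibia" lengths (arbitrary units)

def forward_kinematics(q):
    """Foot position of a 2-joint planar leg."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    return np.array([
        [-L1 * np.sin(q[0]) - L2 * np.sin(q[0] + q[1]), -L2 * np.sin(q[0] + q[1])],
        [ L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),  L2 * np.cos(q[0] + q[1])],
    ])

def solve_ik(target, q0, iters=100, damping=1e-3):
    """Damped least-squares iteration toward one foot target."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - forward_kinematics(q)
        J = jacobian(q)
        q += np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
        if np.linalg.norm(err) < 1e-6:
            break
    return q

# Track a simple stepping trajectory for the foot, reusing each solution
# as the initial guess for the next waypoint.
t = np.linspace(0, 1, 50)
foot_path = np.stack([0.5 + 0.4 * t, -1.2 + 0.2 * np.sin(np.pi * t)], axis=1)
q = np.array([0.3, -0.6])
joint_angles = []
for target in foot_path:
    q = solve_ik(target, q)
    joint_angles.append(q.copy())
```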

GUA robot animal

Yijie Gao (Ink)

University of Southampton

GUA is a bionic robotic animal designed for companionship and interaction. It has a variety of sensors that can sense stimuli and information from the environment and people, such as temperature and humidity in the environment, changes in light and air pressure, human voice and touch, and the movement of people or objects around it.

The system is composed of five modules: sensor, brain, upper body, lower body, and non-physical output. Information is collected through the sensor module and transmitted to the "brain" chip, where a weighted calculation updates the robot's activity value, emotion value, and initiative. Based on these values, the brain determines which behavior to perform (such as sleeping, attacking, or acting coquettishly). The chosen behavior is finally output to the mechanical structures controlled by the upper- and lower-body modules to produce movement, as well as to the sound and light outputs controlled by the non-physical output module (not yet completed in version 3.0).
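As an illustration of this pipeline, a minimal sketch of the sense, weighted state update, and behavior selection loop might look like the following; the weights, thresholds, and behavior names are placeholders, not GUA's actual parameters:

```python
# Hypothetical sketch of GUA's sense -> weighted state update -> behavior
# selection loop. All weights, thresholds, and behavior names are
# placeholders, not the values used in the robot.

SENSOR_WEIGHTS = {
    "activity":   {"touch": 0.5, "sound": 0.3, "motion": 0.4, "light": 0.1},
    "emotion":    {"touch": 0.6, "sound": 0.2, "temperature": 0.2},
    "initiative": {"motion": 0.5, "sound": 0.3, "light": 0.2},
}

def update_state(state, readings):
    """Weighted accumulation of sensor readings into the internal state values."""
    for value, weights in SENSOR_WEIGHTS.items():
        delta = sum(w * readings.get(sensor, 0.0) for sensor, w in weights.items())
        state[value] = min(1.0, max(0.0, state[value] + 0.1 * delta))
    return state

def select_behavior(state):
    """Pick a behavior from the internal state (placeholder rules)."""
    if state["activity"] < 0.2:
        return "sleep"
    if state["emotion"] < 0.3 and state["activity"] > 0.6:
        return "attack"
    if state["initiative"] > 0.7:
        return "act_coquettishly"
    return "idle"

state = {"activity": 0.5, "emotion": 0.5, "initiative": 0.5}
readings = {"touch": 1.0, "sound": 0.2, "motion": 0.0}
state = update_state(state, readings)
print(select_behavior(state))  # behavior is sent to the body and output modules
```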

In subsequent research, I streamlined the system onto a smaller, denser circuit board, improved its stability and circuit adaptability, and made it possible to support sound output.

An experiment with human participants, conducted in collaboration with the psychology department, will be held in May; its ethics application has been approved.

Insect Perception: A Key to Advancing Bioinspired Cognitive Robotics

Chowdhury Mohammad Masum Refat, Mochammad Ariyanto, Keisuke Morishima 

Osaka University

Insects are a diverse group of organisms found in virtually every habitat on Earth, from dense rainforests to urban environments. They play essential roles in ecosystems as pollinators, decomposers, and prey, contributing to the balance and functioning of ecosystems worldwide. Around 50% of named species on our planet are insects. Insects have highly developed touch, taste, smell, vision, and hearing, as well as other sensing abilities that have yet to be discovered. They contribute substantially to agriculture and many other fields and play an important role in balancing the ecosystem. To achieve global sustainability goals, we need to understand how insects perceive environmental changes.

In our lab, we focus on studying cyborg insect perception patterns in response to environmental changes. This helps us understand how insects, despite their small bodies, process high-level sensory information with low power consumption. Building on this work, we are developing various kinds of cyborg insect control systems to improve the sustainability and efficiency of cyborg insects in real-life applications. We believe understanding insect perception plays a critical role in advancing bioinspired cognitive robotics. At the workshop, we will explain in more detail how we analyze insect perception and how we use this information to develop various kinds of cyborg insect control systems.

ACKNOWLEDGEMENT:

This work was supported by JST (Moonshot R&D) (grant number JPMJMS223A) and Mitsubishi Corporation.

REFERENCES:

- G. C. H. E. de Croon et al., "Insect-inspired AI for autonomous robots." Sci. Robot. 7, eabl6334 (2022). DOI: 10.1126/scirobotics.abl6334

Bio-Inspired Quadruped Platform for Unsupervised Object Discovery via Interaction

Daniel Barron*, Robin Dumas*, Bear Haon*, Nakul Srikanth*, Kaylene Stocking, Claire Tomlin

UC Berkeley

* equal first authors

Unsupervised Object Detection leverages the principle that distinct objects tend to move independently of each other, hypothesized to be a core inductive bias in human perceptual learning [1]. This methodology allows for the identification of relevant objects without explicit labels, which is more akin to the way animals learn to navigate and manipulate their environments than classic supervised learning. However, most research on unsupervised object detection to date has been done on static datasets of images or short video clips [2,3]. Inspired by nature’s autonomous agents, we aim to move object detection research towards a robotic platform that can interact with and manipulate objects instead of only passively looking at them.
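As a toy illustration of the motion-independence principle (not the method of [2], [3], or of this project), pixels can be grouped by clustering their optical-flow vectors so that regions moving together are assigned to the same tentative object. The flow field below is synthetic; real pipelines would estimate it from video:

```python
# Toy illustration of "objects move independently": cluster per-pixel
# optical-flow vectors so pixels that move together are grouped together.
# The flow field is synthetic; this is not the project's pipeline.
import numpy as np
from sklearn.cluster import KMeans

H, W = 64, 64
flow = np.zeros((H, W, 2))
flow[10:30, 10:30] = [2.0, 0.0]          # "object" 1 moving right
flow[40:60, 35:55] = [0.0, -1.5]         # "object" 2 moving up
flow += 0.05 * np.random.randn(H, W, 2)  # sensor noise

# Cluster flow vectors; pixels with similar motion land in the same cluster.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(flow.reshape(-1, 2))
segmentation = labels.reshape(H, W)      # coarse, label-free object masks
```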

Towards this goal, we developed:

A task environment for a robot, inspired by the foraging and object-manipulation behaviors seen in mammals, to pick up and move objects to goal locations;

A central ROS-based algorithm to coordinate an environment engagement routine (a minimal sketch follows this list), deployed on a Quadruped Unmanned Ground Vehicle (UGV), reminiscent of the adaptable and terrain-agnostic movement strategies in animals;

An extended Large Language Model (LLM) integration to advance associated research in robust, assured, and trustworthy autonomy.
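Below is a minimal sketch of what such a ROS-based coordination node could look like; the topic names, message types, and decision rule are assumptions for illustration, not the project's actual interfaces:

```python
#!/usr/bin/env python
# Hypothetical skeleton of an engagement-routine coordinator in ROS 1.
# Topic names, message types, and the decision rule are placeholders,
# not the interfaces used on the quadruped platform.
import rospy
from geometry_msgs.msg import PoseStamped
from std_msgs.msg import String

class EngagementCoordinator:
    def __init__(self):
        # Navigation goal for the object currently being approached or carried.
        self.goal_pub = rospy.Publisher("/engagement/goal_pose", PoseStamped, queue_size=10)
        # Discovered-object poses from the (unsupervised) perception stack.
        rospy.Subscriber("/perception/object_pose", PoseStamped, self.on_object)
        # High-level commands, e.g. from an LLM-based planner.
        rospy.Subscriber("/planner/command", String, self.on_command)
        self.active = False

    def on_command(self, msg):
        self.active = (msg.data == "start_foraging")

    def on_object(self, msg):
        if self.active:
            # Forward the detected object's pose as the next navigation goal.
            self.goal_pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("engagement_coordinator")
    EngagementCoordinator()
    rospy.spin()
```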

By improving robustness and interpretability of object perception and manipulation, research on object discovery for autonomous systems may impact real-world applications such as search and rescue operations and assistive technology for the differently-abled. By studying and mimicking the cognitive and functional aspects of animal perception and navigation, we can enhance robotic control architectures to achieve effective operations in out-of-factory scenarios, driving forward the evolution of cognitive robotics.

REFERENCES:

[1] Spelke, Elizabeth S. "Principles of object perception." Cognitive science 14.1 (1990): 29-56.

[2] Sajjadi, Mehdi SM, et al. "Object scene representation transformer." Advances in Neural Information Processing Systems 35 (2022): 9512-9524.

[3] Kabra, Rishabh, et al. "Simone: View-invariant, temporally-abstracted object representations via unsupervised video decomposition." Advances in Neural Information Processing Systems 34 (2021): 20146-20159.

Anticipatory-like adaptation in perturbed robot-to-human handovers

Francesco Iori (1,2), Gojko Perovic (1,2), Matteo Conti (1,2), Angela Mazzeo (1,2), Marco Controzzi (1,2), Egidio Falotico (1,2)

(1) The BioRobotics Institute, Scuola Superiore Sant’Anna, Viale Rinaldo Piaggio 34, Pontedera, 56025, Italy.

(2) Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, Piazza Martiri della Libertà 33, Pisa, 56127, Italy

Passing an object (handover) is a joint action that requires multiple skills, among which is the coordination between two agents.

When considering handover trajectories, humans can easily and reactively adapt their trajectory even under unpredictable or adversarial perturbations. In our work, we investigated how a robot could do the same by implementing a control system inspired by the biological concept of anticipatory control.

In a first exploratory work, we proposed a Dynamical Movement Primitive (DMP) [1] approach to trajectory generation.

Under the assumption of a known goal location, the speed of the robot trajectory is modulated reactively to the human motion [2].

That work showed that hand motion alone is informative enough to allow fast reactions.
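For readers unfamiliar with DMPs, the following one-dimensional sketch (a generic formulation following [1], not the implementation of [2]) shows where reactive speed modulation enters: the integration rate of the transformation and canonical systems is scaled by a speed factor that could be driven by the observed human motion. The gains, forcing term, and speed signal are placeholders:

```python
# Minimal 1-D DMP sketch (standard point-attractor form from [1]) with the
# integration rate scaled by a reactive speed factor. Gains, the forcing
# term, and the speed signal are placeholders.
import numpy as np

def dmp_rollout(y0, g, speed_factor, T=1.0, dt=0.002, alpha_z=25.0, alpha_x=4.0):
    beta_z = alpha_z / 4.0
    y, z, x = y0, 0.0, 1.0
    traj = []
    for k in range(int(T / dt) * 3):       # allow extra time if slowed down
        s = speed_factor(k * dt)           # reactive modulation, e.g. from human motion
        f = 0.0                            # learned forcing term omitted in this sketch
        z += s * dt / T * (alpha_z * (beta_z * (g - y) - z) + f)
        y += s * dt / T * z
        x += s * dt / T * (-alpha_x * x)   # canonical system (phase variable)
        traj.append(y)
        if x < 1e-3 and abs(g - y) < 1e-3:
            break
    return np.array(traj)

# Example: slow down mid-trajectory, as if the human briefly stopped moving.
traj = dmp_rollout(0.0, 0.3, speed_factor=lambda t: 0.2 if 0.3 < t < 0.6 else 1.0)
```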

However, the system depended on a fixed estimate of the handover location, coupling "where" to go and "when" to go.

To remove the system's dependency on knowledge of the handover location, and to enable execution in unstructured scenarios, we looked to the bio-inspired concept of anticipatory control.

To this end, two NN models are trained to provide:

1. a prediction of the displacement of the human hand;

2. the expected direction of motion of the human hand in case of a handover.

By exploiting these two predictions, we can provide the system with:

1. a target to track to perform the handover (as the predicted position of the human hand);

2. an estimate of whether the human is moving for the handover (as the mismatch between the predicted motion and the predicted direction for handover);

effectively decoupling the two problems of where and when to reach for a handover.

Inspired by anticipatory control, the system's speed is now adapted by modulating it according to the mismatch between the two predictions, with the handover-specific model providing a prior on how the interaction should evolve.

However, directly regulating the speed of evolution in this way cannot discriminate whether the human's hand is stationary because they have already reached out and are waiting for the object, or because they are simply not performing the handover.

To address this issue, we modify the speed modulation by adding first-order dynamics, producing a memory effect.
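A schematic sketch of this mechanism follows; the mapping from prediction mismatch to speed and the filter time constant are illustrative choices, not the actual system's equations:

```python
# Schematic sketch: speed modulation from the agreement between the predicted
# hand motion and the handover-direction prior, filtered by first-order
# dynamics to produce a memory effect. Mapping and time constant are
# illustrative, not the actual system's parameters.
import numpy as np

def mismatch_to_speed(pred_displacement, handover_direction):
    """Agreement between predicted motion and the handover prior -> speed in [0, 1]."""
    norm = np.linalg.norm(pred_displacement)
    if norm < 1e-6:
        return 0.0  # hand not moving: this raw signal alone cannot tell why
    cos_sim = float(np.dot(pred_displacement, handover_direction)) / (
        norm * np.linalg.norm(handover_direction))
    return max(0.0, cos_sim)

def filtered_speed(raw, prev, dt, tau=0.5):
    """First-order low-pass (memory effect): speed decays slowly when the hand
    pauses, so a partner who has already reached out keeps the robot moving."""
    return prev + dt / tau * (raw - prev)

# Synthetic example: the hand moves toward the handover, then pauses.
prior_direction = np.array([1.0, 0.0, 0.0])
speed, dt = 0.0, 0.01
for step in range(300):
    moving = step < 150
    pred_displacement = np.array([0.02, 0.0, 0.0]) if moving else np.zeros(3)
    speed = filtered_speed(mismatch_to_speed(pred_displacement, prior_direction), speed, dt)
# `speed` stays above zero for a while after the pause instead of dropping immediately.
```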

The final system can effectively react and adapt to perturbations without ever requiring data from perturbed interactions for its development, relying only on a prior over unperturbed (positive) interactions, here in the form of a trained predictor, together with an anticipatory-like control architecture.

This work was supported by the European projects Human Brain Project (SGA 945539) and APRIL Project (SGA 870142).

[1] A. J. Ijspeert, J. Nakanishi, H. Hoffmann, P. Pastor, and S. Schaal, “Dynamical movement primitives: Learning attractor models for motor behaviors,” Neural Comput., vol. 25, no. 2, pp. 328–373, Feb. 2013, doi: 10.1162/NECO_a_00393.

[2] F. Iori, G. Perovic, F. Cini, A. Mazzeo, E. Falotico, and M. Controzzi, “DMP-Based Reactive Robot-to-Human Handover in Perturbed Scenarios,” Int J of Soc Robotics, vol. 15, no. 2, pp. 233–248, Feb. 2023, doi: 10.1007/s12369-022-00960-4.