TriFinger: An Open-Source Robot for Learning Dexterity
The TriFinger is a robotic platform intended to support research in dexterous manipulation. All hardware and software (including a simulator) are open source; links can be found below. Please see the paper for a detailed description of the hardware and software: https://arxiv.org/abs/2008.03596
The key properties of the hardware and software design are:
Dexterity: mechanical and sensing capabilities allow for complex object manipulation beyond grasping
Unsupervised Operation: robust hardware and safe software allow for long-term unsupervised operation
Low Cost: hardware design is simple and inexpensive (roughly $5,000)
Ease of Use: extremely simple C++ and Python interfaces, control rates up to 1 kHz
Robot Agnostic Software: new robots can easily be integrated into the software framework, which may allow algorithms to be run across different robots
Each finger has three degrees of freedom (DoF), and the three fingers share a workspace, permitting complex fine manipulation. Three RGB cameras ensure good visibility of the workspace in any configuration. Instructions for building your own platform can be found here.
The design is loosely inspired by the human thumb, index, and middle finger.
The platform design allows for dexterous manipulation.
The fingers share a large workspace for simultaneous object interaction.
The platform can be flipped, e.g. for throwing.
The internal mechanics are based on the quadruped proposed here. This design has the following qualities:
low weight, high torque
1 kHz torque control and sensing
robustness to impacts due to the transparency of the transmission
The key strengths of the software framework are:
simple user interface in Python and C++ for control at up to 1 kHz
safety checks to prevent the robot from breaking
a synchronized history of all inputs and outputs is available and can be logged
we provide a simulator of the proposed platform
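To give an idea of what driver-side safety checks can look like, here is a minimal, self-contained sketch combining torque clamping with velocity-dependent damping. The function name, limits, and gains are illustrative assumptions, not the actual implementation used on the robot:

```python
# Illustrative sketch of driver-side safety checks.
# max_torque and damping_gain are hypothetical values, NOT the
# actual limits used by the TriFinger driver.

def apply_safety_checks(desired_torques, joint_velocities,
                        max_torque=0.36, damping_gain=0.02):
    """Clamp desired torques and add velocity-dependent damping.

    desired_torques, joint_velocities: lists of floats, one per joint.
    max_torque: absolute torque limit [Nm] (illustrative).
    damping_gain: gain of the damping term [Nm*s/rad] (illustrative).
    """
    safe = []
    for tau, vel in zip(desired_torques, joint_velocities):
        # clamp to the allowed torque range
        tau = max(-max_torque, min(max_torque, tau))
        # damp fast-moving joints to avoid destructive motions
        tau -= damping_gain * vel
        # clamp again so the damped torque also respects the limit
        tau = max(-max_torque, min(max_torque, tau))
        safe.append(tau)
    return safe
```

Running such checks on every control step is what makes it safe to hand the robot over to arbitrary user code (or an RL agent) without supervision.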
The robot-agnostic code can be found in the robot_interfaces repository, see here for a demo of usage in C++, and here for a demo of how a new robot can be implemented. The drivers for our particular robot are implemented in robot_fingers. More detailed instructions for installing and using the code can be found in the documentation:
For a demo of usage in Python, check this file; here is a snippet of actual code:
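The general usage pattern of the frontend can be sketched as follows: each desired action is appended to a queue and returns the time step at which it will be applied, and requesting the observation for that step blocks until it is available. The `MockFrontend` below is a stand-in so the example runs without hardware; the method names follow the robot_interfaces conventions, but the mock's dynamics are placeholders:

```python
# Sketch of the typical robot_interfaces control-loop pattern.
# MockFrontend stands in for the real RobotFrontend so that the
# example runs without hardware; its internal "dynamics" are fake.

class MockObservation:
    def __init__(self, position):
        self.position = position

class MockFrontend:
    """Stand-in for the real robot frontend."""
    def __init__(self):
        self.t = -1
        self.position = [0.0, 0.0, 0.0]

    def append_desired_action(self, torques):
        # the real frontend queues the action and returns the time
        # step at which it will be applied
        self.t += 1
        self.position = [p + 0.001 * tau
                         for p, tau in zip(self.position, torques)]
        return self.t

    def get_observation(self, t):
        # the real frontend blocks until step t has been executed
        return MockObservation(list(self.position))

frontend = MockFrontend()
desired_torques = [0.1, 0.1, 0.1]
for _ in range(5):
    # send the action; get back the step at which it is applied
    t = frontend.append_desired_action(desired_torques)
    # blocks until the corresponding observation is available
    observation = frontend.get_observation(t)
```

This append/observe pattern is what keeps user code synchronized with the 1 kHz control loop without the user having to manage timing explicitly.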
We provide a simulator (based on PyBullet) of the TriFinger robot. The simulator provides an interface identical to that of the real robot, which makes switching between simulation and hardware easy. Please see here for documentation and installation instructions.
To illustrate the capabilities of the platform, we perform experiments with simple demonstrations, optimal control, and deep reinforcement learning.
These motions were recorded through kinesthetic teaching (i.e. the motion was demonstrated by guiding the robot fingers).
As above, these motions were recorded through kinesthetic teaching.
Here, we execute a real-time 1 kHz control loop which computes the optimal forces to be applied to the object.
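A standard ingredient of such a force controller is mapping the desired fingertip force to joint torques via the Jacobian transpose, tau = J^T f. A minimal numpy sketch of that mapping, using an arbitrary example Jacobian rather than the TriFinger's actual kinematics:

```python
import numpy as np

def forces_to_torques(jacobian, force):
    """Map a desired fingertip force to joint torques via tau = J^T f.

    jacobian: (3, 3) linear Jacobian of one fingertip position with
              respect to the finger's three joint angles.
    force:    (3,) desired force at the fingertip [N].
    """
    return jacobian.T @ force

# Arbitrary example Jacobian (NOT the real TriFinger kinematics).
J = np.array([
    [0.1, 0.0, 0.00],
    [0.0, 0.1, 0.05],
    [0.0, 0.0, 0.10],
])
f = np.array([0.0, 1.0, 0.0])   # push with 1 N along y
tau = forces_to_torques(J, f)   # -> [0.0, 0.1, 0.05]
```

In the real controller this computation (plus the optimization that chooses the forces) must fit into the 1 ms budget of the control loop.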
Deep Reinforcement Learning
Here we apply an out-of-the-box implementation of a deep RL algorithm (DDPG from Stable Baselines) to learn reaching from scratch. Notably, no safety precautions are necessary on the user side.
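A reaching task of this kind is typically exposed to an off-the-shelf RL library through the standard gym-style reset/step interface. The following self-contained sketch shows that interface for a single simplified 3-DoF finger; the dynamics (direct position increments) and the dense distance-based reward are placeholders, not the actual task code:

```python
import numpy as np

class ReachingEnvSketch:
    """Minimal gym-style environment sketch for one 3-DoF finger.

    The dynamics and reward are deliberately simplistic stand-ins;
    a real environment would step the simulator or the real robot.
    """

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.max_steps = 100

    def reset(self):
        self.joint_positions = np.zeros(3)
        self.goal = self.rng.uniform(-1.0, 1.0, size=3)
        self.steps = 0
        return self._observation()

    def step(self, action):
        # integrate the action as a joint-position increment
        self.joint_positions += 0.1 * np.clip(action, -1.0, 1.0)
        self.steps += 1
        # dense reward: negative distance to the goal configuration
        reward = -np.linalg.norm(self.joint_positions - self.goal)
        done = self.steps >= self.max_steps
        return self._observation(), reward, done, {}

    def _observation(self):
        return np.concatenate([self.joint_positions, self.goal])

env = ReachingEnvSketch()
obs = env.reset()
obs, reward, done, info = env.step(np.ones(3))
```

Because the safety checks run below the user interface, an RL agent can explore freely through exactly this kind of interface without risking damage to the hardware.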