TriFinger: An Open-Source Robot for Learning Dexterity

Overview

The TriFinger is a robotic platform intended to support research in dexterous manipulation. All hardware and software (including a simulator) are open source; links can be found below. Please see the paper for a detailed description of the hardware and software: https://arxiv.org/abs/2008.03596

The key properties of the hardware and software design are summarized below.

Hardware

Each finger has 3 degrees of freedom (DoF), and the fingers share a workspace, permitting complex fine manipulation. Three RGB cameras ensure good visibility in any configuration. Instructions for building your own platform can be found here.

The design is loosely inspired by the thumb, index, and middle fingers.

The platform design allows for dexterous manipulation.

The fingers share a large workspace for simultaneous object interaction.

The platform can be flipped, e.g. for throwing. 

The internal mechanics are based on the quadruped proposed here; this actuator design makes the fingers torque-controlled, backdrivable, and robust.

Software

The key strengths of the software framework are that it is robot-agnostic, supports safe real-time control at 1 kHz, and provides the same interface for the real robot and the simulator.

The robot-agnostic code can be found in the robot_interfaces repository; see here for a demo of usage in C++ and here for a demo of how a new robot can be implemented. The drivers for our particular robot are implemented in robot_fingers. More detailed instructions for installing and using the code can be found in the documentation.

For a demo of usage in Python, check this file.
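
The core usage pattern looks roughly like the sketch below. This is not the linked demo verbatim: the wrapper class robot_fingers.Robot, the factory create_trifinger_backend, and the configuration file name are assumptions and should be checked against that file.

    import robot_interfaces
    import robot_fingers

    # Create the robot wrapper and start the real-time backend
    # (class, factory, and config-file names are assumptions; see the demo).
    robot = robot_fingers.Robot(robot_interfaces.trifinger,
                                robot_fingers.create_trifinger_backend,
                                "trifinger.yml")
    robot.initialize()

    # One (upper, middle, lower) joint-angle triple per finger [rad].
    position = [0.0, 0.9, -1.7] * 3

    for _ in range(1000):
        # Append a desired action to the time series; this returns the time
        # step at which the action will be applied.
        action = robot_interfaces.trifinger.Action(position=position)
        t = robot.frontend.append_desired_action(action)
        # get_observation(t) waits until step t is reached, keeping the loop
        # in sync with the robot's control rate.
        observation = robot.frontend.get_observation(t)
        print(observation.position)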

Simulator

We provide a simulator (based on PyBullet) of the TriFinger robot. It provides an interface identical to that of the real robot, which makes switching between simulation and hardware easy. Please see here for documentation and installation instructions.
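
For illustration, a simulated robot can be driven through the same append-action/get-observation cycle as the real one. This is a minimal sketch: the class name SimFinger and the finger_type value are assumptions about the trifinger_simulation package and should be checked against its documentation.

    from trifinger_simulation import SimFinger

    # The simulated robot exposes the same time-series interface as the real
    # robot frontend (names here are assumptions; see the simulator docs).
    sim = SimFinger(finger_type="trifingerone")

    position = [0.0, 0.9, -1.7] * 3
    for _ in range(1000):
        action = sim.Action(position=position)
        t = sim.append_desired_action(action)
        observation = sim.get_observation(t)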

Illustrative Experiments

To illustrate the capabilities of the platform, we perform simple demonstrations as well as optimal control and deep reinforcement learning experiments.

Fine Manipulation

These motions were recorded through kinesthetic teaching (i.e. the motion was demonstrated by guiding the robot fingers).

Flipping
Turning In-Hand
Large Motion
Writing
Balancing
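
A record-and-replay setup of this kind can be sketched with the same interface as above. This is purely illustrative, not the code used for the clips; it assumes that the default Action corresponds to zero torque, which leaves the backdrivable fingers free to be guided by hand.

    import robot_interfaces
    import robot_fingers

    # Robot setup as in the earlier snippet (names are assumptions).
    robot = robot_fingers.Robot(robot_interfaces.trifinger,
                                robot_fingers.create_trifinger_backend,
                                "trifinger.yml")
    robot.initialize()

    # Record: send zero-torque actions (assumed default of Action()) so the
    # fingers can be moved by hand, and store the observed joint positions.
    recorded = []
    for _ in range(10000):
        t = robot.frontend.append_desired_action(
            robot_interfaces.trifinger.Action())
        recorded.append(robot.frontend.get_observation(t).position)

    # Replay: track the recorded joint positions with position actions.
    for position in recorded:
        t = robot.frontend.append_desired_action(
            robot_interfaces.trifinger.Action(position=position))
        robot.frontend.get_observation(t)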

Throwing

As above, these motions were recorded through kinesthetic teaching.

Throwing a Plastic Cup
Throwing a Ball (Slow Motion)
Throwing a Ball

Optimal Control

Here, we execute a real-time 1 kHz control loop which computes the optimal forces to be applied to the object.

Object Pickup Task
Circular Motion Task
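
The structure of such a loop can be sketched as below using the torque interface. The helper compute_torques is a placeholder for the force optimizer and is not part of the released code; in the experiments, the forces to be applied to the object are computed and mapped to joint torques.

    import numpy as np
    import robot_interfaces
    import robot_fingers

    def compute_torques(observation):
        # Placeholder for the optimal-force computation; here we simply
        # return zero torque for all 9 joints.
        return np.zeros(9)

    robot = robot_fingers.Robot(robot_interfaces.trifinger,
                                robot_fingers.create_trifinger_backend,
                                "trifinger.yml")
    robot.initialize()

    action = robot_interfaces.trifinger.Action()  # assumed zero torque
    while True:
        t = robot.frontend.append_desired_action(action)
        # get_observation(t) waits for step t, keeping the loop in lock-step
        # with the robot's 1 kHz control cycle.
        observation = robot.frontend.get_observation(t)
        action = robot_interfaces.trifinger.Action(
            torque=compute_torques(observation))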

Deep Reinforcement Learning

Here, we apply an out-of-the-box implementation of a deep RL algorithm (DDPG from Stable Baselines) to learn reaching from scratch. Notably, no safety precautions are necessary on the user side.

Beginning of Training
Middle of Training
End of Training
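
A training run of this kind looks roughly as follows. This is a sketch only: the environment id "TriFingerReach-v0" is a hypothetical Gym wrapper around the reaching task, and DDPG from Stable-Baselines3 is shown here in place of the original Stable Baselines implementation used in the experiments.

    import gym
    from stable_baselines3 import DDPG

    # "TriFingerReach-v0" is a hypothetical environment id standing in for a
    # Gym wrapper around the (real or simulated) reaching task.
    env = gym.make("TriFingerReach-v0")

    model = DDPG("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=200_000)
    model.save("ddpg_trifinger_reaching")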