Manual Intelligence

Project motivation:

In the anthropomorphic robot hand cognition and control community, one of our goals is to increase the manual competence of anthropomorphic robot hands so that they can autonomously or semi-autonomously perform complex tasks, such as manipulating a general object directly or manipulating an object with tools, the way a human hand does. In traditional robot hand control, extensive prior knowledge is coded into the robot hand, which allows it to work only in structured environments (e.g., manipulating a known object) but leaves it poorly adaptive to changes in the environment (the object is changed, or an unexpected event occurs). Active/exploratory learning is an effective way to let the robot operate in such an interactive environment and adaptively adopt the right control action when the environment changes or unexpected events occur. In my project, I want to develop a unified hierarchical learning framework, suitable for learning by exploration or imitation, that enables an anthropomorphic hand (the Shadow Hand) to manipulate general objects in-hand. Multimodal sensing (joint angles, tactile sensors, and vision) will be used to close the control loop and realize this task.
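The sense-decide-act structure of the multimodal closed loop described above can be sketched as follows. This is a minimal illustration, not an implementation: all names (`Observation`, `policy`, `control_step`) are hypothetical placeholders, and no real Shadow Hand API is assumed.

```python
# Minimal sketch of a multimodal closed control loop: proprioception,
# tactile, and vision observations feed a (to-be-learned) policy that
# outputs joint commands. Names are illustrative placeholders.

from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    joint_angles: List[float]   # proprioception (rad)
    tactile: List[float]        # fingertip pressure readings
    vision: List[float]         # e.g., estimated object pose features


def policy(obs: Observation) -> List[float]:
    """Hypothetical policy: maps a multimodal observation to joint targets.

    Placeholder behavior: hold the current joint angles. In the proposed
    framework this mapping would be learned by exploration or imitation.
    """
    return obs.joint_angles


def control_step(obs: Observation) -> List[float]:
    """One iteration of the sense -> decide -> act loop."""
    action = policy(obs)
    # A real controller would send `action` to the hand and re-read sensors.
    return action
```

Each control cycle reads all three modalities, so a learned policy can react to tactile or visual events (e.g., incipient slip) that joint angles alone cannot reveal.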

Challenge:

(1) Unknown or only partially known robot hand kinematic and dynamic models.

(2) Limited sensor perception capability and unmodeled sensor noise (tactile, vision).

(3) Complex motion interaction models, e.g., point contact vs. soft-finger contact; rolling, sliding, and pinching manipulation.

(4) Physical constraints, e.g., the limited robot hand workspace (no finger collisions); finger motions must not violate joint limits or torque limits.

(5) Uncertain/unstructured work environment (context), e.g., unknown geometry, mass distribution, and size of the manipulated object.
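Challenge (4), enforcing joint and torque limits, can be illustrated with a simple command-saturation step. This is a generic sketch; the limit values and function names are illustrative assumptions, not Shadow Hand specifications.

```python
# Sketch of enforcing the physical constraints in challenge (4):
# clamp commanded joint angles and torques to their allowed ranges
# before sending them to the hand. Limits here are made-up examples.

from typing import List, Tuple


def clamp(value: float, lo: float, hi: float) -> float:
    """Saturate a single command so it never violates its limits."""
    return max(lo, min(hi, value))


def enforce_limits(
    angles: List[float],
    torques: List[float],
    angle_limits: List[Tuple[float, float]],   # (lo, hi) per joint, rad
    torque_limit: float,                       # symmetric bound, N*m
) -> Tuple[List[float], List[float]]:
    """Return angle and torque commands clipped to their limits."""
    safe_angles = [clamp(q, lo, hi) for q, (lo, hi) in zip(angles, angle_limits)]
    safe_torques = [clamp(t, -torque_limit, torque_limit) for t in torques]
    return safe_angles, safe_torques
```

Saturation is only the last line of defense; collision avoidance between fingers requires reasoning about the workspace geometry, not just per-joint bounds.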

Research Topic: