Here we introduce two datasets for grasping: one with a single object in the workspace and another with a heavily cluttered environment. All frames were captured with a 3D range sensor (Kinect/Ensenso) from a single viewpoint, and each frame includes point cloud data, annotations, and a 2D image.

Annotations are given in the image plane as oriented rectangles, in the same format as the Cornell dataset. Each handle (grasp) is represented by four lines giving the vertices in counter-clockwise order; the first two vertices define the line representing the orientation of the gripper plate. Each vertex is written as its x and y coordinates separated by a space. For cluttered scenes, the name of the corresponding object is additionally recorded for each handle. The source code is uploaded at the GitLab link.
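Below is a minimal sketch of how such an annotation file could be parsed, assuming the Cornell-style layout described above: one "x y" pair per line and four consecutive lines per handle. The handling of object names in cluttered scenes (a non-numeric label line preceding each handle's four vertex lines) is an assumption, as is the function name; check the source code in the repository for the authoritative parser.

```python
def load_grasp_rectangles(path):
    """Parse a Cornell-style grasp annotation file.

    Returns a list of (label, vertices) pairs, where vertices is a list
    of four (x, y) tuples in counter-clockwise order. The first two
    vertices define the line giving the orientation of the gripper
    plate. `label` is None for single-object scenes.
    """
    grasps = []
    label = None
    vertices = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            parts = line.split()
            try:
                x, y = float(parts[0]), float(parts[1])
            except (ValueError, IndexError):
                # Assumed: in cluttered scenes, an object-name line
                # precedes the four vertex lines of each handle.
                label = line
                continue
            vertices.append((x, y))
            if len(vertices) == 4:
                grasps.append((label, vertices))
                vertices = []
    return grasps
```

Since the first two vertices define the gripper-plate line, the grasp angle can be recovered as, e.g., `math.atan2(y2 - y1, x2 - x1)` from the first two parsed vertices.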
Cluttered Scene-
Single Object-