Autonomous Block Building Robotic Arm
Project Aim
To construct a 3-degree-of-freedom manipulator that can autonomously build patterns out of a random arrangement of blocks on a given surface. Members of ISTE Charge developed the image processing software that detects the presence of a block and outputs its location in pixel coordinates. Members of ISTE Clutch worked on the design and manufacture of the robot, along with its controls and kinematics (position analysis).
Members
- Mrutyunjay
- Nikhil
- Praveen
- Hassan
- Shreyas
- Varun
Work Done
Like any robotics-based project, this project involved an amalgamation of software and hardware. For the entirety of our time on campus, the team worked on both in parallel. The code the project runs on was completed over the past few months, but a few elements of the manipulator were still left to be manufactured, namely the revolving base and the gripper, and tests integrating our code with the robot were yet to be done. For the purposes of this virtual expo, we describe the progress we have made on the project so far.
The project involved three phases:
1. The design of the manipulator
Our robotic arm was designed with 3 degrees of freedom: a revolving base, a shoulder joint and an elbow joint, combined with a claw-like end effector. The geometry was chosen so that the load on each joint stays within the maximum torque rating of its servo (15 kg-cm); a rough worst-case check is sketched below. The links were manufactured by laser cutting an acrylic sheet. The entire setup is mounted on a revolving base consisting of a bearing mounted on a 3D-printed cylindrical enclosure.
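As an illustration of that check, the sketch below estimates the worst-case static torque at the shoulder with the arm fully extended horizontally; all link lengths and masses are assumed values, not measurements from the final build.

```python
# Rough static-torque check for the shoulder servo (worst case: arm fully
# extended horizontally). All dimensions and masses are illustrative assumptions.
g = 9.81  # m/s^2

link1_len = 0.12      # shoulder-to-elbow link length (m), assumed
link2_len = 0.12      # elbow-to-gripper link length (m), assumed
link_mass = 0.06      # mass of each acrylic link (kg), assumed
payload_mass = 0.05   # gripper + block mass (kg), assumed

# Torque about the shoulder = sum of (weight x horizontal moment arm)
torque_nm = (
    link_mass * g * (link1_len / 2)                 # first link, CoM at its midpoint
    + link_mass * g * (link1_len + link2_len / 2)   # second link
    + payload_mass * g * (link1_len + link2_len)    # payload at the tip
)

torque_kgcm = torque_nm / g * 100  # convert N*m -> kgf-cm
print(f"worst-case shoulder torque ~ {torque_kgcm:.1f} kg-cm (servo limit: 15 kg-cm)")
```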
2. The image processing to identify the blocks
Visual feedback is provided by image processing done with OpenCV in Python. To begin with, we feed the system two images: "initial" (what the construction site looks like before any blocks are placed) and "prebuilt" (what the structure should look like once complete). Before and after each block is placed, images are captured and stored to record the progress, saved as "reference" and "placed" respectively. These two images are updated on every iteration.
For the first block, the reference frame and the initial frame are the same (no blocks placed). Once read, all images are resized to 400×400 and converted to grayscale to reduce computational complexity. After each block is placed, the corresponding "reference" and "placed" images are compared to locate the region of maximum pixel-value difference between the two frames, which should correspond to the newly placed block.
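A minimal sketch of this preprocessing step (the file names are placeholders for illustration):

```python
import cv2

SIZE = (400, 400)  # working resolution used throughout the pipeline

def load_gray(path):
    """Read an image, resize it to 400x400 and convert it to grayscale."""
    img = cv2.imread(path)
    img = cv2.resize(img, SIZE)
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

initial = load_gray("initial.jpg")     # empty construction site
prebuilt = load_gray("prebuilt.jpg")   # target structure
reference = initial                    # before the first block, reference == initial
```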
Filtering, thresholding and pixel erosion are performed on the difference of the two frames, giving a mask that is white only at the spot of difference (where the block is placed) and black otherwise.
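A minimal sketch of how such a difference mask can be computed with OpenCV; the blur size, threshold value and erosion kernel below are illustrative guesses, not the tuned values used in the project:

```python
import cv2
import numpy as np

def block_mask(reference, placed):
    """Return a binary mask that is white only where the new block appears."""
    diff = cv2.absdiff(reference, placed)          # per-pixel difference
    diff = cv2.GaussianBlur(diff, (5, 5), 0)       # suppress sensor noise
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=2)   # remove small speckles
    return mask
```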
Next, the edges of the placed block are extracted using the Canny edge detection algorithm. This step gets rid of any small unwanted blobs left in the difference frame. With the edges properly defined, we use corner detection to identify the points of interest in the "placed" image.
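A sketch of the edge and corner extraction step; the Canny thresholds and corner-detection parameters are assumptions:

```python
import cv2
import numpy as np

def block_corners(mask, max_corners=4):
    """Detect the edges of the newly placed block and return its corner points."""
    edges = cv2.Canny(mask, 50, 150)
    corners = cv2.goodFeaturesToTrack(edges, maxCorners=max_corners,
                                      qualityLevel=0.1, minDistance=10)
    if corners is None:
        return None
    return np.intp(corners).reshape(-1, 2)   # (x, y) pixel coordinates
```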
With the four corner points available, we crop the selected region out of the original "placed" image and use it as a template, which is matched against the "prebuilt" image. A padding of 10 pixels is added on all four sides of the template to incorporate surrounding features and make sure the block is placed in the correct order.
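A sketch of the padded crop and template match, assuming normalised cross-correlation and an illustrative acceptance threshold:

```python
import cv2

PAD = 10  # padding in pixels around the detected block

def matches_prebuilt(placed, prebuilt, corners, match_thresh=0.8):
    """Crop the placed block (plus padding) out of 'placed' and check whether
    it appears in 'prebuilt'. The threshold value is an assumed figure."""
    x0, y0 = corners.min(axis=0) - PAD
    x1, y1 = corners.max(axis=0) + PAD
    h, w = placed.shape[:2]
    template = placed[max(y0, 0):min(y1, h), max(x0, 0):min(x1, w)]

    result = cv2.matchTemplate(prebuilt, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    return max_val >= match_thresh   # True -> block placed as expected
```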
Outcomes
Case 1: Block is placed correctly
- Reference frame
- Initial mask obtained after differencing, thresholding and erosion
- Prebuilt structure
- Edges of the placed block
- Image captured after placing the block ("placed")
- Corner points of the above image, used to separate out the area of interest
- Template formed from the corner points, matched against the prebuilt structure image
- Result of the template match
Case 2: The block is placed with the wrong orientation
- No detection, as the block is placed incorrectly
3. Controlling the arm
This mainly had two parts:
1. Communication between the image processing script and the servo_control script:
This was done using the Python library ZeroMQ. First, both scripts establish a connection. The servo_control script then sends a "control signal" string (we used "$$") to ask for the coordinates of a block. Upon receiving this message, the img_processing script sends back a single block coordinate. The servo_control script then moves that block to its location and sends "$$" again to receive the next block location. This loop continues until the shape has been formed; a minimal sketch of the handshake is shown below.
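The sketch below shows the servo_control side of this handshake, assuming a REQ/REP socket pair on a local TCP port; the port number, the coordinate format and the "DONE" sentinel are assumptions for illustration:

```python
# servo_control side of the handshake (REQ socket)
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")   # port number is an assumption

while True:
    socket.send_string("$$")             # ask for the next block's coordinates
    reply = socket.recv_string()
    if reply == "DONE":                  # assumed sentinel once the shape is complete
        break
    x, y = map(float, reply.split(","))  # pixel coordinates of the next block
    # ... move the block with the arm, then loop to request the next one
```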
2. Communication between the servo_control script and the Arduino, which in turn signals the servos to move the arm:
This was done using a library called pyFirmata. The actual joint angles were calculated using equations derived from inverse kinematics.
As of now, this script can only safely (without any collisions) move a single block.
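A minimal sketch of the inverse kinematics, assuming a planar two-link arm (shoulder and elbow) on top of a yawing base; the link lengths, angle convention and the pyFirmata pin shown in the comments are assumptions, not the project's actual values:

```python
# Inverse-kinematics sketch for a 3-DOF arm (base yaw + shoulder + elbow).
from math import atan2, acos, sqrt, sin, cos, degrees

L1 = 12.0  # shoulder-to-elbow length (cm), assumed
L2 = 12.0  # elbow-to-gripper length (cm), assumed

def ik(x, y, z):
    """Return (base, shoulder, elbow) angles in degrees for a target (x, y, z)
    in cm, measured from the shoulder joint."""
    base = atan2(y, x)                   # base rotation toward the target
    r = sqrt(x * x + y * y)              # horizontal reach in the arm's plane
    d2 = r * r + z * z                   # squared distance from shoulder to target
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    elbow = acos(max(-1.0, min(1.0, cos_elbow)))   # clamp for numeric safety
    shoulder = atan2(z, r) - atan2(L2 * sin(elbow), L1 + L2 * cos(elbow))
    return degrees(base), degrees(shoulder), degrees(elbow)

# The angles can then be written to the servos via pyFirmata, e.g.
# (pin number and port are assumptions):
#   from pyfirmata import Arduino
#   board = Arduino("/dev/ttyUSB0")
#   base_servo = board.get_pin("d:9:s")   # 's' = servo mode
#   base_servo.write(base_angle)
```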
3. Additionally, a third script called config.py was used to store all global parameters, such as the robot arm lengths, pixel ratio and block size, along with the function definitions for building the desired shapes (square, plus, etc.).
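An illustrative sketch of what such a config.py could look like; all names and values here are assumptions:

```python
# config.py - illustrative sketch only; names and values are assumed.
ARM_LENGTH_1 = 12.0   # cm, shoulder-to-elbow link
ARM_LENGTH_2 = 12.0   # cm, elbow-to-gripper link
PIXEL_TO_CM = 0.05    # pixel-to-centimetre ratio of the overhead camera
BLOCK_SIZE = 3.0      # cm, edge length of a block

def square(origin, n=2):
    """Return block-centre coordinates (in cm) for an n x n square."""
    ox, oy = origin
    return [(ox + i * BLOCK_SIZE, oy + j * BLOCK_SIZE)
            for i in range(n) for j in range(n)]

def plus(origin):
    """Return block-centre coordinates for a plus-shaped pattern."""
    ox, oy = origin
    offsets = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(ox + dx * BLOCK_SIZE, oy + dy * BLOCK_SIZE) for dx, dy in offsets]
```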
Data Flow Diagram
Video demos
Code explanation
Gripper animation
Arm assembly