Imagine having to carry a heavy load up a long stairway when there is no elevator, or the elevator is out of order. Or consider an evacuation scenario in which the only way out is a staircase. With Covid-19 still looming, it has also become more important than ever to automate working systems and minimize human contact. The idea of an automated stair-climbing robot comes from inspirations like these.
A rudimentary execution plan is given in the following set of paragraphs:
The first crucial decision in the project was whether to use a wheeled or a tracked robot to climb the stairs. Following extensive discussions, we chose the tracked design owing to its better traction. The fact that a weight-lifting robot was to be built further tipped the scales in favor of tracks, since they allow a more uniform weight distribution. The design was finalized after consulting multiple resources and checking its feasibility by modeling it in SolidWorks. After repeated trials, a workable, robust model with a sturdy base structure was designed.
Rigid Body Analyses undertaken
To drive the tracks, four high-torque DC motors are connected to the wheels, mainly propelling the robot forward. These motors are powered through a stepper motor driver module, ensuring very precise movement. The high torque made possible by the high load-current capacity (8 A per motor, maximum) makes the robot capable of carrying a payload of 5-8 kg or more. For smooth movement of the robot's front and rear arms, a servo motor coupled with a gearbox is mounted at the end of each arm. Another set of servo motors controls the orientation of the Kinect sensor at the top front. Any other sensors added to the robot can be oriented by this servo platform, giving a wide viewing angle and allowing precise, variable-speed adjustment of the sensor's direction.
The robot is powered by a 12 V, 12 Ah battery, which supplies enough voltage and current to drive the robot at different speeds and torques, so power consumption is not a concern and the robot can move through its trajectory smoothly. For depth sensing, we chose the popular Kinect RGB-D camera. It consists of an IR laser emitter and an IR camera. The emitter projects a known, noisy pattern of structured IR light, which is what makes it possible to measure depth. The IR camera operates at 30 Hz and produces images of 1200x960 pixels.
These images are downsampled to 640x480 pixels at 11 bits, which provides 2048 levels of sensitivity. An MPU6050 module, consisting of a built-in gyroscope and an accelerometer, is used to measure the orientation of the bot. The gyroscope provides angular velocity and angular position, while the accelerometer measures acceleration, which is processed to obtain the Cartesian position. The main controller driving all the motors and motor drivers is a Raspberry Pi 4, chosen for its specifications (especially its RAM) and its ease of connectivity with the Kinect. It also transfers images to our local computer at a faster rate, which speeds up the entire image-processing pipeline. We have also incorporated an Arduino Uno into the circuitry; it has the vital tasks of controlling the load platform and extracting orientation information from the MPU6050. This orientation information can be used to distinguish flat ground from stairs and to realign the robot if it deviates from its path.
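As an illustration of the orientation check described above, here is a minimal Python sketch of the underlying math. The function names and the 15-degree threshold are our own assumptions for illustration; only the scale factor of 16384 LSB/g (the MPU6050's default ±2g accelerometer range) comes from the part's datasheet.

```python
import math

ACCEL_SCALE = 16384.0  # LSB per g at the MPU6050's default +/-2g range

def pitch_from_accel(raw_ax: int, raw_ay: int, raw_az: int) -> float:
    """Return the pitch angle in degrees from raw accelerometer counts.

    On flat ground the gravity vector lies along the z axis, so the pitch
    is near zero; on a staircase the sustained pitch roughly matches the
    stair incline, which lets the controller switch climbing modes.
    """
    ax = raw_ax / ACCEL_SCALE
    ay = raw_ay / ACCEL_SCALE
    az = raw_az / ACCEL_SCALE
    return math.degrees(math.atan2(-ax, math.hypot(ay, az)))

def is_on_stairs(pitch_deg: float, threshold_deg: float = 15.0) -> bool:
    """Crude flat-ground-vs-stairs test on the pitch estimate."""
    return abs(pitch_deg) >= threshold_deg
```

In practice the accelerometer reading would be low-pass filtered (or fused with the gyroscope) before this check, since track vibration adds considerable noise.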
Algorithm
For image processing, we decided to go with a robust RGB-D camera, in our case the Kinect. The key advantage of depth imaging is that it makes it possible to measure the depth distance between stair edges. Coupled with a few other checks, such as the number and length of the edges, this can be used to identify stairs effectively. Against a wall or in open space, the depth distance between edges will not be consistent, but on a staircase it can be expected to equal the tread. A detailed description of the algorithm is given below.
Now comes the weight-lifting part. Initially, a pneumatic actuator was considered, but on careful scrutiny the idea fell through because of the additional compressor it required, which had no place in the sleek, ergonomic design. The structure as designed was rigid enough to bear the weight; the challenge was to keep it balanced while ascending and descending the staircase. To achieve this, the load is placed in a protective metal box that is itself inclined so that it always remains horizontal with respect to the fixed world frame. This was realized with a linear actuator that tilts the load-supporting platform so that it always stays perpendicular to the gravitational force.
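The leveling logic itself reduces to commanding the platform to tilt opposite the chassis pitch. A minimal sketch follows; the function name and the 45-degree actuator limit are illustrative assumptions, not specifications of the actual actuator.

```python
def platform_tilt_command(robot_pitch_deg: float,
                          max_tilt_deg: float = 45.0) -> float:
    """Tilt the load platform opposite the chassis pitch so the payload
    stays horizontal in the world frame, clamped to the actuator's range.
    """
    command = -robot_pitch_deg
    return max(-max_tilt_deg, min(max_tilt_deg, command))
```

The pitch fed into this command would come from the MPU6050 readings handled by the Arduino.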
To minimize the turning radius, the arms of the robot were made retractable. They are extended only during climbing; in all other cases they remain retracted. This drastically reduces the turning radius at a quarter landing. The pictures below show the arm movements in detail.
Arm Movement during Ascending Stairs
Arm Movement during Descending Stairs
All the parts mentioned are available at affordable rates both in local markets and on e-commerce websites. The chassis of the robot can be 3D printed. This was the initial basic idea; a more detailed and evolved version is given in the link shared below.
A simulation of the model was developed in Webots. One key activity during this simulation was developing two model designs with different weight distributions. This was done primarily to understand how the coefficient of friction needed to sustain the motion varied with the angle of the staircase. The simulation results were then matched against on-paper rigid-body calculations to validate the overall process.
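The rigid-body condition behind those friction studies is simple to state: for the tracks to climb at constant speed without slipping, the coefficient of friction must be at least the tangent of the incline angle. A small sketch of that check, treating the staircase as an equivalent ramp (the function name is ours):

```python
import math

def min_friction_coefficient(stair_angle_deg: float) -> float:
    """Minimum coefficient of friction needed for the tracks to climb an
    incline of the given angle at constant speed without slipping.

    Force balance along the ramp gives F_friction >= m*g*sin(alpha) with
    normal force N = m*g*cos(alpha), hence mu >= tan(alpha).
    """
    return math.tan(math.radians(stair_angle_deg))
```

For a typical 30-degree staircase this gives a minimum coefficient of about 0.58, which is one reason rubber tracks (high friction against concrete) were favored over wheels.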
One of the key challenges in making the bot autonomous was to find a way to detect the staircase automatically. Since we already had information about the staircase's tread and riser, we leveraged that information to design our algorithm.
Algorithm to detect staircase:
The Kinect RGB-D camera is used to take a depth image of the staircase.
Now on this depth image, edge detection algorithms like Canny or Sobel will be used to identify horizontal edges.
Once the horizontal edges have been identified, Hough Transform will be used to convert these edges into line segments with definite start and endpoints.
Now certain conditions will be used to accurately judge whether the line segments obtained come from a staircase. Those conditions are listed below.
- The line segments obtained must be horizontal and they must be of sufficient length to qualify as edges of a staircase.
- The number of horizontal line segments must be greater than or equal to 2.
- The depth distance between each line segment must be approximately equal to the tread of the staircase.
- The depth distance of each line segment must be greater than that of the segment below it, since upper stair edges are farther from the camera.
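The checks above can be sketched in code. The snippet below assumes the horizontal segments have already been extracted from the depth image (e.g. with Canny edge detection followed by a Hough transform) and represents each one as a (row, length, depth) triple; the function name and the length/tolerance thresholds are illustrative assumptions.

```python
from typing import List, Tuple

# Each candidate is (row_in_image, length_px, depth_m) for one horizontal
# line segment already extracted via edge detection + a Hough transform.
Segment = Tuple[int, float, float]

def looks_like_staircase(segments: List[Segment],
                         tread_m: float,
                         min_length_px: float = 100.0,
                         tol_m: float = 0.05) -> bool:
    """Apply the four staircase conditions to candidate segments."""
    # Condition 1: keep only segments long enough to be stair edges.
    edges = [s for s in segments if s[1] >= min_length_px]
    # Condition 2: at least two horizontal edges are required.
    if len(edges) < 2:
        return False
    # Sort from the bottom of the image upward (larger row = lower edge).
    edges.sort(key=lambda s: -s[0])
    for lower, upper in zip(edges, edges[1:]):
        gap = upper[2] - lower[2]
        # Conditions 3 and 4: each upper edge must be farther away, by
        # approximately one tread depth.
        if gap <= 0 or abs(gap - tread_m) > tol_m:
            return False
    return True
```

Against a wall, consecutive edges share roughly the same depth, so the spacing test fails; in open space too few long horizontal edges survive the length filter.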
The robot model used in the Webots simulation above is a simplified version of the actual model designed in SolidWorks. The main reason for using a simpler model is simulation speed: complex SolidWorks geometries produce an enormous number of flat and curved surfaces, which makes the simulation computationally expensive. The simplified model has far fewer surfaces while preserving all the mechanical properties of the SolidWorks model.
References
- M. Shimakawa, R. Akutagawa, K. Kiyota, M. Nakano, "A Study on Staircase Detection for Visually Impaired Person by Machine Learning using RGB-D Images" [Image Processing]
- E. Mihankhah, A. Kalantari, E. Aboosaeedan, H. D. Taghirad, S. Ali A. Moosavian, "Autonomous Staircase Detection and Stair Climbing for a Tracked Mobile Robot using Fuzzy Controller" [Design]
- I-Hsum Li, Wei-Yen Wang, Chien-Kai Tseng, "A Kinect-sensor-based Tracked Robot for Exploring and Climbing Stairs" [Design and Image Processing]