As part of our undergraduate project, we wanted to work on something that could benefit society. Our professor, Dr. Shyam Kamal, suggested that we build a stair-climbing wheelchair. The motivation was that most existing wheelchairs cannot traverse stairs, and the few that can are costly and complicated in design. People with lower-limb disabilities are therefore highly dependent on others and cannot lead an independent life. Thus, we decided to build a low-cost, semi-autonomous stair-climbing wheelchair with a simple design, keeping in mind that it should be affordable for people in India.
Hardware Design
After a literature survey of existing stair-climbing wheelchair models, we concluded that a track-based design is the simplest and most efficient, so we used it as the basis of our hardware design. We built a three-pulley track-based design, since it reduces the size of the pulleys required compared to a two-pulley system. The unavailability of tracks with double-sided grooves was a problem for us; as an alternative, we employed double-sided timing belts in place of tracks. The limitation of these belts is that they can only be used on stairs with a pointed nose.
We also need to ensure the user's safety while traversing the stairs, which requires two things. First, the user should climb the stairs facing backwards to avoid toppling. Second, the seat inclination changes as the wheelchair climbs, so we devised a seat-inclination mechanism employing a slider and bars that keeps the seat horizontal with respect to the ground throughout the ascent.
Hardware Model
We use the Microsoft Kinect depth camera and an IMU as the main sensors on the robot. The IMU measures the seat inclination and the current angular velocity of the system, while the depth camera is used to calculate the heading (yaw) angle of the robot relative to the staircase.
Image Processing
Stair edges are the semantic features found across every staircase, so the problem of detecting and localizing a staircase in an image essentially reduces to detecting a group of lines with a similar range of slopes that represent the stair edges. However, the problem is ill-posed: detecting a fixed set of lines corresponding to a staircase in real time with a camera means building a pipeline that eliminates noise, detects lines, and narrows them down to a fixed set that possibly represents the stair edges.
The first step of the pipeline is to detect edges in the image. Edge detection is a routine computer-vision problem; however, it becomes difficult when the staircase has a uniform color distribution. While RGB stair-edge detection is not impossible, it is certainly difficult, and using a depth camera solves the problem. The fundamental property of a staircase is that each step lies at a different distance from the camera, so an RGB-D sensor like the Microsoft Kinect can capture these depth discontinuities in a grayscale depth image. For edge detection, we employ the Canny edge-detection algorithm, with the thresholds found adaptively from the vertical image gradients. Lines are then found using the Probabilistic Hough Transform, which gives us all the line segments present in the image; from each segment's endpoints we obtain a slope and a y-intercept.
We eliminate line segments whose slopes fall outside a particular range, preserving those that are horizontal or near-horizontal, and then merge line segments using spatial constraints on slope and offset. We developed a recursive algorithm that merges any two line segments satisfying the constraints until no such pair exists. Finally, we filter the remaining segments to find those belonging to the stair edges: since the staircase occupies a large portion of the image, we can eliminate other horizontal lines by thresholding the frequency of segment slopes, as the stair edges (having similar slopes) occur with the maximum frequency.
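The merging step can be sketched as follows; the tolerance values and the keep-the-extreme-endpoints rule are illustrative assumptions:

```python
def merge_segments(segments, slope_tol=0.1, offset_tol=10.0):
    """Repeatedly merge near-collinear segments (x1, y1, x2, y2).

    Two segments merge when their slopes and y-intercepts are within
    the given tolerances; the merged segment spans the extreme
    endpoints. Repeats until no pair satisfies the constraint."""
    def params(s):
        x1, y1, x2, y2 = s
        dx = (x2 - x1) or 1e-9  # guard against vertical segments
        m = (y2 - y1) / dx
        return m, y1 - m * x1   # slope, y-intercept

    segs = [tuple(map(float, s)) for s in segments]
    merged = True
    while merged:
        merged = False
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                mi, ci = params(segs[i])
                mj, cj = params(segs[j])
                if abs(mi - mj) < slope_tol and abs(ci - cj) < offset_tol:
                    xs = [segs[i][0], segs[i][2], segs[j][0], segs[j][2]]
                    ys = [segs[i][1], segs[i][3], segs[j][1], segs[j][3]]
                    lo, hi = xs.index(min(xs)), xs.index(max(xs))
                    new = (xs[lo], ys[lo], xs[hi], ys[hi])
                    segs.pop(j)
                    segs.pop(i)
                    segs.append(new)
                    merged = True
                    break
            if merged:
                break
    return segs
```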
The Pipeline for Stair Localization
Two regions on the same stair step are equidistant from the camera when the robot is perfectly aligned with the staircase, so the depth difference between them is zero in the ideal case. In the general case, however, the robot is not aligned with the staircase and has a non-zero yaw angle, which corresponds to a non-zero depth difference between the two regions. After identifying the stair edges, we take the depth difference of two such regions lying just below a particular stair edge, i.e., on the same stair step. The yaw angle can then be calculated simply from the depth difference between the two regions and their horizontal separation.
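The yaw calculation reduces to a single arctangent; the function name and the sign convention below are assumptions:

```python
import math

def yaw_from_depth(d_left, d_right, separation):
    """Estimate the yaw angle (radians) from the depth difference
    between two regions on the same stair step, separated horizontally
    by `separation` (same units as the depths).

    When the robot is aligned with the staircase, the depths are equal
    and the yaw is zero."""
    return math.atan2(d_right - d_left, separation)
```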
Slip Compensation Control
For stable traversal on a staircase, the robot must maintain a near-zero yaw angle so that the system does not skid downwards. After calculating the robot's yaw angle through image processing, we must provide torques to the left and right motors such that the system maintains a near-zero yaw angle.
In the case of tracked vehicles, the instantaneous center of rotation is not fixed, and while the vehicle rotates, friction leads to slippage between the track and the surface. Controlling the system while turning therefore becomes non-trivial, as it is difficult to estimate the slip of each track. Previous researchers have proposed methods including observer design, system modelling, and slip compensation using gyro measurements. The simplest of these is the method proposed by Daisuke Endo, based on slip estimation using a gyro sensor.
The Figure below shows the basic flow for correcting the yaw angle of the robot.
The gyro sensor provides the current angular velocity of the robot. Using this angular velocity and a constraint relation between the slips of the right and left tracks, we can estimate the slip of each track. Given a constant desired linear velocity of the system and the estimated track slips, the required torque is provided to the motors so that the system reaches the desired zero yaw angle.
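A simplified sketch of one control step is shown below. The equal-and-opposite slip constraint, the proportional yaw gain `k_yaw`, and the tread width are illustrative assumptions, not the exact formulation of Endo's method:

```python
def commanded_track_speeds(v_des, yaw, omega_meas, v_l_cmd, v_r_cmd,
                           tread=0.6, k_yaw=1.0):
    """One step of a gyro-based slip-compensation scheme (simplified).

    v_des      : desired forward speed (m/s)
    yaw        : yaw angle from image processing (rad, zero = aligned)
    omega_meas : angular velocity measured by the gyro (rad/s)
    v_l_cmd, v_r_cmd : track speeds commanded in the previous step
    """
    # Proportional term that drives the yaw error toward zero.
    omega_cmd = -k_yaw * yaw
    # Turn rate the previous commands would produce with no slip.
    omega_nom = (v_r_cmd - v_l_cmd) / tread
    # Assumed constraint: the two tracks slip equally and oppositely,
    # so the measured/nominal discrepancy splits evenly between them.
    slip = 0.5 * tread * (omega_nom - omega_meas)
    # New commands: forward speed plus the slip-compensated turn term.
    v_l = v_des - 0.5 * tread * omega_cmd - slip
    v_r = v_des + 0.5 * tread * omega_cmd + slip
    return v_l, v_r
```

On hardware, these track-speed commands would be mapped to motor torques by the low-level motor controller.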
ROS based Simulation
We export our SolidWorks model in URDF format into the Robot Operating System (ROS) using the SW2URDF plugin. Since we were working on a tracked system, the methods to model a track in the Gazebo environment include:
Adding fake wheels to the system
Using the Fast Simulation of Vehicles with Non-deformable Tracks
We initially used the fake-wheels method to simulate the tracks of our robot. In this method, multiple small fake wheels are added and move together with the same velocity; the many small wheels create more contact surface with the ground, similar to a track-based system. A major issue with this approach is that it fails on stairs, as the space between the wheels makes the robot get stuck at the stair nose. Thus we used the second approach, developed by Martin Pecka, which is based on Contact Surface Motion; the interested reader can refer here for detailed information. Gazebo 9 provides a built-in plugin for tracks using this method, but to use it in ROS we had to create a ROS wrapper to make the plugin ROS-aware.
We wrote a ROS publisher, based on the image-processing algorithm described above, that publishes the current yaw angle of the robot. To implement the control algorithm, we subscribe to this yaw angle and, along with the angular velocities obtained from the IMU, publish the required torques to the motors following the slip-compensated control.
A lot of work has already been done on visual odometry and state estimation in the ROS environment, so we employed existing packages for sensor fusion and better pose and orientation estimation of our robot. For visual odometry we use the ROS wrapper Fovis, which has two packages, libfovis and fovis_ros. For the Extended Kalman Filter, we fuse measurements from the IMU and visual odometry using the robot_pose_ekf package. The visual odometry and EKF paths are plotted using the Odom_to_trajectory package and visualized in Rviz.
Future Work
We find that current visual odometry pipelines available in ROS fail to perform state estimation on stairs, as the feature-extraction methods (SIFT, FAST) struggle to find regions of interest in RGB images of stairs. Hence, we wish to utilize the image-processing algorithm described above to detect predefined semantic features of stairs and use them for calculating the robot's state.
We further wish to employ a Robust Integrator approach to calculate the pose from the angular velocity provided by the IMU.
We also aim to build on the existing work so that our robot can efficiently traverse curved stairs in the ROS environment, and then test it on the developed hardware model.