Robot kinesthetic guidance
Video link: Cybathlon project
Git code: Git link (ROS2 Humble, C++, Ubuntu 22.04)
The Cybathlon-VIS is a scientific and technological competition in which people with disabilities compete in events, each accompanied by a team of scientists, demonstrating what technology can contribute for people with disabilities. The next edition of the competition will be held in 2024. Our project is to develop a technology that meets the objectives of the competition.
At this stage, we already have the design of our robot, and we are working on controlling it using motors, a stereo camera, and various other sensors. This is a one-year project carried out by a team of four students.
This project is in collaboration with the French lab ISIR (Institut des Systèmes Intelligents et de Robotique). https://www.isir.upmc.fr/
The algorithm behind the control of our robot is based on SLAM. All of our implementations will be done using ROS2.
The goal of the project:
To kinesthetically guide a visually impaired person from a start line to a finish line, without touching obstacles or leaving the track.
The aim of this design is not to reinvent a technique for guiding the visually impaired, but rather to reproduce an existing one. To move the pilot's elbow, we create a handle for their hand that simulates the behavior of a human guide's arm.
The robot is mounted on a harness worn by the pilot, and its aim is to provide clear positional feedback by moving the pilot's arm.
The problem we had to address was how to actuate this handle. The solution we chose is a pantograph-type mechanism, which not only gives us a fan-shaped workspace, highly relevant for communicating a direction of movement, but above all lets us move the heaviest parts of the mechanism, the motors, towards the body of the pilot, who carries the device.
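To give an idea of why a pantograph (five-bar) linkage produces this fan-shaped workspace, here is a minimal forward-kinematics sketch. The link lengths, motor spacing, and angle ranges are illustrative assumptions, not our design values.

```cpp
// Rough forward-kinematics sketch of a symmetric five-bar (pantograph)
// linkage: two motors on the base line drive proximal links of length L1;
// distal links of length L2 meet at the handle. All dimensions below are
// illustrative assumptions, not the real design values.
#include <cmath>
#include <iostream>
#include <optional>

struct Point { double x, y; };

// Handle position for motor angles q1, q2 (radians, measured from the base
// line), or std::nullopt if the distal links cannot close the loop.
std::optional<Point> pantographFK(double q1, double q2,
                                  double L1 = 0.10,   // proximal link [m] (assumed)
                                  double L2 = 0.16,   // distal link  [m] (assumed)
                                  double base = 0.06) // motor spacing [m] (assumed)
{
  // Elbow positions, driven directly by the motors.
  Point e1{L1 * std::cos(q1), L1 * std::sin(q1)};
  Point e2{base + L1 * std::cos(q2), L1 * std::sin(q2)};

  // The handle is the intersection of two circles of radius L2 centred on
  // the elbows; keep the branch farther from the base (the working side).
  const double dx = e2.x - e1.x, dy = e2.y - e1.y;
  const double d = std::hypot(dx, dy);
  if (d > 2.0 * L2 || d < 1e-9) return std::nullopt;

  const double h = std::sqrt(L2 * L2 - 0.25 * d * d);
  const Point mid{e1.x + 0.5 * dx, e1.y + 0.5 * dy};
  const Point a{mid.x - h * dy / d, mid.y + h * dx / d};
  const Point b{mid.x + h * dy / d, mid.y - h * dx / d};
  return (a.y > b.y) ? a : b;
}

int main() {
  const double PI = std::acos(-1.0);
  // Holding one motor and sweeping the other makes the handle sweep the
  // fan-shaped workspace in front of the pilot.
  for (double q2 = 0.5; q2 <= 1.3; q2 += 0.2) {
    if (auto p = pantographFK(PI - 0.8, q2))
      std::cout << "q2=" << q2 << "  handle: (" << p->x << ", " << p->y << ")\n";
  }
}
```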
Workspace of the robot
We use a ZED 2 camera from Stereolabs. This is a smart stereoscopic camera equipped with an inertial measurement unit, which allows it both to build a depth map of its environment and to localize itself (position + orientation). In other words, it allows us to run SLAM.
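As a minimal sketch of how the camera's outputs are consumed on the ROS2 side, the node below subscribes to a point cloud and an odometry topic. The topic names are assumptions based on the default zed-ros2-wrapper naming and may differ from our launch configuration.

```cpp
// Minimal ROS2 node that listens to the point cloud and pose published by
// the ZED 2 wrapper. Topic names are assumed defaults, not project settings.
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <nav_msgs/msg/odometry.hpp>

class ZedListener : public rclcpp::Node {
public:
  ZedListener() : Node("zed_listener") {
    cloud_sub_ = create_subscription<sensor_msgs::msg::PointCloud2>(
        "/zed2/zed_node/point_cloud/cloud_registered", rclcpp::SensorDataQoS(),
        [this](sensor_msgs::msg::PointCloud2::ConstSharedPtr msg) {
          RCLCPP_INFO(get_logger(), "Cloud: %u x %u points", msg->width, msg->height);
        });
    odom_sub_ = create_subscription<nav_msgs::msg::Odometry>(
        "/zed2/zed_node/odom", 10,
        [this](nav_msgs::msg::Odometry::ConstSharedPtr msg) {
          const auto &p = msg->pose.pose.position;
          RCLCPP_INFO(get_logger(), "Pose: (%.2f, %.2f, %.2f)", p.x, p.y, p.z);
        });
  }

private:
  rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr cloud_sub_;
  rclcpp::Subscription<nav_msgs::msg::Odometry>::SharedPtr odom_sub_;
};

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<ZedListener>());
  rclcpp::shutdown();
}
```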
Octree transformation
OctoMap is a library for point cloud processing and 3D map building. In particular, it maintains a probabilistic representation of the environment in the form of an octree, which is also a discrete, voxel-based representation. This representation is useful for obtaining a 2D projection of the octree at the output of the algorithm and for sending an occupancy grid message, via a ROS2 topic, to the navigation stage explained below.
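The core of this representation can be illustrated independently of octomap_server. Below is a minimal sketch, with illustrative coordinates and a 5 cm resolution, of inserting a small scan into an OctoMap octree and reading back an occupancy probability.

```cpp
// Minimal standalone OctoMap sketch: insert a few range measurements into
// an octree and query the resulting (probabilistic) occupancy.
// Resolution and coordinates are illustrative values, not project settings.
#include <octomap/octomap.h>
#include <iostream>

int main() {
  octomap::OcTree tree(0.05);               // 5 cm voxels

  // A toy "scan": three hit points seen from the sensor origin.
  octomap::point3d origin(0.0f, 0.0f, 0.0f);
  octomap::Pointcloud scan;
  scan.push_back(1.0f, 0.0f, 0.3f);
  scan.push_back(1.0f, 0.2f, 0.3f);
  scan.push_back(1.2f, -0.1f, 0.3f);

  // Ray-casts from the origin: voxels along each ray become free,
  // the endpoints become occupied (probabilistically).
  tree.insertPointCloud(scan, origin);
  tree.updateInnerOccupancy();

  // Query one of the endpoints.
  if (octomap::OcTreeNode *node = tree.search(1.0, 0.0, 0.3)) {
    std::cout << "P(occupied) = " << node->getOccupancy() << "\n";
  }
}
```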
Filtering
In the octomap_server source code, we added code that first crops the cloud in all three dimensions: the camera sees much further and wider than necessary given the known dimensions of the track. We also cut out the floor and the ceiling, as they would get in the way during the 2D projection stage.
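As a sketch of what this cropping amounts to, here is a PCL CropBox pass. The numeric bounds are placeholders standing in for the known track dimensions, not our actual values.

```cpp
// Sketch of the 3D cropping step with PCL's CropBox filter. The limits are
// placeholders for illustration; the real bounds come from the known track
// dimensions. Restricting the z range also removes the floor and ceiling.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/crop_box.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
cropToTrack(const pcl::PointCloud<pcl::PointXYZ>::Ptr &input) {
  pcl::CropBox<pcl::PointXYZ> crop;
  crop.setInputCloud(input);
  // (x, y, z, 1) corners of the kept box, in the cloud's frame [m].
  crop.setMin(Eigen::Vector4f(-2.0f, -2.0f, 0.1f, 1.0f));  // z > 0.1: drop floor
  crop.setMax(Eigen::Vector4f( 2.0f,  6.0f, 1.8f, 1.0f));  // z < 1.8: drop ceiling
  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
  crop.filter(*output);
  return output;
}
```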
Next, we apply median filtering to remove points whose local density falls below a set threshold; these points are noise caused by lighting or other factors.
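The density criterion described above can be sketched with PCL's RadiusOutlierRemoval filter, used here as a stand-in for the exact filter in our code; the radius and neighbour threshold are illustrative.

```cpp
// Sketch of a density-based outlier filter with PCL. RadiusOutlierRemoval
// approximates the criterion described above: it drops points that have too
// few neighbours within a given radius. Parameter values are illustrative.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/radius_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
removeSparseNoise(const pcl::PointCloud<pcl::PointXYZ>::Ptr &input) {
  pcl::RadiusOutlierRemoval<pcl::PointXYZ> filter;
  filter.setInputCloud(input);
  filter.setRadiusSearch(0.10);          // look for neighbours within 10 cm
  filter.setMinNeighborsInRadius(5);     // keep points with at least 5 neighbours
  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
  filter.filter(*output);
  return output;
}
```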
Obtaining the occupancy grid
Once the 3D filtering is complete, octomap_server projects the octree onto a plane, producing an occupancy grid: a 2D representation in which each cell holds a value between 0 and 100, corresponding to the probability that it is occupied. The result can be seen above.
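For reference, this is roughly what such an occupancy grid message looks like when filled and published by hand. This is a minimal sketch with an assumed frame name, resolution, topic name, and toy data, not octomap_server's actual code.

```cpp
// Minimal sketch of filling and publishing a nav_msgs OccupancyGrid, the
// message type sent after the 2D projection. Frame name, resolution, topic
// name, and the flat cell array are illustrative assumptions.
#include <rclcpp/rclcpp.hpp>
#include <nav_msgs/msg/occupancy_grid.hpp>
#include <vector>

nav_msgs::msg::OccupancyGrid makeGrid(const std::vector<int8_t> &cells,
                                      unsigned width, unsigned height,
                                      double resolution) {
  nav_msgs::msg::OccupancyGrid grid;
  grid.header.frame_id = "map";          // assumed fixed frame
  grid.info.resolution = resolution;     // metres per cell
  grid.info.width = width;
  grid.info.height = height;
  grid.info.origin.position.x = 0.0;     // grid origin in the map frame
  grid.info.origin.position.y = 0.0;
  // Row-major cells: 0 = free, 100 = occupied, -1 = unknown.
  grid.data = cells;
  return grid;
}

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("grid_publisher");
  auto pub = node->create_publisher<nav_msgs::msg::OccupancyGrid>("projected_map", 1);

  // 4 x 3 toy grid: one occupied cell, one unknown cell, the rest free.
  std::vector<int8_t> cells(12, 0);
  cells[5] = 100;
  cells[7] = -1;
  pub->publish(makeGrid(cells, 4, 3, 0.05));

  rclcpp::spin_some(node);
  rclcpp::shutdown();
}
```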
We can see that the map is not yet usable by the navigation algorithm: it has no boundaries, and there are unknown zones that could pose problems for navigation.