This project focused on leveraging the multi-faceted grasping capabilities of the Barrett Hand (BH8-282). The key challenge was controlling the Barrett Hand: we could either use the hand's internal controller to execute a goal configuration for each finger, without explicit control over the trajectory, or we could stream incremental torque commands to the hand in real time by generating a velocity-profiled trajectory. Both approaches had their pros and cons; for instance, torque control is needed to execute a planned motion with the Barrett Hand, but the internal controller's profiling was much smoother. To leverage the best of both approaches, I engineered the state machine so that we could switch between the two control modes with a single controller-switching call in ROS2.
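As a rough illustration, such a mode switch can be requested through ros2_control's controller manager service. This is a minimal sketch, not the project's actual code: the controller names are hypothetical, and the request field names shown here follow recent ROS2 distributions (older ones use `start_controllers`/`stop_controllers`).

```cpp
#include <memory>
#include <controller_manager_msgs/srv/switch_controller.hpp>
#include <rclcpp/rclcpp.hpp>

// Hypothetical controller names for the two Barrett Hand control modes.
static const char kTorqueController[] = "bhand_torque_controller";
static const char kTrajectoryController[] = "bhand_trajectory_controller";

void switch_to_torque_mode(const rclcpp::Node::SharedPtr & node)
{
  auto client = node->create_client<controller_manager_msgs::srv::SwitchController>(
    "/controller_manager/switch_controller");
  client->wait_for_service();

  auto request =
    std::make_shared<controller_manager_msgs::srv::SwitchController::Request>();
  request->activate_controllers = {kTorqueController};        // stream torque commands
  request->deactivate_controllers = {kTrajectoryController};  // hand back internal profiling
  request->strictness =
    controller_manager_msgs::srv::SwitchController::Request::STRICT;

  auto future = client->async_send_request(request);
  rclcpp::spin_until_future_complete(node, future);
}
```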
This project aims to account for obstacles in the planning scene that the planner should avoid, using only sensor data and without prior knowledge of the semantics of the collision body. It involves integrating OctoMap (3D occupancy grid mapping) into the motion planning pipeline so the planner can distinguish between obstacles and known objects in the scene. The voxel grid representation shown in the pictures is constructed from a depth image stream. The OctoMap server runs a static-state binary Bayes filter to update its priors and spawns voxels in the planning scene, informing the planner about free and occupied space. A notable challenge I tackled here was tuning the padding and scale parameters to avoid over-estimating object sizes.
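For context, this tuning lives in MoveIt2's 3D sensor configuration. Below is a sketch of what such a `sensors_3d.yaml` might look like, using the parameter names of MoveIt's depth-image OctoMap updater; the topic name and all values are illustrative, not the ones used in the project.

```yaml
# Hypothetical MoveIt2 3D-sensor configuration (sensors_3d.yaml).
sensors:
  - depth_camera
depth_camera:
  sensor_plugin: occupancy_map_monitor/DepthImageOctomapUpdater
  image_topic: /camera/depth/image_raw     # depth image stream input
  near_clipping_plane_distance: 0.3
  far_clipping_plane_distance: 3.0
  shadow_threshold: 0.04
  padding_scale: 1.0       # multiplicative padding on known objects
  padding_offset: 0.03     # additive padding (m); too large over-inflates objects
  max_update_rate: 10.0
  filtered_cloud_topic: filtered_cloud
```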
This project aims to mitigate motion planning failures in cases where the object to be grasped is outside the robot's reach, where other objects obstruct it, or where perception of the scene introduces pose uncertainty. Conventional single-goal motion planning is more likely to fail in these cases or, at best, takes longer to generate a plan. In contrast, planning with multiple goals gives the planner more options to decide which goal is the most feasible to plan to. It also keeps the search tree from growing as large as it would across repeated single-goal planning attempts, saving valuable time spent on collision checking. A challenge I overcame here was to intuitively specify multiple goals and encode them as constraints in a way that is compatible with MoveIt2.
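One natural hook for this in MoveIt2 is that a `MotionPlanRequest` carries an array of `goal_constraints`, where each entry is an alternative goal the planner may satisfy. A minimal sketch, assuming a hypothetical planning group and end-effector link, with illustrative tolerances:

```cpp
#include <vector>
#include <geometry_msgs/msg/pose_stamped.hpp>
#include <moveit/kinematic_constraints/utils.h>
#include <moveit_msgs/msg/motion_plan_request.hpp>

// Encode several candidate grasp poses as alternative goals: the planner
// may satisfy ANY single entry of goal_constraints, not all of them.
moveit_msgs::msg::MotionPlanRequest make_multi_goal_request(
  const std::vector<geometry_msgs::msg::PoseStamped> & grasp_poses)
{
  moveit_msgs::msg::MotionPlanRequest req;
  req.group_name = "arm";  // hypothetical planning group name
  for (const auto & pose : grasp_poses)
  {
    // 1 cm position / ~0.06 rad orientation tolerance (illustrative values)
    req.goal_constraints.push_back(
      kinematic_constraints::constructGoalConstraints("ee_link", pose, 0.01, 0.06));
  }
  return req;
}
```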
This project tackles the problem of controlling two robot arms in coordination. The planning framework MoveIt2 allows for synchronous execution of two arms, but that forces the two arms to start and end at the same time. However, this limitation is also what guarantees that the two arms won't collide with each other, because they are both part of the same 14-DOF trajectory. This endeavour seeks to overcome the limitation by maintaining a trajectory execution queue and executing each trajectory on a first-come, first-served basis. But because the two independent 7-DOF trajectories are not time-synchronized, collision checking becomes a major issue. Hence, a notable challenge with this feature has been to collision-check the two arms dynamically, in an online fashion.
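To give a flavour of the online check this requires, here is a minimal sketch using MoveIt2's `PlanningScene` collision API against the arms' latest measured joint positions; the group names are hypothetical and the surrounding control loop is elided.

```cpp
#include <vector>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/collision_detection/collision_common.h>

// Returns true if the two arms, at their CURRENT measured joint positions,
// are in collision. Because the two queued trajectories are not
// time-synchronized, this must be evaluated online, every control tick.
bool arms_in_collision(
  const planning_scene::PlanningScenePtr & scene,
  const std::vector<double> & left_joints,
  const std::vector<double> & right_joints)
{
  moveit::core::RobotState & state = scene->getCurrentStateNonConst();
  state.setJointGroupPositions("left_arm", left_joints);   // hypothetical group names
  state.setJointGroupPositions("right_arm", right_joints);
  state.update();  // refresh link transforms before querying

  collision_detection::CollisionRequest req;
  collision_detection::CollisionResult res;
  scene->checkCollision(req, res, state);
  return res.collision;
}
```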
I developed a C++ based ROS2 hardware interface that communicates with the robot's hardware API to initialize the robot and spin up a control loop. What distinguishes this hardware interface is that it manages real-time CAN bus communication with two robot arms and two end-effectors using a robust lifecycle state machine that executes in a pre-configured manner. I also developed a Telnet interface to control the two linear actuators, giving the system an additional DOF. To run multiple high-frequency control loops in a relayed manner without mutual interference, I developed a controller-switching mechanism that saves up to 20% of CAN bus bandwidth by commanding the end-effector only when desired. This project has played a pivotal role in strengthening my C++, ROS2 and RTOS fundamentals.
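A stripped-down sketch of the general shape of such an interface under ros2_control's lifecycle model (Humble-era API): the class name is hypothetical and all CAN I/O is elided, so this is an outline of the pattern rather than the project's implementation.

```cpp
#include <vector>
#include <hardware_interface/system_interface.hpp>
#include <hardware_interface/types/hardware_interface_type_values.hpp>
#include <rclcpp/rclcpp.hpp>

// Hypothetical ros2_control SystemInterface managing two arms over CAN.
class BimanualCanSystem : public hardware_interface::SystemInterface
{
public:
  hardware_interface::CallbackReturn on_init(
    const hardware_interface::HardwareInfo & info) override
  {
    if (SystemInterface::on_init(info) != hardware_interface::CallbackReturn::SUCCESS)
      return hardware_interface::CallbackReturn::ERROR;
    positions_.resize(info_.joints.size(), 0.0);
    commands_.resize(info_.joints.size(), 0.0);
    // Elided: bring up the CAN buses for both arms here.
    return hardware_interface::CallbackReturn::SUCCESS;
  }

  std::vector<hardware_interface::StateInterface> export_state_interfaces() override
  {
    std::vector<hardware_interface::StateInterface> ifaces;
    for (size_t i = 0; i < info_.joints.size(); ++i)
      ifaces.emplace_back(info_.joints[i].name,
                          hardware_interface::HW_IF_POSITION, &positions_[i]);
    return ifaces;
  }

  std::vector<hardware_interface::CommandInterface> export_command_interfaces() override
  {
    std::vector<hardware_interface::CommandInterface> ifaces;
    for (size_t i = 0; i < info_.joints.size(); ++i)
      ifaces.emplace_back(info_.joints[i].name,
                          hardware_interface::HW_IF_EFFORT, &commands_[i]);
    return ifaces;
  }

  hardware_interface::return_type read(const rclcpp::Time &, const rclcpp::Duration &) override
  {
    // Elided: drain CAN frames and decode encoder feedback into positions_.
    return hardware_interface::return_type::OK;
  }

  hardware_interface::return_type write(const rclcpp::Time &, const rclcpp::Duration &) override
  {
    // Elided: encode commands_ into CAN frames; skip end-effector frames
    // unless its controller is active, saving bus bandwidth.
    return hardware_interface::return_type::OK;
  }

private:
  std::vector<double> positions_, commands_;
};
```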
This project was my introduction to autonomous vehicles, where I learnt the theory of path planning, state estimation, feedback and model predictive control, and perception. As part of a four-person team, my contribution was the development of a particle filter node, based on the car's kinematic model, for robot localization against a known map generated using ROS Cartographer. I was also involved in developing PID and MPC controllers for the 1:10-scale race car. My team's effort culminated in winning first prize in an indoor waypoint-following race, with all code running on an onboard NVIDIA Jetson Nano.
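For readers unfamiliar with the technique, here is a compact sketch of one predict-update-resample cycle of such a filter, assuming a simple kinematic bicycle motion model and a generic scan-likelihood function; every name and constant here is illustrative, not taken from the race code.

```cpp
#include <cmath>
#include <functional>
#include <random>
#include <utility>
#include <vector>

struct Particle { double x, y, theta, weight; };

// One predict-update-resample cycle of a particle filter localizer.
// `speed` and `steer` come from odometry; `likelihood` scores a pose
// against the latest laser scan and the known map (elided here).
void particle_filter_step(
  std::vector<Particle> & particles,
  double speed, double steer, double dt,
  const std::function<double(const Particle &)> & likelihood,
  std::mt19937 & rng)
{
  const double wheelbase = 0.33;  // 1:10-scale car, illustrative value
  std::normal_distribution<double> noise(0.0, 0.02);

  double total = 0.0;
  for (auto & p : particles)
  {
    // Predict: propagate each particle through the kinematic bicycle model.
    p.theta += (speed / wheelbase) * std::tan(steer) * dt + noise(rng);
    p.x += speed * std::cos(p.theta) * dt + noise(rng);
    p.y += speed * std::sin(p.theta) * dt + noise(rng);
    // Update: weight by how well the predicted pose explains the scan.
    p.weight = likelihood(p);
    total += p.weight;
  }

  // Resample: draw a new particle set in proportion to normalized weights.
  std::vector<double> weights;
  for (const auto & p : particles) weights.push_back(p.weight / total);
  std::discrete_distribution<size_t> pick(weights.begin(), weights.end());
  std::vector<Particle> resampled;
  for (size_t i = 0; i < particles.size(); ++i)
    resampled.push_back(particles[pick(rng)]);
  particles = std::move(resampled);
}
```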