Actuator Implementation
We implemented a one-degree-of-freedom actuator using a custom rack-and-pinion gear design, with rails constraining the rack's motion. Two vertical rails constrain the vertical travel, and two continuous servo motors convert rotational motion into linear translation. The vertical rack gear is connected to a platform that carries the claw, and one continuous servo motor controls the horizontal opening and closing of the claw to pick up and drop objects. The motors are FEETECH FS90R 360° continuous servos. The actuator is controlled by an Arduino Uno, with a separate 6 V power supply for the motors and a 9 V battery for the Arduino, and is wired on a double-sided prototyping PCB. We used the Arduino IDE to write code that calibrates the actuator and determines its home position, and the rosserial library handles the communication with ROS to integrate all of the hardware.
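As a hedged sketch of the rosserial side (not our exact firmware: the topic names actuator/lift and actuator/claw, the pin assignments, and the mirrored-servo convention are illustrative assumptions), the Arduino code could look something like this:

```cpp
#include <ros.h>
#include <std_msgs/Int16.h>
#include <Servo.h>

ros::NodeHandle nh;
Servo liftServoLeft;   // two FS90R servos drive the rack
Servo liftServoRight;
Servo clawServo;       // one FS90R servo opens/closes the claw

// FS90R continuous servos: a ~1500 us pulse stops the motor;
// shorter/longer pulses spin it in opposite directions.
void liftCallback(const std_msgs::Int16& msg) {
  int pulse = constrain(1500 + msg.data, 1000, 2000);
  liftServoLeft.writeMicroseconds(pulse);
  // Assumed mirrored mounting: the opposite servo spins the other way
  liftServoRight.writeMicroseconds(3000 - pulse);
}

void clawCallback(const std_msgs::Int16& msg) {
  clawServo.writeMicroseconds(constrain(1500 + msg.data, 1000, 2000));
}

ros::Subscriber<std_msgs::Int16> liftSub("actuator/lift", &liftCallback);
ros::Subscriber<std_msgs::Int16> clawSub("actuator/claw", &clawCallback);

void setup() {
  liftServoLeft.attach(9);   // hypothetical pin assignments
  liftServoRight.attach(10);
  clawServo.attach(11);
  nh.initNode();
  nh.subscribe(liftSub);
  nh.subscribe(clawSub);
}

void loop() {
  nh.spinOnce();
  delay(10);
}
```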
This video shows the movement of the actuator.
Code Implementation
Launch File
The launch file is stored in /src/auto_turtle/launch/turtle.launch. It configures and launches a set of nodes for the TurtleBot3, enabling SLAM, AMCL, move_base navigation, and AR marker tracking.
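A hedged sketch of what turtle.launch might look like, assuming the standard TurtleBot3 packages (turtlebot3_slam, turtlebot3_navigation) and ar_track_alvar for marker tracking; the project's actual file will differ in names and parameters:

```xml
<launch>
  <!-- SLAM for building the maze map -->
  <include file="$(find turtlebot3_slam)/launch/turtlebot3_slam.launch">
    <arg name="slam_methods" value="gmapping"/>
  </include>

  <!-- AMCL localization -->
  <include file="$(find turtlebot3_navigation)/launch/amcl.launch"/>

  <!-- move_base for path planning -->
  <include file="$(find turtlebot3_navigation)/launch/move_base.launch"/>

  <!-- AR marker tracking (marker size and frame are placeholders) -->
  <node pkg="ar_track_alvar" type="individualMarkersNoKinect"
        name="ar_track_alvar" output="screen">
    <param name="marker_size" type="double" value="5.0"/>
    <param name="output_frame" type="string" value="camera_link"/>
  </node>
</launch>
```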
PID Control Logic
A PID controller applies a correction to a controlled variable based on feedback error: u(t) = Kp·e(t) + Ki·∫e(τ)dτ + Kd·de/dt.
The proportional term P acts on the present error, the integral term I accumulates past errors, and the derivative term D predicts future error from its rate of change. Our project relies mainly on the proportional and derivative terms, i.e., a PD controller.
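To make the control step concrete, here is a minimal PD update of the kind described above; the gains and the error definition are hypothetical (the error could be, e.g., the AR tag's lateral offset from the camera center):

```cpp
// Minimal PD controller sketch (illustrative; gains and error
// definition are assumptions, not the project's actual values).
struct PDController {
    double kp, kd;      // proportional and derivative gains
    double prev_error;  // error from the previous control step

    PDController(double kp, double kd) : kp(kp), kd(kd), prev_error(0.0) {}

    // error: e.g. lateral offset of the AR tag from the image center
    // dt:    time since the last update, in seconds
    double update(double error, double dt) {
        double derivative = (error - prev_error) / dt;
        prev_error = error;
        return kp * error + kd * derivative;  // correction, e.g. angular velocity
    }
};
```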
Key Components of Code
Publishers and Subscribers
AR Detection
PID Control Logic (slows the robot down as it approaches AR tags)
Transformation between the base frame and the goal frame (see the transform-lookup sketch after this list)
Node to send goal positions on the map
move_base action interface for path planning to the goal points
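A minimal sketch of the camera-to-tag transform lookup and distance computation, assuming an ar_track_alvar-style frame name (ar_marker_0); the real node's structure may differ:

```cpp
#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "ar_distance_sketch");
    ros::NodeHandle nh;
    tf::TransformListener listener;
    ros::Rate rate(10.0);

    while (nh.ok()) {
        tf::StampedTransform transform;
        try {
            // Transform from the camera frame to the detected AR tag frame
            listener.lookupTransform("camera_link", "ar_marker_0",
                                     ros::Time(0), transform);
            // Straight-line distance from the camera to the tag,
            // which feeds the PD controller's error term
            double distance = transform.getOrigin().length();
            ROS_INFO("Distance to AR tag: %.2f m", distance);
        } catch (tf::TransformException& ex) {
            ROS_WARN("%s", ex.what());
        }
        rate.sleep();
    }
    return 0;
}
```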
How the Complete System Works
We use the lidar to avoid obstacles and TF to obtain the transformation between camera_link and the AR tag marker. From this transform we can compute how far the robot is from the AR tag, and feeding that distance into the PD controller lets the robot slow down and correct its heading so that it moves in the right direction. After detecting one AR tag, the robot rotates to search for the next tag as a clue for which way to go. This repeats until the robot has mapped the whole maze (the SLAM implementation); in our case, the maze is fully mapped after Blue-beary counts three AR tags. Afterwards, we send the robot back to its starting position through move_base, which handles the path planning within the maze. From there we can command the robot to drive to the locations of specific AR tags: we publish poses to the user_poses topic, to which the nav_detector node is subscribed, and move_base then executes the path planning to each goal point sent by nav_detector.
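For illustration, a minimal publisher that sends a goal pose to user_poses; the message type (geometry_msgs/PoseStamped), the map frame, and the coordinates are assumptions rather than the project's actual definitions:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "user_pose_publisher_sketch");
    ros::NodeHandle nh;
    ros::Publisher pub =
        nh.advertise<geometry_msgs::PoseStamped>("user_poses", 1, true);

    // Goal pose in the map frame, e.g. the location of an AR tag
    geometry_msgs::PoseStamped goal;
    goal.header.frame_id = "map";
    goal.header.stamp = ros::Time::now();
    goal.pose.position.x = 1.0;    // hypothetical coordinates
    goal.pose.position.y = 0.5;
    goal.pose.orientation.w = 1.0; // identity orientation

    pub.publish(goal);
    ros::spinOnce();
    ros::Duration(1.0).sleep();    // give the latched message time to go out
    return 0;
}
```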