RoboGrinder

RoboGrinder Team of Robomaster Competition

Thanks to our sponsors!

Photos:

Giovanni Arreaga Luyi Wang Junxian Yao Youming Qin

Yijie Bai Yuhui Xu Junjie Liang Franck Eyenga

Tenghui Zhang Yujie Chen Xingyu Lv Luke Knoble

Yi Han Zishuai Li Jingyuan Qi Matthew Foran

Jasher Grunau Zida Song Ruiping Luo

(More teammates' photos coming soon)

Contact information


William Gerhard

This is a competition-based project. We recruit students from different majors with different skills to form a team and attend the RoboMaster competition.

Here is a quick introduction to the Robomaster competition. RoboMaster is an annual robotics competition for teams of aspiring engineers to design and build next-generation robots for completing difficult tasks and hand-to-hand combat.

We build 5-7 robots every year based on each season's new competition rules. Our robots incorporate advanced techniques from several majors. For computer science, we use computer vision for the auto-aiming and shooting function, and machine learning for auto-tracking. For mechanical engineering, we design the suspension system for our chassis; the suspension provides a stable platform so the camera and weapon system can aim and shoot during high-speed movement. For electrical engineering, we learned and applied PID control to our robots to achieve precise closed-loop control.
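As an illustration of the closed-loop PID control mentioned above, here is a minimal discrete PID sketch in Python; the gains, time step, and toy motor model are hypothetical, not the team's actual tuning:

```python
# Minimal discrete PID controller (illustrative; gains are hypothetical).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """Return the control output for one time step."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a simple first-order motor model toward a target speed.
pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.01)
speed = 0.0
for _ in range(2000):                      # simulate 20 seconds
    u = pid.update(setpoint=100.0, measurement=speed)
    speed += (u - 0.1 * speed) * 0.01      # toy motor: input minus drag
print(round(speed, 1))
```

The integral term removes the steady-state offset a pure proportional controller would leave, which is why the speed settles at the setpoint instead of just below it.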

Entire Team:

8.20-9.15 Gobblerfest & recruit new members

9.16-10.22 Blueprints and brainstorming

10.23 - end of 2017 fall semester: first-generation prototype

Computer Vision Team

8.20-9.15 Hire new members

9.16-10.22 Build the environment and get familiar with OpenCV

10.23-11.23 Be able to detect, aim at, and track the armor module.

11.24 - beginning of the 2018 spring semester: integrate the code with the electrical team's work to produce a full auto-shooting module.

Electrical Team

8.20-9.15 Hire new members

9.16-10.22 Read the rules and learn how to use the STM32 board.

10.23-11.23 develop the super-capacitor

11.24 - end of winter break: fully test the super-capacitor on the infantry robot and finish the prototype.

Mechanical team

8.20-9.15 Hire new members

9.16-10.22 Read the rules and make blueprint of robots based on 2018 rules.

10.23-11.23 Build and test the collecting module and chassis for the hero robot.

11.24 - end of winter break: finish the entire hero robot prototype.

• Hands-on experience

• Coding and embedded systems

• Circuit board design

• Power management

• AutoCAD modeling

• Simulation with Inventor

• Getting started with Arduino

• Computer vision

• Machine learning

• Timeline management

• Media operation

-----Updates!

2018.04.22

ME team Updates

The hero team and infantry team spent the whole day in the lab finishing the assembly of their robots.

We worked for 14 hours, until 2 a.m. Thank you, everyone, for your contributions.

https://sites.google.com/a/vt.edu/amp_lab/projects/robogrinder/IMG_2610.JPG

We also finished the suspension system of the infantry robot. The performance is as good as we expected. The next step is to replace the acrylic boards with metal parts and find shocks with metal ends on both sides to ensure durability.

2018.04.22

RoboGrinder Computer Vision Apr 22 Update

New Features

    • Being able to identify multiple enemy armor units

    • Being able to select best target for attacking

    • Being able to communicate with lower level computer

    • Being able to auto adjust threshold for color recognition

    • Support Red/Blue recognition

Solution

    1. To support identifying multiple armor units, we created a class called ComfirmationAlgo. Its basic data structure is a list of ArmorUnit objects, each holding basic coordinates and an update status. To add an armor unit, it has to go through the registration process: unknown coordinates are registered as new armor units, while preexisting armor units are updated according to the new coordinates passed in. Armor units that have not been updated are removed. Client code can acquire the current list of armor units.

    2. Best-target selection rule: the closest one. From the valid armor list, select the unit that appears closest to the camera and change its status to "locked-on"; a locked-on target is tracked until it becomes untraceable. Its coordinates are sent to the lower-level machine for tracking and attacking.

    3. Multi-process communication (a pipeline) is used to read and write UART signals from and to the lower-level machine. The main process launches the UART communication executable and communicates with it, which prevents the UART port jams that used to happen regularly.

    4. The auto-adjustment method serves as calibration when the machine is on the field. If it does not see objects well, the calibration function can be triggered manually to adjust the exposure and threshold range for better performance.

    5. The program now fully supports both colors for armor unit tracking, using a different filtering model for each color. It is ready to deploy.
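The registration and target-selection steps above can be sketched as follows; the class and field names are simplified stand-ins for the actual ComfirmationAlgo implementation, and the match radius is an assumed parameter:

```python
# Sketch of armor-unit registration and closest-target selection.
# Names and the MATCH_RADIUS value are illustrative assumptions.
import math

class ArmorUnit:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.updated = True
        self.locked_on = False

class ConfirmationAlgo:
    MATCH_RADIUS = 30.0  # px; detections closer than this update an existing unit

    def __init__(self):
        self.units = []

    def register(self, detections):
        """Update matching units, add unknown ones, drop units not seen this frame."""
        for u in self.units:
            u.updated = False
        for (x, y, size) in detections:
            match = next((u for u in self.units
                          if math.hypot(u.x - x, u.y - y) < self.MATCH_RADIUS), None)
            if match:
                match.x, match.y, match.size = x, y, size
                match.updated = True
            else:
                self.units.append(ArmorUnit(x, y, size))
        self.units = [u for u in self.units if u.updated]

    def select_target(self):
        """Lock on to the unit that appears largest, i.e. closest to the camera."""
        if not self.units:
            return None
        target = max(self.units, key=lambda u: u.size)
        target.locked_on = True
        return target

algo = ConfirmationAlgo()
algo.register([(100.0, 100.0, 40.0), (300.0, 200.0, 25.0)])
print(algo.select_target().size)  # 40.0 -> the larger (closer) armor is locked on
```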

Issue

    1. Latency in the communication process results in tracking difficulties: fast-moving objects are hard to follow.

    2. The program frequently identifies noise as armor units, although the noise is not selected as the "locked-on" target.

    3. Inaccurate red color threshold.

    4. Better lens selection needed for attack range.

Next Objectives

    1. Fix bug #2 in 1 week

    2. Lenses ordered; test the different lenses after they arrive.

    3. Communicate with the electrical team to improve reaction time and armor tracking.

2018.04.22

Recent EE Progress

1. Layout in progress: Multiphase Boost Converter

This is a 6-28 V input, 24-28 V output multiphase boost converter establishing the connection between the EDLC and the chassis motors. The target power for this converter is 500 W peak. The architecture and compensation loop were carefully designed with marginal conditions and safety in mind. The design also optimizes for the highest efficiency, which is reflected in the component choices, topology, and PCB layout. The board footprint is only 80x40x7 mm. An additional sync port connects up to 2 modules at the same time for added performance or redundancy. Other features such as temperature readout, fault signals, and inherent short-circuit protection are available. This entire module was designed from scratch by RoboGrinder.

We will be moving the layout to 4 layers to incorporate other power management modules (such as the EDLC charger, measurement, transient protection, and high-power load switching) on the same board.
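As a rough sanity check on the numbers above, the ideal (lossless) boost relation Vout = Vin / (1 - D) gives the steady-state duty cycle, and an assumed efficiency figure gives the average input current at peak power. These are back-of-the-envelope estimates, not the converter's actual operating points:

```python
# Ideal boost-converter estimates; the 95% efficiency figure is an assumption.
def boost_duty_cycle(v_in, v_out):
    """Steady-state duty cycle from the ideal relation Vout = Vin / (1 - D)."""
    if not 0 < v_in < v_out:
        raise ValueError("boost requires 0 < Vin < Vout")
    return 1.0 - v_in / v_out

def avg_input_current(p_out, v_in, efficiency=0.95):
    """Average input current at a given output power."""
    return p_out / (efficiency * v_in)

# Worst case for this design: 6 V in, 28 V out, 500 W peak.
print(round(boost_duty_cycle(6, 28), 3))    # ~0.786
print(round(avg_input_current(500, 6), 1))  # ~87.7 A
```

The worst-case input current makes clear why the layout, component choices, and the option to parallel a second module via the sync port matter at this power level.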

2. Motion Tracking and Integration

A complete communication stack was written and deployed on both Linux and the STM32 platform. The Linux program uses inter-process communication to interface with the CV algorithm. This communication stack includes our complete custom protocol and CRC verification.
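The post does not show the protocol itself, so here is a hedged sketch of what CRC-verified framing between the Linux host and the STM32 could look like; the payload layout and the CRC-16/CCITT-FALSE polynomial used below are illustrative assumptions, not the team's actual protocol:

```python
# Illustrative CRC-verified framing; the real RoboGrinder protocol and CRC
# parameters are not public here, so CRC-16/CCITT-FALSE is used as an example.
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE: poly 0x1021, init 0xFFFF, no reflection."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def make_frame(payload: bytes) -> bytes:
    """Append a big-endian CRC-16 trailer to the payload."""
    return payload + crc16_ccitt(payload).to_bytes(2, "big")

def verify_frame(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, trailer = frame[:-2], frame[-2:]
    return crc16_ccitt(payload).to_bytes(2, "big") == trailer

frame = make_frame(b"\x01\x02target:120,45")        # hypothetical payload
print(verify_frame(frame))                          # True
print(verify_frame(frame[:-1] + bytes([frame[-1] ^ 0xFF])))  # False: corrupted
```

On the receiving side, a frame that fails the CRC check is simply dropped, which is what keeps line noise on the UART from being interpreted as a command.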

(Images shown below are protocol documentation)

To learn more about our integration process, please watch the most recent CV demo video.

3. EDLC Module and Production

Our tried-and-true EDLC module is being produced in larger batches with the help of a silkscreen printer.

4. Other Work

Our official website, robogrinder.com, was recently completely renovated.

The pneumatic launcher proof of concept was successful.

2018.04.08

Jason tried a new camera for the armor-chasing program, and it had a significant effect.

Please see the video below.

Aaron and Yipin borrowed a remote controller from another research group and tested our drone on Sunday.

It appears that there is a problem with the connection between the No. 3 ESC and the PCB.

We need Zida to diagnose it and determine how we are going to repair it.

Justin and Gio finished the coding work for the engineering robot. The engineering robot can now be controlled using the DT17 remote controller.

2018.04.07

The standard robot team went to the lab on Saturday and finished assembling the motor.

All the parts needed for the standard robot have been printed out and are ready to assemble. The first standard robot will be finished on April 15th.

2018.04.01

Jason, Franck, and other members of the computer vision team, working with Zida from the EE team, finished the armor-chasing system. See the video below.

From the video we can tell that there are still many parts that need upgrading. The camera's viewing angle is not wide enough, so it is hard to keep the target in view. Zida needs to add a PID control loop to make sure the barrel moves toward the target smoothly. The CV team is working on adding a priority algorithm: if multiple targets appear in view at the same time, it will choose the target with the biggest armor, which means the closest one.

2018.03.20

Luke, Ethan, Guanang and Peter finished the lift construction of the engineering robot.

Purchase-Spring Break

To avoid a possible infinite loop in step 2, our implementation exits the loop after a predetermined number of iterations. Furthermore, it requires that the object used for calibration be placed in the center of every image. This allows us to reduce the number of pixels that need to be visited and helps reduce the number of possible false positives. Additionally, the images can be passed through a Gaussian filter to further reduce the number of false positives, although doing so for a large list of images is significantly more time-consuming.
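The bounded loop described above can be sketched as follows; `detect` and the threshold schedule are hypothetical stand-ins for the real calibration routine:

```python
# Bounded calibration loop: relax the threshold until the centered calibration
# object is detected, but give up after a fixed number of iterations rather
# than looping forever. detect() is a hypothetical stand-in.
MAX_ITERS = 20

def calibrate(detect, initial_threshold=200, step=10):
    threshold = initial_threshold
    for _ in range(MAX_ITERS):      # hard cap avoids an infinite loop
        if detect(threshold):
            return threshold        # calibration object found at this setting
        threshold -= step           # relax the threshold and retry
    return None                     # give up: caller keeps the old settings

# Toy detector: only succeeds once the threshold drops to 150 or below.
found = calibrate(lambda t: t <= 150)
print(found)  # 150
```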

Recognition Algorithm

As outlined in the competition rules, robots have multiple targets of a certain size on their bodies. A target is composed of two parallel light-bars, colored either blue or red, with a number centered between them. In order to correctly aim the cannon on our autonomous robot, we designed an algorithm that recognizes whether or not an image contains a desired target.

Our recognition algorithm works by first filtering out the non-essential pixels (pixels that fall outside the desired HSV range) in an image. This is called thresholding, and the result is an image containing only objects of the desired color. Assuming the original image is of an enemy robot with its target visible, thresholding results in an image where only the light-bars are visible. We then use OpenCV's existing contour-finding algorithm. Finding contours within the image lets us detect the light-bars, which should now be the sole elements in the image. Once this is done, we can ascertain the rectangular region that indicates a target.

Useful Links

-Robogrinder's homepage

www.robogrinder.com

-webpage of Robomaster Competition

https://www.robomaster.com/en-US