Week 1: Robotics

For our first week, we studied robotics! This required us to use computer science, as we needed to code our robot. Our specific task was to design a robot that could lead a group of people out of a disaster situation, such as an earthquake. This meant building a system that would rely entirely on its sensors and not need human input, since requiring a human operator during a disaster would largely defeat the purpose of using the robot.

Design Process:

We started the design process with a group brainstorming session to determine what the robot should realistically be able to do. We decided to create a robot that could lead people out of a building that had collapsed because of an earthquake. This meant there could be no predetermined line to follow, because different paths could be blocked off due to rubble falling during the earthquake. After a lot of debate over how this should be accomplished, we settled on an "always turn left" approach: if you are lost in a maze and always keep your hand on the left wall, you should eventually find your way out. This is very close to what our robot has to accomplish, since finding a path out of a largely collapsed building is essentially a maze problem. This led us to our flow chart, which states that the robot should turn left if the left is not blocked; if the left is blocked, the robot should scan forward and check whether that direction is blocked. If the front is not blocked, the robot should simply move forward, but if it is, the robot should turn right and repeat this step until a clear direction is found. The robot will therefore go straight (if the left is blocked but the front is not), to the left (if the left is unblocked), or to the right (if both the front and the left are blocked). This keeps the robot always tracking along the left side.
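To make the flow chart concrete, here is a rough sketch of that decision loop as it might look on a two-sonar robot (one sensor facing left, one facing forward), which, as explained next, we were not actually able to build. The sensor names leftSonar and frontSonar are hypothetical and their configuration is omitted; the 30 cm "blocked" threshold and the motor values are the same ones used in our final code below.

    // Hypothetical two-sonar sketch of the left-hand rule (not our final code).
    // leftSonar and frontSonar are assumed sensors returning distances in cm.
    task main()
    {
        int leftCM = 0;
        int frontCM = 0;

        while(true)
        {
            leftCM = SensorValue(leftSonar);    // distance to the left wall
            frontCM = SensorValue(frontSonar);  // distance straight ahead

            if(leftCM > 30)                     // left is open: turn left, then drive forward
            {
                motor[port2] = 60;
                motor[port3] = -60;
                wait1Msec(700);
                motor[port2] = 127;
                motor[port3] = 127;
                wait1Msec(900);
            }
            else if(frontCM > 30)               // left blocked, front open: drive straight
            {
                motor[port2] = 127;
                motor[port3] = 127;
                wait1Msec(900);
            }
            else                                // left and front both blocked: turn right and re-scan
            {
                motor[port2] = -60;
                motor[port3] = 60;
                wait1Msec(700);
            }

            motor[port2] = 0;                   // stop briefly before the next scan
            motor[port3] = 0;
            wait1Msec(100);
        }
    }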

However, since we could only work with pre-constructed robots, we could not have a robot that constantly scanned both to its left and in front of it. Instead, we had the robot repeatedly stop, turn to the left, and scan, which let it get readings for both its left side and the area in front of it. If we had been working in person, adding a second sensor would have been easy, but since we could not, we had to compensate with this turning method.
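The "scan" step therefore works roughly like the sketch below: read the distance straight ahead, pivot left, read the same sonar again (which now measures the left side), and pivot back. This is a trimmed stand-alone version of the scanning portion of the full code further down; the variable names frontReading and leftReading are just for illustration.

    #pragma config(StandardModel, "SWERVEBOT")

    // Minimal sketch of the one-sensor scan used in our code.
    task main()
    {
        int frontReading = 0;
        int leftReading = 0;

        frontReading = SensorValue(sonarCM);    // facing forward: distance straight ahead

        motor[port2] = 60;                      // pivot roughly 90 degrees to the left
        motor[port3] = -60;
        wait1Msec(700);
        motor[port2] = 0;
        motor[port3] = 0;
        wait1Msec(500);

        leftReading = SensorValue(sonarCM);     // the same sonar now measures the left side

        motor[port2] = -60;                     // pivot back so the robot faces forward again
        motor[port3] = 59;
        wait1Msec(700);
        motor[port2] = 0;
        motor[port3] = 0;
    }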

Once we started coding, we began to realize that the robot would never turn exactly the same way twice. The software is designed to simulate real life, where a turn is never perfect, so the robot never makes a perfect 90-degree turn. Since the robot is constantly making 90-degree turns (to check whether either wall is blocked), this immediately caused it to end up off course. We began to address this problem by lowering the sensor threshold (the distance at which a reading counts as "blocked"), so the sensor would not register walls from as far away and the robot could better tell when a wall was close versus far. This also helped combat the imperfect turning by letting the robot know immediately when it was too close to a wall, which produced a "zig-zagging" motion. One big problem that also came from this imperfect motion was that the robot would eventually hit a wall and not be able to recover, because it would not have enough space to turn right. To fix this, whenever the robot got a "double negative" reading (both the front and the left blocked), it was programmed to back up just a little bit. During normal operation this isn't necessary, but it greatly helped prevent the robot from getting caught.
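That "double negative" recovery boils down to just a few lines. The sketch below condenses the relevant part of the full code into a single condition; frontCM and leftCM stand in for the front and left readings produced by the scan described above.

    // Sketch of the recovery step: if both readings are under the 30 cm threshold,
    // back up slightly before turning right so the robot has room to pivot.
    void recoverIfCornered(int frontCM, int leftCM)
    {
        if(frontCM < 30 && leftCM < 30)     // "double negative": front and left both blocked
        {
            motor[port2] = -60;             // back up a little to make room
            motor[port3] = -60;
            wait1Msec(100);
            motor[port2] = -60;             // then turn right; the main loop re-scans afterwards
            motor[port3] = 60;
            wait1Msec(700);
            motor[port2] = 0;
            motor[port3] = 0;
        }
    }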

Finally, after a lot of fine-tuning and problem solving, we were able to get the attached code to complete the maze.

This is the flow chart we created on our first day of brainstorming.

This was the first try. The goal of this was simply to move forward and sense something. It failed terribly.


During planning, we figured it would be best to have multiple sonar sensors. We tried to customize a robot with two sonar sensors, but when we went to test it we had to choose from preset robots, so having multiple sonar sensors turned out to be infeasible.


Our Code:

#pragma config(StandardModel, "SWERVEBOT")
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//

task main()
{
    int i = false;
    int sonarValue = 0;                         // will hold the values read in by the Sonar Sensor
    int sonarValue2 = 0;
    int sonarValue3 = 0;

    while(i == false)
    {
        motor[port2] = 127;
        motor[port3] = 130;
        wait1Msec(900);
        motor[port2] = 0;
        motor[port3] = 0;
        wait1Msec(30);
        sonarValue = SensorValue(sonarCM);      // set 'sonarValue' to the current reading of the sonar sensor

        motor[port2] = 60;                      // turn left
        motor[port3] = -60;
        wait1Msec(700);
        motor[port2] = 0;
        motor[port3] = 0;
        wait1Msec(500);
        sonarValue2 = SensorValue(sonarCM);     // gets left sonar value

        motor[port2] = -60;                     // turns back right
        motor[port3] = 59;
        wait1Msec(700);
        motor[port2] = 0;
        motor[port3] = 0;
        wait1Msec(500);

        if(sonarValue2 > 30)
        {
            motor[port2] = 60;
            motor[port3] = -60;                 // turns left towards new direction
            wait1Msec(700);
        }

        if(sonarValue < 30)                     // if front is blocked
        {
            if(sonarValue2 < 30)                // if left is blocked
            {
                motor[port2] = -60;
                motor[port3] = -60;
                wait1Msec(100);                 // move backwards if both are blocked
                motor[port2] = -60;
                motor[port3] = 60;              // turns right
                wait1Msec(700);
                sonarValue3 = SensorValue(sonarCM);
                motor[port2] = 0;
                motor[port3] = 0;
                wait1Msec(500);
                if(sonarValue3 < 30)            // if right is blocked
                {
                    motor[port2] = -60;
                    motor[port3] = 60;
                }
            }
        }
    }
}


Video Example Tests:

Here are some videos of our robot in action!

You can see the robot traversing the various mazes we created to simulate traveling down complex hallways!

As you can see, in some videos the robot's sonar sensor seems to sense beyond the wall; this can be seen, for example, at the 2:31 timestamp of the second maze video. This shows some of the limitations we are working with due to the confines of RobotC's virtual world. One idea to fix this problem would be to double up each wall, though in the real world the sonar sensor probably would not glitch through walls at all.

Furthermore, it was a struggle to obtain a perfect 90-degree turn during the sensing phase of movement, because RobotC tends to add slightly more power to one motor in order to simulate real-world imperfections. This sometimes leads to the robot drifting off at an angle, which can contribute to the sonar-wall glitch described above.

Robotics:

Robotics is simply the study of robots! Robots are machines created by humans to accomplish a huge array of tasks. Some famous examples include the Roomba at-home vacuum, the Boston Dynamics robot dog, and the Canadarm. Each of these robots is very different and can accomplish a wide range of tasks. The key thing that makes a robot a robot, however, is its use of actuators and effectors. A robot receives input (either from sensors or from humans) and uses it to change the environment: it activates its actuators (such as motors and servos), which drive its effectors (such as wheels and claws), which in turn affect the environment around it. Robotics as a field draws on a range of disciplines, such as computer science to program the robot and mechanical engineering to build and run it.

Computer Science:

Computer science is the study of computation and information. According to Peter Denning, a famous computer scientist, the basic question of computer science is "What can be automated?" In other words, how can various tasks be carried out by computers to make them more efficient? To answer this, computer scientists work with software theory, design, computer applications, and much more, experimenting with what can be done.

Our research paper:

Before writing our research paper, we searched Google using terms like "Disaster Relief Robots" and started to summarize and insert quotes from various sources so we could cite them later in our paper. After about 10-20 sources we had an idea of what our paper was going to be about, and at that point we transitioned to writing it.

Week 1 Team 5 Research Paper Sources

To write our paper, we started with an outline. From the outline, we looked back at the quotes and summaries in our research brainstorming document to pull inspiration from. If we needed more material on a particular topic, like Hurricane Katrina, we simply went back to Google to find sources. We were able to write our paper effectively in the tight time frame by having each person write a section. The main difficulty was doing our references, as online citation tools and research databases often don't offer IEEE format, only more common options like MLA and APA.

Week 1 Team 5 Research Paper

Project Evaluation


Our response video to questions posted on the discussion page of Blackboard: