Questions

A discussion of the project answering questions about uncertainties regarding class material

Do we think our robot is intelligent?

Before coding our robot to address the Wumpus World problem, this question seemed difficult to answer. After coding it, however, we noticed that the robot itself is not "learning." To define "intelligence" in the specific case of a machine: a machine is roughly intelligent when it is capable of "varying its state or action in response to varying situations or requirements" (which our robot can and does do), but also of "varying its state or action in response to past experiences" (which it does not). The robot is given certain conditions and responds accordingly. It does not truly have experiences, only the history we have told the program to store. Everything the robot does is preprogrammed. It does not decide its next moves, think, or draw on past experience. Therefore, given this definition and the code behind our robot, we do not believe it is intelligent.

Is our robot conscious?

Referring to the previous question: since we do not think our robot is intelligent, we also do not believe it is conscious. Consciousness essentially implies being aware of one's surroundings. However, the robot is not aware in the way we, as humans, consider ourselves to be aware. It is set to respond to given conditions in a way we specifically outlined, incorporating no version of awareness as it is typically defined. If the robot had any decision in the matter or could act deliberately, the notion of consciousness would be more plausible. However, it simply takes in a scenario and responds based upon how it was told to respond.

Does our robot have a mind?

Again, referring to the questions above, as they are all interrelated, we do not believe our robot has a mind. The concept of a mind is challenging to comprehend altogether, but referring to the mind in the way it is most commonly defined, it assumes one would "be aware of the world and their experiences," as well as capable of thinking and feeling. As we discussed more in-depth in the previous section, our robot is not aware in the way we typically define awareness. Additionally, it does not think. It simply responds to each circumstance based on a set of rules. Lastly, the robot does not feel. Everything involving the robot's response is entirely backed by code. Therefore, given these important qualities of a mind, our robot does not appear to possess any of them.


To extend this explanation, consider some of the bugs we encountered with our robot. For example, whenever our code did not return a move, the robot would get stuck in place with no idea what to do next, which shows that it does not make decisions or think, two important aspects of having a mind. This analysis reinforces our belief that our robot does not have a mind.

Do we feel any differently regarding the feasibility of AI now that we did this project?

After coding this project, we do feel slightly differently about the feasibility of AI. We do not believe our current algorithm is capable of thinking, being conscious, having a mind, or anything of the sort. However, while we do not believe our specific robot is intelligent, that does not imply that all variations of AI are incapable of developing some level of intelligence. Far more complex AI algorithms may still eventually reach the point where they can be considered "intelligent." AI in itself seems somewhat less feasible after learning more about this particular approach, but since other forms of AI are much more complex, its feasibility altogether cannot be ruled out.

How did your robot perform?

Our robot did not perform as well as we had hoped with the maps provided. It was especially disappointing, as our own testing had returned a fairly decent win percentage. Out of the maps provided, our robot was eliminated on the second map. However, we did notice that our robot was fully capable of solving the map directly after the second one.


Our robot's weaker performance was likely due to a few small bugs in our code. One involved the Wumpus, which is where our robot was ultimately defeated. In test runs across many other maps, however, our robot did have a decent success rate, despite failing on the second map.

How did others perform?

Other robots seemed to perform decently well, and some very well. While some of the robots were eliminated in earlier rounds, others succeeded on each of the given maps. One robot was even able to optimize its path back to the beginning after finding the gold.

What are the shortcomings of our robot? How would we improve it?

The main shortcoming of our robot was its inability to shoot the Wumpus. Before altering our code, the robot was capable of shooting and killing the Wumpus. However, after fixing other bugs, we introduced a relatively serious one involving killing the Wumpus: each time, our robot would return an invalid move, meaning it did not know what to do after it encountered a Wumpus. This was our biggest shortcoming, since the robot would get stuck every time it faced one.
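In hindsight, a defensive wrapper around the move logic would have prevented this class of failure entirely. The sketch below is illustrative, not our actual code: `choose_move` and `history` are assumed names for a move-selection function and a list of previously visited cells. The idea is that when the move logic falls through (as ours did after sensing a Wumpus), the agent backtracks or stands still rather than emitting an invalid move.

```python
def safe_next_move(choose_move, percept, history):
    """Wrap a move-selection function so it never returns an invalid move.

    choose_move: function from a percept to a move, or None if no rule fires.
    history: list of previously visited cells, most recent last.
    """
    move = choose_move(percept)
    if move is not None:
        return move
    # Fallback 1: retrace the last step instead of failing.
    if history:
        return history[-1]
    # Fallback 2: stand still as a last resort.
    return "NO_OP"
```

With this guard in place, an unhandled percept (such as our Wumpus case) would degrade into harmless backtracking instead of an elimination.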


Another shortcoming arose when the gold was guarded in the lower right corner of the map. Our move function essentially told the robot to move right when applicable, then all the way back to the left, then up, and to repeat this process (moving only to safe locations). As the robot moved, we updated our grid to find new safe locations. Therefore, even if we knew the gold was in the bottom right corner and had a safe path to reach it, our algorithm made the robot continue sweeping until it reached the top right portion of the grid without finding the gold. We were also unable to make the robot move back down the board once it had reached the top.
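The row-by-row sweep described above can be sketched roughly as follows. This is an approximation of the policy, not our 750-line implementation; `safe` (the set of cells currently believed safe), the grid dimensions, and the sweep `direction` are all assumed names. It also reproduces the two failure modes we described: it never moves down, and it returns no move when the sweep dead-ends.

```python
def next_move(pos, safe, width, height, direction):
    """One step of a sweep policy: continue along the current row,
    or climb to the next row up when the row is exhausted.

    pos: current (x, y) cell, with y increasing upward.
    safe: set of cells currently believed safe.
    direction: +1 when sweeping right, -1 when sweeping back left.
    Returns (next_cell, new_direction); next_cell is None on a dead end.
    """
    x, y = pos
    ahead = (x + direction, y)
    if 0 <= ahead[0] < width and ahead in safe:
        return ahead, direction
    up = (x, y + 1)
    if y + 1 < height and up in safe:
        return up, -direction  # reverse sweep direction on the new row
    return None, direction     # no downward case: this is where we stalled
```

Because the policy only ever climbs, gold below the current row is unreachable once passed, which matches the bottom-right-corner failure we observed.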


Overall, the biggest problem we had with our program was that it was extremely difficult to debug. The final program ended up at around 750 lines of code, and we had to step through it manually to find where errors were occurring. If we could improve it, or even start over, it would definitely be a good idea to find a more efficient way to structure the program, making debugging significantly less difficult and time-consuming. Other than that, our code was very close to working completely, though not very efficiently. With more debugging, it likely would have functioned properly in most, if not all, scenarios, since all of the functions we needed had already been written.