Reflection

How did our robot do at solving the Wumpus World?

While our team successfully built a robot and implemented movement, we were unable to program our logic system into the robot due to our lack of knowledge and experience in coding. However, we are confident that if we had been able to translate the logic into code, IA*d would have been able to solve the Wumpus World puzzle.

Shortcomings

As previously mentioned, we were unable to translate our logic system into code for our robot. This is, of course, a large shortcoming, since the goal was to create an autonomous robot that could solve the Wumpus World.

Another shortcoming is that IA*d would sometimes try to adjust its angle many times before resuming its movement.

Improvements

Some improvements that we could have made:

  • Replace the motors and wheels, as the motors are very inconsistent and the wheels constantly lose traction with the floor and start slipping

  • Clean up and simplify movement code

An improvement that we would have liked to make, but most likely could not have completed within the time constraints:

  • Translate logic into code

Philosophical Questions for IA*d

Is IA*d intelligent?

Defining intelligence as the ability to learn and apply past information to future situations, no, IA*d is not intelligent. First, IA*d is unable to learn. It can only follow a set of fixed directions and cannot adapt its actions. In addition, IA*d cannot apply knowledge from a previous test to a future test. This is reflected in the logic: the robot only responds to a list of "if-then" statements. For IA*d to be intelligent, we believe it would need to adapt and use information gained from past tests to improve its ability to solve the Wumpus World by itself.
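The fixed "if-then" behavior described above can be sketched as a simple rule table. This is only an illustration in Python; the percept names and actions here are hypothetical stand-ins, not the actual rules from our logic system.

```python
def choose_action(percept):
    """Map a single percept directly to an action; no learning, no memory.

    The robot never updates these rules, which is why it cannot improve
    across tests.
    """
    rules = {
        "breeze": "retreat",       # a pit may be ahead
        "stench": "retreat",       # the Wumpus may be ahead
        "glitter": "grab_gold",    # gold is in this square
        "none": "move_forward",    # nothing sensed, keep exploring
    }
    # Unrecognized percepts fall back to moving forward.
    return rules.get(percept, "move_forward")
```

Because the mapping is static, running the same test twice always produces the same behavior, which is exactly the limitation discussed above.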

Does IA*d have a consciousness? Does it have a mind?

First, to answer the question of whether or not IA*d has consciousness, we need to define what consciousness is. Consciousness is usually defined as the capacity for self-recognition and/or self-reflection. Applying this definition to IA*d, our robot is not conscious: it is unable to evaluate its own actions, nor can it recognize that it is itself a robot. IA*d can only follow a set of directions. For these reasons, IA*d has no consciousness.

Moving on to the second question of whether or not IA*d has a mind, we believe that it does. Defining a mind as the ability to process information, IA*d is able to process data: it takes an input from the light sensors, runs the light values through a program, and produces an output, which in this project is a specific type of movement. This is reflected in how the robot stops on a white line when the sensors detect white.
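The input-program-output cycle described above can be sketched as a short sense-and-act loop. This is a minimal Python illustration with a simulated light sensor; the threshold value and helper names are assumptions for the sketch, not our robot's actual code.

```python
WHITE_THRESHOLD = 80  # assumed: light readings at or above this count as "white"

def next_movement(light_value):
    """Turn a raw light-sensor reading into a movement command."""
    if light_value >= WHITE_THRESHOLD:
        return "stop"          # white line detected under the sensor
    return "drive_forward"     # floor is dark, keep moving

def run(readings):
    """Process a stream of sensor readings, halting at the first white line."""
    actions = []
    for value in readings:
        action = next_movement(value)
        actions.append(action)
        if action == "stop":
            break
    return actions
```

For example, the simulated readings `[30, 40, 90, 20]` produce two forward commands and then a stop at the bright reading, mirroring how the robot halts on a white line.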

Did this project change our opinions on the feasibility of AI?

Our original opinion on the feasibility of AI was that creating advanced AI is very possible, especially with the current trend of technology advancing exponentially. This project did not change that opinion, despite our struggles building our robot. We believe our experience with this project is unreflective of AI development in general, especially since our team had little to no prior coding experience, which is where the majority of our problems resided. In addition, neither how our robot functions nor how our logic works is representative of current AI technologies; in fact, both are well behind today's AI systems. For these reasons, our opinions on the feasibility of AI remain unchanged.