The performance of our Wumpus World Solver demonstrates the strengths and limitations of rule-based AI. The program performed well in environments where hazards were sparsely distributed, consistently identifying safe paths and reaching its goal without unnecessary risks. However, it also encountered challenges in more complex scenarios, where hazards were clustered or the gold was surrounded by dangers. In this section, we’ll discuss how the program performed, the challenges it faced, and how it can be improved for future iterations.
The program achieved its primary goal in most straightforward configurations. By relying on its rule-based logic, it successfully avoided hazards, retrieved the gold, and returned to the starting point when possible.
Strengths:
Reliable Hazard Avoidance: The program consistently identified and avoided known pits and the Wumpus, demonstrating strong hazard recognition.
Logical Decision-Making: Signals like “breeze” and “stench” were used effectively to evaluate adjacent cells and update the program's understanding of the environment.
Backtracking Ability: When the program encountered dead ends, it retraced its steps, allowing it to recover and find alternative paths.
Efficiency in Simpler Scenarios: In environments with fewer hazards, the program quickly found the gold and returned safely.
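The breeze/stench reasoning described above can be sketched as a simple knowledge-base update. This is a minimal illustration in Python, not the project's actual code; the `update_knowledge` helper and its representation of cells are hypothetical:

```python
# Minimal sketch of rule-based hazard inference on a 4x4 grid.
# A breeze suggests a pit in an adjacent cell; a stench suggests the
# Wumpus; no percept at all means every neighbor is provably safe.

def adjacent(cell, size=4):
    """Return the in-bounds neighbors of a (row, col) cell."""
    r, c = cell
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < size and 0 <= c + dc < size]

def update_knowledge(knowledge, cell, breeze, stench):
    """Tag neighbors as possibly dangerous or safe based on percepts."""
    for n in adjacent(cell):
        if breeze:
            knowledge.setdefault(n, set()).add("possible_pit")
        if stench:
            knowledge.setdefault(n, set()).add("possible_wumpus")
        if not breeze and not stench:
            knowledge[n] = {"safe"}  # no percept => all neighbors safe
    return knowledge

knowledge = update_knowledge({}, (0, 0), breeze=False, stench=False)
print(knowledge)  # neighbors of the start cell marked safe
```

A real solver would also intersect evidence from multiple visited cells (a neighbor is only risky if every percept source agrees), but the core rule is the same.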
Example Success Story:
In a scenario with a single Wumpus and scattered pits, the program identified a safe path, avoided the Wumpus, retrieved the gold, and successfully navigated back to the start. The decision-making was smooth and aligned with its programmed logic.
As you can see in the before video, our pathfinding was fairly inefficient when returning to the start after collecting the gold. After some optimization that made the program less explorative and more efficient, it now finds its spawn point faster without simply reversing the path it took to reach the gold.
While the program performed well in controlled environments, it struggled in more complex scenarios with higher levels of uncertainty.
Weaknesses:
Over-Cautious Behavior: The program's cautious nature sometimes prevented it from taking risks, even when a risky move might have led to the gold.
Inefficient Backtracking: In some cases, the program's backtracking added unnecessary steps, making the overall path inefficient.
Limited Probabilistic Reasoning: The program treated all potential hazards equally, lacking the ability to evaluate the likelihood of danger in ambiguous situations.
Clustered Hazards: When hazards like pits and the Wumpus were densely packed near the gold, the program struggled to find a viable path.
Example Failure:
In a scenario where the gold was surrounded by pits on three sides, the program hesitated and spent significant time backtracking before ultimately failing to find a safe route.
To address the challenges and enhance the program's performance, several improvements can be made:
Incorporate Probabilistic Reasoning: Instead of treating all hazards equally, the program could assign probabilities to unknown cells based on signals. For example, if there’s a high probability of a pit in a cell, it could explore alternative routes or take a calculated risk if necessary.
Smarter Pathfinding Algorithms: Implementing heuristic-based algorithms like A* could optimize the program's pathfinding, reducing unnecessary backtracking and improving overall efficiency.
Dynamic Exploration Strategies: The program could adopt a more balanced approach between caution and exploration. For instance, it could temporarily prioritize exploration in scenarios where no clear safe path exists.
Improved Goal Prioritization: Instead of simply avoiding all risks, the program could prioritize reaching the gold, even if it means navigating close to hazards.
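As one illustration of the probabilistic-reasoning idea above, here is a sketch that scores each unexplored frontier cell by how many of its neighbors have reported a breeze, and uses that score as a rough pit likelihood. The `pit_risk` helper is hypothetical; a real solver would use a proper probability model rather than a simple fraction:

```python
# Rough sketch: score unknown cells by the fraction of their neighbors
# that are breezy. A higher score means a pit is more likely there, so
# the solver can prefer low-score cells instead of treating all unknown
# cells as equally dangerous.

def adjacent(cell, size=4):
    """Return the in-bounds neighbors of a (row, col) cell."""
    r, c = cell
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < size and 0 <= c + dc < size]

def pit_risk(unknown_cells, breezy_cells):
    """Score each unknown cell by how many of its neighbors are breezy."""
    risk = {}
    for cell in unknown_cells:
        neighbors = adjacent(cell)
        breezy = sum(1 for n in neighbors if n in breezy_cells)
        risk[cell] = breezy / len(neighbors)
    return risk

# Two frontier cells: (2, 0) borders one breezy cell, (0, 2) borders none.
risk = pit_risk([(2, 0), (0, 2)], breezy_cells={(1, 0)})
best = min(risk, key=risk.get)  # explore the least risky cell first
print(best)
```

With scores like these, the "calculated risk" from the first improvement falls out naturally: when no zero-risk cell exists, the solver takes the lowest-risk one instead of giving up.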
Overall, the program's performance highlights the strengths of rule-based AI in controlled environments, particularly its reliability and logical decision-making. However, it also exposes the limitations of deterministic systems when faced with uncertainty or complex configurations. With enhancements like probabilistic reasoning and smarter pathfinding, the program could handle these challenges more effectively and become a more robust problem-solver.
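For the "smarter pathfinding" enhancement, a minimal A* sketch on a 4x4 grid with a Manhattan-distance heuristic looks like the following. This is an illustration of the technique, not the project's code; `blocked` stands in for cells the solver has inferred to be unsafe:

```python
import heapq

# Minimal A* on a 4x4 grid. The heuristic is the Manhattan distance to
# the goal, which never overestimates on a grid, so the first path that
# reaches the goal is a shortest one.

def astar(start, goal, blocked, size=4):
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (priority, cost, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked):
                heapq.heappush(
                    frontier,
                    (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # no safe path exists

path = astar((0, 0), (3, 3), blocked={(1, 1), (2, 2)})
print(len(path) - 1)  # number of moves on the shortest safe path
```

Because A* plans a shortest route through only the cells already known to be safe, it would replace the step-by-step backtracking that made the return trip inefficient.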
The logic displayed by our program on a 4x4 grid was very satisfying, handling most of the boards we gave it with ease—even the really tough ones Bram threw at us. There were some tricky cases where the gold was blocked by a single path guarded by a Wumpus, and the program correctly classified these as “Not Solvable.” In these situations, it would return to its starting point to signal there was no guaranteed safe way to reach the goal. While it technically could have taken a chance and tried the only available path, there was no way to know for sure if it was safe because of the nearby pits and the Wumpus. Essentially, the safest choice was to go back to the start and avoid risking it altogether.
Gold Block: Gold
Redstone Block: Wumpus
Diamond Block: Start point
Arrow Animation: The action of shooting the Wumpus.