This page draws heavily on the first two chapters of Russell and Norvig, Artificial Intelligence: A Modern Approach, Third Edition.
What is the field of artificial intelligence trying to accomplish? The authors (hereinafter R&N) frame this question along two dimensions: whether a system acts or thinks, and whether it aspires to do so humanly or rationally.
Humans are certainly intelligent, so perhaps an AI system should try to act humanly. For the foreseeable future, though, this is too difficult a goal. We will discuss it further near the end of the semester when we look at the Turing Test.
A system that thinks humanly embodies a model of the brain. The field of cognitive science explores this sort of system.
A system that thinks rationally uses only valid logic to make its decisions. While this approach is certainly useful, such a system would have trouble dealing with the real world, in which there is a great deal of data, much of it imprecise.
Following R&N, we will therefore focus on the last option, acting rationally. A system acts rationally if it "acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome." We'll revisit this in a moment.
An agent is something that acts in some environment. The agent receives a sequence of percepts from the environment through its sensors; the agent affects the environment by taking actions through its actuators. For a given task, there is a performance measure that indicates how well the agent is doing.
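To make the percept-action loop concrete, here is a minimal sketch in Python of R&N's two-square vacuum world from chapter 2. The function and string names are my own choices for illustration, not the book's pseudocode.

```python
def reflex_vacuum_agent(percept):
    """Map a percept (location, dirt status) directly to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Environment state: dirt status of squares A and B, plus agent location.
world = {"A": "Dirty", "B": "Dirty"}
location = "A"
score = 0  # performance measure: +1 per clean square per time step

for _ in range(4):
    percept = (location, world[location])   # sensors
    action = reflex_vacuum_agent(percept)   # the agent program decides
    if action == "Suck":                    # actuators affect the world
        world[location] = "Clean"
    elif action == "Right":
        location = "B"
    else:
        location = "A"
    score += sum(status == "Clean" for status in world.values())

print(score)  # 6: both squares end up clean
```

Note that the performance measure is applied by the environment, not by the agent; the agent itself only maps percepts to actions.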
A task environment can be defined in terms of a PEAS (Performance, Environment, Actuators, Sensors) description.
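For instance, R&N give a PEAS description for an automated taxi driver. Below is that description, roughly as it appears in the book, held in a small container class that is purely an illustrative device, not anything from R&N:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Illustrative container for a task environment description."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# R&N's automated-taxi example, roughly as given in the book:
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
)
```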
An environment can be characterized along several dimensions (a code sketch summarizing them follows this list):
Fully observable, partially observable, or unobservable, depending on the degree to which the agent can detect the state of the environment.
Single agent or multiagent, depending on the presence of other agents. Multiagent environments can be competitive or cooperative.
Deterministic (the same action in the same environment state always leads to the same next state) or stochastic (there is some randomness involved).
Static (the environment does not change while the agent is deciding what to do) or dynamic. If the environment itself doesn't change but the agent's performance score does (e.g., chess played with a clock, where deliberating costs time), the task environment is semidynamic.
Discrete or continuous (in the mathematical sense). A chess board is discrete because a piece is either in a given square or not. A pool table is continuous because a ball's position can vary by any amount.
Known in advance to the agent or unknown. The environment is known if the agent knows what effects its actions will have. This can be thought of as knowing the rules of the game, the laws of physics, or how the controls operate.
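As promised above, here is a sketch recording these dimensions as a small data structure. The chess and taxi judgments follow R&N's Figure 2.6; the class itself is just an illustration. The known/unknown distinction is left out because, as R&N note, it describes the agent's state of knowledge rather than the environment itself.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    observability: str   # "fully", "partially", or "unobservable"
    agents: str          # "single" or "multi"
    deterministic: bool
    dynamics: str        # "static", "dynamic", or "semidynamic"
    discrete: bool

# Chess with a clock: the whole board is visible, the opponent makes it
# multiagent, moves have certain effects, the clock makes it semidynamic,
# and the board is discrete.
chess = EnvironmentProperties("fully", "multi", True, "semidynamic", True)

# Taxi driving lands on the hard end of nearly every dimension.
taxi = EnvironmentProperties("partially", "multi", False, "dynamic", False)
```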
Returning to the automated taxi from before: its task environment is partially observable, multiagent, stochastic, dynamic, and continuous, which puts it at the hard end of nearly every dimension.
We'll look at more examples in class.
An agent is rational to the extent that it acts to maximize its expected performance measure, given external constraints. These constraints include what information the agent has perceived (its percept sequence to date, together with any prior knowledge it brought into the task environment) and what actions it is able to perform. Rationality does not demand omniscience; the agent is not responsible for information it had no way of knowing.
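As a worked illustration of maximizing expected performance under uncertainty, suppose an agent must pick between two actions whose outcomes are probabilistic. The actions and numbers below are invented purely for illustration:

```python
# Each action leads to a list of (probability, performance score) outcomes.
actions = {
    "safe_route":  [(1.0, 8)],             # always scores 8
    "risky_route": [(0.7, 10), (0.3, 2)],  # usually 10, occasionally 2
}

def expected_performance(outcomes):
    """Probability-weighted average of the performance scores."""
    return sum(p * score for p, score in outcomes)

# The rational choice maximizes expected performance:
best = max(actions, key=lambda a: expected_performance(actions[a]))
print(best)  # safe_route, since 8.0 > 0.7*10 + 0.3*2 = 7.6
```

Choosing safe_route is rational here even though risky_route sometimes scores higher, because rationality is judged on the expected value, not the best case.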
Rationality is defined purely in terms of behavior, not in terms of what is going on inside the agent. We cannot, for example, say, "This agent is not rational because it cannot learn." There are some task environments (e.g., Rock-Paper-Scissors against a truly random opponent) where a non-learning agent can be perfectly rational; some learning agents are non-rational (e.g., a robot taxi that learns to obey traffic signals but changes lanes into other cars). In other words, learning is neither necessary nor sufficient for rationality. Other internal features that are neither necessary nor sufficient are memory, thinking, having a plan, trying/wanting to do the right thing, and making decisions randomly or non-randomly.
A similar definition of rationality is used in economics.