Section 2.2. Intelligence as adaptation

Adaptation means change

By defining intelligence as an advanced form of adaptation, and contrasting "intelligent system" with "instinctive system", this theory draws the line between intelligent systems and non-intelligent systems mainly according to whether the problem-solution relation in the system changes over time.

Generally speaking, an adaptive system changes its behavior in the direction of improving its problem-solving (that is, goal-achieving) ability.

As information systems, adaptive systems and instinctive systems have fundamental differences.

  • The knowledge of an instinctive system is innate and constant, while the knowledge of an adaptive system is (at least partially) learned from its experience.
  • The practically achievable goals of an instinctive system remain the same, while an adaptive system may achieve goals which it could not achieve at a previous moment.
  • For a given goal (problem), the corresponding actions (solution) in an instinctive system are predictable and repeatable, while in an adaptive system they cannot be determined from the goal alone, because they also depend on the system's previous experience (before the goal appears) and its current experience (while the goal is being pursued).
Therefore, adaptation means change, both within a system and in its interaction with the environment. Even so, the change is not arbitrary, but determined by the system's goals and experience.
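The contrast above can be sketched in code. This is a toy illustration (all names are hypothetical, not from the text): an instinctive system is a fixed input-output mapping, while an adaptive system's answer to the very same problem depends on the experience accumulated before the problem appears.

```python
def instinctive_system(problem):
    """Instinctive: the same problem always gets the same solution."""
    return len(problem)  # a fixed, built-in rule

class AdaptiveSystem:
    """Adaptive: the solution depends on accumulated experience."""
    def __init__(self):
        self.experience = []

    def solve(self, problem):
        self.experience.append(problem)
        # the answer is shaped by everything the system has seen so far
        return len(problem) + len(self.experience)

a = AdaptiveSystem()
first = a.solve("task")   # the problem solved one way...
second = a.solve("task")  # ...and the same problem solved differently later
assert first != second
assert instinctive_system("task") == instinctive_system("task")
```

Here the problem-solution relation of `instinctive_system` is constant, while in `AdaptiveSystem` it changes over time, which is exactly where this theory draws the line.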

Adaptation vs. computation

To many people, the statement "intelligence means adaptation" may sound obviously true, and few people have argued against this position. Given this situation, it may surprise some people to learn that mainstream AI and the "computational" school in CogSci have left little space for adaptation.

A well-known presentation of this point of view is given by David Marr (1982), who identified three levels of analysis for understanding a cognitive function and then reproducing it in a computer:

  1. Computational level: The function is specified as a computational process that takes certain input and produces certain output.
  2. Algorithmic level: The computation is accomplished step-by-step by a sequence of basic operations.
  3. Implementational level: The algorithm is carried out by a hardware/software/wetware mechanism.
These three levels are exactly the main stages by which a practical problem is solved in computer science, so it is natural for many researchers to follow them when using computers to reproduce cognitive functions.
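The three levels can be made concrete with a familiar example. In this sketch (the choice of sorting is illustrative, not from Marr), the specification, the algorithm, and the running mechanism are kept deliberately distinct:

```python
# 1. Computational level: the function to compute -- "given a list,
#    produce the same elements in non-decreasing order".
# 2. Algorithmic level: one particular step-by-step procedure that
#    accomplishes that computation (insertion sort, among many options).
# 3. Implementational level: the mechanism carrying the algorithm out
#    (here, the Python interpreter on whatever hardware runs it).

def insertion_sort(items):
    """Algorithmic level: insertion sort realizing the sorting computation."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

# The computational-level specification that the algorithm must satisfy:
assert insertion_sort([3, 1, 2]) == sorted([3, 1, 2])
```

Note that the input-output relation (level 1) is fixed before any algorithm is chosen, which is precisely why a system built this way cannot be adaptive in the sense defined above.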

However, a computer system designed and built in this way is not adaptive. The system's goal is the specified computation, and its knowledge is the algorithm that achieves the goal by the system's actions (i.e., the sequence of operations). What the system can do, and how to do it, are constant, and independent of the system's experience. On the contrary, in an adaptive system there is no one-to-one mapping between the input and the output, and therefore their relation cannot be specified as "function" or "computation", in the mathematical sense of these terms.

The above conclusion does not mean that a computer system cannot be adaptive. After all, it is possible to build a computer system whose life-long behavior is fully determined by the system's initial state and life-long experience, but in which each "problem" (a section of the life-long experience) may correspond to different "solutions" (sections of the life-long behavior), depending on where the problem appears in the experience.

Capability vs. meta-capability

Besides a strong theoretical heritage, the above approach to AI also has intuitive appeal. After all, we usually judge people's level of intelligence by the problems they can solve. To this day, most work in AI aims at duplicating the human capability of solving various types of practical problems.

However, according to the previous distinction, a system built in this way is an instinctive system, because its functionality is fully determined by its design and has nothing to do with its experience. It can carry out the specified computation, but it is brittle, and has little tolerance for variations in the environment or in the system itself.

For an adaptive system, by definition its input-output relationship cannot be specified as a computation, nor can its problem-solving process follow an algorithm, because both change over time. At different moments, the same problem may be solved in different ways and get different results. Its behavior is determined not only by its initial state and innate mechanism, but also by its experience.

Of course, this does not mean that the system cannot be accurately designed or implemented in a computer. The designer of such a system still specifies computations and designs algorithms to carry them out, though these computations and algorithms are at a higher level, or a meta-level, with respect to the (object-level) problems the system will deal with itself.

For example, an adaptive system may have the ability to learn chess, though there is no chess-playing algorithm built into it. There are algorithms in the system which make the system adaptive (among other things), though they do not directly indicate predetermined paths for the system to achieve its goals. To achieve a goal, what the system can depend on is its experience, accumulated up to the current moment.

Assuming a system's practical problem-solving capability can be measured numerically, its adaptation capability can be displayed as a function showing how this value changes over time. An instinctive system corresponds to a horizontal line (since the value does not change), while a system with a constant "adaptation rate", or "learning rate", corresponds to a line with a positive slope (since its capability increases at a fixed rate). Intuitively, the slope, or derivative, of the capability function indicates the system's adaptation ability, or its intelligence.
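The slope interpretation can be sketched numerically. The capability values below are made up for illustration; the point is only that the instinctive system's curve has zero slope while the constant learner's has a fixed positive one:

```python
def adaptation_rate(capability):
    """Average change in capability per time step (the discrete slope
    of the capability-over-time curve)."""
    return (capability[-1] - capability[0]) / (len(capability) - 1)

instinctive = [5.0, 5.0, 5.0, 5.0]       # horizontal line: no learning
constant_learner = [5.0, 6.0, 7.0, 8.0]  # fixed positive slope

assert adaptation_rate(instinctive) == 0.0
assert adaptation_rate(constant_learner) == 1.0
```

On this reading, comparing two systems by their capability at a single moment (the values) and by their intelligence (the slope) can give opposite rankings, which is the point of the following paragraphs.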

For human beings, problem-solving capability and learning capability are highly correlated. Since human babies are born with very similar innate problem-solving capability, their differences in problem-solving at a later age are largely attributed to their different learning capability. Actually, this is how the Intelligence Quotient, or IQ, was originally defined: as the ratio of mental age to chronological age, so a ten-year-old child with an IQ of 120 had learned as much as a normal twelve-year-old, as shown by the child's current problem-solving capability.
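The original "ratio IQ" is simple arithmetic, and the example in the text checks out directly:

```python
def ratio_iq(mental_age, chronological_age):
    """The classical ratio IQ: mental age over chronological age, times 100."""
    return 100 * mental_age / chronological_age

# A ten-year-old performing like a typical twelve-year-old:
assert ratio_iq(12, 10) == 120
# A child whose mental age matches their chronological age:
assert ratio_iq(10, 10) == 100
```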

On the contrary, the problem-solving capability and learning capability of computer systems are largely independent of each other. There are many computer systems with very high innate problem-solving capability but little learning capability, and we can also build learning systems that cannot do much at the very beginning. Consequently, many capabilities that must be learned by human beings can be built into computers. Merely checking the problem-solving capability of a computer system at a certain moment usually cannot tell us whether a capability is built-in or learned, while in human beings this distinction is usually clear.

Therefore, seeing intelligence as problem-solving capability and seeing it as learning capability lead to very different research approaches. What Marr described is the former, while this theory takes the latter. Of course, both are useful, but they are very different.

Adaptation and learning

Though it is indeed the case that most AI systems developed so far are not adaptive, this does not mean that no adaptive system has been studied in AI. At least the study of machine learning is clearly related to adaptation. "Machine learning" covers many different approaches, so it is hard to address as one idea. Even so, there are some common themes that can be recognized and analyzed.

Most work in machine learning still agrees with Marr's three levels, though here it is not the human designers alone who specify the computation, design the algorithm, and develop the implementation. Instead, some of these jobs are done by the machines themselves.

  • Instead of expecting the human designers to fully specify a desired input-output function, supervised learning can be used to generalize given input-output samples into such a function.
  • Instead of expecting the human designers to fully design an algorithm for a given task, reinforcement learning can be used to find such an algorithm according to the reward/punishment feedback from the environment to the system's tentative actions.
  • Instead of expecting the human designers to fully develop a hardware/software to implement an algorithm carrying out a computation, various learning and adaptation techniques can be used to semi-automatically configure such a system according to given conditions.
Machine learning is often treated as "problem-solving at a meta-level", where the result is a "learning algorithm" that takes some domain knowledge as input and produces a problem-specific algorithm as output, which is then applied to solve the domain problem.
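This meta-level picture can be sketched minimally. In the toy example below (the names and the one-nearest-neighbor rule are illustrative choices, not from the text), `learn` is the meta-level procedure: it takes labeled samples as input and produces an object-level classifier as output.

```python
def learn(samples):
    """Meta-level: turn (input, label) samples into an object-level
    procedure -- here, a one-nearest-neighbor classifier."""
    def classify(x):
        # object-level: answer a concrete problem using the learned samples
        nearest = min(samples, key=lambda s: abs(s[0] - x))
        return nearest[1]
    return classify

# The learning algorithm produces a problem-specific procedure...
size_of = learn([(1, "small"), (2, "small"), (9, "big"), (10, "big")])
# ...which is then applied to solve the domain problem:
assert size_of(1.5) == "small"
assert size_of(8.0) == "big"
```

The fixed part (the algorithm that is designed in advance) is `learn`; what `size_of` does depends on the samples, which is why such a system is closer to, but not yet identical with, the adaptive systems defined above.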

Clearly, compared to systems without any learning ability, these learning systems are closer to the adaptive systems defined above. Even so, they are still too constrained by the traditional theory of computation and algorithms to cover the full range of adaptation.

According to the previous description of an adaptive system, there is no guarantee that its actions toward various goals will converge to a stable state that could be abstracted into a "computation" and an "algorithm". Instead, the adaptation process may be open-ended and never converge to any fixed mapping; that is, the system's responses remain experience-dependent and context-sensitive. Furthermore, the adaptation process itself may be too flexible to be specified as a "learning algorithm"; therefore, we can only talk about it as a process, not as a computation following a predetermined algorithm.

Experience-driven adaptation

The "change" in adaptation comes in two major ways: experience-driven and experience-independent. In the former case, a system changes its behavior according to its experience, while in the latter the changes are "random", in the sense that they usually cannot be explained or predicted from the history of the system. Both of them are forms of adaptation, since no matter where the changes come from, they are evaluated according to whether they improve the system's ability to achieve its goals, and only the "good changes" are kept.
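The "keep only the good changes" idea is essentially hill climbing, and can be sketched in a few lines. The toy goal and all names here are illustrative assumptions, not from the text:

```python
import random

def adapt(state, goal_score, steps=1000, seed=0):
    """Random (experience-independent) changes, each kept only if it
    improves goal-achievement as measured by goal_score."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = state + rng.uniform(-1.0, 1.0)  # a "random" change
        if goal_score(candidate) > goal_score(state):
            state = candidate  # only the good changes are kept
    return state

# toy goal: get as close to 5 as possible
best = adapt(0.0, lambda x: -abs(x - 5.0))
assert abs(best - 5.0) < 0.5
```

Both experience-driven and experience-independent change fit this evaluate-and-keep loop; they differ in where the candidate changes come from, not in how they are judged.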

Roughly speaking, "intelligence" corresponds to experience-dependent changes within an individual, while "evolution" corresponds to experience-independent changes within a species. Though the two have similarities, their differences are also important. For a system, the changes produced by intelligence are usually more conservative, gradual, and cautious, while the changes produced by evolution are usually more radical, abrupt, and vital. In general, we cannot say which one is better, since they are good for different situations.

A more detailed comparison between the two is left for [Special Topic: Intelligence and Evolution]; the rest of the book focuses on intelligence.

Advantage of intelligence

Though "intelligent" and "adaptive" are usually used as commendatory terms, this does not mean that such a system is always better than an instinctive system at achieving its goals.

It largely depends on the environment. If the environment never changes, or only changes in a circular or otherwise predictable way, then an instinctive system is more stable and efficient in achieving its goals, because its knowledge provides a determined way to invoke the needed actions whenever necessary. This explains why, for many tasks, conventional computer systems work better than human beings --- when a goal can be routinely achieved by carrying out a sequence of operations, it is better to build an instinctive system, following the three levels Marr outlined, than to give the task to an adaptive or intelligent system, because the flexibility of the latter can only make things worse.

If the environment changes in an unpredictable way, an instinctive system will not always be able to achieve its goals, given its fixed, and therefore outdated, knowledge about the effects of its actions in the environment. It is in such an environment that an adaptive (including intelligent) system has some chance. The system attempts to adjust its knowledge to capture the changes in the effects of its actions, so as to better achieve its goals. As long as the environment does not change too quickly or too radically, these attempts may succeed, after some failures.

It is important to remember that, by definition, an intelligent system only tries to adapt, which does not mean that it always becomes adapted well enough to achieve its goals. Since the only guidance of intelligence is the system's past experience, and future experience will surely be different, failures in prediction are inevitable.

David Hume (1748) demonstrated clearly that there is no reasonable way to establish a "Principle of the Uniformity of Nature" that guarantees the correctness of our predictions about the future, even in a probabilistic sense. Therefore, when we say that an adaptive system is "better" than an instinctive system in an unpredictable environment, it is because we can still have some hope for the former, while the latter is completely hopeless. As for an intelligent system itself, it does not follow its experience because it believes the future will be the same as, or similar to, the past, but because it is built to behave in that way. "To predict the future according to the past" is how adaptation is defined, and is therefore "meta-knowledge" in every intelligent system, even though it does not always lead to correct results when applied to each concrete situation.
