Section 2.1. Intelligence defined

Where to draw the line

With the preparation of Chapter 1, the main task of this chapter is to distinguish two types of information systems: intelligent ones and non-intelligent ones.

This distinction is crucial for AI: unless it can be made clearly, the research will be aimless, and to draw this line is equivalent to giving "intelligence" a working definition. [Special Topic: working definition]

Since a working definition should be faithful to the common usage of the concept, we should start with how the word "intelligence" is used by people. However, like most words in a natural language, "intelligence" is used with many different senses in different contexts, so to define it "as it is", even in its edited version, as in a dictionary, is still too messy to satisfy the other requirements of a good working definition. Therefore, every researcher in the field has tried to focus on a certain "essence" of the concept. Since human intelligence is the best example of intelligence known to us, it is natural that all the existing working definitions of "intelligence" generalize some aspect of human intelligence, to the extent that it can also be applied to non-human systems.

But which aspect? Within the information system framework, we can describe a system from the inside, by talking about its goals, actions, and knowledge, or from the outside, by talking about its experience and behavior. Since it is easier to evaluate the system's outside activities, and all the differences on the inside will eventually show up on the outside, whether (or how much) a system is intelligent is usually judged from the outside.

Other choices

An obvious choice is to define an "intelligent system" as a system that produces human behavior, as evaluated by the Turing Test. However, though passing the Turing Test may be a sufficient condition for being intelligent, it is clearly not a necessary condition. It would be ridiculous to use this test to judge the intelligence of an animal, a group, or an alien (when we meet one), so why should we apply it to a computer? Turing was fully aware of this issue, and he said (in his famous 1950 article): "May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection." Unfortunately, this point is often ignored, and therefore many people hold the misconception that Turing proposed his test as a (sufficient and necessary) definition of intelligence, and that it should be used to judge the success of AI.

The same problem arises with variations of the above working definition. In AI textbooks, it is common to define intelligence by human capability (though not necessarily human behavior). For example, master chess players are usually considered very intelligent, so if a computer can reach that level of capability, it should be considered intelligent, too. Though this opinion sounds natural, it suffers from several problems. First, a master-level chess playing program usually has little capability in other fields, which conflicts with our intuition that intelligence is versatile. Furthermore, why does chess playing require intelligence, while solving many other problems does not? After all, today's computer systems solve many problems better than any human can, yet we still feel that they are not intelligent. Why? What is missing in conventional computer systems?

Intelligence vs. instinct

Based on the above considerations, the working definition of intelligence accepted in this theory does not focus on the system's concrete behavior or its generalized capability, but on the relation between the system's behavior and its experience, and especially on whether (and how) the former changes as the latter extends over time. After all, when compared with the human mind, what is clearly missing in conventional computer systems is not computational power or behavioral complexity, but qualities like flexibility, versatility, and originality.

If we take each concrete goal of a system as a "problem", and the related actions of the system as a "solution" to the problem, then there are two typical types of systems.

In one type of system, the same problem always gets the same solution. The best example of this type is a conventional computer program, where the same input data is always processed in the same way, and produces the same output data. Mathematically, such a program serves as a function that maps a (valid) input into a desired output. We can find the same kind of input-output mapping in many low-level animals, where the same stimulus always leads to the same response. Since in such a system the input-output mapping is innate or inborn, I call it an "instinctive system".
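This fixed input-output mapping can be sketched in a few lines of Python. The stimulus-response table below is hypothetical, chosen only for illustration; the point is that nothing in the mapping ever changes:

```python
# A minimal sketch of an "instinctive system": a fixed, innate
# input-output mapping. The stimulus-response table is hypothetical.
REFLEXES = {
    "light": "move_away",
    "food": "approach",
    "touch": "withdraw",
}

def instinctive_system(stimulus: str) -> str:
    """The same stimulus always gets the same response; nothing is learned."""
    return REFLEXES.get(stimulus, "no_response")
```

However long the system runs, and whatever it encounters, `instinctive_system("food")` returns the same answer, which is exactly what makes it a function in the mathematical sense.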

On the contrary, in the human mind, most of the problem-solution mappings are learned, so that for the same problem, the solutions often change over time. Furthermore, this change is not arbitrary, but the result of the system's adaptation to its environment. In this process, the system has to deal with problems by taking its past experience into consideration, while working within the restriction of available knowledge and resources. I call such a system an "intelligent system".
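The contrast with the instinctive case can be made concrete with a toy sketch, in which the answer to the same problem changes as experience accumulates. All names here are illustrative, not part of the theory's formalism:

```python
# A toy sketch of a system whose problem-solution mapping is learned:
# the same problem can get different solutions at different times,
# depending on the system's experience so far.
class AdaptiveSystem:
    def __init__(self):
        self.experience = {}  # problem -> solution learned so far

    def learn(self, problem: str, solution: str) -> None:
        """Record experience that may revise future answers."""
        self.experience[problem] = solution

    def solve(self, problem: str) -> str:
        """Answer from past experience; guess when knowledge is insufficient."""
        return self.experience.get(problem, "try_something")

s = AdaptiveSystem()
before = s.solve("open_door")        # no relevant experience yet
s.learn("open_door", "turn_handle")  # experience extends over time
after = s.solve("open_door")         # same problem, different solution
```

Unlike the instinctive mapping, here `before` and `after` differ, and the difference is determined by what the system has experienced, not by the problem alone.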

Working definition of intelligence

Now we are ready to introduce the working definition of "intelligence", the central concept of this theory:
Intelligence, as the experience-driven form of adaptation, is the ability for an information system to achieve its goals with insufficient knowledge and resources.
The content of this brief definition can be further explained as follows:
  • Intelligence is a property of some information systems, which means, as explained in Chapter 1, that it can be described at an abstract level, without mentioning the underlying physical, chemical, or biological processes.
  • Intelligence is adaptation, meaning the system's solution to a given problem is not determined by the problem alone, but may be influenced by the whole experience of the system.
  • The system works with insufficient knowledge, meaning that it usually lacks the knowledge needed to provide the best solution to every problem it faces.
  • The system works with insufficient resources, meaning that it usually lacks the time to explore every possibility and the space to store all information.
In a sense, the rest of the book does nothing but justify this definition and reveal its implications.

Degree of intelligence

Though the above definition tends to draw a sharp line between intelligent systems and non-intelligent ones, it still allows intelligence to be taken as a matter of degree, as our intuition suggests.

First, we can compare the intelligence of different systems according to the definition. If everything else is equal, but one system adapts faster to changes in the environment, then it is more intelligent than the other. The same is true if it is open to more forms of input information, or is more efficient in using available resources.

Based on this kind of comparison, we can define a relative measurement of a system's intelligence: if a system has n comparable systems, and is more intelligent than m of them, then its degree of intelligence is m/n. Therefore, among comparable systems, the most intelligent one will have a degree of 1, while the least intelligent one will have a degree of 0.
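The measurement can be sketched directly from the definition. The pairwise comparison criterion below (adaptation speed) and the numbers are placeholders for illustration; the theory leaves the details of the comparison open:

```python
# Relative degree of intelligence: among n comparable systems,
# a system that is more intelligent than m of them has degree m/n.
def degree_of_intelligence(system, others, more_intelligent_than):
    n = len(others)
    m = sum(1 for other in others if more_intelligent_than(system, other))
    return m / n

# Hypothetical adaptation speeds used as the comparison criterion:
speeds = {"A": 3, "B": 1, "C": 2}
faster = lambda x, y: speeds[x] > speeds[y]

degree_of_intelligence("A", ["B", "C"], faster)  # 2/2 = 1.0
degree_of_intelligence("B", ["A", "C"], faster)  # 0/2 = 0.0
degree_of_intelligence("C", ["A", "B"], faster)  # 1/2 = 0.5
```

As the sketch shows, the measure is only defined within a group of comparable systems, which is exactly the limitation discussed next.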

Of course, this solution still cannot compare the degree of intelligence between two arbitrary systems. However, it is good enough for this theory at the current stage. Hopefully in the future we can develop a more general measurement of intelligence, based on the new knowledge coming from the progress in this field.