Is There a Future for AI Without Representation?

This paper addresses the issue of representation in Artificial Intelligence (AI) by analyzing what exactly representation is, what its characteristic features are, and whether traditional AI or the new (Brooks') AI possesses these features. Surprisingly, the paper argues on philosophical and logical grounds that neither traditional AI nor Brooks' AI uses representation. It then infers that the main difference between traditional AI and Brooks' AI is merely architectural, viz., the absence of central control in Brooks' AI. The paper also surveys some recent work that follows Brooks' philosophy of non-centralized AI, and concludes that AI may benefit more from a non-centralized architecture than from the conventional centralized one.

The paper points out that philosophers have been dismissive of AI because of what they take to be a fundamental flaw in its approach: they have argued that the symbols used in AI do not represent and are not grounded. The paper later concludes, through various arguments and examples, that this is indeed the case: the symbols in AI do not represent. However, this may not be a flaw. Philosophers and other critics of AI take it to be one because they assume that essential features of cognition, such as perception, reasoning, and planning, are based on representations. The lack of 'mental representation' in AI is therefore considered fatal for creating intelligence. The three theses assumed to characterize cognition are:

a. Cognition is the processing of information

b. Information is in the form of representations

c. The processing of representations is computational

However, as the paper suggests, this indirect dependence of cognition on representation is just an assumption, one that cognitive scientists may some day prove false.

In order to evaluate the prospects of AI without representation, the paper considers two questions:

1. What exactly does 'AI without representation' mean? Or, when can an AI system be said to involve representation?

2. What are the characteristic features of such AI (with or without representation) that we need to consider in order to evaluate its prospects?

The key point of the paper is its discussion of the definition of 'representation'. Traditional AI has said a great deal about knowledge representation but has never specified what representation actually means. The paper explains what 'representation' is and what counts as one. It distinguishes between 'icons', 'indices', and 'symbols' (from C. S. Peirce's classical theory of signs) and concludes that symbols involve a three-place relation: X represents Y for Z. Symbols, the paper says, have the requisite characteristics of 'representation'. 'Representation' has two important components: first, a representation carries information about what it represents; second, it has some function or intention. To mark the difference: information is merely the result of a causal relationship (and is therefore bound to be true), whereas a representation is information with some intention or function, which opens up the possibility of misrepresentation when that function is not properly served.
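To make this definition concrete, the following is a minimal, purely illustrative sketch (not from the paper; the field names and the fuel-gauge example are hypothetical choices) of the three-place relation, and of how a representation, unlike bare causal information, can misrepresent:

```python
from dataclasses import dataclass

@dataclass
class Representation:
    """Peirce-style symbol: X represents Y for Z (a three-place relation)."""
    vehicle: str   # X: the sign or symbol itself
    target: str    # Y: what it carries causal information about
    consumer: str  # Z: the agent for whom it functions as a sign
    function: str  # the purpose the sign is supposed to serve for Z

# Bare causal information: smoke carries information about fire simply
# because fire causes smoke; a mere causal correlation cannot be "false".
causal_information = ("smoke", "fire")

# Representation adds a consumer and a function, and with them the
# possibility of misrepresentation: the gauge is *for* the driver and is
# *supposed to* indicate the fuel level, so a stuck needle misrepresents,
# even though its position is still causally produced.
fuel_gauge = Representation(
    vehicle="needle position",
    target="fuel level",
    consumer="driver",
    function="indicate how much fuel remains",
)
```

The consumer and function fields carry the paper's point: drop them and only the causal pair remains, about which the question of truth or falsity cannot even arise.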

After defining 'representation' in this way, the paper investigates whether traditional AI involves representation. Through arguments and examples, it concludes that the representations used in AI are not representations for the AI system itself; they are representations for us, the creators and users of the AI. The paper remarks that cognitive scientists often make the same mistake.

The paper's focus is on the 'new' AI proposed by Brooks. It begins by identifying the following characteristics of Brooks' AI:

1) Layered subsumption architecture

2) Demand for embodiment

3) Demand for situatedness

4) Emergence of intelligent behavior

5) Rejection of representation

6) Rejection of central control

The paper then considers whether these characteristics can be used to judge the prospects of Brooks' AI. It argues that embodiment and situatedness are mandatory for robots, so they cannot serve as the relevant characteristics for judging the prospects. The emergence of intelligent behavior is a hoped-for outcome of Brooks' architecture rather than a defining feature. The subsumption architecture itself is a simple multilayer architecture in which each layer has a specific task and a layer's position in the hierarchy corresponds to its level of abstraction; in such an architecture, the environment is clearly not modeled or represented explicitly. Thus, the two characteristics worth focusing on are the rejection of representation and the rejection of central control.
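To illustrate how such layers compose without a central controller or a world model, here is a minimal sketch of a subsumption-style control loop. It is a schematic illustration under assumed conventions (a robot exposing sonar distance readings, two hypothetical layers named avoid and wander), not an implementation from Brooks' work:

```python
# Minimal subsumption-style controller: each layer maps raw sensor
# readings directly to a proposed action; a higher-priority layer, when
# it has something to say, subsumes (overrides) the layers beneath it.
# There is no central controller and no explicit model of the world.

def avoid(sonar):
    """Layer 0: reflexively back away from nearby obstacles."""
    if min(sonar) < 0.3:   # obstacle closer than 0.3 m
        return "reverse"
    return None            # no opinion; defer to other layers

def wander(sonar):
    """Layer 1: move about when nothing else takes over."""
    return "forward"

LAYERS = [wander, avoid]   # later entries have higher priority

def control_step(sonar):
    """One sense-act cycle: the highest layer with an output wins."""
    action = None
    for layer in LAYERS:             # lowest to highest priority
        proposal = layer(sonar)
        if proposal is not None:
            action = proposal        # higher layer subsumes lower output
    return action

print(control_step([2.0, 1.5, 0.2]))  # -> "reverse" (avoid subsumes wander)
print(control_step([2.0, 1.5, 1.0]))  # -> "forward" (wander acts alone)
```

Note that the 'decision' about what to do is nowhere localized: it falls out of the fixed priority ordering among layers, which is the sense in which such a system has no central control.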

Regarding representation, Brooks rejects the idea of 'traditional', 'central', or 'explicit' representation. The paper considers whether Brooks' proposal is to use grounded representations (as suggested in some of his works), to reject causal representation, or to reject the use of representation completely. It concludes that neither traditional AI nor Brooks' AI involves representation in the sense defined in the paper; the only difference is that Brooks' AI does not pretend to use any sort of representation, as traditional AI does.

Thus, the main distinguishing characteristic of Brooks' AI that can be used to evaluate its prospects is the lack of central control, and the question of evaluation can be rephrased as: how complex a behavior can be achieved using non-centralized control? The paper lists some capabilities that are believed to be difficult to achieve with such decentralized AI systems:

· perceptual recognition,

· fusion of information from several perceptual sources,

· the capability to plan, expect, and predict,

· the pursuit of goals,

· conditional and counterfactual thinking,

· memory,

· language use (both production and comprehension),

· awareness, consciousness, and experience.

Regarding these tasks, the paper reiterates that they are considered difficult to achieve because it is generally assumed that they depend directly (like language) or indirectly (like thinking) on representation. However, this assumption might not be true. In fact, the capability to plan and pursue goals has already been demonstrated, or at least claimed as possible, by Brooks. Various alternatives that forgo these assumptions have been tried in AI over the last decade and have shown potential, including embodiment, the extended mind, and the dissolution of the distinction between perception and action. In addition, on the cognitive science front, scientists are beginning to deny the role of representation in human cognition. Such denials include neural-network-inspired approaches and the critique of encodingism (the assumption that the mind encodes and decodes data, which invites the question: for whom?). Critics of encodingism argue that positing encoding and decoding presupposes representation; if there is no representation, there is nothing to encode or decode.

The paper concludes that only one task, that of experiencing, of conscious presence, is impossible in the absence of central control. At the same time, the paper suggests that consciousness is not an epiphenomenon: it is part of the causal cycle of actions and events, and if it is present, it does affect the decisions or actions that are taken. Therefore, consciousness should be possible with the inclusion of central control. However, conscious presence, or consciousness, is not necessary for intelligence and intelligent action.

The paper finally concludes that if we forgo the assumptions tying intelligence and cognition to representation and symbolism, then traditional (and 'new') AI has never really departed from traditional cognitive science. Under this premise, if the idea that intelligence requires central control is also given up, the future of AI looks promising.