Chinese room
An argument put forward by John Searle, intended to show that the mind is not a computer and that the Turing Test is inadequate as a test of understanding. See artificial intelligence, Turing Test, Searle.

Searle first formulated this problem in his paper 'Minds, Brains, and Programs', published in 1980. Ever since, it has been a mainstay of debate over the possibility of what Searle called 'strong artificial intelligence'. Supporters of strong artificial intelligence believe that a correctly programmed computer is not simply a simulation or model of a mind but would actually count as a mind: it understands, has cognitive states, and can think. Searle's argument (or, more precisely, thought experiment) against this position, the Chinese room argument, goes as follows:
Suppose that, many years from now, we have constructed a computer which behaves as if it understands Chinese. That is, the computer takes Chinese symbols as input, consults a large look-up table (as all computers can be described as doing), and then produces other Chinese symbols as output. Suppose further that the computer performs this task so convincingly that it easily passes the Turing Test: it convinces a human Chinese speaker that it, too, is a Chinese speaker. All the questions the human asks are responded to appropriately, so that the Chinese speaker is convinced that he or she is talking to another Chinese speaker. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.
Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese symbols, looks them up in a look-up table, and returns the Chinese symbols that are indicated by the table. Searle notes, of course, that he doesn't understand a word of Chinese. Furthermore, his lack of understanding shows, he argues, that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is - and they don't understand what they're 'saying', just as he doesn't.
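The purely formal procedure Searle describes - matching input symbols against a table and returning the paired output, with no grasp of meaning - can be sketched in a few lines of code. This is only an illustrative toy: the rules and symbols below are invented for the example, and a system that actually passed the Turing Test would need a table (or program) vastly larger than any that could be written down.

```python
# A toy Chinese room: pure symbol manipulation via a look-up table.
# The entries are invented for illustration; no real system this small
# could pass the Turing Test.
LOOKUP_TABLE = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫李明。",    # "What's your name?" -> "My name is Li Ming."
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output symbols the table pairs with the input.

    Like Searle in the room, this function attaches no meaning to the
    symbols: it only matches their shapes against the table."""
    # Default reply ("Sorry, I don't understand.") for unlisted inputs.
    return LOOKUP_TABLE.get(input_symbols, "对不起，我不明白。")

print(chinese_room("你好吗？"))
```

The point of the sketch is that nothing in the program's operation depends on what the symbols mean; whether such syntax could ever suffice for semantics is exactly what the argument disputes.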
The two most popular replies to this argument (both of which Searle (1980) considers) are the 'systems reply' and the 'robot reply'. Briefly, the systems reply grants that Searle himself doesn't understand Chinese in the thought experiment, but holds that it is perfectly correct to say that Searle plus the look-up table understands Chinese. In other words, the entire computer would understand Chinese, even though the central processor or any other individual part might not. It is the entire system that matters for attributing understanding. In response, Searle claims that if we simply imagine the person in the Chinese room memorizing the look-up table, we have produced a counterexample to this reply: the whole system is now inside the person, yet he still understands no Chinese.
The robot reply is similar in spirit. It notes that the reason we don't want to attribute understanding to the room, or to a computer as described by Searle, is that the system doesn't interact properly with the environment. This is also a reason to think the Turing Test is not adequate for attributing thinking or understanding. If, however, we fixed this problem - i.e. we put the computer in a robot body that could interact with the environment, perceive things, move around, etc. - we would then be in a position to attribute understanding properly. In reply, Searle notes that proponents of this reply have partially given up the tenet of strong AI that cognition is symbol manipulation. More seriously, he proposes that he could be inside a Chinese robot just as easily as a Chinese room, and that he still wouldn't understand Chinese.
Chris Eliasmith