The Turing Test and the Chinese Room

The Turing Test

"In his landmark paper ‘Computing Machinery [məˈʃiːnəri] and Intelligence[ɪnˈtɛlɪʤəns]’ (1950) Alan Turing proposed the ‘imitation game’, a potential experiment which could help philosophers address the question of whether machines [məˈʃiːnz] can think.

He says, imagine that you're in a room and you're facing a barrier, and behind the barrier there's a human and there's a computer. And you can ask them questions, but you don't know which one is which. You don't know which is the human and which is the computer. And your task is to work out which is the human and which is the computer just from the answers you get to the questions you ask.

Now, the questions that you can ask can be about anything, and you judge only from the answers that you get. And Turing says, when we get to the point that we cannot decide which is the human and which is the machine, then we have managed to build a machine that's capable of mentality, a machine that can think."
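
To fix ideas, here is a toy sketch of the imitation-game setup in Python. Everything in it is invented for illustration: the two answer functions just return the same canned reply, and the "judge" guesses at random, standing in for a real interrogator asking free-form questions.

import random

def human_reply(question: str) -> str:
    return "Let me think about that..."   # placeholder human answer

def machine_reply(question: str) -> str:
    return "Let me think about that..."   # placeholder machine answer

def imitation_game(questions):
    # Randomly assign the two respondents to the anonymous labels A and B,
    # so nothing about the label reveals who is who.
    answerers = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(answerers)
    hidden = dict(zip("AB", answerers))

    # The judge sees only label -> answers, never the identities.
    transcripts = {label: [fn(q) for q in questions]
                   for label, (_identity, fn) in hidden.items()}

    guess = random.choice(list(transcripts))   # stand-in for a real verdict
    return hidden[guess][0] == "human"

# Turing's criterion, roughly: if over many runs the judge does no better
# than chance (about 50% here), the machine has passed the test.
trials = 1000
accuracy = sum(imitation_game(["Can machines think?"]) for _ in range(trials)) / trials
print(f"Judge identifies the human {accuracy:.0%} of the time.")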

Three problems with the Turing test

"1. All the testing is for an intelligence that can communicate via [ˈvaɪə] ( посредством ) language. We couldn't, for instance, check for animal intelligence if those animals couldn't speak languages.

2. It's too anthropocentric, because what we're testing for is whether a machine can be mistaken for a human being. That means we're testing for human intelligence, and surely it seems very anthropocentric to think that the only intelligence worth studying is human intelligence.

3. It doesn't take into account the inner states of the machine: the test judges only outward behaviour, so a machine might pass while nothing like thought is going on inside."

John Searle’s Chinese Room thought experiment

"Searle says, imagine that you are in a room and there are two slots (щель) entering the room. One that says, I, and one that says, O. And what happens is you get symbols on bits of paper that come through the I-slot. Now, in your room, you have a book and in the book, there is a list of algorithms about what to do depending on the type of symbol you receive. So, if you receive a symbol of type A, put out a symbol of type B. So, you have a steady (постоянный) stream of symbols coming in, and you look up in your book, what you should be putting out again, and let's imagine that you just have a whole stack (груда) of these symbols available to you in the room. So, you receive a symbol, you check in your book what you should put through the output box, the O box, you pick up a symbol, and put it back through the O box. Now imagine that, unknown to you, these symbols are actually Chinese. And the person who is feeding in the symbols is a Chinese speaker who is asking you questions. So, what's happening is that you're receiving questions and you're actually giving out answers. And the rule book is such that you're actually giving out really quite coherent (связный) answers. In fact, you're realizing something like a machine that would pass Turing's test. So, the Chinese speaker outside really believes that he is conversing with another Chinese speaker. Now, the big question that Searle asks is, well, I, the person inside the box, or the person inside the room doesn't understand Chinese. So, I don't understand Chinese. All I'm doing is looking up rules in the book.

And Searle points out that this is just how computers work. Computers receive inputs, and their program is like the book: it tells them what sort of output they should give. So they run through a list of rules and see that, given a particular input, they should give a particular output.

Searle's point is that the machine can never itself be said to understand Chinese. That machine isn't thinking; that machine is just putting on a very good simulation of a thinker."
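
To make the rule-book analogy concrete, here is a minimal Python sketch of the room as a pure lookup table. The "rule book" below is an invented toy mapping of two phrases, not a real conversational program; the point is only that every step is blind symbol matching.

# The person in the room just matches incoming shapes against the book;
# no step below involves understanding what any symbol means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbol: str) -> str:
    # Look the incoming symbol up in the book and push the listed symbol
    # back out through the O slot; unknown symbols get a stock reply.
    return RULE_BOOK.get(symbol, "请再说一遍。")   # "Please say that again."

print(chinese_room("你好吗？"))   # a coherent answer, with zero comprehension

From the outside this looks like conversation; on the inside there is only a dictionary lookup, which is exactly the gap Searle is pointing at.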

SOURCES

'Introduction to Philosophy' course (The University of Edinburgh)

https://www.coursera.org/learn/philosophy