Neurohr's Spanish Room

My response to John Searle's "Chinese room" experiment. This response is also related to The Zombie Problem. (Pictures from xkcd)

#### The Experiment:

Searle's Chinese Room argued that semantics is at least as important as syntax when building a computer that can respond to spoken or written language. My experiment works similarly to Searle's. There is a man in a room, but instead of having a book of responses to a language, he actually knows the language. I took Spanish in high school, so I picked Spanish. Another man outside the room sends messages in Spanish to the man inside. The man outside is trying to determine whether there is a human or a computer inside the room (just like the Chinese Room).

The man outside comes up with a brilliant idea. He says to himself, "If there is a computer inside, it probably can't recognize hand-drawn images! I'll send it pictures and tell it to choose one!" He doodles three items on paper: a cake, a sandwich, and an apple (he's hungry). Above the pictures he writes "escoja la torta," which is Spanish for "choose the cake." He hands the paper into the room and waits. When it comes back out, the sandwich is circled. The man outside says, "Aha! Clearly there is a computer inside that room! A real person knows the difference between a cake and a sandwich! A computer would have to guess." He feels smart until the man inside walks out.

What actually happened was that the man outside was from Spain and the man inside was from Mexico. In Spain, people speak Castilian Spanish, in which the word for cake is "torta." In Mexico, they speak a different dialect, in which "torta" means "sandwich." Both men understood the task (and the semantics of the language), but there was still a division between what each man thought was supposed to happen.

#### The Meaning:

Searle was right that semantics matters, but he forgot about the nature of computer science. Before we can throw out the Turing Test in favor of a semantics-based test, we have to find a way to test semantics reliably. How do we know that a computer has an understanding of what is going on around it? For that matter, how do we know that a person does? How do I know you are human? How do you know you are human? Are you... a zombie? We cannot simply ask a person (or computer) whether they understand because, if they are a zombie (or some computational equivalent of one), they will respond correctly anyway. The right question to ask is harder to find than in a knights-and-knaves logic problem.

Things are being done out of order. It seems to me that the whole topic of semantics is not ready to enter a field as black-and-white as computer science. First, we have to figure out what understanding means in people. Until then, the Turing Test will do just fine.