Today:
Student presentation(s).
Aunt Hillary Meets the Chinese Room.
For next time:
Work on Homework 12.
Work on Exam Review questions.
On a five-point scale:
(strongly agree, agree, neutral, disagree, strongly disagree)
A B C D E
1 2 3 4 5
1) I think it will eventually be possible to build a machine that is intelligent without using biological parts.
(By "intelligent," I mean whatever people mean when they say that people are intelligent, dogs are less intelligent, and trees are not intelligent.)
2) I think that there is something fundamentally unique about human intelligence that makes it impossible to duplicate mechanically.
3) I think that there is something fundamentally unique about human emotions that makes them impossible to duplicate mechanically, so even if we built an intelligent robot, it wouldn't have emotions.
4) I think that there is something fundamentally unique about human creativity that makes it impossible to duplicate mechanically, so even if we built an intelligent robot, it couldn't create original art.
5) If we made a robot as intelligent as a human, I think we would be morally obligated to give it the same legal rights we give humans.
Strong AI
Can an appropriately programmed computer
be intelligent?
think?
be conscious?
have a mind?
have intentionality?
feel?
create?
Philosophical question, players from several fields
Philosophers, like John Searle
Computer Scientists, like David Gelernter
both, like Douglas Hofstadter
Usual problem with philosophical questions:
definition is 90% of the problem
Without careful definition, question becomes
1) trivially true
2) trivially false
3) unfalsifiable
(can never be shown to be true or false)
(no fun, possibly meaningless)
For example:
If "intelligent" means capable of solving problems that intelligent things solve, like playing chess,
-> trivially true (Weak AI)
If you are only willing to use the word "intelligent" for something biological,
-> trivially false
If "computer" means current technology, current programming paradigms
-> probably false
(People working on applied AI are generally trying to solve problems, not study how humans think)
So I propose
1) natural language definition of "intelligent" in the sense people mean when they talk about animals, but "substrate neutral"
2) computer means any electromechanical system we can reasonably conceive of building and programming
So you can have a billion processors if you think that helps, but not 10^42 processors, and probably not quantum processors.
Thought experiment #1: Neuron replacement
maybe Dan Lloyd (later Daniel Dennett)
computer = any machine with no biological parts
(built, not grown)
(no cyborgs)
0) start with a working (intelligent) brain
1) replace one neuron with electronic equivalent
2) repeat until all neurons replaced
Voilà!
This seems to prove at least one formulation of the Strong AI question.
Objections?
1) mind-body duality
2) technically infeasible (does that matter?)
3) defective neuron model
insensitive to chemistry
quantum mechanical objection (Roger Penrose)
Thought experiment #2: The Chinese Room
John Searle
computer = formal system
input symbols,
output symbols,
internal state,
rules for changing state, generating output,
depending on input
Seems like a restrictive definition, but actually very general.
Every computer ever built, every program ever written.
Programming an AI means writing the set of rules so that they can be executed by a computer.
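The formal-system definition can be sketched in a few lines. This is my own minimal illustration, not from the lecture: the states, symbols, and rules below are invented placeholders, but any rule-following machine has this shape.

```python
# A minimal formal system: an internal state plus a rule table mapping
# (state, input symbol) -> (next state, output symbol).
# The Chinese symbols and state names are invented placeholders.
# The point: the machine only matches and copies symbols;
# it never translates or "understands" them.
RULES = {
    ("start",   "你好"): ("greeted", "你好"),   # greeting in, greeting out
    ("greeted", "再见"): ("start",   "再见"),   # farewell in, farewell out
}

def step(state, symbol):
    """Apply one rule: pure symbol lookup, no meaning involved."""
    return RULES.get((state, symbol), (state, "？"))  # unknown symbol: shrug

state = "start"
state, out = step(state, "你好")
print(state, out)
```

Whether such a lookup is elaborate enough, it is still only symbol manipulation, which is exactly the situation Searle puts you in below.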
Imagine yourself in the place of the computer.
You are in a room, and people slide messages, written in Chinese, under the door.
You look up the symbols, follow rules, change state, and write notes in Chinese.
NO TRANSLATION, only symbol manipulation.
To conversants, there seems to be an intelligent, Chinese-speaking entity present.
Where is that entity?
1) it's not you; you have no understanding
2) it's not the rule book or the pieces of paper or the room
Searle concludes: a formal system, even if it appears to be intelligent by any conceivable test, is not REALLY intelligent unless there is an understanding entity there.
Objections?
1) definition of computation too narrow
(Roger Penrose again, ironically playing for the other team)
2) system response
(the whole system is intelligent, even though none of the parts are)
intelligence as an emergent property
(traffic jams go backwards even though all cars are going forward)
3) Turing's response
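The traffic-jam example in objection 2 can be simulated directly. This is a minimal cellular sketch of my own (not from the lecture): every car either stays put or moves forward one cell, yet the front edge of the jam drifts backward.

```python
# One-lane road as a list of cells: 1 = car, 0 = empty.
# Each tick, a car advances one cell iff the cell ahead is empty
# at the start of the tick -- so cars only ever move forward.
def tick(road):
    new = [0] * len(road)
    for i, car in enumerate(road):
        if car:
            if i + 1 < len(road) and not road[i + 1]:
                new[i + 1] = 1      # road ahead is clear: move forward
            else:
                new[i] = 1          # blocked (or at road's end): stay
    return new

def jam_front(road):
    """Rightmost car that is blocked by the car directly ahead of it."""
    blocked = [i for i in range(len(road) - 1) if road[i] and road[i + 1]]
    return max(blocked) if blocked else None

road = [1] * 10 + [0] * 20          # a dense block of ten cars
for _ in range(5):
    print(jam_front(road), road)    # jam_front decreases each tick
    road = tick(road)
```

No individual car ever moves backward, but the jam (an emergent object) does: a property of the system that none of its parts has.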
Thought experiment #3: Turing test
Turing didn't really reply to Searle (he died decades before the Chinese Room argument), but we can imagine his reply.
"Wait," says Turing, "It's not fair to hold a computer to a higher standard than what we apply to people."
(problem of other minds)
(ultimately, we can only infer that other things are intelligent)
"Intelligence" is nothing more or less than the ability to appear intelligent (sentient, not smart) in conversation.
Looking at the internal state is cheating.
Turing test: a human interrogator "talks" to two players through a teletype/instant message. One player is human, one is a computer; both are trying to convince the interrogator that they are human.
If the interrogator cannot distinguish them (better than by chance), the computer should be considered intelligent.
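One way to make "better than by chance" precise (my gloss, not Turing's): treat each verdict as a coin flip and ask how likely a random guesser would be to score at least as well.

```python
from math import comb

def p_value_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the probability that
    interrogators guessing at random would identify the machine
    correctly at least k times out of n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 14 correct identifications out of 20 trials:
print(p_value_at_least(14, 20))   # ~0.058: plausible by luck alone
```

Only when this probability is small can we say the interrogator distinguished the players better than chance; otherwise the machine passes.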
Objections?
1) anthropomorphization:
people naturally attribute intelligence to ANYTHING
in fact, machines have passed limited forms of the Turing test
require interrogators with expertise?
Thought experiment #4: Aunt Hillary
Hofstadter expands the system response to Searle
The organization of an ant hill is an example of an emergent property.
(Perform Ant Fugue)
Objections:
1) I/O problem: Hofstadter glosses over the problem of recognizing
an intelligence without rich communication
2) Ant hills are grown, biological systems, too. Still not clear
that we can build one from non-biological parts.
Kinds of argument:
1) construction (it's possible because I can tell you how to do it)
2) contradiction (any intelligent computer is a Chinese Room and there is no intelligent entity in a Chinese Room, so the computer is not intelligent).
3) Occam's razor (if it quacks like a duck and we don't know anything else about it, it is a duck, at least tentatively)
4) analogy (intelligence emerges from the organization of neurons in the same way that ant-hill behavior emerges from the organization of ants)
Each is persuasive in its way, but none is a "proof" in the sense of being irrefutable.
Exit survey
6) I think the Turing test is the right way to judge whether an entity is intelligent.
7) The neuron-replacement thought experiment convinces me that it is possible to implement intelligence in non-biological hardware.
8) The Chinese room thought experiment convinces me that a formal system like a computer cannot implement intelligence.
9) The Aunt Hillary analogy convinces me that it is possible for an intelligent system to be built from non-intelligent parts.
10) Today's discussion makes me more inclined to believe that true artificial intelligence is possible.
Reading list
http://en.wikipedia.org/wiki/Chinese_room
http://en.wikipedia.org/wiki/Turing_test
The Mind's I
a collection of essays edited
by Douglas R. Hofstadter and Daniel C. Dennett
(very pleasant, particularly "Where Am I?" by Dennett and
"An Epistemological Nightmare" by Raymond M. Smullyan)
Gödel, Escher, Bach: An Eternal Golden Braid
by Douglas R. Hofstadter
(medium challenging, but broad and deep)
Minds, Brains and Science
by John R. Searle
(challenging)
Turtles, Termites, and Traffic Jams: Explorations in
Massively Parallel Microworlds (Complex Adaptive Systems)
by Mitchel Resnick
(examples of emergent properties, easy read)
The Language Instinct: How the Mind Creates Language
by Steven Pinker
(livelier than the subsequent tome, "How the Mind Works")
On Intelligence
Jeff Hawkins
(An approach to building an AI that works like the brain.)