Reply to AIRisk
Bill Hibbard 2006
Here is a message posted by Eliezer Yudkowsky soliciting feedback on two chapters he is writing for a book edited by Nick Bostrom, Global Catastrophic Risks.
His chapter, Artificial Intelligence as a Positive and Negative Factor in Global Risk, cites my 2001 article Super-Intelligent Machines as an example of technical failure because I use reinforcement learning (RL) as the framework for describing artificial intelligence (AI). However, he confuses RL with a specific class of algorithms, namely feed-forward neural networks. RL is simply a framework that separates an AI's motives from its means of achieving those motives. Yudkowsky makes the same separation in his own discussions of AI goals. Many other well-respected AI researchers use RL as a framework for AI. Examples include Eric Baum in his book, What is Thought?, and Shane Legg and Marcus Hutter in their article, A Formal Measure of Machine Intelligence.
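To make the separation concrete, here is a minimal sketch (my own illustration, not code from any of the cited works): a tabular Q-learning agent on a toy chain world, in which the reward function passed in stands for the motives and the learning rule stands for the means. The environment, parameter values, and function names are invented for illustration only.

# Minimal sketch: tabular Q-learning on a toy chain environment,
# illustrating how the RL framework separates the agent's motives
# (the reward function passed in) from its means (the learning rule).
import random
from collections import defaultdict

def q_learning(reward_fn, n_states=5, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learn a policy for a simple chain world; reward_fn encodes the motives."""
    q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        state = 0
        for _ in range(20):  # bounded episode length
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[(state, a)])
            # Toy dynamics: action 1 moves right, action 0 moves left.
            next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
            r = reward_fn(next_state)
            best_next = max(q[(next_state, a)] for a in range(n_actions))
            q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
            state = next_state
    return q

# The same learning machinery (means) serves any choice of motives:
q = q_learning(reward_fn=lambda s: 1.0 if s == 4 else 0.0)

The point of the sketch is only that the reward function is an argument to the learning algorithm, not part of it; the motives and the means are distinct components.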
In his chapter Yudkowsky writes "Would the network classify a tiny picture of a smiley-face into the same attractor as a smiling human face? If an AI 'hard-wired' to such code possessed the power - and Hibbard (2001) spoke of superintelligence - would the galaxy end up tiled with tiny molecular pictures of smiley-faces?" This statement shows that Yudkowsky is more concerned with dramatic effect than with truth. Even feed-forward neural networks can be trained to distinguish tiny smiley-faces from human faces, and Yudkowsky must be well aware of this. In fact, recent face recognition systems, based on RL, are doing a good job of identifying specific people in video surveillance.
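To illustrate the point, here is a toy sketch (again my own illustration, not code from either chapter): a one-hidden-layer feed-forward network trained to separate two synthetic classes standing in for "tiny smiley-face icons" and "photographed human faces". The three features and their class statistics are invented stand-ins; a real system would learn from pixel data.

# Toy sketch: a one-hidden-layer feed-forward network separating two
# synthetic classes (stand-ins for icon images vs. face photographs).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-feature stand-ins (e.g., image size, pixel variance, edge density).
icons = rng.normal(loc=[0.1, 0.2, 0.9], scale=0.05, size=(200, 3))   # class 0
faces = rng.normal(loc=[0.8, 0.7, 0.3], scale=0.05, size=(200, 3))   # class 1
X = np.vstack([icons, faces])
y = np.hstack([np.zeros(200), np.ones(200)])

# One hidden layer, sigmoid activations, trained by plain gradient descent.
W1 = rng.normal(size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)              # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()      # predicted probability of "face"
    grad_out = (p - y).reshape(-1, 1) / len(y)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= 1.0 * grad_W2; b2 -= 1.0 * grad_out.sum(0)
    W1 -= 1.0 * (X.T @ grad_h); b1 -= 1.0 * grad_h.sum(0)

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")  # near 1.0 for well-separated classes

Distinguishing two classes that differ this plainly is exactly the kind of task even simple feed-forward networks handle routinely.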
Beyond being merely wrong, Yudkowsky's statement assumes (1) that the AI is intelligent enough to control the galaxy (and hence has the ability to tile the galaxy with tiny smiley faces), but also (2) that the AI is so unintelligent that it cannot distinguish a tiny smiley face from a human face. Such obviously contradictory assumptions show Yudkowsky's preference for drama over reason.
Yudkowsky and I debated these issues on-line:
Here is my first feedback.
Here is Eliezer's first reply.
Here is my second feedback.
Here is an exchange on SL4 about the definition of happiness, in response to my second feedback.
Here is Eliezer's second reply.
Here is my third feedback.