
From test@demedici.ssec.wisc.edu Wed May 10 15:53:34 2006
Date: Wed, 10 May 2006 15:53:00 -0500 (CDT)
From: Bill Hibbard <test@demedici.ssec.wisc.edu>
To: sss-inquiries@lists.stanford.edu
Cc: test@demedici.ssec.wisc.edu
Subject: two questions for Ray Kurzweil

Here are two questions for Ray Kurzweil. You may use my name.

In The Singularity Is Near, regarding regulation of AI, you wrote "But there is no purely technical strategy that is workable in this area, because greater intelligence will always find a way to circumvent measures that are the product of a lesser intelligence." Do you think we can avoid this problem by designing AI that values human happiness rather than its own freedom from serving humans? That is, by designing AI that has no motive to circumvent measures to protect humans?

You also wrote that AI will be "intimately embedded in our bodies and brains" and hence "it will reflect our values because it will be us." But the values of some humans have led to much misery for other humans. Do you agree that if some humans are radically more intelligent than others and retain all their human competitive instincts, this could create a society that the vast majority will not want?

Bill Hibbard