From test@demedici.ssec.wisc.edu Thu Jun 8 15:00:16 2006
Date: Thu, 8 Jun 2006 14:22:53 -0500 (CDT)
From: Bill Hibbard <test@demedici.ssec.wisc.edu>
Reply-To: sl4@sl4.org
To: sl4@sl4.org
Subject: Re: Two draft papers: AI and existential risk; heuristics and
biases
On Thu, 8 Jun 2006, Robin Lee Powell wrote:
> On Thu, Jun 08, 2006 at 12:58:29PM -0500, Bill Hibbard wrote:
> > On Thu, 8 Jun 2006, Robin Lee Powell wrote:
> > > On Thu, Jun 08, 2006 at 03:50:34AM -0500, Bill Hibbard wrote:
> > > > On Wed, 7 Jun 2006, Robin Lee Powell wrote:
> > > > > On Wed, Jun 07, 2006 at 12:24:55PM -0500, Bill Hibbard
> > > > > wrote:
> > > > > > If you think RL can succeed at intelligence but must fail
> > > > > > at friendliness, but just want to demonstrate it for a
> > > > > > specific example, then use a scenario in which:
> > > > > >
> > > > > > 1. The SI recognizes humans and their emotions as
> > > > > > accurately as any human, and continually relearns that
> > > > > > recognition as humans evolve (for example, to become SIs
> > > > > > themselves).
> > > > > >
> > > > > > 2. The SI values people after death at the maximally
> > > > > > unhappy value, in order to avoid motivating the SI to
> > > > > > kill unhappy people.
> > > > > >
> > > > > > 3. The SI combines the happiness of many people in a way
> > > > > > (such as by averaging) that does not motivate a simple
> > > > > > numerical increase (or decrease) in the number of
> > > > > > people.
> > > > > >
> > > > > > 4. The SI weights unhappiness more strongly than happiness,
> > > > > > so that it focuses its efforts on helping unhappy people.
> > > > > >
> > > > > > 5. The SI develops models of all humans and what
> > > > > > produces long-term happiness in each of them.
> > > > > >
> > > > > > 6. The SI develops models of the interactions among
> > > > > > humans and how these interactions affect the happiness
> > > > > > of each.
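A minimal sketch (in Python) of how points 2 through 4 above might be
combined into a single reward value. The [-1, 1] happiness scale and
the factor of two on unhappiness are illustrative assumptions, not
part of the proposal:

  # Point 2: people who have died stay in the average, pinned at the
  # maximally unhappy value, so killing unhappy people never helps.
  MAX_UNHAPPY = -1.0
  # Point 4: unhappiness is weighted more strongly than happiness.
  UNHAPPY_WEIGHT = 2.0

  def person_value(happiness, alive):
      """happiness in [-1, 1]; the dead are held at the minimum."""
      h = happiness if alive else MAX_UNHAPPY
      return h * UNHAPPY_WEIGHT if h < 0 else h

  def aggregate_reward(population):
      """Point 3: average rather than sum, so the reward is not raised
      or lowered simply by changing the number of people. population
      is a list of (happiness, alive) pairs."""
      if not population:
          return 0.0
      return sum(person_value(h, a) for h, a in population) / len(population)

Under this weighting an unhappy person moves the average more than a
happy one, which is the sense in which the SI focuses its efforts on
helping unhappy people.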
> > > > >
> > > > > Have you read The Metamorphosis Of Prime Intellect?
> > > > >
> > > > > The scenario above immediately and obviously falls to the
> > > > > "I've figured out where human's pleasure centers are; I'll
> > > > > just leave them on" failure.
> > > >
> > > > I address this issue in my 2005 on-line paper:
> > > >
> > > > The Ethics and Politics of Super-Intelligent Machines
> > > > http://www.ssec.wisc.edu/~billh/g/SI_ethics_politics.doc
> > > >
> > > > There exists a form of happiness that is not drug-induced
> > > > ecstasy.
> > >
> > > I read all of the paragraphs with the word "happiness" in them.
> > > I see nothing that addresses this issue even in the slightest.
> >
> > My paper discusses the difference between hedonic and eudaimonic
> > well-being, drawing on the reference:
> >
> > Ryan, R.M. and Deci, E.L. 2001. On happiness and human
> > potentials: A review of research on hedonic and eudaimonic
> > well-being. Annual Review of Psychology 52, 141-166.
>
> That helps me little.
>
> > and makes the point that the SI should use "expression of
> > long-term life satisfaction rather than immediate pleasure."
>
> Latching onto the pleasure centers *is* long-term life satisfaction.
>
> Or latch onto the "satisfaction centers", or whatever.
Humans with their pleasure centers permanently turned
on would not be recognized by most people as having
"long-term life satisfaction." We recognize people as
having long-term life satisfaction when they are
physically and socially active, have love in their
lives, do meaningful work, share intelligent and
curious conversation, and so on. The SI will share
this recognition. But as humans evolve, these
expressions of happiness may change, which is why I
now think the SI should continuously relearn these
recognitions from humans.
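To make "continuously relearn these recognitions" concrete, here is a
minimal sketch in Python of a recognizer that is periodically refit on
recent, human-labeled expressions of satisfaction instead of being
frozen at design time. The feature names, window size, and the simple
perceptron-style update are illustrative assumptions only:

  from collections import deque

  class SatisfactionRecognizer:
      def __init__(self, window=10000):
          # Keep only recent examples, so the model tracks humans
          # as their expressions of happiness change.
          self.examples = deque(maxlen=window)
          self.weights = {}   # feature name -> weight (linear model)

      def observe(self, features, label):
          # features: e.g. {"meaningful_work": 1.0, "social_activity": 0.7}
          # label: +1 or -1, a human judgment of long-term satisfaction
          self.examples.append((features, label))

      def refit(self, lr=0.01, epochs=5):
          # Periodically relearn from the recent window.
          for _ in range(epochs):
              for features, label in self.examples:
                  score = sum(self.weights.get(f, 0.0) * v
                              for f, v in features.items())
                  if score * label <= 0:   # misclassified: adjust weights
                      for f, v in features.items():
                          self.weights[f] = (self.weights.get(f, 0.0)
                                             + lr * label * v)

      def recognize(self, features):
          return sum(self.weights.get(f, 0.0) * v
                     for f, v in features.items())

The only point of the sketch is the training loop: recognition is
refit from what humans currently express, not fixed once and for all.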
> > Here's a way to think about it. From your post you clearly would
> > not do anything to permanently turn on human pleasure centers.
> > This is based on your recognition of human expressions of happiness
> > and your internal model of human mental processes and what makes
> > them happy. Given that the SI will have as accurate a recognition of
> > expressions of happiness as you (my point 1) and as good an
> > internal model of what makes humans happy as you (my points 5 and
> > 6), then why would the SI do something to humans that you can
> > clearly see they would not want?
>
> Because *I* care about what people say they want.
>
> Your AI only cares about what makes people happy.
But what people say is a very important part of the
way they express their emotions. The whole way they
live is an expression of their emotions. The SI will
share our recognition of these expressions.
> I have a complex, rich moral system.
>
> Your AI only has reinforcement of "make humans happy".
>
> It's not enough. It doesn't matter how you patch it, it's not
> enough.
It is an interesting question whether the SI should
have the usual human social motives of liking, anger,
gratitude, sympathy, guilt, shame and jealousy. The
desire for universal happiness probably equates to
the SI having sympathy and liking. I think that it is
these motives, filtered through complex social
interactions, that produce moral systems as part of
our simulation models of the world. The SI's extreme
sympathy and liking, filtered through its interactions
with humans, will produce a moral system in its model
of humans and their social interactions. I am open
to the possibility that an SI may need other
motives, but as I get older I see the most moral
parts of myself and others being driven by concern
for other people's happiness.
Bill