
Date: Sun, 04 Jun 2006 09:33:26 -0700
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
Subject: [extropy-chat] Two draft papers: AI and existential risk; heuristics and biases
To: sl4@sl4.org
Cc: transhumantech@yahoogroups.com, sing@yahoogroups.com,
    World Transhumanist Association Discussion List <wta-talk@transhumanism.org>,
    volunteers@singinst.org, ExI chat list <extropy-chat@lists.extropy.org>,
    agi@v2.listbox.com, bafuture@yahoogroups.com
Message-ID: <44830B56.3080003@pobox.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

These are drafts of my chapters for Nick Bostrom's forthcoming edited
volume _Global Catastrophic Risks_. I may not have much time for
further editing, but if anyone discovers any gross mistakes, there's
still time for me to submit changes.

The chapters are:

_Cognitive biases potentially affecting judgment of global risks_

http://singinst.org/Biases.pdf

An introduction to the field of heuristics and biases - the
experimental psychology of reproducible errors of human judgment -
with a special focus on global catastrophic risks. However, this
paper should be generally useful to anyone who hasn't previously
looked into the experimental results on human error. If you're going
to read both chapters, I recommend that you read this one first.

_Artificial Intelligence and Global Risk_

http://singinst.org/AIRisk.pdf

The new standard introductory material on Friendly AI. Any links to
_Creating Friendly AI_ should be redirected here.

--

Eliezer S. Yudkowsky http://singinst.org/

Research Fellow, Singularity Institute for Artificial Intelligence