[extropy-chat] Eugen Leitl on AI design

Eliezer Yudkowsky sentience at pobox.com
Wed Jun 2 16:34:08 UTC 2004


Brent Neal wrote:
> (6/2/04 10:46) Eliezer Yudkowsky <sentience at pobox.com> wrote:
> 
>> Only white hat AI is strong enough to defend humanity from black hat
>> AI, so yes.
> 
> Without weighing in on either side of the current argument, I'd like to
> ask a stupid question. One thing I am unclear on is how you guarantee
> that the AI you're building is white hat. If you are actually creating
> sentience, wouldn't it follow that you would be 'teaching' the AI your
> particular moral codes, much as a parent tries to impress these upon
> their children? Or is this a flawed analogy - thus implying that you
> will place some limit on the decision trees and inference math to
> enforce a particular morality, analogous to the Three Laws (although
> hopefully without their cheerfully story-worthy ambiguities)?
> 
> Please understand that I'm uninformed and curious, not antagonistic,
> 
> Brent

The guarantee is a technical issue.  As for what we're trying to do 
morally, see:

http://sl4.org/bin/wiki.pl?CollectiveVolition

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
