[extropy-chat] "3 Laws Unsafe" by the Singularity Institute

Adrian Tymes wingcat at pacbell.net
Tue May 11 17:04:00 UTC 2004


--- "Eliezer S. Yudkowsky" <sentience at pobox.com>
wrote:
> "Human-equivalent" meaning, uploaded humans? 

Including, but not limited to.

> Totally equivalent to human 
> psychology in every way?  Including inability to
> access their own source 
> code, and the same subjective processing rate?

Not every way.  It's deliberately vague: equivalent
to human beings in the ways that matter to whoever's
making the judgement.  For example, the capability to
feel emotion (including resentment if treated as a
slave), or at least logic that produces the same
result (person treating me badly = person to be
dissuaded, maybe punished).  The reasoning is that
human morals evolved to fit human abilities and
limitations, so if you believe an AI has the
abilities and limitations that matter for why you
treat other human beings the way you do, then it
logically follows that you should treat that AI in
the same manner - under whatever moral code the
person being addressed happens to follow.

> If you are discussing strictly the upload challenge,
> why confuse the issue 
> by using the word "AI"?

Because I'm not discussing strictly the upload
challenge.  Uploads would be one example, yes, but not
the only path to this scenario.

Now, that said, I have long suspected that uploads
would get around a lot of the sticky issues: if you
can become this new powerful type of intelligence, why
fear this new powerful type of intelligence?  But
that's a separate issue.


