[ExI] Hard Takeoff

Stefano Vaj stefano.vaj at gmail.com
Thu Nov 18 09:57:15 UTC 2010


On 18 November 2010 07:02, Samantha Atkins <sjatkins at mac.com> wrote:
> I think what the statement really implies is the idea that it is not rational for a much-smarter-than-human AGI to be 'friendly' to humans. Therefore we appeal to irrational aspects for 'friendliness'. If this is indeed the case then there is nothing that can be done about it that is consistent with the facts of reality. I don't believe you can pull the wool over an AGI's perception or coerce it for very long.
>
> I also doubt very much you would want anything like normal human drives and emotions in your AGI. How many humans have ever lived that would be great or even safe to have around if they thought six or more orders of magnitude faster than any other humans and at much greater depth? What would a non-human with human emotions and drives be able to do with them exactly?

I think those are very good points. OTOH, for the purposes of
"intelligence" as it is discussed here, I am afraid that no amount of
computational power would be recognised as "intelligence" (as in
"passing the Turing test") unless it persuasively emulates a specific
or a generic (that is, a patchwork/artificial) human being - whether
or not it is a philosophical zombie remains a meaningless issue for me.

This is not so crucial an experiment in comparison with other
applications of the same computing power, except perhaps for
uploading/"reproduction" purposes, but it is nevertheless an
interesting one. Would it be "dangerous" in any way? Neither more nor
less so than an ordinary human being with the same computing power at
his or her fingertips. There are good reasons, IMHO, to think that at
the end of the day the distinction between androids, cyborgs and
fyborgs may not really matter after all.

-- 
Stefano Vaj
