[extropy-chat] Fools building AIs (was: Tyranny in place)
russell.wallace at gmail.com
Sat Oct 7 01:41:26 UTC 2006
On 10/7/06, Samantha Atkins <sjatkins at mac.com> wrote:
> Actually, what I know of his opinion is simply that when AGI arrives, humans, even uploaded and augmented posthumans who were formerly human, will eventually not be competitive in comparison. He has said that if humanity is to disappear someday, then he would much rather it be because it is replaced by something more intelligent, and that in his opinion this would not be such a bad outcome. I don't see anything all that objectionable in that. Nor do I see much room to challenge his conclusion about humans' competitiveness relative to AIs.
I see plenty of room to challenge it, starting with this: even if you
postulate the existence of superintelligent AI in some distant and unknowable
future, why would anyone program it to start exterminating humans? I'm
certainly not going to do any such thing in the unlikely event my lifespan
extends that long. Then there's the whole assumption that more intelligence
keeps conferring more competitive ability all the way up, without limit, for
which there is no evidence. There are various counterarguments from game
theory and from offense versus defense. There are a great many reasons to
doubt the conclusion, even based on what I can think of in 2006, let alone
what else will arise that nobody has thought of yet.