[extropy-chat] Fools building AIs (was: Tyranny in place)
Samantha Atkins
sjatkins at mac.com
Sat Oct 7 19:31:47 UTC 2006
On Oct 6, 2006, at 6:41 PM, Russell Wallace wrote:
> On 10/7/06, Samantha Atkins <sjatkins at mac.com> wrote:
> Actually, what I know of his opinion is simply that when AGI arrives
> humans, even uploaded and augmented former humans, will
> eventually not be competitive in comparison. He has said that if
> humanity is to disappear someday then he would much rather that it
> be because it is replaced by something more intelligent. He has
> said that in his opinion this would not be such a bad outcome. I
> don't see anything all that objectionable in that. Nor do I see
> much room to challenge his conclusion about humans relative to AIs
> in competitiveness.
>
> I see plenty of room to challenge it, starting with, even if you
> postulate the existence of superintelligent AI in some distant and
> unknowable future, why would anyone program it to start
> exterminating humans? I'm certainly not going to do any such thing
> in the unlikely event my lifespan extends that long. Then there's
> the whole assumption that more intelligence keeps conferring more
> competitive ability all the way up without limit, for which there is
> no evidence. There are various arguments from game theory and
> offense versus defense. There are a great many reasons to doubt the
> conclusion, even based on what I can think of in 2006, let alone
> what else will arise that nobody has thought of yet.
>
Russell, I am very surprised at you. Almost no one here believes that
AGI is in some unknowably distant future. I am certain you know full
well that it is not what the humans program the AGI to do that is
likely the concern. Hell, if it were just a matter of not programming
the AGI to exterminate humans explicitly, there would be nothing to
worry about and FAI would be easy! In any field where success is
largely a matter of intelligence, information, and its timely
application, the significantly faster, brighter, and better informed
will exceed what can be done by others. And that doesn't even touch
on the depth of Moravec's argument which you could easily read for
yourself.
What is this blunt denial of the obvious about?
- samantha