[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Fri Jun 1 01:09:30 UTC 2007


On 01/06/07, Lee Corbin <lcorbin at rawbw.com> wrote:

> > Such a dominant AI would be held in check by at least as many factors
> > as a human's dominance is held in check: pressure from society in the
> > form of initial programming, reward for human-friendly behaviour,
>
> You seem as absolutely sure that the first programmers will succeed
> with Friendliness as John Clark is absolutely sure that the AI will
> spontaneously ignore all its early influence.  We just don't know, we
> cannot know. Aren't there many reasonable scenarios where you're
> just wrong?  I.e., some very bright Chinese kids keep plugging away
> at a seed-AI, and take no care whatsoever that it's Friendly. They
> succeed, and bam!  the world's taken over.


I don't see how that's possible. How is the AI going to commandeer the R&D
facilities, organise manufacture of new hardware, make sure that the
factories are kept supplied with components, make sure the component
factories are supplied with raw materials, make sure the mines produce the
raw materials, make sure the dockworkers load the raw materials onto ships
etc. etc. etc. etc.? Perhaps I am sinning against the singularity idea in
saying this, but do you really think it's just a matter of writing some code
on a PC somewhere, which then goes on to take over the world?

> > and the censure of other AIs (which would be just as capable,
> > more numerous, and more likely to be following human-friendly
> > programs).
>
> But how do you *know*, or how are you so confident? *One* AI may
> suddenly be a breakthrough, start making improvements to itself every
> few hours, and then simply take over everything.


It's possible that an individual human somewhere will develop a superweapon,
or mind-control abilities, or a viral vector that inserts his DNA into every
living cell on the planet; it's just not very likely. And why do you suppose
that rapid self-improvement of the world-dominating kind is more likely in
an AI than in the nanotechnology that has evolved naturally over billions of
years? For that matter, why do you suppose that human-level intelligence has
not, to our knowledge, evolved before, if it's so adaptive? I don't know the
answer to these questions, but when you look at the universe, there isn't
really any evidence that intelligence is as "adaptive" as we might assume it
to be.



-- 
Stathis Papaioannou
