[extropy-chat] Manifest Destiny for >H

Stathis Papaioannou stathisp at gmail.com
Mon Apr 16 01:35:24 UTC 2007


On 4/16/07, Lee Corbin <lcorbin at rawbw.com> wrote:

> Stathis writes
>
> > > Does anyone have a ... reason that vastly transhuman engines
> > > won't absorb all resources within reach?  Even if *some*
> > > particular AI had a predilection not to expand its influence
> > > as far as it could, wouldn't it lose the evolutionary race to
> > > those who did?
> >
> > ...You could apply the argument [that those agents who try to
> > expand their influence over everything they can will prevail] to
> > any agent:
> > bacteria, aliens, humans, nanomachines, black holes... ultimately,
> > those entities which grow, reproduce or consume will prevail.
>
> Bacteria can be checked by clean rooms, aliens (like human empires)
> might check each other over interstellar distances, and humans (as
> individuals) are held in check by envy, law, and custom.


Right, but to parrot the argument for AIs taking over the world: some
bacteria, aliens or humans would, thanks to their diversity, be less
subject to these checks, and they would come to predominate in the
population, so that after multiple generations the most rapacious
entity would eat everything else and ultimately make the universe
synonymous with itself. On the other hand, maybe there will be long,
long periods of dynamic equilibrium, even between competing species
grossly mismatched in intelligence, such as humans and bacteria.
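
To make that selection argument concrete, here is a toy sketch
(illustrative only, not anything from Lee's post; the growth rates and
the "check" term are made-up parameters). Type B is slightly more
rapacious than type A; unchecked, its advantage compounds until it
absorbs essentially everything, whereas a check that bites harder the
more common B becomes produces the kind of long equilibrium I have in
mind:

# Two replicator types: A is the baseline, B grows a little faster.
# Only relative frequencies are tracked, not absolute numbers.
def final_share_of_b(generations=500, r_a=1.00, r_b0=1.05, check=0.0):
    a, b = 0.99, 0.01                       # B starts out rare
    for _ in range(generations):
        r_b = r_b0 - check * (b / (a + b))  # check bites harder as B spreads
        a, b = a * r_a, b * r_b
        total = a + b
        a, b = a / total, b / total         # renormalise to frequencies
    return b

print(final_share_of_b())            # ~1.0: unchecked, B takes over everything
print(final_share_of_b(check=0.10))  # ~0.5: a frequency-dependent check yields
                                     #       a stable mixed equilibrium instead

The second run is only meant to show that a check which scales with the
winner's success can hold things in balance indefinitely.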

> > However, it might be aeons before everything goes to hell,
> > especially if we anticipate problems and try to prevent or
> > minimise them.
>
> I don't know why you think that this must be "hell".  I could
> imagine rather beneficent super-intelligences taking over vast
> areas, checked ultimately by the speed of light, and their own
> ability to identify with far-flung branches of themselves. Some
> of these may even deign to give a few nanoseconds of runtime
> every now and then to their ancient noble creators.


I'm not as worried about the future behaviour of super-AIs as many people
seem to be. There is no logical reason why they should have one motivation
rather than another. If humans can be concerned about flowers and trees, why
can't super-intelligent beings be concerned about humans? After all, we
weren't created by flowers and trees to have any particular feelings towards
them, while we *would* be the ones creating the AIs. And even if some AIs
went rogue, that would be no different to what currently happens with large
populations of humans.

Stathis Papaioannou