[extropy-chat] Manifest Destiny for >H
lcorbin at rawbw.com
Sun Apr 15 17:12:08 UTC 2007
> > Does anyone have a ... reason that vastly transhuman engines
> > won't absorb all resources within reach? Even if *some*
> > particular AI had a predilection not to expand its influence
> > as far as it could, wouldn't it lose the evolutionary race to
> > those who did?
> ...You could apply the argument [that agents which try to
> expand their influence over everything they can will prevail]
> to any agent: bacteria, aliens, humans, nanomachines, black
> holes... ultimately, those entities which grow, reproduce or
> consume will prevail.
Bacteria can be checked by clean rooms, aliens (like human empires)
might check each other over interstellar distances, and humans (as
individuals) are held in check by envy, law, and custom.
> However, it might be aeons before everything goes to hell,
> especially if we anticipate problems and try to prevent or
> minimise them.
I don't know why you think that this must be "hell". I could
imagine rather beneficent super-intelligences taking over vast
areas, checked ultimately by the speed of light, and their own
ability to identify with far-flung branches of themselves. Some
of these may even deign to give a few nanoseconds of runtime
every now and then to their ancient noble creators.