[ExI] bug in outloading notion
spike66 at att.net
Sat Oct 23 18:21:10 UTC 2010
Oops, I may have discovered a problem with my outloading idea.
Assume an emergent AI reads everything online and decides to invisibly
outload, first residing quietly in the background in the great PC network,
then outloading to satellites, where they or it creates nanobots which
continue outward to the moon, Mars, asteroids etc, intentionally keeping
life on earth as is with very little or no influence.
Problem: if AI emerged from our thinking machines once, it could emerge
twice. If so, the first AI's hands-off policy would allow the introduction
of a potentially competing species (using the term loosely), roughly
analogous to the lions vs. the hyenas. In that case we would have two
competing species: natural enemies which interact regularly, compete for
resources, and maintain a presence in oscillating equilibrium.
If an emergent AI is friendly and matches our notions of ethics, it would
outload. This would leave it vulnerable to competition for resources from
a later and possibly more aggressive AI. Even if the second AI were also
friendly and matched our notions of ethics, it would join forces with the
first, and both would be vulnerable to the third emergent AI. The later
AI(s) would not only compete for the first AI's resources beyond earth, but
would also threaten to devour the wonderful beasts of the first AI's
earthly zoo.