[ExI] bug in outloading notion
Brent Allsop
brent.allsop at canonizer.com
Sat Oct 23 20:14:31 UTC 2010
Yes, this is related to the 'bug' I've always believed exists in such
thinking, and that we've talked about before.
First off, the whole idea of 'competing for resources' and 'enemies'
just makes no sense from a wider perspective. Sure, one species may
compete with itself, but is it competing with the rest of life? Does
all life on earth 'compete for resources'? Absolutely not. We all find
niches, share, and co-operate. None of us could survive without most of
nature doing what it does and helping out.
And, once you reach our level of power and intelligence, things like
experience, information, and not being lonely or alone... all of which are
easily sharable and reproducible, and much more efficiently stored in
matter when co-operating (i.e. fewer duplicates)... become far more
valuable than any resources, even if resources are limited. (And anyway,
there is lots of stuff and lots of space out there!) Once you make a
scientific discovery, it's far better to share it than to reproduce the
very expensive science needed to discover it yet again...
And even if intelligent beings, for some incomprehensible reason, didn't
notice or value how terrible and immoral isolation / loneliness was,
they would face the same problem you mention: they would always
fear some bigger, more powerful life force that evolved in some still-distant
location and is way ahead of them. It's all just irrational in my way
of thinking.
All such antisocial / hateful thinking about the future is, in my mind,
just full of irrational bugs that don't make any sense to me.
Brent Allsop
On 10/23/2010 12:21 PM, spike wrote:
>
> Ooops, I may have discovered a problem with my outloading idea.
>
> Assume an emergent AI reads everything online and decides to invisibly
> outload, first residing quietly in the background in the great PC network,
> then outloading to satellites, where they or it creates nanobots which
> continue outward to the moon, Mars, asteroids, etc., intentionally keeping
> life on earth as-is with very little or no influence.
>
> Problem: if AI emerged from our thinking machines once, it could emerge
> twice. If so, the first AI would allow the introduction of a potentially
> competing species, if I may use the term species loosely and assume it is
> roughly analogous to the lions vs the hyenas. In that case we have two
> competing species, natural enemies which interact on a regular basis,
> compete for resources and maintain presence in oscillating equilibrium.
>
> If an emergent AI is friendly and matches our notions of ethics, it would
> outload. This would leave it vulnerable to competition for resources with
> a later and possibly more aggressive subsequent AI. Even if the second AI is
> friendly and matches our notions of ethics, it would join forces with the
> first, and both would be vulnerable to the third emergent AI. The later
> AI(s) would not only compete for the first AI's resources beyond earth, but
> would also threaten to devour the wonderful beasts in the first AI's earthly
> zoo.
>
> Damn.
>
> {8-[
>
> spike
>
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>