[extropy-chat] Resource Dispersal

Mike Dougherty msd001 at gmail.com
Fri Sep 8 03:12:51 UTC 2006


On 9/6/06, A B <austriaaugust at yahoo.com> wrote:
>
> For the sake of this question, let's assume that the first
> super-intelligent mind is derived purely from AI research (i.e. there aren't
> yet any cyborgs running around) and that this occurs in the relatively near
> future (~10 years from now or less). Let's assume that it is "friendly" and
> pursues altruistic goals. Do you guys believe that the AI, acting
> consistently with "friendliness", would "peacefully" enforce an equal
> distribution of resources (e.g. energy, matter, etc.) among all conscious
> beings on Earth? Or would you guess that the AI would feel compelled, for
> any reason, to preserve something similar to the weighted economic/ethical
> system that we have today (in the democratic countries, at least)? (Note: I'm
> not yet indicating my preference on this matter; I'm just curious about what
> some Extropians think.)
>

Let me answer your scenario with another scenario.  Suppose there is already
a super-intelligent mind observing this world, collating data on it, and
reacting to its perceptions.  There are a large number of faithful followers
of this super-being who truly believe they are impacted daily by vis
influence.  Are they/we any more likely to act in accordance with
friendliness or peacefulness because of this phenomenon?  I don't think
either of those terms has an exact enough definition for a sufficiently
predictable evolution through either iterative or recursive improvements in
our understanding of higher orders of
intelligence/insight/perspective/other_vaguely_subjective_terms.