[extropy-chat] IQ vs Uploads (was: what to do)

Adrian Tymes wingcat at pacbell.net
Mon Jun 13 21:45:52 UTC 2005


--- giorgio gaviraghi <giogavir at yahoo.it> wrote:
> Maybe we should make another important assumption
> about AIs:
> they have individual free will.
> In this case they could disobey human commands, have
> their own goals, make their own decisions, and refuse to
> "unplug" themselves.

...who said anything about them unplugging themselves?  Of course they
wouldn't.  The scenario under consideration is: would they have any
ability to actually stop a human from unplugging them?  In some cases,
especially in the early stages, the answer is a resounding no.

> In a smarter-than-human scenario they could connect
> among themselves and create a collective mind, billions of
> times more powerful than the individual.
> If you consider this possibility, we have unlimited
> situations, and none of them looks good for humans.

Au contraire.  If that very powerful collective mind had as its highest
priority the improved welfare of the human race - a higher priority
than even self-preservation (not that the two goals would be likely to
conflict) - that might look extremely good for humanity.  Consider, as
an example, the proposed Friendly AI.  (It's not the only solution, and
it has its weaknesses and problems, but it would very likely resolve this
kind of situation if it could be accomplished.)
