[ExI] AI extinction risk

Eugenio Martínez rolandodegilead at gmail.com
Mon Mar 17 10:30:46 UTC 2014


>
> A) AIs are no longer an extension of human will, but supersede it and are
> possibly antagonistic to it; they won't be doing the jobs people want done,
> hence, people will still need to do them.
>
> B) Machinery simply extends human will. (As it has always done.) What
> humans want done expands to accommodate the new possibilities (as it has
> always done). Some humans own AIs with certain specialties, other humans
> own AIs with other specialties, and they use money to keep track of a
> complex system of mutual reciprocity ("the economy"). There's a huge
> number of projects in architecture, engineering, biology, health, and
> countless other fields that we can imagine but currently can't afford--with
> AIs these open up for development.
>
> C) AIs are separate from but not antagonistic to human will (at least not
> as a collective; individuals might still run amok on both sides). This
> case would look economically similar to (B) except that AIs would be
> sentient, autonomous economic agents, just as humans are.
>

The results of A and C would be ultrarich AIs and poor humans, starving
and dying. It would be like today, but instead of 70% of humans being
poor, it would be 100%.

If we achieve B but people still have to work and, therefore, are still
poor, I mean:

If we have the possibility of making everybody's life secure (and AIs give
us that possibility) and we don't, Trans*Humanist* philosophies can be
dismissed as another crazy, unreachable utopia. Saving lives (of those who
want to live) is ethically important.