<div dir="ltr"><div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div>A) AIs are no longer an extension of human will, but supersede it
and are possibly antagonistic to it; they won't be doing the jobs people
want done, hence, people will still need to do them.</div><div><br></div><div>B)
Machinery simply extends human will. (As it has always done.) What
humans want done expands to accommodate the new possibilities (as it has
always done). Since some humans own AIs with certain specialties, and
other humans own AIs with other specialties, and they use money to keep
track of a complex system of mutual reciprocity ("the economy"). There's
a huge number of projects in architecture, engineering, biology,
health, and countless other fields that we can imagine but currently
can't afford--with AIs these open up for developement.</div><div><br></div><div>C)
AIs are separate from but not antagonistic to human will (at least not
as a collective; individuals might still run amuck on both sides). This
case would ooh economically similar to (B ) except that AIs would be
sentient, autonomous economic agents as well as humans.</div></blockquote><br>The results of A and C would be ultrarich AI´s and poor humans, starving and dying. It would be like today, but instead of 70% of poor humans, would be the 100%<br>
The same goes if we achieve B and people still have to work and, therefore, are still poor. I mean:

If we have the possibility of making everybody's life secure (and AIs give us that possibility) and we don't take it, TransHumanist philosophies can be written off as just another crazy, unreachable utopia. Saving lives (of those who want to live) is ethically important.