<div class="gmail_quote">On 20 February 2012 03:49, Joseph Bloch <span dir="ltr"><<a href="mailto:seculartranshumanist@gmail.com">seculartranshumanist@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
What you<br>
suggest infers that those who do not agree with your assertion about<br>
capitalism being bad and "social justice" being good should somehow be<br>
purged. That's already been tried...<br></blockquote><div><br>I am not sure about Giovanni's intentions, but I believe that "social justice" need not be interpreted as egalitarianism - even though, admittedly, it may be.<br>

Even from a hyper-orthodox Randian view, the fact that, for instance, in a given society essentially parasitic social classes enjoy a disproportionate share of the available wealth, and/or are actually protected from social competition, may well be considered "unjust". See *The Fountainhead* for examples of how this may happen.

This in turn may well reduce the resources that could otherwise be available for long-term, high-risk investment or to reward innovation, which is of course relevant to any transhumanist agenda.

> Especially in the context of a discussion of how to make transhumanism
> more palatable and popular, singularitianism has a tendency to veer
> towards the apocalyptic, which turns off quite a number of people (if
> for nothing more than its religious millenarianism).

Absolutely. And the fact that, regrettably, many "singularitarians" now believe it is cooler to be prophets of an impending Doom - in the most parochial humanist terms, btw - than of a Rapture only makes things worse. I will not insist on the point, because I have already expressed my view on "x-risks" and "Big, Bad AIs" in the brief essay *Artificious Intelligences* <http://www.divenire.org/articolo_versione.asp?id=1>.

> So before you develop the technology to make *some* people live to 300,
> you have to ensure that *all* of them live to 50. I believe exactly the
> opposite to be preferable; better to have a minority live to a ripe old
> age of 1,000 than prevent anyone from doing so until everyone can. I
> don't begrudge Neil Armstrong and Buzz Aldrin the opportunity to walk
> on the moon just because they didn't bring along 3 billion other people.

Yes, I am with you on this one. Besides, medicine has *always* been "unsustainable". By redirecting the resources devoted to therapy, not to mention research, we could always have saved more lives than we did. Yet traditional medical ethics demands that everything possible be done for the patient at hand, even though the same cost might save three other people from, say, starvation or accidents.
</div><br clear="all"></div>This, however, does not necessarily mean that unless your American Express is in order when you have a heart attack it is reasonable or efficient for your community to let you die at the next crossroad.<br>

-- 
Stefano Vaj