[extropy-chat] Human - Posthuman gap

Joseph Bloch jbloch at humanenhancement.com
Fri Apr 29 00:16:14 UTC 2005


Samantha Atkins wrote:

>
> On Apr 27, 2005, at 5:20 PM, Joseph Bloch wrote:
>
>> I don't see this as being what Eugen is saying at all.
>>
>> Saying that there will be an end to the Human species, and that it 
>> will be succeeded by some PostHuman successor species, is most 
>> definitely not the same as saying that "there may be no place for the 
>> majority of the people around us".
>>
>> I think even the most hardcore of us would agree that we would (and 
>> actively do) encourage the transition of each and every person alive 
>> today into a state of PostHumanity. That's why we almost universally 
>> agree on the desirability of making sure Transhumanist technologies 
>> are available to as wide an audience as possible. (Disagreement, of 
>> course, arises based on the question of just what is the most 
>> efficient way to make sure that happens; a relatively government-free 
>> Free Market, or a government-driven model; but the end result is 
>> common to both camps.)
>>
>
> Do you think the majority of the people will be interested soon 
> enough?  I don't see that as likely.   What happens to those folks who 
> aren't buying it?


I'm not sure what you mean by "soon enough". Much as I abhor 
speculation about the particulars of the coming Singularity (given that 
such a thing by definition makes prediction impossible), I happen to 
think the emergence of a PostHuman species is at least as likely to 
trigger such an event as the development of strong AI or nanotechnology 
(which are often cited as potential triggers).

It could well be that within a few months of the emergence of the first 
group of true PostHumans, with their massively enhanced intellects and 
physical forms, a wave of bandwagon-jumping will sweep the globe as the 
pre-PostHumans see the advantages of transitioning, leaving behind a 
relative handful of True Believers as the rest of us go jetting off to 
the far reaches of the Milky Way in our personal lighthuggers. Or, the 
numbers of PostHumans could remain relatively small, and they could 
dedicate themselves to the service of their still-unenhanced Little 
Cousins, utilizing a portion of their intellects to that purpose as they 
explore the wider areas of truth and knowledge amongst themselves, 
completely unsuspected by the masses. Or, the first group of PostHumans 
could wipe out the remaining humans, or uplift them against their will. 
The truth is, we have no way of knowing what our responses will be when 
we have an IQ of 10,000, physical immortality, and the resources of a 
small nation.

All the more reason to be in the vanguard, sez I. Cuts down on the risk 
of being in the ranks of the exterminated or abandoned.

Although I would say that waiting until everyone is guaranteed a seat on 
the train is not an option, mostly because it will NEVER happen. Someone 
will always choose-- through ignorance, superstition (is there a 
difference?), or just plain cussedness-- to remain as they are. Those 
people must not be allowed to hold up our own evolution:

"Your decision to remain the same does not compel me not to change."


>
>> It is a critical question, and also opens up the can of worms of the 
>> desirability/morality/practicality of imposing such a transition on 
>> those who don't, in their pre-PostHuman state, choose to undergo the 
>> transition (in a world where the majority of the population has an 
>> effective IQ of 1000, can someone with a "normal" IQ of 100 make such 
>> a choice and be said to be truly "informed"? Such standards are 
>> relative...) Another sticky wicket appears when those who choose not 
>> to transform themselves seek to impose that decision on everyone.
>>
>
> If it is wrong for them to keep us relatively dumb and short-lived, 
> would it be right for us to force >human intelligence and indefinitely 
> long life span on them against their wishes?


To my mind, the answer to your question would be no; to force such a 
transition against someone's will would be immoral. There are two 
caveats, however.

If it became impossible to attain (or retain) PostHuman status 
_without_ resorting to such a mass-uplift program (a la Magneto in the 
first X-Men movie, who sought to turn the world's leaders into mutants 
so they would become sympathetic to the plight of mutants), then I 
would say it would be justified. I will grant that it is certainly not 
a cut-and-dried case. In the situation as I describe it, it is 
literally an evolutionary struggle for survival between two species, 
and if a PostHuman species is itself faced with extermination, I think 
forced uplifting is preferable (on an individual level) to 
exterminating the humans.

The second caveat is, I think, even more disturbing to us mere humans in 
its way. Much as we, today, concede that the mentally retarded are 
incapable of making sound judgements for themselves, it may well be the 
case that PostHumanity could see mere humanity in a similar position, 
relatively speaking. Insights and gestalts that are crystal clear to 
someone with an IQ of 10,000 would doubtless be hopeless muddles of 
confusion in our poor primate brains. It may 
well be the case that there is an objective and perfectly sound reason 
for forcing such uplifts that is unknowable to someone in our state, the 
logic and inevitability of which can only be appreciated after one has 
been uplifted. Much as someone with an IQ of 50 today would be incapable 
of making an informed decision to _decline_ the opportunity to take a 
drug that would double their IQ.

But of course those caveats are mere speculation. Until we are faced 
with the reality of the situation unfolding before our eyes, the 
original principle, that forced uplifting is immoral, should stand as 
the default position. Only if that position becomes untenable should it 
be reassessed.

> Do we want to continue this step-wise conundrum indefinitely? Will 
> each step of further progress revisit the question?  


Now you're getting into "what will the Singularity after the 
Singularity be like?" territory. Obviously, we can't know. But for now, 
we can tentatively apply the same principle, subject to revision once 
our several-orders-of-magnitude-smarter intellects deal with the 
question.

> Who is going to pay for all the upgrades if they aren't effectively 
> so abundant that there's no need to bother charging for them?


Here's where the capitalist-vs.-socialist axis within contemporary >H 
comes in. The capitalists (I include here the libertarians, naturally) 
would answer your question by saying that everybody will pay for their 
own upgrades. The socialists would answer by saying the State will 
provide upgrades for everyone. It's a legitimate debate (despite what 
people on BOTH sides of the discussion say), and the reality will 
probably land somewhere in the middle. From a personal standpoint, 
though, I will tell you that I will be paying for as many upgrades as 
corporations will sell me, the state will allow me to purchase, and I 
can afford (naturally!).


> Wouldn't we then be attempting to force an equality of capability?  Is 
> that a good thing?  


I daresay if you asked that question of James Hughes and Max More, you 
would get pretty much diametrically opposed answers. Yet each is a 
Transhumanist in his own way, because that's a question of methods, not 
goals. Personally, I don't think it's practical to force equality of 
capability short of a social system that would make Aldous Huxley blush. 
I favor personal competition, tempered by the intervention of society 
as a whole when such competition spills over into realms where it 
proves detrimental to society at large (I am thinking of the Icelandic 
feud as an example of a social convention used to regulate individual 
conflict, keeping it from degenerating into wholesale civil war; there 
are many others).


> Who says our set of desirability metrics are the most righteous and 
> privileged to the point of forcing others to adopt what we consider 
> desirable?


I don't think we should, with the caveats I give above. Let each have 
his choice, unless that choice means I will be eliminated myself, or 
unless my massively-augmented intellect realizes that those poor 
colloidal-brained primates aren't capable of making those decisions for 
themselves. But I'll defer those questions until they become relevant...


>
>> I say give as many people as possible the opportunity to shed their 
>> pre-PostHuman existence, and I am fairly sure that Eugen (and indeed 
>> the vast majority here) would agree on that broad goal.
>>
>
> I agree on the goal but it ducks a lot of very thorny questions to 
> just say that much.


Not at all; it just provides a point of reference from which we _can_ 
start talking about the thorny issues, especially since there seem to be 
well-defined camps with different opinions on the best way to approach 
some of them. Which is what we're doing right here (and which, I might 
argue, is one of the main purposes of these sorts of discussion lists, 
especially this one and WTA-talk).

Joseph

Enhance your body "beyond well" and your mind "beyond normal": 
http://www.humanenhancement.com
New Jersey Transhumanist Association: http://www.goldenfuture.net/njta
PostHumanity Rising: http://transhumanist.blogspot.com/ (updated 
yesterday!)


