[ExI] Fwd: [x-risk] Martin Rees and Huw Price on the Posthuman World

Anders Sandberg anders at aleph.se
Thu Sep 24 14:58:23 UTC 2015


A rather nice demonstration of how xrisk concern dovetails with a big 
posthuman, cosmist perspective.

-------- Forwarded Message --------
Subject: 	[x-risk] Martin Rees and Huw Price on the Posthuman World
Date: 	Thu, 24 Sep 2015 10:51:57 -0400
From: 	J Hughes <jjhughes2 at earthlink.net>
To: 	ieet-news at ieet.org, existential at ieet.org



http://www.huffingtonpost.com/martin-rees/post-human-world_b_8148732.html


  A Post-Human World: Should We Rage, Rage Against the Dying of the Mites?

Posted: 09/23/2015 8:57 am EDT | Updated: 09/23/2015 6:59 pm EDT

Astronomers and philosophers both like big pictures, but they often have 
different measures in mind. Astronomers "go big" in space and time -- 
philosophers do so in levels of abstraction from the mundane matters of 
everyday life. But when it comes to the question of the future of 
humanity, these dimensions coincide to a considerable extent.

We humans tend to think of ourselves as special, the culmination of the 
evolutionary tree. But that hardly seems credible to an astronomer, 
aware that although our Sun formed 4.5 billion years ago, it is barely 
in middle age. Any creatures witnessing the Sun's demise six billion 
years hence won't be human -- they'll be as different from us as we are 
from insects. Post-human evolution -- here on Earth and far beyond -- 
could be as prolonged as the Darwinian evolution that's led to us, and 
even more wonderful. And of course, this evolution is now even faster -- 
it's happening on a technological timescale, driven by advances in 
genetics and in artificial intelligence, and is thus far faster than 
natural selection.

In particular, few who seriously consider the issue would doubt that 
machines will eventually surpass more and more of our distinctively 
human capabilities -- or enhance them via cyborg technology. 
Disagreements are basically about the timescale -- the rate of travel, 
not the direction of travel. The cautious among us envisage timescales 
of centuries rather than decades for these transformations. Be that as 
it may, the timescales for technological advance are tiny compared to 
the timescales of the Darwinian selection that led to humanity's 
emergence -- and they are less than a millionth of the vast expanses of 
time lying ahead. So the outcomes of future technological evolution may 
surpass humans, intellectually speaking, as much as we surpass a bug.

Is this a cause for pessimism? Should we regret our eventual 
obsolescence or try to prevent it -- to rage, rage against the dying of 
the mites, as it were? The optimistic view is that we humans shouldn't 
feel too sad or too humbled. We are surely not the terminal branch of 
the evolutionary tree, but we could be of special cosmic significance for
jump-starting the transition to silicon-based (and potentially immortal) 
entities, spreading their influence far beyond the Earth and far 
transcending our limitations.

If all goes well, the far future will bear traces of humanity, just as 
our own age retains influences of ancient civilizations (and our bodies 
and minds retain traces of the struggles of our pre-human ancestors). 
Humans and human thoughts might be a transient precursor to the deeper 
cogitations of another culture -- one dominated by machines, extending 
deep into the future and spreading far beyond Earth.

If there's a shadow over this optimistic vision, it is the possibility 
that we might instead be heading for a dead-end, long before this future 
opens up. Astronomers are well aware of the possibility of sudden cosmic 
catastrophes, such as asteroid impacts, that are blind to the wellbeing 
of a planetary biosphere. More alarmingly still, there's the possibility 
that the biosphere might engineer its own destruction by producing 
creatures -- us -- who are too smart for our own good. Some unforeseen 
consequence of one of our powerful new technologies -- synthetic 
biology, some single-minded form of artificial intelligence or something 
else -- might manage to wipe us out, long before any laudable silicon 
descendant takes intelligence to the stars.

We can't do much, yet, about cosmic catastrophes, but we may be able to 
steer ourselves away from some of our homegrown existential risks. It 
certainly makes sense to try. As philosophers such as Derek Parfit 
have pointed out 
<http://commonweb.unifr.ch/artsdean/pub/gestens/f/as/files/4610/17613_101712.pdf>,
absolute extinction is much worse even than a calamity that wipes out, 
say, 95 percent of humanity, because it prevents the existence of all 
the future generations, as well as destroying the present generation. A 
growing number of organizations -- e.g., the Future of Humanity Institute 
in Oxford, the Centre for the Study of Existential Risk in Cambridge and 
the Future of Life Institute at MIT -- are trying to tackle some of these
challenges.

At this point, the pessimist raises an objection. If we are going to 
"mutate" so radically in the long term, then we humans won't survive 
anyway. How is that different from quicker and messier paths to 
extinction? Either way, to misquote Keynes a little: in the long run, we 
humans are all gone. So why all the fuss?

The optimist has two answers to this. The rosier view is that there are 
paths to a post-human future in which each generation is able to feel, 
yes, these are our children, our descendants, our legacy. Children grow 
up, and make their own decisions, some of which might shock even their 
parents -- let alone their more distant ancestors. That's life, and it 
is better in most ways than the alternatives. A fight against extinction 
needn't be a fight for stasis.

The less rosy view is that even this may be too much to hope for. All 
the same, the not-quite-an-optimist says, there's a vast gulf between a 
future rich with strange, alien and morphing descendants and a future in 
which the distinctive strand of intelligence that arose in our corner of 
the galaxy has fallen silent. Our sense of what we value now tells us 
that that silent future is the one we should work to avoid.

There may be billions of Earth-like planets in our galaxy alone, and 
hundreds of billions of other galaxies, similarly rich with the kind of 
environments we know to be favorable for our sort of life. Unless we're 
a lot more special than we at present have any reason to think, it seems 
likely that other technological civilizations have reached this point 
and perhaps had these kinds of thoughts.

If these civilizations differ in what happens next, it may be because 
they differ in what they do with the realization that -- far more than 
ever before and thanks to the extraordinary power of their new 
technologies -- their future is in their own hands (or appendages of 
some other kind, perhaps). It remains to be seen what we humans will 
manage to do with this rather daunting responsibility. Watch this space, 
as an astronomer might say!

This was published in partnership with the Berggruen Philosophy and 
Culture Center <http://philosophyandculture.berggruen.org/> and is part 
of the WorldPost Series on Exponential Technology 
<http://www.huffingtonpost.com/news/exponential-technology/>.


