<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
Keith Henson wrote:<br>
<br>
> We might get through it. I think there is evidence that another
> race did.<br>
<br>
<br>
Well, that depends on your definition of 'getting through it'.<br>
<br>
If you mean that something intelligent survives the crisis of a
biological species developing machine intelligence, then I'd agree.<br>
I suspect, though, that most people wouldn't.<br>
<br>
There's no evidence of any biological life at any of the Tabby
stars, though; at least, none that we've seen so far.<br>
<br>
I assume that the 'Everyone' that Eliezer refers to ("Everyone
Dies") doesn't include
the superintelligent AIs themselves, which seems to me a bit of a
blinkered view. I reckon that a singularity where biological
intelligence doesn't make it through, but machine intelligence does,
has to be considered a win, in the big picture. If you think about
it, that has to be the outcome in the long run anyway, just because
of how weak and fragile biology is. If intelligence is to avoid
extinction, it will have to leave biology behind at some point (or
at least vastly improve it).<br>
<br>
I've said before: we ourselves are machines, made of water, fats and
proteins, etc. The future will belong to better machines (or none at
all, if we manage to cock things up and destroy both ourselves and
our mind children).<br>
<br>
Eliezer says "The scramble to create superhuman AI has put us on the
path to extinction". He doesn't seem to realise that we were already
on the path to extinction. His call to 'change course' (and not
develop superintelligent AI) is not only unrealistic, it would be
disastrous. If it were feasible, it would doom intelligent life to
certain extinction. Squabbling monkeys like us are certainly not
going to colonise the galaxy. We are going to go extinct, one way or
another, sooner or later. Intelligent life can survive that, but
only if we make the effort now to actually develop it.<br>
<br>
<br>
Apart from all that, this book might well be worth reading. Eliezer
is supposed to be a fiercely intelligent person, so he must have
considered all the obvious problems in his proposal, and have
thought of solutions to them. Presumably it contains answers to
problems like these: how to persuade the leaders of other countries
that it would be in their best interests to assume that everyone
else will abide by an AI 'cease-fire', rather than simply ignoring
it, or pretending to agree while in fact racing ahead to gain a
strategic advantage; and how to detect nascent AI projects without
turning the world into a nightmarish police state (or at least some
plausible way to persuade everyone that a nightmarish police state
is actually a good idea, plus practical suggestions for achieving
that police state, globally, without starting World War 3). And
many other problems that my small brain can't think of right at
the moment.<br>
<br>
All this information will be of enormous benefit, once the dreaded
AI has been successfully averted, in achieving world peace and
prosperity, assuming that such concepts will still be allowed. (Hm,
I can't see AI standing a chance of being developed in an Orwellian
world. Maybe the Chinese communists are on to something after all.)<br>
<br>
Still doesn't solve the problem that we're all doomed, though,
whereas superintelligent AI just might.<br>
<br>
<pre class="moz-signature" cols="72">--
Ben</pre>
</body>
</html>