[ExI] Hard Takeoff

spike spike66 at att.net
Tue Nov 16 05:13:01 UTC 2010


 

 

From: extropy-chat-bounces at lists.extropy.org
[mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Michael
Anissimov



>Heya Spike.

Heya back Michael!  The level of discourse here has improved by an order of
magnitude since you started posting last week.  Thanks!  You SIAI guys are
aaaallways welcome here.

On Sun, Nov 14, 2010 at 10:10 PM, spike <spike66 at att.net> wrote:


>>I am not advocating a Bill Joy approach of eschewing AI research, just the
opposite.  A no-singularity future is 100% lethal to every one of us, every
one of our children and their children forever.  A singularity gives us some
hope, but also much danger.  The outcome is far less predictable than
nuclear fission.


>Would you say the same thing if the Intelligence Explosion were initiated
by the most trustworthy and altruistic human being in the world, if one
could be found?...

 

Ja, I would say nearly the same thing; however, I cheerfully agree we have a
muuuch better chance of a good outcome if the explosion is initiated by the
most trustworthy and altruistic among us carbon units.  I am a big fan of
what you guys are doing at SIAI.  It pleases me to see you working the
problem, for without you, the inevitable Intelligence Explosion falls to the
next bunch, whom I do not know, and who may or may not make it their focus
to produce a friendly AI.  That would reduce the probability of a good
outcome.

 

That being said:

 

>In general, I agree with you except the last sentence.


>michael.anissimov at singinst.org

>Singularity Institute

>Media Director

 

I do hope you are right in that disagreement, but I will defend my pessimism
in any case.  The engineering world is filled with problems which
unexpectedly defeated their designers or did something completely
unexpected.  In my own field, the classic example is the hybrid aerospike
engine, which was designed to burn both kerosene and liquid hydrogen, and
also to throttle efficiently.  If we can get a single engine to do that,
optimizing thrust at varying altitudes and burning two different fuels
without duplicating nozzles, pumps, thrust vector control, and all that
heavy stuff, then we can achieve single stage to orbit.  We poured tons of
money into the effort, but that seemingly straightforward engineering
problem unexpectedly defeated us.  We cannot use a single engine to burn
both fuels, and consequently we have no SSTO to this day.  The commies
worked the same problem, and it kicked their asses too, as good as they are
at large-scale propulsion.  There were unknowns that no one knew were
unknowns.

 

It could be my own ignorance of the field (I hope so), but it seems to me
that there are waaay too many unknowns in what an actual intelligence
(artificial or bio) will do.  That appears to me to be inherent in the field
of intelligence.  Were you to suggest literature, I would be willing to
study it.  I want to encourage you lads up there in Palo Alto.  Your
cheering section is going wild.

 

We know the path to artificial intelligence is littered with the corpses of
those who have gone before.  The path beyond artificial intelligence may one
day be littered with the corpses of our dreams, of our visions, of
ourselves.

 

spike