[extropy-chat] AI design

Eugen Leitl eugen at leitl.org
Fri Jun 4 18:56:53 UTC 2004


On Fri, Jun 04, 2004 at 02:02:06PM -0400, Dan Clemmensen wrote:

> This thread has focused almost exclusively on the worst-case outcome of 
> creating an SI. I think extropians have a fairly good idea of the 

Of course, because we're familiar with the status quo. It ain't that
horrible, you know.

> magnitude of potential best-case outcomes, also.  However, we've been 

Sure. You drink this Kool-Aid, you get to hitch a ride on the alien
spaceship. That's the best-case outcome. Never mind the worst-case outcome,
and the outcome where you -- uh, thanks -- politely decline the invitation.

> neglecting the more mundane cost/benefit analysis of deferral. The 
> worst-case outcomes of deferral are pretty horrific. It is quite easy to 

How so? I didn't have to wade through burning brimstone on my way to work.
People have been dying for a long time now. What of mass CR, what of cryonics,
medical nanotechnology, uploading? None of it strikes me as a hell-on-earth
enhancing technology.

> envision plausible scenarios in which humanity destroys civilization, 
> humanity, the ecosystem, or the earth, without any SI involvement. there 

I don't think any catastrophic scenario not involving an SI is very
realistic. Slow poisoning, maybe, but it *is* very slow. This is one hell of
a smart culture; it's been dealing with the pee-in-the-pool problem for a
while now.

> are also several classes of cosmic catastrophe that can destroy 
> humanity. A "good" SI could prevent these disasters. So we need to 

While mapping out possible impactors and building an early-warning and
reaction system are all worthwhile activities (given the budget, and the
potential ROI), the chances of such world-enders occurring within the next
30-50 years are effectively zero. Not worth losing much sleep over, imo.

> analyze the relative risks.
> 
> Moving back from the worst cases, we pay a huge everyday price by 
> deferring the SI. If the SI bootstraps a hard-takeoff singularity, or 
> even if it "just" massively increases productivity, millions of lives 

Unless we've got lots of really good prototypes showing a good chance of
guardian-controlled ascent, with long-term trajectory containment, trying to
build one is our best chance to reliably kill off everybody for good. This is
not just a probable side effect, it is by far the likeliest outcome.

Our best protection seems to be that empirically it's really, really hard to
do.

> will be saved. Deferring the SI effectively kills those people.

Supersized meals and contaminated water kill people.

So, do you prefer cherry, or grape?

-- 
Eugen* Leitl <http://leitl.org>
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net

