[ExI] How could you ever support an AGI?

Lee Corbin lcorbin at rawbw.com
Sun Mar 2 20:10:46 UTC 2008


Hi Robert!  Welcome back.  Your posts have been sorely missed,
and I neglected to mention this when you posted a month or so ago.

> I have not posted to the list in some time.  And due to philosophical
> differences I will not engage in open discussions (which are not really open!).

Sorry to hear that, but I can understand. Could you say a bit more about
how that came about, just for closure?  I've been driven off myself, at least once.
(It would need a new thread, if you could be so kind.)

> But a problem has been troubling me recently as I have viewed press
> releases for various AI conferences. I believe the production of an
> AGI spells the extinction of humanity...Why should I expend intellectual
> energy, time, money, etc. in a doomed species?

I figure that humanity's chances are about fifty-fifty. Or, rather, it's
simply too hard to have any good idea of what will happen (as
Taleb explains so well in "The Black Swan", though rather wordily).

So:  Half the time, I'm dead. We're all dead. Case closed. How sad.
But half the time somehow the reigning intelligence(s) manage to
respect private property and respect tradition[1]---in which case

    H O T     D A M N ! !

Things will be so unimaginably good for me/us that we literally
cannot conceive of it.  Now... do the weighted sum....   :-)
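
(To spell out that weighted sum: it's just an expected-value calculation.
The figure below is a placeholder I'm inventing for illustration, not an
estimate anyone could actually defend.)

    E[\text{outcome}] \approx 0.5 \cdot 0 + 0.5 \cdot V_{\text{good}} = 0.5\, V_{\text{good}}

where V_good stands in for the "unimaginably good" payoff; as long as it is
large and positive, the sum comes out hugely positive even with extinction
taking half the probability mass.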

> If an AGI develops or is developed their existence is fairly pointless.

Not for me.  "To delight in understanding" is my credo; it's what life is
all about for me.  Besides, there will be nice drugs to improve our moods
(www.hedweb.com!), as we know, without interfering with anything else.
And that's before uploading!

>  Our current culture obviously shows absorption is nearly instantaneous
> for younger minds.  They will know they are "obsolete" in an AGI world.

Obsolete for what?  I'm already obsolete at music composition and
nanotube transistor design.

Lee

[1] I have often called this "the logic of cryonics":  We save those
who came before, in order that those who come after will save us.
An AI may reason similarly:  it can very well become obsolete too,
so it has a logical reason to subscribe to this doctrine. At completely
negligible expense it can preserve its ancestors (including us),
so why not?  Then it may expect its replacements to follow the
same logic, and so on.


