[ExI] How could you ever support an AGI?

John Grigg possiblepaths2050 at gmail.com
Sun Mar 2 21:40:07 UTC 2008


Robert Bradbury (quoted lines marked with >) and Lee Corbin wrote:
> But a problem has been troubling me recently as I have viewed press
> releases for various AI conferences. I believe the production of an
> AGI spells the extinction of humanity...Why should I expend intellectual
> energy, time, money, etc. in a doomed species?
I figure that humanity's chances are about fifty-fifty. Or, rather, it's
absolutely too hard to have a good idea of what will happen (as
Taleb explains so well in "The Black Swan", though rather wordily).

So:  Half the time, I'm dead. We're all dead. Case closed. How sad.
But half the time somehow the reigning intelligence(s) manage to
respect private property and respect tradition[1]---in which case

   H O T     D A M N ! !

Things will be so literally unimaginably good for me/us that we literally
cannot conceive of it.  Now... do the weighted sum....   :-)


> If an AGI develops or is developed, their existence is fairly pointless.

Not for me.  "To delight in understanding" is my credo, what life is
all about for me.  Besides, there'll be nice drugs that will help moods
(www.hedweb.com!), as we know, without interfering with other things.
And that's before uploading!


>  Our current culture obviously shows absorption is nearly instantaneous
> for younger minds.  They will know they are "obsolete" in an AGI world.

Obsolete for what?  I'm already obsolete in music composing and
nanotube transistor design.

Lee

[1] I have often called this "the logic of cryonics":  We save those
who came before, in order that those who come after will save us.
An AI may reason similarly:  it can very well become obsolete too,
so it has logical reason to subscribe to this doctrine. At completely
negligible expense it can preserve its ancestors (including us),
so why not?  Then it may expect its replacements to follow the
same logic, and so on.
(end of excerpt, but hopefully not humanity)


Wow!  This exchange between Robert and Lee reminded me of the "good old
days" of the Extropy list (back before even the dawn of the 21st century). :
)  I have very fond memories of Robert Bradbury due to his online postings
and having met him in person.  He helped me attend Extro 5, where I had some
great experiences.  I wish Robert would come back to the group and that he
could feel free to speak his mind.

I am disturbed that he now views the prospects for humanity as being very
slim to none.  Robert Bradbury is a very bright and educated man, so I take
what he says into serious consideration.  But I do wonder if matters in his
personal life have somehow clouded his perspective (this is only conjecture
and I mean no offense to you, Robert).  I say this because sometimes when I
let life get me down, I temporarily develop a very negative
worldview.

Lee's thoughtful words put a smile on my face and helped boost my spirits
because of his very life-affirming and enthusiastic logic.  I realize Robert
has a point, but we must not give in to despair.

Lee Corbin wrote, regarding whether things should actually work out for humanity:
> Things will be so literally unimaginably good for me/us that we literally
> cannot conceive of it.  Now... do the weighted sum....   :-)

I have to say..., "I don't know, Lee, I can literally conceive of ALOT of
good things!" LOL : )  But I understand what you mean.  And while I don't
envision an utterly problem-free utopia, I do think we will look back on
this present time as darkly medieval by comparison.

John Grigg

