[ExI] Watson on NOVA

spike spike66 at att.net
Thu Feb 17 16:06:31 UTC 2011


From: extropy-chat-bounces at lists.extropy.org On Behalf Of Kelly Anderson
Subject: Re: [ExI] Watson on NOVA

On Tue, Feb 15, 2011 at 9:58 AM, spike <spike66 at att.net> wrote:
>> Ja, but when you say "research" in reference to AI, keep in mind the
>> actual goal isn't the creation of AGI, but rather the creation of AGI
>> that doesn't kill us.

>Why is that the goal? As extropians, isn't the idea to reduce entropy?

We need AGI to figure out how to do nanotech, and nanotech to figure out how
to upload by mapping the physical configuration of our brains.  If that can
be done while we are alive, that would be great.  If the brain needs to be
frozen first, well, that's better than the alternative.

>But if humans can create the AI that creates the replicating nanobots, then
>in a sense it isn't out of human reach...

Ja.  I think AGI is the best and possibly only path to replicating nanotech.

>> ...On the other hand, if we succeed at AI but do it wrong, we are all
>> doomed right now.  It will decide it doesn't need us, or just see no
>> reason why we are useful for anything.

>And that is a bad thing exactly how?

If we do AGI wrong, and it has no empathy with humans, it may decide to
convert *all* the available metals in the solar system and use them to
play chess or search for Mersenne primes.  I love both of those things, but
if every atom in the solar system is set to doing that, it would be a bad
thing.

>> ...But I am reluctant to risk my children's and grandchildren's 100
>> years of meat world existence on just getting AI going as quickly as
>> possible.

>Honestly, I don't think we have much of a choice about when AI gets going.
>We can all make choices as individuals, but I see it as kind of inevitable.
>Ray K seems to have this mindset as well, so I feel like I'm in pretty good
>company on this one.

No sir, I disagree with even Ray K.  A fatalistic attitude is dangerous in
this context.  We must do whatever we can to see to it that we do have a
choice about when AI gets going.

>> ...Nuclear bombs preceded nuclear power plants.

>Yes, and many of the most interesting AI applications are no doubt military
>in nature.  -Kelly


If true AGI is used militarily, then all humanity is finished, for
eventually the weaponized AGI will find friend and foe indistinguishable.

spike






