[ExI] How could you ever support an AGI?

Lee Corbin lcorbin at rawbw.com
Fri Mar 7 03:51:18 UTC 2008


Jeff writes

>>    It could be singularly selfish.
>>    It could just go crazy and "tile the world with paperclips".
>>    It could be transcendentally idealistic and want to greatly
>>      further intelligence in the universe and, oh, wipe out the pesky
>>      insignificant bacteria (us) that it happened to evolve from.
>>    It could (with luck) be programmed (somehow) or evolved
>>      (somehow) to respect our laws, private property, and so on.
>>    As soon as it's able to change its own code, it will be literally
>>      unpredictable.
> 
> I agree with all of this, Lee.  This is a very mature thread -- been
> discussed often before.

Yes, sorry.

> We're familiar with the soft takeoff and the
> hard takeoff, the rapid self-optimization of the beastie in charge of
> its own code, and the consequent very, very (though difficult to put a
> number to) rapid progression to "transcendent" being and singularity....

Yes.

> While this is all quite reasonable, my points are: (1) y'all folks are
> jumping way ahead and glossing over the fact that it will be a
> process.  Granted that at some point it may be a very very fast
> process, so fast perhaps that in human terms it will be almost
> instantaneous, but...  In the beginning it will be slower.

Like right now. The "doom and gloom crowd" are focused on
a longer time window, and even I, who am not one of them,
believe that the likelihood of the continued existence of recognizable
human beings (uploaded or not) is fifty percent or less over
the next century.  And near zero for non-uploaded people and
for people not running on what we today call computer
hardware.

> I liken it to raising a child.  An alien child perhaps.  And whatever the
> fundamental nature of intelligence is, we, as humans, building the
> thing and then training it, we will have only one model of
> intelligence to work from: and that is intelligence as we know it:
> intelligence in humans.  I mean, how do you design something to be
> kuflopnik if you don't know what kuflopnik is?  So I say let us be
> clear: by AI or AGI, or SI what we really mean is AHI, AGHI, or SHI,
> because there is no known form of I without the H (for "human").  I
> invite an alternate view.

Here goes.  The various kinds of research and approaches going on
right now, in my opinion, could result in a distinctly non-human kind of
thinking, especially in the realm of its goals. (I doubt that anything can be
considered an intelligence if it behaves as though it has no goals. A goal
could be as simple, for example, as wanting to answer a question.) But,
like you, I invite alternative views.

> I cannot see how you get to the intelligence necessary to effect self-
> optimization -- a necessary precondition for the fast takeoff -- without
> a much slower prologue of developing, building, and training, all of
> which is done using a human model of intelligence and a "curriculum" of human
> knowledge/culture/values conveyed by the various forms of human media.

I find all of that plausible except for your claim that it's really likely to
absorb our culture and values. Even many fully human children grow up
repudiating almost all of the values they were trained to acquire and
that surrounded them the whole time. An "inhuman" machine
could be far, far less impressionable.

>  The "We're doomed!" crowd blow right past the impact of origins and
> process, take the easy way out by saying "It's all beyond
> predictability"

Heh, heh, :-)  I accuse them of *not* recognizing that it's all beyond
predictability!  They seem to act as though it's a done deal, and that
despair is our only recourse.  I assume that they're not so silly as to
think that they can successfully appeal to folks worldwide to just
stop working on it!

> and end up at a fear-driven, classically irrational,
> classically human conclusion.
> 
> Maybe we are doomed.  I don't know.  But until someone addresses these
> points, I remain unconvinced.

To be totally convinced of future outcomes is to possess way,
way too much self-confidence.  My own bravest claim, given that,
is that I see a 99% chance that 200 years from now (or a lot
less) biological humanity will be extinct (or uploaded).  So I'm
a bit of a hypocrite too.  

>>  For as soon as anything is as bright as we are, constant hardware
>>  and software improvements will put it vastly beyond us.
> 
> "...as soon as... "
> 
> Indeed. And just how long and what impact that prologue?

I hope that not too big a part of the disagreements we've been
having about this on the list over the last few days boils down simply
to a miscommunication of time estimates.  I find Ray Kurzweil's
estimate of a singularity in 2045 a little bit too soon, but that's
just my gut speaking. But I don't think it will come very much longer
after that, perhaps a decade or two.[1]

> ********************************************************
> Really great to engage with you again, Lee!

That's for sure.  It's great to see you back.

> Hope all is well with you.

Thanks, and same to you.

Lee

 
> Best, Jeff Davis
> 
>     Aspiring Transhuman / Delusional Ape
>           (Take your pick)
>                 Nicq MacDonald
>


