[ExI] How could you ever support an AGI?

Jeff Davis jrd1415 at gmail.com
Thu Mar 6 20:37:47 UTC 2008


On Wed, Mar 5, 2008 at 9:27 PM, Lee Corbin <lcorbin at rawbw.com> wrote:

>  How can you "conclude" what an Artificial General Intelligence
>  (AGI) will think about humanity?  But the danger that Robert
>  Bradbury, who started this thread, sees is that once it's at
>  human-level intelligence, it will quickly go beyond, and
>  be utterly unpredictable. If it is a lot smarter than we are,
>  there is no telling what it might think.
>
>    It could be singularly selfish.
>    It could just go crazy and "tile the world with paperclips".
>    It could be transcendentally idealistic and want to greatly
>      further intelligence in the universe and, oh, wipe out the pesky
>      insignificant bacteria (us) that it happened to evolve from.
>    It could (with luck) be programmed (somehow) or evolved
>      (somehow) to respect our laws, private property, and so on.
>    As soon as it's able to change its own code, it will be literally
>      unpredictable.

I agree with all of this, Lee.  This is a very mature thread -- been
discussed often before.  We're familiar with the soft takeoff and the
hard takeoff, the rapid self-optimization of the beastie in charge of
its own code, and the consequent very rapid (though difficult to put
a number to) progression to "transcendent" being and singularity.
I recognize that this implies so vast a degree of "superiority"
relative to our pitifully primitive form of intelligence that the
relationship is often compared to that of humans to bacteria.

While this is all quite reasonable, my points are: (1) y'all folks are
jumping way ahead and glossing over the fact that it will be a
process.  Granted, at some point it may be a very fast process, so
fast perhaps that in human terms it will be almost instantaneous,
but...  in the beginning it will be slower.  I liken it to raising a
child.  An alien child, perhaps.  And whatever the fundamental nature
of intelligence is, we humans, building the thing and then training
it, will have only one model of intelligence to work from:
intelligence as we know it, which is intelligence in humans.  I mean,
how do you design something to be kuflopnik if you don't know what
kuflopnik is?  So let us be clear: by AI, AGI, or SI what we really
mean is AHI, AGHI, or SHI, because there is no known form of I
without the H (for "human").  I invite an alternate view.

That said, I get to point #2.  Everything arises out of process and
bears the imprint of its origins.  The origin of this artificial
intelligence, no matter how transcendent, will be a human one.  Not
biological, but cultural and intellectual.  (Even our own prokaryotic
bacterial origins are extant in our eukaryotic human cells, and the
legacy of the most ancient somatic "motivations" is the foundation of
our own somatic "motivations".)  I cannot see how you get to the
intelligence necessary to effect self-optimization -- a necessary
precondition for the fast takeoff -- without a much slower prologue
of developing, building, and training, all of which is done using a
human model of intelligence and a "curriculum" of human
knowledge/culture/values conveyed by the various forms of human
media.  The "We're doomed!" crowd blows right past the impact of
origins and process, takes the easy way out by saying "It's all
beyond predictability," and ends up at a fear-driven, classically
irrational, classically human conclusion.

Maybe we are doomed.  I don't know.  But until someone addresses these
points, I remain unconvinced.

>  > [Alex wrote]
>
> >> Expecting It to adhere to these moral codes would be
>  >> akin to you or I adhering to the moral codes of Ants.
>  >
>  > Too big a jump, at least for the first generation AI.
>
>  But the "first generation" may not last very long at all.

Granted, but versions 0.1 to 0.99 will.

>  For as
>  soon as anything is as bright as we are, constant hardware
>  and software improvements will put it vastly beyond us.
>

"...as soon as... "

Indeed. And just how long and what impact that prologue?


********************************************************

Really great to engage with you again, Lee!  Hope all is well with you.

Best, Jeff Davis

     Aspiring Transhuman / Delusional Ape
           (Take your pick)
                 Nicq MacDonald


