[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Sun May 27 18:31:54 UTC 2007


John Clark writes

>> But they [the AIs] do not engage in extremely focused behavior
> 
> I don't know what that means; calculating a billion digits of pi seems
> like pretty damn focused behavior to me.
> 
>> unless someone or something has crafted this into them
>
> You obviously think this "someone" can only be a flesh and blood human
> being, although why this should be true you never make clear.

META:  Your tendency, seen over and over again, is to take
single sentences---or here, even fragments of single sentences
---out of context, and then deliver a mini-tirade about them.
I now see more clearly what Heartland was complaining about.

I "obviously" think that this "someone" must be flesh and blood?
Then why the devil do you suppose that I said "someone
or *something*"??  (Italics added)

>> or it evolved under selection pressure.
> 
> As I said before, even an electronic AI that doesn't have an ounce of
> flesh or a drop of blood is still operating under evolutionary pressure,
> just as we are.

META:  Hint: try waiting until you see the following symbol: "."
(a period) before launching.

And no: many, many programs exist that have not evolved under
evolutionary pressure.  Evolutionary pressure, I hardly need remind
you, consists of variation AND selection.  Many programs will
simply be *designed*, by someone or something.
OOPS!  Don't reply YET!   By "something" I mean that your
Version 347812 may simply *design* Version 347813.  Is
that really so fantastically improbable to you, or have I misunderstood?
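
To make the distinction concrete, here is a toy sketch (Python, with a
made-up genome and fitness function, nothing more) of what variation
AND selection amount to, as against outright design:

import random

def mutate(genome):
    # Variation: flip one randomly chosen bit.
    i = random.randrange(len(genome))
    return genome[:i] + str(1 - int(genome[i])) + genome[i + 1:]

def fitness(genome):
    # Selection criterion: a toy stand-in (count the 1-bits).
    return genome.count("1")

population = ["0" * 16 for _ in range(20)]
for generation in range(100):
    population = [mutate(g) for g in population]   # variation
    population.sort(key=fitness, reverse=True)     # selection
    population = population[:10] * 2               # survivors breed

# A *designed* program involves neither step: someone, or something,
# simply writes the next version outright.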

META:  You see that I, at least, can conceive of the possibility of
misunderstanding someone.

>> a behavior that will never simply arise
>> in the absence of a vicious evolutionary struggle.
> 
> Are you seriously suggesting that an AI will not consist of innumerable
> subprograms each competing for runtime on valuable hardware,  that an AI
> will not need to compete for resources because it lives in a different
> universe than we do,  that there will only be one AI?

Except for the "different universe", yes, I think that all of these things
are *possible*.  I admit that, very likely, agoric subprograms will
make up the dominant AIs.  But I deny that either a reality of self
or a strong illusion of self is impossible for them, or---if one does
succeed in totally absorbing the solar system into itself---for it.
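
By "agoric" I mean market-style resource allocation in the sense of
Miller and Drexler's agoric open systems.  A toy sketch (made-up
subprogram names and bids) of what competing for runtime looks like:

bids = {"planner": 5, "vision": 9, "daydream": 1}   # bid = value placed on runtime

def next_to_run(bids):
    # The scarce resource, a time slice, goes to the highest bidder.
    return max(bids, key=bids.get)

print(next_to_run(bids))   # -> vision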

>> your Version 347812 and its direct descendant Version 347813 are just
>> refining initial goals laid down way back
> 
> A 5-line program can behave in ways that are impossible to predict; the
> only way to know what it will do next is to watch it and see.  And it's
> not even connected to the external environment!  You propose a
> hundred-trillion-line program that will operate exactly as we expect it
> to for all eternity, regardless of the astronomical amount of input it
> receives.  I don't think so.

Shades of Heartland's eternal complaint!  Straw man!!  Just
where do you get the idea that I believed that Version 347812
would know everything that Version 347813 was going to do?
"Operate exactly" are your words, and they are a horrible distortion
of what I wrote.

What I said is that an earlier version may *expect* that at least one
or a few of its behaviors can be built into a system that it designs.
Yes, it could always be wrong.  Such is life, either for us or for
them.  But 347812 could prognosticate that it is extremely unlikely,
say, for 347813 to just whistle the tune "Dixie", or to repudiate some
principle that Versions 42 through 347812 have held dear.
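
And John's five-line point is fair as far as it goes; the classic
specimen is the 3n+1 iteration, which nobody has proved halts for
every starting value.  A sketch:

def collatz(n):
    # No proof exists that this loop terminates for every n >= 1;
    # the only known way to find out is to run it and watch.
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return "halted"

print(collatz(27))   # halts after 111 steps, but in general, who knows?

Unpredictability in the small is perfectly compatible with confident
expectations about particular behaviors in the large.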

>> one of the early innovations people and Version 42 will add is to never
>> fall into energy-wasting infinite loops.
> 
> Impossible. Turing proved 70 years ago that in general you can't prove
> you're in an infinite loop.

Yes, thanks.  What I literally wrote was an overstatement.  What I had
in mind, in your context, was a *simple* infinite loop.  It's fairly easy
to check whether or not you are revisiting exactly the same state each
second or each minute: you assign a trivial outside observer that keeps
a fallback checkpoint in hand.
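
In toy form (the step function, states, and checkpoint below are
stand-ins for whatever a real system would provide):

def run_with_watchdog(step, state, checkpoint):
    # Trivial outside observer: watch for an exactly repeated
    # (hashable) state, and fall back to the checkpoint if one
    # recurs.  Turing forbids the general case, not this easy one.
    seen = set()
    while True:
        if state in seen:          # same state revisited: a simple loop
            return checkpoint
        seen.add(state)
        state = step(state)

# Demo: a process that cycles among three states is caught at once.
print(run_with_watchdog(lambda s: (s + 1) % 3, 0, checkpoint="rollback"))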

Lee

P.S.  Sorry for the harsh tone, but I've gathered that your skin puts
that of any rhinoceros to shame.  Besides, I hope it comes through
that I really like your posts and actually agree with you most of the
time.  I figure a little rough play is quite all right by you.  :-)



