[ExI] Unfriendly AI is a mistaken idea.
Lee Corbin
lcorbin at rawbw.com
Thu May 24 04:20:32 UTC 2007
Christopher writes
> [Lee wrote]
>> For just as you might keep a photograph of your
>> great great grandfather, so might an AI (friendly
>> or not) wish to maintain an infinitesimal record
>> of its progenitors.
>
> The strong possibility also exists that it would see at least some
> direct value in preserving us, in that any retroactive simulations it
> might have a need to run of humans or our systemic civilization...
But *static* data would suffice for this! The AI need not grant us
any runtime whatsoever if that's all it was interested in. And
we should be able to hope for more than mindless re-runs of
the Battle of Gettysburg or whatever.
And I do mean mindless. I see a truly advanced AI as being
less interested in running human simulations than we are in
filling a glass with water just to watch it fill.
> (from which it was spawned) would necessarily risk lacking
> those aspects of the systemic dynamics that its past self was
> blind to, at the time we ceased to exist.
Right. But that could be done with static data, or with
computations that afforded you and me no runtime whatever.
> From my perspective, Friendly AI is in large part about ensuring that we
> pursue designs that are actually capable of representing that concern
> [of keeping complete information about us around].
Yes, I agree. But I'm hoping that the AI or SAIs will grant us
more than that, because I really don't care whether they keep
records of me around if I'm not going to get any benefit from them.
> The picture people have a hard time avoiding is that of another really,
> really, really smart human having power at their disposal, and we've
> rightly evolved to be worried about that. But this is a different
> issue, and deserves our deepest efforts to sever our anthropomorphisms
> wherever we can.
Yes, indeed. People don't realize or don't appreciate (a) that it's
possible to have two copies running concurrently (one staying
and one leaving, for example), (b) that emotions will be entirely
under formal control, as in Philip K. Dick's "mood organ", (c)
that there is no automatic fear of death built into an AI, and
(d) that whatever emotions they do have will be, like ours for
the most part, either rational temporary insanity or shorthand
for unconscious summaries of large amounts of data (as in the
Damasio card experiments), and so on.
Lee