[ExI] How could you ever support an AGI?
Richard Loosemore
rpwl at lightlink.com
Thu Mar 6 21:51:51 UTC 2008
Jeff Davis wrote:
> On Wed, Mar 5, 2008 at 9:27 PM, Lee Corbin <lcorbin at rawbw.com> wrote:
>
>> How can you "conclude" what a General Artificial Intelligence
>> (AGI) will think about humanity? But the danger that Robert
>> Bradbury, who started this thread, sees is that once it's at
>> human level intelligence, it will quickly go beyond, and
>> be utterly unpredictable. If it is a lot smarter than we are,
>> there is no telling what it might think.
>>
>> It could be singularly selfish.
>> It could just go crazy and "tile the world with paperclips".
>> It could be transcendentally idealistic and want to greatly
>> further intelligence in the universe and, oh, wipe out the pesky
>> insignificant bacteria (us) that it happened to evolve from.
>> It could (with luck) be programmed (somehow) or evolved
>> (somehow) to respect our laws, private property, and so on.
>> As soon as it's able to change its own code, it will be literally
>> unpredictable.
>
> I agree with all of this, Lee. This is a very mature thread -- been
> discussed often before. We're familiar with the soft takeoff and the
> hard takeoff, the rapid self-optimization of the beastie in charge of
> its own code, and the consequent very, very (though difficult to put a
> number to) rapid progression to "transcendent" being and singularity.
> I recognize that this implies so vast a degree of "superiority"
> relative to our pitifully primitive form of intelligence that the
> relationship is often compared to the humans-to-bacteria relationship.
This line of argument makes the following assumption:
*** Any AGI sufficiently intelligent to be a threat would start off
in such a state that its drive system (its motivations or goals, to
speak loosely) would either be unknowable by us, or deliberately
programmed to be malicious, or so unstable that it would quickly
deviate from its initial set.
This assumption is massively dependent on the actual design of the AGI
itself. Nobody can state that an AGI would behave in this or that way
without being very specific about the design of the AGI they are talking
about.
The problem is that many people assume a design for the AGI's motivation
system that is theoretically untenable. To be blunt, it just won't
work. There are a variety of reasons why it won't work, but regardless
of what those reasons actually are, the subject of any discussion of
what an AGI "would" do has to be a discussion of its motivation-system
design.
By contrast, most discussions I have seen are driven by wild,
unsupported assertions about what an AGI would do! Either that, or they
contain assertions about supposed real threats (see the list above)
that are actually trivially easy to avoid or deeply unlikely.
I think you would agree that it does not matter how many times the
thread has been discussed, if all that has been said is built on
assumptions that have to be thrown out.
Richard Loosemore