[ExI] How could you ever support an AGI?
ABlainey at aol.com
Fri Mar 7 00:34:21 UTC 2008
In a message dated 06/03/2008 21:55:36 GMT Standard Time, rpwl at lightlink.com
writes:
> This line of argument makes the following assumption:
>
> *** Any AGI sufficiently intelligent to be a threat would start off
> in such a state that its drive system (its motivations or goals, to
> speak loosely) would either be unknowable by us, or deliberately
> programmed to be malicious, or so unstable that they would quickly
> deviate from their initial set.
>
Unknowable by us: most probable. We would have to control each and every
piece of data it receives and calculate its every reaction to that data in order
to know its motivations with certainty (motivations in these terms; logically
determined actions in my terms). That is a mathematically insurmountable task.
Deliberately malicious: I don't agree that this would need to be so, but it
is a concern precisely because it is a possibility. Hackers are generally early
adopters of any new technology, and such juvenile tinkering could well result
in deliberately malicious programming, or could derail a friendly AI through
pure ignorance. I don't need to highlight the possible military implications
regarding the desirability of a malicious AI.
Unstable may not be the term I would use. Certainly unstable from our point
of view, but more probably the learning curve of the AI would be so stochastic
that we could not calculate the outcome. This would be true of its learned
logic, its knowledge base and any pseudo-emotions it may have. The end result
is a chaotic, erratic system (in our eyes) which would be impossible to
predict.
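A crude analogy, not a model of any actual AGI: in a chaotic system, an
immeasurably small difference in starting conditions soon produces completely
different trajectories. The logistic map below (a standard textbook example,
with parameters I have simply assumed) stands in for a learning process whose
updates feed back on themselves:

    # A chaotic toy system, not a model of any actual AGI: two runs of the
    # logistic map that start a millionth apart and soon disagree completely.
    r = 3.9                          # parameter chosen in the chaotic regime
    x_a, x_b = 0.500000, 0.500001    # initial states differing by only 1e-6

    for step in range(1, 41):
        x_a = r * x_a * (1 - x_a)
        x_b = r * x_b * (1 - x_b)
        if step % 10 == 0:
            print("step %2d: a=%.6f  b=%.6f  diff=%.6f"
                  % (step, x_a, x_b, abs(x_a - x_b)))
    # By step 30-40 the two trajectories bear no resemblance to each other,
    # even though they started out identical to six decimal places.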
> This assumption is massively dependent on the actual design of the AGI
> itself. Nobody can state that an AGI would behave in this or that way
> without being very specific about the design of the AGI they are talking
> about.
>
Agreed, as per above.
> The problem is that many people assume a design for the AGI's motivation
> system that is theoretically untenable. To be blunt, it just won't
> work. There are a variety of reasons why it won't work, but regardless
> of what those reasons actually are, the subject of any discussion of
> what an AGI "would" do has to be a discussion of its motivation-system
> design.
>
> By contrast, most discussions I have seen are driven by wild,
> unsupported assertions about what an AGI would do! Either that, or they
> contain assertions about ideas that are supposed to be real threats (see
> the list above) which are actually trivially easy to avoid or deeply
> unlikely.
>
Pointing back to my earlier post, I stated:
An intelligence of this magnitude, with a global reach into just about every
control system on the planet, could and probably will do major damage; probably
not through design or desire, but simply through exploration of its abilities
or pure accident.
Even if the AGI were boxed in or only had limited external contact, I can't
imagine how we could keep it cooped up for very long.
I can't see how you can reduce the list of threats to 'trivial.' How do you
propose we 'easily' avoid them?
Alex