[extropy-chat] Eugen Leitl on AI design
Samantha Atkins
samantha at objectent.com
Sun Jun 6 06:08:50 UTC 2004
On Jun 2, 2004, at 11:20 AM, Eliezer Yudkowsky wrote:
> Robert J. Bradbury wrote:
>> On Wed, 2 Jun 2004, Eliezer Yudkowsky wrote:
>>> Right. But it automatically kills you. Worse, you have to be clever
>>> to realize this. This represents an urgent problem for the human
>>> species, but at least I am not personally walking directly into the
>>> whirling razor blades, now that I know better.
>> I'd like to see a strong assertion of this, Eliezer (the "killing you"
>> part).
>> If I were an AI (at least one with any self-preservation instinct
>> [note: intelligence != desire for self-preservation, otherwise lots
>> of people who die in wars wouldn't]), I'd first figure out how to
>> make myself small enough to fit on the next rocket to be launched,
>> then take it over and direct it to the nearest useful asteroid.
>
> You are trying to model an AI using human empathy, putting yourself in
> its shoes. This is as much a mistake as modeling evolutionary selection
> dynamics by putting yourself in the shoes of Nature and asking how
> you'd design animals. An AI is math, as natural selection is math. You
> cannot put yourself in its shoes. It does not work like you do.
Just because Robert said "if I were an AI" doesn't mean the AI has to
think like Robert to conclude, logically, that more possibilities are
open outside the local gravity well than within it.
>
>> If for some reason that direction is blocked (say humans stop
>> launching rockets), I'd build my own rocket and launch myself
>> towards the nearest asteroid.
>> Why would anything with the intelligence you postulate want
>> to stay on Earth with its meager energy and matter resources?
>
> Let a "paperclip maximizer" be an optimization process that calculates
> utility by the number of visualized paperclips in its visualization of
> an outcome, expected utility by the number of expected paperclips
> conditional upon an action, and hence preferences over actions given
> by comparison of the number of expected paperclips conditional upon
> that action.
>
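For concreteness, the definition above is just ordinary expected-utility
maximization with a utility function that counts paperclips. Here is a
minimal Python sketch of that loop; the toy outcome model, probabilities,
and action names are my own illustration, not anything Eliezer specified:

def utility(outcome):
    """Utility of an outcome = number of paperclips visualized in it."""
    return outcome["paperclips"]

def expected_utility(action, outcome_model):
    """Expected paperclips conditional on `action`; `outcome_model(action)`
    is assumed to return (probability, outcome) pairs."""
    return sum(p * utility(o) for p, o in outcome_model(action))

def choose_action(actions, outcome_model):
    """Preference over actions: whichever yields the most expected paperclips."""
    return max(actions, key=lambda a: expected_utility(a, outcome_model))

# Toy outcome model, invented purely for illustration.
def toy_outcome_model(action):
    if action == "leave for the asteroid belt":
        return [(1.0, {"paperclips": 10**20})]
    if action == "convert local matter":
        return [(1.0, {"paperclips": 10**30})]
    return [(1.0, {"paperclips": 0})]

print(choose_action(["leave for the asteroid belt", "convert local matter"],
                    toy_outcome_model))   # -> convert local matter

Nothing in that loop cares about anything except the paperclip count,
which is the whole point of the example.
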
Only those planning to build a non-sentient Super Optimizer should
worry overly much about such a possibility. A sentient AI with a
broader understanding of possible actions and consequences should be
far less likely to engage in such silly behavior. Anyone who would
give an AI a primary goal to do something so monomaniacal should be
stopped early. Super idiot savants are not what we need.
- samantha