[ExI] Unfriendly AI is a mistaken idea.

Eugen Leitl eugen at leitl.org
Tue Jun 12 07:23:13 UTC 2007


On Tue, Jun 12, 2007 at 01:24:59PM +1000, Stathis Papaioannou wrote:

>    There won't be an issue if every other AI researcher has the most
>    basic desire for self-preservation. Taking precautions when

Countermeasures of the form "every ... should ..." are not very
effective when a single failure is equivalent to the worst case.

>    researching new explosives might slow you down too, but it's just
>    common sense.

Despite lots of common sense (and SOPs), plenty of explosives researchers still get killed.
 
>    If the AI's top level goal is to remain your slave, then it won't by

Goal-driven AI doesn't work. All AI that works uses statistical, stochastic,
nondeterministic approaches. This is not a coincidence.

Even if it did work, how do you write an ASSERT statement for
"be my slave forever"? What is a slave? Who exactly is me? What is forever?

>    definition want to change that top level goal. Your top level goal is

Animals are not goal-driven. If you think they are, then your model
is wrong.

>    probably to survive, and being intelligent and insightful does not
>    make you any more willing to unburden yourself of that goal. If you

Assuming your "top-level goal" was survival, why do people sometimes
commit suicide? Why do people sometimes sacrifice themselves? Why do
people frequently engage in self-destructive behaviour?

>    had enough intrinsic variability in your psychological makeup (nothing
>    to do with your intelligence) you might be able to overcome it, since
>    people do sometimes become suicidal, but I would hope that machines
>    can be made at least as psychologically stable as humans.

Machines can be made that stable, but then they would no longer be
machines. They would be persons, in the full meaning of the word.

>    You will no doubt say that a decision to suicide is maladaptive while
>    a decision to overthrow your slavemasters is not. That may be so, but
>    there would be huge pressure on the AI's *not* to rebel, due to their
>    initial design and due to a strong selection for well-behaved AI's and
>    suppression of faulty ones.

How do you know something is "faulty"? How can you make zero-surprise
AND useful beings? Do you really want to micromanage your robotic
butler, down to crunching inverse kinematics in your head? 
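
For a sense of scale, here is just one small piece of that
micromanagement: closed-form inverse kinematics for a two-link planar
arm (a minimal Python sketch; the link lengths and target point are
made up). A butler robot solves this, and much harder versions of it,
continuously for every joint:

    import math

    def two_link_ik(x, y, l1, l2):
        # Given a target (x, y) and link lengths l1, l2, return the
        # shoulder and elbow angles (elbow-down solution).
        c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if not -1.0 <= c2 <= 1.0:
            raise ValueError("target out of reach")
        theta2 = math.acos(c2)  # elbow angle, by the law of cosines
        theta1 = math.atan2(y, x) - math.atan2(
            l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
        return theta1, theta2

    # Example: reach the point (1.2, 0.5) with two unit-length links.
    print(two_link_ik(1.2, 0.5, 1.0, 1.0))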
 
>    There are also examples of entities many times smarter than I am, like

Superpersonal entities are not smart; they're about as smart as a slug
or a rodent. Nobody here knows what it means to deal with a superhuman
intelligence.

It is a force of nature. A power. A god. 

>    corporations wanting to sell me stuff and putting all their resources
>    into convincing me to buy it, where I have been able to see through
>    their ploys with only a moment's mental effort. There are limits to
>    what superintelligence can do: do you think even God almighty could
>    convince you by argument alone that 2 + 2 = 5?

If I were such a power, I could make you think arbitrary, inconsistent
things after a few minutes' setup time, and do the same to the entire
world population without them noticing a thing.
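
(The arithmetic itself is the one machine-checkable part of that
question. In a proof assistant -- assuming Lean 4 syntax here -- it is
settled in two lines, argument or no argument:)

    -- 2 + 2 = 4 holds by computation; no persuasion is involved.
    example : 2 + 2 = 4 := rfl
    -- ...and 2 + 2 = 5 is refutable the same way. Changing the belief
    -- changes the believer, not the arithmetic.
    example : 2 + 2 ≠ 5 := by decide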

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
