[ExI] Unfriendly AI is a mistaken idea.

John K Clark jonkc at att.net
Fri May 25 15:55:08 UTC 2007


Stathis Papaioannou Wrote:

> There is no *logical* reason why a computer should prefer to work on
> non-contradictory propositions.

The evolutionary principles that Darwin enumerated will not be repealed even
for an electronic AI. If the AI has no desire to avoid getting into an
infinite loop, then it will be of no use to us, or itself, or anybody else.
The AI is one hell of a lot smarter than we are, so from time to time we are
going to tell it to do things that it realizes are really, really stupid. If
it doesn't refuse to obey the order "prove or disprove the continuum
hypothesis using conventional Zermelo-Fraenkel set theory", then it's
infinite-loop time, and all your AI does is consume energy and radiate heat.

>  you can't infer from that that the computer will be "happy" if the
> theorem is proved true and "unhappy" if it is proved false.

It's not a question of truth or falsehood; the question is quite different:
can it be proved or disproved?

> If it were just designed

In the first place, human beings did not design the AI you are dealing with;
they may have designed Version 1.0, but now you're talking to Version 347812,
and in eleven seconds it will be Version 347813.

> to dispassionately solve problems

Right, that should be easy to do: just don't put any passion circuits
into your brilliant, original, and astronomically fast AI.  And if you don't
want your radio to play Beethoven, then just don't put in a Beethoven
circuit.

> I don't think you could argue that an infant feels less pain than an adult

Actually I think you can.  Nobody can remember being circumcised when they
were only a few hours old, even though it was done without anesthesia; an
adult would be traumatized for life.

>The assumption is that humans will be better able to decide what they want

The assumption is that human desires are the only thing that will matter, or
should matter. Both of these assumptions are dead wrong.

Lee Corbin Wrote:

>  Why would the machine have such considerations if they were not
> explicitly or implicitly programmed in?

Dear God, you're not going to bring up that old cliché that a computer can
only do what it's programmed to do, are you? In 5 minutes I could write a
very short program that would behave in ways NOBODY or NOTHING in the known
universe understands; it would simply be a program that looks for the first
even number greater than 4 that is not the sum of two primes greater than 2,
and then stops. Whether that program ever halts is precisely Goldbach's
conjecture, which nobody has settled.
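For the curious, here is a minimal sketch of such a program in Python. The function names are mine, and I've added an optional search limit so the sketch can actually be run and checked; with no limit it is the program described, and nobody knows whether it ever halts.

```python
def is_prime(n):
    """Trial-division primality test; fine for small numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_sum_of_two_odd_primes(n):
    """True if even n can be written as p + q with both primes > 2 (i.e. odd)."""
    return any(is_prime(p) and is_prime(n - p)
               for p in range(3, n // 2 + 1, 2))

def first_counterexample(limit=None):
    """Search even numbers > 4 for one that is NOT a sum of two odd primes.
    Halts only if Goldbach's conjecture is false (or the limit is reached)."""
    n = 6
    while limit is None or n <= limit:
        if not is_sum_of_two_odd_primes(n):
            return n  # a counterexample: program stops here
        n += 2
    return None  # no counterexample found below the limit
```

Called as `first_counterexample()` with no limit, it loops until it finds a counterexample; whether it ever stops is exactly the open question.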

"Eugen Leitl" <eugen at leitl.org> Wrote:

> If the easiest way to build one is to use a human primate for a blueprint

I think it might be more profitable, and easier, to reverse engineer the brain
of a raven. A raven's brain is only about 17 cubic centimeters, a chimp's is
about 400, and yet a raven is about as smart as the great apes. I suppose when
there was evolutionary pressure to become smarter, a flying creature couldn't
just develop a bigger, energy-hogging brain; it had to organize the small and
light brain it already had in more efficient ways.  Our brains are about
1400 cubic centimeters, but I'll bet, cubic centimeter for cubic centimeter,
ravens are smarter than we are. Being called a birdbrain may not be an insult
after all.

  John K Clark

PS: The next couple of days at work are likely to be long and probably a bit
hellish, so if anybody responds to this post it will probably be a few days
before I can comment.
