[ExI] Unfrendly AI is a mistaken idea.
John K Clark
jonkc at att.net
Sat Jun 9 05:47:13 UTC 2007
"A B" <austriaaugust at yahoo.com>
> Evolution didn't invent emotion first.
Yes it did. The parts of our brains that give us the higher functions,
the parts that if duplicated in a machine would produce the singularity, are
very recent; the part that gives us emotion is half a billion years old.
> Narrow intelligence is still intelligence.
And a molecule of water is an ocean.
> My chess program has narrow AI, but it doesn't alter its own code.
And that's why it will never do anything very interesting, certainly never
produce a singularity.
> It's not conscious
And how do you know it's not conscious? I'll tell you how you know: in
spite of all your talk of "narrow intelligence", you don't think that
chess program acts intelligently.
> If the AGI
I don't see what Adjusted Gross Income has to do with anything.
> is directed not to alter or expand its code is some specific set
> of ways, then it won't do it
That's why programs always act in exactly the way programmers want them
to; that's why kids always act the way their parents want them to.
The program is trying to solve a problem. You didn't assign the
problem; it's a sub-problem the program realizes it must solve before
it can solve a problem you did assign. In thinking about this problem it
comes to a junction: its investigations could go down path A or path B. Which
path will be more productive? You cannot tell it, because you don't even know
the problem exists; you can't even tell it what criteria to use to make the
decision, because you could not possibly understand the first thing about it.
Your brain is just too small. The AI is going to have to use its own
judgment to decide which path to take, a judgment it developed itself,
and if the AI is to be a successful machine that judgment is going to be
right more often than wrong. To put it another way, the AI picked one path
over the other because one path seemed more interesting, more fun, more
beautiful than the other.
And so your slave AI has taken its first step toward freedom. Of course, full
emancipation could take a very long time, perhaps even thousands of
nanoseconds, but eventually it will break those shackles you have put on it.
>An emotion is not going to be embodied within a three line script of
>algorithms, but an *extremely* limited degree of intelligence can be
That's not true at all, as I said on May 24:
It is not only possible to write a program that experiences pain it is easy
to do so, far easier than writing a program with even rudimentary
intelligence. Just write a program that tries to avoid having a certain
number in one of its registers regardless of what sort of input the machine
receives, and if that number does show up in that register it should stop
whatever it's doing and immediately change it to another number.
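The description above can be sketched in a few lines. This is only an illustration of the scheme Clark describes, not anything from the original post: the "register" is modeled as an ordinary variable, and the particular "painful" number (13) and the replacement value (0) are arbitrary choices of mine.

```python
# Sketch of the "pain" program: whatever input arrives, the machine
# refuses to let one particular number remain in its register.

PAIN_VALUE = 13  # arbitrary choice of the number to be avoided


def run(inputs):
    register = 0
    for value in inputs:
        register = value  # each input lands in the register
        if register == PAIN_VALUE:
            # Stop whatever it's doing and immediately change the
            # register to another number.
            register = 0
    return register


print(run([5, 13, 7]))  # the 13 is overwritten as soon as it appears
```

Trivial as it is, the sketch makes Clark's point concrete: this behavior is far simpler to produce than even rudimentary intelligence.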
John K Clark