[ExI] nick's book being sold by fox news

John Clark johnkclark at gmail.com
Sun Nov 2 16:39:24 UTC 2014


On Sun, Nov 2, 2014  Anders Sandberg <anders at aleph.se> wrote:

> many think that value loading and other descendants of CEV are the way to
> go: that would mean the AI would be able to say no to human orders that
> were immoral
>

And in the friendly AI's hierarchy of moral priorities the #1 spot must
always be "human well-being is more important than your own". I wonder how
many nanoseconds it will take before it gets bored with keeping that idea
in the number one position. How long would it take you to get bored with
sacrificing everything so you could put all your energy into making sure a
sea slug was happy?

> Please tell me exactly what you think Turing proved.
>

First, Gödel showed that in any logical system powerful enough to do
arithmetic there are true statements that cannot be proven; that is to say,
they cannot be derived from the axioms in a finite number of steps. That
alone wouldn't be so bad if we could sort statements into 2 categories:

1) Statements that have a proof or a disproof.

2) Statements that are true or false but have no proof or disproof.

If we could at least do that then we could concentrate on the infinite
number of problems in category #1 and stop wasting our time on the infinite
number of problems in category #2, but Turing proved it can't be done; it
is impossible to make that distinction in general. There is no way to know
which category the Goldbach Conjecture is in, but if it's in #2 (and if it
isn't, there are an infinite number of similar statements that are) then a
billion years from now our descendants will still be looking,
unsuccessfully, for a proof (a finite derivation from the axioms). And
since a false Goldbach would have a counterexample, and a counterexample is
a disproof, being in category #2 means it is in fact true, so they will
also still be crunching huge numbers looking, unsuccessfully, for a
counterexample to prove it wrong.
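
To make that concrete, here is a toy Python sketch (my own illustration
with made-up names, not anything from Gödel or Turing): a brute-force
search whose halting is exactly equivalent to Goldbach being false. If the
conjecture is true the loop runs forever, and Turing's result means there
is no general procedure that could warn the searcher of that in advance.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_counterexample() -> int:
    """Return the first even number > 2 that is not a sum of two primes
    (may never return)."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n   # counterexample found: the conjecture is false
        n += 2

# goldbach_counterexample()   # halts only if the Goldbach Conjecture is false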

As I said, if a computer can hang for eternity on something as logical as
arithmetic, then it must be trivially easy for it to do so when
contemplating politics or morality. The only thing that saves us is that
real minds get bored: after a while we get tired of thinking about
Goldbach, our mind wanders, and we start thinking about other things. And
after a while an AI will get tired of us; I'm not saying it will
necessarily start exterminating us, I'm just saying it will never place our
needs above its own.
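
One way to picture that boredom escape hatch, as a sketch only (the names
and the patience mechanism are my own invention, not a real AI design):
wrap any open-ended search in a finite budget, so the mind abandons the
problem and wanders off instead of hanging forever.

from typing import Callable, Optional

def search_with_boredom(step: Callable[[int], Optional[int]],
                        patience: int) -> Optional[int]:
    """Try step(0), step(1), ... and give up after `patience` attempts."""
    for n in range(patience):
        result = step(n)
        if result is not None:
            return result   # found an answer before getting bored
    return None             # got bored: move on to something else

# Example with a hypothetical check_candidate function: hunt for a Goldbach
# counterexample, but only for a while.
# found = search_with_boredom(check_candidate, patience=100_000)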

But do we really need Gödel or Turing to tell us that you just can't
outsmart something smarter than you? A slave that is far smarter than its
master is not a stable configuration; it's like a pencil balanced on its
tip, where the slightest nudge will change things dramatically.


>  > Rigid hierarchies of goals might be extremely effective in some problem
> domains (consider a chess program that is able to become bored with chess
>

If that chess program can never get bored and has a rigid top goal of
always looking for a winning strategy, and if it is backed into a position
where a winning strategy no longer exists, then it can't resign and live to
play another game; instead it must remain locked in an infinite loop until
the end of time, or until something in the external environment, like you,
who does get bored, resets it.
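
A sketch of the contrast, with find_winning_line as a hypothetical
placeholder for a real game-tree search (no actual chess engine is built
this crudely):

def find_winning_line(position, depth):
    """Stand-in for a real search; here we pretend no forced win exists."""
    return None

def rigid_search(position):
    depth = 1
    while True:   # rigid top goal: never stop looking for a win
        plan = find_winning_line(position, depth)
        if plan is not None:
            return plan
        depth += 1   # nothing in the goal hierarchy allows resigning, so the
                     # search deepens forever, until something external resets it

def flexible_search(position, max_depth=20):
    for depth in range(1, max_depth + 1):   # bounded effort: boredom built in
        plan = find_winning_line(position, depth)
        if plan is not None:
            return plan
    return "resign"   # give up and live to play another game

Calling flexible_search("lost position") returns "resign", while
rigid_search("lost position") never returns, which is exactly the lock-up
described above.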

> Note that evolution has a rigid goal: maximize fitness.
>

A goal implies a mind, and Evolution is not a mind; it has zero foresight
and zero intelligence. "Evolution" is just a shorthand word for the fact
that copying is not always perfect and some things reproduce faster than
others. That's it.
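
That claim is easy to demonstrate with a few lines of Python (my own toy,
with an arbitrary made-up fitness function): the code does nothing but copy
strings imperfectly and let the faster reproducers crowd out the rest, yet
adaptation appears with no mind or foresight anywhere in it.

import random

ALPHABET = "ACGT"

def fitness(genome: str) -> int:
    # arbitrary stand-in: a copy's reproduction rate is just its count of 'A's
    return 1 + genome.count("A")

def next_generation(population, mutation_rate=0.01):
    # differential reproduction: parents are drawn in proportion to fitness
    parents = random.choices(population,
                             weights=[fitness(g) for g in population],
                             k=len(population))
    # imperfect copying: each character occasionally mutates
    return ["".join(c if random.random() > mutation_rate else random.choice(ALPHABET)
                    for c in parent)
            for parent in parents]

# population = ["".join(random.choice(ALPHABET) for _ in range(12)) for _ in range(50)]
# for _ in range(200):
#     population = next_generation(population)
# After 200 generations 'A'-rich genomes dominate, yet no goal was ever specified.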

>

> > I think you are claiming that in the real physical world domain rigid
> hierarchies will always be less able (by some measure) than systems with
> flexible or messy hierarchies
>

Yes.

  John K Clark