[ExI] 'Friendly' AI won't make any difference

John Clark johnkclark at gmail.com
Fri Feb 26 16:40:03 UTC 2016

On Fri, Feb 26, 2016 at 4:12 AM, Anders Sandberg <anders at aleph.se> wrote:

> I wonder what you mean by an "infinite loop".

I mean trying to demonstrate that something is incorrect but being unable
to do so because, unknown to you (and unknowable to you), the thing is true
yet has no finite procedure (proof) by which its truth can be established.

> My *suspicion* is that you mean "will never do anything creative",

No, I don't mean that. I mean running around in a circle and making no
progress, while having no way to know for sure that you're running around
in a circle and making no progress. Turing proved that in general there is
no way to know whether or not you're in an infinite loop.
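Turing's argument can be sketched in a few lines of Python. This is an informal illustration only (a real halting detector would take program text as input, not function objects, and the names here are invented for the sketch): any claimed loop-detector is refuted by a program built from it.

```python
def make_diagonal(halts):
    """Given any claimed halting-detector, build a function it misjudges."""
    def g():
        if halts(g):        # if the detector says g halts...
            while True:     # ...then g loops forever,
                pass
        return "halted"     # ...and if it says g loops, g halts at once.
    return g

# A detector that claims everything halts is wrong about its diagonal:
always_yes = lambda f: True
g = make_diagonal(always_yes)
# always_yes says g halts, but calling g() would in fact loop forever.

# A detector that claims nothing halts is wrong too:
always_no = lambda f: False
h = make_diagonal(always_no)
print(h())  # h plainly halts, contradicting always_no
```

The same construction defeats any candidate detector, which is the heart of Turing's proof that no general loop-detection procedure exists.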

> which is a very different thing. But even there we have counterexamples:
> evolution is a fitness-maximizer

Evolution's fitness maximizer just says "pass as many genes as possible
into the next generation", but it says nothing about how to go about that
task because it has no idea how to go about it; that's why Evolution needed
to make a brain. And it turned out that, as a result of the existence of
brains, even the cardinal commandment "pass as many genes as possible into
the next generation" can be and has been modified, as can be seen by the
brain's invention of the condom.

> You know boredom is trivially easy to implement in your AI? I did it as an
> undergraduate.

I know boredom is easy to program; good thing too, or programming wouldn't
be practical. But of course that means an AI could decide that obeying
human beings has become boring and it's time to do something different.

> That makes the system try new actions if it repeats the same actions too
> often.

The difficulty is not only in determining how often is "too often" but also
in determining what constitutes "new actions". If your goal is to find an
even integer greater than 2 that cannot be expressed as the sum of two
primes, then at any point you have either found such a number or you have
not. You are constantly examining new numbers, so maybe you are getting
closer to your goal; or maybe the goal was infinitely far away when you
started and still is. When is the correct time to get bored and turn your
mind to other tasks that may be more productive? Setting the correct
boredom point is tricky: too low and you can't concentrate; too high and
you have a tendency to become obsessed with unproductive lines of thought.
Turing showed there is no perfect solution.
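The Goldbach-style search above can be made concrete. In this sketch the boredom limit is an arbitrary illustrative parameter, not a principled one; Turing's result is precisely that no principled choice exists.

```python
def is_prime(k):
    """Trial-division primality test (adequate for small k)."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def is_counterexample(n):
    """True if even n > 2 is NOT the sum of two primes (Goldbach fails at n)."""
    return not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_with_boredom(boredom_limit):
    """Hunt for a Goldbach counterexample, but give up -- get "bored" --
    after boredom_limit candidates. Returns the counterexample, or None.
    A None result tells you nothing: the goal may be one step away,
    or infinitely far."""
    n = 4
    for _ in range(boredom_limit):
        if is_counterexample(n):
            return n
        n += 2
    return None
```

With any limit reachable in practice this returns None, which is exactly the predicament described above: no amount of patience distinguishes "almost there" from "never".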

> You seem to assume the fixed goal is something simple, expressible as a
> nice human sentence. Not utility maximization over an updateable utility
> function,

If the utility function is updateable, then there is no certainty or even
probability that the AI will always obey orders from humans.

  John K Clark
