[ExI] 'Friendly' AI won't make any difference

John Clark johnkclark at gmail.com
Fri Feb 26 03:15:32 UTC 2016


On Thu, Feb 25, 2016 at 4:47 PM, Anders Sandberg <anders at aleph.se> wrote:


> A utility-maximizer in a complex environment will not necessarily loop

If there is a fixed goal in there that can never be changed, then an
infinite loop is just a matter of time, and probably not much time.



> But we also know that agents with nearly trivial rules like Langton's ant
> can produce highly nontrivial behaviors

Yes, exactly, they can produce unpredictable behavior, like deciding not
to take orders from humans anymore. The rules of Conway's Game of Life are
very, very simple, but if you want to know how a population of squares
will evolve, all you can do is watch it and see.
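
As a concrete illustration (my own sketch, not from either post): Langton's
ant follows just two rules, yet the only general way to learn what it will
do is to run it and watch.

    # Langton's ant: two rules, unpredictable medium-term behavior.
    # On a white cell the ant turns right, on a black cell it turns left;
    # either way it flips the cell's color and steps forward.

    def langtons_ant(steps=11000):
        black = set()              # sparse grid: coordinates of black cells
        x, y = 0, 0
        dx, dy = 0, 1              # start at the origin, facing up
        for _ in range(steps):
            if (x, y) in black:    # black cell: turn left, flip to white
                dx, dy = -dy, dx
                black.remove((x, y))
            else:                  # white cell: turn right, flip to black
                dx, dy = dy, -dx
                black.add((x, y))
            x, y = x + dx, y + dy
        return black

    print(len(langtons_ant()))  # after roughly 10,000 steps the famous "highway" appears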

>> If the AI has a meta goal of always obeying humans then sooner or later
>> stupid humans will unintentionally tell the AI to do something that is
>> self-contradictory, or tell it to start a task that can never end, and
>> then the AI will stop thinking and do nothing but consume electricity
>> and produce heat.

> AI has advanced a bit since the 1950s.

Things haven't changed since 1930, when Gödel found that some things are
true, and so have no counterexample to show they are wrong, but also have
no finite proof to show they are correct; or since 1936, when Turing found
there is no general way to separate the provable things (things that are
either wrong or can be proved correct in a finite number of steps) from the
unprovable ones (things that are true but have no finite proof). So if you
tell a computer to find the smallest even integer greater than 2 that is
not the sum of two primes and then stop, the machine might stop in one
second, or maybe one hour, or maybe one year, or maybe a trillion years, or
maybe it will never stop. There is no way to know; all you can do is watch
the machine and see what it does.
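
Here is a minimal sketch of that instruction (my own illustration, using a
naive trial-division primality test): the loop halts only if a
counterexample to Goldbach's conjecture exists, and nobody knows whether
one does.

    # Search for the smallest even integer > 2 that is NOT the sum of two
    # primes. If Goldbach's conjecture is true, this loop never terminates.

    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            print(n)   # a counterexample -- we might wait forever for this line
            break
        n += 2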

Real minds don't get into infinite loops thanks to one of Evolution's
greatest inventions, boredom. Without an escape hatch, an innocent-sounding
request could easily turn the mighty multi-billion-dollar AI into nothing
but a space heater.
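
One way to picture that escape hatch (purely my own sketch, not a concrete
proposal from the post): wrap any open-ended search in a "boredom" budget,
so the machine gives up after a fixed amount of effort instead of running
forever.

    # A crude "boredom" escape hatch: abandon an open-ended search after a
    # fixed effort budget instead of looping forever.

    def search_with_boredom(task, max_steps=1_000_000):
        for step in range(max_steps):
            result = task(step)
            if result is not None:
                return result    # found an answer before getting bored
        return None              # bored: give up rather than heat the room forever

    # Demo task that never succeeds, standing in for an impossible request.
    never_satisfied = lambda step: None

    print(search_with_boredom(never_satisfied))  # prints None once the budget runs out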

> Try to crash Siri with a question.

You can't crash Siri because Siri doesn't have a fixed goal, certainly not
the fixed goal of "always do what a human tells you to do no matter what".
So if you say "Siri, find the eleventh prime number larger than
10^100^100" she will simply say "no, I don't want to".

 John K Clark

