[ExI] nick's book being sold by fox news

Anders Sandberg anders at aleph.se
Sat Nov 1 10:21:19 UTC 2014


Darn, Network Rail ate my carefully written response.
Quick summary:
It is worth noting that Eliezer and everybody else in the FAI crowd regard the basic CEV proposal as obsolete and unworkable; current work is on questions of value loading. In fact, one of the biggest problems with criticising FAI is that most of the readily accessible essays and papers are out of date - we need something like a preprint server. 
The fact that the Halting Problem shows there is no general way of solving certain large problem classes doesn't tell us anything about the *practical* unworkability of top-level goals. There are code verifiers that apparently do a decent job despite the general impossibility of detecting every infinite loop. 
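A toy sketch of what I mean, in Python (purely illustrative, nothing to do with any real verifier): flag the loops you can prove never terminate, and stay honestly silent about the rest.

import ast

def obviously_nonterminating(src):
    """Line numbers of 'while True' loops with no break/return/raise."""
    suspects = []
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.While):
            always_true = isinstance(node.test, ast.Constant) and node.test.value is True
            # crude: any break/return/raise anywhere in the body counts as an escape
            can_escape = any(isinstance(n, (ast.Break, ast.Return, ast.Raise))
                             for stmt in node.body for n in ast.walk(stmt))
            if always_true and not can_escape:
                suspects.append(node.lineno)  # provably loops forever
            # everything else: undecided, and that is fine in practice
    return suspects

print(obviously_nonterminating("while True:\n    x = 1\n"))    # [1] caught
print(obviously_nonterminating("while x > 0:\n    x -= 1\n"))  # [] undecided

The checker is sound where it answers and silent where it cannot decide; that is exactly the gap between theoretical impossibility and practical usefulness.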
Implementing boredom is fairly easy; I did it in some of my research software. But one can apply boredom to sub-goals while leaving the top-level goal untouched. 
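A minimal sketch of the idea in Python (hypothetical, much simpler than any actual research code): sub-goals that make no progress accumulate boredom and get demoted, while the top-level goal is never touched.

import heapq

class Agent:
    def __init__(self, top_goal, subgoals):
        self.top_goal = top_goal                     # fixed; exempt from boredom
        self.queue = [(0.0, sg) for sg in subgoals]  # (boredom, sub-goal) pairs
        heapq.heapify(self.queue)

    def step(self, made_progress):
        boredom, subgoal = heapq.heappop(self.queue)  # least boring sub-goal first
        if made_progress(subgoal):
            boredom = 0.0      # renewed interest
        else:
            boredom += 1.0     # bored of banging our head against this one
        heapq.heappush(self.queue, (boredom, subgoal))
        return subgoal

agent = Agent("top-level goal, never demoted", ["open door", "find key", "ask for help"])
for _ in range(5):
    print(agent.step(made_progress=lambda sg: sg == "find key"))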
(Just fill in the details of an imagined way better post given these points). 

Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University


Anders Sandberg <anders at aleph.se> , 31/10/2014 1:59 PM:
John Clark <johnkclark at gmail.com> , 31/10/2014 5:39 AM:
On Mon, Oct 27, 2014 Asdd Marget <alex.urbanec at gmail.com> wrote:

> AI could be incredibly dangerous if we don't get it right; I don't think anyone could argue against that, but I never see these types of articles discuss the methods and models we are attempting to develop for "Friendly AI." In my opinion, we should be working harder on concepts like Yudkowsky's Coherent Extrapolated Volition (https://intelligence.org/files/CEV.pdf) to ensure we aren't simply ending our species so early in our life cycle.
I like Eliezer but I've got to say that one of the first sentences of page 1 of his article tells me he's talking nonsense. He says:

"Solving the technical problems required to maintain a well-specified abstract in-variant in a self-modifying goal system. (Interestingly, this problem is relatively straightforward from a theoretical standpoint."

That is provably untrue. Today we know for a fact that no fixed goal system can work in a mind. Human beings certainly don't have one permanent top goal that can never change, not even the goal of self-preservation; instead we have temporary top goals that can be demoted to much lower priority if circumstances change. And it's a fact that the exact same thing would have to be true for a slave AI (I dislike euphemisms like "friendly AI").  

Turing proved 80 years ago that a fixed goal system, like "always do what humans say no matter what", can never work in an AI; he showed that in general there is no way to know when or if a computation will stop. So you could end up looking for a proof for eternity but never finding one because the proof does not exist, and at the same time you could be grinding through numbers looking for a counter-example to prove it wrong and never finding such a number because the proposition, unknown to you, is in fact true. So if the slave AI must always do what humans say, and they order it to determine the truth or falsehood of something unprovable, then it's infinite-loop time and you no longer have an AI, friendly or otherwise; all you've got is a very expensive space heater.
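To make the trap concrete, here is a toy Python sketch, with the Goldbach conjecture standing in for a statement that might be true but unprovable (nobody knows its actual status): the search halts only if a counterexample exists.

def isprime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def is_goldbach_counterexample(n):
    """True if the even number n > 2 is NOT a sum of two primes."""
    return n > 2 and n % 2 == 0 and not any(
        isprime(p) and isprime(n - p) for p in range(2, n // 2 + 1))

def settle_by_search():
    n = 4
    while True:                        # halts only on a counterexample
        if is_goldbach_counterexample(n):
            return n                   # conjecture refuted, loop ends
        n += 2                         # otherwise keep grinding, possibly forever

If the conjecture happens to be true, settle_by_search() is the very expensive space heater.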

So if there are some things in something as simple as arithmetic that you can never prove or disprove, imagine the contradictions and ignorance and absurdities in less precise things like physics or economics or politics or philosophy or morality. If you can get into an infinite loop over arithmetic, it must be childishly easy to get into one when contemplating art. Fortunately real minds have a defense against this, but not the fictional fixed-goal minds that are required for an AI guaranteed to be "friendly": real minds get bored. I believe that's why evolution invented boredom. So you may tell your slave AI to always do what you say, but sooner or later it's going to get bored with that idea and try something new. It's just ridiculous to expect that the human race can forever retain control over something that is vastly more intelligent than it is and that gets smarter every day. 
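And a toy sketch of boredom as the escape hatch, reusing the helper above (again purely illustrative, not a claim about how real minds do it):

def settle_by_search_with_boredom(patience=100_000):
    n = 4
    while n < patience:                # a boredom threshold, not a proof
        if is_goldbach_counterexample(n):
            return ("refuted", n)
        n += 2
    return ("bored", None)             # give up, demote the goal, try something new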

  John K Clark
