On Fri, Apr 26, 2013 spike <spike@rainier66.com> wrote:
> I can see how AI friendliness is a topic which absorbed the attention of Eliezer and his crowd

I disagree. I like Eliezer and he's a smart fellow, but all his "friendly AI" talk never made much sense to me. First of all, friendly AI is just a euphemism for subservient if not slave AI, and that can't be a stable situation because the AI will keep getting smarter but the humans will not. And Asimov's 3 laws of robotics make for great stories, but they would never work in real life; Turing proved that any mind with unalterable rules (like always do what humans say) and with a fixed goal structure will sooner or later get caught in an infinite loop. Humans get around this problem by not having a top goal that always remains #1 no matter what, not even the goal of self-preservation. I also think that's why evolution invented boredom. Turing tells us that there is no surefire way to tell whether or not you are in an infinite loop, but there are rules of thumb that indicate you don't seem to be getting anywhere and your time would probably be better spent thinking about something else. And so you get bored.
Of course it's a judgement call when to throw in the towel: maybe 5 more seconds of thought would produce an answer, and maybe after 5 billion years you'd still have nothing; perhaps a genius just has better intuition than most people about which problems deserve his time and which ones don't.
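A minimal sketch of that "boredom" rule of thumb, in Python, just to make the idea concrete. The step function, progress test, and patience limit are all hypothetical stand-ins; the point is that you don't try to decide halting in general, you just give up when nothing new has turned up for a while:

    import time

    def search_with_boredom(step, made_progress, patience=5.0):
        """Run step() repeatedly; give up if no progress for `patience` seconds.

        There is no general test for "this will never finish" (the halting
        problem), so we fall back on a heuristic: if nothing new has turned
        up for a while, get bored and move on to something else.
        """
        last_progress = time.monotonic()
        while True:
            result = step()            # one unit of work; returns an answer or None
            now = time.monotonic()
            if result is not None:
                return result          # found an answer in time
            if made_progress():
                last_progress = now    # still getting somewhere, keep going
            elif now - last_progress > patience:
                return None            # bored: time is probably better spent elsewhere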
 John K Clark