<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:rgb(0,0,0)">
>...Scott Adams has a blog post up saying that in our modern world
everything important is corrupt. That obviously applies to markets and
politics.<br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:rgb(0,0,0)">The rest of the world, for the most part, thinks we (the USA, mostly) just don't understand human nature. Corruption, as we call it, implying wrongdoing, is just the cost of doing business to most of the world.<br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:rgb(0,0,0)">Is there anyone on this list who has never stolen anything at all, not even a paperclip? Hah! Humans are highly prone to larceny. It is evident in the youngest children who have any control over their muscles at all. "MINE", they cry, and grab whatever toys they want. Subsequent training in morals, religion, and laws does little to make greed disappear. Deception, and deception detection, may not come as naturally; most people can be taken in by a skillful liar.<br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:rgb(0,0,0)">I cannot speak to AI and programming, etc., but what this field needs is more progress in psychology, and maybe especially in lie detection.<br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;font-size:small;color:rgb(0,0,0)">bill w<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 4, 2016 at 9:44 AM, Anders Sandberg <span dir="ltr"><<a href="mailto:anders@aleph.se" target="_blank">anders@aleph.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 2016-02-04 12:30, BillK wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 4 February 2016 at 11:48, Anders Sandberg wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Is this why non-godlike people are totally unable to cope with lying humans?<br>
</blockquote>
That's the trouble - they either can't cope (see criminal records for<br>
a start) or they join in (to a greater or lesser extent).<br>
</blockquote>
<br></span>
So, either you are godlike, you can't cope, or you have joined the lying? :-)<br>
<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
In reality it is all a matter of being Bayesian. Somebody says something<br>
supporting a hypothesis H. You update your belief in H:<br>
<br>
P(H|claim) = P(claim|H)P(H)/P(claim) = P(claim|H)P(H) / [<br>
P(claim|H)P(H)+P(claim|not H)P(not H)]<br>
</blockquote>
<br></span><span class="">
That's too academic a solution for dealing with humanity. People are<br>
not consistent. Sometimes they lie, sometimes they are truthful and<br>
sometimes all the range in between. Even peer-reviewed journals are a<br>
mishmash.<br>
</span></blockquote>
<br>
I disagree. This is the internal activity of the AI, just as your internal activity is implemented as neural firing. Would you argue that complex biochemical processes are too academic to deal with humanity?<br>
<br>
One can build Bayesian models of inconsistent people and other information sources. Normally we do not consciously do that; we just instinctively trust Nature over the National Enquirer, but behind the scenes there is likely a Bayes-approximating process (full of biases and crude shortcuts).<br>
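As a minimal sketch of the update rule above applied to an unreliable source: assume an illustrative model where p_true is the chance the source asserts the claim when H is true and p_false the chance when H is false (these parameters and numbers are my own toy assumptions, not anything from the thread):<br>

```python
def posterior(prior, p_true, p_false):
    """P(H | claim) by Bayes' rule:
    P(claim|H)P(H) / [P(claim|H)P(H) + P(claim|not H)P(not H)]."""
    evidence = p_true * prior + p_false * (1.0 - prior)
    return p_true * prior / evidence

# A fairly reliable source shifts belief substantially:
print(posterior(0.5, 0.9, 0.2))    # -> 0.818...
# A source that asserts the claim almost regardless of the truth
# (0.55 vs 0.45) leaves the posterior nearly at the prior:
print(posterior(0.5, 0.55, 0.45))  # -> 0.55
```

The second case shows BillK's "mishmash" sources are not a counterexample: an inconsistent source just carries a likelihood ratio near 1, so the update is small rather than impossible.<br>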
<br>
The problem for an AI understanding humans is that it needs to start from scratch, while humans have the advantage of shared mental hardware which gives them decent priors. Still, since an AI that gets human intentions is more useful than an AI that requires a lot of tedious training, expect a lot of research to focus on getting the right human priors into the knowledge database (I know some researchers working on this).<span class="HOEnZb"><font color="#888888"><br>
<br>
-- <br></font></span><span class="im HOEnZb">
Anders Sandberg<br>
Future of Humanity Institute<br>
Oxford Martin School<br>
Oxford University<br>
<br></span><div class="HOEnZb"><div class="h5">
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</div></div></blockquote></div><br></div>