[ExI] Google’s Go Victory Is Just a Glimpse of How Powerful AI Will Be

William Flynn Wallace foozler83 at gmail.com
Thu Feb 4 16:21:57 UTC 2016


 >...Scott Adams has a blog post up saying that in our modern world
everything important is corrupt. That obviously applies to markets and
politics.

Most of the rest of the world thinks we (the USA, mostly) just don't
understand human nature.  Corruption, as we call it, implying wrongdoing,
is simply the cost of doing business in most of the world.

Is there anyone on this list who has never stolen anything at all, not
even a paperclip?  Hah!  Humans are highly prone to larceny.  It is evident
in the youngest children who have any control over their muscles at all.
"MINE," they cry, and grab whatever toys they want.  Subsequent training in
morals, religion, and law does little to make greed disappear.  Deception,
and deception detection, may not come as naturally: most people can be
taken in by a skillful liar.

I cannot speak to AI and programming, etc., but what this field needs is
more progress in psychology, and maybe especially in lie detection.

bill w

On Thu, Feb 4, 2016 at 9:44 AM, Anders Sandberg <anders at aleph.se> wrote:

> On 2016-02-04 12:30, BillK wrote:
>
>> On 4 February 2016 at 11:48, Anders Sandberg wrote:
>>
>>> Is this why non-godlike people are totally unable to cope with lying
>>> humans?
>>>
>> That's the trouble - they either can't cope (see criminal records for
>> a start) or they join in (to a greater or lesser extent).
>>
>
> So, either you are godlike, you can't cope, or you have joined the lying?
> :-)
>
>
>
>>> In reality it is all a matter of being Bayesian. Somebody says something
>>> supporting a hypothesis H. You update your belief in H:
>>>
>>> P(H|claim) = P(claim|H)P(H)/P(claim) = P(claim|H)P(H) / [
>>> P(claim|H)P(H)+P(claim|not H)P(not H)]
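>>>
>>> (A worked instance with made-up numbers, just to illustrate the update:
>>> with P(H) = 0.5, P(claim|H) = 0.9 and P(claim|not H) = 0.4, we get
>>> P(H|claim) = 0.45 / (0.45 + 0.20) ≈ 0.69.)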
>>>
>>
>> That's too academic a solution for dealing with humanity. People are
>> not consistent. Sometimes they lie, sometimes they are truthful, and
>> sometimes everything in between. Even peer-reviewed journals are a
>> mishmash.
>>
>
> I disagree. This is the internal activity of the AI, just like your
> internal activity is implemented as neural firing. Would you argue that
> complex biochemical processes are too academic to deal with humanity?
>
> One can build Bayesian models of inconsistent people and other information
> sources. Normally we do not consciously do that; we just instinctively
> trust Nature over the National Enquirer. But behind the scenes there is
> likely a Bayes-approximating process (full of biases and crude shortcuts).
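>
> (A minimal sketch of such a model in Python, assuming the simplest case of
> a source that lies at a known, fixed rate; all the probabilities here are
> made up for illustration:)
>
> def update(prior, p_claim_given_h, p_claim_given_not_h):
>     # One Bayesian update on hearing a claim that supports H.
>     numerator = p_claim_given_h * prior
>     return numerator / (numerator + p_claim_given_not_h * (1.0 - prior))
>
> # A source that asserts H 90% of the time when H is true, but still
> # asserts it 40% of the time when H is false (i.e. it sometimes lies).
> belief = 0.5                       # 50/50 prior
> belief = update(belief, 0.9, 0.4)  # ~0.69 after one claim
> belief = update(belief, 0.9, 0.4)  # ~0.83 after a second claim
>
> An unreliable source still shifts the posterior, just more slowly; a
> source whose claims carry no information (P(claim|H) = P(claim|not H))
> does not shift it at all.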
>
> The problem for an AI understanding humans is that it needs to start from
> scratch, while humans have the advantage of shared mental hardware which
> gives them decent priors. Still, since an AI that gets human intentions is
> more useful than an AI that requires a lot of tedious training, expect a
> lot of research to focus on getting the right human priors into the
> knowledge database (I know some researchers working on this).
>
> --
> Anders Sandberg
> Future of Humanity Institute
> Oxford Martin School
> Oxford University