[ExI] Google’s Go Victory Is Just a Glimpse of How Powerful AI Will Be
Anders Sandberg
anders at aleph.se
Thu Feb 4 15:44:27 UTC 2016
On 2016-02-04 12:30, BillK wrote:
> On 4 February 2016 at 11:48, Anders Sandberg wrote:
>> Is this why non-godlike people are totally unable to cope with lying humans?
> That's the trouble - they either can't cope (see criminal records for
> a start) or they join in (to a greater or lesser extent).
So, either you are godlike, you can't cope, or you have joined the
lying? :-)
>
>> In reality it is all a matter of being Bayesian. Somebody says something
>> supporting a hypothesis H. You update your belief in H:
>>
>> P(H|claim) = P(claim|H)P(H) / P(claim)
>>            = P(claim|H)P(H) / [P(claim|H)P(H) + P(claim|not H)P(not H)]
>
> That's too academic a solution for dealing with humanity. People are
> not consistent. Sometimes they lie, sometimes they are truthful, and
> sometimes they fall everywhere in between. Even peer-reviewed journals
> are a mishmash.
I disagree. This is the internal activity of the AI, just as your
internal activity is implemented as neural firing. Would you argue that
complex biochemical processes are too academic to deal with humanity?
One can build Bayesian models of inconsistent people and other
information sources. Normally we do not do this consciously; we just
instinctively trust Nature over the National Enquirer, but behind the
scenes there is likely a Bayes-approximating process (full of biases
and crude shortcuts).
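
To make this concrete, here is a minimal Python sketch of the update
above. The "reliability" parameter (how often a source reports the
truth) and all the numbers are assumptions invented for illustration:

def update(prior_h, reliability):
    """Posterior P(H | source claims H), assuming
    P(claim | H) = reliability and P(claim | not H) = 1 - reliability.
    """
    evidence = (reliability * prior_h
                + (1 - reliability) * (1 - prior_h))
    return reliability * prior_h / evidence

# Trusting Nature over the National Enquirer is just assigning
# different reliabilities to the same claim:
prior = 0.5
print(update(prior, reliability=0.95))  # Nature-like source   -> 0.95
print(update(prior, reliability=0.55))  # tabloid-like source  -> 0.55

Note that with a 50/50 prior the posterior simply equals the source's
reliability: the update amounts to weighing who is talking.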
The problem for an AI understanding humans is that it needs to start
from scratch, while humans have the advantage of shared mental hardware,
which gives them decent priors. Still, since an AI that grasps human
intentions is more useful than one that requires a lot of tedious
training, expect a lot of research to focus on getting the right human
priors into the knowledge database (I know some researchers working on
this).
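
As a toy illustration of why priors matter (the 60%-reliable source
and the specific priors are made-up figures), an agent starting with a
decent prior needs fewer supporting claims to become confident than
one starting from scratch:

def update(prior_h, reliability):
    # Same update as in the sketch above.
    evidence = (reliability * prior_h
                + (1 - reliability) * (1 - prior_h))
    return reliability * prior_h / evidence

for label, belief in [("shared-hardware prior", 0.9),
                      ("from-scratch prior", 0.5)]:
    steps = 0
    while belief < 0.99:
        belief = update(belief, 0.6)
        steps += 1
    print(label, "->", steps, "claims to reach 99% confidence")

Here the informative prior halves the evidence needed (6 claims
versus 12), which is the "tedious training" being saved.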
--
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University