[ExI] Google’s Go Victory Is Just a Glimpse of How Powerful AI Will Be
Anders Sandberg
anders at aleph.se
Thu Feb 4 11:48:52 UTC 2016
On 2016-02-04 09:24, BillK wrote:
> So how is an advanced AI going to cope with all these lying, cheating
> humans? It cannot assume everyone is cheating, because there are some
> honest people around (although they probably should be described as
> 'mostly' honest). And some people have good intentions but are mistaken.
> And even fraudsters are honest sometimes when it helps their scams.
> The AI needs to be god-like, with all-encompassing knowledge of every
> case when people are misbehaving.
Is this why non-godlike people are totally unable to cope with lying humans?
In reality it is all a matter of being Bayesian. Somebody says something
supporting a hypothesis H. You update your belief in H:
P(H|claim) = P(claim|H)P(H) / P(claim)
           = P(claim|H)P(H) / [P(claim|H)P(H) + P(claim|not H)P(not H)]
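As a minimal sketch in Python (the function and argument names are
illustrative only, not from any library):

def posterior(prior, p_claim_given_h, p_claim_given_not_h):
    # P(claim), by the law of total probability
    evidence = (p_claim_given_h * prior
                + p_claim_given_not_h * (1 - prior))
    # Bayes' theorem: P(H | claim)
    return p_claim_given_h * prior / evidence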
If people were 100% truthful, then P(claim|H)=1, and if they were never
mistaken, P(claim|not H)=0. But since they are not, you will not update
as strongly. And as you occasionally get fooled or find them mistaken,
you update P(claim|H) and P(claim|not H).
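For instance (hypothetical numbers, using the sketch above):

posterior(0.5, 1.0, 0.0)  # -> 1.0: a truthful, never-mistaken source settles H
posterior(0.5, 0.9, 0.3)  # -> 0.75: a fallible source still shifts belief, just less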
The problem for AI and humans is that different sources have different
credibility, and you want to estimate it without lots of data. So you
start making estimates of P(source is trustworthy|evidence) based on
uncertain evidence, like whether they are a peer-reviewed scientific
journal or that your friend (who is fairly trustworthy) said the source
was good. One can do all of these calculations and it never ends, since
now you will also update the credibility of your different sources of
credibility information. However, the real measure is of course whether
your rough-and-ready approximations lead to good enough behaviour. That
is, can you become a connoisseur of information sources?
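One rough-and-ready approximation is to keep a running score per source,
e.g. a Beta-Bernoulli tracker. A sketch (the class and its parameters are
made up for illustration):

class SourceCredibility:
    """Track how often a source's checkable claims turn out true."""
    def __init__(self, pseudo_true=1.0, pseudo_false=1.0):
        # Beta prior pseudo-counts; a peer-reviewed journal might
        # start with a stronger prior, e.g. (9, 1)
        self.pseudo_true = pseudo_true
        self.pseudo_false = pseudo_false

    def record(self, claim_was_true):
        # update the counts whenever a claim gets independently checked
        if claim_was_true:
            self.pseudo_true += 1
        else:
            self.pseudo_false += 1

    def trustworthiness(self):
        # posterior mean of the Beta distribution: an estimate of
        # P(source is trustworthy | evidence)
        return self.pseudo_true / (self.pseudo_true + self.pseudo_false)

And the regress shows up immediately: the verdict fed to record() itself
comes from some other source whose credibility is also being estimated.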
--
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University