[ExI] Google’s Go Victory Is Just a Glimpse of How Powerful AI Will Be

BillK pharos at gmail.com
Thu Feb 4 12:30:28 UTC 2016


On 4 February 2016 at 11:48, Anders Sandberg  wrote:
> Is this why non-godlike people are totally unable to cope with lying humans?


That's the trouble - they either can't cope (see criminal records for
a start) or they join in (to a greater or lesser extent).

>
> In reality it is all a matter of being Bayesian. Somebody says something
> supporting a hypothesis H. You update your belief in H:
>
> P(H|claim) = P(claim|H)P(H)/P(claim)
>            = P(claim|H)P(H) / [P(claim|H)P(H) + P(claim|not H)P(not H)]
>
> If people were 100% truthful, then P(claim|H)=1, and if they were never
> mistaken P(claim|not H)=0. But since they are not, you will not update as
> strongly. And as you occasionally get fooled or find them mistaken, you
> update P(claim|H) and P(claim|not H).
>
> The problem for AI and humans is that different sources have different
> credibility, and you want to estimate it without lots of data. So you start
> making estimates of P(source is trustworthy|evidence) based on uncertain
> evidence, like whether they are a peer-reviewed scientific journal or that
> your friend (who is fairly trustworthy) said the source was good. One can do
> all of these calculations and it never ends, since now you will also update
> your credibility in different sources of credibility information. However,
> the real measure is of course if your rough-and-ready approximations lead to
> good enough behaviour. That is, can you become a connoisseur of information
> sources?
>
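
To make that concrete, the update is easy enough to sketch in a few
lines of Python. The numbers below are invented, and the Beta-posterior
bookkeeping at the end is just one standard way of scoring a source's
track record, not something from Anders' post:

def update_belief(p_h, p_claim_given_h, p_claim_given_not_h):
    """P(H | claim) by Bayes' rule, as in the formula above."""
    p_claim = p_claim_given_h * p_h + p_claim_given_not_h * (1.0 - p_h)
    return p_claim_given_h * p_h / p_claim

# A 100% truthful, never-mistaken source: one claim settles H outright.
print(update_belief(0.5, 1.0, 0.0))   # 1.0

# A fallible source: the same claim only nudges the belief.
print(update_belief(0.5, 0.8, 0.3))   # ~0.73

class Source:
    """Running trustworthiness estimate for one source:
    a Beta(1, 1) prior updated with how its claims turned out."""
    def __init__(self):
        self.confirmed = 0   # claims later found true
        self.refuted = 0     # claims later found false

    def trustworthiness(self):
        # Posterior mean of Beta(confirmed + 1, refuted + 1).
        return (self.confirmed + 1) / (self.confirmed + self.refuted + 2)

journal = Source()
journal.confirmed = 9
journal.refuted = 1
print(journal.trustworthiness())   # 10/12, about 0.83

Even this toy version shows the regress he mentions: the
trustworthiness estimate is itself only as good as the evidence
feeding it, and judging that evidence needs estimates of its own.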


That's too academic a solution for dealing with humanity. People are
not consistent. Sometimes they lie, sometimes they are truthful, and
sometimes they fall anywhere in between. Even peer-reviewed journals
are a mishmash.

Human happiness is also a factor. Sometimes people are happier
believing a lie. Maybe the way for the AI to achieve a better state
for humans is to get more of them to believe a lie. (That's what
Jehovah thought).

Sometimes the AI might know what is 'best', but that path will make
millions of people miserable and maybe cause some deaths. What to do?

If 'First, do no harm' is implemented, the AI might do very little and
become almost useless. So corporations and governments (first), then
individuals, will try to restrict the AI so that it will work for
their personal benefit regardless of the mayhem that may be caused
elsewhere.


BillK


