[ExI] Google’s Go Victory Is Just a Glimpse of How Powerful AI Will Be
pharos at gmail.com
Thu Feb 4 09:24:29 UTC 2016
On 2 February 2016 at 18:48, spike wrote:
> Hi Robin,
> That whole notion of ideas futures which you drove a long time ago is really big stuff
> in political elections, like the one which was kicked off yesterday in Iowa. I watched
> those share prices do what they do in elections. A prominent real-money version is PredictIt.
> The PredictIt people realized that this whole notion gets the most use around politics.
Scott Adams has a blog post up saying that in our modern world
everything important is corrupt.
That obviously applies to markets and politics.
In my experience on this planet, anything that is both important and
corruptible (without detection) is already corrupted. Athletes are
using performance-enhancing drugs, politicians are using dirty tricks,
hedge funds are using insider information, and so on. It’s a universal
truth. I doubt you could find anything in our world that is both
important and corruptible yet isn’t already corrupted.
That brings us to the Iowa caucuses. I have no evidence that the
vote was fraudulent. But objectively speaking, if the GOP
establishment had rigged the Iowa result for a Rubio surge, it would
look to observers exactly the way it played out.
So how is an advanced AI going to cope with all these lying, cheating humans?
It cannot assume everyone is cheating, because there are some honest
people around (though they should probably be described as 'mostly'
honest). Some people have good intentions but are simply mistaken. And
even fraudsters are honest sometimes, when it helps their scams.
The AI needs to be god-like, with all-encompassing knowledge of every
case when people are misbehaving. But it won't get to that state
immediately. In the interim, people will be lying to the AI, trying to
persuade it to work to their advantage. Will the AI have to become a
better liar than humans?