[ExI] Moving goal posts?
johnkclark at gmail.com
Fri Mar 11 20:40:31 UTC 2016
On Thu, Mar 10, 2016 at 9:02 PM, Dan TheBookMan <danust2012 at gmail.com> wrote:
A point about the 'moving goal posts' argument in AI discussions: I don't
> think it's always applied correctly or that it's a strong argument.
> Where do I feel it's incorrectly applied from the start? When the
> presumption is that everyone already agrees that X would be a sound
> indicator of intelligence and then someone accuses the [AI] skeptic of
> moving the goal posts.
Douglas Hofstadter, the author of my all-time favorite book "Gödel, Escher,
Bach", made it clear in 1977 that he thought beating a human chess
Grandmaster would require true AI, but in 1997 he suddenly changed his mind
about that.
Then people said the game of Go had astronomically more possible moves
than chess, so brute force wouldn't work and true AI would be required to
beat a human champion; but very recently, for some reason, they've changed
their minds about that too. And the same could be said about language
translation and about giving good answers to questions posed in English.
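To give a sense of "astronomically more": a rough sketch of the game-tree
comparison, using commonly cited approximations (these figures are my
illustrative assumptions, not numbers from the post itself) of about 35
legal moves per position over roughly 80 plies for chess, versus about 250
moves over roughly 150 plies for Go:

```python
import math

def log10_tree_size(branching: float, depth: int) -> float:
    """Return log10 of branching**depth without building the huge integer."""
    return depth * math.log10(branching)

# Commonly cited rough averages (assumptions for illustration only):
chess = log10_tree_size(35, 80)    # roughly 10^124 variations
go = log10_tree_size(250, 150)     # roughly 10^360 variations

print(f"chess ~10^{chess:.0f}, Go ~10^{go:.0f}")
```

On those assumptions Go's game tree is larger by a factor of well over
10^200, which is why exhaustive search was dismissed as a route to
beating a Go champion.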
> I don't also think it [moving goal posts] is a strong argument because the
> nature of intelligence and of figuring out how to detect it is not clear
> from the start.
I do admit that in the 1950s the AI tasks researchers thought would be
difficult (like solving equations) turned out to be easy, and the tasks
they thought would be easy (like image recognition and manual dexterity)
turned out to be hard; but the number of things computers are really bad
at gets smaller every day, and before long it will reach zero.
> this doesn't mean moving goal posts is always tarnished
It does if the only reason for moving the goal post is that AI just scored
an unexpected touchdown.
John K Clark