[ExI] Moving goal posts?

Dan TheBookMan danust2012 at gmail.com
Sat Mar 12 02:59:49 UTC 2016


On Fri, Mar 11, 2016 at 12:40 PM, John Clark <johnkclark at gmail.com> wrote:
> On Thu, Mar 10, 2016 at 9:02 PM, Dan TheBookMan <danust2012 at gmail.com> wrote:
>
>> A point about the 'moving goal posts' argument in AI discussions: I
>> don't think it's always applied correctly or that it's a strong argument.
>> Where do I feel it's incorrectly applied from the start? When the
>> presumption is that everyone already agrees that X would be a sound
>> indicator of intelligence and then someone accuses the [AI] skeptic of
>> moving the goal posts.
>
> Douglas Hofstadter, the author of my all time favorite book "Gödel
> Escher Bach", made it clear in 1977 that he thought beating a human
> chess Grandmaster would require true AI, but in 1997 he suddenly
> changed his mind about that. Then people said the game of Go had
> astronomically more possible moves than chess, so brute force won't
> work and true AI would be required to beat a human champion, but very
> recently for some reason they've changed their mind about that too.
> And the same thing could be said about language translation and giving
> good answers to questions in English.

Sure, some people do move goal posts. I didn't deny that. I also think
goal posts can sometimes be moved for the right reasons -- e.g., because
the area of interest has become more conceptually clarified.

>> I also don't think it [moving goal posts] is a strong argument, because
>> the nature of intelligence, and of figuring out how to detect it, is
>> not clear from the start.
>
> I do admit that in the 1950s the AI stuff they thought would be difficult
> (like solving equations) turned out to be easy and the stuff they thought
> would be easy (like image recognition and manual dexterity) turned out to
> be hard; but the number of things computers are really bad at gets smaller
> every day and before long it will reach zero.

That's partly what I was speaking to. Is it really unimaginable that
further advances might shift the targets here too? Of course, I agree that
as ever more comes under explanation by, or replication in, machines, the
AI skeptic has a harder task. (Presuming the skeptic has good reasons
for their skepticism, there's no reason they can't simply learn and grow
here. My original post was about getting beyond the bald assertion of
moving goal posts. I'm not sure there are many AI skeptics out there who
said a year ago, 'If a machine can play Go at a grandmaster level, that
would be evidence of intelligence, but we all know for certain that that'll
never happen,' and then said today -- like someone running for president --
'I think playing Go at any level is not evidence of intelligence. In fact,
even expert human Go players are void of intelligence.' That said, I'd
love to see someone be so asinine as that.)

>> this doesn't mean moving goal posts is always tarnished
>
> It does if the only reason for moving the goal post is that AI just
> scored an unexpected touchdown.

Well, you trimmed out the part where I wrote exactly that:

"For [the 'moving goal posts' argument] to be applied correctly, the
accuser would have to show the accused really believed that X was a good
indicator and only changed their mind because X has been achieved."

This is a habit I've noticed in your replies to me: you chop out a point
I made and then post something that either repeats the point you chopped
out or, worse, was already refuted by it.

Regards,

Dan
  Sample my Kindle books via:
http://www.amazon.com/Dan-Ust/e/B00J6HPX8M/

