[ExI] Moving goal posts?

Dan TheBookMan danust2012 at gmail.com
Fri Mar 11 02:02:03 UTC 2016


A point about the 'moving goal posts' argument in AI discussions: I don't
think it's always applied correctly or that it's a strong argument.

Where do I feel it's incorrectly applied from the start? When the
presumption is that everyone already agrees that X would be a sound
indicator of intelligence and then someone accuses the [AI] skeptic of
moving the goal posts. (X can be playing chess at a grandmaster level, for
instance.) For it to be applied correctly, the accuser would have to show
the accused really believed that X was a good indicator and only changed
their mind because X had been achieved.

I also don't think it [moving goal posts] is a strong argument, because the
nature of intelligence -- and of how to detect it -- is not clear from the
start. In some cases, though not all, there might be good reasons to admit
that one thought X would be an indicator of intelligence, but then, once X
is achieved, to conclude that the method of achieving it (think of many
automated reasoning tools) isn't really doing anything intelligent. In
other words, one can have good reason to believe the original presumption
was wrong.

Anyhow, this doesn't mean the moving-goal-posts charge is always tarnished,
but I do feel it's used too much -- as if merely stating it banished any
doubts save for those held by the most pigheaded skeptics.

Regards,

Dan
  Sample my Kindle books via:
http://www.amazon.com/Dan-Ust/e/B00J6HPX8M/