[ExI] ai class at stanford
atymes at gmail.com
Tue Aug 30 16:13:50 UTC 2011
On Tue, Aug 30, 2011 at 5:26 AM, Kelly Anderson <kellycoinguy at gmail.com> wrote:
> So, if we pass the Turing test, for example, without understanding
> 100% how humans do it, then we understand how humans talk "well
> enough" to be useful and reproducible. So in this "engineering sense",
> the Turing test says we understand "intelligence" to a particular
> measurable level.
Putting standard terms (such as "well enough") in quotes to flag similes is a
warning sign. You give the impression that you do not understand that
"well enough" means "well enough" in all senses, and that you might
not be able to distinguish between your use of quotes-as-similes and
this paragraph's use of quotes-as-designators. This suggests that
rational conversation may be impossible.
Put another way, consider the difference in intended meaning between
"we understand how humans talk 'well enough' to be useful" and "we
understand how humans talk well enough to be useful".
> So rather than calling this the "Turing" test, we'll call this the
> "Adrian" test.
...and there's the ad hominem. The expected value of the rest of your
post is low enough that I'm not even going to read it. I will, however,
post this reply in case those were honest mistakes, so you can respond
without making yourself appear not worth talking to in the future.