[ExI] The symbol grounding problem in strong AI

Jeff Davis jrd1415 at gmail.com
Fri Dec 25 19:30:41 UTC 2009


On Wed, Dec 23, 2009 at 4:59 AM, Gordon Swobe <gts_2000 at yahoo.com> wrote:
>... Searle wants to know what possesses some intelligent people to attribute "mind" to mere programs running on computers, programs which in the final analysis do nothing more interesting than kitchen can-openers.

This is where you lose me.  You've got the faulty theoretical confused
with the indisputably empirical: a theory... no, not even a theory...
a funky-ass notion of mind, thoroughly corrupted by a persistent
pre-conscious legacy of spiritualism, versus the blunt, mundane,
un-hyped FACT of mind.

Minds arise from dirt.  This is self-evident.  This is empirical.
Consequently, the default assumption should be that they "...do
nothing more interesting than kitchen can-openers..."

You are such a mind... and you're understandably impressed.  But your
notion that minds are -- ergo -- "interesting," more interesting
than kitchen can-openers... well, sorry bro, but that's not
logic/science, it's egoism.

Humans, with their minds, are no more interesting than mosquitoes with
their lower order of mind.  And such is the pernicious penetration of
spiritualism that even this comparison is faulty.  By using a mosquito
-- a form of life -- as a comparator, I have engaged the unspoken
spiritualist assumption that life is "interesting."  To make the break
completely, I should write: "Humans, with their minds, are no more
'interesting' than kitchen can-openers."

Wrap your mind around that and you'll come to understand what I meant
when I described this epiphany as liberating.  It allows you to cast
off the entire legacy of ooga booga (superstitious nonsense).  Sweep
away the legacy blindfold of ignorance and start over, clear and
clean.

Best, Jeff Davis

  "Everything's hard till you know how to do it."
                          Ray Charles


