[ExI] Meaningless Symbols

Mike Dougherty msd001 at gmail.com
Sat Jan 9 20:11:10 UTC 2010


On Sat, Jan 9, 2010 at 1:54 PM, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> The question of strong AI is: "How can we make computers actually understand the meanings and not merely appear to understand the meanings?"

How can we make people actually understand the meanings and not merely
appear to understand the meanings?

To the degree that it/you/anyone serves my purpose, I don't care what
it/you/they "understand" as long as the appropriate behavior is
displayed.  Why is that so difficult to grasp?  When I tell my dog
"Sit" and it sits down, I don't need to be concerned about the dog's
qualia or a platonic concept of sitting - only that the dog does what
I want.  If I tell a machine to find a worthy stock for investment,
that's what I expect it to do.  I would even be happy to entertain
follow-up conversation with the machine regarding my opinion of
"worthy" and my long-term investment goals - just like I would expect
with a real broker or investment advisor.  At another level of
conversational interaction with a proposed AGI, it might start asking me
novel questions for the purpose of qualifying its model of my expected
behavior.  At that point, how can any of us declare that the machine
doesn't have 'understanding' of the data it manages?  Why would we?

Understanding is highly overrated.  Many people stumble through their
lives with only a crude approximation of what is going on around them
- and it works.
