[ExI] The symbol grounding problem in strong AI

BillK pharos at gmail.com
Sun Dec 13 17:31:32 UTC 2009


On 12/13/09, Gordon Swobe wrote:
>  Does a piece of paper understand the words written on it? Does a shopping
> list understand the meaning of "bread" and "milk"? If you think it does not --
> if you think the understanding of symbols (semantics) takes place only in
> conscious minds -- then you agree with Searle and most people.
>
>  If Searle has it right then formal programs have no more consciousness than
> shopping lists and so will never overcome the symbol grounding problem.
> No matter how advanced software/hardware systems may become, they will
> never understand the meanings of the symbols they manipulate.
>
>  The challenge for us then is not to show technical problems in a silly story
> about a man in a Chinese room; it is rather to show that formal programs differ
> in some important way from shopping lists, some important way that allows
> programs to overcome the symbol grounding problem.
>
>

The object of strong AI (human-equivalent or greater) is not merely to
process symbols. Language translation programs already do that, with
some degree of success, and everyone agrees that they are not
conscious.

Strong AI programs will indeed process symbols, but they also have the
objective of achieving results in the real world. If the AI asks for
milk and you give it water, saying 'Here is milk', it has to be able
to recognize the error (that is symbol grounding). In other words, if
the AI is unable to operate in the outside world then it is not strong
AI, and your symbol manipulation argument fails.
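
As a toy illustration (my own sketch, not anything from a real AI
system), 'grounding' the symbol "milk" means checking it against what
the senses report rather than against other symbols. All the names
below are hypothetical:

# Toy sketch of a symbol grounding check: a symbol is "grounded" if it
# matches perception, not just other symbols. Names are hypothetical.

# Pretend sensor readings for a liquid sample: (color, opacity)
PERCEPTUAL_PROTOTYPES = {
    "milk":  {"color": "white", "opacity": "opaque"},
    "water": {"color": "clear", "opacity": "transparent"},
}

def grounded_check(claimed_symbol, observed):
    """Return True only if the claimed symbol matches the sensor data."""
    prototype = PERCEPTUAL_PROTOTYPES.get(claimed_symbol)
    return prototype is not None and prototype == observed

# Someone hands the AI water while saying "Here is milk":
observation = {"color": "clear", "opacity": "transparent"}
print(grounded_check("milk", observation))   # False - the error is detectable
print(grounded_check("water", observation))  # True

A shopping list obviously cannot run any such check against the world;
the claim is that a strong AI must.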

Now if you extend your argument a bit....

If a strong AI has human sense equivalents such as vision, hearing,
taste and touch, plus symbol manipulation, all to such a level that
it can operate successfully in the world, then you have a processor
which could pass for human.

You can then try asking it if it is conscious and see what answer you get......


BillK
