[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Fri Dec 18 22:50:06 UTC 2009


2009/12/19 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Fri, 12/18/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> The level of description which you call a computer program
>> is, in the final analysis, just a set of rules to help you figure
>> out exactly how you should arrange a collection of matter so that it
>> exhibits a desired behaviour
>
> Our task here involves more than mimicking intelligent human behavior (weak AI). Strong AI is not about the behavior of neurons or brains or computers. It's about *mindfulness*.
>
> I don't disagree (nor would Searle) that artificial neurons such as those you describe might produce intelligent human-like behavior. Such a machine might seem very human. But would it have intentionality as in strong AI, or merely seem to have it as in weak AI?
>
> If programs drive your artificial neurons (and they do), then Searle rightfully challenges you to show how those behavior-driving programs can in some way constitute a mind; that is, he challenges you to show that you have not merely invented weak AI, which he does not contest.

Could you say what you think you would experience and how you would
behave if these artificial neurons were swapped for some of your
biological neurons? I have asked this several times and you have
avoided answering.

>> That you can describe the chemical reactions in the brain
>> algorithmically should not detract from the brain's consciousness,
>
> True.
>
>> so why should an algorithmic description of a computer in action
>> detract from the computer's consciousness?
>
> Programs that run algorithms do not and cannot have semantics. They do useful things but have no understanding of the things they do, unless of course Searle's formal argument has flaws, and that is precisely what is at issue here.

Suppose we encounter a race of intelligent aliens. Their brains are
nothing like either our brains or our computers, using a combination
of chemical reactions, electric circuits, and mechanical nanomachinery
to do whatever it is they do. We would dearly like to kill these
aliens and take their technology and resources, but in order to do
this without feeling guilty we need to know whether they are
conscious. They behave as if they are conscious and they insist that
they are, but of course unconscious beings may do that as well. Nor
does evidence that they evolved naturally convince us, since there is
nothing to stop nature from giving rise to weak AI machines.
So how do we determine whether the activity in the alien brains is
some fantastically complex program running on fantastically complex
architecture? And if we decide that it is, does that mean that the
aliens are not conscious?


-- 
Stathis Papaioannou
