[ExI] The symbol grounding problem in strong AI
Gordon Swobe
gts_2000 at yahoo.com
Sat Dec 19 00:21:00 UTC 2009
--- On Fri, 12/18/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>> If programs drive your artificial neurons (and they
>> do) then Searle rightfully challenges you to show how those
>> programs that drive behavior can in some way constitute a
>> mind, i.e., he challenges you to show that you have not
>> merely invented weak AI, which he does not contest.
>
> Could you say what you think you would experience and how
> you would behave if these artificial neurons were swapped
> for some of your biological neurons? I have asked this several
> times and you have avoided answering.
Because I represent Searle here (even as I criticize him on another discussion list), I will say that I think my consciousness might very well fade away in proportion to the number of neurons you replaced that were relevant to it.
This could happen even as I continued to behave in a manner consistent with intelligence. In other words, it seems to me that I would change gradually from a living example of strong AI to a living example of weak AI.
>> Programs that run algorithms do not and cannot have
>> semantics. They do useful things but have no understanding
>> of the things they do. Unless of course Searle's formal
>> argument has flaws, and that is what is at issue here.
>
> Suppose we encounter a race of intelligent aliens.
> Their brains are nothing like either our brains or our computers,
> using a combination of chemical reactions, electric circuits,
> and mechanical nanomachinery to do whatever it is they do. We
> would dearly like to kill these aliens and take their technology
> and resources, but in order to do this without feeling guilty we
> need to know if they are conscious. They behave as if they are
> conscious and they insist they are conscious, but of course
> unconscious beings may do that as well. Neither does evidence that
> they evolved naturally convince us, since
> there is nothing to stop nature from giving rise to weak AI
> machines. So, how do we determine if the activity in the alien
> brains is some fantastically complex program running on fantastically
> complex architecture...
I notice first that we need to ask ourselves the same question (as many here no doubt already have): how do we know for certain whether the human brain does anything more than run some fantastically complex program on some fantastically complex architecture?
I think that if my brain runs any programs then it must do something else too. I understand the symbols that my mind processes, and having studied Searle's arguments carefully, I simply do not see how a mere program can do the same, no matter how complex.
As for how we would know about the aliens... I think we would need to present them with the same arguments and information and ask them to use reason and logic to decide for themselves, just as I ask you to decide for yourself.
>... and if we decide that it is, does that mean that the
> aliens are not conscious?
Yes.
-gts