[ExI] Meaningless Symbols.
Gordon Swobe
gts_2000 at yahoo.com
Mon Jan 11 12:42:41 UTC 2010
--- On Sun, 1/10/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
> 2010/1/11 John Clark <jonkc at bellsouth.net>:
>
>> If the computer did understand the meaning you think
>> the machine would continue to operate exactly as it did before,
>> back when it didn't have the slightest understanding of anything.
>> So, given that understanding is a completely useless property why
>> should computer scientists even bother figuring out ways to make a
>> machine understand? Haven't they got anything
>> better to do?
>
> Gordon has in mind a special sort of understanding which
> makes no objective difference and, although he would say it makes a
> subjective difference, it is not a subjective difference that a person
> could notice.
Not so. The conscious intentionality I have in mind certainly does make a tremendously important subjective difference in every person. You and I have it, but vegetables and unconscious philosophical zombies do not. It appears that software/hardware systems likewise do not and cannot have it.
John makes the point that one might argue it does not matter, from a practical point of view, whether software/hardware systems can have consciousness. I don't disagree, and I have no axe to grind on that subject. I don't pretend to defend strong AI research in software/hardware systems.
My interest concerns what the seeming hopelessness of strong AI research in s/h systems means for us as humans. It tells us something important about the philosophy of mind.
I wonder if everyone understands that if strong AI cannot work in digital computers, then it follows that "uploading" cannot work either, as that term is normally used here.
-gts