[ExI] digital simulations, descriptions and copies

Stathis Papaioannou stathisp at gmail.com
Sat Jan 23 10:15:33 UTC 2010


On 23 January 2010 00:23, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Fri, 1/22/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> At some point, there must be an association between a
>> symbol and one of the special symbols which are generated by sensory
>> data. Then the symbol is "grounded".
>
> You misunderstand symbol grounding. It's not about association of symbols with other symbols, per se. It's about comprehension of those symbols.
>
> The good people at Merriam Webster associate words with other words on paper and then publish those printed associations. Those words and their associated words are grounded only to the extent that some agent comprehends their meanings.
>
> If every agent capable of comprehending word meanings died suddenly, they would leave behind dictionaries filled with ungrounded symbols. The words defined in those dictionaries would remain physically associated with the words in their definitions, but nobody would be around to know what any of the symbols meant. The words would remain associated but they would become ungrounded.
>
> http://en.wikipedia.org/wiki/Symbol_grounding

That article actually says that symbol grounding *is* possible in a
computer with external input. However, it goes on to say that grounding
may be necessary but not sufficient for meaning: perhaps something more
is needed. And that is what we have been debating. It seems
to me that there is no basis for claiming that meaning is something
over and above symbol grounding, to be provided by a mysterious and
undetectable consciousness. Some philosophers and scientists react to
the idea by saying that consciousness does not exist, but that is
going too far: consciousness does exist, but it doesn't exist as
something over and above the information processing underpinning it.

Consciousness is just what happens when your brain processes
information, and there is no reason to assume that it wouldn't also
happen if another brain, whatever its substrate, processed
information in the same way. However, I admit that it isn't
immediately obvious that consciousness *must* arise in a brain
designed to perform the same functions on a different substrate. That is
why I have assumed for the sake of argument that consciousness and
observable behaviour can be separated, as you suggested. This idea
then leads to the possibility that you could be zombified and not
realise it, which I think is absurd. You have agreed that it is
absurd, so absurd that you could hardly stand to think about it. You
have also not brought up any valid objection to the reasoning whereby
the possibility of zombie brain components leads to this absurdity
(initially you said that components which behave just like natural
components would not behave just like natural components, but I take
it you now see that this is not a valid objection). So, given this,
can I now assume that you agree with me that it is *not* possible
to separate consciousness from brain behaviour? You haven't said so
explicitly, but your later responses imply it.


-- 
Stathis Papaioannou


