[ExI] The digital nature of brains (was: digital simulations)

Stathis Papaioannou stathisp at gmail.com
Tue Jan 26 16:34:26 UTC 2010


2010/1/27 Gordon Swobe <gts_2000 at yahoo.com>:

> As that article explains, symbol grounding requires both the ability to pick out referents and consciousness.

You are saying that understanding causes the symbol grounding. I'm
saying the symbol grounding causes the understanding.

> We, but not computers, have the ability to hold the meanings of symbols in our minds as intentional objects, and to process those meanings consciously as you do at this very moment.
>
>> You can't explain what this meaning is
>
> I just did.

You have invented meaning as a mysterious entity which is bestowed on
symbols by another mysterious entity, understanding, neither of which
you can explain any further. By Occam's razor, it's simpler and
consistent with all the known facts to say that meaning arises from
the association of symbols.
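
To make the point concrete, here is a toy sketch in Python (my own
illustration, with a made-up four-sentence corpus, not a serious
model): each symbol's profile of association with other symbols is
all the "meaning" the program has, and similarity between symbols
falls out of those associations alone, with no further
understanding-entity anywhere in the machinery.

from collections import defaultdict
from itertools import combinations
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

# Count how often each pair of symbols co-occurs within a sentence.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    for a, b in combinations(sentence.split(), 2):
        cooc[a][b] += 1
        cooc[b][a] += 1

def similarity(w1, w2):
    """Cosine similarity between two symbols' association profiles."""
    keys = set(cooc[w1]) | set(cooc[w2])
    dot = sum(cooc[w1][k] * cooc[w2][k] for k in keys)
    n1 = sqrt(sum(v * v for v in cooc[w1].values()))
    n2 = sqrt(sum(v * v for v in cooc[w2].values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# "cat" comes out closer to "dog" than to "cheese", purely from the
# web of symbol-symbol associations.
print(similarity("cat", "dog"), similarity("cat", "cheese"))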

>> but you feel that humans have it and computers
>> don't. No empirical test can ever convince you that
>> computers have it, because by definition there is no empirical test for
>> it. Apparently no analytic argument can convince you either.
>
> If I met an entity on the street that passed the TT, I would not know if that entity had semantics. However, if I also knew that entity ran only formal programs then I would know from analytic arguments that it did not.

You haven't presented an analytic argument. The symbol grounding
problem is not an analytic argument against semantics being derived
from syntax unless you question-beggingly define semantics as
something that cannot be derived from syntax. Yet you are not at all
bothered by semantics being miraculously derived from dumb matter,
and if it can be derived from dumb matter there is no reason why it
cannot be derived from the dumb matter in a computer, despite the
handicap of that dumb matter being arranged to behave in an
intelligent way. So I repeat: you have not even presented an argument
to show that computers can't think.

>>> Do you believe your desktop or laptop computer has
>>> conscious understanding of the words you type?
>>
>> No...
>
> Good. Just doing a reality check there. :)
>
> You agree that your software/hardware system does not have conscious understanding of symbols (semantics) but you also argue that digital computers can have it. Let me ask you: what would it take for your desktop computer to acquire this capacity that you insist it could have but does not have? More RAM? A faster processor? Multiple processors? A bigger hard drive? A better web-cam? A better cooling system? Better programs? What will it take?

Better software and hardware up to the task of running it, of course.
At present, the closest we have come to a computer model of the brain
is a simulation of a small sliver of rat cortex, with no clear
evidence that it is actually behaving in a physiological manner. It
might not work at all. If the whole rat brain is simulated and
spontaneously develops ratlike behaviour, then that would be evidence
that it also has rat consciousness, such as it may be. It would be
evidence of rat consciousness due to the logical impossibility of
separating consciousness from intelligent behaviour, a point I will
at this stage assume you accept by default, since you have passed up
the opportunity to show where the logical error lies.
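
For concreteness, here is what the lowest-level unit of such a
simulation might look like, sketched in Python (my own illustration
with arbitrary parameter values, not the model actually used in the
rat cortex work): a leaky integrate-and-fire neuron, millions of
which, suitably connected, make up the kind of simulation I am
describing.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Integrate a membrane voltage over time; return spike times in ms."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential, plus injected current.
        v += dt * ((v_rest - v) + i_in) / tau
        if v >= v_thresh:  # threshold crossed: spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant drive produces a regular spike train.
print(simulate_lif([20.0] * 1000))

Nothing in that loop is anything but arithmetic performed by dumb
matter, which is precisely the point at issue.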

>> Furthermore, if the computer was based on reverse engineering a human
>> brain then I would say it has to have the same consciousness as a human.
>
> I don't disagree with that, but I would not call that reverse engineered machine a software/hardware system. We may someday create conscious machines, but those machines won't look like digital computers.

Reverse engineering something means understanding it well enough to
build a functional analogue.


-- 
Stathis Papaioannou


