[ExI] The digital nature of brains (was: digital simulations)
Stathis Papaioannou
stathisp at gmail.com
Mon Jan 25 05:24:23 UTC 2010
2010/1/25 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Sat, 1/23/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>>> Digital simulations of non-digital objects never equal
>>> the things they simulate, except that some people here like
>>> to imagine so.
>>
>> It is true that a digital simulation is not the same as the
>> original, but the question is whether it performs the same function
>> as the original. A simulated apple could taste, feel, and smell like
>> a real apple to a person with a lot of extra equipment, which I'm
>> sure computer game developers are working on.
>
> Game developers create many kinds of illusions. Here we concern ourselves with reality.
We agree that we are not reproducing the actual apple or brain but
creating an engineered simulacrum. The question is whether this
simulacrum will have the properties of the original. In cataract
surgery, the lens in the eye is removed and replaced with a synthetic
lens, which is definitely not the same thing as the biological
original but is just as good functionally (indeed, better, since the
patient had a cataract).
>> A simulated brain will not be identical to a real brain, but you
>> seem to agree that it could display the same behaviour as a real
>> brain if we added appropriate sensory organs and effectors.
>> However, you claim that although every other function of the brain
>> could be reproduced by the simulated brain, the consciousness can
>> never be reproduced.
>
> Never by a s/h system, but such a system could perform all the visible functions of a conscious brain. We can in principle create weak AI, defined as software running on hardware that exhibits all the behaviors of a natural brain. By behavior I mean the physical behavior of the apparatus to which it is attached, including its outputs. We can, in other words, create unconscious robots with weak AI that pass the Turing test.
This is your belief, but you don't provide a valid supporting reason.
According to the very article you cited, the symbol grounding problem
is not a problem for a s/h system. You claim that whether or not the
symbols are grounded they won't have "meaning", but you don't explain
why symbol grounding is not sufficient for "meaning". You can't
explain "meaning" at all, other than as a mysterious something that
you have and that computers, which appear to have it, don't.
>> But if that were so, it would allow for the possibility
>> that you are a zombie and don't realise it, which you agree
>> is absurd.
>> Therefore, you MUST agree that it is impossible to
>> reproduce all the functions of the brain without also reproducing
>> consciousness.
>
> But I don't.
>
>> It is still open to you to claim that a computer could never
>> reproduce human intelligence (and therefore never reproduce human
>> consciousness);
>
> We can create the outward appearance of human intelligence.
Therefore, if we replaced parts of the brain with such systems, we
could selectively remove any aspect of a human's consciousness without
them or anyone else realising it, since their behaviour, including
their reports of their own experiences, would be unchanged. You've
said you don't like the implications of this statement, but you
haven't shown how it is false.
--
Stathis Papaioannou