[ExI] Semiotics and Computability
stathisp at gmail.com
Sun Feb 7 08:25:43 UTC 2010
On 7 February 2010 06:27, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> --- On Fri, 2/5/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>>> In your thought experiment, the artificial
>>> program-driven neurons will require a lot of work for the
>>> same reason that programming weak AI will require a lot of
>>> work. We're not there yet, but it's within the realm of
>>> programming possibility.
>> The artificial neurons (or subneuronal or multineuronal
>> structures, it doesn't matter)...
> If it doesn't matter, then let's keep it straightforward and refer to artificial brains rather than to artificial neurons surgically inserted into the midst of natural neurons. This will eliminate a lot of uncertainties that arise from the present state of ignorance about neuroscience.
It is a basic requirement of the experiment that the brain replacement
be *partial*. This is in order to demonstrate that there is a problem
with the idea that a brain part could have normal behaviour but lack
consciousness. Once it is demonstrated that the brain parts must have
consciousness, it should be obvious that an entirely artificial brain
made out of these parts will also be conscious.
It is true that we don't at present have the capability to make such
artificial brains or neurons, but I have asked you to assume that we
do. Surely this is no more difficult than imagining the Chinese Room!
The Chinese Room is logically possible but probably physically
impossible, while artificial neurons may even become available in our
lifetimes.
>> exhibit the same behaviour as the natural equivalents,
>> but lack consciousness.
> In my view an artificial brain can exhibit the same intelligent behaviors as a natural brain without having subjective mental states where we define behavior as, for example, acts of speech.
>> That's all you need to know about them: you don't have to worry how
>> difficult it was to make them, just that they have been made (provided
>> it is logically possible). Now it seems that you allow that such
>> components are possible, but then you say that once they are installed
>> the rest of the brain will somehow malfunction and needs to be tweaked.
>> That is the blatant contradiction: if the brain starts behaving
>> differently, then the artificial components lack
>> the defining property you agreed they have.
> As above, let's save a lot of confusion and speak of brains rather than individual neurons.
Is there anyone out there still following this thread who is confused
by my description of the thought experiment or doesn't understand its
rationale? Please email me off list if you prefer.