[ExI] Semiotics and Computability

Gordon Swobe gts_2000 at yahoo.com
Sat Feb 6 19:27:40 UTC 2010

--- On Fri, 2/5/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> In your thought experiment, the artificial
>> program-driven neurons will require a lot of work for the
>> same reason that programming weak AI will require a lot of
>> work. We're not there yet, but it's within the realm of
>> programming possibility.
> The artificial neurons (or subneuronal or multineuronal
> structures, it doesn't matter)...

If it doesn't matter, then let's keep it straightforward and refer to artificial brains rather than to artificial neurons surgically inserted into the midst of natural neurons. This will eliminate many uncertainties that arise from our present state of ignorance about neuroscience.

> exhibit the same behaviour as the natural equivalents,
> but lack consciousness. 

In my view, an artificial brain can exhibit the same intelligent behaviors as a natural brain (where we define behavior as, for example, acts of speech) without having subjective mental states.

> That's all you need to know about them: you don't have to worry how 
> difficult it was to make them, just that they have been made (provided 
> it is logically possible). Now it seems that you allow that such 
> components are possible, but then you say that once they are installed 
> the rest of the brain will somehow malfunction and needs to be tweaked. 
> That is the blatant contradiction: if the brain starts behaving 
> differently, then the artificial components lack
> the defining property you agreed they have.

As above, let's avoid a lot of confusion and speak of brains rather than of individual neurons.
