[ExI] Semiotics and Computability
stathisp at gmail.com
Fri Feb 5 15:25:34 UTC 2010
On 6 February 2010 02:00, Gordon Swobe <gts_2000 at yahoo.com> wrote:
> I think you've misrepresented or misunderstood me here. Where in the same breath did I say these things?
> In your thought experiment, the artificial program-driven neurons will require a lot of work for the same reason that programming weak AI will require a lot of work. We're not there yet, but it's within the realm of programming possibility.
The artificial neurons (or subneuronal or multineuronal structures, it
doesn't matter) exhibit the same behaviour as the natural equivalents,
but lack consciousness. That's all you need to know about them: you
don't have to worry about how difficult it was to make them, just that
they have been made (provided it is logically possible). Now it seems that
you allow that such components are possible, but then you say that
once they are installed the rest of the brain will somehow malfunction
and need to be tweaked. That is the blatant contradiction: if the
brain starts behaving differently, then the artificial components lack
the defining property you agreed they have.