[ExI] Semiotics and Computability

Gordon Swobe gts_2000 at yahoo.com
Fri Feb 5 15:00:11 UTC 2010

--- On Fri, 2/5/10, Stathis Papaioannou <stathisp at gmail.com> wrote:

> > --- On Fri, 2/5/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
> >
> >>> Not impossible at all! Weak AI that passes the Turing
> >>> test is entirely possible. It will just take a lot of hard
> >>> work to get there.
> >>
> >> Yes, but then when pressed you say that such a brain or
> >> brain component would *not* behave exactly like the natural
> >> equivalent!
> >
> > I've said that such an artificial neuron/brain will
> > require a lot of work before it behaves like the natural
> > equivalent. This is why the surgeon in your thought
> > experiment must keep replacing and re-programming your
> > artificial neurons until finally he creates a patient that
> > passes the TT.
>
> You agree that the artificial neuron will perfectly
> replicate the behaviour of the natural neuron it replaces, and in the
> same breath you say that the brain will start behaving differently and
> the surgeon will have to make further adjustments! Do you really not
> see that this is a blatant contradiction?

I think you've misrepresented or misunderstood me here. Where in the same breath did I say those two things?

In your thought experiment, the artificial program-driven neurons will require a lot of work for the same reason that programming weak AI will require a lot of work. We're not there yet, but it's within the realm of programming possibility.
