[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Tue Dec 29 01:35:34 UTC 2009


2009/12/29 Gordon Swobe <gts_2000 at yahoo.com>:
> --- On Mon, 12/28/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> Well, I think you've finally understood the problem. If
>> indeed there is something in the physics of neurons that is not
>> computable, then we won't be able to make artificial neurons based on
>> computation that behave like biological neurons.
>
> It seems you still don't understand the position that you really want (or should want) to refute here. :)
>
> 1) I do not believe anything about the physics of neurons makes them impossible to compute. We can in principle make exact blueprints of real neurons with and on a computer.
>
> 2) I believe we can in principle create neurons "based on" those computer blueprints, just as we can make anything from blueprints, and that those manufactured neurons will behave exactly like natural neurons.
>
> 3) I do *not* however believe that any neurons you might manufacture that contain computer simulations (i.e., formal programs) in place of the natural processes that correlate with consciousness will act like natural neurons. The reason for this is simple: computer simulations of things do not equal the things they simulate. They contain the forms of things but not the substance of things.
>
>> But Searle claims that weak AI *is* possible.
>> He even alludes to Church's thesis to support this:
>
> Yes, and so you think Searle is hoist with his own petard!
>
>> However, Searle thinks that although the behaviour of the
>> brain can be replicated by a computer, the consciousness cannot.
>
> Consciousness can be replicated on a computer in much the same way as a cartoonist replicates it.
>
> The simplest kind of cartoonist puts a little cloud over his character's head and types words into it to "replicate" consciousness. A more advanced cartoonist will add a time dimension by adding several frames to his cartoon. An even more advanced cartoonist will make a computer animation and add audio to replace the clouds (or simply show the character thinking). A yet more advanced cartoonist will make his cartoon into a 3-D hologram. At the most sophisticated level the cartoonist will create a perfect computer model of a real brain and insert that baby into his hologram, creating weak AI.
>
> But no matter how sophisticated the cartoon gets, it remains just a cartoon. It never magically turns into the real McCoy.

Before proceeding, I would like you to say what you think you would
experience if some of your neurons were replaced with artificial
neurons that behave externally like biological neurons but, being
tainted with programming, lack understanding. I assumed from your
previous post that you were saying you would experience something
different and behave differently, because it's not possible to make
artificial neurons that behave normally; but this post says otherwise,
leaving me confused as to your position.


-- 
Stathis Papaioannou
