[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Tue Dec 22 12:38:21 UTC 2009


--- On Mon, 12/21/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> you suggested that (a) would be the case, but then seemed to backtrack:

I suggested (a) would be the case if we replaced all neurons with your programmatic neurons. 
 
> If you don't believe in a soul then you believe that at least some of
> the neurons in your brain are actually involved in producing the
> visual experience. It is these neurons I propose replacing with
> artificial ones that interact normally with their neighbours but lack
> the putative extra ingredient for consciousness. The aim of the
> exercise is to show that this extra ingredient cannot exist, since
> otherwise it would lead to one of two absurd situations: (a) you would
> be blind but you would not notice you were blind; or (b) you would
> notice you were blind but you would lose control of your body, which
> would smile and say everything was fine.

I suppose (b) makes sense for the partial replacement scenario you want me to consider. If it seems bizarre, well, then so too does the thought experiment!

And how does it in any way speak to the issue at hand? As the title of the thread says, our concern here is the symbol grounding problem in strong AI, or more generally "understanding" in S/H systems. To target Searle's argument (as you want to do, and which I appreciate), we need to use your nano-neuron thought experiments to somehow undermine his position that programs do not have semantics.


-gts