[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Sun Dec 27 01:16:40 UTC 2009


--- On Fri, 12/25/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

>> I do however assume that natural neurons do not run
>> formal programs like those running now on your computer. (If
>> they do then I must wonder who wrote them.)
> 
> Natural neurons do not run human programming languages but
> they do run algorithms, insofar as their behaviour can be described
> algorithmically. 

We cannot assume that, merely because we can describe a given natural process algorithmically, the process must therefore happen as a result of that supposed algorithm actually running somewhere as a program!
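To make the point concrete, here is a minimal sketch (my own toy illustration in Python, not anything from your post) that describes a falling ball algorithmically. The description predicts the ball's behaviour perfectly well, yet nothing in nature executes these statements:

    # A falling ball described algorithmically (Euler integration).
    # The loop predicts the ball's behaviour, but the ball itself
    # does not run this program anywhere.
    g = 9.8       # gravitational acceleration, m/s^2
    dt = 0.01     # time step, s
    height, velocity = 100.0, 0.0
    while height > 0:
        velocity += g * dt       # apply the "rule" for speed
        height -= velocity * dt  # apply the "rule" for position
    print("ground reached at ~%.1f m/s" % velocity)

The ball conforms to the description; it does not consult it.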

> At the lowest level there is a small set of rules, the laws of physics, 
> which rigidly determine the future state and output of the neuron from 
> the present state and input. 

It looks like you want to liken these supposed lowest-level laws of physics to a program. But where does that supposed program run?
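One can of course write such a rule down as a program. Here is a minimal sketch (again my own toy example in Python, a leaky integrate-and-fire update, not a claim about how real neurons work) in which the neuron's future state is rigidly determined by its present state and input:

    # Toy deterministic neuron: the next state depends only on the
    # present state and the input (leaky integrate-and-fire update).
    def step(v, input_current, dt=1.0, tau=10.0, threshold=1.0):
        v = v + dt * (-v / tau + input_current)  # leaky integration
        if v >= threshold:                       # fire and reset
            return 0.0, True
        return v, False

    v = 0.0
    for t in range(50):
        v, spiked = step(v, input_current=0.15)
        if spiked:
            print("spike at t =", t)

Writing the rule down is easy; the question remains where, if anywhere, nature runs it.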

> That the computer was engineered and the neuron evolved should make
> no difference: if running a program destroys consciousness
> then it should do so in both cases. 

Well, if you read my post from the other day (you never replied to the relevant portion of it), I allowed that if the programs replace only a negligible part of the material brain processes they simulate, then they would negate the subject's intentionality/consciousness to a similarly negligible degree.

>> You have not shown that the effects that concern us
>> here do not emanate in some way from the interior behaviors
>> and structures of neurons. As I recall the electrical
>> activities of neurons take place inside them, not outside
>> them, and it seems very possible to me that this internal
>> electrical activity has an extremely important role to
>> play.
> 
> The electrical activity consists in a potential difference
> across the neuron's cell membrane due to ion gradients. However, to 
> be sure you have correctly modelled the behaviour of the neuron...
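
The potential difference you describe can indeed be calculated from the ion gradients, for instance with the Nernst equation. A minimal sketch (my own illustration in Python, assuming typical textbook mammalian concentrations, not a model of any complete neuron):

    # Nernst equation: equilibrium potential for one ion species,
    # computed from its concentrations on either side of the membrane.
    # E = (R*T / (z*F)) * ln([ion]_outside / [ion]_inside)
    import math

    R = 8.314    # gas constant, J/(mol*K)
    T = 310.0    # body temperature, K
    F = 96485.0  # Faraday constant, C/mol

    def nernst(z, c_out, c_in):
        return (R * T) / (z * F) * math.log(c_out / c_in)

    # Illustrative textbook concentrations in mM.
    print("E_K  = %+.1f mV" % (1000 * nernst(+1, 5.0, 140.0)))   # ~ -89 mV
    print("E_Na = %+.1f mV" % (1000 * nernst(+1, 145.0, 12.0)))  # ~ +66 mV

But again, that we can calculate the potential says nothing about a program actually running inside the neuron.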

In the next day or so, if time allows, I will write a separate post for the sole purpose of explaining what I see as the logical fallacy in your behaviorist/functionalist arguments. I wrote one already (the post with the "0-0-0-0" diagram), but I see it didn't leave any lasting impression on you, even though you never offered any counter-arguments. So I'll try putting another one together.

-gts 
