[ExI] The symbol grounding problem in strong AI

Gordon Swobe gts_2000 at yahoo.com
Mon Dec 28 12:47:32 UTC 2009


--- On Sun, 12/27/09, Stathis Papaioannou <stathisp at gmail.com> wrote:

> Let's assume the seat of consciousness is in the
> mitochondria. You need to simulate the activity in mitochondria 
> because otherwise the artificial neurons won't behave normally: 

Your second sentence creates a logical contradiction. If real biological processes in the mitochondria act as the seat of consciousness, and if conscious experience plays a role in behavior (including the behavior of neurons), then on Searle's view we cannot replace those real processes with abstract formal programs (thereby compromising the subject's consciousness) and still expect those neurons, and therefore the organism, to behave "normally".

> If the replacement neurons behave normally in their
> interactions with the remaining brain, then the subject *must* 
> behave normally. 

But your replacement neurons *won't* behave normally, so your conclusion doesn't follow. You've short-circuited the feedback loop between experience and behavior.

Your thought experiment might make more sense if we were testing the theories of an epiphenomenalist, who believes conscious experience plays no role in behavior, but Searle adamantly rejects epiphenomenalism for the same reasons most people do.

Getting back to my original point, science at present has almost no idea how to define the so-called "seat of consciousness" (what I prefer to call the neural correlates of consciousness, or NCC). In real terms, we simply don't know what happened in George Foreman's brain that caused him to lose consciousness when Ali delivered the KO punch. For that reason, artificial neurons such as those you have in mind remain extremely speculative, whether for use in thought experiments or otherwise. It seems to me that we cannot prove anything whatsoever with them.

-gts
