[ExI] The symbol grounding problem in strong AI

Brent Allsop brent.allsop at canonizer.com
Wed Dec 16 03:18:32 UTC 2009


Hi Gordon,

Very interesting argument, this 0-0-0-0 one you make.  I've never heard 
it before.  You're getting very close to what is important with this.  
You are flipping what is important between the - and the 0, and pointing 
out that it is still a problem either way.  Perhaps we should canonize 
this argument?

And, just FYI, Stathis is in the camp argued for by Chalmers 
(Functional Equivalence; see: http://canonizer.com/topic.asp/88/8), and 
if this tentative survey at canonizer.com is an early indicator, 
clearly this Chalmers camp has more expert consensus than any other 
camp at this level.  All these people clearly do think it is a "logical 
necessity" that they be right.  As you point out, they also 
recognize the conundrum with their 'logical necessity'.  This is why 
they all call it a 'hard problem'.

However, there is another camp, in a strong second-place consensus 
position, which disagrees with this view and for which there is no 
conundrum or 'hard problem'.  This is the 'nature has phenomenal 
properties' camp here:

http://canonizer.com/topic.asp/88/7

This camp asserts that all these people are making a logical error when 
they argue that this is a 'logical necessity'.  The 'fallacy' is 
described in the transmigration fallacy camp here:

http://canonizer.com/topic.asp/79/2

Brent Allsop


Gordon Swobe wrote:
> --- On Tue, 12/15/09, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> ... the neighbouring neurons *must* respond in the same way with
>> the artificial neurons in place as with the original neurons.
>
> Not so. If you want to make an argument along those lines, then I will point out that an artificial neuron must behave in exactly the same way to external stimuli as does a natural neuron if and only if the internal processes of that artificial neuron exactly match those of the natural neuron. In other words, we can know for certain only that natural neurons (or their exact clones) will behave exactly like natural neurons. 
>
> Another way to look at this problem of functionalism (the real issue here, I think)...
>
> Consider this highly simplified diagram of the brain:
>
> 0-0-0-0-0-0
>
> The zeros represent the neurons, the dashes represent the relations between neurons, presumably the activities in the synapses. You contend that provided the dashes exactly match the dashes in a real brain, it will make no difference how we construct the zeros. To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity". 
>
> It seems very clear then that in your view the zeros merely play a functional role in supporting the seat of consciousness, which you see in the dashes. 
>
> Your theory may seem plausible, and it does allow for the tantalizing extropian idea of nano-neurons replacing natural neurons. 
>
> But before we become so excited that we forget the difference between a highly speculative hypothesis and something we must consider true by "logical necessity", consider a theory similar to yours but contradicting yours: in that competing theory the neurons act as the seat of consciousness while the dashes merely play the functional role. That functionalist theory of mind seems no less plausible than yours, yet it does not allow for the possibility of artificial neurons.
>
> And neither functionalist theory explains how brains become conscious!
>
> -gts



