[ExI] Digital Consciousness.

Brent Allsop brent.allsop at canonizer.com
Tue May 7 15:19:39 UTC 2013


Hi Stathis,



OK, perhaps we have a problem where we are not on the same page with our
assumptions, or at least with my attempt to work from the same assumptions
as you.  So let’s see how close I am.  Let me know if you disagree with any
of the following:



1.      There is some physical neural correlate responsible for a redness
experience, including the possibility that this correlate may be purely
functional, i.e. any “functional isomorph”.  In other words, you are
predicting that it doesn’t matter what hardware is used, but there must be
some hardware that is the “functional isomorph” responsible for the redness
quale functionality.

2.      The “functional isomorph” for redness will be at least functionally
different from the one for greenness, in a way that lets you observe the
necessary and sufficient causal properties of the physical matter on which
this “functional isomorph” is implemented.  In other words, you will be
able to build a system that can reliably determine whether someone is
experiencing a redness quale rather than greenness, and so on.  Given this,
you will be able to build a ‘comparator’ or binding neuron or system that
tells you a redness functional isomorph has been detected.

3.      The detection system has an abstracted output.  It will output a
“1” if it detects a redness functional isomorph and a “0” if it detects a
greenness isomorph.

4.      By definition, this abstracted “1” output is just an abstract
representation of the redness functional isomorph being detected.  This
abstracted “1” can be anything, perhaps the abstract word 'red', with any
amount of complexity; by definition it just can't be a functional isomorph
of redness.  The important part here is that you don’t know what the
abstracted output “1” signal represents unless you ground its meaning by
looking it up in a dictionary and getting back a real redness functional
isomorph.  The "1", or whatever is representing it, is by definition not a
redness functional isomorph, and its relationship to a redness functional
isomorph is completely arbitrary, defined only in a map or dictionary (see
the sketch after this list).

5.      Now, our goal with the neural substitution is to replace everything
one piece at a time, including the real functional isomorphs of redness,
and to replace them with the abstracted “1”, which by definition is not a
redness functional isomorph.
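
To make points 2 through 4 concrete, here is a minimal sketch in Python.
Every name in it (detect_isomorph, GROUNDING_DICTIONARY, the glutamate and
glycine strings) is a made-up stand-in for whatever the real correlates
turn out to be, not a claim about actual neuroscience:

    # Hypothetical stand-ins for whatever physical or functional properties
    # actually implement the redness and greenness functional isomorphs.
    REDNESS_ISOMORPH = "glutamate-like causal properties"
    GREENNESS_ISOMORPH = "glycine-like causal properties"

    def detect_isomorph(observed_properties):
        # Point 3: the output is abstracted -- a bare 1 for a redness
        # isomorph, a 0 for a greenness isomorph.
        if observed_properties == REDNESS_ISOMORPH:
            return 1
        if observed_properties == GREENNESS_ISOMORPH:
            return 0
        raise ValueError("no known isomorph detected")

    # Point 4: the abstract output only means 'redness' via an arbitrary
    # map; nothing about the integer 1 is itself a redness functional
    # isomorph.
    GROUNDING_DICTIONARY = {1: REDNESS_ISOMORPH, 0: GREENNESS_ISOMORPH}

    token = detect_isomorph(REDNESS_ISOMORPH)   # an abstract "1"
    referent = GROUNDING_DICTIONARY[token]      # meaning recovered only by lookup

The only point of the sketch is that the token carries no redness of its
own; swap the dictionary around and the very same "1" would "mean"
greenness instead.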



Are we on the same page with all that being theoretically possible?



Brent Allsop




On Tue, May 7, 2013 at 8:21 AM, Stathis Papaioannou <stathisp at gmail.com> wrote:

>
>
> On Tuesday, May 7, 2013, Brent Allsop wrote:
>
>>
>>
>> Hi Stathis,
>>
>>
>>
>> You said:
>>
>>
>>
>> “the binding system [behaves] in the same way if it receives the same
>> input.”
>>
>>
>>
>> And this is exactly the problem.  An input, by definition, is some
>> arbitrary medium, with a hardware translation layer to transduce the
>> “input” into whatever abstracted info you want to think of it as (like a
>> “1” or a “0”).  If you have two abstracted inputs, there is some simple
>> logic, like the following, which will indicate whether they are the same:
>>
>>
>>
>>             I           R         I = R
>>             0           0           1
>>             0           1           0
>>             1           0           0
>>             1           1           1
>>
>>
>>
>> Whatever arbitrary stuff you use to implement this logic will produce the
>> same results.  But that is radically different from what I’m talking
>> about, from what consciousness is doing, and from why it says it has
>> detected redness.
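>>
>> To illustrate with a rough sketch (the two Python functions below are
>> arbitrary stand-ins for any hardware that realizes this logic; the names
>> are mine, purely for illustration):
>>
>>             def same_via_comparison(i, r):
>>                 # one arbitrary implementation of the table above
>>                 return 1 if i == r else 0
>>
>>             def same_via_xnor(i, r):
>>                 # a completely different mechanism, identical abstracted results
>>                 return 1 - (i ^ r)
>>
>> Run either one on any substrate you like and the abstracted outputs match
>> the table exactly, and that match is all such a system can report.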
>>
>>
>>
>> We select redness not because of some random logic that doesn’t care what
>> it is implemented on.  We select redness because of what it is
>> qualitatively like.
>>
>>
>>
>> If you think about what it must be, objectively, it must include some
>> kind of difference detection system that detects a real set of causal
>> properties (say those of glutamate), and this system will say it is real
>> glutamate if and only if it is real glutamate.  The reason it says it is
>> real glutamate is because of the real causal properties it is detecting,
>> not because of some arbitrary set of hardware configured to do the
>> abstracted logic.
>>
>>
>>
>> The binding neuron does know about the qualitative nature of redness and
>> greenness, and it only says it is redness because of its qualitative
>> property.  And of course, if you look at it objectively, because of the
>> quale interpretation problem it will just look like some system that
>> indicates it has detected redness because it has detected whatever causal
>> (or even functional, if you must) properties are responsible.
>>
>>
>>
>> Anyway, these are still crude ways of saying it, so I doubt you’ll be
>> able to get it from just this, but thanks so much for your continued help
>> as I try to find the best way to describe what I’m trying to talk about.
>>
> Unfortunately I don't see the problem you see. We could imagine replacing
> a whole person with a zombie double who slots into society unnoticed
> despite lacking qualia, so how could there be an issue with a neuron doing
> the same among fellow neurons?
>
>
> --
> Stathis Papaioannou
>