[ExI] Is Artificial Life Conscious?

Stathis Papaioannou stathisp at gmail.com
Tue May 3 22:25:33 UTC 2022


On Wed, 4 May 2022 at 07:06, Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Hi Jason,
> We continue to talk past each other.  I agree with what you are saying
> but...
> [image: 3_robots_tiny.png]
> First off, you seem to be saying you don't care about the fact that the
> first two systems represent the abstract notion of red with different
> qualities, and that they achieve their Turing completeness in different
> ways.
> If that is the case, why are we talking?  I want to know what your
> redness knowledge is like; you don't seem to care about anything other
> than the fact that all these systems can tell you the strawberry is red
> and are all Turing complete.
>
> In addition to Turing completeness, what I am interested in is the
> efficiency with which computation can be accomplished by different models.
> Is the amount of hardware used in one model more than is required in
> another?
> The reason there are only a few registers in a CPU is the extremely
> brute-force way you must do computational operations like addition and
> comparison when using discrete logic.  It takes far too much hardware to
> have more than a handful of registers that can be computationally bound
> to each other at any one time.  Whereas if knowledge composed of redness
> and greenness is a standing wave in neural-tissue EM fields, every last
> pixel of knowledge can be meaningfully bound to all the other pixels in a
> 3D standing wave far more efficiently.  If standing waves require far
> less hardware to do the same amount of parallel computational binding,
> that is what I'm interested in.  Both are Turing complete, but one is far
> more efficient than the other.
>
> Similarly, in order to achieve substrate independence, as in the 3rd
> system in the image, you need additional dictionaries to tell you whether
> redness, greenness, +5 volts, or anything else is representing the binary
> 1 or the word 'red'.  Virtual machines, capable of running on different
> lower-level hardware, are less efficient than machines running on native
> hardware, because they require an additional translation layer to enable
> virtual operation on different types of hardware.  The first two systems,
> which represent information directly on qualities, do not require the
> additional dictionaries needed to achieve substrate independence as
> architected in the 3rd system.  So, again, the first two systems are more
> efficient, since they require less mapping hardware.
>


Substrate independence is not something that is “achieved”; it is just the
way things work. Hammering is substrate independent because it is
impossible to separate hammering from the behaviour associated with
hammering: you can make a hammer out of many different materials, even
though a particular set of materials may be more durable and easier to
work with. Similarly, it is impossible to separate qualia from the
behaviour associated with qualia (the abstract properties, as you call
them), because otherwise you could make a partial zombie, and you have
agreed that is absurd.
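
To make that concrete, here is a toy Python sketch (purely illustrative;
the class names and internal states are invented for the example, not
anyone's proposed model): two systems hold the strawberry knowledge in
completely different internal forms, one of them through an extra
dictionary layer of the kind you describe, yet their outward behaviour is
identical.

    # Two "systems" with different internal representations of the
    # strawberry's colour, one using an extra dictionary (translation
    # layer), yet both produce exactly the same outward behaviour.

    class DirectSystem:
        """Holds the colour in a form that needs no further lookup."""
        def __init__(self):
            self._state = "red"               # directly meaningful to the system

        def report_colour(self):
            return "the strawberry is " + self._state

    class AbstractSystem:
        """Holds the colour as an arbitrary token plus a lookup dictionary."""
        def __init__(self):
            self._state = 0b1                 # arbitrary binary token
            self._dictionary = {0b1: "red"}   # the extra mapping layer

        def report_colour(self):
            return "the strawberry is " + self._dictionary[self._state]

    for system in (DirectSystem(), AbstractSystem()):
        print(system.report_colour())         # both: "the strawberry is red"

From the outside no test distinguishes the two, which is the sense in
which behaviour cannot be separated from whatever happens to realize it.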

> On Tue, May 3, 2022 at 11:34 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> If you agree with the concept of the Church-Turing Thesis, then you
>> should know that "wave computation" cannot be any more capable than the
>> "discrete logic gate" computation we use in CPUs. All known forms of
>> computation are exactly equivalent in what they can compute. If it can be
>> computed by one type, it can be computed by all types. If it can't be
>> computed by one type, it can't be computed by any type.
>>
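>> Here is a toy Python example (purely illustrative, not from any
>> particular source) of what that equivalence means in practice: the same
>> sums computed by Python's built-in arithmetic and by nothing but
>> simulated NAND gates always agree.
>>
>>     # Illustrative only: a ripple-carry adder built from nothing but
>>     # NAND gates always matches Python's built-in '+', as the
>>     # Church-Turing Thesis leads you to expect.
>>
>>     def nand(a, b):
>>         return 1 - (a & b)
>>
>>     def xor(a, b):
>>         t = nand(a, b)
>>         return nand(nand(a, t), nand(b, t))
>>
>>     def and_(a, b):
>>         return nand(nand(a, b), nand(a, b))
>>
>>     def or_(a, b):
>>         return nand(nand(a, a), nand(b, b))
>>
>>     def add(x, y, bits=8):
>>         # ripple-carry adder assembled from the gates above
>>         result, carry = 0, 0
>>         for i in range(bits):
>>             a, b = (x >> i) & 1, (y >> i) & 1
>>             result |= xor(xor(a, b), carry) << i
>>             carry = or_(and_(a, b), and_(carry, xor(a, b)))
>>         return result
>>
>>     for x in range(16):
>>         for y in range(16):
>>             assert add(x, y) == x + y   # gate-level and built-in agree
>>
>> Neither implementation can compute anything the other cannot; they
>> differ only in efficiency.
>>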
>> This discovery has major implications in the philosophy of mind,
>> especially if one rejects the possibility of zombies. It leads directly to
>> multiple realizability, and substrate independence, as Turing noted 72
>> years ago:
>>
>> “The fact that Babbage's Analytical Engine was to be entirely mechanical
>> will help us rid ourselves of a superstition. Importance is often attached
>> to the fact that modern digital computers are electrical, and the nervous
>> system is also electrical. Since Babbage's machine was not electrical, and
>> since all digital computers are in a sense equivalent, we see that this use
>> of electricity cannot be of theoretical importance. [...] If we wish to
>> find such similarities we should look rather for mathematical analogies of
>> function.”
>> -- Alan Turing in Computing Machinery and Intelligence
>> <https://heidelberg.instructure.com/courses/6068/files/190841/download?download_frd=1>
>> (1950)
>>
>>
>> Further, if you reject the plausibility of absent, fading, or dancing
>> qualia, then equivalent computations (regardless of substrate) must be
>> equivalently aware and conscious. To believe otherwise is to believe your
>> color qualia could start inverting every other second without you being
>> able to comment on it or in any way "notice" that it was happening. You
>> wouldn't be caught off guard, you wouldn't suddenly pause to notice, you
>> wouldn't alert anyone to your condition. This should tell you that
>> behavior, and the underlying functions that drive behavior, must be tied
>> to conscious experience in a very direct way.
>>
>> Jason
>>
>> On Tue, May 3, 2022 at 12:11 PM Brent Allsop <brent.allsop at gmail.com>
>> wrote:
>>
>>>
>>> OK, let me see if I am understanding this correctly.  Consider this
>>> image:
>>> [image: 3_robots_tiny.png]
>>>
>>> I would argue that all 3 of these systems are "Turing complete", and
>>> that they can all tell you the strawberry is 'red'.
>>> I agree with you on this.
>>> Which brings us to a different point: they would each answer the
>>> question "What is redness like for you?" differently.
>>> First: "My redness is like your redness."
>>> Second: "My redness is like your greenness."
>>> Third: "I represent knowledge of red things with an abstract word like
>>> 'red'; I need a definition to know what that means."
>>>
>>> You are focusing on the Turing completeness, which I agree with; I'm
>>> just focusing on something different.
>>>
>>>
>>> On Tue, May 3, 2022 at 11:00 AM Jason Resch <jasonresch at gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 3, 2022 at 11:23 AM Brent Allsop via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>> Surely the type of wave computation being done in the brain is far
>>>>> more capable than the discrete logic gates we use in CPUs.
>>>>>
>>>>>
>>>> This comment above suggests to me that you perhaps haven't come to
>>>> terms with the full implications of the Church-Turing Thesis
>>>> <https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis> or the
>>>> stronger Church-Turing-Deutsch Principle
>>>> <https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle>
>>>> .
>>>>
>>>> Jason
>>>>
-- 
Stathis Papaioannou
Attachment: 3_robots_tiny.png (image/png, 26214 bytes)
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20220504/4acc0e7c/attachment-0002.png>

