[ExI] Is Artificial Life Conscious?

Jason Resch jasonresch at gmail.com
Tue May 3 22:27:05 UTC 2022


On Tue, May 3, 2022 at 4:05 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Hi Jason,
> We continue to talk past each other.
>

Correct. If I raise points, corrections, and ask questions, which you
ignore, we are guaranteed to talk past each other.


> I agree with what you are saying but...
> [image: 3_robots_tiny.png]
> First off, you seem to be saying you don't care about the fact that the
> first two systems represent the abstract notion of red with different
> qualities, and that they achieve their Turing completeness in different
> ways.
>

"Turing completeness" refers to programming languages or systems that can
realize a Turing machine. This is something different from my claim that
physics (and accordingly any physical system or object) is Turing emulable
(something that can be perfectly emulated/simulated by a Turing machine
having the right program).
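To make the distinction concrete, here is a small sketch of my own (not anything from our exchange): a Turing machine emulator written in an ordinary programming language. That such an emulator can be written at all is what Turing completeness means for the host language; the emulability claim goes further and says physical systems themselves can be captured this way.

```python
# A minimal single-tape Turing machine emulator (illustrative sketch).
def run_tm(rules, tape, state, head=0, halt_state="HALT", max_steps=1000):
    """Simulate a Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right). '_' is the blank symbol.
    """
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape))

# A tiny machine that flips every bit on the tape, then halts at the blank.
flip = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", -1, "HALT"),
}

print(run_tm(flip, "1011", "scan"))  # -> 0100_
```

The machine and its rule table are hypothetical, chosen only to show that an arbitrary state-transition machine can be hosted inside another computational system.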

You have shown this image multiple times, but not asked me anything about
it. I don't see its relevance to the conversation, unless you have a
specific point to make about this image, or a question to ask me about it.



> If that is the case, why are we talking?  I want to know what your redness
> knowledge is like,
>

That's incommunicable. You would have to possess my brain/mind to know what
red is like to me, but then you would be me and not Brent, and so you would
be stuck in the same position we are in now, being unable to communicate to
someone with Brent's brain what red is like to someone with Jason's brain.


> you don't seem to care about anything other than all these systems can
> tell you the strawberry is red, and are all turing complete?
>

If that's what you think, then I think you have missed my point. The reason
I bring up the Church-Turing thesis is to convey the implied independence
of a mind's behaviors from the material substrate that implements it. This
substrate independence means any effort to tie the quality of perceptions
to glutamate (or name your molecule/compound) must fail. If you think
otherwise, I can show you how it leads to a contradiction or an absurdity
(like dancing qualia).


>
> In addition to turing completeness, what I am interested in is the
> efficiency by which computation can be accomplished by different models.
> Is the amount of hardware used in one model more than is required in
> another?
>

There is a result from computational complexity theory known as the
"extended Church-Turing thesis," which says: "All reasonable computation
models can simulate each other with only polynomial slowdown." That is to
say, there can be different efficiencies among models, but the differences
are generally not significant.

There is, however, a substantial efficiency difference between quantum
computers and classical computers. Simulating quantum computers on a
classical computer, for some problems, requires an exponential slowdown, to
the point where even if all the matter and energy in the universe were used
to build a classical computer, it would be unable to keep up with a quantum
computer that could fit on top of a table.
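To put a number on that exponential gap, here is a rough back-of-the-envelope sketch (my illustration, not anything from the original exchange): a classical simulation that tracks the full quantum state of n qubits must store 2**n complex amplitudes, so memory and per-gate work grow exponentially with n.

```python
# Cost of brute-force classical simulation of an n-qubit state vector.
def amplitudes(n_qubits):
    """Number of complex amplitudes in an n-qubit state vector."""
    return 2 ** n_qubits

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for the full state vector (two 8-byte floats per amplitude)."""
    return amplitudes(n_qubits) * bytes_per_amplitude

for n in (10, 30, 50, 300):
    print(f"{n:>3} qubits -> {amplitudes(n):.3e} amplitudes, "
          f"{state_vector_bytes(n):.3e} bytes")

# Around 300 qubits the amplitude count (~2e90) already exceeds the
# roughly 10**80 atoms estimated in the observable universe.
```

This is why a tabletop quantum computer can, for some problems, outrun any classical machine we could ever build.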


> The reason there are only a few registers in a CPU, is because of the
> extreme brute force way you must do computational operations like addition
> and comparison when using discrete logic.  It takes far too much hardware
> to have any more than a handful of registers, which can be computationally
> bound to each other at any one time.
>

The way I view it is that a single-threaded CPU spreads a computation out
minimally through space and maximally through time, while a highly
parallel computer or a biological brain spreads the computation out more
through space and less through time. In neither case is the extent in time
or space zero; the computation always has some positive extent across both
dimensions. Thus there is no special "binding" through time, nor across
space, aside from the bindings implied by the logical/computational
operations themselves.
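The trade-off above can be sketched in a few lines (a toy of my own, not anything from the original exchange): the same sum computed sequentially takes ~n time steps on one "processor," while a tree reduction takes ~log2(n) steps using many units at once. Neither spreads the work entirely into one dimension.

```python
# Same computation, spread differently across time and space.
def sequential_sum(xs):
    """One processor, many time steps: time ~ n, space ~ 1."""
    total, steps = 0, 0
    for x in xs:
        total += x
        steps += 1
    return total, steps

def tree_sum(xs):
    """Many processors per step, few steps: time ~ log n, space ~ n."""
    xs, steps = list(xs), 0
    while len(xs) > 1:
        # one "parallel" step: all adjacent pairs combined at once
        xs = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)] + \
             ([xs[-1]] if len(xs) % 2 else [])
        steps += 1
    return xs[0], steps

data = list(range(16))
print(sequential_sum(data))  # (120, 16)
print(tree_sum(data))        # (120, 4)
```

Both produce the same answer; only the shape of the computation in space-time differs.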


> Whereas if knowledge composed of redness and greenness is a standing wave
> in neural tissue EM fields, every last pixel of knowledge can be much more
> efficiently meaningfully bound to all the other pixels in a 3D standing
> wave.  If standing waves require far less hardware to do the same amount of
> parallel computational binding, this is what I'm interested in.  They are
> both turing complete, one is far more efficient than the other.
>

If they're equivalent computationally, then they're equivalent
behaviorally, and therefore they must experience the same qualia (e.g.
redness or greenness), for to believe otherwise is to accept dancing
qualia (being unable to notice or comment on redness and greenness
swapping back and forth in one's own field of vision).


>
> Similarly, in order to achieve substrate independence, like the 3rd system
> in the image,  you need additional dictionaries
> to tell you whether redness or greenness or +5 volts, or anything else is
> representing the binary 1, or the word 'red'.
>

What are these dictionaries? I don't see how it is possible for any
dictionary to specify how a particular quale feels.
Qualia are first-person properties, while dictionaries concern themselves
only with third-person communicable information.


> Virtual machines, capable of running on different lower level hardware,
> are less efficient than machines running on naked hardware.  This is
> because they require the additional translation layer to enable virtual
> operation on different types of hardware.
>

True, but irrelevant.


> The first two systems representing information directly on qualities does
> not require the additional dictionaries required to achieve the substrate
> independence as architected in the 3rd system.  So, again, the first two
> systems are more efficient, since they require less mapping hardware.
>

I am not able to make any sense of the above paragraph.

If I understand your example correctly, the three systems are:
A) the man who conventionally sees red,
B) the man who sees green when light of 700nm strikes his retina, and
C) the robot that maps 700nm light to the state of outputting the string
"red".

Each experiences 700nm light differently; they each have different qualia.

Do we agree so far?

I have no objection to the possibility of this situation. All I say is
that for this situation to exist, for different systems (A, B, and C) to
experience differently, they must process information differently. They
"run different programs," or, you could say, they have different "high-level
functional organizations."

If they ran the same programs, had the same high-level functional
organizations, and processed information equivalently, then they would
necessarily have the same quale for 700nm light. This would be true
whether the functional organization were realized in a computer made of
wooden grooves and marbles, of copper wires and electronics, or of fiber
optic cables and photonics. Each is a computer capable of running the same
program, so each will realize the same mind, exhibiting the same
behaviors.
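Substrate independence of this kind can be demonstrated in miniature (my own sketch, not anything from the original exchange): the same "program" (here, elementary cellular automaton rule 110, which is itself Turing complete) realized on two different "substrates," integer bit-twiddling versus string lookup. Because they implement the same function, their behavior is identical step for step.

```python
# One program, two substrates: rule 110 as arithmetic and as table lookup.
RULE = 110

def step_bits(cells):
    """Rule 110 via arithmetic on a list of 0/1 ints (wrap-around edges)."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 +
                      cells[i] * 2 +
                      cells[(i + 1) % n])) & 1
            for i in range(n)]

# The same rule as a dictionary mapping neighborhood strings to outputs.
LOOKUP = {f"{p:03b}": str((RULE >> p) & 1) for p in range(8)}

def step_strings(cells):
    """Rule 110 via string lookup (wrap-around edges)."""
    n = len(cells)
    return "".join(LOOKUP[cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]]
                   for i in range(n))

a = [0] * 15 + [1]
b = "0" * 15 + "1"
for _ in range(10):
    a, b = step_bits(a), step_strings(b)
    assert "".join(map(str, a)) == b  # identical behavior at every step
```

The two implementations share no representation, yet no experiment on their outputs can tell them apart; that is the sense in which the program, not the substrate, fixes the behavior.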

Jason



>
>
>
>
>
> On Tue, May 3, 2022 at 11:34 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> If you agree with the concept of the Church-Turing Thesis, then you
>> should know that "wave computation" cannot be any more capable than the
>> "discrete logic gate" computation we use in CPUs. All known forms of
>> computation are exactly equivalent in what they can compute. If it can be
>> computed by one type, it can be computed by all types. If it can't be
>> computed by one type, it can't be computed by any type.
>>
>> This discovery has major implications in the philosophy of mind,
>> especially if one rejects the possibility of zombies. It leads directly to
>> multiple realizability, and substrate independence, as Turing noted 72
>> years ago:
>>
>> “The fact that Babbage's Analytical Engine was to be entirely mechanical
>> will help us rid ourselves of a superstition. Importance is often attached
>> to the fact that modern digital computers are electrical, and the nervous
>> system is also electrical. Since Babbage's machine was not electrical, and
>> since all digital computers are in a sense equivalent, we see that this use
>> of electricity cannot be of theoretical importance. [...] If we wish to
>> find such similarities we should look rather for mathematical analogies of
>> function.”
>> -- Alan Turing in Computing Machinery and Intelligence
>> <https://heidelberg.instructure.com/courses/6068/files/190841/download?download_frd=1>
>> (1950)
>>
>>
>> Further, if you reject the plausibility of absent, fading, or dancing
>> qualia, then equivalent computations (regardless of substrate) must be
>> equivalently aware and conscious. To believe otherwise, is to believe your
>> color qualia could start inverting every other second without you being
>> able to comment on it or in any way "notice" that it was happening. You
>> wouldn't be caught off guard, you wouldn't suddenly pause to notice, you
>> wouldn't alert anyone to your condition. This should tell you that behavior
>> and the underlying functions that can drive behavior, must be directly tied
>> to conscious experience in a very direct way.
>>
>> Jason
>>
>> On Tue, May 3, 2022 at 12:11 PM Brent Allsop <brent.allsop at gmail.com>
>> wrote:
>>
>>>
>>> OK, let me see if I am understanding this correctly.  Consider this
>>> image:
>>> [image: 3_robots_tiny.png]
>>>
>>> I would argue that all 3 of these systems are "turing complete", and
>>> that they can all tell you the strawberry is 'red'.
>>> I agree with you on this.
>>> Which brings us to a different point that they would all answer the
>>> question: "What is redness like for you?" differently.
>>> First: "My redness is like your redness."
>>> Second: "My redness is like your greenness."
>>> Third: "I represent knowledge of red things with an abstract word like
>>> "red", I need a definition to know what that means."
>>>
>>> You are focusing on the turing completeness, which I agree with, I'm
>>> just focusing on something different.
>>>
>>>
>>> On Tue, May 3, 2022 at 11:00 AM Jason Resch <jasonresch at gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 3, 2022 at 11:23 AM Brent Allsop via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>> Surely the type of wave computation being done in the brain is far
>>>>> more capable than the discrete logic gates we use in CPUs.
>>>>>
>>>>>
>>>> This comment above suggests to me that you perhaps haven't come to
>>>> terms with the full implications of the Church-Turing Thesis
>>>> <https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis> or the
>>>> stronger Church-Turing-Deutsch Principle
>>>> <https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle>
>>>> .
>>>>
>>>> Jason
>>>>
>>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>