[ExI] Fwd: Is Artificial Life Conscious?

Stathis Papaioannou stathisp at gmail.com
Tue Apr 26 21:55:10 UTC 2022


On Wed, 27 Apr 2022 at 06:27, Colin Hales via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Tue, Apr 26, 2022 at 10:14 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Tue, Apr 26, 2022, 1:53 AM Colin Hales <col.hales at gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Apr 26, 2022 at 2:13 PM Jason Resch <jasonresch at gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, Apr 25, 2022 at 11:09 PM Colin Hales <col.hales at gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Apr 26, 2022 at 2:01 PM Jason Resch <jasonresch at gmail.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Apr 25, 2022 at 10:54 PM Colin Hales via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Apr 26, 2022 at 1:02 PM Rafal Smigrodzki via extropy-chat <
>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> ### I would be very surprised if the functional capabilities of
>>>>>>>> brains turned out to be impossible to replicate in digital,
>>>>>>>> Turing-equivalent computers.
>>>>>>>>
>>>>>>>> Rafal
>>>>>>>>
>>>>>>>
>>>>>>> Wouldn't it be great to actually do some empirical science to find
>>>>>>> out? Like start acting as if it were true (impossible), start building
>>>>>>> artificial inorganic brain tissue that is NOT a general-purpose computer
>>>>>>> (artificial tissue that would also have functionally relevant EEG and MEG),
>>>>>>> and then compare its behaviour with the general-purpose computer's model
>>>>>>> of the same tissue?
>>>>>>>
>>>>>>>
>>>>>> It looks like this work is in the process of being done:
>>>>>> https://www.youtube.com/watch?v=ldXEuUVkDuw
>>>>>>
>>>>>> Jason
>>>>>>
>>>>>
>>>>> Not even close. Can you see what just happened? There's a general-purpose
>>>>> computer and software involved. The game ends right there! Did you
>>>>> not read what I wrote?
>>>>>
>>>>> To build an artificial version of natural tissue is not to simulate
>>>>> anything. You build the EM field system literally. Computers are used as
>>>>> a design tool, not as the end product. The chips that do this would be 3D
>>>>> and have an EEG and MEG like brain tissue. No computers. No software.
>>>>>
>>>>> The game has changed!
>>>>>
>>>>>
>>>> What if the computer simulation includes the EM fields?
>>>>
>>>> Would that be sufficient to make a  conscious program?
>>>>
>>>> If not, do you predict the computer simulation including the EM fields
>>>> would diverge in behavior from the actual brain?
>>>>
>>>> Jason
>>>>
>>>
>>> *This is exactly the right question!*
>>>
>>> To find out you have to do it. You do not know. I think I know, but I
>>> can't claim to have proof because nobody has done the experiment yet. My
>>> experimental work is at the beginning of testing a hypothesis that the real
>>> EM field dynamics and the simulation's dynamics will not track, and that
>>> the difference will be the non-computable aspect of brains.
>>>
>>
>> I commend you and your work for challenging base assumptions. Such work
>> is always needed in science for progress to be made.
>>
>>> The difference, I predict, will be in how the devices relate to the
>>> external world, which is something that cannot be captured in any model,
>>> because it is precisely when the external world is unknown (which nobody
>>> can program) that its response is of interest (that forms the test context
>>> of interest). In the end it is about the symbol grounding problem. I have a
>>> paper in review (2nd round) at the moment, in which I describe it this way:
>>> ----------------
>>> Chip materials able to express EM fields structurally identical to those
>>> produced by neurons could be used to construct artificial neurons that
>>> replicate neural signal processing, by allowing the actual EM fields to
>>> interact in the manner they do in the brain and thereby replicating the
>>> same kind of signalling and signal processing (computation). This kind of
>>> in-silico empirical approach is simply missing from the science. No
>>> instances of in-silico-equivalent EM field replication can be found.
>>> Artificial neurons created this way could help in understanding EM field
>>> expression by excitable cell tissue. It would also facilitate a novel
>>> way to test hypotheses in-silico.
>>>
>>
>> What is the easiest way to test this theory of the EM field's role in
>> consciousness or intelligence?
>>
>> Would you consider the creation of an artificial neural network that
>> exhibits intelligent or novel behavior to be a disproof of this EM theory?
>>
>>> Neuroscience and physics, together, could embark on such a development.
>>> It would help us reveal the neural dynamics and signal processing that are
>>> unknowingly not captured by the familiar models that abstract away EM
>>> fields and that currently dominate computational neuroscience. *Note
>>> that the computational exploration of the EM fields (via Maxwell’s
>>> equations) impressed on space by the novel chip would constitute the
>>> design phase of the chip. The design would be sent to a foundry to build.
>>> What comes back from the foundry would express the EM fields themselves.
>>> The empirical method would be, to neuroscience, what the Wright Brothers'
>>> construction of flying craft was for artificial flight.*
>>> -----------------
>>> The flight analogy is a strong one. Simulation of flight physics is not
>>> flight.
>>>
>>
>> I see this argument a lot, but I think it ignores the all-important role
>> of the perspective in question.
>>
>> For a being in the simulation of flight, it is flight. If we include an
>> observer in the simulation of a rainstorm, they will get wet.
>>
>> That our hypothetical simulators see only a computer humming along and no
>> water leaking out of their computer says nothing of the experiences and
>> goings-on for the perspective inside the simulation.
>>
>> As consciousness is all about perspectives and inside views, changing the
>> view to focus on the outside is potentially misleading. I could
>> equivalently say, "A person dreaming of a sunrise sees a yellow sun full of
>> brilliant light, but the room is still pitch dark!" But the darkness of the
>> room doesn't tell me anything about whatever experiences the dreaming brain
>> could be having.
>>
>> I think it's the same with computer simulations. There's an external view
>> and an internal view. Each tells very little about the other.
>>
>>
>>> I predict that in exactly the same way, in the appropriate context
>>> (novelty), a simulation of 'braining' will not be a brain (in a manner
>>> to be discovered). The reason, I predict, is that the information content
>>> in the EM field is far larger than anything found in the peripheral
>>> measurement signals hooked to it. The chip that does the fields, I predict,
>>> will handle novelty in a way that parts company with the simulation that
>>> designed the chip. The chip's behaviour (choices) will be different to the
>>> simulation.
>>>
>>
>> I do think that given the chaotic nature of a large and highly complex
>> system, where small changes can be magnified, anything not modeled in a
>> brain simulation can lead to divergent behaviors. The question to me is how
>> important those unsimulated aspects are to the fidelity of the mind. Is it
>> so important that the mind is inoperable without including it, or
>> is it something that makes a difference in behavior only after a
>> significantly long run? (Or is it something in between those two extremes?)
>>
>>
>>> The grand assumption of equivalence of "brain" and "computed model of
>>> brain" is and has only ever been an assumption, and the testing that
>>> determines the equivalence has never been done.
>>>
>>
>> I agree with you that it should be.
>>
>>> You do not find out by assuming the equivalence and never actually
>>> testing it with a proper control (null hypothesis), especially when the
>>> very thing that defines the grand failure of AI is its encounters with
>>> novelty ... which is exactly what has been happening for 65 years non-stop.
>>>
>>
>> I am not sure I would say AI has failed here.
>>
>> Take, for example, my AI bots. They encountered novelty when I changed
>> their environment repeatedly, and each time they responded by developing
>> new, more optimal strategies to cope with those changes.
>>
>> Jason
>
>
> If you read the article I posted, you will find the state of affairs that
> you are projecting into simulation is called 'magical' or radical/strong
> emergence. Proving true/false what you are projecting into simulation is
> precisely the final outcome that a proper science of EM-based subjectivity
> will sort out. My prediction is that none of the things you expect will
> happen because the EM fields are organized incorrectly. They are organized
> in the form of a computer. Everything that the brain does to organize
> subjectivity is lost.
>
> So I guess we'll just have to wait till the science gets done properly.
> Only then will you know whether any of your expectations are valid.
>

It’s a given that the computer is not the same as whatever it is
simulating, but are you saying that the effect of EM fields on matter
cannot be simulated?
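
That EM field dynamics can be computed at all is not in dispute: Maxwell's equations are routinely integrated numerically, and the thread itself notes this would form the chip's design phase. As a minimal illustrative sketch only (the grid size, Courant coefficient, and Gaussian source below are my own assumptions, not anything described in the thread), a 1D finite-difference time-domain (FDTD) update derived from the two Maxwell curl equations looks like this:

```python
# Minimal 1D FDTD sketch: leapfrog updates of E_z and H_y derived from
# Maxwell's curl equations on a staggered (Yee) grid. All parameters are
# illustrative assumptions.
import math

N = 200           # spatial cells
STEPS = 300       # time steps
ez = [0.0] * N    # electric field E_z samples
hy = [0.0] * N    # magnetic field H_y samples

for t in range(STEPS):
    # Update E from the spatial difference (curl) of H; the 0.5 factor
    # folds in a Courant number of 0.5 for stability.
    for k in range(1, N):
        ez[k] += 0.5 * (hy[k - 1] - hy[k])
    # Inject a Gaussian pulse source at the grid centre.
    ez[N // 2] += math.exp(-(((t - 40.0) / 12.0) ** 2))
    # Update H from the spatial difference (curl) of E.
    for k in range(N - 1):
        hy[k] += 0.5 * (ez[k] - ez[k + 1])

# The injected pulse propagates outward as two travelling waves.
peak = max(abs(v) for v in ez)
print(f"peak |E_z| after {STEPS} steps: {peak:.3f}")
```

The leapfrog staggering (E updated from H, then H from the just-updated E) is what makes the scheme a faithful discretisation of the coupled curl equations. The open question in the thread is not whether such computation is possible, but whether the computed field and a physically instantiated field would behave identically under novelty.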

--
Stathis Papaioannou