[ExI] Fwd: Is Artificial Life Conscious?

Colin Hales col.hales at gmail.com
Wed Apr 27 02:57:32 UTC 2022


If you know the Lorentz force on the ion in the big toe, the trajectory can
be computed. So what?

If the ion is in the brain, then with a massive molecular dynamics
simulation you can compute the trajectory and be right. So what?
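
For concreteness, here is a minimal sketch (Python) of what "compute the
trajectory" amounts to: integrate F = q(E + v x B). Every number below
(fields, charge, mass, time step) is an illustrative placeholder, not a
measurement of any real ion in a toe or a brain.

import numpy as np

# Minimal sketch: one ion pushed by the Lorentz force F = q(E + v x B),
# advanced with explicit Euler steps. All values are placeholders.
q = 2 * 1.602e-19        # charge of a Ca2+ ion (C)
m = 40.08 * 1.661e-27    # mass of a calcium ion (kg)

def lorentz_step(r, v, E, B, dt):
    """Advance position r and velocity v by one Euler step."""
    a = (q / m) * (E + np.cross(v, B))
    return r + v * dt, v + a * dt

# Toy uniform fields; a real tissue model would supply E(r, t) and B(r, t).
E = np.array([0.0, 0.0, 1.0e3])     # V/m
B = np.array([0.0, 0.0, 1.0e-4])    # T

r = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])       # m/s
for _ in range(1000):
    r, v = lorentz_step(r, v, E, B, dt=1e-9)
print(r, v)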

I am talking about the behavioural dynamics of a complete field system,
which is a massive summation of the fields impressed on space by
billions of charges. But even if you did that ... so what?
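
And to be concrete about "massive summation": the net field at any point
is the superposition of the contributions of every charge. A quasi-static
sketch only (random placeholder charges, not brain data; the real case is
fully dynamic, with induction and retardation):

import numpy as np

# Sketch: electrostatic (Coulomb) superposition of many point charges.
# Positions and charges are random placeholders, not brain data.
k = 8.988e9                     # Coulomb constant (N m^2 / C^2)
rng = np.random.default_rng(0)

n = 100_000                                        # stand-in for "billions"
positions = rng.uniform(-1e-3, 1e-3, (n, 3))       # charge positions (m)
charges = rng.choice([-1.602e-19, 1.602e-19], n)   # +/- elementary charges (C)

def total_E(point):
    """Superpose the Coulomb field of every charge at `point`."""
    d = point - positions                   # displacements, source -> point
    r3 = np.linalg.norm(d, axis=1) ** 3     # |r|^3 per charge
    return k * np.sum((charges / r3)[:, None] * d, axis=0)

print(total_E(np.array([0.0, 0.0, 2e-3])))  # field at a test point (V/m)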

I am talking about the subjective view of 'being' that total field
system, which is what is created in a brain. To explore this you have to
physically replicate the field system. The standard model of particle
physics (Maxwell's equations included) has ZERO content on the 1PP. How
this information content is erected is a mystery. To explore it you
build the fields; you don't simulate their outward appearance.





On Wed, Apr 27, 2022, 12:36 PM Stathis Papaioannou <stathisp at gmail.com>
wrote:

>
>
> On Wed, 27 Apr 2022 at 12:15, Colin Hales <col.hales at gmail.com> wrote:
>
>>
>>
>> On Wed, Apr 27, 2022, 11:27 AM Stathis Papaioannou <stathisp at gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Wed, 27 Apr 2022 at 11:08, Colin Hales <col.hales at gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Apr 27, 2022, 10:55 AM Stathis Papaioannou <
>>>> stathisp at gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, 27 Apr 2022 at 09:18, Colin Hales <col.hales at gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Apr 27, 2022 at 7:55 AM Stathis Papaioannou <
>>>>>> stathisp at gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, 27 Apr 2022 at 06:27, Colin Hales via extropy-chat <
>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Apr 26, 2022 at 10:14 PM Jason Resch via extropy-chat <
>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Apr 26, 2022, 1:53 AM Colin Hales <col.hales at gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Apr 26, 2022 at 2:13 PM Jason Resch <jasonresch at gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Apr 25, 2022 at 11:09 PM Colin Hales <
>>>>>>>>>>> col.hales at gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Apr 26, 2022 at 2:01 PM Jason Resch <
>>>>>>>>>>>> jasonresch at gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Apr 25, 2022 at 10:54 PM Colin Hales via extropy-chat <
>>>>>>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Apr 26, 2022 at 1:02 PM Rafal Smigrodzki via
>>>>>>>>>>>>>> extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ### I would be very surprised if the functional capabilities
>>>>>>>>>>>>>>> of brains turned out to be impossible to replicate in digital,
>>>>>>>>>>>>>>> Turing-equivalent computers.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Rafal
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Wouldn't it be great to actually do some empirical science to
>>>>>>>>>>>>>> find out? Like start acting as if it was true (impossible) and start
>>>>>>>>>>>>>> building artificial inorganic brain  tissue that is NOT a general-purpose
>>>>>>>>>>>>>> computer (that artificial tissue would also have functionally relevant EEG
>>>>>>>>>>>>>> and MEG), and then comparing its behaviour with the general-purpose
>>>>>>>>>>>>>> computer's model of the same tissue?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> It looks like this work is in the process of being done:
>>>>>>>>>>>>> https://www.youtube.com/watch?v=ldXEuUVkDuw
>>>>>>>>>>>>>
>>>>>>>>>>>>> Jason
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Not even close. Can you see what just happened? There's a
>>>>>>>>>>>> general purpose computer and software involved.  The game ends right there!
>>>>>>>>>>>> Did you not read what I wrote?
>>>>>>>>>>>>
>>>>>>>>>>>> To build an artificial version of natural tissue is not to
>>>>>>>>>>>> simulate anything. You build the EM field system literally. The use of
>>>>>>>>>>>> computers is a design tool, not the end product. The chips that do this
>>>>>>>>>>>> would be 3D and have an EEG and MEG like brain tissue. No computers. No
>>>>>>>>>>>> software.
>>>>>>>>>>>>
>>>>>>>>>>>> The game has changed!
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>> What if the computer simulation includes the EM fields?
>>>>>>>>>>>
>>>>>>>>>>> Would that be sufficient to make a  conscious program?
>>>>>>>>>>>
>>>>>>>>>>> If not, do you predict the computer simulation including the EM
>>>>>>>>>>> fields would diverge in behavior from the actual brain?
>>>>>>>>>>>
>>>>>>>>>>> Jason
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *This is exactly the right question!*
>>>>>>>>>>
>>>>>>>>>> To find out you have to do it. You do not know. I think I know,
>>>>>>>>>> but I can't claim to have proof because nobody has done the experiment yet.
>>>>>>>>>> My experimental work is at the beginning of testing a hypothesis that the
>>>>>>>>>> real EM field dynamics and the simulation's dynamics will not track, and
>>>>>>>>>> that the difference will be the non-computable aspect of brains.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I commend you and your work for challenging base assumptions. Such
>>>>>>>>> work is always needed in science for progress to be made.
>>>>>>>>>
>>>>>>>>> The difference, I predict, will be in how the devices relate to
>>>>>>>>>> the external world, which is something that cannot be in any model because
>>>>>>>>>> it is precisely when the external world is unknown (that nobody can
>>>>>>>>>> program) that you are interested in its response (that forms the test
>>>>>>>>>> context of interest). In the end it is about the symbol grounding problem.
>>>>>>>>>> I have a paper in review (2nd round) at the moment, in which I describe it
>>>>>>>>>> this way:
>>>>>>>>>> ----------------
>>>>>>>>>> The creation of chip materials able to express EM fields
>>>>>>>>>> structurally identical to those produced by neurons can be used
>>>>>>>>>> to construct artificial neurons that replicate neuron signal processing
>>>>>>>>>> through allowing the actual, natural EM fields to naturally interact in the
>>>>>>>>>> manner they do in the brain, thereby replicating the same kind of
>>>>>>>>>> signalling and signal processing (computation). This kind of in-silico
>>>>>>>>>> empirical approach is simply missing from the science. No instances of
>>>>>>>>>> in-silico-equivalent EM field replication can be found. Artificial neurons
>>>>>>>>>> created this way could help in understanding EM field expression by
>>>>>>>>>> excitable cell tissue. It would also facilitate a novel way to test
>>>>>>>>>> hypotheses in-silico.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> What is the easiest way to test this theory of EMs role in
>>>>>>>>> consciousness or intelligence?
>>>>>>>>>
>>>>>>>>> Would you consider the creation of an artificial neural network
>>>>>>>>> that exhibits intelligent or novel behavior to be a disproof of this EM
>>>>>>>>> theory?
>>>>>>>>>
>>>>>>>>> Neuroscience and physics, together, could embark on such a
>>>>>>>>>> development. It would help us reveal the neural dynamics and signal
>>>>>>>>>> processing that is unknowingly not captured by the familiar models that
>>>>>>>>>> abstract-away EM fields and that currently dominate computational
>>>>>>>>>> neuroscience. *Note that the computational exploration of the EM
>>>>>>>>>> fields (via Maxwell’s equations) impressed on space by the novel chip would
>>>>>>>>>> constitute the design phase of the chip. The design would be sent to a
>>>>>>>>>> foundry to build. What comes back from the foundry would express the EM
>>>>>>>>>> fields themselves. The empirical method would be, to neuroscience, what the
>>>>>>>>>> Wright Brothers construction of flying craft did for artificial flight.*
>>>>>>>>>>
>>>>>>>>>> -----------------
>>>>>>>>>> The flight analogy is a strong one. Simulation of flight physics
>>>>>>>>>> is not flight.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I see this argument a lot but I think it ignores the all important
>>>>>>>>> role of the perspective in question.
>>>>>>>>>
>>>>>>>>> For a being in the simulation of flight, it is flight. If we
>>>>>>>>> include an observer in the simulation of a rainstorm, they will get wet.
>>>>>>>>>
>>>>>>>>> That our hypothetical simulators see only a computer humming along
>>>>>>>>> and no water leaking out of their computer says nothing of the experiences
>>>>>>>>> and goings-on for the perspective inside the simulation.
>>>>>>>>>
>>>>>>>>> As consciousness is all about perspectives and inside views,
>>>>>>>>> changing the view to focus on the outside is potentially misleading. I
>>>>>>>>> could equivalently say, "A person dreaming of a sunrise sees a yellow sun
>>>>>>>>> full of brilliant light, but the room is still pitch dark!" But the
>>>>>>>>> darkness of the room doesn't tell me anything about whatever experiences
>>>>>>>>> the dreaming brain could be having.
>>>>>>>>>
>>>>>>>>> I think it's the same with computer simulations. There's an
>>>>>>>>> external view and an internal view. Each tells very little about the other.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I predict that in exactly the same way, in the appropriate context
>>>>>>>>>> (novelty), that a simulation of 'braining' will not be a brain (in a manner
>>>>>>>>>> to be discovered). The reason, I predict, is that the information content
>>>>>>>>>> in the EM field is far larger than anything found in the peripheral
>>>>>>>>>> measurement signals hooked to it. The chip that does the fields, I predict,
>>>>>>>>>> will handle novelty in a way that parts company with the simulation that
>>>>>>>>>> designed the chip. The chip's behaviour (choices) will be different to the
>>>>>>>>>> simulation.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I do think that given the chaotic nature of a large and highly
>>>>>>>>> complex system where small changes can be magnified, anything not modeled
>>>>>>>>> in a brain simulation can lead to divergent behaviors. The question to me
>>>>>>>>> is how important those unsimulated aspects are to the fidelity of the mind.
>>>>>>>>> Is it all important to the extent the mind is inoperable without including
>>>>>>>>> it, or is it something that makes a difference in behavior only after a
>>>>>>>>> significantly long run? (Or is it something in between those two extremes?)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> The grand assumption of equivalence of "brain" and "computed
>>>>>>>>>> model of brain" is and has only ever been an assumption, and the testing
>>>>>>>>>> that determines the equivalence has never been done.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I agree with you that it should be.
>>>>>>>>>
>>>>>>>>> You do not find out by assuming the equivalence and never actually
>>>>>>>>>> testing it with a proper control (null hypothesis). Especially when the
>>>>>>>>>> very thing that defines the grand failure of AI is when it encounters
>>>>>>>>>> novelty ... which is exactly what has been happening for 65 years non-stop.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I am not sure I would say AI has failed here.
>>>>>>>>>
>>>>>>>>> Take for example, my AI bots. They encountered novelty when I
>>>>>>>>> changed their environment repeatedly, and each time they responded by
>>>>>>>>> developing new more optimum strategies to cope with those changes.
>>>>>>>>>
>>>>>>>>> Jason
>>>>>>>>> ____________
>>>>>>>>
>>>>>>>>
>>>>>>>> If  you read the article I posted, you will find the state of
>>>>>>>> affairs that you are projecting into simulation is called 'magical' or
>>>>>>>> radical/strong emergence. Proving true/false what you are projecting into
>>>>>>>> simulation is precisely the final outcome that a proper science of EM-based
>>>>>>>> subjectivity will sort out. My prediction is that none of the things you
>>>>>>>> expect will happen because the EM fields are organized incorrectly. They
>>>>>>>> are organized in the form of a computer. Everything that the brain does to
>>>>>>>> organize subjectivity is lost.
>>>>>>>>
>>>>>>>> So I guess we'll just have to wait till the science gets done
>>>>>>>> properly. Only then will you know whether any of your expectations are
>>>>>>>> valid.
>>>>>>>>
>>>>>>>
>>>>>>> It’s a given that the computer is not the same as whatever it is
>>>>>>> simulating, but are you saying that the effect of EM fields on matter
>>>>>>> cannot be simulated?
>>>>>>>
>>>>>>>> --
>>>>>>> Stathis Papaioannou
>>>>>>>
>>>>>>
>>>>>> In the extraordinary context of a brain, which the standard model of
>>>>>> particle physics tells us is 100% EM field from the atomic level up ... is
>>>>>> where we have the only instance of a proved 1st-person perspective. That
>>>>>> is, 'being' the particular EM field system that literally 'is' a brain,
>>>>>> involves contact with information content of the EM field system (the
>>>>>> 1st-person perspective itself) that appears to have a vast amount of innate
>>>>>> information in it that involves the external world. That information
>>>>>> content, or access to it, *is not in any of Maxwell's equations *and
>>>>>> is degenerately (non-uniquely, irresolvably) related to any of the
>>>>>> physical input/output signals. So ... No: Not Turing Computable. To explore
>>>>>> it you need to replicate the physics itself to test this as a hypothesis,
>>>>>> not simulate. The test context that sorts it out is the very context where
>>>>>> all AI/AGI fails: where the system encounters something it has never
>>>>>> encountered before. In that state, the computer simulation and the hardware
>>>>>> (field) replication would part company in interesting ways.
>>>>>>
>>>>>> The only way Maxwell's equations could somehow deliver all the
>>>>>> information is if you simulate the entire external world as well, which you
>>>>>> can't because you don't have all the information.
>>>>>>
>>>>>> Overall....I am saying that the science that would prove what
>>>>>> everybody is assuming for 65 years involves using hardware that is not a
>>>>>> general purpose computer. AGI's future is critically dependent on that
>>>>>> science being done and it has not been done.
>>>>>>
>>>>>> Does that make sense?
>>>>>>
>>>>>
>>>>> So are you saying that it would be possible in theory to calculate the
>>>>> trajectory of calcium atoms in a lump of marble but not calcium atoms in my
>>>>> big toe?
>>>>>
>>>>>> --
>>>>> Stathis Papaioannou
>>>>>
>>>>
>>>>
>>>> Not at all. The marble and the big toe are EM field systems. But
>>>>
>>>> 1) they are not organized the way a brain's EM field is organized
>>>> 2) unlike the brain there is no 1st person perspective for either
>>>> marble or toe (the toe's apparent 1PP is projected by the brain onto the
>>>> toe).
>>>>
>>>> The ions are not the field system. They generate a field system and
>>>> carry it around.
>>>>
>>>
>>> A human can communicate with their big toe, so if it is possible to
>>> calculate the trajectory of calcium atoms in the bones of the big toe, it
>>> is possible to replicate human intelligence. We don’t need to say that the
>>> toe is conscious, we just need to know how the calcium atoms in the distal
>>> phalanx of the big toe move. Are you saying that the forces on those atoms
>>> are fundamentally different from the forces on calcium atoms elsewhere?
>>>
>>>> --
>>> Stathis Papaioannou
>>>
>>
>> No. What you are saying is both right and irrelevant. Brains use
>> information encoded in the field system, which is degenerately related to
>> the position of ions. Stop talking about ion positions and start talking
>> about "what it is like to BE ions". If you cannot see the problem space,
>> then this discussion cannot progress anywhere.
>>
>> The information content in the total, emergent field system (not just
>> their ionic charge source locations) is what I am talking about.
>>
>
> I am not clear from what you said whether you think it is possible, in
> theory, to calculate the trajectory of the calcium atoms in the tip of a
> human big toe.
>
>
> --
> Stathis Papaioannou
>
>
>