[ExI] A science-religious experience

efc at disroot.org efc at disroot.org
Fri Mar 7 16:03:46 UTC 2025


>       >       Ok. But note that we have no proof. As far as I know, we still do not know. We
>       >       have theories and hunches.
>       >
>       > I am not arguing for what is true, I am only explaining how your statement
>       > that "[consciousness] is a process of moving physical things" is fundamentally
>       > the same as what I mean when I called it an immaterial pattern. The essence of
>       > what it is, is found in the movement, the processes, the relations, the
>       > patterns. (This is the assumption of functionalism, which I don't aim to
>       > prove, I am only showing how functionalism, if accepted, leads to these
>       > conclusions).  
>
>       Got it. Maybe the best way is to settle, for the moment, on consciousness being
>       electrons moving around in a brain? When you say immaterial, I understood
>       it to mean that it is not something physical. But I now understand that is not
>       what you meant. Thank you for the clarification. I think when I say a process of
>       moving physical things, as crude as it might sound, maybe I mean the same thing
>       as you? ;)
> 
> Insofar as we both mean consciousness is a process, then I think we are in
> agreement.

I think so! =)

> "Consciousness, as [William] James pointed out, is a process not a thing."
> — Gerald Edelman in “Consciousness: A Process Not a Thing” (2005)
> 
> I would, however, go further than you, in saying that it need not be composed
> of electrons or atoms or anything of our physical universe.

Yes, this is right. You go further than I do here, and this relates to my view
of reality and my stubborn agnosticism. ;)

>       Well, I think this question is one where we will be able to make some scientific
>       progress, as long as we're not talking qualia or subjective states, which would
>       be outside my scope of empirical evidence. So let's see where we end up on that.
>
>       > No, but you had said that "we have no empirical evidence for any immaterial
>       > pattern." These examples were meant to highlight that we do in fact have
>       > evidence of immaterial patterns. Beethoven's 5th symphony and Moby Dick are
>       > examples.  
>
>       This is based on my misunderstanding of your use of immaterial. Beethoven's 5th
>       consists of symbols on a piece of paper.
> 
> Some scribblings on paper are only one possible physical instantiation of
> Beethoven's 5th. But they are not the same as Beethoven's 5th, which is
> informational in its core essence.
> 
> Consider: identity relationships are transitive: if A = B and A = C, then B = C.
> 
> But that doesn't work here: we can't say the scribblings on paper are
> identical with Beethoven's 5th.
> 
> The reasoning is as follows. Let's say:
> 
> A = Beethoven's 5th
> B = Scribblings of the notes of Beethoven's 5th on paper
> C = An orchestral rendition of Beethoven's 5th
> 
> If there were a true identity, if the scribblings on paper were identical
> to Beethoven's 5th, then A = B. Likewise, if there were an identity between the
> orchestral rendition and Beethoven's 5th, then A = C.
> 
> But then, by the transitive nature of identity, B ought to be identical
> with C, yet the scribblings on paper are not identical with the orchestral
> rendition. B ≠ C.
> 
> Somewhere along the way an error was made. Can you spot it?

I think this is a matter of definition. How would you define B5? As a process?
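Just to make sure I follow the structure of your argument, here is how I read it,
written out schematically (my own paraphrase, using your A, B and C from above):

  1. A = B                          (the score is identical with Beethoven's 5th)
  2. A = C                          (the performance is identical with Beethoven's 5th)
  3. (A = B) and (A = C)  =>  B = C (transitivity of identity)
  4. But B ≠ C                      (the score is not the performance)

So at least one of premises 1 and 2 has to be rejected, and your answer, if I read
you correctly, is that neither the score nor the performance is identical with the
work; each is only one possible physical instantiation of it.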

>       > using philosophical thought experiments). The agnostic who insists on
>       > empirical verification can never have satisfaction on the problem of other
>       > minds.  
>
>       All good with me! =) I don't know, and I can never know, but I still get
>       pleasure out of interacting with you, other humans and animals, be they
>       conscious or not (in the subjective sense).
> 
> Would you feel any more content with some further confidence (provided by
> philosophical thought experiments), knowing there are reasons we can be quite
> sure these other beings are conscious?

Depends on the thought experiment. I might. Try me! ;)

>       > Short of postulating incomputable physical laws (which we have no evidence
>       > for), an appropriately detailed computer emulation of all the molecules in a
>       > human brain would behave identically to a brain running on "native molecules".
>       > The Church-Turing Thesis, so fundamental in computer science, tells us that
>       > the hardware makes no difference in what kinds of programs can be run; the
>       > possible behaviors of all Turing machines are equivalent.
>
>       True. This could be tested. Before then, we can only speculate.
> 
> By speculating here, note that you mean the same thing scientists do when they
> plan how to launch a rocket into orbit. But are rocket scientists "only
> speculating" when they launch a new rocket, or are they merely applying the
> well-understood principles of the laws of gravity, the laws of motion, and
> their universal applicability?

Probably the project of launching a rocket, the first time, contains both
speculation and application, in the form of tests and experiments. Thought
experiments, reasoning etc. can be valuable tools. They can also lead us astray.

> If you would use the word speculation to cover the behavior of the rocket
> scientists in designing a rocket, then you are consistent. As it is, I am
> suggesting no more than what the same laws of physics tell us ought to happen,
> and am assuming only that the laws apply everywhere to all physical things in
> this universe. If that requires speculation, then everything everyone ever does
> is speculation.

I could say speculate, and then those speculations are tested when the rocket
takes off for the first time.

>       > Of course those alternatives are possible. My point all along is that
>       > "starting from functionalism" then "all this" follows.
>
>       Yes, you make a good point that it follows from functionalism. I have a creeping
>       suspicion that we are somewhat in agreement, overall, assuming an equal starting
>       point (not sure about that) and that we have different degrees of belief about
>       the probability of this happening.
> 
> I think there is a way of thinking that you seem less comfortable using than I am. Examples are:
> 
> A mathematician starting from some axioms, and working out a proof that follows logically from them.
> A philosopher starting from a premise and working out the consequences that follow logically from that premise.
> A physicist defining the state of a physical system and working out what the laws of physics predict the future evolution of that
> system to be.
>  
> I think this may be another core difference between us, which seems to relate
> to our different base ideas around the validity or reliability of deductive
> reasoning.

Yes, I think you are right here. Where I feel uncomfortable is when those
examples are bridged to the real world. I am not uncomfortable with the
mathematician solving mathematical problems. When math is used as a helper for
physics, to describe our world, that is where my discomfort sets in.

>       > I was under the impression you were a materialist. The materialist assumption
>       > is that the brain follows physical law. This is the only assumption you need
>       > to make for point #2 to stand. It stands even if you subscribe to
>       > type-physicalism, rather than functionalism.
>
>       It is an assumption. It has not been verified.
> 
> Could we?

Maybe!

>       So based on my current
>       experience, I have never witnessed any resurrections in the strong sense (not
>       talking about freezing/thawing or near-death experiences here). So assuming an
>       all-powerful AI who reassembled every atom and electron of a dead body into some
>       living state, yes, why not? But the advantage is that this would be something
>       happening in the real world, so when it happens I will revise my position.
>
>       Until it happens, I consider death to be death, since I have not yet seen any
>       evidence to the contrary.
> 
> But note that this conclusion contravenes your materialist assumption.

I don't see how that follows.

>       >       If you cannot show me proof of resurrection, resurrection is not possible,
>       >
>       > I think you are making a logical error here. Not showing proof of something,
>       > does not imply something is impossible.
>
>       No, I think the error is avoided by the fact that if you show me resurrection
>       under scientifically controlled circumstances, I will revise my position from
>       impossible to possible.
>
>       Until that experiment, the position "impossible" seems to be confirmed by plenty
>       of observations (apart from the niche cases above).
> 
> I think I may see the problem here.
> 
> I believe you are using the word "possible" to mean "currently technically feasible,"
> whereas I use "possible" to mean "possible in principle" (i.e. nomologically possible).
>
> But please correct me if my interpretation is wrong.

Yes, this might be closer to the truth. Another aspect to keep in mind when I
speak of impossible is that in many cases it does not mean impossible forever.
Then there are of course cases, to complicate matters, where I consider
impossible to be impossible forever, such as our bearded lord reaching out from
the sky. But even in that case I would be willing to reconsider if I saw proof
of it.

>       >       I do have some arguments or concerns which make me not share your definitions
>       >       though.
>       >
>       > Okay, if your only contention is with my premises/definitions, and not with my
>       > reasoning or conclusions, let us settle the definitions and premises first.
>       > Let that be our focus for now.  
>
>       See above. I think we are making progress here.
> 
> Yes, I think you accept the reasoning that follows from the premise.
> 
> Since my intention was not to prove the premise (functionalism), only to show that functionalism justifies a conception of
> consciousness that's not wholly unlike ancient conceptions of the soul, perhaps there is no need to pursue this thread any
> further, unless others on the list want to debate the reasoning, or the premise of functionalism itself.

I think that's a good conclusion, I agree!

>       In terms of zombies, I found the following interesting:
>
>       "Many physicalist philosophers[who?] have argued that this scenario eliminates
>       itself by its description; the basis of a physicalist argument is that the world
>       is defined entirely by physicality; thus, a world that was physically identical
>       would necessarily contain consciousness, as consciousness would necessarily be
>       generated from any set of physical circumstances identical to our own."
>
>       "Another response is the denial of the idea that qualia and related phenomenal
>       notions of the mind are in the first place coherent concepts. Daniel Dennett and
>       others argue that while consciousness and subjective experience exist in some
>       sense, they are not as the zombie argument proponent claims. The experience of
>       pain, for example, is not something that can be stripped off a person's mental
>       life without bringing about any behavioral or physiological differences. Dennett
>       believes that consciousness is a complex series of functions and ideas. If we
>       all can have these experiences the idea of the p-zombie is meaningless. "
>
>       https://en.wikipedia.org/wiki/Philosophical_zombie
> 
> I think this quote, from Dennett, really drives home the problem of zombies:
>
> "Supposing that by an act of stipulative imagination you can remove
> consciousness while leaving all cognitive systems intact […] is like supposing
> that by an act of stipulative imagination, you can remove health while leaving
> all bodily functions and powers intact. If you think you can imagine this, it’s
> only because you are confusedly imagining some health-module that might or
> might not be present in a body. Health isn’t that sort of thing, and neither is
> consciousness."
> — Daniel Dennett in “The Unimagined Preposterousness of Zombies” (1995)

Makes a lot of sense to me.

>       Upon reading a bit I also found an idea called token physicalism. As I said
>       before, I haven't delved deeply into this, so apologies for not setting the entire
>       table and shifting around the cutlery as we speak. ;)
> 
> Token-physicalism is known as non-reductive physicalism or emergent
> materialism. In general, it is far more flexible in the kinds of physical
> systems that could manifest consciousness. It says consciousness is emergent,
> a high-level, rather than a low-level phenomenon. As such, it is usually
> considered compatible with the notion of multiple realizability, which is a
> core notion in functionalism. In short, token physicalism is not incompatible
> with functionalist thinking.

You see... I'm inching along here. ;)

> "An emergent quality is roughly a quality which
> belongs to a complex as a whole and not to its parts.
> Some people hold that life and consciousness are
> emergent qualities of material aggregates of a
> certain kind and degree of complexity."
> — C. D. Broad in “The Mind And Its Place In Nature” (1925)

>       > This, should you be interested, should provide some more background for the
>       > strengths and weaknesses between type-physicalism and its alternatives.
>
>       Ahh... this was the token physicalism from above. Well, I'm afraid I have to be
>       a fence straddler here, while reading a bit. I think in theory this could be
>       investigated further once we gain a better knowledge of how our brain works.
> 
> I think if you studied the field further, you would come to the conclusion
> that we have all the information we need already.

Possibly.

>       > For example, see my section on "Conscious Behaviors" starting on page 59 of
>       > that same document I link above.
>
>       In my case... other minds are an experience I have regardless of whether I want
>       it or not (disregarding here, of course, suicide). So I can just react to the
>       actions I see, and act and watch the reactions. I do not think more is actually needed.
> 
> Can something that lacks consciousness do all of the following things:
>  *  Notice something
>  *  Clear one’s head
>  *  Lose one’s temper
>  *  Pay attention
>  *  Daydream
>  *  Have a favorite flavor of ice cream
>  *  Spit out a bad-tasting food
>  *  Be anesthetized
>  *  Hallucinate
>  *  Give in to torture
>  *  Get and laugh at a joke
>  *  Remember an earlier thought
>  *  Describe how one feels
>  *  Invent a theory of consciousness
> Or do you think there are some behaviors for which a conscious mind is a requirement?

I think, first of all, that we have a bad grasp of what consciousness is. Keeping
that in mind, I think all of the above could be replicated by a machine, in terms
of how it behaves and acts in the world. Some of those would be dependent on
definition as well.

> If there are any, then detecting consciousness can be made into an empirical science.

I am content with using behaviour in the world as a guide to consciousness. What
I am waiting for in the current AI gold rush is volition, goals and
self-preservation.

>       > Well the uploaded worm certainly died (biologically). And was uploaded and
>       > resurrected: https://www.youtube.com/watch?v=2_i1NKPzbjM
>
>       I wouldn't call it resurrected myself, but that is a matter of definition and
>       identity.
> 
> What would be needed for you to conclude it was resurrected? (Or does that
> determination rest on what you consider to be unresolved questions of
> identity?)  

Probably that the biological body started to move again, from the state where it
stopped. I'd say that what was done was that the patterns of the worm were
cloned and replicated in a computer, for now.

As for identity, this is actually an interesting question! Is there an accepted
"line" where we speak of animals with identities, and animals without
identities? Higher animals have preferences, listen to their names to some
extent, can pick up on feelings, etc. Where does that stop? Does a worm have an
identity?

>       > Then we should see mouse brains, cat brains, chimp brains, and human brains. I
>       > see, and am aware of, no reason why this should just fail for any particular
>       > species.  
>
>       Depends on if we hit any barriers we do not know about when it comes to minds,
>       how they are built, the speed of hardware, the nature of computing etc.
> 
> It's possible, but we have seen no evidence of such barriers. So then, ought
> we not dismiss those concerns (as you do for other things for which we have no
> empirical evidence)?

No, because in this case we have a basis for devising empirical tests and
experiments. Along the way we also learn about medicine and computer science,
and could see new breakthroughs in AI.

>       But that's just speculation. We can actually experiment and work towards it, so
>       that's all we need! =)

Yes! To agree with myself here. ;)

Best regards, 
Daniel

