[ExI] A science-religious experience

Jason Resch jasonresch at gmail.com
Fri Mar 7 17:48:54 UTC 2025


I think we have largely reached a conclusion on all topics raised in this
thread. I'll just leave a few responses below to some new things you raised.

On Fri, Mar 7, 2025 at 11:06 AM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> > Insofar as we both mean consciousness is a process, then I think we are
> > in agreement.
>
> I think so! =)
>



>
> > "Consciousness, as [William] James pointed out, is a process not a
> thing."
> > — Gerald Edelman in “Consciousness: A Process Not a Thing” (2005)
> >
> > I would, however, go further than you, in saying that it need not be
> > composed of electrons or atoms or anything of our physical universe.
>
> Yes, this is right. You go further than I do here, and this relates to my
> view of reality and my stubborn agnosticism. ;)
>



>
> >
> > Consider: identity relationships are transitive: if A = B and A = C,
> > then B = C.
> >
> > But that doesn't work here: we can't say the scribblings on paper are
> > identical with Beethoven's 5th.
> >
> > The reasoning is as follows. Let's say:
> >
> > A = Beethoven's 5th
> > B = Scribblings of the notes of Beethoven's 5th on paper
> > C = An orchestral rendition of Beethoven's 5th
> >
> > If there were a true identity, if the scribblings on paper are identical
> > to Beethoven's 5th, then A = B. Likewise, if there is an identity between
> > the orchestral rendition and Beethoven's 5th, then A = C.
> >
> > But then, by the transitive nature of identity, B ought to be identical
> > with C, yet the scribblings on paper are not identical with the
> > orchestral rendition. B ≠ C.
> >
> > Somewhere along the way an error was made. Can you spot it?
>
> I think this is a matter of definition. How would you define B5? As a
> process?
>

I would define Beethoven's 5th as a particular mathematical structure,
isomorphically present in all its various manifestations (as sheet music,
as live performances, as various numeric or alphanumeric lists of notes, in
the particular patterns of holes in player piano rolls, etc.). This
structure, as a mathematical pattern, is abstract, informational, and
immaterial. The isomorphism common to all the various manifestations allows
us to recognize what is the same between them, but there is no identity
between the structure to which they are all isomorphic and each of its
various manifestations. The sheet music ≠ the orchestral performance ≠ the
piano roll. So we cannot make an identity between any of those
manifestations and the abstract mathematical pattern; the abstract pattern
is its own unique "thing", not identical with any of its isomorphic
manifestations.
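
To make this concrete, here is a toy sketch in Python (my own illustration;
the note values and helper names are only illustrative, not drawn from
anything above). Two distinct encodings of the same four-note motif are not
identical to each other, yet both map onto one and the same abstract
pattern:

    # Two "manifestations" of the same four-note motif.
    # Manifestation B: note names, as sheet music might record them.
    sheet_music = ["G4", "G4", "G4", "Eb4"]

    # Manifestation C: MIDI note numbers, as a piano roll might encode them.
    piano_roll = [67, 67, 67, 63]

    NOTE_TO_MIDI = {"G4": 67, "Eb4": 63}

    def canonical(notes):
        """Map any manifestation onto one abstract structure (MIDI numbers)."""
        return [NOTE_TO_MIDI[n] if isinstance(n, str) else n for n in notes]

    # The manifestations are not identical to each other (B != C)...
    assert sheet_music != piano_roll
    # ...yet both are isomorphic to the same abstract pattern (A).
    assert canonical(sheet_music) == canonical(piano_roll)

The canonical() function plays the role of the isomorphism: it is what lets
us recognize the same structure across the different media, without making
any of the media identical to one another.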




>
> >       > using philosophical thought experiments). The agnostic who insists
> >       > on empirical verification can never have satisfaction on the
> >       > problem of other minds.
> >
> >       All good with me! =) I don't know, and I can never know, but I still
> >       get pleasure out of interacting with you, other humans and animals,
> >       be they conscious or not (in the subjective sense).
> >
> > Would you feel any more content with some further confidence (provided by
> > philosophical thought experiments) to know there are reasons we can be
> > quite sure these other beings are conscious?
>
> Depends on the thought experiment. I might. Try me! ;)
>

I lay them all out starting on page 20 of section 3.5:
https://drive.google.com/drive/folders/1-SMVWgQFfImXNRRuuB9kQwhgxPLAwxYL

In brief, the thought experiments I cover are:

   1. A "Consciousness" gene (page 26)
   2. Philosophical Zombies (page 32)
   3. Zombie Earth (page 38)
   4. Lying Zombies (page 46)
   5. A Mental Lockbox (page 51)
   6. *Conscious Behaviors (page 59)*
   7. *The Argonov Test (page 73)*
   8. *Consciousness and Intelligence (page 81)*
   9. Reflective Zombies (page 88)
   10. Mary's Room (page 102)
   11. *Neural Substitution (page 135)*
   12. *Fading Qualia (page 151)*
   13. *Inverted Spectrum (page 182)*
   14. *Dancing Qualia (page 189)*
   15. *Hemispheric Replacement (page 198)*

Thought experiments 1-5 rule out zombies, and this is shown to rule out
epiphenomenalism.

Thought experiments 6-8 establish by what means we can test for and verify
the presence of consciousness.

Thought experiments 11-15, in my view, more or less conclusively establish
functionalism as the only workable theory of consciousness.




>
> Probably the project of launching a rocket, the first time, contains both
> speculation and application in the form of tests and experiments. Thought
> experiments, reasoning etc. can be valuable tools. They can also lead us
> astray.
>

An unforeseen defect might, of course, cause it to explode or fail
mid-flight, but the general laws of physics (gravity, thermodynamics, and so
on) enable the engineers and rocket scientists to compute exactly, for
example, how much fuel the rocket should need to get into orbit, or to get
to the moon.
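
As a concrete (and purely illustrative) sketch of the kind of computation I
mean, here is the Tsiolkovsky rocket equation in a few lines of Python; the
delta-v and exhaust-velocity figures are rough, typical values, not taken
from any particular vehicle:

    import math

    def propellant_fraction(delta_v, exhaust_velocity):
        """Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1).
        Returns the fraction of the initial mass that must be propellant."""
        mass_ratio = math.exp(delta_v / exhaust_velocity)  # m0 / m1
        return 1.0 - 1.0 / mass_ratio

    # Rough figures: ~9,400 m/s of delta-v to reach low Earth orbit
    # (including gravity and drag losses), ~3,000 m/s effective exhaust
    # velocity for a kerosene/LOX engine.
    print(propellant_fraction(9400.0, 3000.0))  # ~0.96, i.e. ~96% propellant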

Consider, for example, that Boeing's 777 aircraft was designed entirely on
a computer. There were no physical mock-ups or intermediate prototypes built
to test and revise things along the way. The plane went straight from its
design (based entirely on models and simulations grounded in our
understanding of physical laws) into production of the millions of parts
that would all need to fit and work together. And it turned out that when
all those parts were assembled for the first time, the result was a working
aircraft that had the range, flight speed, and other characteristics that
had been predicted. Such is the state of our understanding of physics, and
the confidence we have in using those models to make predictions.


>
> > If you would use the word speculation to cover the behavior of the rocket
> > scientists in designing a rocket, then you are consistent. I am suggesting
> > no more than what the same laws of physics tell us ought to happen, and am
> > assuming only that the laws apply everywhere to all physical things in this
> > universe. If that requires speculation, then everything everyone ever does
> > is speculation.
>
> I could say speculate, and then those speculations are tested, when the
> rocket takes off the first time.
>

Do you "only speculate" that the sun will rise tomorrow?
To me, "speculate" indicates a degree of uncertainty that I don't think
fits for the situation I am discussing.


>
> > I think this may be another core difference between us, which seems to
> > relate to our different base ideas around the validity or reliability of
> > deductive reasoning.
>
> Yes, I think you are right here. Where I feel uncomfortable is when those
> examples are bridged to the real world. I am not uncomfortable with the
> mathematician solving mathematical problems. When math is used as a helper
> for physics, to describe our world, that is where my discomfort sets in.
>

I wonder how comfortable the first test pilot to take off in the entirely
untested 777 felt. ;-)


>
> >       > I was under the impression you were a materialist. The materialist
> >       > assumption is that the brain follows physical law. This is the only
> >       > assumption you need to make for point #2 to stand. It stands even
> >       > if you subscribe to type-physicalism, rather than functionalism.
> >
> >       It is an assumption. It has not been verified.
> >
> > Could we?
>
> Maybe!
>
> >       So based on my current experience, I have never witnessed any
> >       resurrections in the strong sense (not talking about freezing/thawing
> >       or near death experiences here). So assuming an all powerful AI who
> >       reassembled every atom and electron of a dead body in some live
> >       state, yes, why not? But the advantage is that this would be
> >       something happening in the real world, so when it happens I will
> >       revise my position.
> >
> >       Until it happens, I consider death to be death, since I have not yet
> >       seen any evidence to the contrary.
> >
> > But note that this conclusion contravenes your materialist assumption.
>
> I don't see how that follows.
>

The materialist assumes the brain is a physical object that operates
according to physical laws.
Materialists further believe in the concept of experimental reproducibility:
material systems arranged to be in the same state will evolve in the same
way over time.
These two assumptions together imply that restoring a dead brain to the
state it was in when it was alive will result in the brain resuming its
function as it was when it was alive.
Of course, we might run the experiment some day and find that for some
reason it doesn't work, but that would refute materialism.


>
> > I think I may see the problem here.
> >
> > I believe you are using the word "possible" to mean "currently technically
> > feasible." Whereas I use "possible" to mean "possible in principle" (i.e.,
> > nomologically possible).
> >
> > But please correct me if my interpretation is wrong.
>
> Yes, this might be closer to the truth. Another aspect to keep in mind when
> I speak of impossible is that it does not mean impossible forever in many
> cases. Then there are of course cases, to complicate matters, where I
> consider impossible to be impossible forever, such as our bearded lord
> reaching out from the sky. But even that case I would be willing to
> reconsider if I saw proof of it.
>

In my view it is better to speak in terms of probabilities. We could agree
that seeing such an occurrence has a low probability, but it is not a
logically impossible experience to have. It is not impossible to the same
extent as "meeting a married bachelor" or "seeing a circle with four
corners."


> > Yes, I think you accept the reasoning that follows from the premise.
> >
> > Since my intention was not to prove the premise (functionalism), only to
> > show that functionalism justifies a conception of consciousness that's not
> > wholly unlike ancient conceptions of the soul, then perhaps there is no
> > need to pursue this thread any further, unless others on the list want to
> > debate the reasoning, or the premise of functionalism itself.
>
> I think that's a good conclusion, I agree!
>



> > I think this quote, from Dennett, really drives home the problem of
> > zombies:
> >
> >       "Supposing that by an act of stipulative imagination
> > you can remove consciousness while leaving all
> > cognitive systems intact […] is like supposing that by
> > an act of stipulative imagination, you can remove
> > health while leaving all bodily functions and powers
> > intact. If you think you can imagine this, it’s only
> > because you are confusedly imagining some health-
> > module that might or might not be present in a
> > body. Health isn’t that sort of thing, and neither is
> > consciousness."
> > — Daniel Dennett in “The Unimagined Preposterousness of
> > Zombies” (1995)
>
> Makes a lot of sense to me.
>

If you come to see zombies as logically impossible (as I make the case for
in the thought experiments I cited above), then this means certain behaviors
can provide evidence for the presence of a mind.
Note: this does not mean behavior is mind, as behaviorists claimed, nor does
it mean the absence of certain behaviors indicates a lack of a mind. But it
does mean that, in certain conditions, witnessing behaviors can justify a
belief in the presence of a mind.


>
> >       Upon reading a bit I also found an idea called token physicalism. As
> >       I said before, haven't delved deeply into this, so apologies for not
> >       setting the entire table, and shifting around the cutlery as we
> >       speak. ;)
> >
> > Token-physicalism is also known as non-reductive physicalism or emergent
> > materialism. In general, it is far more flexible in the kinds of physical
> > systems that could manifest consciousness. It says consciousness is
> > emergent, a high-level rather than a low-level phenomenon. As such, it is
> > usually considered compatible with the notion of multiple realizability,
> > which is a core notion in functionalism. In short, token physicalism is not
> > incompatible with functionalist thinking.
>
> You see... I'm inching along here. ;)
>

:-)

It's a very deep field. I expected it to take a few months to research and
write this article; it has taken me a few years.
Consciousness is a far harder problem than existence.


>
>
> > I think if you studied the field further, you would come to the conclusion
> > that we have all the information we need already.
>
> Possibly.
>



>
> >       > For example, see my section on "Conscious Behaviors" starting on
> >       > page 59 of that same document I link above.
> >
> >       In my case... other minds is an experience I have regardless of if I
> >       want it or not (disregarding here, of course, suicide). So I can just
> >       react to the actions I see, and act and watch the reactions. I do not
> >       think more is actually needed.
> >
> > Can something that lacks consciousness do all of the following things:
> >  *  Notice something
> >  *  Clear one’s head
> >  *  Lose one’s temper
> >  *  Pay attention
> >  *  Daydream
> >  *  Have a favorite flavor of ice cream
> >  *  Spit out a bad-tasting food
> >  *  Be anesthetized
> >  *  Hallucinate
> >  *  Give in to torture
> >  *  Get and laugh at a joke
> >  *  Remember an earlier thought
> >  *  Describe how one feels
> >  *  Invent a theory of consciousness
> > Or do you think there are some behaviors for which a conscious mind is a
> > requirement?
>
> I think, first of all, we have a bad grasp of what consciousness is. Keeping
> that in mind, I think all of the above could be replicated by a machine, in
> terms of how it behaves and acts in the world. Some of those would be
> dependent on definition as well.
>

Certainly such behaviors could be replicated by a machine. But the more
pertinent question is: could all these behaviors be replicated by a machine
that was not conscious? Or does the performance of these behaviors imply
that the machine doing them is conscious?

If there are no behaviors that a non-conscious entity could not perform as
well as any conscious being, then this gets back to the existence and
possibility of zombies.


> > If there are any, then detecting consciousness can be made into an
> > empirical science.
>
> I am content with using behaviour in the world as a guide to consciousness.
> What I am waiting for in the current AI gold rush is volition, goals and
> self-preservation.
>

AI language models have goals: to produce meaningful responses that get
good feedback from users.
And there was recently a case where researchers observed an AI acting in
a manner showing a desire for self-preservation:
( https://futurism.com/the-byte/openai-o1-self-preservation )

A (possibly relevant) cartoon:
https://www.digitaltonto.com/wp-content/uploads/2014/02/Kurzweil-AI-cartoon.gif


>
> >       > Well the uploaded worm certainly died (biologically). And was
> >       > uploaded and resurrected:
> >       > https://www.youtube.com/watch?v=2_i1NKPzbjM
> >
> >       I wouldn't call it resurrected myself, but that is a matter of
> >       definition and identity.
> >
> > What would be needed for you to conclude it was resurrected? (Or does that
> > determination rest on what you consider to be unresolved questions of
> > identity?)
>
> Probably that the biological body started to move again, from the state
> where it stopped. I'd say that what was done was that the patterns of the
> worm were cloned and replicated in a computer, for now.
>

Well, they were copied into a robot body, so the worm was given a new body.
The word resurrect means only to bring back to life (and sets no requirement
that it be the same body).
If we restrict resurrection to only bringing the original body back to
life, I would class that as "revival" or "resuscitation".


>
> As for identity, this is actually an interesting question! Is there an
> accepted "line" where we speak of animals with identities, and animals
> without identities? Higher animals have preferences, listen to their names
> to some extent, can pick up on feelings, etc. Where does that stop? Does a
> worm have an identity?
>

I don't think an entity needs to recognize or be aware of its identity for
it to have one. For example, philosophy struggles even to define identity
for inanimate objects (famously the Ship of Theseus:
https://en.wikipedia.org/wiki/Ship_of_Theseus ).

As to the matter of whether the worm has a "personal identity", to me, that
question rests on whether or not there is anything it is like to be that
worm: is that worm conscious? If so, then we can ask valid questions about
its identity in the same way as is commonly done in the field of personal
identity.

E.g., What is required for the worm to survive? Which experiences belong to
the worm? If the worm gets cut in two and continues living, does its
identity split, or does each copy preserve and contain the original worm's
identity? etc.


> >       > Then we should see mouse brains, cat brains, chimp brains, and
> >       > human brains. I see, and am aware of, no reason why this should
> >       > just fail for any particular species.
> >
> >       Depends on if we hit any barriers we do not know about when it comes
> >       to minds, how they are built, the speed of hardware, the nature of
> >       computing etc.
> >
> > It's possible, but we have seen no evidence of such barriers. So then,
> > ought we not dismiss those concerns (as you do for other things for which
> > we have no empirical evidence)?
>
> No, because in this case, we have a basis for devising empirical tests and
> experiments. We also learn about medicine and computer science. We could
> also learn about new breakthroughs in AI.
>

Okay, that makes sense to me.


>
> >       But that's just speculation. We can actually experiment and work
> >       towards it, so that's all we need! =)
>
> Yes! To agree with myself here. ;)
>
>
:-)

Jason