[ExI] A science-religious experience

Jason Resch jasonresch at gmail.com
Tue Mar 11 18:27:13 UTC 2025


On Mon, Mar 10, 2025, 11:23 AM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> > I think we have largely reached a conclusion on all topics raised in this
> > thread. I'll just leave a few responses below to some new things you
> raised.
>
> Agreed! There might be one or two small things, I'll have a look and
> heavily
> delete the rest.
>

Very nice :-)


> >       > A = Beethoven's 5th
> >       > B = Scribblings of the notes of Beethoven's 5th on paper
> >       > C = An orchestral rendition Beethoven's 5th
> >       >
> >       > If there were a true identity, if the scribblings on paper are
> identical
> >       > to Beethoven's 5th, then A = B. Likewise, if there is an
> identity between the
> >       > orchestral rendition and Beethoven's 5th, then A = C.
> >       >
> >       > But then, by the transitive nature of identity, then B ought to
> be identical
> >       > with C, yet the scribblings on paper are not identical with the
> orchestral
> >       > rendition. B ≠ C.
> >       >
> >       > Somewhere along the way an error was made. Can you spot it?
> >
> >       I think this is a matter of definition. How would you define B5?
> As a process?
> >
> > I would define Beethoven's 5th as a particular mathematical structure,
> > isomorphically present in all its various manifestations (as sheet
> music, live
> > performances, as various numeric or alphanumeric lists of notes, in the
> > particular patterns of holes in player piano rolls, etc.). This
> structure, as a
> > mathematical pattern, is abstract, informational, and immaterial. The
> > isomorphism common in all the various manifestations allows us to
> recognize
> > what is the same between them, but there is not an identity between the
> > structure to which they are all isomorphic, and each of its various
> > manifestations. The sheet music ≠ the orchestral performance ≠ the piano
> roll.
> > So then we cannot make an identity between any of those manifestations
> and the
> > abstract mathematical pattern, the abstract mathematical pattern is its
> own
> > unique "thing", not identical with any of its various isomorphisms.
>
> For me I think it goes back to process. Depending on the context, dialogue
> or
> situation, different things can represent B5 for me. It can be the
> sequence of
> notes. It could (if written today) be the copyrighted work. It could be the
> process of me enjoying the execution of the notes. It all depends on the
> situation and the purpose of the investigation.
>

Yes I think this is what I was saying, and what I meant by all instances
containing the same isomorphic pattern.

But note that strictly speaking no instance can be "identical with" this
pattern, without (by implication) all instances being identical with each
other (which is clearly not the case). Therefore, the pattern is something
distinct from any of its particular instantiations.

Do you understand my reasoning here?
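
The distinction can be sketched in code. In this minimal Python sketch (the
note names and MIDI numbers for the opening motif are my own illustrative
choices), two manifestations of the same motif are equal under an
isomorphism, yet neither equal nor identical to each other:

```python
# Two "manifestations" of the opening motif of Beethoven's 5th (G-G-G-Eb),
# in different encodings.
sheet_music = ["G", "G", "G", "Eb"]   # letter names, as on paper
midi_roll   = [67, 67, 67, 63]        # MIDI note numbers, as on a piano roll

# An isomorphism mapping one encoding onto the other.
to_midi = {"G": 67, "Eb": 63}

# The abstract pattern recovered from each manifestation is the same...
assert [to_midi[n] for n in sheet_music] == midi_roll

# ...but the manifestations themselves are neither equal nor identical:
assert sheet_music != midi_roll       # different encodings
assert sheet_music is not midi_roll   # different objects
```

Structural equality under a mapping is what lets us recognize "the same
piece" in both; object identity is what no manifestation shares with the
abstract pattern or with any other manifestation.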



> >       > Would you feel any more content with some further confidence
> (provided by
> >       > philosophical thought experiments) to know there are reasons we
> can be quite
> >       > sure these other beings are conscious?
> >
> >       Depends on the thought experiment. I might. Try me! ;)
> >
> > I lay them all out starting on page 20 of section 3.5:
> https://drive.google.com/drive/folders/1-SMVWgQFfImXNRRuuB9kQwhgxPLAwxYL
> >
> > In brief, the thought experiments I cover are:
> >  1. A "Consciousness" gene (page 26)
> >  2. Philosophical Zombies (page 32)
> >  3. Zombie Earth (page 38)
> >  4. Lying Zombies (page 46)
> >  5. A Mental Lockbox (page 51)
> >  6. Conscious Behaviors (page 59)
> >  7. The Argonov Test (page 73)
> >  8. Consciousness and Intelligence (page 81)
> >  9. Reflective Zombies (page 88)
> >  10. Mary's Room (page 102)
> >  11. Neural Substitution (page 135)
> >  12. Fading Qualia (page 151)
> >  13. Inverted Spectrum (page 182)
> >  14. Dancing Qualia (page 189)
> >  15. Hemispheric Replacement (page 198)
> > Thought experiments 1-5 rule out zombies, and this is shown to rule out
> epiphenomenalism.
>
> Based on a materialist outlook, I think there is no possibility of a
> philosophical zombie. I don't quite see how it could be meaningful. At the
> end
> of the day, we see what we see, and the "subjective" details of what goes
> on
> inside, are forever blocked from objective investigation.
>

Good. I think most philosophers today reject zombies and epiphenomenalism.


> > Thought experiments 6-8 establish by what means we can test for and
> verify the presence of consciousness.
>
> I think conscious behaviors is a good way. I think it flows from a
> materialist
> point of view.
>

I would say it follows from a functionalist view, but not a
type-physicalist view. The type-physicalist would accept that you could
have something like an android that, by all accounts, acts like a
conscious, emotive, perceptive, reflective human, yet has no mind at all
(because its brain isn't made of the right material stuff).

So if you think behavioral indicators can justify belief in the presence of
a mind, then I think you are leaning more towards the functionalist
conception of mind.


> > Thought experiments 11-15, in my view, more or less conclusively
> establish functionalism as the only workable theory of
> > consciousness.
>
> This might be for another thread.
>

Sure. Would you like to start it?


> >       Probably the project of launching a rocket, the first time,
> contains both
> >       speculation, and application in the form of tests and experiments.
> Thought
> >       experiments, reasoning etc. can be valuable tools. They can also
> lead us astray.
> >
> > An unforeseen defect might, of course, cause it to explode or fail mid
> flight,
> > but the general laws of physics, for gravity, thermodynamics, enable the
> > engineers and rocket scientists to compute exactly, for example, how
> much fuel
> > the rocket should need to get into orbit, or get to the moon, etc.
>
> True.
>
> > Consider, for example, that Boeing's 777 aircraft was designed entirely
> on a
> > computer. There were no test flights, or prototypes built to test and
> revise
> > things along the way. The plane went straight from its design (based
> entirely
> > on models and simulations based on our understanding of physical laws)
> > straight into production of the millions of parts that would all need to
> fit
> > and work together. And, it turned out that when all those parts were
> assembled
> > for the first time, the result was a working aircraft that had the
> range, and
> flight speed, and other characteristics that they had predicted. Such is
> the
> > state of our understanding of physics, and the confidence we have in
> using
> > those models to make predictions.
>
> True. But it was not designed that way in isolation. It was supported by
> countless hours of experience, experiment, knowledge, empiricism, science,
> that
> across generations, went into that software and design.
>

Yes. All this evidence, and all this confidence that our laws and models
work, are reliable, and are accurate (to the degree that we'll build a
plane and put a person in it), suggests to me that this same confidence,
in the reliability of the laws as we understand them, applies to our
physical bodies and brains.


> >       > If you would use the word speculation to cover the behavior of
> the rocket
> >       > scientists in designing a rocket, then you are consistent. As I
> am suggesting,
> >       > no more than what the same laws of physics tell us ought to
> happen and am
> >       > assuming only that the laws apply everywhere to all physical
> things in this
> >       > universe. If that requires speculation, then everything everyone
> ever does is
> >       > speculation.
> >
> >       I could say speculate, and then those speculations are tested, when
> the rocket
> >       takes off the first time.
> >
> > Do you "only speculate" that the sun will rise tomorrow?
>
> If you want to be strict about it, yes. From a pragmatic point of view, I
> refrain from having a conscious opinion. It just happens. If I wanted to
> estimate it, I'd look to my empirical experience, and conclude that it is
> very
> certain. I think the key insight for me is that I don't have to have an
> opinion
> on the matter, and thus I avoid the eternal doubter problem.
>

For the same reason, I am very certain that a brain (in the condition of
any living brain) will continue to act as a living brain would (rather
than, say, spontaneously die because it did not have a supernatural soul
assigned to it).


> > To me, "speculate" indicates a degree of uncertainty that I don't think
> fits
> > for the situation I am discussing.
>
> True. I think a lot of our differences come out of us beginning with
> different
> associations, and then discovering that once I more clearly specify what I
> think, or you explain, the difference was actually quite small or
> non-existent.
> Your use of the term immaterial for instance, was one such example.
>

Yes, it is so important to get definitions right. It was my mistake not to
explicitly define "immaterial".


> >       > I think this may be another core difference between us, which
> seems to relate
> >       > to our different base ideas around the validity or reliability
> of deductive
> >       > reasoning.
> >
> >       Yes I think you are right here. Where I feel uncomfortable is when
> those
> >       examples are bridged to the real world. I am not uncomfortable
> with the
> >       mathematician solving mathematical problems. When math is used as
> a helper for
> physics, to describe our world, that is where my discomfort sets in.
> >
> > I wonder how comfortable the test pilot was who was the first to take
> off in
> > an entirely untested 777. ;-)
>
> I imagine he had a few butterflies in his stomach. ;)
>

:-)


> >       >       So based on my current
> >       >       experience, I have never witnessed any resurrections in
> the strong sense (not
> >       >       talking about freezing/thawing or near death experiences
> here). So assuming an
> >       >       all powerful AI who reassembled every atom and electron of
> a dead body in some
> >       >       live state, yes, why not? But the advantage is that this
> would be something
> >       >       happening in the real world, so when it happens I will
> revise my position.
> >       >
> >       >       Until it happens, I consider death to be death, since I
> have not yet seen any
> >       >       evidence to the contrary.
> >       >
> >       > But note that this conclusion contravenes your materialist
> assumption.
> >
> >       I don't see how that follows.
> >
> > The materialist assumes the brain is a physical object which operates
> > according to physical laws. All materialists further believe in the
> concept of
> > experimental reproducibility: material systems arranged to be in the same
> > state will evolve in the same way over time. These two materialist
> assumptions
> > together imply that restoring a dead brain to the state it was when it
> was
> > alive will result in the brain resuming its function as it was when it
> was
> > alive. Of course, we might run the experiment some day and find that for
> some
> > reason it doesn't work, but that would refute materialism.
>
> Ah, I see what you mean. Well, the easy answer is that, I'll revise my
> position
> once the experiment is performed.


But you earlier stated that your position is materialism.

I think your choice then is to become agnostic about materialism or,
alternatively, to accept materialism and all its implications.

If you remain agnostic about the implications of materialism, then I would
say you don't really accept materialism; you are agnostic about it.

> Theoretically, it might work as you describe,
> but we must keep in mind, that at present, it is just a thought
> experiment. We
> also might discover some technical or scientific reason it might not be
> done. So
> in order to minimize my ontological commitments, I'll either say, it is
> impossible, from a day to day perspective, or I might say that I refrain
> from
> having an opinion until we have more evidence.
>
> But, to make it more interesting, let's drop the AI and say that we're
> talking
> about the probability of resurrecting someone with a body temperature of
> 13.7 C
> who has been declared dead before the arrival to the hospital, I'd say
> that the
> probability of that is definitely not zero.
>
> >       > I think I may see the problem here.
> >       >
> >       > I believe you are using the word "possible" to mean "currently
> technically feasible."
> >       > Whereas I use "possible" to mean "possible in principle" (i.e.
> nomologically possible).
> >       >
> >       > But please correct me if my interpretation is wrong.
> >
> >       Yes, this might be closer to the truth. Another aspect to keep in
> mind when I
> >       speak of impossible, is that it does not mean impossible forever
> in many cases.
> >       Then there are of course cases, to complicate matters, where I
> consider
> >       impossible to be impossible for ever, such as our bearded lord
> reaching out from
> >       the sky. But even that case I would be willing to reconsider if I
> saw proof of
> >       it.
> >
> > In my view it is better to speak in terms of probabilities. We could
> agree
> > seeing such an occurrence has a low probability, but it is not a
> logically
> > impossible experience to have. It is not impossible to the same extent as
> > "meeting a married bachelor," or "seeing a circle with four corners."
>
> True!
>

Glad we agree!


> >       > I think this quote, from Dennett, really drives home the problem
> of zombies:
> >       >
> >       >       "Supposing that by an act of stipulative imagination
> >       > you can remove consciousness while leaving all
> >       > cognitive systems intact […] is like supposing that by
> >       > an act of stipulative imagination, you can remove
> >       > health while leaving all bodily functions and powers
> >       > intact. If you think you can imagine this, it’s only
> >       > because you are confusedly imagining some health-
> >       > module that might or might not be present in a
> >       > body. Health isn’t that sort of thing, and neither is
> >       > consciousness."
> >       > — Daniel Dennett in “The Unimagined Preposterousness of
> >       > Zombies” (1995)
> >
> >       Makes a lot of sense to me.
> >
> > If you come to see zombies as logically impossible, (as I make the case
> for in
> > the thought experiments I cited above), then this means certain
> behaviors can
> > provide evidence for the presence of a mind. Note, this does not mean
> behavior
> > is mind, as behaviorists claimed, nor does it mean absence of certain
> > behaviors indicates a lack of a mind, but it does mean, in certain
> conditions,
> > witnessing behaviors can justify a belief in the presence of a mind.
>
> Well, based on a materialist starting point, I see them as impossible. It
> is a
> good example of a thought experiment gone wrong, where we chase after
> something
> which really does not make any sense at all.


Well the idea didn't originate from thought experiments, it originated from
a strict belief in physical law. This is what drove Huxley to his position
of epiphenomenalism: if the physical universe is causally closed, he saw no
room for consciousness to do anything, as everything is already
pre-determined by physical laws playing out.

Zombies are just a tool that makes the implications of epiphenomenalism
clearer. They are, in fact, the philosophical tool that allowed the
construction of thought experiments exposing Huxley's theory of
epiphenomenalism as false. So here is an example of thought experiments
rescuing scientists from being led astray by over-extrapolating their
materialist theories. ;-)


> Just like Qualia. A red herring,
> that doesn't really exist as something outside of an active process when
> mind
> meets the world. Without the mind, there is no qualia or redness.
>

I am not sure why you say qualia are a red herring.

But I agree with the last sentence.



> >       >  *  Remember an earlier thought
> >       >  *  Describe how one feels
> >       >  *  Invent a theory of consciousness
> >       > Or do you think there are some behaviors for which a
> conscious mind is a requirement?
> >
> >       I think we first of all, have a bad grasp of what consciousness
> is. Keeping that
> >       in mind, I think all of the above could be replicated by a
> machine, in terms of
> >       how it behaves and acts in the world. Some of those would be
> dependent on
> >       definition as well.
> >
> > Certainly such behaviors could be replicated by a machine. But the more
> > pertinent question is: Could all these behaviours be replicated by a
> machine
> > that was not conscious? Or does the performance of these behaviors imply
> that
> > the machine doing them is conscious?
>
> I think this is just a matter of definition. I'm perfectly content equating
> conscious behaviour, as per the list above, with something being
> conscious. I
> also think the zombie argument is nonsense from a material point of view. I
> really do not see how it could work.
>

I don't think it is a matter of definition. The machine exhibiting those
behaviors either has a mind or it doesn't (regardless of our definition).

So I am asking: which do you think corresponds with reality (the reality or
non-reality of that machine's mind)?




> >       > If there are any, then detecting consciousness can be made into
> an empirical science.
> >
> >       I am content with using behaviour in the world as a guide to
> consciousness. What
> >       I am waiting for in the current AI gold rush is volition, goals and
> >       self-preservation.
> >
> > AI language models have goals: to produce meaningful responses that get
> good
> > feedback from the users. And there was recently the case where
> researchers
> > observed the AI acting in a manner showing a desire for
> self-preservation.
> > ( https://futurism.com/the-byte/openai-o1-self-preservation )
> >
> > A (possibly relevant)
> > cartoon:
> https://www.digitaltonto.com/wp-content/uploads/2014/02/Kurzweil-AI-cartoon.gif
>
> True. I have not been personally convinced yet, that LLMs are conscious. I
> encourage more research, and I would also like to see a resurrection of
> some
> kind of harder Turing-prize.
>

What would a robot have to do to convince you it was conscious?

And what would an animal have to do?


> Anyone rich reading this and who wants to sponsor please reach out, it
> would
> be a lot of fun to get involved! =)
>
> I wonder what size of the prize is necessary to motivate people to win?
>

$1000 is probably enough. The right software could automate everything too:
no need for in-person events, and many people would volunteer as judges.
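
As a toy illustration of how such a prize could be automated (the setup
and names here are hypothetical, not an existing system), a harness could
pair judges with unlabeled counterparts and score their guesses against
chance:

```python
import random

def run_trials(judge, n_trials=1000, seed=0):
    """judge(transcript) -> "human" or "ai"; returns the judge's accuracy."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        # Secretly pair the judge with either a human or an AI.
        truth = rng.choice(["human", "ai"])
        transcript = f"<chat log with a {truth}>"  # stand-in for a real conversation
        if judge(transcript) == truth:
            correct += 1
    return correct / n_trials

# A judge who guesses blindly hovers near 50% accuracy; an AI "passes"
# when even motivated judges can do no better than this baseline.
guesser = random.Random(1)
accuracy = run_trials(lambda t: guesser.choice(["human", "ai"]))
```

The design choice worth noting is that the test reduces to a statistical
question: does judge accuracy exceed the chance baseline?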


> >       > What would be needed for you to conclude it was resurrected? (Or
> does that
> >       > determination rest on what you consider to be unresolved
> questions of
> >       > identity?)
> >
> >       Probably that the biological body started to move again, from the
> state where it
> >       stopped. I'd say that what was done was that the patterns of the
> worm were
> >       cloned and replicated in a computer, for now.
> >
> > Well they were copied into a robot body. So it was given a new body. The
> word
> > resurrect means only to bring back to life (and sets no requirement on it
> > being the same body). If we restrict resurrection to only bringing the
> > original body back to life, I would class that as "revival" or
> > "resuscitation".
>
> This I think might be better continued in our thread about the "identity
> formula" (C)! ;)
>

:-)


> >       As for identity, this is actually an interesting question! Is
> there an accepted
> >       "line" where we speak of animals with identities, and animals
> without
> >       identities? Higher animals have preferences, listen to their
> names, to some
> >       extent, can pick up on feelings etc. Where does that stop? Does a
> worm have an
> >       identity?
> >
> > I don't think an entity needs to recognize or be aware of its identity
> for it
> > to have one. For example, philosophy struggles even to define identity
> for
> > inanimate objects (famously the Ship of Theseus:
> > https://en.wikipedia.org/wiki/Ship_of_Theseus ).
> >
> > As to the matter of whether the worm has a "personal identity", to me,
> that
> > question rests on whether or not there is anything it is like to be that
> worm:
> > is that worm conscious? If so, then we can ask valid questions about its
> > identity in the same way as is commonly done in the field of personal
> > identity.
> >
> > E.g., What is required for the worm to survive? Which experiences belong
> to
> > the worm? If the worm gets cut in two and continues living, does its
> identity
> > split, or does each copy preserve and contain the original worm's
> identity?
> > etc.
>
> Hmm, maybe we should move this into the other thread as well?
>

Sounds perfect for that thread: what does bodily continuity mean for worms
with split bodies or humans with split brains?

Jason