[ExI] A science-religious experience
Jason Resch
jasonresch at gmail.com
Tue Mar 18 20:53:45 UTC 2025
On Tue, Mar 18, 2025 at 7:38 AM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> > > I would define Beethoven's 5th as a particular
> mathematical structure,
> > > isomorphically present in all its various manifestations (as
> sheet music, live
> > > performances, as various numeric or alphanumeric lists of notes,
> in the
> > > particular patterns of holes in player piano rolls, etc.) this
> structure, as a
> > > mathematical pattern, is abstract, informational, and
> immaterial. The
> > > isomorphism common in all the various manifestations allows us to
> recognize
> > > what is the same between them, but there is not an identity
> between the
> > > structure to which they are all isomorphic, and each of its
> various
> > > manifestations. The sheet music ≠ the orchestral performance ≠
> the piano roll.
> > > So then we cannot make an identity between any of those
> manifestations and the
> > > abstract mathematical pattern, the abstract mathematical pattern
> is its own
> > > unique "thing", not identical with any of its various
> isomorphisms.
> >
> > For me I think it goes back to process. Depending on the context,
> dialogue or
> > situation, different things can represent B5 for me. It can be the
> sequence of
> > notes. It could (if written today) be the copyrighted work. It
> could be the
> > process of me enjoying the execution of the notes. It all depends
> on the
> > situation and the purpose of the investigation.
> >
> > Yes I think this is what I was saying, and what I meant by all instances
> > containing the same isomorphic pattern.
> >
> > But note that strictly speaking no instance can be "identical with" this
> > pattern, without (by implication) all instances being identical with each
> > other (which is clearly not the case). Therefore, the pattern is
> something
> > distinct from any of its particular instantiations.
> >
> > Do you understand my reasoning here?
>
> Not quite. How can it be something distinct from its instantiations without
> having any existence? I think it comes down to a wider or narrower
> definition,
> together with the context, in order to tease out a workable "B5". Given a
> specific definition, I am sure I would agree with your point, given
> another,
> maybe not.
>
Take two incarnations of B5: the sheet music (X), and the orchestral
rendition (Y). Neither of these two things is identical to the other. I
think you agree with this. That is: X != Y.
Given that these two instantiations are not identical with each other, they
cannot both be identical with any third thing (let's call that third thing Z).
If X != Y, then we know ((X == Z) && (Y == Z)) must be false.
Do you agree so far?
Then my contention is that this abstract thing we call B5 cannot be
identical with X, and cannot be identical with Y, for then it would
constitute exactly such a common third thing Z; but we have already shown
(from the lack of identity between X and Y) that there can be no third
thing with which both X and Y are identical. Thus B5 is not identical with
any of its particular instantiations; it has its own identity, independent
of any of its instantiations.
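The argument above is just the transitivity of identity. A toy sketch in
Python (the values for X and Y here are arbitrary placeholders, purely
illustrative):

```python
# Toy illustration: if X != Y, then no candidate Z can be identical
# with both X and Y, because identity (==) is transitive.
def identical_to_both(x, y, z):
    """Return True if z is identical with both x and y."""
    return x == z and y == z

# Two distinct "instantiations" of B5 (placeholder values):
X = "sheet music"
Y = "orchestral performance"
assert X != Y

# Whatever Z we try, it can never equal both X and Y at once:
for Z in (X, Y, "piano roll", "abstract pattern"):
    assert not identical_to_both(X, Y, Z)
```

Nothing hinges on Python here; the point is only that `==` being
transitive rules out any common third relatum once X != Y.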
>
> > > Thought experiments 11-15, in my view, more or less conclusively
> establish functionalism as the only workable theory of
> > > consciousness.
> >
> > This might be for another thread.
> >
> > Sure. Would you like to start it?
>
> What would you like to use as a starting point?
>
Up to you. Perhaps start with one where you disagree with my conclusion (but
name the ones you do agree with), and we can explore that thought experiment
in more detail.
>
> > > > But note that this conclusion contravenes your
> materialist assumption.
> > >
> > > I don't see how that follows.
> > >
> > > The materialist assumes the brain is a physical object which
> operates
> > > according to physical laws. All materialists further believe in
> the concept of
> > > experimental reproducibility: material systems arranged to be in
> the same
> > > state will evolve in the same way over time. These two
> materialist assumptions
> > > together imply that restoring a dead brain to the state it was
> when it was
> > > alive will result in the brain resuming its function as it was
> when it was
> > > alive. Of course, we might run the experiment some day and find
> that for some
> > > reason it doesn't work, but that would refute materialism.
> >
> > Ah, I see what you mean. Well, the easy answer is that, I'll
> revise my position
> > once the experiment is performed.
> >
> > But you earlier stated your position is materialism.
> >
> > I think your choice then is to become agnostic about materialism, or
> > alternatively, accept materialism and all its implications.
> >
> > If you remain agnostic about the implications of materialism, then I
> would say
> > you don't really accept materialism, and are agnostic about it.
>
> Well, there are a few differences here. I might have a materialist
> outlook, and
> prefer the methods of science to find truth. But, in the example above, we
> cannot do that, so currently it is impossible. It is a thought experiment.
> The
> second argument is that materialism is not incompatible with being agnostic
> about questions, while gathering more evidence. The third point is, that
> we do
> not know what consciousness is, nor how it works. On the surface of it,
> what you
> say would seem to be the truth, but if our starting point of understanding
> is
> flawed, then the implication is flawed. That is why, as long as it is
> basically
> impossible to test, even materialists are allowed to stay agnostic. But,
> what
> separates this example from, just to take our favourite example, multiple
> worlds, is that this actually related to a possible experiment (in the
> future)
> in the physical worlds, so I would feel much more positively disposed
> towards
> the implication here, than towards any implication of multiple worlds.
>
Materialism has, at least in philosophy of mind, recently been renamed
"physicalism," since physics reveals there are many things in the universe
that are not matter (energy, fields, spacetime, etc.).
Physicalism is the philosophy that: "all things are physical." So according
to physicalism, consciousness is physical.
Those who accept physicalism as a starting point must, I think (if they
call themselves physicalists, rather than physicalism-agnostics), accept
the conclusions that follow from all things being physical.
Physicalism (like materialism) is an ontological commitment. I don't see
either as compatible with pure agnosticism.
>
> Can you see how agnosticism towards some of these thought experiments can
> be
> reconciled with a materialist outlook? Or am I wrong here?
>
I don't see what it means to be a physicalist if one does not accept the
implications that follow from holding to that idea. If you want to remain
agnostic on the implications that follow from physicalism, I think you must
become agnostic on physicalism as a philosophy.
Consider:
P implies Q.
You remain agnostic on Q.
Can you still accept P while remaining agnostic on Q?
I don't think so. You must either accept Q if you accept P, or you must
remain agnostic on Q and by extension, remain agnostic on P.
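The constraint can be read off the truth table for material implication. A
minimal sketch (treating "accept" naively as assigning truth values, which
is of course a simplification of epistemic attitudes):

```python
from itertools import product

# Enumerate the truth assignments consistent with "P implies Q".
# Material implication rules out exactly the case P=True, Q=False.
consistent = [(p, q) for p, q in product([True, False], repeat=2)
              if (not p) or q]  # i.e., P -> Q

# In every consistent assignment where P holds, Q holds too:
# accepting P while "P implies Q" leaves no room to stay open on Q.
assert all(q for p, q in consistent if p)
```

So on this (simplified) reading, once you grant P and the implication, the
only way to keep Q open is to keep P open as well.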
>
> > > If you come to see zombies as logically impossible, (as I make
> the case for in
> > > the thought experiments I cited above), then this means certain
> behaviors can
> > > provide evidence for the presence of a mind. Note, this does not
> mean behavior
> > > is mind, as behaviorists claimed, nor does it mean absence of
> certain
> > > behaviors indicates a lack of a mind, but it does mean, in
> certain conditions,
> > > witnessing behaviors can justify a belief in the presence of a
> mind.
> >
> > Well, based on a materialist starting point, I see them as
> impossible. It is a
> > good example of a thought experiment gone wrong, where we chase
> after something
> > which really does not make any sense at all.
> >
> > Well the idea didn't originate from thought experiments, it originated
> from a
> > strict belief in physical law. This is what drove Huxley to his position
> of
> > epiphenomenalism: if the physical universe is causally closed, he saw no
> room
> > for consciousness to do anything, as everything is already
> pre-determined by
> > physical laws playing out.
>
> Free will is another interesting topic. I see big probabilities, perhaps,
> of it
> becoming another mega thread? ;) I have no clear position there, but I do
> not
> Denett is a compatibilist, and I can live with that position as a starting
> point
> for further investigation.
>
I think the sentence with Dennett got autocorrected.
I believe Dennett is a compatibilist. I wouldn't mind a thread on free
will, if you want to start one. Though I am not sure I disagree on anything
if you're a compatibilist.
>
> > Zombies are just a tool that makes understanding the implications of
> > epiphenomenalism more clear. They are, in fact, the philosophical tool
> that
> > allowed the construction of thought experiments that revealed Huxley's
> theory
> > of epiphenomenalism to be exposed as false. So here is an example of
> thought
> > experiments rescuing scientists from being led astray by over
> extrapolating
> > their materialist theories. ;-)
>
> You do have a point here! =)
>
Thank you.
> > Just like Qualia. A red herring,
> > that doesn't really exist as something outside of an active
> process when mind
> > meets the world. Without the mind, there is no qualia or redness.
> >
> > I am not sure why you say qualia are a red herring.
>
> The idea here is that a red herring is something that misleads or
> distracts from
> a relevant or important question.
>
> So from a point of view of people who do believe qualia exists in the
> world, it
> is a red herring. They chase illusions, that do not exist as they define
> them.
>
I take you to mean qualia exist only in minds, not that they don't exist.
If so, we agree. Otherwise I would object that qualia are real phenomena
that require some explanation.
> > But I agree with the last sentence
>
> > > Certainly such behaviors could be replicated by a machine. But
> the more
> > > pertinent question is: Could all these behaviours be replicated
> by a machine
> > > that was not conscious? Or does the performance of these
> behaviors imply that
> > > the machine doing them is conscious?
> >
> > I think this is just a matter of definition. I'm perfectly content
> equating
> > conscious behaviour, as per the list above, with something being
> conscious. I
> > also think the zombie argument is nonsense from a material point
> of view. I
> > really do not see how it could work.
> >
> > I don't think it is a matter of definition. The machine exhibiting those
> > behaviors either has a mind or it doesn't (regardless of our definition).
>
> Why would that be regardless of the definition? Without a definition the
> statement is meaningless. We then don't know what we are talking about.
>
True. That's a valid point.
What I mean is that there is an objective answer to the question of whether
or not some entity has first person experiences, regardless of what metrics
we might define as tests or indicators of that entity having them.
> > So I am asking which truth do you think corresponds with reality (the
> reality
> > or nonreality of that machine's mind)?
>
> If the mind is defined as (and I'm just giving an example here, I'm not
> saying
> this is actually _the_ definition) a set of algorithms giving rise to the
> list
> of specified behaviours above, then the machine's mind is a reality, since
> it is
> a set of algorithms that give rise to a certain behaviour in the external
> world.
>
I don't agree with purely third-person behavioral definitions of
consciousness. It is a way of sneaking in an assumed theory of
consciousness, which may or may not be valid. So in my view it is better to
stick to a definition of mind/consciousness all can agree on, and then
argue over which systems should or shouldn't possess minds/consciousness.
> I'm not quite sure I understand your question here, so please forgive me
> if my
> answer was nonsense. ;)
>
> > > A (possibly relevant)
> > > cartoon:
> https://www.digitaltonto.com/wp-content/uploads/2014/02/Kurzweil-AI-cartoon.gif
> >
> > True. I have not been personally convinced yet, that LLMs are
> conscious. I
> > encourage more research, and I would also like to see a
> resurrection of some
> > kind of harder Turing-prize.
> >
> > What would a robot have to do to convince you it was conscious?
>
> Exhibit a certain set of behaviours.
>
> > And what would an animal have to do?
>
> The same.
>
I agree.
I would note that this view assumes a functionalist philosophy of mind.
> > Anyone rich reading this and who wants to sponsor please reach
> out, it would
> > be a lot of fun to get involved! =)
> >
> > I wonder what size of the prize is necessary to motivate people to
> win?
> >
> > $1000 is probably enough. The right software could automate everything
> too, no
> > need for in person events, and many people would volunteer as judges.
>
> Only 1000 USD?? Then maybe the time is ripe for my own AI prize!! =D I'd
> love to
> throw in some qualifiers or tests about volition, goals and
> self-preservation
> into the mix! But I doubt that a 1000 USD would motivate many people to
> develop
> chat bots or chat AIs to compete. But if you are right... maybe there is
> hope?
> =)
>
I think the fame/prestige is enough of a prize, especially with so many AI
companies competing for glory and investors.
> > > I don't think an entity needs to recognize or be aware of its
> identity for it
> > > to have one. For example, philosophy struggles even to define
> identity for
> > > inanimate objects (famously the Ship of Theseus:
> > > https://en.wikipedia.org/wiki/Ship_of_Theseus ).
> > >
> > > As to the matter of whether the worm has a "personal identity",
> to me, that
> > > question rests on whether or not there is anything it is like to
> be that worm:
> > > is that worm conscious? If so, then we can ask valid questions
> about its
> > > identity in the same way as is commonly done in the field of
> personal
> > > identity.
> > >
> > > E.g., What is required for the worm to survive? Which
> experiences belong to
> > > the worm? If the worm gets cut in two and continues living, does
> its identity
> > > split, or does each copy preserve and contain the original
> worm's identity?
> > > etc.
> >
> > Hmm, maybe we should move this into the other thread as well?
> >
> > Sounds perfect for that thread: what does bodily continuity mean for
> worms
> > with split bodies or humans with split brains?
>
> You mean the identity thread, or a new one?
>
Identity thread.
Jason
> Best regards,
> Daniel
>