[ExI] A science-religious experience

efc at disroot.org efc at disroot.org
Tue Mar 18 11:37:23 UTC 2025


>       > I would define Beethoven's 5th as a particular mathematical structure,
>       > isomorphically present in all its various manifestations (as sheet music, live
>       > performances, as various numeric or alphanumeric lists of notes, in the
>       > particular patterns of holes in player piano rolls, etc.). This structure, as a
>       > mathematical pattern, is abstract, informational, and immaterial. The
>       > isomorphism common in all the various manifestations allows us to recognize
>       > what is the same between them, but there is not an identity between the
>       > structure to which they are all isomorphic, and each of its various
>       > manifestations. The sheet music ≠ the orchestral performance ≠ the piano roll.
>       > So then we cannot make an identity between any of those manifestations and the
>       > abstract mathematical pattern; the abstract mathematical pattern is its own
>       > unique "thing", not identical with any of its various isomorphisms.
>
>       For me I think it goes back to process. Depending on the context, dialogue or
>       situation, different things can represent B5 for me. It can be the sequence of
>       notes. It could (if written today) be the copyrighted work. It could be the
>       process of me enjoying the execution of the notes. It all depends on the
>       situation and the purpose of the investigation.
> 
> Yes I think this is what I was saying, and what I meant by all instances
> containing the same isomorphic pattern.
> 
> But note that strictly speaking no instance can be "identical with" this
> pattern, without (by implication) all instances being identical with each
> other (which is clearly not the case). Therefore, the pattern is something
> distinct from any of its particular instantiations.
> 
> Do you understand my reasoning here?

Not quite. How can it be something distinct from its instantiations without
having any existence? I think it comes down to a wider or narrower definition,
together with the context, in order to tease out a workable "B5". Given a
specific definition, I am sure I would agree with your point; given another,
maybe not.

>       > Thought experiments 11-15, in my view, more or less conclusively establish functionalism as the only workable theory of
>       > consciousness.
>
>       This might be for another thread.
> 
> Sure. Would you like to start it?

What would you like to use as a starting point?

>       >       > But note that this conclusion contravenes your materialist assumption.
>       >
>       >       I don't see how that follows.
>       >
>       > The materialist assumes the brain is a physical object which operates
>       > according to physical laws. All materialists further believe in the concept of
>       > experimental reproducibility: material systems arranged to be in the same
>       > state will evolve in the same way over time. These two materialist assumptions
>       > together imply that restoring a dead brain to the state it was when it was
>       > alive will result in the brain resuming its function as it was when it was
>       > alive. Of course, we might run the experiment some day and find that for some
>       > reason it doesn't work, but that would refute materialism.  
>
>       Ah, I see what you mean. Well, the easy answer is that I'll revise my position
>       once the experiment is performed.
> 
> But you earlier stated that your position is materialism.
> 
> I think your choice then is to become agnostic about materialism, or
> alternatively, accept materialism and all its implications.
> 
> If you remain agnostic about the implications of materialism, then I would say
> you don't really accept materialism, and are agnostic about it.

Well, there are a few differences here. I might have a materialist outlook, and
prefer the methods of science for finding truth. But in the example above we
cannot do that, so currently it is impossible; it remains a thought experiment.
The second argument is that materialism is not incompatible with being agnostic
about open questions while gathering more evidence. The third point is that we
do not know what consciousness is, nor how it works. On the surface of it, what
you say would seem to be the truth, but if our starting point of understanding
is flawed, then the implication is flawed. That is why, as long as it is
basically impossible to test, even materialists are allowed to stay agnostic.
But what separates this example from, just to take our favourite example,
multiple worlds, is that this one is actually related to a possible experiment
(in the future) in the physical world, so I would feel much more positively
disposed towards the implication here than towards any implication of multiple
worlds.

Can you see how agnosticism towards some of these thought experiments can be
reconciled with a materialist outlook? Or am I wrong here?

>       > If you come to see zombies as logically impossible, (as I make the case for in
>       > the thought experiments I cited above), then this means certain behaviors can
>       > provide evidence for the presence of a mind. Note, this does not mean behavior
>       > is mind, as behaviorists claimed, nor does it mean absence of certain
>       > behaviors indicates a lack of a mind, but it does mean, in certain conditions,
>       > witnessing behaviors can justify a belief in the presence of a mind.  
>
>       Well, based on a materialist starting point, I see them as impossible. It is a
>       good example of a thought experiment gone wrong, where we chase after something
>       which really does not make any sense at all.
> 
> Well, the idea didn't originate from thought experiments; it originated from a
> strict belief in physical law. This is what drove Huxley to his position of
> epiphenomenalism: if the physical universe is causally closed, he saw no room
> for consciousness to do anything, as everything is already pre-determined by
> physical laws playing out.

Free will is another interesting topic. I see a good chance, perhaps, of it
becoming another mega thread? ;) I have no clear position there, but I do note
that Dennett is a compatibilist, and I can live with that position as a starting
point for further investigation.

> Zombies are just a tool that makes understanding the implications of
> epiphenomenalism more clear. They are, in fact, the philosophical tool that
> allowed the construction of thought experiments that exposed Huxley's theory
> of epiphenomenalism as false. So here is an example of thought experiments
> rescuing scientists from being led astray by over-extrapolating
> their materialist theories. ;-)

You do have a point here! =)

>       Just like Qualia. A red herring,
>       that doesn't really exist as something outside of an active process when mind
>       meets the world. Without the mind, there is no qualia or redness.
> 
> I am not sure why you say qualia are a red herring.

The idea here is that a red herring is something that misleads or distracts from
a relevant or important question.

So from the point of view of people who do believe qualia exist in the world, it
is a red herring. They chase illusions that do not exist as they define them.

> But I agree with the last sentence

>       > Certainly such behaviors could be replicated by a machine. But the more
>       > pertinent question is: Could all these behaviours be replicated by a machine
>       > that was not conscious? Or does the performance of these behaviors imply that
>       > the machine doing them is conscious?
>
>       I think this is just a matter of definition. I'm perfectly content equating
>       conscious behaviour, as per the list above, with something being conscious. I
>       also think the zombie argument is nonsense from a material point of view. I
>       really do not see how it could work.
> 
> I don't think it is a matter of definition. The machine exhibiting those
> behaviors either has a mind or it doesn't (regardless of our definition).

Why would that be regardless of the definition? Without a definition the
statement is meaningless. We then don't know what we are talking about.

> So I am asking which truth do you think corresponds with reality (the reality
> or nonreality of that machine's mind)?

If the mind is defined as (and I'm just giving an example here, I'm not saying
this is actually _the_ definition) a set of algorithms giving rise to the list
of specified behaviours above, then the machine's mind is a reality, since it is
a set of algorithms that give rise to a certain behaviour in the external world.

I'm not quite sure I understand your question here, so please forgive me if my
answer was nonsense. ;)

>       > A (possibly relevant)
>       > cartoon: https://www.digitaltonto.com/wp-content/uploads/2014/02/Kurzweil-AI-cartoon.gif
>
>       True. I have not been personally convinced yet, that LLMs are conscious. I
>       encourage more research, and I would also like to see a resurrection of some
>       kind of harder Turing-test prize.
> 
> What would a robot have to do to convince you it was conscious?

Exhibit a certain set of behaviours.

> And what would an animal have to do?

The same.

>       Anyone rich reading this who wants to sponsor, please reach out; it would
>       be a lot of fun to get involved! =)
>
>       I wonder what size of the prize is necessary to motivate people to win?
> 
> $1000 is probably enough. The right software could automate everything too, no
> need for in-person events, and many people would volunteer as judges.

Only 1000 USD?? Then maybe the time is ripe for my own AI prize!! =D I'd love to
throw in some qualifiers or tests about volition, goals and self-preservation
into the mix! But I doubt that 1000 USD would motivate many people to develop
chat bots or chat AIs to compete. But if you are right... maybe there is hope?
=)

>       > I don't think an entity needs to recognize or be aware of its identity for it
>       > to have one. For example, philosophy struggles even to define identity for
>       > inanimate objects (famously the Ship of Theseus:
>       > https://en.wikipedia.org/wiki/Ship_of_Theseus ). 
>       >
>       > As to the matter of whether the worm has a "personal identity", to me, that
>       > question rests on whether or not there is anything it is like to be that worm:
>       > is that worm conscious? If so, then we can ask valid questions about its
>       > identity in the same way as is commonly done in the field of personal
>       > identity.
>       >
>       > E.g., What is required for the worm to survive? Which experiences belong to
>       > the worm? If the worm gets cut in two and continues living, does its identity
>       > split, or does each copy preserve and contain the original worm's identity?
>       > etc.
>
>       Hmm, maybe we should move this into the other thread as well?
> 
> Sounds perfect for that thread: what does bodily continuity mean for worms
> with split bodies or humans with split brains?

You mean the identity thread, or a new one?

Best regards, 
Daniel

