<div dir="auto"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 18, 2025 at 7:38 AM efc--- via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
> > I would define Beethoven's 5th as a particular mathematical structure,<br>
> > isomorphically present in all its various manifestations (as sheet music, live<br>
> > performances, as various numeric or alphanumeric lists of notes, in the<br>
> > particular patterns of holes in player piano rolls, etc.). This structure, as a<br>
> > mathematical pattern, is abstract, informational, and immaterial. The<br>
> > isomorphism common in all the various manifestations allows us to recognize<br>
> > what is the same between them, but there is not an identity between the<br>
> > structure to which they are all isomorphic, and each of its various<br>
> > manifestations. The sheet music ≠ the orchestral performance ≠ the piano roll.<br>
> > So then we cannot make an identity between any of those manifestations and the<br>
> > abstract mathematical pattern, the abstract mathematical pattern is its own<br>
> > unique "thing", not identical with any of its various isomorphisms.<br>
><br>
> For me I think it goes back to process. Depending on the context, dialogue or<br>
> situation, different things can represent B5 for me. It can be the sequence of<br>
> notes. It could (if written today) be the copyrighted work. It could be the<br>
> process of me enjoying the execution of the notes. It all depends on the<br>
> situation and the purpose of the investigation.<br>
> <br>
> Yes I think this is what I was saying, and what I meant by all instances<br>
> containing the same isomorphic pattern.<br>
> <br>
> But note that strictly speaking no instance can be "identical with" this<br>
> pattern, without (by implication) all instances being identical with each<br>
> other (which is clearly not the case). Therefore, the pattern is something<br>
> distinct from any of its particular instantiations.<br>
> <br>
> Do you understand my reasoning here?<br>
<br>
Not quite. How can it be something distinct from its instantiations without<br>
having any existence? I think it comes down to a wider or narrower definition,<br>
together with the context, in order to tease out a workable "B5". Given a<br>
specific definition, I am sure I would agree with your point, given another,<br>
maybe not.<br></blockquote><div><br></div><div>Take two incarnations of B5: the sheet music (X), and the orchestral rendition (Y). These two things are not identical to each other; I think you agree with this. That is: X != Y.</div><div><br></div><div>Given that these two instantiations are not identical to each other, they cannot both be identical to any third thing (let's call that third thing Z). If X != Y, then we know ((X == Z) && (Y == Z)) must be false.</div><div>Do you agree so far?</div><div><br></div><div>Then my contention is that this abstract thing we call B5 cannot be identical with X, and cannot be identical with Y, for it would then constitute just such a common third thing Z, and we already showed (based on the lack of an identity between X and Y) that there can be no third thing to which X and Y are both identical. Thus B5 is not identical with any of its particular instantiations; it has its own identity, independent of any of its instantiations.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> > Thought experiments 11-15, in my view, more or less conclusively establish functionalism as the only workable theory of<br>
> > consciousness.<br>
><br>
> This might be for another thread.<br>
> <br>
> Sure. Would you like to start it?<br>
<br>
What would you like to use as a starting point?<br></blockquote><div><br></div><div>Up to you. Perhaps start with one where you disagree with my conclusion (but name the ones you do agree with), and we can explore that thought experiment in more detail.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> > > But note that this conclusion contravenes your materialist assumption.<br>
> ><br>
> > I don't see how that follows.<br>
> ><br>
> > The materialist assumes the brain is a physical object which operates<br>
> > according to physical laws. All materialists further believe in the concept of<br>
> > experimental reproducibility: material systems arranged to be in the same<br>
> > state will evolve in the same way over time. These two materialist assumptions<br>
> > together imply that restoring a dead brain to the state it was when it was<br>
> > alive will result in the brain resuming its function as it was when it was<br>
> > alive. Of course, we might run the experiment some day and find that for some<br>
> > reason it doesn't work, but that would refute materialism. <br>
><br>
> Ah, I see what you mean. Well, the easy answer is that I'll revise my position<br>
> once the experiment is performed.<br>
> <br>
> But you earlier stated your position is materialism.<br>
> <br>
> I think your choice then is to become agnostic about materialism, or<br>
> alternatively, accept materialism and all its implications.<br>
> <br>
> If you remain agnostic about the implications of materialism, then I would say<br>
> you don't really accept materialism, and are agnostic about it.<br>
<br>
Well, there are a few differences here. I might have a materialist outlook, and<br>
prefer the methods of science to find truth. But, in the example above, we<br>
cannot do that, so currently it is impossible. It is a thought experiment. The<br>
second argument is that materialism is not incompatible with being agnostic<br>
about questions, while gathering more evidence. The third point is, that we do<br>
not know what consciousness is, nor how it works. On the surface of it, what you<br>
say would seem to be the truth, but if our starting point of understanding is<br>
flawed, then the implication is flawed. That is why, as long as it is basically<br>
impossible to test, even materialists are allowed to stay agnostic. But, what<br>
separates this example from, just to take our favourite example, multiple<br>
worlds, is that this is actually related to a possible experiment (in the future)<br>
in the physical world, so I would feel much more positively disposed towards<br>
the implication here, than towards any implication of multiple worlds.<br></blockquote><div><br></div><div>Materialism has, at least in philosophy of mind, recently been renamed "physicalism," since physics reveals there are many things in the universe that are not matter (energy, fields, spacetime, etc.).</div><div><br></div><div>Physicalism is the philosophy that "all things are physical." So according to physicalism, consciousness is physical.</div><div><br></div><div>If one accepts physicalism as a starting point, I think one must (if one is to be called a physicalist, rather than a physicalism-agnostic) accept the conclusions that follow from all things being physical.</div><div dir="auto"><br></div><div dir="auto">Physicalism, like materialism, is an ontological commitment. I don't see either as compatible with pure agnosticism.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Can you see how agnosticism towards some of these thought experiments can be<br>
reconciled with a materialist outlook? Or am I wrong here?<br></blockquote><div><br></div><div>I don't see what it means to be a physicalist if one does not accept the implications that follow from holding to that idea. If you want to remain agnostic on the implications that follow from physicalism, I think you must become agnostic on physicalism as a philosophy.</div><div dir="auto"><br></div><div dir="auto">Consider:</div><div dir="auto">P implies Q.</div><div dir="auto">You remain agnostic on Q.</div><div dir="auto">Can you still accept P while remaining agnostic on Q?</div><div dir="auto"><br></div><div dir="auto">I don't think so. You must either accept Q if you accept P, or you must remain agnostic on Q and by extension, remain agnostic on P.</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> > If you come to see zombies as logically impossible, (as I make the case for in<br>
> > the thought experiments I cited above), then this means certain behaviors can<br>
> > provide evidence for the presence of a mind. Note, this does not mean behavior<br>
> > is mind, as behaviorists claimed, nor does it mean absence of certain<br>
> > behaviors indicates a lack of a mind, but it does mean, in certain conditions,<br>
> > witnessing behaviors can justify a belief in the presence of a mind. <br>
><br>
> Well, based on a materialist starting point, I see them as impossible. It is a<br>
> good example of a thought experiment gone wrong, where we chase after something<br>
> which really does not make any sense at all.<br>
> <br>
> Well the idea didn't originate from thought experiments, it originated from a<br>
> strict belief in physical law. This is what drove Huxley to his position of<br>
> epiphenomenalism: if the physical universe is causally closed, he saw no room<br>
> for consciousness to do anything, as everything is already pre-determined by<br>
> physical laws playing out.<br>
<br>
Free will is another interesting topic. I see big probabilities, perhaps, of it<br>
becoming another mega thread? ;) I have no clear position there, but I do not<br>
Denett is a compatibilist, and I can live with that position as a starting point<br>
for further investigation.<br></blockquote><div><br></div><div>I think the sentence with Dennett got autocorrected.</div><div><br></div><div>I believe Dennett is a compatibilist. I wouldn't mind a thread on free will, if you want to start one. Though I am not sure I disagree on anything if you're a compatibilist.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> Zombies are just a tool that makes understanding the implications of<br>
> epiphenomenalism more clear. They are, in fact, the philosophical tool that<br>
> allowed the construction of thought experiments that exposed Huxley's theory<br>
> of epiphenomenalism as false. So here is an example of thought<br>
> experiments rescuing scientists from being led astray by over extrapolating<br>
> their materialist theories. ;-)<br>
<br>
You do have a point here! =)<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Thank you.</div><div dir="auto"><br></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> Just like Qualia. A red herring,<br>
> that doesn't really exist as something outside of an active process when mind<br>
> meets the world. Without the mind, there is no qualia or redness.<br>
> <br>
> I am not sure why you say qualia are a red herring.<br>
<br>
The idea here is that a red herring is something that misleads or distracts from<br>
a relevant or important question.<br>
<br>
So from the point of view of people who do believe qualia exist in the world, it<br>
is a red herring. They chase illusions that do not exist as they define them.</blockquote></div></div><div dir="auto"><br></div><div dir="auto">I take you to mean that qualia exist only in minds, not that they don't exist. If so, we agree. Otherwise I would object that qualia are real phenomena that require some explanation.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> But I agree with the last sentence<br>
<br>
> > Certainly such behaviors could be replicated by a machine. But the more<br>
> > pertinent question is: Could all these behaviours be replicated by a machine<br>
> > that was not conscious? Or does the performance of these behaviors imply that<br>
> > the machine doing them is conscious?<br>
><br>
> I think this is just a matter of definition. I'm perfectly content equating<br>
> conscious behaviour, as per the list above, with something being conscious. I<br>
> also think the zombie argument is nonsense from a material point of view. I<br>
> really do not see how it could work.<br>
> <br>
> I don't think it is a matter of definition. The machine exhibiting those<br>
> behaviors either has a mind or it doesn't (regardless of our definition).<br>
<br>
Why would that be regardless of the definition? Without a definition the<br>
statement is meaningless. We then don't know what we are talking about.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">True. That's a valid point.</div><div dir="auto"><br></div><div dir="auto">What I mean is that there is an objective answer to the question of whether or not some entity has first person experiences, regardless of what metrics we might define as tests or indicators of that entity having them.</div><div dir="auto"><br></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> So I am asking which truth do you think corresponds with reality (the reality<br>
> or nonreality of that machine's mind)?<br>
<br>
If the mind is defined as (and I'm just giving an example here, I'm not saying<br>
this is actually _the_ definition) a set of algorithms giving rise to the list<br>
of specified behaviours above, then the machine's mind is a reality, since it is<br>
a set of algorithms that give rise to a certain behaviour in the external world.</blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">I don't agree with purely third-person behavioral definitions of consciousness. They are a way of sneaking in an assumed theory of consciousness, which may or may not be valid. So in my view it is better to stick to a definition of mind/consciousness that all can agree on; then we can argue the theory of what systems should or shouldn't possess minds/consciousness.</div><div dir="auto"><br></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
I'm not quite sure I understand your question here, so please forgive me if my<br>
answer was nonsense. ;)<br>
<br>
> > A (possibly relevant)<br>
> > cartoon: <a href="https://www.digitaltonto.com/wp-content/uploads/2014/02/Kurzweil-AI-cartoon.gif" rel="noreferrer noreferrer" target="_blank">https://www.digitaltonto.com/wp-content/uploads/2014/02/Kurzweil-AI-cartoon.gif</a><br>
><br>
> True. I have not been personally convinced yet, that LLMs are conscious. I<br>
> encourage more research, and I would also like to see a resurrection of some<br>
> kind of harder Turing-prize.<br>
> <br>
> What would a robot have to do to convince you it was conscious?<br>
<br>
Exhibit a certain set of behaviours.<br>
<br>
> And what would an animal have to do?<br>
<br>
The same.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">I agree.</div><div dir="auto"><br></div><div dir="auto">I would note that this view assumes a functionalist philosophy of mind.</div><div dir="auto"><br></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> Anyone rich reading this and who wants to sponsor please reach out, it would<br>
> be a lot of fun to get involved! =)<br>
><br>
> I wonder what size of the prize is necessary to motivate people to win?<br>
> <br>
> $1000 is probably enough. The right software could automate everything too, no<br>
> need for in person events, and many people would volunteer as judges.<br>
<br>
Only 1000 USD?? Then maybe the time is ripe for my own AI prize!! =D I'd love to<br>
throw in some qualifiers or tests about volition, goals and self-preservation<br>
into the mix! But I doubt that 1000 USD would motivate many people to develop<br>
chat bots or chat AIs to compete. But if you are right... maybe there is hope?<br>
=)</blockquote></div></div><div dir="auto"><br></div><div dir="auto">I think the fame/prestige is enough of a prize, especially with so many AI companies competing for glory and investors.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> > I don't think an entity needs to recognize or be aware of its identity for it<br>
> > to have one. For example, philosophy struggles even to define identity for<br>
> > inanimate objects (famously the Ship of Theseus:<br>
> > <a href="https://en.wikipedia.org/wiki/Ship_of_Theseus" rel="noreferrer noreferrer" target="_blank">https://en.wikipedia.org/wiki/Ship_of_Theseus</a> ). <br>
> ><br>
> > As to the matter of whether the worm has a "personal identity", to me, that<br>
> > question rests on whether or not there is anything it is like to be that worm:<br>
> > is that worm conscious? If so, then we can ask valid questions about its<br>
> > identity in the same way as is commonly done in the field of personal<br>
> > identity.<br>
> ><br>
> > E.g., What is required for the worm to survive? Which experiences belong to<br>
> > the worm? If the worm gets cut in two and continues living, does its identity<br>
> > split, or does each copy preserve and contain the original worm's identity?<br>
> > etc.<br>
><br>
> Hmm, maybe we should move this into the other thread as well?<br>
> <br>
> Sounds perfect for that thread: what does bodily continuity mean for worms<br>
> with split bodies or humans with split brains?<br>
<br>
You mean the identity thread, or a new one?<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">Identity thread.</div><div dir="auto"><br></div><div dir="auto">Jason </div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Best regards, <br>
Daniel_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div></div></div>