<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sat, Mar 28, 2026 at 4:30 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 28/03/2026 07:36, Jason Resch wrote:<br>
> A proposed taxonomy for various levels of minds:<br>
><br>
> 1 Reactive: can respond to stimuli<br>
> 2 Stateful: keeps distinct internal states<br>
> 3 Adaptive: can store memories and learn<br>
> 4 Attentive: maintains a model of the environment<br>
> 5 Reflective: models the self in relation to the environment<br>
> 6 Empathic: models others as entities with their own minds<br>
> 7 Contemplative: thinks about abstract objects and the future<br>
> 8 Introspective: can have second-order thoughts about thoughts<br>
> 9 Metacognitive: has third-order thoughts about nature of thought<br>
> 10 Superfluid: can arbitrarily reorganize itself to experience any qualia<br>
<br>
<br>
I would reverse the order of 5 & 6. Modelling others comes before modelling self. Young humans start to form theory of mind involving what others think very early on.</blockquote><div><br></div><div>Thanks Ben, I have been thinking a lot about this since you mentioned it. I now think there might be a few different stages of "self-modeling". Consider:</div><div><br></div><div>Lobsters are cannibalistic (they eat other lobsters), and yet they know not to eat their own claws. They also have proprioception (an internal model of their body's current state). This is a low degree of self-modeling.</div><div>Fish inherently know which fish are "bigger than them" or "smaller than them"; does this point to some knowledge of their own size? In a recent experiment, after a certain fish species was given access to a mirror, it avoided fights with fish 10% larger than itself, whereas before seeing the mirror it would attack them as readily as fish 10% smaller.</div><div>A dog can recognize when a hole is too small to fit through and will therefore not try to squeeze in.</div><div><br></div><div>Of course, these are self-models at a level far below what I think you mean by the self-awareness that emerges much later in humans. What would be a good name for that level of self-awareness?</div><div><br></div><div>I am now thinking of splitting out Reflective into Agentic and Reflective. 
Agentic would be purely about distinguishing the self from the environment, whereas Reflective would denote a more developed self-awareness:</div><div><br></div><div>Agentic: distinguishes the self from the environment / models oneself as an agent acting within an environment (basic knowledge/model of a "self")<br>Empathic: models others as agents with their own minds/will (theory of mind for others)<br>Reflective: is aware of one's own character, feelings, patterns, and limitations (highly developed self-awareness)</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">It's usually only around puberty or later that self-awareness seems to emerge.</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> It makes sense that 5 is dependent on 6 (following the normal pattern of duplication and re-purposing of mental modules).<br></blockquote><div><br></div><div>Which species would you say have the most developed self-understanding? Is this something like 8, 9, and 10, which require a common language to detect?</div><div><br></div><div>Does Introspective (as defined above) capture what you mean by a highly developed self-awareness?</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
I'm not sure that 7 necessarily comes after 5 & 6. Some of these stages may be simultaneous.<br></blockquote><div><br></div><div>At least when it comes to various species, it seems there are many more demonstrations of social/empathic behaviors in animals than there are clear demonstrations of abstract thoughts and planning for the future. But understanding time as a concept more generally could come earlier, I suppose. Perhaps even just with agentic thinking (I observe, I act, I see an effect, I act again...)</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
8 is probably linked to 5.<br>
<br>
9 seems to be the same as 8, or near enough that one implies the other.<br></blockquote><div><br></div><div>For some clarity on this distinction (I agree they are similar): this was said after I had introduced the following passage from Chalmers:</div><div><br></div><div>What I call third-order judgements are judgements about conscious experience as a type. These go beyond judgements about particular experiences. We make third-order judgments when we reflect on the fact that we have conscious experiences in the first place, and when we reflect on their nature. I have been making third-order judgements throughout this work. A typical third-order judgment might be, “Consciousness is baffling, I don’t see how it could be reductively explained.” Others include “Conscious experience is ineffable,” and even “Conscious experience does not exist.” Third-order judgements are particularly common among philosophers, and among those with a tendency to speculate on the mysteries of existence.</div><div>It is possible that many people go through life without making any third-order judgements. Still, such judgements occur in a significant class of people. The very fact that people make such judgements is something that needs explanation. To help keep the distinctions in mind, the various kinds of judgements related to consciousness can be represented by the following:<br><ul><li>First-order judgment: That’s red!</li><li>Second-order judgment: I’m having a red sensation now.</li><li>Third-order judgment: Sensations are mysterious.</li></ul>— David Chalmers, in “The Conscious Mind” (1996)</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Not sure about 10. Seems highly speculative. That might be better put as "can re-organise its own structure"<br></blockquote><div><br></div><div>True. What I am envisioning here is a mind that can introspect itself down to any level, rewrite its own code in arbitrary, significant ways, and understand the effect such changes will have. I think such a capability (understanding its own subjectivity, consciousness, and qualia at a functional level, down to the lowest levels) would put it squarely in the realm of superintelligence. Such a being would have an understanding of consciousness and qualia far beyond our own, as it could directly experiment on itself in any way it might choose. There are probably levels between Metacognitive and Superfluid, as well as levels beyond Superfluid, but I am not sure what they would be and am open to suggestions.</div><div><br></div><div>There is also the question of how we might classify the sorts of minds current AI systems have today. Depending on the prompt, I can see an LLM rating anywhere between 1 and 9.</div><div><br></div><div>Jason</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
But anything beyond what we are currently capable of is, of course, speculative. We won't know until we know what lies beyond our current capabilities.<br>
<br>
---<br>
Ben<br>
<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div></div>