[ExI] Uploads as a group of AI agents
Jason Resch
jasonresch at gmail.com
Sat Mar 28 02:23:19 UTC 2026
On Fri, Mar 27, 2026 at 5:15 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> Keith Henson wrote:
>
> > I kind of suspect that copies will not be permitted. You think we
> > have a population problem? Imagine the whole population multiplying
> > by 5, 10, or 1000. One way this might be done is to limit uploads to
> > those who have stored their bodies in an inactive state. But I don't
> > know how it will turn out. I do find Robin Hanson's em ideas, where
> > endless copies are made to reduce labor cost, to be disturbing.
>
> Before uploading can become a common thing, we're going to have to figure
> out how to make powerful computing systems run on much less energy than
> they do now. I'm sure plenty of people have compared the energy
> requirements of our brains to our current computers.
>
> Given that can be solved, there's no reason why the population of uploads
> can't be many orders of magnitude greater than the population of biological
> humans, even on the earth (and of course, uploads will be ideal for
> colonising space).
It takes about an acre of farmland to feed each person. Averaged over the
day, that acre receives roughly 663,684 watts of solar power. If we used
this energy to synthesize food directly, e.g. with nanotechnology, we
could feed 6,853 people from the same acre.
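The arithmetic can be checked directly. The 164 W/m^2 day-night average
insolation and the 2,000 kcal/day dietary requirement are my assumed
inputs, chosen because they reproduce the figures above:

```python
# Rough check of the sunlight-per-acre food arithmetic.
ACRE_M2 = 4046.86            # square metres in one acre
AVG_INSOLATION = 164.0       # W/m^2, assumed day-night average at the surface
KCAL_PER_DAY = 2000          # assumed human dietary energy requirement
JOULES_PER_KCAL = 4184
SECONDS_PER_DAY = 86400

# Average solar power falling on one acre
acre_power_w = ACRE_M2 * AVG_INSOLATION                          # ~663,685 W

# Continuous power equivalent of a 2,000 kcal/day diet
food_power_w = KCAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY  # ~96.9 W

people_fed = acre_power_w / food_power_w
print(round(people_fed))     # ~6,853 people per acre
```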
> I'd expect the population would reach quadrillions, in the entire solar
> system. Much more, with mature nanotechnology.
>
The incident solar radiation on the Moon's surface alone is enough to
support over half a quadrillion people (at current biological brain
efficiencies).
The Moon receives about 13,000 terawatts of solar energy. Since the human
brain runs on roughly 20 watts of power, this is enough energy to power 650
trillion human minds, or roughly 80,000 times Earth's current population.
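The lunar figure follows from the solar constant and the Moon's
cross-sectional area; the 1,361 W/m^2 solar constant and 1,737 km mean
radius are standard values I am supplying here:

```python
import math

SOLAR_CONSTANT = 1361.0      # W/m^2 of sunlight at 1 AU
MOON_RADIUS_M = 1.7374e6     # mean lunar radius in metres
BRAIN_POWER_W = 20.0         # approximate power draw of a human brain

# Sunlight intercepted by the Moon's cross-sectional disc
moon_power_w = SOLAR_CONSTANT * math.pi * MOON_RADIUS_M ** 2
print(f"{moon_power_w / 1e12:,.0f} TW")       # ~12,900 TW, i.e. ~13,000 TW

# Number of 20 W brain-equivalents that power could run
minds = moon_power_w / BRAIN_POWER_W
print(f"{minds / 1e12:,.0f} trillion minds")  # ~645 trillion
```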
>
> Once uploads exist, I expect there will be a huge jump forward in our
> understanding of how our minds (and minds in general) work, and I wouldn't
> be surprised if we soon afterwards have the ability to build them to order,
> including limited-purpose minds, ideally suited for the kinds of work that
> Robin Hanson is talking about, without any need for exploitation of any
> full-scale people (we might well have to devise a classification system for
> minds, going from fairly simple automata, through current-human level
> minds, and on to superintelligences).
>
I recently began an effort to create a taxonomy of minds, across 10 levels.
At least in biologically evolved brains, these levels appear progressive:
achieving level N implies possession of the abilities at all levels less
than N. I welcome any feedback or criticism: I am open to revising them if
anyone sees a problem with the levels, their order, or thinks any are
missing.
A proposed taxonomy for various levels of minds:
1. *Reactive:* can respond to stimuli
2. *Stateful:* keeps distinct internal states
3. *Adaptive:* can store memories and learn
4. *Attentive:* maintains a model of the environment
5. *Reflective:* models the self in relation to the environment
6. *Empathic:* models others as entities with their own minds
7. *Contemplative:* thinks about abstract objects and the future
8. *Introspective:* can have second-order thoughts about thoughts
9. *Metacognitive:* has third-order thoughts about nature of thought
10. *Superfluid:* can arbitrarily reorganize itself to experience any
qualia
A *level 1* *reactive* mind becomes aware of changes to some environmental
variable and triggers an automatic response. For example, a jellyfish
<https://en.wikipedia.org/wiki/Jellyfish> tentacle that reflexively pulls
in prey
<https://www.pbs.org/wnet/nature/blog/no-brain-for-jellyfish-no-problem/>
on contact.
A *level 2* *stateful* mind not only tracks changes to some environmental
variable but also tracks internal states. For example, a nematode
<https://en.wikipedia.org/wiki/Nematode> will know when it becomes satiated
and will then stop eating.
A *level 3* *adaptive* mind not only tracks internal states, but can store
arbitrary pieces of new information. For example, bees
<https://en.wikipedia.org/wiki/Bee> remember visual landmarks to help them
find their way back to the hive.
A *level 4* *attentive* mind not only remembers, but continuously updates
an internal model of the environment as new information comes in. For
example, a mouse <https://en.wikipedia.org/wiki/Mus_musculus_domesticus>
tracking how to escape from a maze.
A *level 5* *reflective* mind not only models the environment, but its
model is expansive enough to include a model of the self operating in that
environment. For example, a magpie <https://en.wikipedia.org/wiki/Magpie>
recognizing itself in a mirror.
A *level 6* *empathic* mind not only models itself, but also models the
intentions, motivations, and feelings of other minds. For example, a dog
<https://en.wikipedia.org/wiki/Dog> that waits until no one is looking
before it tries to sneak a treat.
A *level 7* *contemplative* mind not only models others, but can think
about possibilities and situations that don't exist, and thereby plan for
the future. For example, ravens <https://en.wikipedia.org/wiki/Raven> will
keep tools
<https://www.forbes.com/sites/scotttravers/2026/01/28/meet-the-bird-that-can-plan-for-the-future-hint-it-uses-tools-and-trades-for-rewards/>
for use in future situations.
A *level 8* *introspective* mind can not only think about abstract
situations, but can think about thoughts and the process of thinking. For
example, a human <https://en.wikipedia.org/wiki/Human> asking "Why did I
just react in that way, what was I thinking?"
A *level 9* *metacognitive* mind not only thinks about thoughts, but
contemplates the nature of subjectivity itself as a phenomenon. For
example, *you* reading this article, trying to understand consciousness.
A *level 10* *superfluid* mind not only contemplates consciousness, but can
realize in itself any qualitative state it wants. No examples are yet
known, but a superintelligence
<https://en.wikipedia.org/wiki/Superintelligence> able to modify its mind
at will would qualify.
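The cumulative claim above, that a level-N mind retains every ability below
N, can be made precise by treating the taxonomy as an ordered enumeration.
A minimal sketch (the class and function names are my own illustration):

```python
from enum import IntEnum

class MindLevel(IntEnum):
    """The ten proposed levels, ordered so that higher implies lower."""
    REACTIVE = 1
    STATEFUL = 2
    ADAPTIVE = 3
    ATTENTIVE = 4
    REFLECTIVE = 5
    EMPATHIC = 6
    CONTEMPLATIVE = 7
    INTROSPECTIVE = 8
    METACOGNITIVE = 9
    SUPERFLUID = 10

def abilities(level: MindLevel) -> list[MindLevel]:
    """A level-N mind possesses every capability at levels 1..N."""
    return [lvl for lvl in MindLevel if lvl <= level]

# e.g. a magpie (level 5) also reacts, holds state, learns, and attends
print(abilities(MindLevel.REFLECTIVE))
```

The `IntEnum` ordering encodes the progressive structure: any comparison
between levels, and any check of which abilities a mind retains, reduces to
integer comparison.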
Defined in this way, we find several consistent patterns:
- Species at lower levels emerge earlier in history.
- Species at higher levels tend to have larger brains.
- Species remain capable of all lower-level functions.
This suggests that these levels roughly track the expanding capabilities
that minds gained during the course of their evolutionary development.
While there are no present examples of level 10 minds, they may appear in
the near future, through developments in AI
<https://en.wikipedia.org/wiki/Artificial_intelligence> or *mind uploading*
<https://en.wikipedia.org/wiki/Mind_uploading>.
A level 10 mind represents a categorically different level beyond human
cognition. The human mind has a comparatively rigid repertoire of
qualitative states, fixed by our sensory system and neural architecture.
A superfluid mind, in comparison, can rewire itself on the fly to produce
in itself any conscious experience. In principle, it could even create a
conscious state that contained the qualitative aspects of two different
species' brains at once, allowing it to compare and contrast what it is
like to be a bat versus what it is like to be a human, in the same way that
we can compare and contrast different colors in a single visual scene.
There are, however, difficulties in classifying high-level minds.
It is hard to recognize level 8 and higher minds without a common language.
This is because such thoughts are internal, and behavioral indications of
these thoughts are subtle -- if present at all.
We can only guess what whales might think and sing about, with brains that
are six times larger than ours. Might they contemplate their own thought
processes or consciousness? How would we know?
Jason