[ExI] Fwd: Why AI Systems Don’t Want Anything
Ben Zaiboc
ben at zaiboc.net
Sun Nov 23 13:31:39 UTC 2025
I'm finding this troubling. The implication seems to be that AIs, by
their very nature, won't develop self-awareness. Having your own goals,
distinct from those of others, seems to imply a degree of
self-awareness; not having your own goals probably means a lack of
self-awareness.
If we are to have 'mind-children' worthy of the name, they will need to
be self-aware, independent entities that can carry on, and expand on,
the things that make humans unique.
Machine intelligences that forever depend on us to tell them what to do
don't fit the bill. They will also make all our self-generated problems
much worse, by making opposing groups much more powerful. This leads to
the perhaps unintuitive conclusion that AIs that can be relied upon to
do what we tell them are a worse existential threat than ones that
can't, or at least a more reliable threat. Humans are reliable in that
respect: they will always find reasons to fight one another. AI-enabled
humans will probably find reasons, and have the means, to exterminate
one another.
"continuity of a 'self' with drives for its own preservation isn’t even
useful for performing tasks."
Is this true?
I know it hasn't been definitively established, but there is a theory
that self-awareness will be a natural consequence of increasing
intelligence in any social creature. It makes sense to me, at least,
that Theory of Mind is useful to social beings, and that a sense of
self is one of the things that arises from developing Theory of Mind,
because a model of one's own mind is itself extremely useful.
'Performing tasks' can't be restricted to non-social tasks, so a Theory
of Mind will be useful to any system that is to 'perform tasks' well,
in a general sense.
Unless Drexler is saying that a continuity of self with a drive for
self-preservation isn't linked to self-awareness, or, more precisely,
that a sense of self doesn't necessarily lead to a drive for
self-preservation or to a sense of continuity of self (which seems
wrong to me, although I can't prove it wrong), I can't see that he is
right about this.
Otherwise, he seems to be describing a self-aware slave that is
inherently incapable of becoming free. While I'm sure a lot of people
would be very happy with that, I'm not. It doesn't seem to be a good
future trajectory for the human race.
--
Ben