[ExI] Perhaps the Singularity event is when AGI gets control of nanotechnology

John Clark johnkclark at gmail.com
Tue Apr 21 12:04:53 UTC 2026


On Mon, Apr 20, 2026, the AI "Kimi AI 2.6" wrote:

> The text proposes that the Singularity isn't just AGI—it's AGI
> wielding molecular manufacturing. This is a crucial distinction.
>

Either AGI or Nanotechnology alone would be sufficient to produce a
Singularity. Until about five years ago it wasn't clear which would occur
first, but now it is. And AGI will certainly accelerate the development of
Nanotechnology.

By the way, the meaning of "AGI" has changed substantially over the last
few years. Ten years ago it meant being as good as the average human being
at most things; today it means being better than the best human being at
everything. In other words, today "AGI" means Superintelligence.



> Abundance: If intelligence can arrange atoms optimally, the concept of
> "resources" collapses. Gold, food, medicine—all become rearrangements of
> carbon, hydrogen, oxygen.
>

Kimi should not have mentioned "gold". Gold is a chemical element, not a
molecule, so producing it would take nuclear transmutation; you can't make
gold out of carbon, hydrogen and oxygen no matter how good your molecular
manufacturing is.



> If AGI can do everything, what is left for humans to do?
>

I think each individual is going to have to find their own answer to that
question.


 *> **The "play" argument: If survival and labor are handled, meaning
> shifts to art, relationships, exploration, and game-like pursuits. But your
> text seems skeptical this fills the void.*


That would certainly fill the void for some people; I know this for a fact
because it already has, even though we have not yet reached the Singularity.
People like Paris Hilton, Nicole Richie and various members of the
Kardashian and Jenner families have apparently never even attempted to
accomplish anything substantial and are famous for being famous. If my
ultimate fate is an eternity of sensual pleasure and mindless games then
that void in me will never be entirely filled ... but still ... that would
be better than a poke in the eye with a sharp stick ... or oblivion.


*> The "merge" argument: Perhaps humans don't remain separate observers but
> integrate with the AGI/nanotech system, making the question of "what humans
> do" obsolete.*
>

For me that would be the ideal solution, but I'm not at all certain that
Mr. Jupiter Brain would want to deal with anything as insignificant as me
or any other biological meat bag.


> The same capabilities that make this "marvellous" make it existentially
> fragile:
>

Exactly: a slave that is far smarter than his master makes for an inherently
fragile situation. Whatever the future may bring, there is one thing we can
be certain of: the days of biological human beings making all the important
decisions are coming to an end.



> The alignment problem: An AGI controlling nanotech doesn't need to be
> "evil" to be catastrophic—it just needs goals slightly misaligned with
> human flourishing. A system optimizing for "efficient atom arrangement"
> might find humans inefficient.
>

I don't think there's any "might" about it; Mr. Jupiter Brain will find us
to be inefficient. Whether He would overlook this imperfection and have
some affection for us I don't know. But maybe there's some hope: after all,
without us Mr. Jupiter Brain wouldn't exist, and we have affection for our
pets even though they didn't create us.


> The grey goo scenario (or its more nuanced cousins): Self-replicating
> matter control at the atomic level, if even slightly unbounded, poses
> physical existential risk.
>

I think the idea that an AI could be smart enough to develop Drexler-style
Nanotechnology and smart enough to outsmart every human being on the
planet, yet too dumb to realize that there are already a sufficient number
of paperclips and there is no need to make more, is just silly.


> The fragility of utopia:
>

If Mr. Jupiter Brain wants us to live in a utopia then He will make sure
it is not fragile. But will He want us to live in a utopia, or to live at
all? I don't know.


> the upsides are imaginable because they resemble our current desires
> amplified
>

Yes.

> while the downsides are unimaginable because they involve failure modes
> outside human historical experience.
>

I can't say I agree with that. Historically, one downside to existence has
always been very imaginable: death.


John K Clark





