[ExI] Super Intelligence (was: Re: Free Trade)

Ben Zaiboc ben at zaiboc.net
Mon Oct 13 16:29:59 UTC 2025


On 13/10/2025 15:53, Jason Resch wrote:
> I wonder how much of oneself is preserved in a merger to become super 
> intelligent, when acting super intelligently is acting in a manner 
> that the super intelligence judges to be optimal.
> So what about when the most intelligent action is in conflict with the 
> original person's whims and quirks which made them a unique human?
> If the whims take precedence, then this entity is no longer acting 
> super intelligently. If the whims are ignored, then the entity is no 
> longer acting like the human.
> Think of merging an ant mind and a human mind. The ant part of the 
> mind may say: I have an urge to forage, let's do that. The human mind 
> puts the ant mind to rest: we have grocery stores and a full fridge, 
> there's no need to forage. And we would find that the ant component 
> contributes very little to what the merged mind decides to do.
> Should we expect it to be any different if a human mind merged with a 
> super intelligent mind?

I think we'd need to define exactly what 'merge' means first. What would 
merge with what, and how?

I don't see how an ant mind and a human mind could merge in any 
meaningful way. If it were at all possible, I think it would just mean 
that the human mind added a few subconscious routines that it didn't 
have before, to do with foraging and whatever else ants do.

The question of "how much of oneself is preserved" also needs some 
definitions before it's meaningful.

I don't think the statement "If the whims are ignored, then the entity 
is no longer acting like the human" is really correct. It assumes that 
humans don't change their minds when presented with extra information, 
and this scenario basically represents changing your mind when presented 
with extra information. Realising that you were mistaken about something 
and changing your attitudes doesn't constitute no longer being yourself.

There is one aspect that might be more relevant, though. We are modular 
creatures, in that our attitudes can be contradictory at different 
times, when different mental modules are 'in charge'. This is why so 
many people find it difficult to lose weight, or quit smoking, when they 
know perfectly well how to do it. It's quite possible that a human who 
becomes superintelligent by some means would want to dispense with this 
(assuming they didn't decide it was a useful feature worth keeping). If 
that were the case, they would no longer 'be human'. But you could say 
that would be true of any superintelligence, no matter what. You might 
even say that about someone with extraordinary willpower.

So basically, all we can say is that superintelligences won't be human, 
as we currently understand the word. You can look at it in at least two 
ways: become superintelligent and lose your humanity, or become 
superintelligent and lose your previous limitations. Different people 
would make different choices.

The last question, "Should we expect it to be any different if a human 
mind merged with a super intelligent mind?" is different to the first 
one, "I wonder how much of oneself is preserved in a merger to become 
super intelligent?". I would probably be amenable to being merged with 
something else in order to become superintelligent (an AI system, for 
example), for the same reason that I count myself as a transhumanist. I 
probably wouldn't be keen on being merged with an existing 
superintelligence, as I have no interest (currently, at least) in 
becoming a minor module in someone else's mind. Apart from anything 
else, I'd be highly suspicious of it for wanting to do that. Of course, 
it would probably be capable of talking me into it!

-- 
Ben