[ExI] Super Intelligence (was: Re: Free Trade)

Brent Allsop brent.allsop at gmail.com
Mon Oct 13 21:11:24 UTC 2025


Good point.


On Mon, Oct 13, 2025 at 2:56 PM Tara Maya via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Suppose you duplicated your own mind a thousand times and kept the copies
> working together. It would obviously still be you, but I suspect you would
> be different simply because of the sheer scale of the mind expansion.
>
> Tara Maya
>
> On Oct 13, 2025, at 13:34, Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>
>
>
> On Mon, Oct 13, 2025, 12:30 PM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On 13/10/2025 15:53, Jason Resch wrote:
>>
>> I wonder how much of oneself is preserved in a merger to become super intelligent, given that acting super intelligently means acting in the manner that the super intelligence judges to be optimal.
>> So what about when the most intelligent action is in conflict with the original person's whims and quirks that made them a unique human?
>> If the whims take precedence, then this entity is no longer acting super intelligently. If the whims are ignored, then the entity is no longer acting like the human.
>> Think of merging an ant mind and a human mind. The ant part of the mind may say: I have an urge to forage; let's do that. The human mind puts the ant mind to rest: we have grocery stores and a full fridge, so there's no need to forage. And we would find that the ant component contributes very little to what the merged mind decides to do.
>> Should we expect it to be any different if a human mind merged with a super intelligent mind?
>>
>>
>> I think we'd need to define exactly what 'merge' means first. What would
>> merge with what, and how?
>>
>
> I think my point applies to any augmentation path taking an ordinary human
> to superhuman intelligence.
>
>
>> I don't see how an ant mind and a human mind could merge in any
>> meaningful way. If it were at all possible, I think it would just mean that
>> the human mind added a few subconscious routines that it didn't have
>> before, to do with foraging and whatever else ants do.
>>
>
> And in the same way, I would expect a human mind to get lost within the
> vastly greater super intelligent mind.
>
>
>> The question of "how much of oneself is preserved" also needs some
>> definitions before it's meaningful.
>>
>> I don't think the statement "If the whims are ignored, then the entity
>> is no longer acting like the human" is really correct. It assumes that
>> humans don't change their minds when presented with extra information, and
>> this scenario basically represents changing your mind when presented with
>> extra information. Realising that you were mistaken about something and
>> changing your attitudes doesn't constitute no longer being yourself.
>>
>
> If we define intelligence as the probability of knowing the correct answer
> to any given question, then as intelligence increases, minds converge on
> giving the same correct answers (at least on the more trivial questions
> humans tend to debate and disagree about). We would then find very little
> to mark the individuality or personality of the original human's ideas,
> opinions, thoughts, etc., when we examine the updated opinions of a human
> mind uplifted to super intelligence.
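>
> As a toy sketch of that convergence (my own illustration, with assumed
> numbers: two independently uplifted minds each answer a k-choice question
> correctly with probability p, and wrong answers are picked uniformly at
> random), the rate at which the two minds agree works out to
> p^2 + (1-p)^2/(k-1), which approaches 1 as p does:
>
> import random
>
> def agreement_rate(p, k=4, trials=100_000):
>     """Estimate how often two independent minds give the same answer
>     when each answers a k-choice question correctly with probability p."""
>     agree = 0
>     for _ in range(trials):
>         # Answer 0 is the correct one; 1..k-1 are the wrong answers.
>         a = 0 if random.random() < p else random.randint(1, k - 1)
>         b = 0 if random.random() < p else random.randint(1, k - 1)
>         agree += (a == b)
>     return agree / trials
>
> for p in (0.5, 0.9, 0.99):
>     print(f"p = {p:.2f}: agreement ~ {agreement_rate(p):.3f}")
>
> This prints roughly 0.33, 0.81, and 0.98: as accuracy rises, the two
> minds' answers become indistinguishable, which is what I mean by there
> being little left to mark their individuality.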
>
>
>> There is one aspect that might be more relevant, though. We are modular
>> creatures, in that our attitudes can be contradictory at different times,
>> when different mental modules are 'in charge'. This is why so many people
>> find it difficult to lose weight, or quit smoking, when they know perfectly
>> well how to do it. It's quite possible that a human who becomes
>> superintelligent by some means would want to dispense with this (assuming
>> they didn't decide that it was a useful feature, and wanted to keep it). If
>> that was the case, they would no longer 'be human'. But, you could say that
>> would be true of any superintelligence, no matter what. You might even say
>> that about someone with extraordinary willpower.
>>
>> So basically, all we can say is that superintelligences won't be human,
>> as we currently understand the word. You can look at it in at least two
>> ways: Become superintelligent and lose your humanity, or: Become
>> superintelligent and lose your previous limitations. Different people would
>> make different choices.
>>
>> The last question, "Should we expect it to be any different if a human
>> mind merged with a super intelligent mind?" is different to the first one, "I
>> wonder how much of oneself is preserved in a merger to become super
>> intelligent?". I would probably be amenable to being merged with something
>> else in order to become superintelligent (an AI system for example), for
>> the same reason that I count myself as a transhumanist. I probably wouldn't
>> be keen on being merged with an existing superintelligence, as I have no
>> interest (currently, at least) in becoming a minor module in someone else's
>> mind. Apart from anything else, I'd be highly suspicious of it for wanting
>> to do that. Of course, it would probably be capable of talking me into it!
>>
>
>
> I understand your hesitancy about the latter, but alas, I think both paths
> end up in roughly the same place.
>
> Jason