[ExI] Free Trade

Jason Resch jasonresch at gmail.com
Mon Oct 13 13:22:58 UTC 2025


On Sun, Oct 12, 2025, 4:02 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 11/10/2025 21:50, BillW wrote:
> > I agree with EFC/Daniel. People-produced things will still be desired.
> > An AI can give us acceptable music in the style of any composer, but can
> > it create new forms and sounds? That remains to be seen.
>
> I suppose it depends on what you mean by 'people'.
>
> If there's a conscious intention behind the work, then it will probably
> be different to something produced by a purely automatic system, like
> our current LLM-based AIs.
>
> Future AGIs will trend towards being 'people' in their own right, and
> who knows what they'll be capable of. Conscious intention will probably
> be one of their attributes, at some point. (And who knows, they may
> display other 'emergent properties' that we haven't seen or thought of
> before. We often talk about 'consciousness', maybe there are other, even
> better things waiting to happen. We would probably want to call that
> 'super-consciousness', much like monkeys might imagine super-monkeys as
> having 'super-bananas'. You can't conceive of the things you can't
> conceive).
>
> AGIs should lead to Artificial Super-Intelligences. ASIs will become
> better than biological humans at everything, without exception
> (everything they decide to turn their hands to, anyway. They would
> probably be capable of being better biological humans, but I doubt they
> would want to. They might decide to create some, though). If they ever
> come to exist. I think and hope they will, otherwise we will have failed
> as an intelligent species, and will go extinct (as all (evolved)
> biological things do) without any successors.
>
> If we go the uploading route, then we will become the ASIs ourselves.
>
> Another possibility might be to redesign ourselves, but I don't see that
> happening without the help of ASIs, or at least AGIs. Anyone who's
> studied biology in any depth will realise it's hellish complicated, I
> doubt that we can understand enough of it on our own to be really useful.
>

I wonder how much of oneself is preserved in a merger to become super
intelligent, when acting super intelligently means acting in whatever manner
the super intelligence judges to be optimal.

So what happens when the most intelligent action conflicts with the
original person's whims and quirks, the very things that made them a unique
human?

If the whims take precedence, then this entity is no longer acting super
intelligently. If the whims are ignored, then the entity is no longer
acting like the human.

Think of merging an ant mind and a human mind. The ant part of the mind may
say: "I have an urge to forage; let's do that." The human mind puts the ant
mind to rest: "We have grocery stores and a full fridge; there's no need to
forage." And we would find that the ant component contributes very little
to what the merged mind decides to do.

Should we expect it to be any different if a human mind merged with a super
intelligent mind?

Jason