<div dir="auto"><div><br><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sun, Oct 12, 2025, 4:02 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 11/10/2025 21:50, BillW wrote:<br>
> agree with EFC/Daniel. People-produced things will still be desired. An AI can give us acceptable music in the style of any composer, but can it create new forms and sounds? Remains to be seen.<br>
<br>
I suppose it depends on what you mean by 'people'.<br>
<br>
If there's a conscious intention behind the work, then it will probably <br>
be different to something produced by a purely automatic system, like <br>
our current LLM-based AIs.<br>
<br>
Future AGIs will trend towards being 'people' in their own right, and <br>
who knows what they'll be capable of. Conscious intention will probably <br>
be one of their attributes, at some point. (And who knows, they may <br>
display other 'emergent properties' that we haven't seen or thought of <br>
before. We often talk about 'consciousness', maybe there are other, even <br>
better things waiting to happen. We would probably want to call that <br>
'super-consciousness', much like monkeys might imagine super-monkeys as <br>
having 'super-bananas'. You can't conceive of the things you can't <br>
conceive).<br>
<br>
AGIs should lead to Artificial Super-Intelligences. ASIs will become <br>
better than biological humans at everything, without exception <br>
(everything they decide to turn their hands to, anyway. They would <br>
probably be capable of being better biological humans, but I doubt they <br>
would want to. They might decide to create some, though). If they ever <br>
come to exist. I think and hope they will, otherwise we will have failed <br>
as an intelligent species, and will go extinct (as all (evolved) <br>
biological things do) without any successors.<br>
<br>
If we go the uploading route, then we will become the ASIs ourselves.<br>
<br>
Another possibility might be to redesign ourselves, but I don't see that <br>
happening without the help of ASIs, or at least AGIs. Anyone who's <br>
studied biology in any depth will realise it's hellish complicated; I <br>
doubt that we can understand enough of it on our own to be really useful.</blockquote></div></div><div dir="auto"><br></div><div dir="auto">I wonder how much of oneself is preserved in a merger to become super intelligent, when acting super intelligently means acting in a manner that the super intelligence judges to be optimal.</div><div dir="auto"><br></div><div dir="auto">So what about when the most intelligent action is in conflict with the original person's whims and quirks, which made them a unique human?</div><div dir="auto"><br></div><div dir="auto">If the whims take precedence, then this entity is no longer acting super intelligently. If the whims are ignored, then the entity is no longer acting like the human.</div><div dir="auto"><br></div><div dir="auto">Think of merging an ant mind and a human mind. The ant part of the mind may say: I have an urge to forage; let's do that. The human mind puts the ant mind to rest: we have grocery stores and a full fridge, so there's no need to forage. And we would find that the ant component contributes very little to what the merged mind decides to do.</div><div dir="auto"><br></div><div dir="auto">Should we expect it to be any different if a human mind merged with a super intelligent mind?</div><div dir="auto"><br></div><div dir="auto">Jason </div></div>