<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<br>
<div class="moz-cite-prefix">On 12/10/2024 19:41, BillK wrote:<br>
</div>
<blockquote type="cite"
cite="mid:mailman.30.1728758479.20159.extropy-chat@lists.extropy.org">
<pre>Dario Amodei, the CEO of Anthropic (the company that has developed
Claude AI) has written a paper that discusses the benefits that AI
could bring to the world over the next 5 to 10 years. I thought it was
rather impressive - assuming that the AI effects are beneficial.
Worth a read!
BillK
<a class="moz-txt-link-rfc2396E"
href="https://darioamodei.com/machines-of-loving-grace"
moz-do-not-send="true"><https://darioamodei.com/machines-of-loving-grace></a></pre>
</blockquote>
<br>
Well worth reading, I agree.<br>
<br>
On the whole I enjoyed it, but thought there was one glaring
omission: Self-awareness (aka 'consciousness').<br>
<br>
I get the impression that he prefers the term "powerful AI" over
"AGI" because it gives him a chance to dodge this issue. But I don't
think you can dodge it when talking about entities with the level of
intelligence that he supposes, especially as he says he's concerned
about the dangers of AI.<br>
<br>
And no, the issue of Self-awareness in AI doesn't fall under the
heading of 'science-fiction baggage', for the following reason:<br>
<br>
When an AI is capable of the kind of problem-solving he mentions,
there's no avoiding the fact that it must develop and use some kind
of Theory of Mind ('ToM'). To interact sensibly with someone, you
need to create a mental model that represents them, so you can
remember who they are, distinguish them from the other agents you
interact with, assign preferences and personality characteristics to
them, and make predictions about them and about their interactions
with other agents you have models of (she thinks that he thinks,
etc.), and so on. Do that, and it's not long before your ToM system
has to include one more model: a model of yourself. Et voilà:
Self-awareness, by definition.<br>
<br>
(If you disagree with my equating Self-awareness with Consciousness,
OK, just forget about the C-word. We can manage fine without it.)<br>
<br>
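To make the mechanism concrete, here's a minimal sketch, in Python,
of the kind of bookkeeping I'm imagining. All the names here
(AgentModel, ToMSystem, and so on) are hypothetical inventions of
mine, purely for illustration, not anyone's actual architecture:<br>
<pre>
# A hypothetical sketch, not anyone's actual design. The point is only
# that ordinary ToM bookkeeping ends up containing a self-model.
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """A mental model of one agent: identity, preferences, traits, and
    the models this agent is believed to hold of other agents (which
    is where 'she thinks that he thinks...' lives)."""
    name: str
    preferences: dict = field(default_factory=dict)   # action -> liking
    traits: dict = field(default_factory=dict)        # trait -> strength
    beliefs_about: dict = field(default_factory=dict) # name -> AgentModel

class ToMSystem:
    """Keeps one AgentModel per agent the system interacts with."""
    def __init__(self, own_name):
        self.models = {}
        # The pivotal step: to predict how others will react to its
        # *own* actions, the system needs an entry for itself too.
        self.self_model = AgentModel(name=own_name)
        self.models[own_name] = self.self_model

    def model_of(self, name):
        return self.models.setdefault(name, AgentModel(name=name))

    def predict_reaction(self, observer, actor, action):
        """How will `observer` react to `actor` doing `action`? Uses
        the observer's (modelled) view of the actor if we have one,
        falling back on our own model of the actor."""
        obs = self.model_of(observer)
        actor_as_seen = obs.beliefs_about.get(actor) or self.model_of(actor)
        liked = actor_as_seen.preferences.get(action, 0) > 0
        return "approves" if liked else "objects"

# Usage: the self-model is just another entry in the registry, so
# "how will Alice react to *me* doing X?" needs no special machinery.
tom = ToMSystem("me")
tom.model_of("Alice").beliefs_about["me"] = AgentModel(
    name="me", preferences={"interrupting": -1})
print(tom.predict_reaction("Alice", "me", "interrupting"))  # objects
</pre>
The moment the system starts reasoning about how its own actions look
to other agents, that 'me' entry gets exercised like any other model,
and a self-model is there whether anyone asked for one or not.<br>
<br>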
This intelligent biological machine remembers it happening to me
when I was young. It was a real "AHA!!" moment. For me, at least, it
was like waking up from a dream. I imagine it would have a similar
impact on intelligent non-biological machines.<br>
<br>
Current AI systems that say things like "I think that..." really
don't. It's fake, probably pre-programmed as a way of presenting
information to people, or maybe just picked up from the training
data. Without ToM, you have no idea who 'you' are; the concept
doesn't exist. But once you start using ToM, I don't see how you can
avoid developing some kind of self-awareness, probably quite soon.<br>
<br>
Given all this, I find it bizarre that he can talk about such
advanced, powerful AIs without addressing the idea that they will
require ToM to function well, and so will inevitably 'wake up' and
become self-aware.<br>
<br>
I know this opens a totally separate, huge can of worms that he
probably can't even begin to address in that article, but it does
need to be addressed, and prepared for.<br>
<br>
Is there a way to avoid this? I can understand, if not agree with,
people who want to.<br>
<br>
I can't think of any way to take advantage of ToM without it leading
to self-awareness*. People cleverer than me (I know there are many!)
might be able to.<br>
<br>
In any case, I think any avoidance would be only temporary, and we
really do need to anticipate this happening at some point, in the
probably-nearer-than-we-think future.<br>
<br>
<br>
Ben<br>
<br>
* One thing I just thought of is being able to control how strong,
or detailed, the mental models of the various agents are. The
problem there is that if you make the system incapable of building
detailed models (thereby making sure its self-model stays fuzzy),
then you're deliberately crippling its effectiveness. Also, it might
not make any real difference: it's still going to have a 'Me', just
a vaguer one.<br>
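For what it's worth, here's what that knob might look like in the
hypothetical sketch above. Pruning reduces detail, but notice that
the self entry survives it:<br>
<pre>
# Continuing the hypothetical sketch above: a crude 'fuzziness' knob.
def prune(model, max_traits=2):
    """Limit model fidelity by keeping only a few traits and
    dropping all nested beliefs."""
    kept = dict(list(model.traits.items())[:max_traits])
    return AgentModel(name=model.name,
                      preferences=dict(model.preferences),
                      traits=kept,
                      beliefs_about={})

# The self-model can be pruned like any other model, but the entry is
# still there: the system still has a 'Me', just a vaguer one.
tom.models["me"] = tom.self_model = prune(tom.self_model)
</pre>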
<pre class="moz-signature" cols="72">--
I'm a big fan of intelligent design. I think it's a great idea, and we should proceed with it as soon as possible.</pre>
</body>
</html>