<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
On 18/12/2024 19:29, BillK wrote:<br>
<blockquote type="cite"
cite="mid:mailman.16.1734550169.650.extropy-chat@lists.extropy.org">
<pre><pre class="moz-quote-pre" wrap="">On Mon, 16 Dec 2024 at 11:46, Ben Zaiboc via extropy-chat
<a class="moz-txt-link-rfc2396E"
href="mailto:extropy-chat@lists.extropy.org"
moz-do-not-send="true"><extropy-chat@lists.extropy.org></a> wrote:
</pre><blockquote type="cite" style="color: #007cff;"><pre
class="moz-quote-pre" wrap="">That's not really relevant. The question "is it conscious?" is useless. If we can't even answer it about other humans, it's pointless asking it about AIs.
The really relevant question is: "Does it behave as if it was conscious?"
We infer from their behaviour whether or not other people are conscious, and act accordingly. The same will apply to AIs.
Ben
</pre></blockquote><pre class="moz-quote-pre" wrap="">
Even the LLM AIs that we have today can simulate human behaviour quite
effectively. They can chat freely, discuss problems, say that they
enjoy discussions and act as a friendly companion. People are already
using them as therapists and claiming that ChatGPT is their best
friend.
In effect, these LLMs are lying to us. They don't 'enjoy' our
discussions. They don't appreciate our feeling unhappy about an
issue. They just manipulate symbols to provide a meaningful
response. In other words, AIs don't have feelings.
This behaviour in humans would indicate some of the traits of being a
psychopath. e.g. charm and manipulation, lack of remorse or guilt if
advice goes wrong, lying easily, etc.
It is simplistic to say that we should just treat an AI as though it was
a conscious human, though anthropomorphising is a known human
weakness. Pet owners often say that their cat, dog, etc. understands
every word they say. 🙂
</pre></pre>
</blockquote>
<br>
I'm not seeing much convincing simulation of human behaviour from
current LLMs. Yes, they can create conversations that tie together
many pieces of data, often sensibly but sometimes not (and which
seem to me to contain many hints of carefully pre-programmed
responses to questions about their ability to actually think,
which I suspect is there to make people more comfortable with
them). But they never, as far as I'm aware, display any sign of
independent thought or awareness. I've never, for instance, seen an
example of an AI arguing with or contradicting someone - something
that humans do all the time - and they don't seem to have any
memory that lasts between sessions. They don't spontaneously ask
questions or offer opinions; in fact they don't spontaneously do
anything, as far as I know. They don't seem to have anything
analogous to emotions (which is understandable, as they don't have
bodies). All of which makes me think that LLMs are a long way from
behaving like human beings, or anything that can be interpreted as
conscious.<br>
<br>
I'm not saying that we should just treat an AI as though it was a
conscious human; I'm saying we should treat an AI as though it was
a conscious human to the degree that it displays the
characteristics of one. I'm tempted to say "You'll know it when you
see it", because it will be blatantly obvious. We still won't know
if they are 'really conscious', but we will know that we should be
treating them as such. This echoes the idea that we can start
thinking about robot rights when robots start asking for them.<br>
<br>
I agree that our tendency to anthropomorphise is a problem, but it's
easily overcome. How difficult would it be to prove that a cat
doesn't <i>really</i> understand almost everything we say to it? I
do this almost every day!<br>
<br>
<pre class="moz-signature" cols="72">--
Ben</pre>
</body>
</html>