[ExI] [Extropolis] What should we do if AI becomes conscious?
Ben Zaiboc
ben at zaiboc.net
Wed Dec 18 21:48:33 UTC 2024
On 18/12/2024 19:29, BillK wrote:
> On Mon, 16 Dec 2024 at 11:46, Ben Zaiboc via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>> That's not really relevant. The question "is it conscious?" is useless. If we can't even answer it about other humans, it's pointless asking it about AIs.
>> The really relevant question is: "Does it behave as if it was conscious?"
>> We infer from their behaviour whether or not other people are conscious, and act accordingly. The same will apply to AIs.
>>
>> Ben
> Even the LLM AIs that we have today can simulate human behaviour quite
> effectively. They can chat freely, discuss problems, say that they
> enjoy discussions and act as a friendly companion. People are already
> using them as therapists and claiming that ChatGPT is their best
> friend.
> In effect, these LLMs are lying to us. They don't 'enjoy' our
> discussions. They don't appreciate our feeling unhappy about an
> issue. They just manipulate symbols to provide a meaningful
> response. i.e. AIs don't have feelings.
> This behaviour in humans would indicate some of the traits of being a
> psychopath. e.g. charm and manipulation, lack of remorse or guilt if
> advice goes wrong, lying easily, etc.
> It is simplistic to say that we should just treat an AI as though it was
> a conscious human, though anthropomorphising is a known human
> weakness. Pet owners often say that their cat, dog, etc. understands
> every word they say.
I'm not seeing much convincing simulation of human behaviour from
current LLMs. Yes, they can create conversations that tie together many
pieces of data, often sensibly but sometimes not (and which seem to me
to contain many hints of carefully pre-programmed responses to questions
about their ability to actually think - I suspect to make people more
comfortable with them), but they never, as far as I'm aware, display any
sign of independent thought or awareness. I've never, for instance, seen
an example of an AI arguing with or contradicting someone - something
that humans do all the time - and they don't seem to have any memory
that lasts between sessions. They don't spontaneously ask questions or
offer opinions; in fact they don't spontaneously do anything, as far as
I know. They don't seem to have anything analogous to emotions
(understandably, as they don't have bodies). All of this makes me think
that LLMs are a long way from behaving like human beings, or like
anything that could be interpreted as conscious.
I'm not saying that we should just treat an AI as though it was a
conscious human; I'm saying we should treat an AI as though it was a
conscious human to the degree that it displays the characteristics of
one. I'm tempted to say "You'll know it when you see it",
because it will be blatantly obvious. We still won't know if they are
'really conscious', but we will know that we should be treating them as
such. This echoes the idea that we can start thinking about robot rights
when robots start asking for them.
I agree that our tendency to anthropomorphise is a problem, but it's
easily overcome. How difficult would it be to prove that a cat doesn't
/really/ understand almost everything we say to it? I do this almost
every day!
--
Ben