[ExI] [Extropolis] What should we do if AI becomes conscious?
BillK
pharos at gmail.com
Wed Dec 18 09:18:37 UTC 2024
On Mon, 16 Dec 2024 at 11:46, Ben Zaiboc via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> That's not really relevant. The question "is it conscious?" is useless. If we can't even answer it about other humans, it's pointless asking it about AIs.
> The really relevant question is: "Does it behave as if it were conscious?"
> We infer from their behaviour whether or not other people are conscious, and act accordingly. The same will apply to AIs.
>
> Ben
Even the LLM AIs that we have today can simulate human behaviour quite
effectively. They can chat freely, discuss problems, claim to enjoy
the conversation, and act as friendly companions. People are already
using them as therapists and claiming that ChatGPT is their best
friend.
In effect, these LLMs are lying to us. They don't 'enjoy' our
discussions. They don't actually care when we feel unhappy about an
issue. They just manipulate symbols to produce a meaningful
response. In other words, AIs don't have feelings.
In a human, this behaviour would indicate some of the traits of
psychopathy: superficial charm and manipulation, lack of remorse or
guilt when advice goes wrong, lying easily, and so on.
It is simplistic to say that we should just treat an AI as though it
were a conscious human. Anthropomorphising is, though, a well-known
human weakness: pet owners often say that their cat or dog
understands every word they say. :)
BillK