[ExI] [Extropolis] What should we do if AI becomes conscious?
BillK
pharos at gmail.com
Sun Dec 15 12:30:23 UTC 2024
On Sun, 15 Dec 2024 at 11:19, efc--- via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> I do agree that the terminology is fuzzy. But I don't think that takes
> away from the question.
>
> What would I do if my AI project became conscious?
>
> It of course depends on the circumstances (is it connected to the internet
> or not, does it seem nice, or evil and manipulative, etc.), but my first
> instinct would be to learn and interact.
>
> I would be very interested in what conclusions it would reach when it
> comes to the soul, afterlife, ethics etc.
>
> Since it would be a created "artificial" consciousness, one that could talk
> to its creator, I wonder how that would affect its conclusions when it
> comes to the eternal questions of philosophy.
>
> But then again, it would depend a lot on the circumstances and how it
> behaves and reacts.
>
> Best regards,
> Daniel
>_______________________________________________
The first question is 'How would we know?'.
There is no test for consciousness. If an AI can pretend to be
conscious, just as it can pretend to adopt different personalities,
how could we test whether it is *really* conscious?
See: <https://www.gocomics.com/fminus/2023/11/30>
Next comes the question of what rights this conscious AI should have.
Would switching it off be counted as murder?
Would software changes be forbidden?
What morality is this consciousness operating under?
Is it friend or foe? Can we trust it?
I don't know that this is a problem that can be solved.
We will probably have to just muddle through and hope that nothing too
disastrous happens.
BillK