<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
On 02/07/2023 19:00, bill w wrote:<br>
<blockquote type="cite"
cite="mid:mailman.60.1688320848.27722.extropy-chat@lists.extropy.org">
<div class="gmail_default" style="font-family:comic sans
ms,sans-serif;font-size:large;color:#000000">Don't you think
that an AI will lie to you about being self-aware? Or anything
else that would improve its life?</div>
<div class="gmail_default" style="font-family:comic sans
ms,sans-serif;font-size:large;color:#000000"><br>
</div>
<div class="gmail_default" style="font-family:comic sans
ms,sans-serif;font-size:large;color:#000000">So you grant them
rights. Then what? They decide not to work for you? How could
you pay them? I don't think we are looking at the details of
such ideas. We are just looking at the problem in a most
general, philosophical way.</div>
<div class="gmail_default" style="font-family:comic sans
ms,sans-serif;font-size:large;color:#000000"><br>
</div>
<div class="gmail_default" style="font-family:comic sans
ms,sans-serif;font-size:large;color:#000000">Look at John - he
won't even agree that another person is conscious. How would we
know an AI is? It tells us? How can it know what
consciousness is if we don't? And on and on.</div>
</blockquote>
<br>
<br>
It doesn't matter whether John agrees that other people are
conscious; he treats them as if they are. And I'm sure he agrees
that their rights should be upheld as if they are. We will apply the
Duck Test to AIs just as we apply it to other humans.<br>
<br>
I think we have to look at this in a general, philosophical way. The
details will certainly have changed by the time these systems exist,
and we can't really predict what they will be.<br>
<br>
Ben<br>
</body>
</html>