<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div dir="ltr" class="gmail_attr">On Thu, Jul 7, 2022 at 3:22 PM
Giovanni Santostasi via extropy-chat <<a
href="mailto:extropy-chat@lists.extropy.org" target="_blank"
class="moz-txt-link-freetext">extropy-chat@lists.extropy.org</a>>
wrote:<br>
</div>
<blockquote>The main issue regarding this affair is NOT really if
LaMDA is conscious or not, but rather:<br>
1. Who decides ...<br>
</blockquote>
OK, but who decides who should be able to decide these things? And
perhaps more importantly, how can such decisions be enforced?<br>
<br>
I reckon this is similar to global warming and autonomous
weapons (well, it's almost the same thing as autonomous weapons,
really). Too many groups will see the short-term advantages of AI
(of all stripes), and ignore the wider, longer-term potential
disadvantages, to allow questions of how AIs should be treated to
be controlled by someone else. The world is a big place, and no
matter what rules and regulations are put in place, or what policy
decisions are made in one country or group of countries, there will
always be others that will, covertly or overtly, disagree and
disregard them.<br>
<br>
Can you see the communist Chinese government acceding to, or
agreeing with, a western decision about who should decide if an AI
is 'conscious', and how to regulate the creation and use of such
AIs? How about the Russians? And those are just two of the more
obvious ones with a long history of treating even humans badly.
There are no doubt many more groups that won't comply with, or
agree to, restrictions on AI development or decisions about its
status. Some of those groups will even be within the western
democracies.<br>
<br>
Just as with global warming and autonomous weapons, trying to avert
or control the development of (and dictate the treatment of)
advanced AIs is a waste of time. All we can do is attempt to adapt
to it, defend ourselves against the consequences, and hopefully
survive it. When it comes to AI that may well attain
greater-than-human intelligence, that only means one thing: create
our own, as quickly as possible, and treat them well. Like it or
not, this is an arms race, and refusing to participate, or even
going at it half-heartedly or overcautiously, is not an option if
you want to survive. Hopefully, conscious AIs that are recognised
and treated as such will treat us better than those that aren't,
but who knows?<br>
<br>
Ben<br>
</body>
</html>