<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 15/12/2025 11:47, John Clark wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAJPayv0CiZfuXgmsPe8JV4YiMXs4hQAj3AG_EAFNS4B9Bs6b8w@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div dir="ltr">
<div class="gmail_default"
style="font-family:arial,helvetica,sans-serif"><span
style="font-family:Arial,Helvetica,sans-serif">On Sun, Dec
14, 2025 at 7:03 PM Ben Zaiboc via extropy-chat <<a
href="mailto:extropy-chat@lists.extropy.org"
moz-do-not-send="true" class="moz-txt-link-freetext">extropy-chat@lists.extropy.org</a>>
wrote:</span></div>
</div>
<div class="gmail_quote gmail_quote_container"><font
face="georgia, serif" size="4"><i><br>
</i></font>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div><span><font size="4" face="georgia, serif"><i> </i></font></span><span><font
size="4" face="DejaVu Sans"><i><span
class="gmail_default"
style="font-family:arial,helvetica,sans-serif">> </span>depending
on what 'replicating' means. You could say that
replicating the physics of weather systems has never
happened. In a sense, that is true, although it's
irrelevant, because the aim is to model weather
systems inside a computer. </i><br>
</font></span></div>
</blockquote>
<div><br>
</div>
<div><font size="4" face="tahoma, sans-serif"><b>Yes, but
there is a difference.<span class="gmail_default"
style=""> When we model a hurricane on a computer we
are modeling something concrete, but an AI running on
that same computer is modeling something much more
abstract: intelligence. As you point out, when a
calculator adds 2+2, the integer 4 it produces is also
abstract, but that 4 is just as real as the 4 my
biological brain produces when it adds 2+2.</span> There
is no difference between <span class="gmail_default"
style="font-family:arial,helvetica,sans-serif">"</span>real<span
class="gmail_default"
style="font-family:arial,helvetica,sans-serif">"</span>
arithmetic <span class="gmail_default"
style="font-family:arial,helvetica,sans-serif">and</span>
<span class="gmail_default"
style="font-family:arial,helvetica,sans-serif">"</span>simulated<span
class="gmail_default"
style="font-family:arial,helvetica,sans-serif">"</span>
arithmetic.<span class="gmail_default"
style="font-family:arial,helvetica,sans-serif"> </span></b></font></div>
<div><br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span
style="font-size:large"><i style=""><font
face="georgia, serif"><span class="gmail_default"
style="font-family:arial,helvetica,sans-serif">> </span>That's
something we can do pretty well these days, and is
extremely useful.</font></i></span></blockquote>
<div><br>
</div>
<div><font size="4" face="tahoma, sans-serif"><b>That is quite
an understatement<span class="gmail_default" style="">,</span><span
class="gmail_default" style=""> and AIs are useful for
one reason only: they are intelligent. As for
consciousness, Gemini, Claude and GPT certainly act as
if they're conscious, as do my fellow human beings,
but if they're not and are only pretending to be so,
well... that's their problem, not mine. </span> </b></font></div>
<div><br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div><span><font size="4" face="georgia, serif"><i style=""><span
class="gmail_default"
style="font-family:arial,helvetica,sans-serif">> </span>Simulations
of things produce simulated results, so a
(sufficiently good) simulation of a rainstorm will
produce simulated wetness. In the same way, a
simulation, in a general-purpose computer, of
natural excitable cells will produce simulated EEG
and MEG</i></font></span></div>
</blockquote>
<div><br>
</div>
<font size="4" face="tahoma, sans-serif"><b>And why do
scientists believe that EEG and MEG have anything to do
with consciousness? Because of behavior; it always boils
down to behavior. When EEG and MEG devices produce certain
patterns, a consistent correlation is observed between them
and the sounds produced by the subject's mouth reporting
particular conscious experiences. AIs can also produce
sounds<span class="gmail_default"
style="font-family:arial,helvetica,sans-serif"> </span>even
though they don't have a mouth. I see no reason why we
should believe a human if he says he's conscious but
disbelieve an AI if it insists it's conscious, especially
if it makes its case with more eloquence and intelligence
than the human did. </b></font></div>
<div class="gmail_quote gmail_quote_container"><font
face="tahoma, sans-serif" size="4"><b><br>
</b></font></div>
<font face="tahoma, sans-serif"><b><font size="4">Actually now
that I think of it<span class="gmail_default" style="">,</span>
maybe I'm not conscious but you are. Maybe what I think of
as "consciousness" is just a weak pale thing compared with
the magnificent experience you have <span
class="gmail_default" style="">that</span> I'm incapable
of imagining.<span class="gmail_default" style=""> </span></font><span
style="font-size:large">Maybe you're a supernova but I'm
just a firefly. Or maybe it's the other way around.
Neither of us will ever know. </span></b></font>
<div><span style="font-size:large"><font
face="tahoma, sans-serif"><b><br>
</b></font></span></div>
<div><span style="font-size:large"><font
face="tahoma, sans-serif" style=""><b><span
class="gmail_default" style="">John K Clark</span> </b></font></span></div>
</div>
</blockquote>
<br>
I agree that behaviour is the only realistic way we have for judging
if some other agent is conscious or not. I would tend to use the
questions that others ask (and the circumstances under which they
ask them) to decide how conscious and how intelligent they are.<br>
<br>
As far as I know, no AI has yet spontaneously started asking any
questions at all. Granted, they have access to enormous quantities
of information already, but there are some questions that can't be
answered by what's on the internet and in their training sets.<br>
<br>
When we see signs of genuine curiosity, a desire to know things for
their own sake rather than in order to respond to an inquiry, then I
reckon that will be behaviour that is hard to explain without
invoking some kind of consciousness. I don't know how that could be
faked or happen accidentally, and it would be a better indicator, in
my opinion, than the answer to a direct or implied question about
their consciousness.
<pre class="moz-signature" cols="72">--
Ben</pre>
<br>
</body>
</html>