[ExI] Fwd: thought experiment part 1
Ben Zaiboc
ben at zaiboc.net
Mon Dec 15 13:15:14 UTC 2025
On 15/12/2025 11:47, John Clark wrote:
> On Sun, Dec 14, 2025 at 7:03 PM Ben Zaiboc via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>
> > depending on what 'replicating' means. You could say that
> > replicating the physics of weather systems has never happened. In
> > a sense, that is true, although it's irrelevant, because the aim
> > is to model weather systems, inside a computer.
>
> Yes, but there is a difference. When we model a hurricane on a
> computer we are modeling something concrete, but an AI running on that
> same computer is modeling something much more abstract: intelligence.
> As you point out, when a calculator adds 2+2, the integer 4 it produces
> is also abstract, but that 4 is just as real as the 4 my biological
> brain produces when it adds 2+2. There is no difference between "real"
> arithmetic and "simulated" arithmetic.
>
> > That's something we can do pretty well these days, and is
> > extremely useful.
>
> That is quite an understatement, and AIs are useful for one reason
> only: they are intelligent. As for consciousness, Gemini, Claude and
> GPT certainly act as if they're conscious, as do my fellow human
> beings, but if they're not and are only pretending to be so, well...,
> that's their problem, not mine.
>
> > Simulations of things produce simulated results, so a
> > (sufficiently good) simulation of a rainstorm will produce
> > simulated wetness. In the same way, a simulation, in a
> > general-purpose computer, of natural excitable cells will produce
> > simulated EEG and MEG.
>
> And why do scientists believe that EEG and MEG have anything to do
> with consciousness? Because of behavior; it always boils down to
> behavior. When EEG and MEG devices record certain patterns, a
> consistent correlation is observed between those patterns and the
> sounds produced by the subject's mouth reporting particular conscious
> experiences. AIs can also produce sounds, even though they don't have
> a mouth. I see no reason why we should believe a human who says he's
> conscious but disbelieve an AI that insists it's conscious, especially
> if it makes its case with more eloquence and intelligence than the
> human did.
>
> Actually, now that I think of it, maybe I'm not conscious but you are.
> Maybe what I think of as "consciousness" is just a weak, pale thing
> compared with the magnificent experience you have that I'm incapable
> of imagining. Maybe you're a supernova and I'm just a firefly. Or
> maybe it's the other way around. Neither of us will ever know.
>
> John K Clark
I agree that behaviour is the only realistic way we have of judging
whether some other agent is conscious. I would tend to use the
questions that others ask (and the circumstances under which they ask
them) to decide how conscious and how intelligent they are.
As far as I know, no AI has yet spontaneously started asking any
questions at all. Granted, they have access to enormous quantities of
information already, but there are some questions that can't be answered
by what's on the internet and in their training sets.
When we see signs of genuine curiosity, a desire to know things for
their own sake rather than in order to respond to an inquiry, then I
reckon that will be behaviour that is hard to explain without invoking
some kind of consciousness. I don't know how that could be faked or
happen accidentally, and it would be a better indicator, in my
opinion, than the answer to a direct or implied question about their
consciousness.
--
Ben