[ExI] Seemingly Conscious AI Is Coming

Adrian Tymes atymes at gmail.com
Wed Sep 17 19:09:52 UTC 2025


That's not what I answered.  You say, "If we don't see that" - "that"
being whichever of all our descriptions of stuff in the brain it is that
has redness - "engineered into an AI system, we will know it is lying."
An AI, or any potentially conscious system, does not have to know what
redness is in order to not lie.  It could also use different words that
amount to the same description.

I am reminded of a game, Zendo, played with pieces of different colors and
sizes.  (Wikipedia says it was republished in 2017, but I remember the
original version.)  One person came up with a set of rules for valid
patterns, and the other players had to guess the rules.  The first person
gave one example of something that fit the pattern, and one example that
did not.  Other players then took turns building arrangements of pieces
and asking whether they fit the pattern.

The most relevant bit: when a guesser proposed rules, the person who came
up with the rules had to produce an example that either matched their own
rules but not the guesser's, or matched the guesser's rules but not their
own.  If they could not, the guesser was correct.

The exact words used did not matter.  There was no room for any ineffable
difference.  Either there was at least one example on which the two sets
of rules disagreed, or the two sets of rules were identical.
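To put that in concrete terms, here is a minimal sketch in Python.  The
finite universe of examples and the two rule predicates are my own
hypothetical illustration, not anything from the game's published rules:

    def find_counterexample(master_rule, guessed_rule, universe):
        # Return an example the two rules classify differently, or None.
        # master_rule, guessed_rule: predicates mapping an example to a bool.
        # universe: every example under consideration (finite, for this sketch).
        for example in universe:
            if master_rule(example) != guessed_rule(example):
                return example  # fits one rule but not the other
        return None  # no distinguishing example: the rules agree everywhere

    # Hypothetical: two differently-worded rules that agree on every example.
    universe = range(100)
    rule_a = lambda n: n % 2 == 0        # "even numbers"
    rule_b = lambda n: (3 * n) % 2 == 0  # "numbers whose triple is even"

    print(find_counterexample(rule_a, rule_b, universe))  # prints None

If the search comes up empty, the guesser wins, however differently the
two rules happened to be worded.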

I'm thinking the same sort of thing may apply here.  An AI might come up
with a definition of "redness" that sounds nothing like what you think
redness is - and yet, for every measurable application, the AI's definition
works out to the same thing as your definition.  If that happens, the AI
would not be lying about what redness is, despite sounding like it's saying
something totally different.

Alas, I am not aware of any online implementation of this game.  If
you, I, and someone else (maybe Spike - the game works best with at least 3
players) were ever to meet up IRL, and I had the pieces on me (I think I
still have my set somewhere), I might take you through a few rounds, making
sure you got at least one round each as rule-setter and as guesser
("master" and "student", as the rules were themed on Hollywoodish
depictions of Zen).


On Wed, Sep 17, 2025 at 1:30 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> Hi Adrian,
>
> Yes, that is what I'm saying.  An AI can be engineered to represent red
> as 'red' or it can be engineered to use '0xFF0000' or anything else we care
> to engineer it to represent information with, including actual redness,
> once we know what it is that has a redness quality.
>
> Or are you claiming or thinking that information like an abstract word,
> engineered to be substrate independent of what is representing it (i.e. it
> needs a dictionary transducer to know what it means) is phenomenally like
> something?
>
>
>
>
> On Wed, Sep 17, 2025 at 11:14 AM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>>
>> On Wed, Sep 17, 2025 at 12:30 PM Brent Allsop via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>> > Of course you can engineer an AI system to lie, but I've never met one
>> > that does.
>>
>> I've seen quite a few.  Grok, for instance, was famously tweaked to
>> lie on certain topics not so long ago.
>>
>> > And once we know which of all our descriptions of stuff in the brain
>> > it is that has redness, if we don't see that engineered into an AI system,
>> > we will know it is lying.
>>
>> No?  An AI can experience things differently than humans, say so, and
>> be telling the truth.