[ExI] Seemingly Conscious AI Is Coming

Brent Allsop brent.allsop at gmail.com
Wed Sep 17 17:29:02 UTC 2025


Hi Adrian,

Yes, that is what I'm saying.  An AI can be engineered to represent red with
the word 'red', with the code '0xFF0000', or with anything else we care to
use to represent that information, including actual redness, once we know
which of our descriptions of stuff picks out the thing that has a redness
quality.

Or are you claiming, or thinking, that information like an abstract word,
engineered to be substrate independent of whatever is representing it (i.e.,
it needs a dictionary transducer to know what it means), is phenomenally
like something?
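The distinction being drawn here can be sketched in code. This is a hypothetical illustration (all names are invented for the example): two different tokens stand in for red, and neither carries any intrinsic quality; each requires an external dictionary (the "transducer") before it means anything at all.

```python
# Hypothetical sketch: two substrate-independent representations of red.
# The tokens themselves are arbitrary; meaning comes only from the lookup.

representations = {
    "word": "red",       # an abstract word
    "hex": 0xFF0000,     # an RGB integer
}

# The "dictionary transducer": maps each substrate-specific token
# to the same abstract referent.
dictionary = {
    "red": "RED_REFERENT",
    0xFF0000: "RED_REFERENT",
}

def interpret(token):
    """Without this lookup, the token is just a meaningless symbol."""
    return dictionary[token]

# Different substrates, same interpreted meaning:
assert interpret("red") == interpret(0xFF0000)
```

The point of the sketch is that swapping 'red' for 0xFF0000 changes nothing so long as the dictionary is updated to match, which is what makes the representation substrate independent.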

On Wed, Sep 17, 2025 at 11:14 AM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Wed, Sep 17, 2025 at 12:30 PM Brent Allsop via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> > Of course you can engineer an AI system to lie, but I've never met one
> that does.
>
> I've seen quite a few.  Grok, for instance, was famously tweaked to
> lie on certain topics not so long ago.
>
> > And once we know which of all our descriptions of stuff in the brain it
> is that has redness, if we don't see that engineered into an AI system, we
> will know it is lying.
>
> No?  An AI can experience things differently than humans, say so, and
> be telling the truth.
>
>
