[ExI] Seemingly Conscious AI Is Coming

Jason Resch jasonresch at gmail.com
Wed Sep 17 13:26:55 UTC 2025


On Wed, Sep 17, 2025, 9:10 AM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Hi Jason,
> But AI systems aren't yet designed to be like anything (i.e. they are
> engineered to use words like 'red', instead of qualities like redness to
> represent knowledge of red things), right?
> You define something that isn't like anything to be conscious, rather than
> just intelligent?
>


We don't need to design what it is like to be the system in order for it to be like something, and for that something to feel different from something else.

That a Tesla autopilot can distinguish red traffic lights from green ones,
and react in different ways when it sees them, requires that "red traffic
light" be distinguishable from "green traffic light". And if these two
inputs are distinguishable, they cannot feel the same to the autopilot
system. Thus they must feel different.

So while we cannot know, from our human perspective, how they feel to the
autopilot, we can deduce that they must somehow feel different, and
furthermore that the autopilot must feel something.

Jason



>
>
>
> On Wed, Sep 17, 2025 at 7:00 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>>
>> On Wed, Sep 17, 2025, 7:26 AM BillK via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> Seemingly Conscious AI Is Coming
>>> Sep 15, 2025 Mustafa Suleyman
>>>
>>> Debates about whether AI truly can be conscious are a distraction.
>>> What matters in the near term is the perception that they are – and
>>> why the temptation to design AI systems that foster this perception
>>> must be resisted.
>>>
>>> <
>>> https://www.project-syndicate.org/commentary/seemingly-conscious-ai-urgent-threat-tech-industry-must-address-by-mustafa-suleyman-2025-09
>>> >
>>> Quote:
>>> An SCAI would be capable of fluently using natural language,
>>> displaying a persuasive and emotionally resonant personality. It would
>>> have a long, accurate memory that fosters a coherent sense of itself,
>>> and it would use this capacity to claim subjective experience (by
>>> referencing past interactions and memories). Complex reward functions
>>> within these models would simulate intrinsic motivation, and advanced
>>> goal setting and planning would reinforce our sense that the AI is
>>> exercising true agency.
>>>
>>> All these capabilities are already here or around the corner. We must
>>> recognize that such systems will soon be possible, begin thinking
>>> through the implications, and set a norm against the pursuit of
>>> illusory consciousness.
>>> ------------------------
>>>
>>> This is the "consciousness" problem becoming real.
>>> If an AI successfully appears to be conscious, how can we test whether
>>> it is really a conscious creation?
>>> BillK
>>>
>>
>> I believe that anything that is reliably responsive to its environment is
>> conscious, as I argue here (on pages 23 - 41):
>>
>>
>> https://drive.google.com/file/d/1VDBVueSxWCQ_J6_3aHPIvjtkeBJYhssF/view?usp=drivesdk
>>
>>
>> Jason
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>

