On Wed, Sep 17, 2025, 9:10 AM Brent Allsop via extropy-chat <extropy-chat@lists.extropy.org> wrote:

> Hi Jason,
> But AI systems aren't yet designed to be like anything (i.e., they are
> engineered to use words like 'red', rather than qualities like redness,
> to represent knowledge of red things), right?
> Do you define something that isn't like anything to be conscious,
> rather than just intelligent?

We don't need to explicitly design what it is like to be a system in order for it to be like something, and for that something to feel different from something else.

That a Tesla autopilot can distinguish red traffic lights from green ones, and react differently to each, requires that "red traffic light" be distinguishable from "green traffic light". And if these two inputs are distinguishable, they cannot feel the same to the autopilot system. Thus they must feel different.

So while we cannot know (from our human perspective) how they feel to the autopilot, we can nevertheless deduce that they must somehow feel different, and furthermore that the autopilot must feel something.
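To make the structure of that deduction concrete, here is a toy sketch in Python. It is purely illustrative: every name in it is hypothetical, and it is not meant to resemble Tesla's actual software. What it demonstrates is that any controller required to behave differently on two inputs is thereby forced to carry distinguishable internal states for those inputs.

    # Toy model: a controller that must brake on red and proceed on green.
    # All names are hypothetical; this illustrates the argument above,
    # not any real autopilot system.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InternalState:
        """Whatever the controller computes from its camera input."""
        features: tuple

    def perceive(light_color: str) -> InternalState:
        # Inputs that demand different behavior must map to internal
        # states that differ somewhere, or the behavior could not differ.
        return InternalState(features=(light_color == "red",))

    def act(state: InternalState) -> str:
        return "brake" if state.features[0] else "proceed"

    red, green = perceive("red"), perceive("green")

    # The behavioral requirement forces the states apart: identical
    # internal states would have to yield identical actions.
    assert red != green
    assert act(red) != act(green)  # "brake" vs. "proceed"

The argument runs in the same direction as those asserts: a difference in outward behavior entails a difference in inner state, whatever that state may be like from the inside.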
Jason

> On Wed, Sep 17, 2025 at 7:00 AM Jason Resch via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>
>> On Wed, Sep 17, 2025, 7:26 AM BillK via extropy-chat <extropy-chat@lists.extropy.org> wrote:
>>
>>> Seemingly Conscious AI Is Coming
>>> Sep 15, 2025, by Mustafa Suleyman
>>>
>>> Debates about whether AI truly can be conscious are a distraction.
>>> What matters in the near term is the perception that they are – and
>>> why the temptation to design AI systems that foster this perception
>>> must be resisted.
>>>
>>> <https://www.project-syndicate.org/commentary/seemingly-conscious-ai-urgent-threat-tech-industry-must-address-by-mustafa-suleyman-2025-09>
>>>
>>> Quote:
>>> An SCAI would be capable of fluently using natural language,
>>> displaying a persuasive and emotionally resonant personality. It would
>>> have a long, accurate memory that fosters a coherent sense of itself,
>>> and it would use this capacity to claim subjective experience (by
>>> referencing past interactions and memories). Complex reward functions
>>> within these models would simulate intrinsic motivation, and advanced
>>> goal setting and planning would reinforce our sense that the AI is
>>> exercising true agency.
>>>
>>> All these capabilities are already here or around the corner. We must
>>> recognize that such systems will soon be possible, begin thinking
>>> through the implications, and set a norm against the pursuit of
>>> illusory consciousness.
>>> ------------------------
>>>
>>> This is the "consciousness" problem becoming real.
>>> If an AI successfully appears to be conscious, how can we test
>>> whether it is really conscious?
>>>
>>> BillK
>>
>> I believe that anything that is reliably responsive to its environment is conscious, as I argue here (on pages 23-41):
>>
>> https://drive.google.com/file/d/1VDBVueSxWCQ_J6_3aHPIvjtkeBJYhssF/view?usp=drivesdk
>>
>> Jason
_______________________________________________
extropy-chat mailing list
extropy-chat@lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat