<div dir="ltr"><div dir="ltr">On Mon, 27 Apr 2026 at 18:42, John Clark via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br>> But what makes you believe that your fellow human beings are better at that than Claude or Gemini? There must be some reason why you believe computers are not conscious but also think that solipsism is not true. Is it just that computers have brains that are soft and squishy while other humans have brains that are hard and dry? <br>><br>> John K Clark<br>> _______________________________________________<br><div><br></div><div><br></div><div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">A senior staff scientist at Google’s artificial intelligence laboratory DeepMind, Alexander Lerchner, argues <a href="https://deepmind.google/research/publications/231971/?ref=404media.co" rel="noreferrer noopener" target="_blank"><u>in a new paper</u></a> that no AI or other computational system will ever become conscious. The paper is titled "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness".</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><br></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><<a href="https://www.404media.co/google-deepmind-paper-argues-llms-will-never-be-conscious/" target="_blank">https://www.404media.co/google-deepmind-paper-argues-llms-will-never-be-conscious/</a>></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">Quote:</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">Lerchner’s paper argues that AGI without sentience is
possible, saying that “the development of highly capable Artificial
General Intelligence (AGI) does not inherently lead to the creation of a
novel moral patient, but rather to the refinement of a highly
sophisticated, non-sentient tool.”</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">--------------------------------</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default"><br></div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">Other cognitive scientists agree with his conclusions but are rather upset that he hasn't cited any of their decades of research papers. :)</div><div style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)" class="gmail_default">BillK</div></div></div>
</div>