<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=utf-8"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
span.EmailStyle18
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=EN-US link="#0563C1" vlink="#954F72" style='word-wrap:break-word'><div class=WordSection1><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><div><div style='border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in'><p class=MsoNormal><b>…</b>> <b>On Behalf Of </b>Ben Zaiboc via extropy-chat<br><b>Cc:</b> Ben Zaiboc <ben@zaiboc.net><br><b>Subject:</b> Re: [ExI] all we are is just llms<o:p></o:p></p></div></div><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>On 21/04/2023 06:28, spike wrote:<br><br><o:p></o:p></p><blockquote style='margin-top:5.0pt;margin-bottom:5.0pt'><p class=MsoNormal>>>…Regarding measuring GPT’s intelligence, this must have already been done and is being done. Reasoning: I hear GPT is passing medical boards exams and bar exams and such, so we should be able to give it IQ tests, then compare its performance with humans on that test. I suspect GPT will beat everybody at least on some tests.<o:p></o:p></p></blockquote><p class=MsoNormal><br><br>>…Yeah, but don't forget, spike, they just have <i>simulated</i> understanding of these things we test them for. So the test results are not really valid. That will include IQ tests. No good. Simulated intelligence, see?<br><br>Ben<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Ja, Ben, where I was really going with that idea is exploring whether it is possible to separate consciousness from intelligence. It isn’t clear at all that those are two different things, but I am no expert on these matters. We can imagine some kinds of intelligence tests on which an AI can beat everyone, but that in itself doesn’t prove that the software is conscious. 
If it doesn’t prove that, then what I am looking for is a way to somehow model consciousness as a separate thing from intelligence, even if the two are highly correlated (which I suspect they are (but I don’t know (because I am waaaay the hell outside my area of expertise with this entire discussion (I have learned a lot here, however (and thanks to all who are posting on the topic.)))))<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>A lotta what people are doing with ChatGPT today is just assuming intelligence and assuming away or ignoring consciousness, treating those as two separate things.<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Eliezer must have come upon this question dozens of times by now, but I haven’t followed Less Wrong over the years. Eli followers, has he published anything anywhere close to the notion of treating consciousness and intelligence as two separate things?<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>spike <o:p></o:p></p></div></body></html>