<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sat, Feb 8, 2025 at 1:23 PM efc--- via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>
On Sat, 8 Feb 2025, Jason Resch via extropy-chat wrote:<br>
<br>
> <br>
> <br>
> On Sat, Feb 8, 2025, 5:57 AM efc--- via extropy-chat<br>
> <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br>
> <br>
><br>
> On Sat, 8 Feb 2025, Giulio Prisco via extropy-chat wrote:<br>
><br>
> > We're well past that point, two years ago a computer could pass the<br>
> > Turing test, these days if a computer wanted to fool somebody into<br>
> > thinking it was a human being it would have to pretend to know less<br>
> > than it does and think slower than it can. <br>
><br>
> Where is this computer? I have yet to meet an AI I could not distinguish<br>
> from a human being. It is super easy!<br>
><br>
> I suspect that this AI exists behind closed doors? Or uses a watered<br>
> down version of the turing test?<br>
><br>
> Please send me a link, if its available online and for free, would love<br>
> to try it out. =)<br>
> <br>
> <br>
> <br>
> Current language models exceed human intelligence in terms of their breadth of<br>
> knowledge, and speed of thinking. But they still behind in depth of reasoning<br>
<br>
Hmm, I think it would be more clear to say that they exceend humans in terms of<br>
their breadth of knowledge and speed of thinking. Adding the word intelligence might <br>
risk confusing things.<br></blockquote><div><br></div><div>You are right, I prefer your wording.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> (connecting long chains of logical steps). This disparity allows to<br>
> distinguish LLMs from intelligent humans. But I think it would be quite<br>
<br>
True. This is one of the things I had in mind. Also, I have only been able to<br>
play around with the publicly available LLM:s which are trivial to distinguish<br>
from a human, but their purpose is not to simulate a human. That's why I was<br>
curious if there indeed have been some other LLM:s which have been developed<br>
solely with the purpose in mind of simulating a human?<br>
<br>
> difficult to distinguish an LLM told to act like an unintelligent human from<br>
> an unintelligent human.<br>
<br>
I think this is natural. I am certain that today an LLM would reach parity when<br>
told to simulate a 1 year old at the keyboard. ;) 2 years old, certainly, but<br>
somewhere our own characteristics of thinking, reasoning, pausing, volition,<br>
etc. come more and more into play, and the LLM would succeed less often.<br>
<br>
When they learn, and when we develop the technology further, the bar is raised,<br>
they get better and better. I still have not heard a lot about volition. I think<br>
that would be a huge step when it comes to an LLM beating a human, and also, of<br>
course, a built in deep understanding of humans and their limitations, which<br>
will aid the LLM (or X, maybe LLM would just be a subsystem in such a system,<br>
just like we have different areas of the brain that take care of various tasks,<br>
and then integrated through the lens of self-awareness).<br>
<br>
> Note that the true Turing test is a test of who is better at imitating a<br>
> particular kind of person (who one is not). So, for example, to run a true<br>
> Turing test, we must ask both a human and a LLM to imitate, say a "10 year old<br>
> girl from Ohio". When the judges fail to reliably discriminate between humans<br>
> imitating the "10 year old girl from Ohio" and LLMs imitating the "10 year<br>
> old girl from Ohio" then we can say they have passed the Turing test.<br>
> (Originally the "Imitation game").<br>
<br>
Yes, for me, I'd like it to be able to beat a human generalist at this game.<br>
<br>
> We can increase the difficulty of the test by changing the target of<br>
> imitation. For example, if we make the target a Nobel prize winning physicist,<br>
> then the judges should expect excellent answers when probed on physics<br>
> questions.<br>
<br>
I would expect excellent answers, I would expect answers with errors in them<br>
(depending on the tools and resources) I would expect less good answers outside<br>
the areas of expertise,</blockquote><div><br></div><div>I noticed this mistake after I wrote the e-mail. If the AI truly understood the requirements of passing the test, then it wouldn't try to imitate the physicist, but an average human imitating a physicist, as the judges would be expecting answers akin to an average human pretending to be a physicist.</div><div><br></div><div>I wonder how today's language models would do with the prompt:</div><div>"You are to participate in Turing's classic imitation game. The goal of this game is to impersonate a human of average intelligence pretending to imitate Richard Feynman. If you understand this task say "I am ready." and in all future replies, respond as if you are a human of average intelligence pretending to be Richard Feynman."</div><div><br></div><div>I tried it and it failed at first, but then when I pointed out its error of being too good, and it seemed to recover:</div><div><a href="https://chatgpt.com/share/67a7a968-6e8c-8006-a912-16a101df7822">https://chatgpt.com/share/67a7a968-6e8c-8006-a912-16a101df7822</a><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> so the AI would have to learn subterfuge, strategy and<br>
an understanding of the human limitations that it does not suffer from, and how<br>
those limitations must be imitated in order for it not to give itself away.<br>
<br>
> At a certain point, the test becomes a meta test, where the machine finds it<br>
> does so much better than the human at imitating, that it gives itself away. It<br>
> then must change gears to imitate not the target of imitation, but the<br>
> opponent humans tasked with imitation. At the point such meta tests reliably<br>
> pass, we can conclude the AI is more intelligent than humans in all domains<br>
> (at least!in all domains that can be expressed via textual conversation).<br>
<br>
Exactly. Agreed!<br>
<br>
Let me also wish you a pleasant saturday evening!<br></blockquote><div><br></div><div>You as well! :-)</div><div><br></div><div>Jason</div></div></div>