[ExI] [Extropolis] Should we still want biological space colonists?
efc at disroot.org
Sat Feb 8 18:22:52 UTC 2025
On Sat, 8 Feb 2025, Jason Resch via extropy-chat wrote:
>
>
> On Sat, Feb 8, 2025, 5:57 AM efc--- via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>
>
> On Sat, 8 Feb 2025, Giulio Prisco via extropy-chat wrote:
>
> > We're well past that point, two years ago a computer could pass the
> > Turing test, these days if a computer wanted to fool somebody into
> > thinking it was a human being it would have to pretend to know less
> > than it does and think slower than it can.
>
> Where is this computer? I have yet to meet an AI I could not distinguish
> from a human being. It is super easy!
>
> I suspect that this AI exists behind closed doors? Or uses a watered-down
> version of the Turing test?
>
> Please send me a link. If it's available online and for free, I would love
> to try it out. =)
>
>
>
> Current language models exceed human intelligence in terms of their breadth of
> knowledge and speed of thinking. But they still lag behind in depth of reasoning
Hmm, I think it would be clearer to say that they exceed humans in terms of
their breadth of knowledge and speed of thinking. Adding the word "intelligence"
might risk confusing things.
> (connecting long chains of logical steps). This disparity allows us to
> distinguish LLMs from intelligent humans. But I think it would be quite
True. This is one of the things I had in mind. Also, I have only been able to
play around with the publicly available LLMs, which are trivial to distinguish
from a human, but their purpose is not to simulate a human. That's why I was
curious whether there have indeed been other LLMs developed solely with the
purpose of simulating a human.
> difficult to distinguish an LLM told to act like an unintelligent human from
> an unintelligent human.
I think this is natural. I am certain that today an LLM would reach parity when
told to simulate a 1-year-old at the keyboard. ;) A 2-year-old, certainly, but
somewhere along the way our own characteristics of thinking, reasoning, pausing,
volition, etc. come more and more into play, and the LLM would succeed less often.
As they learn, and as we develop the technology further, the bar is raised and
they get better and better. I still have not heard a lot about volition; I think
that would be a huge step when it comes to an LLM beating a human. Also needed,
of course, is a built-in deep understanding of humans and their limitations,
which would aid the LLM (or whatever the larger system is; maybe the LLM would
just be a subsystem, just as we have different areas of the brain that take care
of various tasks, integrated through the lens of self-awareness).
> Note that the true Turing test is a test of who is better at imitating a
> particular kind of person (who one is not). So, for example, to run a true
> Turing test, we must ask both a human and an LLM to imitate, say, a "10 year old
> girl from Ohio". When the judges fail to reliably discriminate between humans
> imitating the "10 year old girl from Ohio" and LLMs imitating the "10 year
> old girl from Ohio" then we can say they have passed the Turing test.
> (Originally the "Imitation game").
Yes, for me, I'd like it to be able to beat a human generalist at this game.
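
Just to make the setup concrete for myself, here is a rough sketch in Python of
how I picture one such imitation-game trial being scored. This is only my own
illustration, not any real benchmark; ask_human, ask_llm and judge_guess are
placeholder callables I made up:

import random

# One "imitation game" trial: a human and an LLM both imitate the same
# target persona, and a judge tries to spot which transcript is the machine.

TARGET = "a 10 year old girl from Ohio"
QUESTIONS = ["What did you do at school today?",
             "Why do you think the sky is blue?"]

def run_trial(ask_human, ask_llm, judge_guess):
    """Returns True if the judge correctly identifies the LLM's transcript."""
    transcripts = {
        "human": [(q, ask_human(q, persona=TARGET)) for q in QUESTIONS],
        "llm":   [(q, ask_llm(q, persona=TARGET)) for q in QUESTIONS],
    }
    labels = list(transcripts)
    random.shuffle(labels)            # hide which transcript is which
    guess = judge_guess(transcripts[labels[0]], transcripts[labels[1]])
    return labels[guess] == "llm"     # judge_guess returns index 0 or 1

def passes_imitation_game(results, chance=0.5, margin=0.05):
    """The LLM 'passes' when judges do no better than chance over many trials."""
    accuracy = sum(results) / len(results)
    return accuracy <= chance + margin

The point being that the machine "passes" not by being impressive, but by
keeping the judges' accuracy down at chance level against the human imitator.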
> We can increase the difficulty of the test by changing the target of
> imitation. For example, if we make the target a Nobel prize winning physicist,
> then the judges should expect excellent answers when probed on physics
> questions.
I would expect excellent answers, but also answers with errors in them
(depending on the tools and resources), and less good answers outside the areas
of expertise. So the AI would have to learn subterfuge, strategy and an
understanding of the human limitations that it does not suffer from, and how
those limitations must be imitated in order for it not to give itself away.
> At a certain point, the test becomes a meta-test, where the machine finds it
> does so much better than the human at imitating that it gives itself away. It
> then must change gears to imitate not the target of imitation, but the
> opponent humans tasked with imitation. At the point such meta-tests reliably
> pass, we can conclude the AI is more intelligent than humans in all domains
> (at least in all domains that can be expressed via textual conversation).
Exactly. Agreed!
Let me also wish you a pleasant Saturday evening!
Best regards,
Daniel
> Jason
>
>