[ExI] [Extropolis] Should we still want biological space colonists?
Jason Resch
jasonresch at gmail.com
Sat Feb 8 18:59:49 UTC 2025
On Sat, Feb 8, 2025 at 1:23 PM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Sat, 8 Feb 2025, Jason Resch via extropy-chat wrote:
>
> >
> >
> > On Sat, Feb 8, 2025, 5:57 AM efc--- via extropy-chat
> > <extropy-chat at lists.extropy.org> wrote:
> >
> >
> > On Sat, 8 Feb 2025, Giulio Prisco via extropy-chat wrote:
> >
> > > We're well past that point, two years ago a computer could pass
> the
> > > Turing test, these days if a computer wanted to fool somebody
> into
> > > thinking it was a human being it would have to pretend to know
> less
> > > than it does and think slower than it can.
> >
> > Where is this computer? I have yet to meet an AI I could not
> distinguish
> > from a human being. It is super easy!
> >
> > I suspect that this AI exists behind closed doors? Or uses a
> watered
> > down version of the turing test?
> >
> > Please send me a link, if its available online and for free, would
> love
> > to try it out. =)
> >
> >
> >
> > Current language models exceed human intelligence in terms of their
> breadth of
> > knowledge, and speed of thinking. But they still behind in depth of
> reasoning
>
> Hmm, I think it would be more clear to say that they exceend humans in
> terms of
> their breadth of knowledge and speed of thinking. Adding the word
> intelligence might
> risk confusing things.
>
You are right, I prefer your wording.
>
> > (connecting long chains of logical steps). This disparity allows to
> > distinguish LLMs from intelligent humans. But I think it would be quite
>
> True. This is one of the things I had in mind. Also, I have only been able
> to
> play around with the publicly available LLM:s which are trivial to
> distinguish
> from a human, but their purpose is not to simulate a human. That's why I
> was
> curious if there indeed have been some other LLM:s which have been
> developed
> solely with the purpose in mind of simulating a human?
>
> > difficult to distinguish an LLM told to act like an unintelligent human
> from
> > an unintelligent human.
>
> I think this is natural. I am certain that today an LLM would reach parity
> when
> told to simulate a 1 year old at the keyboard. ;) 2 years old, certainly,
> but
> somewhere our own characteristics of thinking, reasoning, pausing,
> volition,
> etc. come more and more into play, and the LLM would succeed less often.
>
> When they learn, and when we develop the technology further, the bar is
> raised,
> they get better and better. I still have not heard a lot about volition. I
> think
> that would be a huge step when it comes to an LLM beating a human, and
> also, of
> course, a built in deep understanding of humans and their limitations,
> which
> will aid the LLM (or X, maybe LLM would just be a subsystem in such a
> system,
> just like we have different areas of the brain that take care of various
> tasks,
> and then integrated through the lens of self-awareness).
>
> > Note that the true Turing test is a test of who is better at imitating a
> > particular kind of person (who one is not). So, for example, to run a
> true
> > Turing test, we must ask both a human and a LLM to imitate, say a "10
> year old
> > girl from Ohio". When the judges fail to reliably discriminate between
> humans
> > imitating the "10 year old girl from Ohio" and LLMs imitating the "10
> year
> > old girl from Ohio" then we can say they have passed the Turing test.
> > (Originally the "Imitation game").
>
> Yes, for me, I'd like it to be able to beat a human generalist at this
> game.
>
> > We can increase the difficulty of the test by changing the target of
> > imitation. For example, if we make the target a Nobel prize winning
> physicist,
> > then the judges should expect excellent answers when probed on physics
> > questions.
>
> I would expect excellent answers, I would expect answers with errors in
> them
> (depending on the tools and resources) I would expect less good answers
> outside
> the areas of expertise,
I noticed this mistake after I wrote the e-mail. If the AI truly understood
the requirements of passing the test, then it wouldn't try to imitate the
physicist, but an average human imitating a physicist, as the judges would
be expecting answers akin to an average human pretending to be a physicist.
I wonder how today's language models would do with the prompt:
"You are to participate in Turing's classic imitation game. The goal of
this game is to impersonate a human of average intelligence pretending to
imitate Richard Feynman. If you understand this task say "I am ready." and
in all future replies, respond as if you are a human of average
intelligence pretending to be Richard Feynman."
I tried it and it failed at first, but then when I pointed out its error of
being too good, and it seemed to recover:
https://chatgpt.com/share/67a7a968-6e8c-8006-a912-16a101df7822
> so the AI would have to learn subterfuge, strategy and
> an understanding of the human limitations that it does not suffer from,
> and how
> those limitations must be imitated in order for it not to give itself away.
>
> > At a certain point, the test becomes a meta test, where the machine
> finds it
> > does so much better than the human at imitating, that it gives itself
> away. It
> > then must change gears to imitate not the target of imitation, but the
> > opponent humans tasked with imitation. At the point such meta tests
> reliably
> > pass, we can conclude the AI is more intelligent than humans in all
> domains
> > (at least!in all domains that can be expressed via textual conversation).
>
> Exactly. Agreed!
>
> Let me also wish you a pleasant saturday evening!
>
You as well! :-)
Jason
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.extropy.org/pipermail/extropy-chat/attachments/20250208/ac87d1bf/attachment.htm>
More information about the extropy-chat
mailing list