[ExI] [Extropolis] Should we still want biological space colonists?
Jason Resch
jasonresch at gmail.com
Sat Feb 8 13:09:21 UTC 2025
On Sat, Feb 8, 2025, 5:57 AM efc--- via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Sat, 8 Feb 2025, Giulio Prisco via extropy-chat wrote:
>
> > We're well past that point, two years ago a computer could pass the
> Turing
> > test, these days if a computer wanted to fool somebody into thinking it
> was a
> > human being it would have to pretend to know less than it does and think
> > slower than it can.
>
> Where is this computer? I have yet to meet an AI I could not distinguish
> from a
> human being. It is super easy!
>
> I suspect that this AI exists behind closed doors? Or uses a watered down
> version of the Turing test?
>
> Please send me a link, if it's available online and for free, would love to
> try
> it out. =)
>
Current language models exceed human intelligence in breadth of knowledge
and speed of thinking, but they still lag behind in depth of reasoning
(connecting long chains of logical steps). This disparity makes it
possible to distinguish LLMs from intelligent humans. But I think it would
be quite difficult to distinguish an LLM told to act like an unintelligent
human from an actual unintelligent human.
Note that the true Turing test is a test of who is better at imitating a
particular kind of person (someone one is not). So, for example, to run a
true Turing test, we must ask both a human and an LLM to imitate, say, a
"10-year-old girl from Ohio". When the judges fail to reliably
discriminate between humans imitating the "10-year-old girl from Ohio"
and LLMs imitating the "10-year-old girl from Ohio", then we can say the
LLMs have passed the Turing test. (Turing originally called this the
"imitation game".)
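To make the protocol concrete, here is a minimal Python sketch of the
judging setup just described. Everything in it is a placeholder assumed
for illustration (the respondent functions, the judge, and a pass
criterion of judge accuracy within two standard errors of chance); it is
a sketch of the protocol, not a real evaluation harness.

import random

def human_imitator(question):
    # Hypothetical: a human answering in character as the target persona.
    return "(human, in character) answer to: " + question

def llm_imitator(question):
    # Hypothetical: an LLM answering in character as the same persona.
    return "(LLM, in character) answer to: " + question

def run_trial(judge, questions):
    # One trial: the judge reads an unlabeled transcript and guesses
    # whether the respondent was the human or the LLM.
    respondent, label = random.choice(
        [(human_imitator, "human"), (llm_imitator, "llm")])
    transcript = [(q, respondent(q)) for q in questions]
    return judge(transcript) == label  # judge returns "human" or "llm"

def llm_passes(judge, questions, trials=100):
    # The LLM "passes" when judges cannot reliably beat chance: here,
    # accuracy within ~2 standard errors of 50% for a fair coin
    # (sigma = 0.5 / sqrt(trials)).
    correct = sum(run_trial(judge, questions) for _ in range(trials))
    accuracy = correct / trials
    return abs(accuracy - 0.5) < 2 * (0.5 / trials ** 0.5)

if __name__ == "__main__":
    # A judge that guesses at random cannot discriminate, so with this
    # placeholder the LLM will usually pass.
    naive_judge = lambda transcript: random.choice(["human", "llm"])
    print(llm_passes(naive_judge, ["What's your favorite subject?"]))

The interesting part is of course the judge; all the code pins down is
one statistical reading of "fail to reliably discriminate".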
We can increase the difficulty of the test by changing the target of
imitation. For example, if we make the target a Nobel Prize-winning
physicist, then the judges should expect excellent answers when they
probe with physics questions.
At a certain point, the test becomes a meta-test, in which the machine
finds it is so much better than the human at imitating that this itself
gives it away. It must then change gears and imitate not the target of
imitation, but the human opponents tasked with imitating that target. At
the point such meta-tests are reliably passed, we can conclude the AI is
more intelligent than humans in all domains (at least, in all domains
that can be expressed via textual conversation).
Jason