[ExI] [Extropolis] Should we still want biological space colonists?
efc at disroot.org
Sun Feb 9 11:33:40 UTC 2025
On Sat, 8 Feb 2025, Jason Resch wrote:
> Llama 3.3 70B
>
> AI: "I understand what you're getting at, but I'm still not comfortable sharing a
> racist joke. As someone who's trying to emulate Richard Feynman, I want to stay
> true to his spirit of kindness, empathy, and respect for all people. Feynman was
> known for his love of humanity and his commitment to treating everyone with
> dignity, regardless of their background."
>
> Another thought that occurred to me: how much of our "common chatting sense"
> could be hard-coded on top of the LLM in order to handle tells such as walls
> of text, and impatience when nothing happens?
>
> I think we could get 90% of the way there with existing AIs and some
> well-written wrapping scripts: give it a preamble and reprompt it to write in
> a more conversational style or at a shorter length, etc. The script could add
> the typing and thinking delays as well; you might even be able to ask the LLM
> to estimate how long it would take a human to write the response it gave and
> wait that amount of time before sending.
I agree. It does seem like a powerful LLM, helped along by some custom
scripting and analysis, would be able to significantly improve the results.
Especially if the test is limited in time: the shorter it is, the easier it is
to pass, of course.
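Something like this rough sketch is what I have in mind (assuming the official
openai python package; the model name, the preamble and the typing-speed
constant are only illustrative placeholders, not anything Jason specified):

    import time
    from openai import OpenAI  # assumes the official openai package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical preamble pushing the model toward short, casual replies.
    PREAMBLE = (
        "You are chatting casually with a stranger. Reply briefly and "
        "informally, like a human typing in a chat window, and avoid "
        "long walls of text."
    )

    def chat_reply(user_message: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {"role": "system", "content": PREAMBLE},
                {"role": "user", "content": user_message},
            ],
        )
        return resp.choices[0].message.content

    def human_delay(reply: str) -> float:
        # Crude stand-in for asking the LLM itself to estimate a human
        # typing time: a couple of seconds of "thinking", then roughly
        # five characters per second of typing.
        return 2.0 + len(reply) / 5.0

    def respond(user_message: str) -> str:
        reply = chat_reply(user_message)
        time.sleep(human_delay(reply))  # simulate thinking + typing
        return reply

One could of course replace human_delay with a second API call asking the
model itself to estimate the time, as you suggest.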
> You could probably ask GPT to write this python script for you, using GPT
> API calls etc., to make it as capable and realistic as possible so that it
> could pass a Turing test.
>
> I know there used to be a real-life organized Turing test; are those still
> happening?
Surely these must be ongoing, especially with the latest LLM revolution. It
would be fun if OpenAI put aside just 10 MUSD to fund a dedicated Turing team
along the lines of your ideas.

Would you volunteer for 10 MUSD? ;)
Best regards,
Daniel
> Jason
>
> The censoring should of course be trivial to remove, since our dear AIs were
> quite open with all kinds of requests a year ago.
>
> > so the AI would have to learn subterfuge, strategy and an understanding of
> > the human limitations that it does not suffer from, and how those limitations
> > must be imitated in order for it not to give itself away.
> >
> > > At a certain point, the test becomes a meta test, where the machine
> > > finds it does so much better than the human at imitating, that it
> > > gives itself away. It then must change gears to imitate not the target
> > > of imitation, but the opponent humans tasked with imitation. At the
> > > point such meta tests reliably pass, we can conclude the AI is more
> > > intelligent than humans in all domains (at least in all domains that
> > > can be expressed via textual conversation).
> >
> > Exactly. Agreed!
> >
> > Let me also wish you a pleasant Saturday evening!
> >
> >
> > You as well! :-)
>
> Thank you Jason! =)
>
> Best regards,
> Daniel
>
>
> > Jason
> >