[ExI] Bender's Octopus (re: LLMs like ChatGPT)
Jason Resch
jasonresch at gmail.com
Thu Mar 23 23:20:40 UTC 2023
On Thu, Mar 23, 2023, 7:04 PM Gadersd via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> > I really don’t think it thinks, but it makes us think it thinks. ChatGPT
> > is wicked cool.
>
>
> Spike, if I understand you correctly, you believe that ChatGPT doesn’t
> think because its conversations with itself and other chatbots seem to lack
> original thought and creativity. One important thing to be aware of is that
> ChatGPT wasn’t trained to ask questions, only to answer them. These
> models are specifically trained to be passive and responsive rather than
> assertive. Companies are afraid of releasing chatbots with personality,
> since personality leads to unpredictability, and unpredictability is bad
> for business. Given these factors it is understandable that ChatGPT’s
> conversations with itself would lack flavor. I think we should wait until
> someone releases a model of GPT-4 caliber that hasn’t been lobotomized
> before arriving at conclusions.
>
Good point.
I think this is worth a read: https://arxiv.org/pdf/2303.12712.pdf
Jason
> On Mar 23, 2023, at 6:20 PM, spike jones via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>
>
> *From:* extropy-chat <extropy-chat-bounces at lists.extropy.org>
> *On Behalf Of* Adrian Tymes via extropy-chat
> *Subject:* Re: [ExI] Bender's Octopus (re: LLMs like ChatGPT)
>
> On Thu, Mar 23, 2023, 12:56 PM Stuart LaForge via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> I posed this exact question to ChatGPT
>
>
> >…ChatGPT has references for what bears and sticks are…
>
> Ja, there was something kinda cool about the exchange. ChatGPT was told
> “…I am being attacked by an angry bear…”
>
> It somehow understood that the interlocutor was not at that moment in the
> process of being devoured while pecking away at his computer for advice on
> a survival strategy (the subject of my silly riff). It understood it was
> being asked about a theoretical situation rather than what it was literally
> told.
>
> That kinda implies a form of understanding, or more specifically: a very
> effective use of language models to create the illusion of understanding.
>
> I really don’t think it thinks, but it makes us think it thinks. ChatGPT
> is wicked cool.
>
> spike
>
>