[ExI] Language models are like mirrors

Gordon Swobe gordon.swobe at gmail.com
Mon Apr 3 04:55:40 UTC 2023


> The test I proposed was to ask ChatGPT about a topic for which you know
> the right answers.  GPT fails those, but… in a fun way it comes across as
> the talking dog, ja?

I have an example that is not so fun. For reasons of privacy, I was
debating whether to share it, but I think I can make it sufficiently
abstract. While I am no physician, I am highly knowledgeable about a
particular disease that afflicts someone I know. I have spent more than a
year researching it. No FDA-approved meds exist for the treatment of this
disease, but there does exist a class of medications that would seem to
make sense, as they are approved for some very similar and related diseases.
That class of meds can be divided into two subclasses. In the related
diseases, any med in either subclass is more or less as safe and effective
as any other.

But in this particular disease that concerns me, meds in one of the
subclasses are contraindicated and strongly so. They are potentially lethal.

When ChatGPT first went online as version 3.5, I asked what would be some
proper medications for this disease I have in mind, and was appalled to see
it list mostly medications in the contraindicated subclass. I filled out
the feedback form to OpenAI to warn them of the error.

I'm glad to see now in version 4 that they've got it right. Not only are
the contraindicated meds not listed in the answer, but ChatGPT-4 warns
about them.

-gts

On Sun, Apr 2, 2023 at 10:03 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> …> On Behalf Of Adrian Tymes via extropy-chat
> Subject: Re: [ExI] Language models are like mirrors
>
> On Sun, Apr 2, 2023 at 5:32 PM spike jones via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> So… try it yourself.  Take some area which you know more about than
> anyone, some technical niche perhaps, something on which you are a
> qualified judge, hand it to ChatGPT.  It will take a shot at it and it will
> really sound like it knows from shinola.
>
> OK then, does it?
>
> >…HA HA HA no.
>
> ---
>
> >…  While this AI is making the same mistakes as humans, it's still making
> mistakes that show it doesn't understand what it's talking about…
>
> Adrian you said something important there.
>
> >…I could go on at much further length, but I believe these three examples
> adequately establish the point… Adrian
>
> OK cool, now think about Adrian’s point that ChatGPT is making the same
> mistakes as humans.
>
> Consider the old joke about the farmer selling a talking dog for ten
> bucks, the dog tells the buyer of his adventures as an undercover FBI
> agent, a stock broker, a Coast Guard officer, and now a farm dog.  The
> astonished owner asks the farmer why he is selling the dog so cheap, at
> which the farmer says “Because he’s a damn liar.  He ain’t done half that
> stuff.”
>
> OK then, sure, ChatGPT is so dumb it makes the same mistakes humans do.
> Now consider news stories.  We read those, we assume they are more or less
> correct, but once in a long while we see a news story about something we
> know a lotta lotta about because we were there when it happened.  We saw,
> we heard.  Later we read the news account.  Invariably… we say nooooo no
> no, that’s not what happened, that is a terrible description of the event,
> lousy.
>
> Then the thought occurs to us: what if… all news stories are this bad?
>
> The test I proposed was to ask ChatGPT about a topic for which you know the
> right answers.  GPT fails those, but… in a fun way it comes across as the
> talking dog, ja?  You don’t take it too seriously in details, and you know
> it certainly isn’t flawless, but… it’s a talking dog fer cryin out loud, we
> don’t expect it to be Mr. Peabody (younger crowd, ask ChatGPT who is Mr.
> Peabody.)
>
> ChatGPT has its uses, and has demonstrated itself to be a marvelous
> teacher and trainer.  I have used it recently to come up to speed on legal
> terms, and I am convinced it will have enormous impact on society in many
> ways.  I think it produces insight-like comments, but it is clear enough to
> me it found them in a huge database rather than invented them.  Perhaps
> that counts as a kind of legitimate pseudo-insight, and has its uses.  I
> will accept that it is better than humans at many things we do and pay for,
> resulting in some professions going away.  The one that comes to mind first
> is paralegals.  Those guys are adios amigos methinks.
>
> ChatGPT makes the same mistakes as humans and it is a marvelous novelty
> like a talking dog.  I haven’t been able to convince myself it is going to
> result in the big S Singularity by rewriting itself and becoming a
> Bostrom-style superintelligence.  That is still ahead of us.
>
> But hey, here is an optimistic parting shot: let us use ChatGPT as a
> trainer, ask it to teach us how to set up large language models.  Then we
> can all try our own hands at it, ja?
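>
> Here is a minimal sketch of such a first step, assuming Python with the
> Hugging Face transformers library and the small gpt2 checkpoint (both are
> illustrative assumptions; any small causal language model would do):
>
>     # Generate text with a small, freely downloadable language model.
>     from transformers import pipeline
>
>     # Load gpt2, a small stand-in for the larger models under discussion.
>     generator = pipeline("text-generation", model="gpt2")
>
>     # Ask the model to continue a prompt and print what it produces.
>     out = generator("Language models are like mirrors because",
>                     max_new_tokens=40, num_return_sequences=1)
>     print(out[0]["generated_text"])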
>
> spike