[ExI] Language models are like mirrors

spike at rainier66.com spike at rainier66.com
Mon Apr 3 04:02:06 UTC 2023


 

 

…> On Behalf Of Adrian Tymes via extropy-chat
Subject: Re: [ExI] Language models are like mirrors

 

On Sun, Apr 2, 2023 at 5:32 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:

So… try it yourself.  Take some area which you know more about than anyone, some technical niche perhaps, something on which you are a qualified judge, and hand it to ChatGPT.  It will take a shot at it, and it will really sound like it knows from shinola.

 

OK then, does it?

 

>…HA HA HA no.

 

---

 

>…  While this AI is making the same mistakes as humans, it's still making mistakes that show it doesn't understand what it's talking about…

 

Adrian you said something important there.

 

>…I could go on at much further length, but I believe these three examples adequately establish the point… Adrian

 

 

OK cool, now think about Adrian’s point that ChatGPT makes the same mistakes as humans.

 

Consider the old joke about the farmer selling a talking dog for ten bucks.  The dog tells the buyer of his adventures as an undercover FBI agent, a stock broker, a Coast Guard officer, and now a farm dog.  The astonished buyer asks the farmer why he is selling the dog so cheap, to which the farmer replies “Because he’s a damn liar.  He ain’t done half that stuff.”

 

OK then, sure, ChatGPT is so dumb it makes the same mistakes humans do.  Now consider news stories.  We read those and assume they are more or less correct, but once in a long while we see a news story about something we know a lotta lotta about, because we were there when it happened.  We saw, we heard.  Later we read the news account.  Invariably… we say nooooo no no, that’s not what happened, that is a terrible description of the event, lousy.

 

Then the thought occurs to us: what if… all news stories are this bad?

 

Recall the test I proposed: ask ChatGPT about a topic for which you know the right answers.  GPT fails those, but… in a fun way it comes across as the talking dog, ja?  You don’t take it too seriously in the details, and you know it certainly isn’t flawless, but… it’s a talking dog fer cryin out loud; we don’t expect it to be Mr. Peabody (younger crowd: ask ChatGPT who Mr. Peabody is.)
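
For those who want to run that test mechanically rather than through the chat window, here is a minimal sketch, assuming the openai Python package (the pre-1.0 interface current as of this writing) and an API key of your own.  The questions below are placeholders; swap in a niche you know cold:

import openai

openai.api_key = "YOUR-API-KEY"  # placeholder, substitute your own key

# Placeholder questions: substitute a niche where you are the qualified judge.
questions = [
    "What limits the fatigue life of a riveted aluminum wing spar?",
    "Why would a solid rocket motor show pressure oscillations at ignition?",
]

for q in questions:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": q}],
    )
    print("Q:", q)
    print("A:", reply.choices[0].message.content)
    # Now grade the answer yourself, since you know the right one.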

 

ChatGPT has its uses, and has demonstrated itself to be a marvelous teacher and trainer.  I have used it recently to come up to speed on legal terms, and I am convinced it will have enormous impact on society in many ways.  I think it produces insight-like comments, but it is clear enough to me that it found them in a huge database rather than invented them.  Perhaps that counts as a kind of legitimate pseudo-insight, which has its uses.  I will accept that it is better than humans at many things we do and pay for, resulting in some professions going away.  The one that comes to mind first is paralegals.  Those guys are adios amigos methinks.

 

ChatGPT makes the same mistakes as humans, and it is a marvelous novelty, like a talking dog.  I haven’t been able to convince myself it is going to bring about the big-S Singularity by rewriting itself and becoming a Bostrom-style superintelligence.  That is still ahead of us.

 

But hey, here is an optimistic parting shot: let us use ChatGPT as a trainer and ask it to teach us how to set up large language models.  Then we can all try our own hands at it, ja?
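
To make that concrete, here is a minimal sketch of trying your own hand, assuming the Hugging Face transformers package (plus torch) is installed.  GPT-2 is tiny and ancient by ChatGPT standards, but it runs on an ordinary desktop and shows the same basic trick, next-word prediction:

from transformers import pipeline

# Download and run a small open model locally.
generator = pipeline("text-generation", model="gpt2")

prompt = "The farmer sold the talking dog for ten bucks because"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])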

 

spike


