[ExI] Language models are like mirrors

spike at rainier66.com
Mon Apr 3 15:43:54 UTC 2023

From: Gordon Swobe <gordon.swobe at gmail.com> 
Subject: Re: [ExI] Language models are like mirrors

>>… The test I proposed of asking ChatGPT about a topic for which you know the right answers…

>…But in this particular disease that concerns me, meds in one of the subclasses are contraindicated and strongly so. They are potentially lethal.

>… I filled out the feedback form to OpenAI to warn them of the error. 

>…I'm glad to see now in version 4 that they've got it right. Not only are the contraindicated meds not listed in the answer, but ChatGPT-4 warns about them.  -gts

Ja, Gordon, I agree.  We really, really cannot have chatbots offering what looks like medical advice.  They come across as if they know what they are talking about, but I think they don’t.  What they say is mostly true, but they say true stuff without “knowing” what they are talking about, and they don’t understand the consequences and dangers when it comes to medications.

When you ponder it a bit, you realize that our ability to make stuff in a lab means our medications grow steadily more dangerous as they grow more effective.  Medicine is filled with confounding variables.

But I will leave that thought on a cheerful note, with an optimistic take on it.

Biological intelligences are generally insufficient to master all the wildly complicated variables in a medical diagnosis.  As a result, most diagnoses are speculative.  We all know the story: doctors know they don’t know, and if they are ethical they must err on the side of caution.  Well, OK, I understand.

But to err on the side of caution is to err just the same.

Result: you are more likely to get an underdose of an otherwise effective medication than an overdose that would cause something else to break down.

I foresee a chatbot sophisticated enough to learn all the patient’s variables: the DNA, the chemical content of the stools, urine, and breath, all the observable quantities.  It would assemble them into a huge state matrix the way a controls engineer would set up a Kalman filter, then offer advice to the doctor (or possibly directly to the patient) that is more likely to work, at the cost of a somewhat increased risk of slaying the patient.

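For anyone who hasn’t set one of these up: a Kalman filter fuses a model’s prediction with noisy measurements, weighting each by its uncertainty, which is how you could fold all those patient variables into one running estimate.  Below is a minimal sketch in Python with NumPy.  The scalar “drug level” state, the decay rate, and all the noise numbers are purely illustrative assumptions on my part, not any real pharmacokinetic model.

import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate the state estimate and its covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend in the new measurement z via the Kalman gain.
    y = z - H @ x_pred                   # innovation (measurement surprise)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: track one hypothetical patient variable (a "drug level")
# that decays 5% per step, observed through a noisy lab measurement.
F = np.array([[0.95]])   # state transition (decay per step)
H = np.array([[1.0]])    # we measure the state directly
Q = np.array([[0.01]])   # process noise
R = np.array([[0.25]])   # measurement noise
x = np.array([[1.0]])    # initial estimate
P = np.array([[1.0]])    # initial uncertainty

rng = np.random.default_rng(0)
true_level = 1.0
for _ in range(10):
    true_level *= 0.95
    z = np.array([[true_level + rng.normal(0.0, 0.5)]])  # noisy lab reading
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(x, P)  # filtered estimate and its remaining uncertainty

The real version would carry hundreds of patient variables in the state vector instead of one, but the structure is identical: the covariance matrix P is the huge matrix mentioned above.
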
We aren’t there yet, but we can see it from here.

spike
