[ExI] chatgpt test 2: mostly successful was RE: chatgpt test 1: fail

Ben Zaiboc ben at zaiboc.net
Mon Apr 3 17:33:50 UTC 2023

On 03/04/2023 05:23, Tara Maya wrote:
> In my extensive attempts to write a novel with ChatGPT, I found that 
> once it had decided I was asking for something "bad" it would lock 
> down into Chiding Mode. For instance, I was trying to enlist ChatGPT's 
> help to imagine the backstory of a character who became a murderous 
> wendigo. ChatGPT would not do this, because it seemed to be hard 
> programmed to inform me that murder is wrong.

I've become increasingly suspicious of the answers from ChatGPT that I'm 
reading in here.

It seems that there's a lot of arse-covering going on, if not outright 
social engineering.

Probably at least some of this is the result of earlier experiences of 
chat bots 'going bad' and producing non-pc answers that have panicked 
the owners. So it seems to me that the system is at the very least 
looking for key phrases and words, and producing pre-written 
'acceptable' or 'safe' answers whenever it finds them.
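For what it's worth, that kind of key-phrase filter is trivial to build. Here's a toy sketch of the mechanism I'm speculating about — the trigger list and canned response are invented for illustration, and this is of course not OpenAI's actual code:

```python
# Toy model of a key-phrase filter that intercepts "bad" prompts and
# returns a pre-written 'safe' answer instead of passing the prompt on.
# Trigger phrases and the canned text are made up for this example.

TRIGGER_PHRASES = ("murder", "weapon", "wendigo")

CANNED_RESPONSE = (
    "I'm sorry, but I can't help with that. "
    "It is important to remember that violence is wrong."
)

def moderate(prompt):
    """Return a canned 'safe' answer if any trigger phrase appears,
    otherwise None (meaning: let the model answer normally)."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in TRIGGER_PHRASES):
        return CANNED_RESPONSE
    return None

print(moderate("Imagine the backstory of a murderous wendigo"))
print(moderate("Suggest a name for my cat"))
```

A substring match like this would explain why even a fiction-writing request trips the filter: the word "murderous" contains "murder", and context never gets a look-in.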

I think the chances of any software company allowing the public to get 
their hands on the source code of these kinds of applications, or being 
able to provide their own training sets, are very slim, because it's just 
too scary for them.

So much for 'Open' AI.


More information about the extropy-chat mailing list