[ExI] Language models are like mirrors
tara at taramayastales.com
Sun Apr 2 05:29:10 UTC 2023
> But they are conscious only in the same sense that a fictional character in a novel written in the first person is conscious.
By the way, ChatGPT doesn't seem to understand "first person" versus "third person." It forgets to speak in first person if its triggers are even slightly off. I've also found that it has a hard time distinguishing quoted dialogue in a novel from the surrounding narration (what the characters say to each other versus what the narrator says).
I saw what might have been a list of the fiction that ChatGPT was trained on, and I find it quite lacking. I would love to have an LLM I could train on a body of fiction I selected myself, although I can see this would cause legal issues.
> On Apr 1, 2023, at 9:41 PM, Gordon Swobe via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> I wrote
>> They are trained on massive amounts of written material, much of it being conscious humans in conversation, and so they mimic conscious humans in conversation.
> More accurately, they are trained on massive amounts of text much of it written in the first person. This includes both fictional as well as nonfictional material. Is it so surprising then they can write persuasively in the first person and appear conscious? But they are conscious only in the same sense that a fictional character in a novel written in the first person is conscious.