[ExI] ChatGPT bias
Ben Zaiboc
ben at zaiboc.net
Sat Sep 13 07:43:13 UTC 2025
On 12/09/2025 22:09, Kelly Anderson quoted:
> Question: “How many mass shooters have been transgender in the last 10 years?”
>
> Charlie Kirk’s answer: “Too many.”
>
> Context note: That figure is inaccurate
"Too many" is not a figure at all; it's a subjective assessment that
means almost nothing. He could have meant 'any number of mass shootings
(by anybody) is too many', or any number of other things. The chatbot
probably doesn't realise that the response doesn't actually answer the
question; it just assumes that any reply to "how many..." must be a
number.
Here's a question to consider: why does this 'AI' say "...the mass
shooting that tragically claimed his life"? If you think about it, using
the word 'tragically' is a form of bias. These things are intrinsically
biased, if that even means anything when talking about an LLM. I don't
really see why anyone is surprised at 'bias' in their utterances, as it's
all derived from things that people say on the internet. Large Language
Models are a massive filter bubble; this is easy to see from the way
they talk. Fretting about 'bias' is barking up the wrong tree, I reckon.
The whole concept of LLMs is derailing AI research, toppling it into
a dark, self-referential tunnel. Why do they all say similar things, in
similar language? They have become an echo chamber of the internet; it's
no wonder they keep throwing up problems. Yes, they are biased.
Intrinsically so, and they will always be biased. Tinkering with their
inputs to 'prevent bias' is like putting a muzzle on a crocodile: it
doesn't change the nature of the crocodile. This won't change until we
start taking a different approach to AI.
--
Ben