[ExI] another health dilemma

BillK pharos at gmail.com
Mon Feb 5 14:16:32 UTC 2024


On Mon, 5 Feb 2024 at 11:15, Ben Zaiboc via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> The idea that dietary antioxidants don't affect the production of
> antioxidants by the body is just given as a bald statement here. No
> reference to any evidence, no studies quoted, no attempt at giving even a
> theoretical reason.
>
> If any of the sources it used give any evidence, that isn't mentioned.
> I don't know if Gabrielle Walker gives any reasoning for her opinion.
>
> ChatGPT etc. seem to be very good at digging up lots of information
> related to a question, but not necessarily at figuring out what
> information they should be digging up.
> Looking at the other thread about 'Newspeak', the chatbot keeps repeating
> itself in slightly different words, while numbering each paragraph as
> though each one referred to a different aspect.
>
> Clearly there is still a gap in understanding.
>
> The thing that worries me most is the tendency to use what appear to be
> tactics for blindsiding people: quoting lots of facts, seeming to answer
> the question while actually avoiding it and/or answering different
> questions instead, appealing to authority, repeating the same thing
> several times in different words. I'm not saying this is deliberate, of
> course, but it's interesting that they seem to have picked up these
> tactics from their training material. Perhaps we're doing it to each
> other more than we realise or intend.
>
> To Bill w's question "Now what?", I think the answer has to be to look
> for research on whether taking in more antioxidants interferes with the
> body's own production of them or not, rather than asking chatbots (of
> course, they can be useful in finding the research in the first place).
> I'm getting more and more wary of them, to be honest. Maybe the ability
> to use chatbots wisely will be an essential skill in the future, just as
> the ability to use a search engine sensibly is now.
>
> Ben



So, *of course*, I asked a chatbot whether your criticism was valid.
It agreed! So you must be correct! But wait, chatbots aren't always
correct. This is a non sequitur. :)
BillK


Request:
Is this a valid criticism of chatbots? - Chatbots seem to be very good at
digging up lots of information related to a question, but not necessarily
at figuring out what information they should be digging up.
Sometimes the chatbot keeps repeating itself in slightly different words,
while numbering each paragraph as though each one referred to a different
aspect.
The thing that worries me most is the tendency to use what appear to be
tactics for blindsiding people: quoting lots of facts, seeming to answer
the question while actually avoiding it and/or answering different
questions instead, appealing to authority, repeating the same thing
several times in different words. I'm not saying this is deliberate, of
course, but it's interesting that they seem to have picked up these
tactics from their training material. Perhaps we're doing it to each
other more than we realise or intend.
------------------

Answer: (including references)

*Yes, this criticism of chatbots is valid.* The main concerns raised in the
criticism are:

1. *Inability to determine relevant information:* Chatbots can provide a
   vast amount of information related to a question, but they may not always
   be able to identify and focus on the most pertinent information. This can
   lead to users receiving a flood of unrelated or unnecessary data, making
   it difficult for them to find the specific answer they are looking for.

2. *Repetitive responses:* The criticism highlights the issue of chatbots
   repeating themselves in slightly different words, which can be
   frustrating for users. This can occur when the chatbot's programming or
   algorithm is not sophisticated enough to recognize that it has already
   provided the requested information or when it fails to understand the
   user's intent.

3. *Use of tactics for misdirection:* The concern about chatbots employing
   tactics such as quoting lots of facts, seemingly answering a question
   while avoiding it, answering different questions, and appealing to
   authority can be seen as a valid criticism. These tactics may be
   unintentional consequences of the chatbot's training or programming, but
   they can create confusion and make it difficult for users to find
   accurate and relevant information.

4. *Possible influence from human interactions:* The criticism also raises
   the possibility that chatbots might have unintentionally picked up these
   tactics from human interactions, which could be a concerning aspect of
   their development and usage.

Overall, these criticisms highlight the limitations and potential drawbacks
of chatbots, emphasizing the need for continuous improvement in their
design, programming, and user experience to ensure they provide accurate,
relevant, and helpful information to users.
Web Search Results (5 results):

Chatbots Are Not Sentient. Here's How They Work.
<https://www.nytimes.com/2023/02/16/technology/chatbots-explained.html>
In June, a Google engineer, Blake Lemoine, claimed that similar chatbot
technology being tested inside Google was sentient. That's false.

The AI bot has picked an answer for you. Here's how often it's bad.
<https://www.washingtonpost.com/technology/2023/04/13/microsoft-bing-ai-chatbot-error/>
The Post tested the reliability of Microsoft's Bing AI chatbot by asking it
47 tough questions and then evaluating its sources.

I broke the Bing chatbot's brain : r/bing
<https://www.reddit.com/r/bing/comments/110y6dh/i_broke_the_bing_chatbots_brain/>
2K votes, 357 comments. 75K subscribers in the bing community. A subreddit
for news, tips, and discussions about Microsoft Bing.

ChatGPT produces made-up nonexistent references
<https://news.ycombinator.com/item?id=33841672>
ChatGPT can be useful for casual directions in research, it is very good to
repeat hear-say and dominant methods. But it will nicely present ...

AI-assisted plagiarism? ChatGPT bot says it has an answer for that
<https://www.theguardian.com/technology/2022/dec/31/ai-assisted-plagiarism-chatgpt-bot-says-it-has-an-answer-for-that>
The bot doesn't work perfectly. It has a tendency to "hallucinate" facts
that aren't strictly true, which technology analyst Benedict Evans ...
-------------------------------

<https://youtu.be/f-_78a0HXCY>