<div dir="ltr">On Mon, 5 Feb 2024 at 11:15, Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br>><br>> The idea that dietary antioxidants don't affect the production of antioxidants by the body is just given as a bald statement here. No reference to any evidence, no studies quoted, no attempt at giving even a theoretical reason.<br>><br>> If any of the sources it used give any evidence, that isn't mentioned.<br>> I don't know if Gabrielle Walker gives any reasoning for her opinion.<br>><br>> Chat GPT etc. seems to be very good at digging up lots of information related to a question, but not necessarily at figuring out what information it should be digging up.<br>> Looking at the other thread about 'Newspeak', the chatbot keeps repeating itself in slightly different words, while numbering each paragraph as though they were referring to different aspects.<br>><br>> Clearly there is still a gap of understanding.<br>><br>> The thing that worries me most is the tendency to use what appear to be tactics for blindsiding people: Quote lots of facts, seem to answer the question while actually avoiding it and/or answer different questions instead, appeal to authority, repeat the same thing several times in different words. I'm not saying this is deliberate of course, but it's interesting that they seem to have picked up these tactics from their training material. Perhaps we're doing it to each other more than we realise or intend.<br>><br>> To Bill w's question "Now what?", I think the answer has to be to look for research on whether taking in more antioxidants interferes with body's own production of them or not, rather than asking chatbots (of course, they can be useful in finding the research in the first place). I'm getting more and more wary of them, to be honest. Maybe the ability to use chatbots wisely will be an essential skill in the future, just as the ability to sensibly use a search engine is now.<br>><br>> Ben<br>> _______________________________________________<br><br><div><br></div><div><br></div>So, *of course*, I asked a Chatbot if your criticism was valid.<br>It agreed!<span class="gmail_default" style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)"> </span> So you must be correct! <span class="gmail_default" style="font-family:arial,sans-serif;font-size:small;color:rgb(0,0,0)"> </span>But wait, Chatbots aren't correct. This is a non-sequitur. :)<br>BillK<br><div><br></div><div><br></div>Request -<br>Is this a valid criticism of chatbots? - Chatbots seem to be very good at digging up lots of information related to a question, but not necessarily at figuring out what information it should be digging up.<br>Sometimes the chatbot keeps repeating itself in slightly different words, while numbering each paragraph as though they were referring to different aspects.<br><div>The thing that worries me most is the tendency to use what appear to be tactics for blindsiding people: Quote lots of facts, seem to answer the question while actually avoiding it and/or answer different questions instead, appeal to authority, repeat the same thing several times in different words. I'm not saying this is deliberate of course, but it's interesting that they seem to have picked up these tactics from their training material. 
Perhaps we're doing it to each other more than we realise or intend.</div><div>------------------</div><div><br></div><div>Answer: (including references)</div><div><div id="m_3579548654664006604gmail-output"><div id="m_3579548654664006604gmail-text"><p><b>Yes, this criticism of chatbots is valid.</b> The main concerns raised in the criticism are:</p><ol><li><p><b>Inability to determine relevant information:</b>
Chatbots can provide a vast amount of information related to a
question, but they may not always be able to identify and focus on the
most pertinent information. This can lead to users receiving a flood of
unrelated or unnecessary data, making it difficult for them to find the
specific answer they are looking for.

2. Repetitive responses:
The criticism highlights the issue of chatbots repeating themselves in
slightly different words, which can be frustrating for users. This can
occur when the chatbot’s programming or algorithm is not sophisticated
enough to recognize that it has already provided the requested
information or when it fails to understand the user's intent.

3. Use of tactics for misdirection:
The concern about chatbots employing tactics such as quoting lots of
facts, seemingly answering a question while avoiding it, answering
different questions, and appealing to authority can be seen as a valid
criticism. These tactics may be unintentional consequences of the
chatbot’s training or programming, but they can create confusion and
make it difficult for users to find accurate and relevant information.

4. Possible influence from human interactions:
The criticism also raises the possibility that chatbots might have
unintentionally picked up these tactics from human interactions, which
could be a concerning aspect of their development and usage.

Overall,
these criticisms highlight the limitations and potential drawbacks of
chatbots, emphasizing the need for continuous improvement in their
design, programming, and user experience to ensure they provide
accurate, relevant, and helpful information to users.

Web Search Results (5 results)

Chatbots Are Not Sentient. Here's How They Work.
https://www.nytimes.com/2023/02/16/technology/chatbots-explained.html
In June, a Google engineer, Blake Lemoine, claimed that similar chatbot
technology being tested inside Google was sentient. That's false.

The AI bot has picked an answer for you. Here's how often it's bad.
https://www.washingtonpost.com/technology/2023/04/13/microsoft-bing-ai-chatbot-error/
The Post tested the reliability of Microsoft's Bing AI chatbot by asking it 47 tough questions and then evaluating its sources.

I broke the Bing chatbot's brain : r/bing
https://www.reddit.com/r/bing/comments/110y6dh/i_broke_the_bing_chatbots_brain/
2K votes, 357 comments. 75K subscribers in the bing community. A subreddit for news, tips, and discussions about Microsoft Bing.

ChatGPT produces made-up nonexistent references
https://news.ycombinator.com/item?id=33841672
ChatGPT can be useful for casual directions in research, it is very good to repeat hear-say and dominant methods. But it will nicely present ...

AI-assisted plagiarism? ChatGPT bot says it has an answer for that
https://www.theguardian.com/technology/2022/dec/31/ai-assisted-plagiarism-chatgpt-bot-says-it-has-an-answer-for-that
The bot doesn't work perfectly. It has a tendency to "hallucinate" facts that aren't strictly true, which technology analyst Benedict Evans ...

-------------------------------

https://youtu.be/f-_78a0HXCY