[ExI] Will advancing AI hinder or help online discussion groups?
BillK
pharos at gmail.com
Mon Dec 23 14:13:18 UTC 2024
On Mon, 23 Dec 2024 at 12:24, Adrian Tymes via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> You presume error-free AIs that are capable of human-grade imagination and are in every way superior to humans. Reality is falling far short of that, and that might not just be in the short term.
>
> Besides, humans find it quite easy to criticize even the most logically perfect human responses, and have for millennia. Consider how humans would respond if you had one of these hypothetical "perfect" AIs but people thought they were speaking to a person. Do you honestly believe they would not criticize it?
>
> It can already take hours of work to completely rebut a Gish gallop, and that is demonstrably well within human capability. Getting people to opt out of further discussion because rebuttal takes so much effort is rather the point of spewing so much incorrectness.
The 'thinking' AIs that have appeared in the last few weeks seem to be
a big improvement over earlier chatbots. But I agree that any AI
responses should be checked for factual accuracy.
I have realised that AIs are very good at doing as they are told. So if
you ask an AI to explain in detail why X is a bad idea and to list its
weaknesses, it will follow instructions and produce a long report for
you. We can therefore use one AI to help us criticise another AI's report.
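As a rough illustration, here is a minimal sketch of that "AI
criticising AI" pattern. It assumes the OpenAI Python SDK (the openai
package) with an API key in the OPENAI_API_KEY environment variable;
the model name, prompts, and function names are just placeholders, not
a recommendation.

  # Minimal sketch: use one AI call to critique the output of another.
  # Assumes the OpenAI Python SDK (pip install openai) and an API key
  # in OPENAI_API_KEY. Model name below is a placeholder.
  from openai import OpenAI

  client = OpenAI()
  MODEL = "gpt-4o-mini"  # placeholder; use whatever model you have

  def generate_report(topic: str) -> str:
      """Ask the AI to explain in detail why `topic` is a bad idea."""
      response = client.chat.completions.create(
          model=MODEL,
          messages=[{
              "role": "user",
              "content": f"Explain in detail why {topic} is a bad idea "
                         f"and list its weaknesses.",
          }],
      )
      return response.choices[0].message.content

  def critique_report(report: str) -> str:
      """Ask a second AI call to criticise the first AI's report."""
      response = client.chat.completions.create(
          model=MODEL,
          messages=[{
              "role": "user",
              "content": "Criticise the following report: point out "
                         "factual errors, weak arguments, and claims "
                         "that should be checked against primary "
                         "sources.\n\n" + report,
          }],
      )
      return response.choices[0].message.content

  if __name__ == "__main__":
      report = generate_report("banning online discussion groups")
      print(critique_report(report))

The same pattern works with any chat-style API: generate a report with
one call, then feed it to a second call with instructions to criticise it.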
I think some people have already used AI agents to argue / discuss
online with other AI agents. For the LOLs. :)
BillK