[ExI] Ray Kurzweil explains how AI makes radical life extension possible
Adrian Tymes
atymes at gmail.com
Thu Jun 27 00:22:13 UTC 2024
On Wed, Jun 26, 2024 at 6:04 PM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> The general view seems to be that there is no instant fix for AI
> hallucinations. But every improved version of AI should get better at
> avoiding hallucinations.
>
The moment you have to use "should" is the moment your prediction becomes a
mere wish.
Are they, or are they not, getting measurably better at this? AI from even
five years ago did not notably produce hallucinations, which suggests the
problem has gotten worse, not better, with the current generation of AI.