[ExI] Ray Kurzweil explains how AI makes radical life extension possible

efc at swisscows.email
Thu Jun 27 07:11:09 UTC 2024



On Wed, 26 Jun 2024, Adrian Tymes via extropy-chat wrote:

> On Wed, Jun 26, 2024 at 6:04 PM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>       The general view seems to be that there is no instant fix for AI
>       hallucinations. But every improved version of AI should get better at
>       avoiding hallucinations.
> 
> 
> The moment you have to use "should" is the moment your prediction becomes a mere wish.
> 
> Are they, or are they not, getting measurably better at this?  The fact that AI even five years ago did not notably produce
> hallucinations suggests this problem has gotten worse, not better, with the current generation of AI.

But the hallucinations, as I understand it, are due to the method used 
(LLMs). Use a different method, and the hallucinations might disappear. But 
as long as you use a method where they arise spontaneously, it does seem a 
bit suboptimal to me to try and "fix" them afterwards.

