[ExI] Ray Kurzweil explains how AI makes radical life extension possible
efc at swisscows.email
Fri Jun 28 10:38:36 UTC 2024
On Fri, 28 Jun 2024, BillK via extropy-chat wrote:
> On Thu, 27 Jun 2024 at 08:13, efc--- via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>> But the hallucinations, as I understand it, are due to the method used
>> (LLMs). Use a different method, and the hallucinations might disappear. But as
>> long as you use a method where they arise spontaneously, it does seem a
>> bit suboptimal to me to try and "fix" it afterwards.
>
>
> Yes. I have been chatting to a couple of AIs about how to solve the
> hallucination problem.
> From their replies, I realised that I was really asking them how to
> create a better AI.
> They were describing AI improvements that would mean creating the next
> generation of AIs.
> So AI is a work in progress and the next version will be better! :)
Yes, I agree. We might have another AI winter, where current progress gives
way to a few years of quiet research that yields the next breakthrough. It
seems as if after every AI winter, we solve more and more problems.
Perhaps "the" AI will be a coordinating function that successfully
integrates several techniques that we already have?
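As a rough sketch of what I mean (the component names here are hypothetical,
just to illustrate the idea, not any existing system): a coordinating
function could retrieve supporting documents, let an LLM draft an answer
grounded in them, and only return the answer if a separate verifier accepts
it, abstaining otherwise.

    # Hypothetical sketch of a "coordinating function" that combines
    # techniques we already have: retrieval, an LLM, and a verifier.
    # The callables are placeholders, not real libraries.
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Answer:
        text: str
        sources: List[str]

    def coordinate(query: str,
                   retrieve: Callable[[str], List[str]],
                   generate: Callable[[str, List[str]], str],
                   verify: Callable[[str, List[str]], bool]) -> Optional[Answer]:
        docs = retrieve(query)          # e.g. a search index or database
        draft = generate(query, docs)   # e.g. an LLM prompted with the docs
        if verify(draft, docs):         # e.g. an entailment or fact check
            return Answer(text=draft, sources=docs)
        return None                     # abstain rather than hallucinate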
>
> BillK