[ExI] Von Neumann Probes

Jason Resch jasonresch at gmail.com
Mon Jan 26 01:04:52 UTC 2026


On Sun, Jan 25, 2026 at 4:49 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sunday, 25 January 2026 at 16:38, spike at rainier66.com <
> spike at rainier66.com> wrote:
>
> >
> > Your calculation is right Ben. I rounded up the .9 to 1.
> >
> > Never trust those AI bahstids. They don't know what they are calculating
> about.
>
> Evidently.
> Ironic, isn't it, that computers, that are so good at doing things that we
> can't, like complex maths, are so bad at the same things when we get them
> to behave .. more .. like ..
>

The language models have networks that are only a few dozen to a few hundred
layers deep. Any computation we expect them to perform in a single forward
pass must be something a computer algorithm can do in that fixed number of
steps.

It's clear, then, why LLMs can't multiply large numbers in their head:
multiplication isn't something that can be done in a fixed number of steps,
since the work required grows with the number of digits. Few math problems
can be solved in constant time.
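A rough illustration of the point (not a model of any particular LLM): schoolbook multiplication of two n-digit numbers takes on the order of n^2 single-digit steps, so the required step count grows with the inputs, while a network's depth stays fixed no matter how large the numbers get. The function below is a hypothetical sketch that multiplies two integers the schoolbook way and counts the single-digit multiply steps used:

```python
def multiply_counting_steps(a: int, b: int):
    """Schoolbook multiplication, returning (product, single-digit steps)."""
    xs = [int(d) for d in str(a)][::-1]  # digits of a, least significant first
    ys = [int(d) for d in str(b)][::-1]  # digits of b, least significant first
    result = [0] * (len(xs) + len(ys))
    steps = 0
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            result[i + j] += x * y  # one single-digit multiply-add
            steps += 1
    for k in range(len(result) - 1):  # propagate carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    value = int("".join(map(str, result[::-1])))
    return value, steps

print(multiply_counting_steps(12, 34))          # → (408, 4)
print(multiply_counting_steps(123456, 654321))  # 36 steps for 6-digit inputs
```

Doubling the number of digits quadruples the step count, which is exactly the kind of input-dependent work a fixed-depth pass can't accommodate.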

Jason



>
> Ah.
> I think I see where we went wrong.
> Dammit.
>
> OK, maybe we should be teaching AIs to use computers, for the things they
> aren't so good at...
>
> I think I'm staring down a rabbit hole.
>
> ---
> Ben
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>