[ExI] Claude for president?

Jason Resch jasonresch at gmail.com
Sun Mar 15 20:33:21 UTC 2026


On Sun, Mar 15, 2026, 3:59 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sun, Mar 15, 2026 at 3:40 PM Jason Resch via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> > On Sun, Mar 15, 2026, 3:15 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >>
> >> On Sun, Mar 15, 2026 at 2:38 PM Jason Resch via extropy-chat
> >> <extropy-chat at lists.extropy.org> wrote:
> >> > On Sun, Mar 15, 2026, 12:49 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >> >> LLMs are complex enough that, even with the controls as you say, it
> >> >> seems likely that two people - or even the same person - running the
> >> >> exact same non-trivial query two times would often enough get
> >> >> non-identical answers.
> >> >
> >> > It seems that way, but LLMs are themselves fully deterministic. So
> long as the exact same input and context are provided, their output is the
> same. In practice, however, the tokens an LLM deterministically predicts as
> most likely are then randomly sampled by a higher-level process to make
> the writing more dynamic. This is controlled by the "temperature" parameter.
> But by using a pseudorandom selection with the same seed, identical output
> can be ensured.
> >>
> >> This is true in the sense that the universe may be fully
> >> deterministic: technically true (possibly) but unreproducible in
> >> practice (given the complexity and number of inputs of a LLM worth
> >> advising the President of the United States) due to the very high
> >> number of variables.
> >
> > You needn't invoke the determinism of the universe here. The context
> window is the input. The output is produced by applying a series of matrix
> multiplications to that input. Each multiplication is deterministic. The
> result is defined entirely by the input and the series of multiplications.
> >
> > It may be a complex calculation involving a large number of variables, but
> nevertheless it is a fully deterministic and repeatable one.
>
> If you had the exact same inputs, the exact same trainings, the exact
> same contexts, et cetera.
>
> Which you won't, in practice.  Not for anything this complex.
>


Why not? There are plenty of LLMs anyone can download and run
themselves.
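The point about seeded sampling can be made concrete with a toy sketch. The model names and logit values below are made up for illustration; the structure, though, mirrors how real sampling loops work: the model's logits are deterministic, and randomness enters only at the token-selection step, which a fixed seed makes repeatable.

```python
# Minimal sketch: deterministic logits + seeded pseudorandom sampling
# yields identical output on every run. Logit values are illustrative.
import math
import random

def sample_token(logits, temperature, rng):
    # Scale logits by temperature, then softmax into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Pseudorandom draw: the same seed produces the same choice.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

def generate(logits_per_step, temperature, seed):
    # One RNG seeded once, reused across steps, as real samplers do.
    rng = random.Random(seed)
    return [sample_token(l, temperature, rng) for l in logits_per_step]

# A stand-in for a deterministic model's per-step logits.
fake_logits = [[2.0, 0.5, -1.0], [0.1, 1.8, 0.3], [1.2, 1.1, -0.5]]
a = generate(fake_logits, temperature=0.8, seed=42)
b = generate(fake_logits, temperature=0.8, seed=42)
assert a == b  # same inputs, same seed: identical token sequence
```

With temperature near zero the scaled distribution collapses toward the argmax token, which is why low-temperature runs look repeatable even without a fixed seed.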

Jason



> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

