[ExI] Claude for president?
Jason Resch
jasonresch at gmail.com
Sun Mar 15 23:40:48 UTC 2026
On Sun, Mar 15, 2026 at 4:56 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Sun, Mar 15, 2026 at 4:34 PM Jason Resch via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> > On Sun, Mar 15, 2026, 3:59 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >>
> >> On Sun, Mar 15, 2026 at 3:40 PM Jason Resch via extropy-chat
> >> <extropy-chat at lists.extropy.org> wrote:
> >> > On Sun, Mar 15, 2026, 3:15 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >> >>
> >> >> On Sun, Mar 15, 2026 at 2:38 PM Jason Resch via extropy-chat
> >> >> <extropy-chat at lists.extropy.org> wrote:
> >> >> > On Sun, Mar 15, 2026, 12:49 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >> >> >> LLMs are complex enough that, even with the controls as you say,
> it
> >> >> >> seems likely that two people - or even the same person - running
> the
> >> >> >> exact same non-trivial query two times would often enough get
> >> >> >> non-identical answers.
> >> >> >
> >> >> > It seems that way, but LLMs are themselves fully deterministic. So
> long as the exact same input and context are provided, their output is the
> same. In practice, however, the tokens an LLM deterministically predicts as
> most likely are then randomly selected by a higher-level process to make
> the writing more dynamic. This is driven by the "temperature" parameter.
> But by using a pseudorandom selection with the same seed, identical output
> can be ensured.
> >> >>
> >> >> This is true in the sense that the universe may be fully
> >> >> deterministic: technically true (possibly) but unreproducible in
> >> >> practice (given the complexity and number of inputs of a LLM worth
> >> >> advising the President of the United States) due to the very high
> >> >> number of variables.
> >> >
> >> > You needn't invoke the determinism of the universe here. The context
> window is the input. The output is the input followed by a series of matrix
> multiplications. Each multiplication is deterministic. The result is
> defined entirely by the input and the series of multiplications.
> >> >
> >> > It may be a complex calculation involving a large number of
> variables, but nevertheless it is a fully deterministic and repeatable one.
> >>
> >> If you had the exact same inputs, the exact same trainings, the exact
> >> same contexts, et cetera.
> >>
> >> Which you won't, in practice. Not for anything this complex.
> >
> > Why not? There are plenty of LLM models anyone can download and run
> themselves.
>
> They are simpler models. The President would not be using those.
>
>
It doesn't matter how complex or simple the model is. They all operate on
the same principle of matrix multiplication. Larger models simply have more
matrices, or more rows and columns within them, but the algorithm doesn't
change or become non-deterministic by adding those extra matrices, rows, or
columns.
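A toy sketch of the point, using NumPy as a stand-in (the layer count and
matrix sizes here are illustrative, nothing like a real model's):

```python
import numpy as np

# Hypothetical toy "model": a fixed chain of matrix multiplications,
# standing in for the layers of an LLM after training.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 8)) for _ in range(4)]

def forward(x):
    """Apply each layer's matrix in sequence -- no randomness anywhere."""
    for w in weights:
        x = x @ w
    return x

prompt = rng.standard_normal(8)   # stand-in for an embedded context window
out1 = forward(prompt)
out2 = forward(prompt)
print(np.array_equal(out1, out2))  # identical on every run, same hardware
```

Scaling up the matrices or adding more of them changes nothing about this:
the result is still a pure function of the input and the weights.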
Of course, if the model used were proprietary and closed, verification
would be impossible. But that contradicts the premise I stated: if the
models were open source, anyone could verify the results (provided the
random seed and the exact input supplied in the prompts).
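As a concrete sketch of that verification step (again a NumPy stand-in; the
next-token scores and seed are invented for illustration):

```python
import numpy as np

def sample_token(logits, temperature, seed):
    """Temperature-scaled sampling from a model's next-token scores.
    With a fixed seed, the 'random' choice is fully reproducible."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    rng = np.random.default_rng(seed)       # seeded pseudorandom generator
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, -1.0]   # illustrative next-token scores
a = sample_token(logits, temperature=0.8, seed=42)
b = sample_token(logits, temperature=0.8, seed=42)
print(a == b)                     # same seed -> identical token every time
```

Anyone with the open weights, the prompt, and the seed can replay the same
sampling and check the answer token for token.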
Jason