[ExI] Claude for president?

Jason Resch jasonresch at gmail.com
Mon Mar 16 09:41:12 UTC 2026


On Sun, Mar 15, 2026, 10:46 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sun, Mar 15, 2026 at 7:41 PM Jason Resch <jasonresch at gmail.com> wrote:
> > On Sun, Mar 15, 2026 at 4:56 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >> On Sun, Mar 15, 2026 at 4:34 PM Jason Resch via extropy-chat
> >> <extropy-chat at lists.extropy.org> wrote:
> >> > On Sun, Mar 15, 2026, 3:59 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >> >>
> >> >> On Sun, Mar 15, 2026 at 3:40 PM Jason Resch via extropy-chat
> >> >> <extropy-chat at lists.extropy.org> wrote:
> >> >> > On Sun, Mar 15, 2026, 3:15 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >> >> >>
> >> >> >> On Sun, Mar 15, 2026 at 2:38 PM Jason Resch via extropy-chat
> >> >> >> <extropy-chat at lists.extropy.org> wrote:
> >> >> >> > On Sun, Mar 15, 2026, 12:49 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >> >> >> >> LLMs are complex enough that, even with the controls as you
> say, it
> >> >> >> >> seems likely that two people - or even the same person -
> running the
> >> >> >> >> exact same non-trivial query two times would often enough get
> >> >> >> >> non-identical answers.
> >> >> >> >
> >> >> >> > It seems that way, but LLMs are themselves fully deterministic.
> So long as the exact same input and context are provided, their output is
> the same. In practice, however, the tokens a LLM deterministically predicts
> as most likely are then randomly selected by a higher level process to make
> the writing more dynamic. This is driven by the "heat" parameter. But by
> using a pseudorandom selection with the same seed, identical output can be
> ensured.
> >> >> >>
> >> >> >> This is true in the sense that the universe may be fully
> >> >> >> deterministic: technically true (possibly) but unreproducible in
> >> >> >> practice (given the complexity and number of inputs of a LLM worth
> >> >> >> advising the President of the United States) due to the very high
> >> >> >> number of variables.
> >> >> >
> >> >> > You needn't invoke the determinism of the universe here. The
> context window is the input. The output is the input followed by a series
> of matrix multiplications. Each multiplication is deterministic. The result
> is defined entirely by the input and the series of multiplications.
> >> >> >
> >> >> > It may be a complex calculation involving a large variable, but
> nevertheless it is a fully deterministic and repeatable one.
> >> >>
> >> >> If you had the exact same inputs, the exact same trainings, the exact
> >> >> same contexts, et cetera.
> >> >>
> >> >> Which you won't, in practice.  Not for anything this complex.
> >> >
> >> > Why not? There are plenty of LLM models anyone can download and run
> themselves.
> >>
> >> They are simpler models.  The President would not be using those.
> >
> > It doesn't matter how complex or simple the model is. They all operate
> on the same principle of matrix multiplication. Larger models simply have
> more matrices, or more rows or columns, but the algorithm doesn't change or
> become non-deterministic by using these additional matrices, rows, or
> columns.
>
> The problem lies in precisely replicating all the inputs, weights, et
> al.  For the more complex models, the plausibility is akin to relying
> on, as you put it, the determinism of the universe: in theory it would
> be possible if you had all that information, in practice it is never
> possible because you don't.
>


The context window is finite. For today's models it can be up to a few
hundred KB of text.

I don't know why you think this context couldn't be shared and distributed
if the goal was to support auditability and verifiability of outputs.

The government distributes files (images, PDFs) much larger than this all
the time. In a hypothetical future where a leader wanted to prove it had
followed the advice of an AI, it would only need to reference the model it
used, this context, and the random seed that was supplied. Note that many
LLM APIs already let you specify the random seed to support reproducibility
of outputs.

See this article, for instance:
https://medium.com/@2nick2patel2/llm-determinism-in-prod-temperature-seeds-and-replayable-results-8f3797583eb1
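To make the seed point concrete, here is a toy sketch (names and numbers
are purely illustrative, not any real model's API): the forward pass is
deterministic, the only randomness is the sampling draw, and seeding that
draw makes two independent replays agree token for token.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token from a dict of {token: logit}.

    The model's forward pass is deterministic; the only randomness is
    this draw from rng. temperature == 0 means greedy argmax.
    """
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax weighting, then one draw from the seeded RNG.
    weights = [(tok, math.exp(score / temperature)) for tok, score in logits.items()]
    total = sum(w for _, w in weights)
    r = rng.random() * total
    for tok, w in weights:
        r -= w
        if r <= 0:
            return tok
    return weights[-1][0]  # guard against rounding at the boundary

# Stand-in for the model's deterministic output: same context in,
# same logits out. (Toy numbers.)
logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}

# Two independent replays with the same published seed...
run_a = [sample_token(logits, 0.8, rng) for rng in [random.Random(1234)] for _ in range(10)]
rng_b = random.Random(1234)
run_b = [sample_token(logits, 0.8, rng_b) for _ in range(10)]
# ...produce identical "random" token choices.
```

Publish the seed along with the context, and the "randomness" is no
obstacle to verification.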


To your point, however, there is a growing class of cases where this breaks
down in practice: reasoning models that make live web searches
mid-inference, and models computed in distributed environments where matrix
operations are performed and recombined in different orders, inducing
different rounding errors. Both can lead to non-deterministic behavior.
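The rounding-error point is easy to demonstrate: floating-point addition is
not associative, so the order in which a distributed system combines its
partial sums changes the result, yet replaying any fixed order reproduces
that order's result exactly.

```python
# Floating-point addition is not associative: the order in which a
# distributed system combines partial sums changes the rounding error.
values = [1e16, 1.0, -1e16, 1.0]

# One reduction layout: strict left-to-right. The 1.0 is absorbed into
# 1e16 (below one ulp) and lost.
left_to_right = ((values[0] + values[1]) + values[2]) + values[3]

# Another layout: pair up partial sums before combining. Here the two
# 1.0s meet each other first and survive.
pairwise = (values[0] + values[2]) + (values[1] + values[3])

# The two orders disagree, but replaying a given order reproduces its
# result exactly.
replay = ((values[0] + values[1]) + values[2]) + values[3]
```

This is why publishing the combination order (as suggested below for
distributed inference generally) restores repeatability.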

I see this as a practical problem, and one that could be engineered around
if obtaining repeatable output were the goal.

For example, the web search results could be included as part of the shared
context, and, if a distributed calculation were used, the order in which the
partial results were computed and combined could be published, so that
anyone could rerun the calculation in the same order and obtain the same
rounding errors.

LLMs may be complex, but they're not magic. They're simply the result of a
large computation, and computations are reproducible so long as all the
necessary information about how they were carried out is preserved.
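One way to preserve that information is an audit record. Here is a minimal
sketch (all field names and values are hypothetical, not any real system):
publish the model identifier, the full context, the seed, and the sampling
settings, plus a digest over a canonical serialization so anyone can check
they are replaying exactly the record that was published.

```python
import hashlib
import json

# Hypothetical audit record: everything needed to replay the computation
# that produced a piece of published AI advice.
record = {
    "model": "example-advisor-model-v1",  # assumed model identifier
    "context": "Full prompt, documents, and web results would go here.",
    "seed": 1234,
    "temperature": 0.7,
}

# Canonical serialization (sorted keys, fixed separators) so every
# verifier computes the same digest from the same record.
canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Any change to the record (a different seed, an altered context) yields a
different digest, so the published advice and its inputs can't be quietly
swapped after the fact.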

You say today's LLM service providers don't share this information, and on
that we agree. But my point is that it doesn't have to be this way.

Jason
