[ExI] Claude for president?
Adrian Tymes
atymes at gmail.com
Mon Mar 16 02:46:04 UTC 2026
On Sun, Mar 15, 2026 at 7:41 PM Jason Resch <jasonresch at gmail.com> wrote:
> On Sun, Mar 15, 2026 at 4:56 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> On Sun, Mar 15, 2026 at 4:34 PM Jason Resch via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>> > On Sun, Mar 15, 2026, 3:59 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> >>
>> >> On Sun, Mar 15, 2026 at 3:40 PM Jason Resch via extropy-chat
>> >> <extropy-chat at lists.extropy.org> wrote:
>> >> > On Sun, Mar 15, 2026, 3:15 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> >> >>
>> >> >> On Sun, Mar 15, 2026 at 2:38 PM Jason Resch via extropy-chat
>> >> >> <extropy-chat at lists.extropy.org> wrote:
>> >> >> > On Sun, Mar 15, 2026, 12:49 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> >> >> >> LLMs are complex enough that, even with the controls as you say, it
>> >> >> >> seems likely that two people - or even the same person - running the
>> >> >> >> exact same non-trivial query two times would often enough get
>> >> >> >> non-identical answers.
>> >> >> >
>> >> >> > It seems that way, but LLMs are themselves fully deterministic. So long as the exact same input and context are provided, their output is the same. In practice, however, the tokens an LLM deterministically predicts as most likely are then randomly selected by a higher level process to make the writing more dynamic. This is driven by the temperature parameter. But by using a pseudorandom selection with the same seed, identical output can be ensured.
>> >> >>
>> >> >> This is true in the sense that the universe may be fully
>> >> >> deterministic: technically true (possibly) but unreproducible in
>> >> >> practice (given the complexity and number of inputs of an LLM worth
>> >> >> advising the President of the United States) due to the very high
>> >> >> number of variables.
>> >> >
>> >> > You needn't invoke the determinism of the universe here. The context window is the input. The output is the input followed by a series of matrix multiplications. Each multiplication is deterministic. The result is defined entirely by the input and the series of multiplications.
>> >> >
>> >> > It may be a complex calculation involving a large number of variables, but nevertheless it is a fully deterministic and repeatable one.
>> >>
>> >> If you had the exact same inputs, the exact same trainings, the exact
>> >> same contexts, et cetera.
>> >>
>> >> Which you won't, in practice. Not for anything this complex.
>> >
>> > Why not? There are plenty of LLM models anyone can download and run themselves.
>>
>> They are simpler models. The President would not be using those.
>
> It doesn't matter how complex or simple the model is. They all operate on the same principle of matrix multiplication. Larger models simply have more matrices, or more rows or columns, but the algorithm doesn't change or become non-deterministic by using these additional matrices, rows, or columns.
The problem lies in precisely replicating all the inputs, weights, et
cetera. For the more complex models, the plausibility is akin to relying
on, as you put it, the determinism of the universe: in theory it would
be possible if you had all that information; in practice it is never
possible because you don't.
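[As an aside, the mechanism Jason describes can be sketched in a few lines. The following is a toy illustration, not any real model or API: a deterministic "forward pass" stands in for the matrix multiplications, and a seeded pseudorandom generator does the temperature sampling. Given identical inputs and the same seed, the output sequence is identical on every run; the names and logit function here are invented for illustration.]

```python
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]

def toy_logits(context):
    # Deterministic stand-in for the model's matrix multiplications:
    # the logits are a fixed function of the context alone.
    n = len(context)
    return [(n * 7 + i * 3) % 5 + 0.1 * i for i in range(len(VOCAB))]

def sample_tokens(temperature, seed, n_tokens):
    # Seeded pseudorandom generator: same seed -> same random picks.
    rng = random.Random(seed)
    context, out = [], []
    for _ in range(n_tokens):
        logits = toy_logits(context)              # deterministic step
        scaled = [l / temperature for l in logits]
        m = max(scaled)                           # for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        token = rng.choices(VOCAB, weights=probs, k=1)[0]
        out.append(token)
        context.append(token)
    return out

run_a = sample_tokens(temperature=0.8, seed=42, n_tokens=10)
run_b = sample_tokens(temperature=0.8, seed=42, n_tokens=10)
assert run_a == run_b  # identical inputs + identical seed -> identical output
```

[The disagreement in this thread is not about the code above, which both sides would accept; it is about whether, for a frontier-scale deployment, one can ever capture the full input state (weights, context, seed, numerical environment) needed to reproduce a run.]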