[ExI] The "Unreasonable" Effectiveness of Mathematics in the Natural Sciences
jef at jefallbright.net
Mon Sep 29 20:17:38 UTC 2008
On Mon, Sep 29, 2008 at 11:35 AM, Mike Dougherty <msd001 at gmail.com> wrote:
> On Mon, Sep 29, 2008 at 11:01 AM, Jef Allbright <jef at jefallbright.net> wrote:
>> Tell me, Human, how can any system, functioning exactly according to
>> its nature within its environment, be "wrong", other than with respect
>> to a particular (necessarily subjective) context from which to make
>> such an assessment? Does it seem to you that "Truth" is somehow
>> diminished, when it is accepted as "merely" the best truth presently known?
> Perhaps you have a much larger point in mind, but I'll add this
> response to the above:
I think you've roughly grokked an aspect of the simple something I've
been trying to say. Frankly, I'm repeatedly boggled by how this
concept is so apparently alien to so many, but then, I've always felt
> Within a particular context, the best approximation of truth may be
> verified as good enough.
Well, you're still displaying the presumption of a point of view
somehow outside the system from which to distinguish "best
approximation" from "good enough" approximation, and your use of
"verified" seems again to imply some reference standard outside the
system. But further down, you seem to have captured at least part of
it, which is why I said you've (only) roughly grokked my point.
It's like the (oversimplified) difference between Positivism and
Pragmatism: For the Positivist, beliefs are expressions about
reality. For the Pragmatist, beliefs are expressions of reality. The
distinction is the functional relationship of the observer to the observed.
> If the same principle is applicable to a
> different context, I believe that principle has a greater measure of
> this property defined as truth.
So to paraphrase: if the same model appears to apply also to a
different context (i.e. to an increasing context), then we are
justified in increasing our estimation of the model's correspondence.
Then yes, this is a simple variation on the principle of Maximum
Entropy, closely related to Occam's Razor and Bayes' Theorem.
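The Bayesian reading above can be sketched numerically: treat each successful application of a model in a new context as evidence, and its posterior credence climbs toward 1 without ever reaching it. This is only an illustration of the idea, not anything from the thread; the likelihood values (0.9 and 0.3) are made-up assumptions:

```python
# Toy illustration: credence in a model rises asymptotically toward 1
# as it keeps predicting correctly in new contexts, but never reaches
# the "absolute" -- an asymptotic limit, not a verified Truth.

def bayes_update(prior, p_correct_if_true, p_correct_if_false):
    """Posterior probability of the model after one correct prediction."""
    numerator = p_correct_if_true * prior
    evidence = numerator + p_correct_if_false * (1.0 - prior)
    return numerator / evidence

credence = 0.5  # start agnostic
for context in range(10):
    # assumed likelihoods: a true model predicts correctly 90% of the
    # time; a false model gets lucky 30% of the time
    credence = bayes_update(credence, 0.9, 0.3)

print(round(credence, 6))  # 0.999983
```

Each confirming context multiplies the odds by 3 (0.9/0.3), so ten confirmations take the credence from even odds to 59049/59050, asymptotically close to, but never at, certainty.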
> If this principle can be used to
> correctly predict the situation in new contexts, this further measures
> the principle's approach to an ideal Truth. Since it is arguably
> impossible to have verified truth in absolutely every context,
Why do you say "arguably"? How might it not be impossible (unless one
were to argue from Providence)?
> we must
> accept that "best presently known truth" may only continue to approach
> the absolute (until disproved?). I attempt to minimize confusion by
> treating "Truth" as an asymptotic limit to the maximum measurable truth.
Here we go again, speaking of the "asymptotic limit to the maximum
measurable truth" as if it were meaningful (as if it could be
functionally modeled within the system).
> In that case, the absolute is not minimized because it is an
> ideal that may never be reached.
> It's a difficult topic because it's self-referential (either the
> subject is referring to itself, or Truth is somehow reflective of
> its own value)
Stop trying to model "Truth" in your statements about truth and the
problem disappears, with nothing (actual) lost.
More information about the extropy-chat mailing list