[ExI] The "Unreasonable" Effectiveness of Mathematics in the Natural Sciences

Jef Allbright jef at jefallbright.net
Tue Sep 30 16:36:13 UTC 2008


On Mon, Sep 29, 2008 at 5:14 PM, Mike Dougherty wrote:

> A more rigorous definition of 'people' may be
> required (are children considered people?  are cannibals considered
> people?)

Which is why I often prefer "agents" as in adaptive agents,
intentional agents, etc., so that our thinking on these matters is
extensible to self-aware non-biological machines, augmented dolphins,
... any system that acts to promote a model of its values into its
future.


> It's amazing we
> can communicate despite all the ways communication can fail.

Yes, our communication is quite prone to error, especially when
constrained within this medium of text, and any time we range outside
the bounds of commonly shared context.

[I've always been intrigued by science fiction visions of
super-efficient communication, usually portrayed between telepathic
twins, members of a hive mind, or between super-geniuses implementing
their own system of signs.  Oh well...  Once in a while, I do enjoy
that kind of flow with the rare individual of compatible intelligence
and background.  In between I try to content myself with the poetry of
the rare exceptionally well-written research paper or book.  ;-)]


>> It's like the (oversimplified) difference between Positivism and
>> Pragmatism:  For the Positivist, beliefs are expressions about
>> reality.  For the Pragmatist, beliefs are expressions of reality. The
>> distinction is the functional relationship of the observer to the
>> observed.
>
> fwiw - I consider myself reasonably intelligent, but I'm completely
> lost by that analogy.

In my opinion, this goes to the heart of the present topic of
mathematical Platonism.  I can't do it justice in this space, but
Positivism inherits the venerable bias, the popular transparent
assumption, of an observer making more or less justified statements
based on its observations of reality.  It's incoherent because such a
relationship between observer and observed can't be modeled.  When I
hear statements like that, I tend to visualize a graph of
"correspondence with reality" that folds back on itself in an ugly
way, alerting me that something seems to be wrong.

In comparison, a pragmatic view sees the observer as an adaptive
system embedded in "reality" with its model of "truth" tending to
improve with increasing coherence over increasing context of
interaction.  On this view, the "truth" of a model is assessed in
terms of its perceived relative effectiveness within a particular
context, with no need whatsoever to model or otherwise entertain the
notion of an absolute "Truth." When I visualize this line of
reasoning, I see a monotonic correspondence with reality (no folding
back on itself) continuing until it fades out to the limits of
observation.

Indeed, in a pseudo-Gödelian sense, isn't it clearly incoherent to
suppose that the "Truth" of the reality of an evolving universe is
somehow constant or absolute at all points along its evolution from
the Big Bang?  And if one attempts to defend that belief, isn't one
directly confronted with the immense disparity in information content
between the world in which we interact and any conceivable set of
Platonic priors?

<snip>

> How Zen.  If you can remove the thing from its description, can that
> thing be properly described?

This goes to the heart of the infamous "Grounding Problem" in machine
intelligence and to a pragmatic view of semantics.  Simply put, the
"meaning" of any referent corresponds with its observed effect within
a particular context.  No end-to-end grounding is ever ultimately
needed, nor is it ultimately possible.  This strikes me as especially funny
when it involves would-be AI creators imagining that a "relatively
simple" computer program could encapsulate a process delivering
"intelligence" while they remain blithely unaware and unconcerned
about the essential contribution of the layers of software, microcode,
electronic hardware, turtles all the way down (within an
environment of adaptation supporting such activity).

I see Lee suggesting that our discussion is becoming "high falutin'"
and off topic, <smile>, but it does appear to me that once again we're
near the point of significantly diminishing returns.  Feel free to
contact me offlist if you wish to pursue this further.

- Jef



More information about the extropy-chat mailing list