[ExI] What can be said to be "wrong", and what is "Truth"

Jef Allbright jef at jefallbright.net
Mon Oct 13 19:54:16 UTC 2008


On Mon, Oct 13, 2008 at 11:41 AM, Mike Dougherty <msd001 at gmail.com> wrote:

> The context in which I wrote the original was that (I felt) Jef
> repeatedly pigeonholed me as a "believer" in absolute anything.

You sound a bit angry.  I am sorry for my impatience.  I never
intended anything like what you apparently perceived.

> I
> made the connection to "objective" reality and what I thought was
> Jef's attempt to point to some kind of "root" in his tree analogy.

My main point had to do with our inherent inability to point to any
such root -- it may be 1 mile away or 10 or 100 miles away for all we
can ever know -- so no matter how good your trigonometry, you can't
determine the "true" pointing vector.  If you're in a tree and all you
can ever see is fractally converging branches, is there *any*
practical value to knowledge of the "ultimate point of convergence"?
Can you even form a coherent functional model of such a referent?
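
To make the triangulation problem concrete -- a toy calculation with
numbers I've invented purely for illustration (an observer who can
inspect branches only about 0.01 mile to either side, and a root
assumed to lie some unknown distance straight "down"):

import math

def local_branch_tilt(offset, root_distance):
    # Tilt (degrees from vertical) of a branch passing through a point
    # `offset` miles to the observer's side, aimed at a root
    # `root_distance` miles straight down.
    return math.degrees(math.atan2(offset, root_distance))

reach = 0.01  # miles; roughly the farthest branch we can inspect
for d in (1.0, 10.0, 100.0):
    tilt = local_branch_tilt(reach, d)
    print(f"root at {d:6.1f} mi -> local tilt {tilt:.4f} degrees")

The tilts come out around 0.57, 0.057, and 0.0057 degrees -- all of
them, and all the differences between them, smaller than any realistic
angular measurement error, so no amount of trigonometry on nearby
branches recovers the root's distance or the "true" pointing vector.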


>  We
> are left with [?] each member in the conversation has their "inherent
> subjectivity"  - I think the thread trailed away once it became clear
> there was no way to "win."

I've never hoped to win, but I often feel an unreasonable desire to be
heard and understood.


> I reiterate:  No; I am unable to provide examples.  If the exercise is
> to prove a class, then a series of object examples can illuminate the
> class but never prove it completely.  If other classes are meant to be
> used inductively, they must either be a priori agreed upon or else
> also proven by another method.  I become discouraged when faced with
> the idea that people truly care only about their own concerns.

I often feel discouraged that people grasp for absolutes (if qualified
by "contingent", fine, but contingent on what, specifically?) and
blithely refer to "the simple truth" when there can be only subjective
probabilities.  In the bigger picture, pragmatic predictive success
has never been about knowing what's correct, but about knowing more
and more about what's unlikely to be correct.  I care about this
because it has a direct bearing on our prospects for social
decision-making seen as increasingly moral over an increasing scope
of consequences.  Any assumption of absolute knowledge eventually
impedes the process (from any -- necessarily subjective -- point of
view).  This understanding is critical to shifting the focus from
expected *outcomes*, which are increasingly unspecifiable, to
increasingly effective *methods in principle* for achieving
increasingly desirable outcomes.  The race continues to evolve, while
our faithful old race car becomes obsolete.
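
A minimal sketch of what I mean, with toy numbers I've made up purely
for illustration: three rival hypotheses about a coin, where evidence
can drive the rivals' subjective probabilities toward zero but never
raise any survivor's probability to exactly 1.

def update(priors, likelihoods):
    # One Bayes step: posterior is proportional to
    # prior * P(observation | hypothesis), then renormalized.
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Invented hypotheses and probabilities, purely for illustration.
beliefs = {"fair": 1/3, "heads-biased": 1/3, "tails-biased": 1/3}
p_heads = {"fair": 0.5, "heads-biased": 0.9, "tails-biased": 0.1}

for flip in "HHHHHHHH":  # eight heads in a row
    lik = {h: (p if flip == "H" else 1 - p)
           for h, p in p_heads.items()}
    beliefs = update(beliefs, lik)

for h, p in beliefs.items():
    print(h, round(p, 6))

After eight heads, "tails-biased" is ruled out for all practical
purposes and "heads-biased" dominates, yet no hypothesis ever reaches
probability 1.0 -- we only learn more and more about what's unlikely
to be correct.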

> Another scenario is that I'm having a solipsistic argument, where
> what I perceive as agreement is a reflection of a desire for
> agreement (arbitrarily relaxing the requirement for exactness to
> achieve it).  Either way, it seems a futile effort.

I think it's damn close to futile, but not absolutely futile, and
that's what makes it interesting.

- Jef


