[ExI] What can be said to be "wrong", and what is "Truth"
jef at jefallbright.net
Fri Oct 3 16:29:41 UTC 2008
On Thu, Oct 2, 2008 at 5:53 PM, Mike Dougherty <msd001 at gmail.com> wrote:
> On Thu, Oct 2, 2008 at 6:53 PM, Jef Allbright <jef at jefallbright.net> wrote:
>> Argh. ;-) Show me a functional (versus operational) model of a
>> "superposition." This is crucial. You MUST stand somewhere to be a
>> "you." I emphasize that you MUST have a subjective point of view.
>> But there is no need or basis for any attempt to define that
>> subjective POV in (fundamentally unfounded and unfoundable) objective
>> terms. Don't need it, never did, although it's quite clear from
>> cognitive and evolutionary psychology why we as individuals and as a
>> culture tend to think and reinforce our thinking in terms of discrete
>> selves, absolute truth, fear of the unknown, respect for authority,
>> and so on. But that environment of evolutionary adaptation is rapidly
>> slipping behind us and the effective heuristics of our ancestors are
>> decreasing in utility. Let the unfounded ontological assumption go,
>> and everything is seen to work as before, but according to a more
>> coherent and thus more extensible model.
> "Argh" ? You missed talk like a pirate day (9/19) I hope i'm not so
> missing your point that it causes frustration.
It's mildly frustrating to me how this kind of dialog always builds up
some energy, then runs around in loops and up against walls within
conceptual boxes closely held, until dissipating without any
appreciable progress. There are some slight derivative benefits:
someone may gain an inspiring thought or two, I sometimes gain
thoughtful new offline contacts and benefit from the practice, and
even Lee shows signs of being perturbed, not just emotionally, but
moved to think at least for a short while outside simple (and
perfectly correct) elliptical orbits of thought. But frustratingly,
just as in our domestic politics, things soon snap back to "normal."
> "unfounded ontological assumption" - I'm not sure what I have assumed
> (that's clearly a problem).
It's the unfounded assumption of the existence of an objective point
of view (which some people here get), or even of an objective measure
of where one stands in relation to a hypothetical asymptotic objective
point of view (which fewer people get). And the point I try to convey
is that from the point of view of any necessarily subjective system of
observation, there is no rational justification for any claim that our
present model of truth is nearer to or farther from Truth.
It takes only one new observation to radically revise our model of
truth, as we've seen repeatedly with models of our place within the
Earth, solar system, galaxy, ?, ... or to extend BillK's example of
theories of health in terms of displeased gods, evil humors,
imbalanced chi, build-up of toxins, homeostasis and immune function as
feedback loops, persistence and robustness of evolving structures, ?,
..., or to *any* model of how things Really Work. It's not just that
our language is necessarily imperfect, or that our measurements are
necessarily imperfect, but that fundamentally we lack any basis for
knowing how far up or down we are on the tree of subjective reality.
And that's perfectly all right. Indeed -- and this is my point --
we're better off in practical terms to acknowledge this inherent
subjectivity, removing the unwarranted conceptual bump from our model,
to reduce the friction involved in further updating our model in a
world of accelerating change.
In even more concrete terms, it's about realizing that within an
entirely subjective model -- the only coherent model -- nothing is lost
with regard to discriminating and decision-making within this model,
but the advantage -- and this amounts to a moral imperative -- is that
progress is reframed from the classical view of successively closer
approximation to Reality, to successively accelerating improvements in
the process of improving our model ... of X (it doesn't matter.)
It's a different dynamic, a vehicle requiring quite different gearing.
For virtually all of human history, in an environment relatively
unchanging in regard to human action, it has appeared "objectively
obvious" that "good" is in terms of solving problems. But solving a
problem is coherent only to the extent the problem is defined (which
until recently it has been, as we'd been living within a special case
of the more general principle I'm trying to convey.) Now, in an
environment of accelerating change, focus must shift from "solving
problems" specified explicitly or implicitly within a seldom changing
or punctuated but slowly changing model of reality, to "improving our
problem solvers" applicable to staying in the Red Queen's Race.
> You said, "you MUST have a subjective
> point of view." I agree. And so must you.
Huh. I thought it should be clear that I meant any "you".
> There is an inherent
> parallax for which we need to be aware. I can abandon my own point of
> view and merge my state of awareness completely to yours. We will
> have total agreement and also the same identity (I'm not reinforcing
> my discrete self). You can similarly abandon your viewpoint. Who
> would i/we express ourself(s) to if this were the case?
Seems you're dealing in superpositions again. My point is that every
intentional agent must, by definition, have a point of view. No
abandoning, merging, or superimposing of POV is involved. I'll assume
you've already read my follow-up post, a somewhat poetic expression
using the metaphor of a tree. It fully accommodates the necessarily
subjective view of each individual leaf (agent, human, man, woman,
athlete, artist, robot, dolphin, dog, ...) interacting with others in
its local environment, discovering agreement on the basis of their
branches combining with increasing probability in the direction of the
(assumed) root of reality.
> Perhaps the
> mind-merge does not have to be complete to find a point of agreement -
> (thinking of Venn diagram where two sets share commonality at their
Just as I tried to convey earlier in regard to the pragmatics of
semantics and the infamous "Grounding Problem", agreement does not
entail individuals merging, but rather that their actions, based on
relevant aspects of their models, are seen to be aligned. A key here
is that all agents, rooted in (descended from) a common reality
(regardless of knowing its specific nature) will necessarily have
evolved aspects of their nature (their model of reality) in common.
Thus there is an inherent basis for increasing probability of
increasing agreement on increasingly fundamental principles of
"reality" supporting the ongoing actions of any group of agents. This
is the central point of my "Arrow of Morality" message.
I'll pause here rather than pursuing the remaining tangential statements.
P.S. This morning I came across a reminder of possible relevance:
Most people are not intuitively comfortable with the concept of
mathematical induction, by which reasoning in terms of X can be shown
to be absolutely mathematically true regardless of the actual X. Its
exact analogue in programming is recursion, with which many otherwise
competent programmers remain uncomfortable. This has a direct bearing
on my point.
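The induction/recursion analogy above can be made concrete with a minimal sketch (not from the original post; the function name `sum_to` is illustrative): the base case of the function mirrors the base case of an inductive proof, and the recursive call mirrors the inductive step "if it holds for n - 1, it holds for n."

```python
def sum_to(n: int) -> int:
    """Sum the integers 0..n, structured exactly like an inductive proof."""
    if n == 0:
        # Base case: the claim holds trivially for n = 0.
        return 0
    # Inductive step: assume sum_to(n - 1) is correct, then extend to n.
    return n + sum_to(n - 1)

print(sum_to(10))  # 55, matching the closed form n*(n+1)/2
```

The point of the sketch is that trusting `sum_to(n - 1)` inside the definition of `sum_to(n)` is the same leap of reasoning as assuming the inductive hypothesis, which is why the two ideas tend to feel uncomfortable to the same people.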