[ExI] NYT ninny

Jef Allbright jef at jefallbright.net
Wed May 14 19:12:11 UTC 2008


On Tue, May 13, 2008 at 9:00 PM, Mike Dougherty <msd001 at gmail.com> wrote:

>  I was thinking of analogy between a broadcasting transmitter sending a
>  signal with original intensity being observed at decreasing power at
>  greater distances.  However, the 'distance' is not measured
>  physically, but by the subjective difference in perspective.  I meant
>  subjective in the sense that it is measured by each individual without
>  some platonic topological reference point.  From the original point
>  about "stupid" or the lack of intelligence- perhaps you have now
>  perceived me as less effective at communicating my intention than
>  yourself.  I would agree that I have observed this also.  You mustn't
>  assume that my ineffectual transmission of meaning is necessarily
>  directly correlated with my ability to understand your meaning.

Yes, the cross-sectional intensity falls off as the inverse square of
the distance.  But here are some problems with your analogy applied to
memetics or to innovation within society:
*  The effective cross-section of the receiver can be increased
arbitrarily, either by making it physically larger to intercept more
of the radiated power -- in principle one could completely enclose
the transmitter and recover nearly all of it -- or, assuming the
signal is periodic, by using a phased array of receiving elements.
*  The received power has no direct correlation with the signal (the
information content).  In fact, moving away from old-fashioned AM to
increasingly sophisticated (matched) coding leads to increasingly
robust communication that increasingly resembles random noise, as the
sketch below suggests.  [Which has profound implications.]
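
To make that second bullet concrete, here's a minimal Python sketch
(message text and parameters invented): the better matched the coding,
the closer the byte stream sits to maximum entropy -- that is, the
more it resembles random noise to a naive observer.

    # Sketch: efficient coding pushes a message toward maximum entropy,
    # so a well-coded stream "looks like" random noise.
    import math
    import os
    import zlib
    from collections import Counter

    def byte_entropy(data: bytes) -> float:
        """Shannon entropy in bits per byte (maximum is 8.0)."""
        n = len(data)
        return -sum(c / n * math.log2(c / n)
                    for c in Counter(data).values())

    message = b"the quick brown fox jumps over the lazy dog " * 200
    coded = zlib.compress(message, 9)  # stands in for matched coding
    noise = os.urandom(len(coded))     # true random reference

    print(f"plain text : {byte_entropy(message):.2f} bits/byte")
    print(f"compressed : {byte_entropy(coded):.2f} bits/byte")
    print(f"true noise : {byte_entropy(noise):.2f} bits/byte")

The compressed stream scores nearly as high as true noise on this
simple measure, even though it carries all the original information.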

So I find your analogy of signal diminishing with distance
inapplicable unless we reframe it as an aspect of the principle that
all action is necessarily local, and thus causal chains tend to
dissipate with distance...
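
And to make the first bullet concrete, a back-of-the-envelope sketch
(isotropic radiator assumed, all values illustrative): the fraction of
radiated power a receiver captures is just its aperture area over the
surface area of the sphere at that range, which is why a large enough
aperture -- up to a fully enclosing sphere -- defeats the
inverse-square falloff.

    # Intercepted power for an isotropic radiator: fraction captured
    # equals receiver area / surface area of the sphere at range r.
    import math

    def intercepted_power(p_tx: float, area_rx: float, r: float) -> float:
        """Watts captured by an aperture of area_rx (m^2) at range r (m)."""
        return p_tx * area_rx / (4 * math.pi * r ** 2)

    p_tx = 100.0  # transmitter power in watts, illustrative
    for r in (10.0, 100.0, 1000.0):
        p = intercepted_power(p_tx, 1.0, r)
        print(f"r = {r:6.0f} m, 1 m^2 aperture: {p:.6f} W")

    # Enclose the transmitter in a sphere of radius r and the aperture
    # equals the full sphere area -- all the power is recovered.
    r = 10.0
    print(intercepted_power(p_tx, 4 * math.pi * r ** 2, r), "W")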


>  I had visualized signal to be a measurable pattern of intelligent
>  behavior from one 'cognoscenti' to another.

It might be worthwhile here to suggest that the hallmark of
intelligent action is that it tends to maximize the intended while
minimizing the unintended.  As with the communication analogy in my
second bullet point above, increasing effectiveness implies increasing
subtlety (for any given context).


>  Again, I was making
>  analogy to radio/EM broadcast power.  Some transmitters broadcast with
>  more power than others, but that doesn't imply their programming is
>  better or more right.  The background noise to which I later referred
>  is what would be observed when there is no detectable meaning or
>  pattern on any particular carrier.  Whether this is due to "the
>  ignorant masses" mindlessly chattering over their nearest cognoscenti
>  or something equivalent to encryption between parties makes no
>  difference - it still has no discernible value without the proper
>  codec.


Buried within your mixed metaphors I still detect a hint of belief
that there is an objective "true signal" to be found.  To this I would
offer that coherence by no means entails Truth, while coherence over
increasing context entails increasing probability of Truth.  More
concretely, on the one hand, simple beliefs such as those held by
children, primitive societies, and religious sects can be very
coherent within their narrow context while being seen as completely
untrue from a larger context.  On the other hand, high intelligence is
quite adept at synthesizing a coherent model from any given context,
again having no direct relationship to Truth.  We can certainly say
what cannot be (within a specified context), but we are fundamentally
unable to make specific predictions to the extent the future context
is unknown.  More directly to the topic of this thread, the
cognoscenti can be recognized by their awareness of what cannot be in
the present context, but their competence at prediction must vary
inversely with specificity (rather than distance).  There is no free
lunch.  Successful prediction is not about getting it right, but about
not getting it wrong; a toy illustration follows.
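
One way to see that inverse relationship numerically (a toy sketch
assuming a well-calibrated Gaussian forecaster, numbers invented): the
more specific the claimed prediction, the lower the probability of
being right, no matter how good the forecaster.

    # Toy sketch: for a calibrated Gaussian forecast, the probability
    # of being "right" falls as the prediction becomes more specific.
    import math

    def hit_probability(half_width: float) -> float:
        """P(outcome within +/- half_width sigmas of the forecast mean)."""
        return math.erf(half_width / math.sqrt(2))

    for hw in (2.0, 1.0, 0.5, 0.1):
        print(f"predict within +/-{hw:3.1f} sigma: "
              f"P(correct) = {hit_probability(hw):.3f}")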


>  Do you also find in the tendency to work with greater degree of
>  abstraction that you are either in agreement with entire classes of
>  conclusion (despite particular instances that may be wholly off-base)
>  or that you are rarely in agreement with anyone that does not accept
>  every instance implied by your generalization?  I'm not asking to be
>  confrontational; I feel I commonly experience exactly this situation.

It's important to operate at an appropriate level of abstraction.
Continuing the electrical theme, this can be thought of as "impedance
matching": maximizing power transfer between complex impedances.
Personally, I tend to be a very visual thinker, and I'm almost
constantly aware of (imagined) geometric forms representing the topic
at hand.  So, for example, if I can't make sense of what you're
saying, I visualize this as something like a 3D scatter plot showing
clusters of probability density.  If someone says something that
matches my model of reality only within a narrow region, then I may
imagine a truncated plot or graph superimposed on my own (or vice
versa).  If the geometry has ripples, or worse, folds back on itself
non-monotonically, these regions merit particular interest, since a
proper mapping of model to reality should (ideally) be flat across
the entire domain.  Operating at a high level of abstraction allows
one to rapidly detect when something's "not quite right", but
effective action requires that one perform the necessary
transformation, or "impedance matching", to deal with the environment
of interaction on its own terms.  [Something I often don't take the
time, or choose, to do.]
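
For anyone who wants the electrical analogy made literal, a small
sketch (component values arbitrary): the power delivered to a load
peaks when the load impedance is the complex conjugate of the source
impedance.

    # Maximum power transfer: delivered power peaks when the load
    # impedance is the complex conjugate of the source impedance.
    def load_power(v_src: float, z_src: complex, z_load: complex) -> float:
        """Average power in watts dissipated in z_load (v_src in volts RMS)."""
        i = v_src / (z_src + z_load)      # loop current (phasor)
        return abs(i) ** 2 * z_load.real  # P = |I|^2 * R_load

    v_src = 10.0        # volts RMS, illustrative
    z_src = 50 + 20j    # ohms, illustrative source impedance

    for z_load in (50 + 20j, 100 + 0j, 25 - 20j, 50 - 20j):
        print(f"Z_load = {z_load}: {load_power(v_src, z_src, z_load):.3f} W")
    # The conjugate match (50-20j ohms) delivers the most power.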



>  >  I would agree that the relationship of individual contributors to
>  >  technological innovation is changing much as you suggest, but as I
>  >  pointed out in my earlier post, I think what's most significant is not
>  >  the direct discovery/development but the evolution of increasingly
>  >  effective structures supporting discovery/development.
>
>  Would you say that these structures represent less the achievement of
>  any particular individual, and increasingly illustrate the emergence
>  of a different order of self-organization?

Yes, the latter was my intended point.


>  >  Retrospectively, such structures are often recognized as particularly
>  >  elegant, in sharp contrast to your view that innovation tends to
>  >  depend on general breadth of knowledge.  There's a strong analogy to
>  >  genetic programming, where success depends on a diverse set of
>  >  possibilities, exploited via a strong model of probabilities.
>
>  ... what you perceived as my view from a single email on the subject.
>  I would like to clarify that my point was regarding the historical (?)
>  idea that a genius possessed the ability to apply domain knowledge
>  from one field in an apparently unrelated field

Yes, this corresponds with biological evolution's exploitation of
genetic recombination.


> [example omitted to
>  prevent confusion with an instance-level disagreement]

Thanks for that.


> I do think
>  this is an effective way to assess the ability to bring previous
>  experience to a new situation (which must be at least some part of
>  general adaptive intelligence), but I would also agree that there are
>  elegant (to adopt the term you used) examples of innovative
>  advancements in a narrow field relying solely on internally consistent
>  propositions and conclusions.


Yes, while mutation and recombination continue to apply to both
genetics and processes of human innovation, there are increasingly
competent ways to model and select from that (not so random)
distribution.
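
As a cartoon of that point (a toy sketch, every parameter invented):
mutation and crossover supply the variety, while selection against a
fitness measure makes the sampling "not so random".

    # Toy sketch: mutation + recombination generate variety; selection
    # concentrates the search on the promising region.
    import random

    random.seed(1)
    N = 32
    fitness = sum  # OneMax: count the 1-bits (toy objective)

    def crossover(a, b):
        cut = random.randrange(1, N)
        return a[:cut] + b[cut:]

    def mutate(bits, rate=0.02):
        return [b ^ (random.random() < rate) for b in bits]

    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(50)]
    for generation in range(40):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]  # selection: keep only the best candidates
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(40)]
    print("best fitness:", fitness(max(pop, key=fitness)), "of", N)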


>  >  >  I agree that what was once considered intelligence is often lost in
>  >  >  background noise
>  >
>  >  Well, that wasn't my point, and isn't my belief.  I think we are still
>  >  within the developmental window where a strong individual intelligence
>  >  can make astounding progress, not by grasping all the relevant
>  >  knowledge, but by having a very good grasp of sense-making.
>
>  Can I replace "a good grasp of sense-making" with 'intuition'?

To the extent that "intuition" represents heuristics
evolutionarily/developmentally encoded into the agent, then yes.
Again, my point is that we now have at hand the capability to
intentionally improve our heuristics.


>  I feel
>  that we would likely be in agreement with what you are expressing
>  here.  What mechanism is employed to somehow discover the optimal
>  solution with minimal trial/testing?

Bayes would be an excellent beginning.
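
To gesture at what I mean (a minimal sketch, all numbers invented): a
single application of Bayes' rule shows how each observation
reallocates belief across candidate solutions, letting one discard
weak candidates before paying for a physical trial.

    # Minimal Bayes update: reallocate belief across hypotheses as
    # evidence arrives, pruning candidates before costly trials.
    def bayes_update(priors, likelihoods):
        """Posterior P(H|E) for each hypothesis H, by Bayes' rule."""
        joint = [p * l for p, l in zip(priors, likelihoods)]
        total = sum(joint)
        return [j / total for j in joint]

    priors = [0.5, 0.3, 0.2]       # invented prior beliefs in designs A, B, C
    likelihoods = [0.1, 0.7, 0.4]  # invented P(test result | design works)

    for name, p in zip("ABC", bayes_update(priors, likelihoods)):
        print(f"design {name}: posterior {p:.3f}")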


> Perhaps this is an example of
>  the self-organizing principle I mentioned above?  Does the "strong
>  individual intelligence" contribute as an ego-driven will, or an
>  efficient "sense-making" drone to a hive process?

Good questions for a possible follow-up at a later time.  I've
exhausted my self-imposed time budget for now.  For me, an effective
answer to these questions requires a more extensible concept of "self"
rather than trying to imagine organizations composed of discrete
selves.


>  I don't intend to be right or to prove one point is any better than
>  another.  To me, a good discussion is about the process of getting to
>  an agreement.

Or rather, an understanding encompassing the separate points of view.


>  Maybe you'll answer the questions I've asked and
>  pose others in this vein - thanks in advance if you do.

Thank you.  And back to work for me.

- Jef


