[ExI] NYT ninny

Mike Dougherty msd001 at gmail.com
Wed May 14 04:00:13 UTC 2008


On Tue, May 13, 2008 at 10:36 PM, Jef Allbright <jef at jefallbright.net> wrote:
>  >  Perhaps the greater subjective distances from one another cause the
>  >  signal to possess less apparent strength?
>
>  Huh?  I tried interpreting that several ways but couldn't arrive at
>  any substantial meaning.  [Subjective distance, indeed.]
[breakdown snipped]
>  Huh??  Please let me know if I'm missing something key here?

I was thinking of an analogy to a broadcasting transmitter: a signal
sent at some original intensity is observed at decreasing power at
greater distances.  However, the 'distance' here is not measured
physically, but by the subjective difference in perspective.  I meant
subjective in the sense that it is measured by each individual without
some platonic topological reference point.  Returning to the original
point about "stupid" or the lack of intelligence - perhaps you have
now perceived me as less effective at communicating my intention than
yourself.  I would agree that I have observed this also.  You mustn't
assume that my ineffectual transmission of meaning is necessarily
directly correlated with my ability to understand your meaning.
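(Purely as a toy illustration of the analogy, and not a claim about
cognition: if 'subjective distance' attenuated a message the way
physical distance attenuates a broadcast, it might look like the
sketch below.  The quadratic falloff and all numbers are assumptions
of this sketch, nothing more.)

    # Toy sketch: perceived strength of a message falling off with
    # "subjective distance" in perspective, by analogy with the roughly
    # inverse-square attenuation of a radio broadcast.  Illustrative only.
    def perceived_strength(transmit_power, subjective_distance):
        # +1 avoids division by zero at distance 0 (the sender's own view)
        return transmit_power / (1.0 + subjective_distance ** 2)

    for d in (0, 1, 2, 4):
        print(d, perceived_strength(100.0, d))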

>  If you are somehow suggesting that there exists a "signal"
>  representing the right or best course into the future, then we have a
>  difference which is interesting, because so many naive futurists seem
>  to assume something like that.

I had visualized the signal as a measurable pattern of intelligent
behavior passing from one of the 'cognoscenti' to another.  Again, I
was making an analogy to radio/EM broadcast power.  Some transmitters
broadcast with more power than others, but that doesn't imply their
programming is better or more correct.  The background noise to which
I later referred is what would be observed when there is no detectable
meaning or pattern on any particular carrier.  Whether this is due to
"the ignorant masses" mindlessly chattering over their nearest
cognoscenti or to something equivalent to encryption between parties
makes no difference - it still has no discernible value without the
proper codec.

>  To recap, Damien expressed his dismay at a particular instance of
>  "stupidity", representative of broader patterns in our society.  I
>  agreed, and expanded on the theme with some examples perhaps painting
>  a picture of broader trends.  I then observed from a higher level of
>  abstraction that perhaps such stupidity isn't really such a problem
>  (in the partial sense that a rising tide raises all boats, and there
>  will always be a long-tailed distribution).  I followed that up with
>  intent to confront any "thinkers" who got that far thinking they were
>  riding the peak, with the idea that this peak is inhabited by Brittney
>  et al and exploration of the bleeding edge is inherently a
>  **low**-probability affair.  I then concluded with an implicit
>  call-to-action with a reference to the "dissipative" (meaning
>  entropic) cognoscenti.

Do you also find, in tending to work at a greater degree of
abstraction, that you are either in agreement with entire classes of
conclusion (despite particular instances that may be wholly off-base)
or rarely in agreement with anyone who does not accept every instance
implied by your generalization?  I'm not asking to be confrontational;
I feel I commonly experience exactly this situation.

>  Does the foregoing help? I recognize that my writing is typically
>  terse and overly abstract, but hardly vague.  My programming is the
>  same.

Yes.  Enough to agree, and to not further sweat the details.

>  I would agree that the relationship of individual contributors to
>  technological innovation is changing much as you suggest, but as I
>  pointed out in my earlier post, I think what's most significant is not
>  the direct discovery/development but the evolution of increasingly
>  effective structures supporting discovery/development.

Would you say that these structures represent less the achievement of
any particular individual, and increasingly illustrate the emergence
of a different order of self-organization?  (Where there was once a
famous researcher, now the field of research itself is reaching
critical mass - perhaps fueled by the input of innovative researchers,
but not directed by any single researcher's ego.)

>  Retrospectively, such structures are often recognized as particularly
>  elegant, in sharp contrast to your view that innovation tends to
>  depend on general breadth of knowledge.  There's a strong analogy to
>  genetic programming, where success depends on a diverse set of
>  possibilities, exploited via a strong model of probabilities.

... what you perceived as my view from a single email on the subject.
I would like to clarify that my point was regarding the historical (?)
idea that a genius possessed the ability to apply domain knowledge
from one field to an apparently unrelated field [example omitted to
prevent confusion with an instance-level disagreement].  I do think
this is an effective way to assess the ability to bring previous
experience to a new situation (which must be at least some part of
general adaptive intelligence), but I would also agree that there are
elegant (to adopt the term you used) examples of innovative advances
in a narrow field relying solely on internally consistent propositions
and conclusions.

>  >  I agree that what was once considered intelligence is often lost in
>  >  background noise
>
>  Well, that wasn't my point, and isn't my belief.  I think we are still
>  within the developmental window where a strong individual intelligence
>  can make astounding progress, not by grasping all the relevant
>  knowledge, but by having a very good grasp of sense-making.

Can I replace "a very good grasp of sense-making" with 'intuition'?
I feel we would likely be in agreement with what you are expressing
here.  What mechanism is employed to somehow discover the optimal
solution with minimal trial/testing?  Perhaps this is an example of
the self-organizing principle I mentioned above?  Does the "strong
individual intelligence" contribute as an ego-driven will, or as an
efficient "sense-making" drone in a hive process?

I don't intend to be right or to prove one point is any better than
another.  To me, a good discussion is about the process of getting to
an agreement.  Maybe you'll take the time to answer the questions I've
asked and pose others in this vein - thanks in advance if you do.


