[extropy-chat] The Drake Equation and Spatial Proximity.
jef at jefallbright.net
Tue Oct 24 17:37:53 UTC 2006
John K Clark wrote:
> Robert Bradbury Wrote:
> > as I pointed out at Extro3 -- we don't "talk" to nematodes -- they
> >don't "talk" to us.
> We don't talk to nematodes but every day humans directly
> affect the lives of billions of these worms, but ET does
> squat to us and that is very strange.
> Most of the excuses I've heard to explain away this fact are
> very unconvincing. It seems to me that if there were a race
> of beings as far advanced over us as we are over nematodes
> their existence would be obvious to anyone who looked up into
> the sky. It's not obvious. Why?
> This is my list of answers from least likely to most:
> 1) Maybe after your intelligence reaches a certain point
> there is no reason to go beyond because there is a limit to
> what any intelligence can do; and the universe just can't be
> 2) Maybe there is some hidden catastrophe we know nothing
> about that always destroys a civilization when it advances
> beyond a certain point.
> 3) Maybe we are the first, somebody has to be.
(4) Maybe advanced intelligence develops inward for a significant phase
of its evolution.
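Since the subject line invokes the Drake equation, a quick numeric sketch may be useful (every parameter value below is a placeholder assumption of mine, not a figure from this thread); it shows how the estimated number of detectable civilizations hinges on the most uncertain factors, which is why answers like (3) can't be ruled out:

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
# All values are arbitrary placeholders, chosen only to show how the
# product swings with the least-constrained factors (fl, fi, fc, L).
rate_star_formation = 1.0   # R*: new stars per year in the galaxy
frac_with_planets   = 0.5   # fp: fraction of stars with planets
habitable_per_star  = 2.0   # ne: habitable planets per such star
frac_life           = 0.1   # fl: fraction where life arises
frac_intelligence   = 0.01  # fi: fraction developing intelligence
frac_communicating  = 0.1   # fc: fraction emitting detectable signals
lifetime_years      = 1e4   # L: years a civilization stays detectable

n = (rate_star_formation * frac_with_planets * habitable_per_star *
     frac_life * frac_intelligence * frac_communicating * lifetime_years)
print(f"N ~= {n:.1f} detectable civilizations")  # 1.0 with these guesses
```

Shrink any one of the speculative fractions by a couple of orders of magnitude and N drops below one: "somebody has to be first" becomes "somebody might be alone".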
Given what we know of the intrinsic physical constraints and diminishing
returns on moving matter and energy around, wouldn't it make sense that
advanced intelligence would instead tend to focus on inwardly increasing
complexity? Such a developmental phase might be
expected to begin with the era of information processing technology and
progress until that approach has either been exhausted or superseded by
further developments that we can't currently imagine.
As Feynman said, "There's plenty of room at the bottom", and so far, the
closer we look the more possibility-space we find, and we have only
faint glimmerings of what doors might be opened with the arrival of
practical quantum computing.
Some people look around and are disappointed with unmet expectations of
huge engineering projects enabled by accelerating technology, measuring
the apparent lack of progress by the dearth of hovercraft, personal
spacecraft, space elevators, huge power stations, etc. Others, perhaps
closer to the practice of technology development, see a continued bloom
of increasing ephemeralization in virtually all areas related to
information technology. To borrow the analogy of an automobile: the
parts of the car the driver sees are simpler than ever, while greatly
improved performance and reliability are delivered by much higher
complexity under the hood.
So what about the great power emissions and planet-scale construction
projects envisioned by those who based their reasoning on common-sense
ideas like the Kardashev scale and joined others in asking "where is
everybody?", long before our own technology began to rapidly
ephemeralize? In our case, within a very short time-window we went from
high-power, simply-modulated radio transmission to nearly ubiquitous
low-power networked communication, and we've learned how essential the
Shannon benefits of increasingly complex encoding are: efficiently coded
signals become increasingly indistinguishable from background noise.
Besides improving efficiency, the implications for security through
obscurity are obvious.
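The point about well-coded signals resembling noise can be made concrete. Here's a minimal sketch of my own (using zlib compression as a stand-in for efficient channel coding, which is an illustrative simplification): near-optimally encoded data has close to maximal per-byte entropy, so statistically it looks like noise.

```python
# Sketch: compare the Shannon entropy (bits per byte) of plain English
# text with its zlib-compressed form. Efficient coding squeezes out
# redundancy, pushing the byte distribution toward uniform randomness.
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: H = -sum(p * log2(p))."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain = ("the quick brown fox jumps over the lazy dog. " * 200).encode()
coded = zlib.compress(plain, level=9)

print(f"plain:      {byte_entropy(plain):.2f} bits/byte")
print(f"compressed: {byte_entropy(coded):.2f} bits/byte (8.00 is max)")
```

The compressed stream's entropy sits well above the plain text's; a receiver without the codebook sees something much closer to static than to a carrier with an obvious message on it.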
Anyone dealing with technology development is familiar with the Law of
Unintended Consequences. As any project becomes increasingly complex,
one can generally expect an increasing tendency toward unforeseen kinds
of problems and side-effects. But this is only a first-order rule;
beyond it we find that increasing complexity can deliver increasing
reliability if and only if the structure reflects deeper, more effective
principles. What many have not considered is how closely this
observation resembles a moral imperative: any action by an agent will
have unintended consequences in rapidly increasing proportion to the
number of interfaces it presents to the outside world (the adjacent
possible), and therefore an increasingly intelligent agent will tend to
minimize its interfaces while maximizing its effectiveness.
There's an interesting dynamic tension between strategies of conflict
and strategies of cooperation. Both rely on an element of diversity to
promote growth. We can expect an advanced intelligence to promote
growth (of complexity of the interesting kind), and doing so mandates a
source of increasing diversity outside the local system. From this we
derive some of our "moral" thinking (thinking about principles of action
that work over an expanding scope of interaction), such as the value of
promoting equality among independent agents, and injunctions against
murder and other forms of ruinous competition.
It's an old topic, hashed and rehashed here on the extropy list: the
question of whether increasing intelligence implies increasing morality.
While we generally agree that from an objective viewpoint it does not,
from a subjective viewpoint--the viewpoint from which all decisions,
moral or otherwise, are made--it most certainly does. We cannot know
the specifics of what an advanced intelligence will do, but we can know
that it will tend to do that which most effectively promotes its values
into the future, and that this will reflect an increasingly subtle
understanding of increasingly general principles.