[ExI] Drake Equation Musings

Anders Sandberg anders at aleph.se
Tue May 17 08:42:02 UTC 2016


On 2016-05-16 05:00, Keith Henson wrote:
> I largely agree with your analysis.  But it's possible we missed
> something, like a physics based reason for intelligent species to stay
> in their star system or even to stick to their home planet.  I have
> speculated about civilizations "collapsing" to 300 m spheres sunk in
> the deep ocean.  The small size gets the latency down and the cold
> water deals with the waste heat problem.

Actually, we can deal with that too:

Pr(intelligence density = x | no visible intelligence) =
    K Pr(no visible intelligence | intelligence density = x) Pr(intelligence density = x)

where K is a normalisation factor. In the vanilla version the update is
of the form "we do not see any intelligence within distance d, but we
would see them if they were there", which produces the factor

    Pr(no visible intelligence | x) = exp(-(4 pi/3) d^3 x)

if we assume a spatial Poisson model. Our observation rules out higher
densities but allows for smaller densities.
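
To make this concrete, here is a little numerical sketch of the update
(the log-uniform prior, the 100 light-year detection radius and the
density grid are just my illustrative assumptions, not anything from the
paper):

import numpy as np

# Grid of possible civilization densities x (per cubic light-year),
# spanning many orders of magnitude.
x = np.logspace(-12, 0, 1000)

# Assumed log-uniform prior: equal weight on each log-spaced grid point.
prior = np.ones_like(x) / x.size

# Assumed detection radius d (light-years) within which we would see them.
d = 100.0

# Spatial Poisson model: probability of seeing nobody within distance d,
# given density x.
likelihood = np.exp(-(4 * np.pi / 3) * d**3 * x)

# Bayes: posterior proportional to likelihood * prior; K is the normaliser.
posterior = likelihood * prior
posterior /= posterior.sum()

The posterior keeps the prior's shape at low densities and falls off
sharply above roughly x = 3/(4 pi d^3), which is the "rules out higher
densities but allows for smaller densities" part.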

If there is a probability p<1 of seeing a civilization that actually is 
there, then the factor becomes exp(-(4 pi/3) d^3 x p). The effect is 
that our probabilities will be less strongly updated by not seeing 
anything. If p goes down by a factor of 10, the "strength" of the update 
changes by about 10%.
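
Continuing the same toy setup, you can see how slowly the non-detection
bound moves as p drops (again, all numbers are my own illustrative
choices):

import numpy as np

x = np.logspace(-12, 0, 1000)        # density grid (same toy grid as above)
prior = np.ones_like(x) / x.size     # assumed log-uniform prior
d = 100.0                            # assumed detection radius (light-years)

def posterior_upper_bound(p, mass=0.95):
    # Posterior over density given no detection, with detection probability p,
    # summarised as the density below which `mass` of the posterior lies.
    like = np.exp(-(4 * np.pi / 3) * d**3 * x * p)
    post = like * prior
    post /= post.sum()
    return x[np.searchsorted(np.cumsum(post), mass)]

for p in [1.0, 0.1, 0.01, 0.001]:
    print(f"p = {p:g}: 95% of posterior mass below density {posterior_upper_bound(p):.1e}")

The ruled-out range scales as 1/p, so each factor-of-10 drop in p shifts
that bound up by only about one decade out of the dozen decades the prior
spans - the update weakens, but slowly.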

If intelligence often turns into black boxes, then p is small. But note 
that you need many orders of magnitude to weaken the update a lot: since 
x can be arbitrarily large, even if you think black box civilizations 
are super-likely, the lack of observed civilizations in the vicinity 
should still move your views about the possible upper range of densities 
a fair bit. Arguing p=0 is a very radical knowledge claim, and equivalent 
to positing the most audacious law of sociology ever (true for every 
individual, society and species!).


[ Some of you will by now wonder why I do not say we should expect an 
uncertainty of p running over loads of orders of magnitude, like the 
life probability does in our paper. The reason is that there is a 
curious asymmetry between reasons intelligent life may not emerge and 
reasons intelligent life may be quiet. The first group is largely 
conjunctive: "intelligence will happen if X and Y and Z and W and... 
happens" - if one of the conditions in the chain is missing, there is no 
intelligence. Explanations for silence instead have the form "X or Y or Z 
or W or ...". If one of the disjuncts turns out to be false, the 
disjunction as a whole can still hold. But to explain why nobody at all 
is visible, the reasons together have to cover essentially every 
civilization, so their probabilities need to sum to nearly exactly 1; and 
if one of them actually has less probability than needed, some 
civilizations are left unexplained and the entire explanation breaks. The 
toy numbers below show the difference. ]
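
A toy calculation makes the asymmetry visible (the numbers are invented
purely to show the structure):

# Emergence: intelligence needs X and Y and Z and W to all happen
# (invented step probabilities, one of them essentially impossible).
steps = [0.9, 0.8, 0.9, 1e-6]
p_intelligence = 1.0
for p in steps:
    p_intelligence *= p
print("P(intelligence emerges):", p_intelligence)   # one missing link kills it

# Silence: every civilization must be quiet for reason X or Y or Z or W.
# The fractions of civilizations covered by each reason must sum to ~1.
coverage = [0.5, 0.3, 0.15, 0.05]
print("fraction kept silent:", sum(coverage))        # 1.0: silence explained

coverage[0] = 0.3    # one reason covers fewer civilizations than assumed
print("fraction kept silent:", sum(coverage))        # 0.8: 20% remain visible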



-- 
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University



