[ExI] for the fermi paradox fans

Dennis May dennislmay at yahoo.com
Mon Jun 16 03:13:42 UTC 2014


https://dl.dropboxusercontent.com/u/50947659/huntersinthedark.pdf
Some questions and observations:
The question of spatial segregation seems to rely on neglecting the possible effectiveness of common tactics such as dispersion and stealth over vast periods of time.

The “open” nature of space means that there is no finite known number of potential sources for the probes, nor any certainty about the overlapping nature of how far probes of any particular origin have spread. Any particular model may make assumptions, but there is an open number of possible models describing density, reproductive rates, dispersion, stealth, probe purposes/tactics, and evolving probes with changing tactics. It would be like traveling to an unknown continent with millions of species of animals and plants and having to discover by trial and error which may be harmful or poisonous, with no knowledge of their range, habitat, or reproductive rates. Some species travel thousands of miles at times, and some occasionally travel well outside their normal territories.

The open nature of probes leaves open the possibility that an encounter with ANY probe could lead to catastrophic, civilization-ending results. The real question is not about the probes but what civilizations must do in order to reduce the impact of their potential.

The analysis of the Fermi paradox in this case should be more concerned with what the response of civilizations would be to the open-ended nature of such threats.

In the article the conclusion is:

“…since we still exist, we conclude that deadly probes are not the main cause of the Fermi paradox.”

The real question should be: what are the necessary and sufficient strategic doctrines of civilizations enjoying long-term survival? I believe that answer is the main cause of the Fermi paradox. That answer is a response to the energetics of WoMD available in space, no matter who or what wields them.

Dennis May
 

________________________________
 From: Anders Sandberg <anders at aleph.se>
To: Dennis May <dennislmay at yahoo.com>; ExI chat list <extropy-chat at lists.extropy.org> 
Sent: Saturday, June 14, 2014 5:14 PM
Subject: Re: [ExI] for the fermi paradox fans
  


Dennis May <dennislmay at yahoo.com>, 14/6/2014 9:13 PM:

> Questions concerning the Fermi Paradox should include the variable of
> adversaries at every juncture since all of biology is known to deal with
> the issue continually from the earliest systems forward.

This is a problematic approach. Yes, freely evolving systems of replicators generically get parasitism. But in the Fermi context free evolution is just one option: a civilization that has developed into a singleton might coordinate future behaviour to preclude parasitism or adversarial behaviour, or it might decide on "intelligent design" of its future development. If it is also alone within the reachable volume, its dynamics will be entirely adversary-free. Maybe this is not the most likely case, but it has to be analysed - and understanding it is pretty essential for being able to ground the adversarial cases.

When Joanna Bryson gave a talk here ( it can be viewed at https://www.youtube.com/watch?v=wtxoNap_UBc ) she also used a biological/evolutionary argument for why we do not need to worry about the AI part of the intelligence explosion; as I argued during the Q&A, there might be a problem in relying too much on biological insights when speaking about complex agent systems. Economics, another discipline of complex systems, gives very different intuitions. 

Then again, I do think running game theory for Fermi is a good idea. I had a poster about it last summer:  https://dl.dropboxusercontent.com/u/50947659/huntersinthedark.pdf In this case I think I showed that some berserker scenarios are unstable. (And thanks to Robin for posing the issue like this - we ought to write the paper soon :-) )
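As a toy illustration of the kind of stability check involved (the payoffs below are invented for the example and are not the model from the poster), one can ask whether a pure "berserker" strategy is evolutionarily stable against a "quiet" strategy in a symmetric 2x2 game:

# Toy sketch with made-up payoffs: is "berserker" an evolutionarily
# stable strategy (ESS) against "quiet" in a symmetric 2x2 game?
# Each entry is the payoff to the row strategy against the column strategy.
payoffs = {
    ("berserker", "berserker"): -5.0,  # mutual attack is very costly
    ("berserker", "quiet"):      2.0,  # striking a passive target pays off
    ("quiet",     "berserker"): -3.0,  # being struck hurts less than mutual war
    ("quiet",     "quiet"):      1.0,  # peaceful coexistence
}

def is_ess(s, invader):
    # Maynard Smith's condition: s does strictly better against itself than
    # the invader does, or ties but beats the invader head to head.
    e_ss, e_is = payoffs[(s, s)], payoffs[(invader, s)]
    e_si, e_ii = payoffs[(s, invader)], payoffs[(invader, invader)]
    return e_ss > e_is or (e_ss == e_is and e_si > e_ii)

for s, invader in [("berserker", "quiet"), ("quiet", "berserker")]:
    print(s, "is an ESS:", is_ess(s, invader))

With these assumed numbers a population of berserkers can be invaded by quiet players, which is the flavour of instability meant; the poster's actual argument of course uses a richer model than this 2x2 toy.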



> Once super-intelligences are in competition I would expect things to get
> very complicated concerning the continued advantage of “size” versus many
> other variables becoming enabled.
We know that game theory between agents modelling each other can easily become NP-complete (or co-NP):
https://www.sciencedirect.com/science/article/pii/S0004370206000397?np=y
https://www.sciencedirect.com/science/article/pii/S0004370297000301?np=y
And these are bounded agents; superintelligences will create an even more complex situation.

Of course, as seen in this little essay,
http://www.aleph.se/andart/archives/2009/10/how_to_survive_among_unfriendly_superintelligences.html
non-superintelligences can thrive under some circumstances simply because they go under the radar. A bit like how many insects do not use adaptive immune systems, or certain military devices are not armoured - it is not worth increasing resilience of individuals when you can get resilience by having many instances. 
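A back-of-the-envelope version of that redundancy trade-off, with purely illustrative numbers and assuming independent survival chances: compare one hardened unit that survives an attack 90% of the time against twenty cheap unarmoured copies that each survive only 30% of the time.

# Illustrative numbers only: redundancy versus individual hardening.
p_hard, p_cheap, n = 0.90, 0.30, 20
p_any_cheap_survives = 1 - (1 - p_cheap) ** n  # at least one copy survives
print(f"hardened individual survives: {p_hard:.3f}")
print(f"at least one of {n} cheap copies survives: {p_any_cheap_survives:.5f}")

The many-cheap-copies side wins by a wide margin here (about 0.999 versus 0.9), which is the point: past a modest number of instances, further hardening of individuals buys very little.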


Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University

