[ExI] for the fermi paradox fans
anders at aleph.se
Sat Jun 14 22:14:47 UTC 2014
Dennis May <dennislmay at yahoo.com>, 14/6/2014 9:13 PM:
Questions concerning the Fermi Paradox should include the variable of adversaries at every juncture since all of biology is known to deal with the issue continually from the earliest systems forward.
This is a problematic approach. Yes, freely evolving systems of replicators generically get parasitism. But in the Fermi context free evolution is just one option: a civilization that has developed into a singleton might coordinate its future behaviour to preclude parasitism or adversarial behaviour, or it might decide on "intelligent design" of its future development. If it is also alone within the reachable volume, its dynamics will be entirely adversary-free. Maybe this is not the most likely case, but it has to be analysed - and understanding it is pretty essential for being able to ground the adversarial cases.
When Joanna Bryson gave a talk here (it can be viewed at https://www.youtube.com/watch?v=wtxoNap_UBc) she also used a biological/evolutionary argument for why we do not need to worry about the AI part of the intelligence explosion; as I argued during the Q&A, there may be a problem in relying too heavily on biological insights when speaking about complex agent systems. Economics, another discipline dealing with complex systems, gives very different intuitions.
Then again, I do think running game theory for Fermi is a good idea. I had a poster about it last summer: https://dl.dropboxusercontent.com/u/50947659/huntersinthedark.pdf
In this case I think I showed that some berserker scenarios are unstable. (And thanks to Robin for posing the issue like this - we ought to write the paper soon :-) )
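To make the flavour of the instability argument concrete, here is a toy sketch - with payoffs I made up for illustration, not the poster's actual model. It checks whether "everyone strikes first" survives as a pure-strategy equilibrium in a simple two-player game:

```python
# Toy berserker game (invented payoffs, purely illustrative).
# Strategies: 0 = hide quietly, 1 = strike first (berserker).
# payoffs[(a, b)] = (payoff to player A, payoff to player B)
payoffs = {
    (0, 0): (3, 3),    # both hide: peaceful coexistence
    (0, 1): (0, 1),    # A hides, B strikes: A destroyed, B pays attack cost
    (1, 0): (1, 0),
    (1, 1): (-1, -1),  # mutual strikes: costly war of attrition
}

def is_nash(profile):
    """True if neither player gains by unilaterally switching strategy."""
    a, b = profile
    pa, pb = payoffs[(a, b)]
    return payoffs[(1 - a, b)][0] <= pa and payoffs[(a, 1 - b)][1] <= pb

for prof in [(0, 0), (1, 1)]:
    print(prof, "Nash equilibrium?", is_nash(prof))
```

With these (assumed) numbers, mutual hiding is stable while the all-berserker profile is not, since either player would rather stand down than fight a costly war - the rough shape of the instability claimed on the poster.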
Once super-intelligences are in competition I would expect things to get very complicated concerning the continued advantage of "size" versus many other variables becoming enabled.
We know that game theory between agents modelling each other can easily become NP-complete (or co-NP):
https://www.sciencedirect.com/science/article/pii/S0004370206000397?np=y
https://www.sciencedirect.com/science/article/pii/S0004370297000301?np=y
And these are bounded agents; superintelligences will create an even more complex situation.
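A cruder point than the papers' complexity results, but in the same spirit: even the raw space of strategies an agent must model grows explosively with interaction depth. A quick back-of-envelope count for a repeated two-action game:

```python
# Strategy-space blowup in a k-round, two-action repeated game.
# A deterministic strategy must pick an action for every possible
# opponent history, and histories multiply each round.

def num_histories(k: int) -> int:
    """Opponent-action histories a strategy must respond to: 2^0 + ... + 2^(k-1)."""
    return 2 ** k - 1

def num_strategies(k: int) -> int:
    """Deterministic strategies: one binary choice per history."""
    return 2 ** num_histories(k)

for k in range(1, 6):
    print(f"{k} rounds: {num_strategies(k)} strategies")
```

Already at five rounds an agent naively modelling its opponent faces over two billion candidate strategies; the cited papers show the deeper result that even computing best responses against such models can be NP-hard.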
Of course, as seen in this little essay,
http://www.aleph.se/andart/archives/2009/10/how_to_survive_among_unfriendly_superintelligences.html
non-superintelligences can thrive under some circumstances simply because they fly under the radar. A bit like how many insects do not use adaptive immune systems, or certain military devices are not armoured: it is not worth increasing the resilience of individuals when you can get resilience by having many instances.
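The redundancy-versus-armour trade-off is easy to quantify. A minimal sketch, with made-up survival probabilities, comparing one hardened unit against a swarm of cheap independent instances:

```python
# Resilience through redundancy vs. hardening (illustrative numbers only).

def any_survives(p_individual: float, n_instances: int) -> float:
    """Probability at least one of n independent instances survives."""
    return 1.0 - (1.0 - p_individual) ** n_instances

# One hardened unit with 90% survival odds...
hardened = 0.90
# ...versus 20 cheap units, each with only 20% odds.
swarm = any_survives(0.20, 20)

print(f"hardened unit: {hardened:.3f}")
print(f"swarm of 20:   {swarm:.3f}")
```

With these assumed numbers the swarm's collective survival probability (about 0.99) beats the hardened individual's, which is why both insects and cheap munitions skip the armour.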
Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University