[ExI] taxonomy for fermi paradox fans:

Anders Sandberg anders at aleph.se
Mon Feb 2 10:46:24 UTC 2015


John Clark <johnkclark at gmail.com> , 1/2/2015 7:46 PM:




On Sun, Feb 1, 2015 at 9:00 AM, Anders Sandberg <anders at aleph.se> wrote:

> Cultural convergence that gets *everybody*, whether humans, oddly programmed AGIs, silicon-based zorgons, the plant-women of Canopus III, or the sentient neutronium vortices of Geminga, that has to be something really *weird*. 


Yes, but do you think the confluence of positive feedback loops and intelligence might produce effects that are weird enough? I hope not, but that is my fear.


They need to be very weird. They need to strike well before the point where humanity can make a self-replicating von Neumann probe (since it can be given a simple, unchanging paperclipper AI and sent off on its merry way, breaking the Fermi silence once and for all) - if they didn't, they would not be strong enough to work as a Fermi explanation. So either there is a very low technology ceiling, or we should see these feedbacks acting now or in the very near future, since I doubt the probe is more than a century ahead of us in technological capability. 


Intelligence doesn't seem to lead to convergence in our civilization: smart people generally do not agree or think alike (despite Aumann's agreement theorem), and optimization and globalization do not make humanity converge that strongly. 

Anders Sandberg, Future of Humanity Institute, Faculty of Philosophy, Oxford University

