John Clark <johnkclark@gmail.com>, 1/2/2015 7:46 PM:

> On Sun, Feb 1, 2015 at 9:00 AM, Anders Sandberg <anders@aleph.se> wrote:
>
>> Cultural convergence that gets *everybody*, whether humans, oddly programmed AGIs, silicon-based zorgons, the plant-women of Canopus III, or the sentient neutronium vortices of Geminga, that has to be something really *weird*.
>
> Yes, but do you think the confluence of positive feedback loops and intelligence might produce effects that are weird enough? I hope not, but that's my fear.

They would need to be very weird. They would need to strike well before the point where humanity can build a self-replicating von Neumann probe (since such a probe can be given a simple, unchanging paperclipper AI and sent off on its merry way, breaking the Fermi silence once and for all); if they didn't, they would not be strong enough to work as a Fermi explanation. So either there is a very low technology ceiling, or we should see these feedbacks acting now or in the very near future, since I doubt the probe is more than a century ahead of our current technological capability.

Intelligence doesn't seem to lead to convergence in our civilization: smart people generally do not agree or think alike (despite Aumann's agreement theorem), and optimization and globalization don't make humanity converge that strongly.

Anders Sandberg, Future of Humanity Institute, Faculty of Philosophy, Oxford University