[ExI] Singletons

John Clark jonkc at bellsouth.net
Mon Jan 3 18:04:12 UTC 2011


On Jan 3, 2011, at 12:30 PM, Anders Sandberg wrote:

> Imagine that there is a particular kind of physics experiment that causes cosmic vacuum decay.

Cosmic vacuum decay utterly destroying the entire universe; I hate it when that happens. But as Dr. Brown has reminded us, "Of course that is a worst case scenario, the effect could be much more localized and just destroy the galaxy."

It is difficult to predict how a newly discovered force in physics will behave; that's why it's new. Madame Curie was certainly not stupid, and when she first discovered Radium she had not one scrap of evidence to suggest that the strange rays given off by that element were in any way dangerous, but they ended up killing her. Suppose nature is unkind on a much larger scale. Suppose that in the technological history of almost any civilization there will come a time when it finds hints of a new force in nature, and suppose there is a very obvious experiment to investigate that possibility, and suppose that, because the force is so new, there is not one scrap of evidence to suggest it is in any way dangerous, so the experiment is performed. And then oblivion. Perhaps that explains the Fermi Paradox: in the context of Everett's Many Worlds interpretation, we happen to be living in a fantastically unlikely universe where nobody has thought of that very obvious and simple experiment, yet.

 John K Clark    

> The system monitors all activity and stomps on attempts to perform the experiment. Everybody knows about the limitation and can see the logic of it. It might be possible to circumvent the system, but it would take noticeable resources that fellow inhabitants would recognize and likely object to.
> 
> Now, is this really unacceptable and/or untenable?
> 
> 
> The rigidity of the rules the singleton enforces can be all over the place, from deterministic stimulus-responses to the singleton being some kind of AI or collective mind. The legitimacy can similarly be all over the place, from an accidental basement hard takeoff to democratic one-time decisions to something that is autonomous but designed to take public opinion into account. There is a big space of possible singleton designs.
> 
> 
>> Let's look at a population of cultures the size of a galaxy. How do
>> you produce an existential risk within a single system that can wipe
>> out more than a stellar system? In order to produce larger-scale
>> mayhem you need to utilize the resources of a large number of stellar
>> systems concertedly, which requires large-scale cooperation of
>> pangalactic EvilDoers(tm).
>>  
> 
> If existential risks are limited to local systems, then at most there is a need for a local singleton (and maybe none, if you like living free and dangerously).
> 
> However, there might be threats that require wider coordination, or at least preparation. Imagine interstellar "grey goo" (replicators that weaponize solar systems and try to use existing resources to spread), and a situation of warfare where the square of the number of units gives the effective strength (as per Lanchester's square law; whether this is actually true in real life will depend on a lot of things). In that case, allowing the problem to grow far enough in a few systems would allow it to become overwhelming. In this case it might be enough to coordinate a defensive buildup within a broad ring around the goo, but it would still require coordination, especially if there were the usual kind of public-goods problems in doing it.
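
As a minimal sketch of why that square law makes early containment matter, here is a toy Euler integration of Lanchester's square-law dynamics in Python (the attrition coefficients and force sizes below are invented for illustration; nothing here comes from Anders's post):

    # Lanchester's square law: each side's attrition rate is
    # proportional to the *number* of opposing units:
    #   dA/dt = -beta * B,   dB/dt = -alpha * A
    # The quantity alpha*A^2 - beta*B^2 is conserved, so a head start
    # in unit count compounds quadratically.
    def lanchester(a0, b0, alpha=1.0, beta=1.0, dt=0.001):
        """Integrate until one side is annihilated; return survivors."""
        a, b = float(a0), float(b0)
        while a > 0 and b > 0:
            a, b = a - beta * b * dt, b - alpha * a * dt
        return max(a, 0.0), max(b, 0.0)

    # A 2:1 numerical edge leaves the bigger side with about
    # sqrt(2000^2 - 1000^2) ~ 1732 of its 2000 units, i.e. ~13% losses.
    print(lanchester(2000, 1000))

Effective strength scales with the square of what the defenders can field together, which is why waiting until the goo outnumbers any single system's response would be fatal.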
> 
> 
> -- 
> Anders Sandberg,
> Future of Humanity Institute
> Philosophy Faculty of Oxford University 
