[ExI] Singletons

Samantha Atkins sjatkins at mac.com
Tue Jan 4 01:28:25 UTC 2011


On Jan 3, 2011, at 9:30 AM, Anders Sandberg wrote:

> Eugen Leitl wrote:
>> The problem is that the local rules cannot be static, if the underlying
>> substrate isn't. And if there's life, it's not static. Unless the
>> cop keeps beating you into submission every time you deviate from
>> the rules.   
> 
> Which could be acceptable if the rules are acceptable. Imagine that there is a particular kind of physics experiment that causes cosmic vacuum decay. The system monitors all activity and stomps on attempts at running the experiment. Everybody knows about the limitation and can see the logic of it. It might be possible to circumvent the system, but it would take noticeable resources that fellow inhabitants would recognize and likely object to.
> 
> Now, is this really unacceptable and/or untenable?

It is unacceptable to have any body forbid even examining a possibility when said body has no idea whatsoever that there is any particular danger.  Such regulatory bodies, on the other hand, are a clear and very present danger to any real forward progress.

> 
> 
> The rigidity of rules the singleton enforces can be all over the place from deterministic stimulus-responses to the singleton being some kind of AI or collective mind. The legitimacy can similarly be all over the place, from a basement accidental hard takeoff to democratic one-time decisions to something that is autonomous but designed to take public opinion into account. There is a big space of possible singleton designs.
> 

No singleton can have information feeds localized and effective enough to let it outperform all more localized decision-making systems.  A singleton is, by design, a single point of failure.

> 
>> Let's look at a population of cultures the size of a galaxy. How do
>> you produce an existential risk within a single system that can wipe out more than a stellar system? In order to produce larger-scale mayhem
>> you need to utilize the resources of a large number of stellar
>> systems concertedly, which requires large-scale cooperation of
>> pangalactic EvilDoers(tm).
>>  
> 
> If existential risks are limited to local systems, then at most there is a need for a local singleton (and maybe none, if you like living free and dangerously).

Actually manufacturing a supernova, affecting everything within many hundreds of light-years, is probably not that difficult.  But that is hardly a reason to go wild making super-super cops to rule over countless civilizations.

> 
> However, there might be threats that require wider coordination or at least preparation. Imagine interstellar "grey goo" (replicators that weaponize solar systems and try to use existing resources to spread), and a situation of warfare where the square of number of units gives the effective strength (as per the Lanchester law; whether this is actually true in real life will depend on a lot of things).

Interstellar "grey goo" is exceedingly unlikely, although totally dumb, replication-crazed von Neumann probes may come close.


> In that case allowing the problem to grow in a few systems far enough would allow it to become overwhelming. In this case it might be enough to coordinate defensive buildup within a broad ring around the goo, but it would still require coordination - especially if there were the usual kind of public goods problems in doing it.

Public goods problem?   Either there is a danger to the (hopefully much more rational) minds involved or there is not.  They will act rationally to deal with it to the degree that it really is that much of a danger.

- s




