[extropy-chat] Extropic Commandments

Jef Allbright jef at jefallbright.net
Thu Sep 28 20:37:10 UTC 2006


Damien Broderick wrote: 
> 
> Rather than "complexity" (too... complex... a term) or "extropy" 
> (which now has a partisan ring to it, inevitably), perhaps:
> 
> Syntropy
>  From Wikipedia
> Syntropy is a term popularized by Buckminster Fuller but also 
> developed by others to refer to an "anti-entropy" or "negentropy". 
> The following definition, referencing Fuller, can be found on 
> a web site on "Whole Systems": "A tendency towards order and 
> symmetrical combinations, designs of ever more advantageous 
> and orderly patterns. 
> Evolutionary cooperation. Anti-entropy."[1] Fuller's use 
> dates to 1956.

Wow, thanks, Damien!  Somehow I missed this term from Bucky's lexicon, and for 25 years or so I've been mistakenly disappointed, thinking that synergetics was as far as his terminology went.

His definition does highlight a problem with useful definitions of complexity, however: increasing symmetry becomes increasingly dull and useless near the limit.  Think one huge crystal. [Oh, pretty!] Not much can be done with it.  What's interesting happens at the edges, the "adjacent possible" in Stuart Kauffman's words.

Of course, "ever more advantageous" is the key phrase, and it highlights the essential element of subjectivity that scientifically-minded people have tended to turn their backs on.  And thanks to the work of Shannon, Gödel, Chaitan, the Santa Fe Institute and many others, we're becoming more sophisticated in our understanding of complexity, or at least in our understanding that the more we learn the more questions open up.

Back to Robert's list of extropian commandments:

1) Information of greater complexity has greater value than information of lesser complexity.
2) Information in agreement with the natural laws and history of the universe has greater value than information in disagreement with the natural laws and history of the universe. 
3) Thou shalt seek to maximize the amount of information and its complexity in existence.
4) Thou shalt seek to make such information available to the greatest number of computational units to derive more information from it. 

I share his fondness for an abstract idea of "extropy" as fundamental to a workable system of morality.  And if Robert would agree with me that his items 1 and 2 must be combined to form a single value statement, then we'd be most of the way there.

Statement #1
The problem with #1 is not only that it lacks a precise definition of complexity, but that it lacks the element of agency necessary for any statement about value.  As worded, it asserts an objective value statement, as if obvious to anyone possessing the necessary context to understand it.  However, there exists no objective agent, and the ultimate objective viewpoint holds no values -- things simply are as they are.  More to the point of #1, increasing information is of no use whatsoever unless it serves some purpose.  We could quite easily go about analyzing and cataloging the sand on a seashore and compile great masses of complex, structured information, but it's easy to see that this would be of little value relative to other possible activities. Standing by itself, #1 may appear elegant, but it's incomplete.

Statement #2
This assertion hits a key element (knowledge) on target, but, like #1, it lacks the element of subjectivity (or limited context).  Put them together and we would have something useful.  The problem with #2 (and this may be only Robert's choice of semantics) is that all of our information is incomplete and contingent.  We can never say that something is objectively true, only that it appears true within a context. [And no, I am most definitely not a post-modernist.] A better approach, therefore, might be to say that our knowledge is increasingly valuable as it is increasingly assessed as working over increasing scope.

[Yes, I'm aware of the multiple use of "increasingly" and have not found a good way to avoid this without mathematical notation.  For those of us who think in pictures, imagine an expanding sphere of effectivity becoming increasingly visible.]
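For those who'd rather see it in symbols, here is a minimal sketch of how the notation might go; V, K, and S are my own labels, nothing of Robert's.  Let V(K, S) be the value an agent assesses for knowledge K that it judges to work over scope S.  The triple "increasingly" then collapses into simple monotonicity:

\[ S_1 \subseteq S_2 \;\Longrightarrow\; V(K, S_1) \le V(K, S_2) \]

The expanding sphere of effectivity is just movement up this ordering.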

So, if one were to combine #1 and #2 into a statement about (increasingly structured knowledge (that works over increasing scope (as assessed by some agent))), then I think we'd be off to a good start in defining knowledge of "good".  Take one more step (change "some agent" to "an increasing population of interacting agents") and I think we'd be off to a good start in defining knowledge of what's considered "moral."
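Extending the sketch above (again, my own notation, not a settled formalism): let A be the population of interacting agents whose assessments agree, and let M(K, S, A) be the assessed moral value.  The combined statement then says M is monotonic in both scope and population:

\[ (S_1 \subseteq S_2) \wedge (A_1 \subseteq A_2) \;\Longrightarrow\; M(K, S_1, A_1) \le M(K, S_2, A_2) \]

"Good" is the case of a single assessing agent; "moral" is the same assessment as A grows.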

I also agree that, partially and in general, humankind's moral codes, such as the Ten Commandments, the Golden Rule, the Buddhist Precepts, verses from the Quran, even the Wiccan Threefold Law, can be derived from the above, because they are reflections (at a cultural level) of behaviors that have been tested in a competitive environment and persisted because they worked.  At an even lower level of contextual awareness, human feelings of disgust, pride, envy, and so on serve as instinctive indicators of good and bad.  In such derivations, key factors include recognition of self, of other, and of synergetic growth (requiring diversity, conflict, and cooperation).

When considering metaethics, it's interesting to note that while it's not possible to get agreement (even in principle) on what is ultimately "good", it is most certainly possible to get agreement in principle on moving from "good" to "better".  Therein lies the Arrow of Morality.
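In the terms of the sketch above (still only my own shorthand), the point is that agents need not agree on \arg\max M, or even that a maximum exists; they need only agree, move by move, on the sign of the change:

\[ \Delta M > 0 \]

Agreement on direction without agreement on destination is the arrow.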

I've exceeded my five-paragraph rule, so I'm going to leave it at that, but with a hint to think about where this could lead if we imagine human ethical decision-making progressing from instinctual -> self-conscious individual -> cultural -> augmented cultural...

- Jef