[ExI] General comment about all this quasi-libertarianism discussion

Eugen Leitl eugen at leitl.org
Sat Feb 26 22:08:53 UTC 2011

On Sat, Feb 26, 2011 at 04:29:32PM -0500, Richard Loosemore wrote:
> Eugen Leitl wrote:
> > ... for [these libertarian/anarchist proposals] to work you need to
> > patch the human primate.
> Did you really say that?  You are suggesting that libertarians need

Yes. Yes! I did. Apart from the libertarian part, of course.

> to ... what?... brainwash people? ... cut out bits of their brains to

Nothing so pedestrian. Human augmentation. We're transhumanists,
after all. Immanentize the Eschaton!

> change their behavior? ... genetically engineer them?  :-(

Mere genetic engineering would be a) too slow b) not enough.
A simple patch would be a wearable allowing tamperproof
per-nym reputation tracking and easy querying. A better
approach is to factor the external system in.
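To make the wearable idea concrete, here is a minimal sketch of per-nym reputation tracking. The class name, fields, and scoring rule are all illustrative assumptions, not a spec; "tamperproof" is approximated here by hash-chaining entries, so any later edit to history is at least tamper-evident.

```python
import hashlib
import json

class NymReputation:
    """Append-only, tamper-evident reputation log for one nym.

    Each rating is hash-chained to the previous entry, so altering
    any earlier record invalidates every digest after it.
    """

    def __init__(self, nym):
        self.nym = nym
        self.entries = []  # list of (record, digest) pairs

    def _digest(self, record, prev_hash):
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    def rate(self, rater, score):
        prev = self.entries[-1][1] if self.entries else ""
        record = {"rater": rater, "score": score}
        self.entries.append((record, self._digest(record, prev)))

    def score(self):
        """Easy querying: mean of all recorded scores."""
        if not self.entries:
            return 0.0
        return sum(r["score"] for r, _ in self.entries) / len(self.entries)

    def verify(self):
        """Recompute the chain; False if any entry was tampered with."""
        prev = ""
        for record, digest in self.entries:
            if self._digest(record, prev) != digest:
                return False
            prev = digest
        return True

rep = NymReputation("some-nym")
rep.rate("alice", 5)
rep.rate("bob", 3)
print(rep.score())   # 4.0
print(rep.verify())  # True
rep.entries[0][0]["score"] = 1  # tamper with history
print(rep.verify())  # False
```

A real deployment would additionally need signatures and a gossip or consensus layer so the chain can't simply be truncated; this only shows the data structure.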

> I am not sure if you are advocating that idea, or telling me that all
> these proposed libertarian/anarchist ideas are dumb *because* they
> implicitly assume that humans have been "patched" in some way.

I am suggesting that with the current agent makeup,
higher co-operative strategies are not stable. I do not know this,
of course, but it does seem a bit like kicking a dead whale up
a beach for a living.

> > Relevant aspects of human societies cannot yet be modelled
> > effectively.
> So there is really no code, no proof whatsoever, that these
> mechanisms will actually work?  You know, when some AGI researchers

I'm afraid the best way to model is to make it happen. 

> don't have working code, they get slammed.  But those AGI researchers

The advantage of real physical societies is that they're real
physical societies. Once they happen, they can be observed.

> are actually working to produce the code, even as they get slammed ...

I have no beef with people who build systems. I have problems with
the usual brand of AI mental masturbation, which is sterile.

> whereas you seem to be saying that libertarian fantasies *cannot* yet

I'm not interested in libertarian fantasies. Just emergent higher
co-operative behaviour as a side effect of smarter agents. It's
probably a series of spatiotemporal phase transitions, according to
what little we know from ALife simulations.

> be modeled, so I guess those fantasies deserve to be slammed even more
> than AGI theories, for which code is on the way.

How can you tell a kook? By the G. 

By all means, feel free to produce a working system.

> > Energy as currency backing is not useful, because it gets consumed
> > in the process. However, it would make sense to tie currency value
> > to a basket of raw resources, with periodically adjustable
> > composition and coefficients, which have to be however resistant to
> > gaming.
> >
> > It would be a return to metal-backed/non-fiats, but without the
> > disadvantages. Since we've been there, and we know the system is
> > currently poorly managed, the risk would probably be low.
> But all that was pure speculation, and more speculation is hardly a
> response to my request for proof.

The proof of the pudding is in the eating. (And, no, you can't
have your cake, and eat it, too).

> (And, BTW, the vast majority of economists seem to think that going
> back to a metals standard is crazy in spades.)

Which is why, you'll notice, I'm *not* suggesting a metal-backed currency,
or any backed (or baked) currency, but tying currency to a diverse,
periodically readjusted resource basket, with built-in checks against
gaming via adjustment of the basket's composition and weights.
See the difference? I thought you would.
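The arithmetic behind such a basket is simple to sketch. Everything below is a made-up illustration, assuming hypothetical resources, quantities, and spot prices; "rebalancing" here is just uniform rescaling of coefficients, the crudest possible periodic adjustment.

```python
# Hypothetical resource basket backing one currency unit:
# quantities of each raw resource, all values invented for illustration.
basket = {
    "steel_kg": 0.5,
    "grain_kg": 1.0,
    "crude_l":  0.2,
}

def unit_value(basket, spot_prices):
    """Value of one currency unit = weighted sum of spot prices."""
    return sum(qty * spot_prices[res] for res, qty in basket.items())

def rebalance(basket, target_value, spot_prices):
    """Periodic adjustment: scale all coefficients so the unit keeps
    its target value as spot prices drift. A real scheme would also
    rotate the composition to resist gaming of any single resource."""
    factor = target_value / unit_value(basket, spot_prices)
    return {res: qty * factor for res, qty in basket.items()}

prices = {"steel_kg": 0.8, "grain_kg": 0.3, "crude_l": 0.5}
v = unit_value(basket, prices)  # 0.5*0.8 + 1.0*0.3 + 0.2*0.5 = 0.8
basket = rebalance(basket, 1.0, prices)
print(unit_value(basket, prices))  # back to 1.0
```

The interesting (and unsolved) part is choosing the adjustment rule so that no market participant can profitably front-run the rebalancing; the code above deliberately ignores that.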

> > Yes, you have to fix the agent. Current agents won't do.
> Again, this is mind-boggling.

If you think humanity is perfect, you're on the wrong list.

> I really want to hear more about this "fixing the agent" business.
> I am puzzled as to how libertarians propose to "fix" people. It
> sounds profoundly ominous.

I wouldn't know. Ask the libertarians.
