[ExI] Libertarianism wins again...

David Lubkin lubkin at unreasonable.com
Wed Jul 27 14:58:16 UTC 2011


Dan wrote:

>Since you seem to agree with self-determination, then wouldn't this 
>impose a side constraint on everyone? I mean, in particular, if you 
>hold that all should be free to self-determine, then this imposes 
>limits on what others may do -- if one is to remain consistent. The 
>limits would be that everyone else can't interfere in someone's 
>self-determination. And that person couldn't, likewise, interfere in 
>anyone else's self-determination. Wouldn't this lead to 
>libertarianism? In other words, you do as you please with yourself and 
>the same applies to all others.
>
>If not, what do you mean? In my view, either you accept this as a 
>universal principle or you don't. If you don't, there are a few 
>possibilities. One is that no one is allowed self-determination. 
>Another is that one or some persons have this right, but no one else 
>does. The one or some can self-determine, but everyone else can be pushed around.

Now we're back to my point. We all, except maybe the Dalai Lama
and friends, *are* comfortable with pushing around persons that
we consider sufficiently inferior to ourselves -- cattle, termites,
bacteria, rattlesnakes, computer software, etc. -- because we
don't include them in the protected class of "person."

The closer we are to that being in capabilities, the more willing we
are to concede that it is a person, and has rights. We didn't mind
abusing heathen or infidels, then slaves, Jews, Chinese, women,
homosexuals, imbeciles, dwarfs, atheists, etc. We now care
about dolphins and dogs, but constrain their self-determination.
We'd be upset if Koko were tortured or killed, but no one has
given her civil rights even though humans of the same IQ have them.

We are creating or transforming ourselves into all manner of new
beings, and there will not be uniform agreement on which of them
should be considered persons. And I would not expect a being to
whom we are as termites to treat us any better than we treat
termites, despite Eliezer's best efforts.

I think that unless there's a deus ex machina to establish the
definitions and enforce a moral framework, the bottom line will
be utilitarian. Why is it better for A to reciprocate with B where
there's an extreme imbalance in capabilities between the two?

If termites could negotiate with us, they could argue the benefits
they provide in breaking down dead biologics. We might agree not
to harm them as long as they stay out of our wooden homes and
book collections, and mark those in a way the termites can detect.

I'd rather look for utilitarian reasons why it would be better for
an AI not to convert me to computronium, or to convert me
in situ (I don't mind also being part of its brain if I don't know
I am), than to try to persuade it to follow a philosophical
precept of a non-aggression or self-determination principle.


-- David.
