[ExI] elections again

Harvey Newstrom mail at harveynewstrom.com
Mon Dec 31 02:38:01 UTC 2007


On Sunday 30 December 2007 15:58, Mirco Romanato wrote:
> You can and must impose your will on others, if you like.
> You can not use force first.
> Against a thief I can impose my will, stopping him from stealing my stuff.

I make a distinction between imposing my will on others, and preventing them 
from imposing their will on me.  I don't see them as morally equivalent.  So, 
I agree with your distinction that you can impose your will on others given 
your example.  I just wouldn't call that imposing my will on others.

> If it were always possible, there would be no conflicts.
> It is a sad reality, but a reality nonetheless, that sometimes there are
> conflicting interests and no way to resolve the conflict.
> The law was developed to solve these conflicts in a consistent and
> predictable way, sometimes even in a fair and useful way.

Agreed.  And I think new technologies require new laws to solve new conflicts 
in a consistent and predictable way.  But I fear that too many transhumanists 
don't want to do this.  They want to just plow ahead with their interests, 
ignoring others' interests, without even trying to avoid conflict.

> Only if you are caught before completing the AI and starting to exploit it.

I think such a plan would be evil.

> How can someone disarm me that don't want be disarmed when someone will
> try to attack me?

I didn't understand how to parse this question.

> This is why I write about feelings.
> Because "people who don't want the possibility of being shot" are
> speaking about feelings, not reality.
> There is no way they can be sure no one will be able to shoot them.
> They can only disarm the peaceful people, not the people inclined to
> attack them.

You misunderstand.  I am not asking to avoid all unknown risks.  But I am 
talking about known risks.  Having my neighbor point guns at me is a risk.  
Even if he says he won't shoot if I don't threaten him, I still find myself 
at some risk.  This is a real risk to me.  I want my private property out of 
gunshot range of my neighbor.

> You don't argue about vegans because they are not violent and don't go
> around killing, menacing, and so on.

Actually, there are some rare militant vegans who believe killing animals is 
murder.  They can represent a real threat, because they do get violent toward 
humans to protect animals.  I don't know how to resolve these conflicts, but 
this is another excellent example of the conflicts I am talking about.  
Futurists are going to run into more and more conflicts where the general 
populace just doesn't agree with the way futurists think.

> If it is about world-view, there are no solutions,
> because we are in the same world with them.
> Either we bow to them or they will bow to us.

Sadly, that seems to be the current trend.  Is that the only answer, for one 
side to lose to the other?  Are we so sure that there is no win-win or 
compromise, that we shouldn't even try to come up with solutions?  I admit I 
don't have the answers, but I am concerned that most futurists aren't even 
interested in trying to look for answers.

> Do you know why these people are so easily offended by "us"?
> Because we don't bite back easily.
> That is why the same people don't bother to attack people who "bite"
> without warning, even for behaviors they would consider a greater
> insult and menace.

I don't believe this.  I believe most people who are offended have serious 
reasons to be offended.  Even if I don't agree, I can see their viewpoint 
within their own worldview.  We futurists literally are a threat to the world 
as it is.  People who don't want their world to change really see us as 
destroying the world.  People who can't tolerate abortion, foreigners, or 
gays certainly 
won't be able to tolerate uploading, non-human entities, or new 
human-nonhuman interactions.

> They can, and the best will, change their jobs.
> They can pool their resources and work for each other, or buy robots
> themselves to do work for others.

This is a good example of the type of solutions I am asking about.  If people 
are truly enabled to find alternate work or become beneficiaries of the new 
technology, they won't mind giving up the old ways.  It is only when the old 
ways are clearly destroyed without an obvious replacement for their needs 
that people become reactionary.  I think these kinds of examples are possible 
if we try to find them.

> You can not.
> Or they bow you or you bow them. Or was blow?

Ha ha!  It seems like that is how people think.  It must be one way or the 
other.  Maybe a mutual outcome is possible too.

> We can not.
> Tolerance is only for tolerant people.
> It is a reciprocating thing.
> You could be tolerant to start, but being tolerant when the other
> party is clearly intolerant is simply stupid.
> Saints against demons always lose.

Possibly.  Maybe probably.  But can we be more rigorous and prove this 
theorem?  I don't want to become a demon if the angels ultimately win.  But I 
don't want to stay an angel if the angels always lose.  I don't know if 
skirting the line between them is even possible.  If I must be a demon, 
should I be the most angelic demon possible, with only demonic qualities when 
necessary?  Or should I become the most demonic demon possible?  These really 
are good questions.  Simplistic answers without proof or high reliability are 
not sufficient to convince me either way.
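
This "theorem" actually has a formal cousin: Robert Axelrod's iterated 
prisoner's dilemma tournaments.  Below is a minimal Python sketch (my own 
mapping of the saints-and-demons question onto that game, not anything Mirco 
proposed): "angel" always cooperates, "demon" always defects, and tit-for-tat 
is the most angelic demon possible, cooperating first and retaliating only 
when provoked.

# A toy model only; the strategy names are my framing, and the payoffs
# are the standard prisoner's dilemma values.
PAYOFFS = {  # (my move, their move) -> my score
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def angel(their_moves):
    return 'C'                        # unconditional tolerance

def demon(their_moves):
    return 'D'                        # unconditional aggression

def tit_for_tat(their_moves):
    # Cooperate first; afterwards mirror the opponent's last move.
    return their_moves[-1] if their_moves else 'C'

def play(a, b, rounds=100):
    """Total scores for strategies a and b over repeated encounters."""
    moves_a, moves_b = [], []         # each side sees the other's history
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(moves_b), b(moves_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        moves_a.append(move_a)
        moves_b.append(move_b)
    return score_a, score_b

for x, y in [(angel, demon), (tit_for_tat, demon), (tit_for_tat, angel)]:
    print(x.__name__, 'vs', y.__name__, '->', play(x, y))

Running this, the angel is wiped out by the demon (0 to 500), while 
tit-for-tat nearly matches the demon (99 to 104) and flourishes among fellow 
cooperators (300 each).  That is at least suggestive support for "tolerance 
is a reciprocating thing," though it is a toy model, not a proof.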

> Would you not threaten the Saddams and the Taliban of the day?
> Or would you let them do as they see fit?

Best choice would be to somehow protect ourselves without doing anything to 
them.  (Yes, I don't know how, so this is theoretical.)  Second choice would 
be to stop them as needed, but already this gets complicated.  I see the 
Taliban as an enemy who attacked the U.S., and Saddam as incapable of 
attacking the U.S.  (So already, I am disputing the basis of your question 
here.)  Third choice would be to attack them after they attack us.  Fourth 
choice would be to preemptively attack them before they attack us.  This last 
one is unacceptable with the current administration, which cannot be trusted 
to identify who is a threat and who is not.  The least acceptable choice is 
to preemptively kill people who may be innocent, may not have attacked us, or 
cannot be proven guilty.  This least acceptable choice is too weak to be 
effective, will kill many innocents, and will squander our resources against 
non-threats, weakening them against real threats.  (Again, these are 
theoretical preferences of mine, with no guarantee that they would really 
work in the real world.  But I don't see any other schemes that seem more 
likely to work either.)

> > It should be possible for everyone to practice their
> > own religions without forbidding anybody else's.
>
> It is impossible if your religion supports world domination and makes
> it a duty.

Right, which is what I meant by the second part about "without forbidding 
anybody else's".  This is a big difference between compatible religions and 
incompatible ones.

> Whether we consider concerns "legitimate" is more about how strong the
> concerned party is, how strong we are, and what both have to lose and gain.

I may not understand what you mean.  But I don't see how the strength of the 
concerned party has any bearing on how legitimate the concern is.

-- 
Harvey Newstrom <www.harveynewstrom.com>
CISSP CISA CISM CIFI GSEC IAM ISSAP ISSMP ISSPCS IBMCP


