[ExI] elections again

Harvey Newstrom mail at harveynewstrom.com
Tue Jan 1 21:46:04 UTC 2008


I apologize in advance for this being so long and rambling.  I am trying to 
respond to every single point without ignoring any.  But the result is that 
it goes round-and-round, saying the same thing over-and-over.  I am enjoying 
this conversation, but don't want it to get too convoluted and down in the 
weeds.  Don't feel compelled to respond to every point.

Ignoring all the detailed back-and-forth below, my summary is this:
I totally agree with the idea that we shouldn't initiate force.  Those who do 
are wrong.  That is how we should determine the right-and-wrong of all future 
conflicts.  However, my point is that all people believe they adhere to this 
principle.  Yes, the Muslims, the terrorists, the criminals, everybody thinks 
they are the underdog being attacked unfairly, and that their response, 
including violence, is justified because the other party initiated force 
first.  Also, the interpretation of who initiated force depends on 
world-views and interpretations of rights, which vary from culture to 
culture.  Thus, the rule of not initiating force is insufficient.  Everybody 
is applying it, but differently.  We need to refine it, explain it, or 
specify its implementation well enough to show conclusively that we are 
implementing it correctly and they are implementing it incorrectly.  I don't 
know how to do this rigorously and methodically without resorting to a mere 
declaration that we are right and they are wrong.

The further ramblings below are not necessary to understand my summary above, 
but are included for completeness.  After I responded to all of the points 
below, I felt like I was getting off-track.  So I wrote the summary above, 
which is more succinct, and I don't care if anybody reads the following 
ramblings or not.

Oh, and Happy New Year!

-- 
Harvey Newstrom <www.harveynewstrom.com>
CISSP CISA CISM CIFI GSEC IAM ISSAP ISSMP ISSPCS IBMCP
____________________________________________________________
On Tuesday 01 January 2008 04:39, Samantha Atkins wrote:
> Not all environments would be equally conducive to its highest desired
> functioning.  Great capacity doesn't mean it is totally self contained
> and self sufficient.

I never claimed this diversity was the only possibility.  I was giving a 
counter-example to the claim that a single non-friendly environment was the 
only possibility.

> This assumes that the needs/desires crossed with the capabilities of
> AGIs will leave ample room for humans to exist.  We aren't nearly as
> difficult to eradicate, even accidentally, as cockroaches.  We are
> much more fragile with more needs.

I did not assume that this would occur.  I was giving a counter-example to the 
claim that total annihilation was the only possibility.

> But we aren't talking about brute-force once self-improving AGI
> exists.   There need be nothing akin to normal evolution about it.

Someone was talking about brute-force being solved by pure speed, which is why 
I countered that example.

> Well, what qualities do we consider as god-like and how do we know
> exactly how much smarts it takes to obtain some of those powers?

I was speaking against people who think AIs will break the laws of physics (as 
we currently understand them), go faster than light, travel through time, and 
convert the entire planet to computronium in the blink of an eye.  Those 
god-like powers are pure unsupported speculation.  I have no doubt that there 
will be many abilities that would seem god-like to us, just as we would seem 
to have many god-like abilities as viewed by our distant ancestors.

> Not 
> instantly no but I doubt it will take a self-improving AGI as much as
> a human generation to be able to do things that to us are decidedly
> "god-like".

Agreed.  I was speaking against "instantly" which some seem to expect.

> > They will have to perform slow physical experiments in
> > the real world of physics to discover or build faster communications,
> > transportation, and utilization of resources.
>
> A lot of the most important work of self-improvement of intelligence
> is internal and does not require so many physical world steps.  Once
> the AGI has optimized on its existing substrate it can see about
> upgrading its physical components.  I doubt it needs to figure out any
> new transportation methods or invent faster communications until it is
> already extremely advanced.

I think big advances in AI will require hardware self-improvements.  I don't 
think it can all be done in software on the AI's existing hardware.  Each 
step that requires a hardware upgrade will be much slower than the pure 
software self-modification that most people are talking about.

> One of the first external science priorities will likely be MNT.  It
> will not need conventional factories.  The benefits of MNT will be too
> great for all humans to want to stop it.   For that matter it will
> very early on be such a beneficial boon  that it will find patrons and
> protectors easily.

Agreed.

-- 
Harvey Newstrom <www.harveynewstrom.com>
CISSP CISA CISM CIFI GSEC IAM ISSAP ISSMP ISSPCS IBMCP

On Tuesday 01 January 2008 05:22, Samantha Atkins wrote:
> The phrase "impose our will" is rather imprecise.  At any rate a lot
> less precise than abstaining from introducing physical force or fraud.

I mean the same thing.

> Shooting you would be an initiation of force totally against
> libertarian first principles.   What someone may merely want to do but
> not actually attempt to do (due to principles, the likelihood of being
> punished or shot back at, etc.) is not actually a problem.

But I can't wait until the trigger is actually pulled before I defend myself.  
So the question is, when can I fight back?  Can I disarm a person for 
pointing a gun at me?  Can I disarm a person for bringing a gun into my home?  
Can I disarm a person for bringing a gun into my yard?  What about standing 
just outside my yard, looking in?  What about sitting next to me on a public 
bus?

I agree totally with the theory of no harm, no foul.  But how much preparation 
and readiness to do harm do I have to put up with before I can call foul and 
put a stop to it?  Many people are more sensitive than others to guns being 
around them or their children.  The perception of what is threatening or 
dangerous varies between cultures and individuals.

> >  Say you
> > want to sit on your property with your gun aimed at me while I move
> > around on
> > my property.
>
> That would be a pretty direct threat of physical force so again not
> allowed.

Glad we agree about that.  (Some people don't.)

> Again you are confusing hypotheticals and possible dangers with actual
> aggression. 

No, I am not.  We agreed that gun owners can't point a gun at me.  It is only 
hypothetical that they might shoot, but it is too possible for comfort.

People could easily interpret a doomsday machine or an ultra-powerful AI as a 
situation similar to a gun pointed at them that could destroy them at any 
moment.  Even if you don't believe this is analogous, you can guess that some 
people will interpret it that way.

> There is no way to avoid all possibility of harm, as the
> Precautionary Principle would have us do.

I never claimed this.  I am arguing against the super-AI that some insist is 
guaranteed to wipe out humanity.

> That is not a question of 
> rights at all.   Being invasive of others property and space because
> they might do something or have done something that might harm you is
> utterly unjustified and an obvious initiation of force.

Exactly my point.  I am not claiming new rights.  I am pointing out how people 
are going to interpret many of our technological advances as direct invasions 
of their property and space that can easily harm them.  They will interpret 
this as an obvious initiation of force.  That is what I am trying to warn 
about.  Most technologists here have no concern for how people are going to 
react to their technology.  But I guarantee that if some (even incorrectly) 
interpret your technology as a loaded gun pointed at them, they will use 
deadly force to prevent you from completing construction of that technology.

> Baloney.  That someone has the means to harm you does not mean they
> will.  You cannot punish them or by force render them completely
> harmless.

Agreed, in your worldview, with you holding the gun.  But how about if I 
build a nuclear bomb next door?  Just because I have the means to harm you 
does not mean that I will.  You cannot punish me or by force render me 
completely harmless.  Do you feel the same way in this example?  What about 
nukes in other countries pointed at us; is that the same?  My point is that 
what seems obvious to you given your cherished technology may not seem 
obvious to someone else given some other technology.

> > They believe that babies are being murdered and must be protected.
>
> Then this is a notion totally in their heads that they would by force
> impose on others.  So they are the aggressors.

This is the problem.  They would claim the exact same thing about you.  The idea that 
babies aren't alive yet before birth is totally in your head.  They would 
claim that you are imposing force and that you are the aggressors.  It is 
easy to just state that we are right and they are wrong.  But they state the 
exact same thing in reverse.  So the rules about not initiating force don't 
prove anything.

How do we prove the other side wrong?  Or how do we prove to lawmakers or 
societies or third parties who is correct?  Simple rules don't solve these 
interpretation problems.

> Hardly as this ignores the obvious point that no one can conclusively
> show that a fetus, especially in early pregnancy, is a human being to
> be protected.  That is a central issue in contention.   Those who
> believe that a fetus is a child etc. cannot legitimately impose their
> opinion on those who do not who are unhappy to be carrying said fetus.

Agreed.  But this just opens up another can of worms, wrt the precautionary 
principle.  We can't prove a fetus is a human.  But we can't prove it's not.  
Where exactly is the line?  If it is blurry, which way do we err?  The 
science will be disputed, and the direction of safety argued back and forth.

> But this is an arbitrary opinion they have no right to impose on others.

I agree with your opinion.  But others don't.  My question is how do we 
resolve this?  Merely stating that we are right and they are wrong doesn't 
solve the controversy.  People still are fighting about it.

> > I don't know.  Many people in this country object to humans coming
> > from other
> > countries to take jobs away.
>
> Do they take jobs away?   I am not so sure. 

You miss my point.  It doesn't matter if they are right or not.  I am arguing 
that the rebellion against robot workers will be stronger and more obvious in 
most minds than the rebellion against foreign workers.  I am merely 
predicting this rebellion.  I am not saying that it is right.  But it will 
definitely happen.

> Feelings are not facts and do not confer rights.

You keep saying this, treating "their" positions as feelings rather than 
facts and "our" positions as facts rather than feelings.  But merely 
asserting it doesn't prove 
anything.  In this example, there are no objective facts.  They say citizens 
deserve jobs more than foreigners.  You say they don't.  How is either side 
more factual or less feeling-based?  What is the basis for argument besides 
just asserting it?  That's what I'm not getting.  I'm just seeing assertions.  
But how do you prove it?  How do you support it? 

I'm not disagreeing with you at all.  I'm asking how we can explain this to 
convince others.  There is more and more rebellion against technology 
brewing, and I don't see many technologists paying attention or even giving 
rational answers to the objections.  I just see assertions without convincing 
argument.  (Even though I agree!)


>
> > Imagine how much more adamant they would be that
> > humans deserve these jobs more than machines.
>
> Tough cookies.

See what I mean?  This doesn't solve anything.  This won't convince Congress 
not to outlaw your technology.  Is this our only approach, to become outlaws 
and ignore what society or governments think?  Do we not even try to explain 
our position?

>
> >  It doesn't even matter if they
> > believe in entitlements or not.
>
> You're right, it does not matter, because such entitlements are bogus.
>
> >  They have to work to feed their families,
> > and these machines are threatening their families.
>
> No they are not.  Working is not necessarily the only way to have
> enough to feed your family.  I do not know what other arrangements
> will be worked out but it is pretty certain that sooner or later we
> will have such an abundance economy and few enough people will be
> qualified for the jobs that are not yet automated that it will not
> make a lot of sense to insist that you have to have a j-o-b to partake
> of the abundance.

This is a pretty weak argument.  It could be the case.  But I think people 
will want to receive the abundance before they lose their jobs, not after.  
I can't imagine this explanation satisfying anyone who isn't already 
convinced.

> In some Muslim countries it is quite physically dangerous to attempt
> to leave the faith or even be known to question it.

Exactly.  This is an example of an intolerant belief system that I am talking 
about.

> They can believe whatever they wish.   We only have a problem when
> they initiate force.

Again, this is what I have been saying.

> This is a "problem" only  if you claim that the "interests" of others
> must be catered to regardless of their nature.  Only the interest in
> being free from initiation of force and thus free to lead one's own
> life unmolested is an interest that all must abide by.

No, you misunderstand.  I am not saying we have to cater to them.  I am asking 
how we convince people that our side is not initiating force.  My point is 
that the other side is using the exact same rules that we are about not 
initiating force.  It is just that they interpret force differently such that 
our exercise of our freedoms is initiating force against them.

> >> Do you prefer suppress freedom or suppress conflicts?
> >
> > I prefer that we suppress conflicts.  There must be some win-win
> > scenarios.
>
> What kind of conflict?  All conflict?  Then only the grave will do I'm
> afraid.    Suppression of freedom is conflict!   Do not suppress
> freedom and much worrisome real conflict dissolves.

As I explained in another post, I do not see these as mutually exclusive.  
Just because I hope to prevent conflicts does not mean I want to suppress 
rights.  I want to find a way to do both.  Those who don't believe we can do 
both, and who don't care about conflict, misinterpret my response to mean 
that I would rather suppress rights than allow conflicts.  That is the 
opposite of my opinion.

>
> > It should be possible to build an AI without threatening to overthrow
> > humanity's governments.
>
> Getting rid of governments is a fine idea since they are the primary
> initiators of force!

Isn't this itself an initiation of force against governments, or against 
those groups that want their government?

> > There
> > should be an answer to most conflicts.
>
> Compromise no matter what is no answer at all.  The relatively evil or
> at least "less good" always wins in such compromise for compromise
> sake.  Conflict per se is not evil, especially conflict of mere beliefs.

I never called for compromise.  I am trying to figure out whether there is a 
way to allow all freedoms to be exercised without them interfering with each 
other.  People who don't see a way to do this (and I admit I don't know how 
either) misinterpret me as advocating compromise.  I do not.

> > Simply having one side override the
> > other side is usually not the answer.
>
> It is if one side doesn't really have much of a leg to stand on.   It
> is when all true rights are on one side.

This is the basis of all war and all terrorist actions.  All initiators of 
force claim the exact same thing.  Virtually no initiators of force perceive 
that they are the initiators of force.  They interpret that the "new" 
technologies or activities are the initiation of force, and that they are 
defending themselves against the attackers.

> > But often, there are legitimate concerns that should be
> > addressed rather than ignored when developing new disruptive
> > technologies.
>
> We deal with these one by one without throwing away freedom from
> initiation of force.

That's what I am trying to talk about.  But nobody seems to get to this 
point.  They are so sure that every single one of their beliefs is so 
obviously right that they see no reason to deal with any of these concerns 
as described above.

> > But it is much more complicated and messy than people like to imagine.
>
> It surely is if you start pretty much with no real guiding principles
> and just attempt to make everyone as happy as possible.

Is that what you read in my postings?  I don't know how to be clearer.  I am 
not saying that we should not use the guiding principles of "no initiation of 
force."  I think that is the guiding principle.  My point is that the 
terrorists and anti-technology camps are claiming the exact same guiding 
principles, and it is leading them to do what they are doing.  The same 
principles!  We need to figure out what is different between our 
implementation and their implementation that can be generically explained in 
advance, other than "we're right and you're wrong."  The concept of "force" 
itself is ill-defined and interpreted differently by different people.
