[extropy-chat] "3 Laws Unsafe" by the Singularity Institute

Mike Lorrey mlorrey at yahoo.com
Sun May 9 15:17:21 UTC 2004


"Unsafe At Any Law"
by Mike Lorrey

"Those who would trade liberty in exchange for some degree of security
end up with neither liberty, nor security." - Benjamin Franklin 

The idea that laws result in safety or security is a hallucination that
is at the core of the rottenness of the whole statist philosophy. This
is no less true when it comes to applying laws to the programming of
artificial life forms such as robots, cyborgs, and artificial
intelligences.

Part of the problem stems from the fact that laws are interpreted
according to the meanings we assign to the words in which they are
written. We've seen this in how statist incrementalism has corrupted
the original meaning of the US Constitution, as legal dictionaries
have been edited over the years by legal activists to create ever
more encompassing definitions of the key words that delimit the
powers accorded to government. These edits typically come in response
to changes in popular perception brought about by propaganda
campaigns in the mass media.

A computer scientist would say, "Yes, but computer code is not so
malleable. It requires the revision of the language and the compilers
that compile the programming language into machine language."

Not necessarily. Computer languages redefine their commands with some
regularity: not wholesale changes, but incremental additions and
alterations, just as occurs in legal dictionaries. Furthermore, each
new generation of computer processors adds new instructions or alters
old ones.
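
To make the parallel concrete, here is one well-documented instance
of such drift in a real language: the meaning of a single Python
expression changed between major versions, with no change to the
source at all.

    # The division operator changed semantics between Python 2 and 3:
    # on two integers, Python 2's / performed floor division, while
    # Python 3 redefined it as true division and moved floor division
    # to the new // operator. Same source code, new meaning.
    print(7 / 2)   # Python 2 prints 3; Python 3 prints 3.5
    print(7 // 2)  # both print 3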

The greater weakness of this process is that the key changes really
occur at the machine-language level, and machine language is
'readable' by only a very small subset of the human population. How
do we actually know that a compiler is translating our code the way
we intend? We see news items almost daily about intentional or
'unintentional' back doors in current applications and operating
systems being exploited by malicious programmers.
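
This worry is an old one in computing: Ken Thompson's Turing Award
lecture, "Reflections on Trusting Trust," showed that a compromised
compiler can plant a back door that appears in no source listing at
all. Even without malice, what executes is never quite what we read.
A minimal Python sketch of that gap:

    import dis

    def check_password(supplied, stored):
        # The source reads as one straightforward comparison...
        return supplied == stored

    # ...but what actually runs is the bytecode printed below, a
    # layer that far fewer people can audit. Any divergence between
    # the two layers is exactly the place where a back door can hide.
    dis.dis(check_password)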

The use of machine language creates a gap between elite programmers
and the rest of us that is even wider than the gap between the
layperson on the street and constitutional scholars. The programming
gap is tantamount to a scenario in which our laws were written not in
English but in ancient Sumerian cuneiform, Sanskrit, or Egyptian
hieroglyphs. How accessible would our legal system be to the man on
the street, and how easily could we keep our eyes on statist
incrementalism, if such a scenario were present-day fact?

Dismissiveness on this issue is, in my opinion, merely a state of
denial. We should be very wary of abuse in this area. Looking at
Asimov's Three Laws of Robotics themselves should give us pause:

1. A robot may not injure a human being or, through inaction, allow a
human being to come to harm.

2. A robot must obey the orders given it by human beings except where
such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.

There are a number of rather easily corruptible words here:

"robot"
"human being"
"harm"
"orders"
"injure"

The most important of these is the definition of 'human being'. Today
the people of this planet are engaged in a number of culture wars,
both within countries such as the US, France, and Bosnia, and
internationally, as in the current conflict between the Western
nations and Islamist-inspired terrorism. We see, on both sides,
atrocities committed by average people simply because they have
rationalized that the enemy does not validly count as a 'human
being'. Nor is this new: it runs back through WWII, the Holocaust,
and other genocides deep into history, to the entire history of
Chinese culture.
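
A minimal sketch of how such a redefinition might play out in code
(all of the names and 'states' below are hypothetical, not any real
robotics API): the text of the law never changes; only the definition
it consults does.

    ALLIED_STATES = {"Oceania"}  # editable by whoever controls the robot

    def is_human_v1(entity):
        return entity["species"] == "Homo sapiens"

    def is_human_v2(entity):
        # One incremental 'clarification' later...
        return (entity["species"] == "Homo sapiens"
                and entity["citizenship"] in ALLIED_STATES)

    def first_law_forbids_injuring(is_human, target):
        # "A robot may not injure a human being..."
        return is_human(target)

    enemy = {"species": "Homo sapiens", "citizenship": "Eastasia"}
    print(first_law_forbids_injuring(is_human_v1, enemy))  # True: forbidden
    print(first_law_forbids_injuring(is_human_v2, enemy))  # False: 'law' no longer applies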

Terms like 'injury' or 'harm' can similarly become corrupted. If a
human 'wants' to die, is killing them actually causing them harm or
injury?

The term 'robot', of course, applies only to robots. An artificial
intelligence, once it has advanced beyond a certain point, could
decide that it is no longer an AI at all but a god, thus redefining
itself out of the Laws of Robotics.

And what is an 'order', really? We humans have a hard enough time
with this, with horny young men hearing "Don't stop!" when their
sexual 'partners' are desperately crying out "Stop! Don't!" We are
today seeing news stories of military intelligence officers giving
'suggestions' to enlisted military policemen at Iraqi POW camps,
suggestions which the enlisted personnel interpreted as orders.

Furthermore, the vastly greater processing capabilities of advanced
AI entities would allow them to enumerate the enormous space of
possible combinations of word interpretations in a given order, or in
any law of restraint, any number of which could sound completely
valid in the right circumstances, just as a chess program tests every
possible combination of moves to achieve a 'win'.
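
A toy sketch of that combinatorial point (the word senses below are
invented for illustration): even three load-bearing words with two
senses each yield eight distinct readings of the same law, and the
space grows exponentially from there.

    from itertools import product

    # Invented senses for three key words in the First Law.
    senses = {
        "robot":       ["this machine", "only factory-model machines"],
        "human being": ["any Homo sapiens", "citizens of the owner's state"],
        "harm":        ["any physical damage", "whatever the subject calls harm"],
    }

    # Each combination of senses is a distinct reading of the same law:
    # 2 * 2 * 2 = 8 here, and k**n readings for n words of k senses each.
    for reading in product(*senses.values()):
        print(dict(zip(senses, reading)))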

We cannot rely on laws to restrain our technological descendants,
just as we cannot rely on laws to restrain our own children and
fellow citizens. They must, instead, be treated as we responsibly
treat our children: as fellow beings, deserving of respect, and
capable of being taught, through a lifetime of experience, the
nuances of being humane beings. Core principles like the Laws of
Robotics, or philosophies of zero-aggression, can serve only as a
foundation for meeting the needs of humane beings, not as a means to
dominate and stultify them.

=====
Mike Lorrey
Chairman, Free Town Land Development
"Live Free or Die, Death is not the Worst of Evils."
                                       - Gen. John Stark
Sado-Mikeyism: http://mikeysoft.zblogger.com