[ExI] Re: What surveillance solution is best - Orwellian, David Brin's, or ...?
TheMan
mabranu at yahoo.com
Tue Jun 26 04:23:31 UTC 2007
> > Jef Allbright <jef at jefallbright.net> wrote:
> >
> > > The cosmic race is simply a fact of nature,
> >
> > The _cosmic_ race? You mean the fact that tech races
> > will go on throughout the universe for eternity anyway,
> > with or without mankind's participation? Don't you
> > care whether this particular race going on on Earth
> > now is going to continue leading to better and better
> > things for ever, or soon stop itself by creating
> > something that kills us all?
>
> My point was that "the race" is simply a fact of nature, "bloody in
> tooth and claw", and that rather than framing it as a "nightmare",
My use of the word "nightmare" referred to the threat
from the super-high-tech weapons that may soon be
developed, with which one or more of the thousands of
ordinary, angry and crazy human beings alive today may
choose to terminate all life on this planet.
Of course, technology is great in many ways, but I've
got the impression that most extropians tend to focus
too much on the boons and underestimate the perils.
For example, only a tiny part of Kurzweil's "The
Singularity Is Near" is about the perils of the coming
technologies; the rest is about the great things these
technologies can bring us. And when it does confront
arguments against the singularity, they are mainly
arguments of the form "no, a singularity won't happen,
because of this and that", and very few concern the
risks.
If you can be motivated to better or equally good
actions by feeling only excitement and no fear, that's
great. I'm just not sure one will be sufficiently
aware of the risks, and take sufficient action to
diminish them, without acknowledging the nightmare
aspect of a global arms race that seems about to spin
out of control.
The race is a fact of nature, I agree, but it would
proceed even if restricted, just at a slower speed.
Just as you point out, the acceleration of change may
soon make it impossible for any government, or any
other groups or individuals for that matter, to
restrict the use of too dangerous technologies (in an
ordinary, democratic manner, that is) before it's too
late. So if the speed of development could be lowered,
it would be safer, and mankind would still reach
singularity sooner or later. An Orwellian world
despot, with the power to prevent everyone else from
experimenting with new technologies, would,
statistically, be more careful with what experiments
he allows, than would the least careful of countless
free and eager engineers, cults and terrorists in the
world. The kind of society David Brin suggests might
have a similar dampening effect on the perils of tech
development. But in a free society that follows the
proactionary principle, without a ubiquitous
surveillance system watching out for dangerous uses of
new technologies, it seems to me that less careful
(and less morally sensible) engineers will get to
perform experiments than in the former two cases.
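To make that statistical point concrete, here is a back-of-the-envelope sketch in Python. Every number in it (the per-actor chance of permitting a catastrophic experiment, the count of independent actors) is an illustrative assumption of mine, not an estimate:

    # Back-of-the-envelope: chance that at least one actor permits a
    # catastrophic experiment, comparing a single gatekeeper with many
    # independent actors. All numbers are illustrative assumptions.

    def p_at_least_one(p_each, n):
        # Probability that at least one of n independent actors slips up,
        # when each slips up with probability p_each.
        return 1 - (1 - p_each) ** n

    p_careless = 0.01   # assumed chance that any one decision-maker is careless
    n_actors = 10000    # assumed number of free and eager experimenters

    print(p_at_least_one(p_careless, 1))         # one gatekeeper: ~0.01
    print(p_at_least_one(p_careless, n_actors))  # many actors: ~1.0

Even if the single gatekeeper is no more careful than the average engineer, a catastrophe only needs to slip past one of the many actors, which is what drives the difference.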
How easy will it be for the good people in the world
to always come up with sufficient defenses against
nanoweapons, and other supertech, in time before a
nanoweapon, or other supertech, terminates us all?
Wouldn't it be easier for the mankind-loving people to
secure an advantage over the mankind-threatening
people if technological development were slowed down?
An Orwellian system might slow it down and thus buy
some time, so it might be the best alternative for
mankind, even if its leaders protected mankind not for
mankind's sake but merely for personal profit.
> we might do better to recognize it as it is and do our best to stay
> ahead. Yes, it's significant that it was going on before, and will
> continue after, the existence of what we know as humankind.
>
> I realize that my writing tends to be very terse, and I compress a lot
> into metaphor, but how did you possibly get the idea that I might not
> care? Is this your way of inviting further clarification, or do you
> really suspect that?
I suspected it a bit because of your use of the word
"cosmic" in that context, but I wasn't sure, so I
wanted clarification. If I expressed myself
impolitely, my apologies!
> I identify as a member of humanity and care very much that our
> **evolving** values be promoted indefinitely into the future.
That's good to hear! I agree that it is just as
important to evolve as it is to survive, but why
evolve uncontrollably fast? I think it would be good
to break Moore's law now, if possible, and proceed
"slower than natural" during this last bumpy and
dangerous bit of the ride toward singularity. Given
that the universe will exist forever, why hurry? With
infinite time at our disposal, mankind may get to
evolve infinitely anyway! You are right that this
acceleration toward singularity is a law of nature,
but we have tamed nature before in various ways, so
why should we not be able to tame this phenomenon and
slow it down? It's all up to us. We create the
acceleration, so we must be able to temper it too. If
we say nature inevitably makes us evolve at an
accelerating speed, a person who has committed a crime
out of natural feelings of anger could similarly say
nature inevitably made him commit the crime. Just
because nature is "bloody in tooth and claw" doesn't
mean we should let it stay that way.
> > The tech race on our planet is inevitable - until it
> > stops itself by leading to something that extincts
> > all.
>
> So here we come to the significance of my statement that it is a race
> within a cosmic context. There is no "until" -- the race will
> continue with or without our participation, and this is significant,
> not because we should care if we're out of the game, but because it
> helps us understand the rules.
I assume you mean "shouldn't care", not "should care"?
> > The race can take differing paths, and we should,
> > at least to some extent, be able to influence what
> > path it will take, because we are the ones creating
> > it.
>
> Yes, it is critical, from our POV, that we exercise choice as
> effectively as possible.
Good to hear.
> > What I wonder is what path is safest, what path
> > minimizes the risk that the race stops itself through
> > a disaster (or an evil deed).
>
> We can never know "the correct" safest path, but we improve our odds
> to the extent that we apply our best understanding of the deeper
> principles of the way the universe works, toward promotion of our best
> understanding of our shared human values, expressed in an increasingly
> coherent manner. In practical terms, this implies the importance of
> an increasingly effective cognitive framework for moral
> decision-making.
That sounds like a never-ending process. The time that
would take might be better spent trying to figure out
how best to prevent mankind-threatening acts of terror
from taking place.
Mankind's situation today can be compared to that of a
chess player who 1) is forced to play a game of chess
to stay alive, 2) only needs a draw in order to stay
alive, and 3) now has very little time left in that
game until the next time control.
In such a case, there is no need to try to win; you
only need to avoid losing. If, in that situation, you
can make an advance that would improve your chances of
winning, but that at the same time opens up the
position around your king so that your opponent can
more easily attack it, you are much better off not
advancing - because a draw is enough for you. This is
a good analogy because mankind doesn't need to advance
faster than is necessary for survival.
The admittedly very sad fact that hunger and global
warming may soon become huge problems for hundreds of
millions of people, if we don't invent technology that
can solve those problems in time, still doesn't
threaten mankind's survival. Future individuals will
be much more numerous than present ones (if mankind
survives), and given utilitarianism, we should
therefore give that larger number of future
individuals much higher priority than the smaller
number of currently existing individuals.
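A crude expected-value comparison shows why that weighting dominates. The population figures and probabilities below are purely illustrative assumptions, not predictions:

    # Crude expected-value comparison: helping a large present population
    # vs. slightly lowering extinction risk. All figures are illustrative.

    future_population = 1e15   # assumed number of future individuals if mankind survives
    benefit_per_person = 1.0   # one arbitrary "unit" of well-being per person helped

    # Option A: solve a present problem for a billion people.
    value_a = 1e9 * benefit_per_person

    # Option B: reduce extinction risk by 0.1%, preserving the expected
    # future population in that fraction of cases.
    value_b = 0.001 * future_population * benefit_per_person

    print(value_a)  # 1e9
    print(value_b)  # 1e12 -- three orders of magnitude larger

On these assumed numbers, even a tiny reduction in extinction risk outweighs a very large present benefit; change the assumptions and the ratio changes, but the future term grows with the assumed size of the future population.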
> > Sousveillance implies watching "from below", meaning
> > there is someone "above" you, someone who still has
> > more power than you. This is not the only alternative
> > to surveillance. A society is thinkable where there
> > are no governments with any of the advantage in terms
> > of surveillance and overall power that they have
> > today, a society where everybody has equal ability to
> > watch each other, and equal power to stop each other
> > from doing evil. That would not be sousveillance
> > but... equal interveillance?
>
> Sorry, but again it comes down to information. You're neglecting the
> ensuing combinatorial explosion and the rapid run-up against the
> limits of any finite amount of computational resources. To function,
> we (and by extension, our machines) must limit our attention, there
> will always be gradients, and that's a good thing. Gradients make the
> world go 'round.
You mean gradients in the sense that everybody can't
have equal power? Of course, everybody can't have
equal power, but it might be good to choose a system
that gets closer to total equality than other systems,
even if it doesn't reach it.
> > Would you rather have that kind of system than the
> > kind of system we have today?
>
> I passionately desire, and work toward, a system that increases our
> effective awareness of ourselves and how we can better promote our
> evolving values. Such a system does not aim for "equality", but
> rather, growth of opportunity within an increasingly cooperative
> positive-sum framework.
I don't see equality as a morally good thing in and of
itself. If I were to choose between a society of
billions of people where everybody is happy except one
person who is terribly miserable, and a society where
everybody is mid-way between happy and miserable, I
would choose the former without hesitation, because it
contains a much larger total sum of happiness, and I
wouldn't care that it is a lot less equal.
But in the case of the coming development of ever more
dangerous technology, a more equal distribution of
power in society would probably have great
_instrumental_ value. The more people who have as much
power as those with the most power, the more people
there are who are able to intervene, when necessary
and in time, against any of the most powerful who
become dangerous to all of us. If the power to
intervene is less equally distributed, fewer people
will be able to intervene against the most powerful,
and that means a smaller statistical probability that
at least some people will do it in time to prevent the
extinction of all - in an era, that is, where
dangerous things can happen extremely fast.
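The same kind of back-of-the-envelope arithmetic as above illustrates this. The per-person probability here is an arbitrary assumption chosen only to show the shape of the curve:

    # Chance that at least one sufficiently empowered person intervenes in
    # time, as a function of how many people hold that power.
    # The per-person probability is an arbitrary illustrative assumption.

    p_single = 0.05  # assumed chance that any one empowered person both
                     # notices the danger and acts in time

    for k in (1, 10, 100, 1000):
        p_any = 1 - (1 - p_single) ** k
        print(k, round(p_any, 4))
    # 1     0.05
    # 10    0.4013
    # 100   0.9941
    # 1000  1.0

The more people hold enough power to intervene, the less the outcome depends on any single watcher noticing in time.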
> > > while the tree can
> > > and will branch unpredictably, a fundamental trend is toward
> > > increasing information (on both sides.)
> > >
> > > We can take heart from the observation that increasing
> > > convergence on principles "of what works" supports increasing
> > > divergence of self-expression "of what may work." If we recognize
> > > this and promote growth in terms of our evolving values via our
> > > evolving understanding of principles of "what works", amplified by
> > > our technologies, then we can hope to stay in the race, even as the
> > > race itself evolves. If we would attempt in some way to devise a
> > > solution preserving our present values, then the race, speeding up
> > > exponentially, would soon pass us by.
But we ARE the race! So, by choosing values, we choose
where the race goes. Of course, this is true only if
everybody is forced to go the same way. That's where
ubiquitous surveillance comes in. Not only an
Orwellian solution, but also David Brin's
everybody-watches-everybody solution seems to offer a
way for mankind to force every one of its members to
go the same way. Once that kind of system is
established, we can collectively slow down development
to a moderate speed that allows us to always be able
to prevent each other from doing something that
threatens the existence of us all. Singularity would
be reached with that system too, only much later. But
there is plenty of time.
This could be compared to playing a game of chess
where you are allowed to take months to come up with
each move. That kind of playing speed means a smaller
risk of losing than a blitz game does. And not losing
is all we have to care about. Development happens by
itself anyway.
(You may be less eager to slow down the tempo of
mankind's development because you think that we as
individuals will not live for ever if singularity is
delayed so much that it doesn't happen within our
lifetime. But if future posthumans live for ever,
thanks to your saving mankind, they will sooner or
later happen to create an exact copy of you, whether
they get to know that someone like you ever existed or
just happen to create such a being at random (as their
[I don't know, something like
10^10^10^10^10^10^10^10^10^10^10]th experiment). So it
may be egoistically rational to sacrifice one's life
for mankind's survival by slowing down the dangerous
speed of development.)
> > >
> > > In short, yes, we can hope to stay in the race, but as the race
> > > evolves so must we.
> >
> > Nice word ambiguity! :-)
> >
> > I don't really understand whether you answer my
> > question though.
>
> Sorry that I appear ambiguous; I strive to be as clear and precise as
> possible, but not more so.
So I suppose you only meant the race as in
competition, not the human race. I thought your
unintentional word ambiguity opened up an interesting
interpretation, though.
> TheMan: "How should we travel, to get through the
> increasingly complex
> and dangerous territory that lies ahead?"
>
> Jef: "We should take an inventory of ourselves and
> our
> equipment, and
> apply ourselves to making a better map as we
> proceed."
>
> TheMan: "No, I mean specifically, how should we go?"
>
> Jef: "The best answer to that question is always on
> our best map."
>
> TheMan: "You are so irritatingly vague! Don't you
> even care about
> this journey?"
That was clarifying! Thanks! Yes, that's how I was
feeling.
So, your studying of the map has, so far, led you to
the conclusion that we need not worry about how any
ubiquitous surveillance in the near future should be
administered, or how the power to exercise it should
be distributed? What on the map has led you to that
conclusion? To me, that conclusion seems to rest on
the implied assumption that there is very little risk
that any individual or group using nanotechnology and
other powerful technologies will ever threaten the
survival of mankind - or that we can do nothing to
alter that probability, other than by promoting the
very force of nature that will create that threat in
the first place. What on the map makes you come to
that conclusion? Designing a solution to guard mankind
against [extinction due to evil or careless use of
technology during the coming decades] does not
necessarily exclude the process of continuing to learn
how to read the map.
I'm thinking that even if we have the wrong map - if,
for example, the common belief that we will die when
mankind is wiped out by misuse of technology is
mistaken, because we will live for ever thanks to the
infinite number of copies of us in the universe - we
still probably don't have anything to lose by acting
as if our personal survival depends on mankind's
survival. So even if our current map _may_ be wrong,
we may have nothing to lose and everything to gain by
assuming it's right.
> > Basically, I was wondering what is
> > the best way to minimize the existential threat from
> > technology, in terms of _what_ people should have the
> > right to watch _what_ people, and to what extent, and
> > how, and how it should be governed (if at all) etc.
>
> My thinking in this regard tends to align with the
> Proactionary Principle:
> <http://en.wikipedia.org/wiki/Proactionary_principle>
> but I realize you're looking for something more
> specific.
Let's have a look at the first maxim of the
Proactionary Principle:
"1. Freedom to innovate: Our freedom to innovate
technologically is valuable to humanity. The burden of
proof therefore belongs to those who propose
restrictive measures. All proposed measures should be
closely scrutinized. "
From the fact that our freedom to innovate
technologically is valuable to humanity, it does not
necessarily follow that the burden of proof belongs to
those who propose restrictive measures. There are many
things that are valuable to humanity, and I would say
freedom to innovate is not the most valuable of them.
I would say survival and well-being are more valuable
than freedom to innovate. Well-being is intrinsically
valuable, whereas freedom to innovate is only
instrumentally valuable (it is only valuable to the
extent that it contributes to or creates well-being).
And survival is a more necessary condition for our
well-being than freedom to innovate is. Therefore, to
the extent that our survival is threatened by people's
freedom to innovate, our survival should be put first.
Furthermore, "people's freedom to innovate" seems to
be erroneously thought to support only one side in the
discussion. An Orwellian Totalitarian World Government
(OTWG for short) would be one thing that could limit
that freedom. Another one would be the extinction of
mankind, something that may happen as a result of the
lack of an OTWG. If an OTWG is the only thing that can
prevent the extinction of mankind, it is unfair to
focus mainly on the fact that an _OTWG_ would limit
people's freedom to innovate - as if the extinction of
mankind would not! Obviously, it is conceivable that
mankind may survive even without an OTWG, but why take
the risk, since an OTWG would provide freedom to
innovate in the long run (after reaching singularity,
at the very least)?
You may alternatively replace OTWG above with an
extreme version of David Brin's suggestion, a society
where everybody watches everybody very closely.
Why should the burden of proof belong to those who
want to secure people's _long_ term freedom to
innovate (by protecting mankind from extinction by
controlling people a lot), rather than to those who
want to maximize people's _short_ term freedom to
innovate (by not controlling people so much)? Both
sides try to maximize people's freedom to innovate,
only on different time scales. So both sides could say
they simply obey the first and most important maxim of
the Proactionary Principle.
> I don't have a specific answer to "what people" should be able to
> watch "what people." Personally, I tend to like the idea of public
> cameras on the web watching all public areas. I think this will
> improve public safety dramatically,
If cameras only watch public areas, they won't prevent
crimes committed from people's homes with, for
example, remote-controlled nanorobots. They will be
pretty useless in preventing the extinction of
mankind. Sure, they will prevent crimes, but I don't
think increased surveillance is justified if the
objective is simply to decrease the number of crimes,
even if that can save a great many lives. I think
increased
surveillance is justified only if it protects mankind
from extinction, but in that case, on the other hand,
I think it is infinitely justified. What the objective
is makes all the difference. We need more
surveillance, but for the right reasons. We need less
of the surveillance that is now taking place for the
wrong reasons. Today's surveillance probably creates
more suffering (for example by scaring innocent people
with controversial political opinions to silence, and
thereby helping the wrong kinds of politicians keep
their power) than it decreases suffering (by reducing
crime). When surveillance becomes necessary for
mankind's survival, however, it will be, well,
necessary.
(Possibly, the surveillance of today, which I call
unjustified, may turn out to have been important as a
way of getting people used to the idea of being
watched all the time, before surveillance really
becomes important. On the other hand, it may also make
people less aware of the importance of watching the
watchers.)
> and that concerns about privacy will adapt,
Of course, people adapt to just about anything with
time. You can make a person adapt to being tortured.
That doesn't justify torture.
> and that additional unforeseen benefits
> will arise.
I would say any benefits are irrelevant as long as
they don't decrease the risk of extinction of mankind.
But maybe they will decrease that risk. Only then are
they relevant.
> > You might be better off handing over a lot of power to
> > your government, or you might not. That's the question
> > I want to discuss.
>
> Personally, I think "government" as we know it will collapse under the
> weight of its own inconsistencies,
What inconsistencies?
The trend I see is the opposite - governments gaining
more and more power as they are given ever greater
means to watch and control people. Governments may be
inefficient in proportion to the huge resources they
have at their disposal, but their resources - and
authority - are still so great that they remain far
more powerful than most people, organizations and
companies. Why would the tech race change any of that?
Governments even lead the tech race, don't they? At
least the US government does.
If the major governments in the world collapse, it
will be because mankind collapses. I can see no other
plausible cause.
When terrorists start using nanoweapons to kill
millions of people in seconds, whom will the people
ask for help? Their governments, of course. If
governments' bureaucracy turns out to be too slow to
stand a chance against mankind-threatening terrorism,
governments will say to their people: "Terrorists have
now become so much more agile than us, because of
today's insane acceleration in technology development,
and our heavy bureaucracy is too slow in this
situation, so we have to skip all the bureaucracy,
take fast action without wasting any time on anchoring
it democratically, and profoundly change the
constitution so that we can do whatever is necessary
the moment it becomes necessary. We assume you accept
that we go 'totalitarian' in this exceptional
situation. The alternative is to let the terrorists
commit mass murder. Which do you choose?" What do you
think people will choose in that situation?
Governments all over the world will also be
increasingly forced to cooperate with each other in
order to combat nanoterrorism and the like, so a world
government will soon be installed. And it will use the
above arguments in order to be allowed to go
totalitarian.
Once a totalitarian world government is installed, I
think it will be very hard to remove. (Which may be a
good thing or not.)
Governments will also convince their people that they
need to put huge amounts of tax money into
anti-nanoweapon research to combat nanoterrorism. That
way, they will continue being the leaders of the tech
race.
I would be very interested to hear why you think
governments will collapse.
Oh, you mean when singularity happens? That's
different. I guess there is no way to predict what
will happen then. But I'm talking about the nearer
future. Personally, I think all the dangers will
disappear when we reach singularity, because then we
will all become one - an individual so much wiser than
us that it would be foolish of anyone of us today to
predict anything about its risk of going extinct. I
think the existential risks are going to be worst a
couple of years before singularity (or even right
before it), and increasingly bad from now up until
then.
> I favor a system possibly describable as anarchy
> within an increasingly cooperative framework.
That sounds compatible with David Brin's suggestion.