[extropy-chat] Neural Engineering
Robert J. Bradbury
bradbury at aeiveos.com
Fri Mar 19 05:10:24 UTC 2004
Well an interesting note on nanodot points out that there is now
a Journal of Neural Engineering.
(http://nanodot.org/articles/04/03/19/0141235.shtml)
Now, many of you may remember my rather infamous "let's nuke
Afghanistan" post of a couple of years ago. This was based
on my utilitarian/extropic argument that it would simply
be more efficient to eliminate al Qaeda (and secondary
non-involved individuals) than to be chasing them around
for who knows how long (allowing them to inflict more harm
at random, as appears to be pointed out by recent events in
Madrid and Baghdad). [Remember, I'm not letting you off the
hook here -- if you want to spend $100B on a slow, methodical,
minimal-casualty cleanup of radicals (such as suicide bombers
in Afghanistan or Iraq), then that is $100B you can't spend on the
treatment of AIDS or malaria in Africa, starvation around the world,
etc. I have yet to see anyone propose concrete return-on-investment
guidelines that would conform to the Extropian principles.]
But the above is an aside...
Now what I am interested in, from the "Neural Engineering"
perspective, is the possibility of extracting (non-destructively
and non-painfully) information that leads directly to the people
who are responsible for the direct (and most probably painful)
termination of innocent people.
I.e. if one has information that may relate to past or future
damage to society or to individuals within it, is it
reasonable to "rape" such information from one's mind?
Particularly if this can be done in a non-harmful way
(i.e. no torture, no long-term damage, no pain, etc.)?
Going back to my several-year-old strategy (which I will
admit was rather heavy-handed), it seems that if one can
use such methods (i.e. Neural Engineering) to extract
the information required to identify people who are
oriented towards killing innocent non-combatants to
advance a non-universal position, it would be a good thing.
Now, on the other hand, one might view this as a bad thing
(PBS has been running some programs on the history of the
struggle of the Irish against the British over the last
several weeks).
It seems there may be a fundamental underlying principle
at work here -- "the freedom to choose". And then whether
one can allow "the freedom to choose wrongly" (when you
know in your heart of hearts that a choice is completely
wrong). And *then* one gets into the sticky question of
when, if a choice is wrong, it impacts just oneself, or
whether there are downstream secondary effects (Napoleon
and Hitler come to mind) that are very very significant.
I.e. -- Just how much in negative secondary effects of
"freedom to choose" does one allow?
So, attempting to cross "neural engineering" with "freedom
to choose" -- when exactly is one entitled to "privacy
of one's thoughts"? (For example -- I could come up with
scenarios where Mike or Spike or Amara could represent
threats to my life -- leaving aside Arabs, or more
accurately radical Muslims in Afghanistan, Iraq or Palestine.)
The point being that almost anyone could present a threat
from various reasonably rational perspectives (from within
their own framework).
Under what circumstances is it reasonable for me to "rape"
their minds to determine they are not a significant threat to me?
Mind you, this is based on the assumption that it is not
painful, does not damage their body in any way, etc.
Perhaps there is a converse side to this -- i.e. those
individuals/companies who offer full disclosure ("go ahead,
read my mind") so that it is completely obvious that they
are dealing from an up-front perspective. Obviously the
last couple of years, from Enron to WorldCom to Shell, have
shown how messy things can get when one doesn't have all
the cards on the table.
Robert