[ExI] Story part 2 yet again.
Alan Grimes
ALONZOTG at verizon.net
Tue Jul 30 20:07:46 UTC 2013
Don't take this the wrong way; I'm trying to give you some constructive
feedback.
Adrian Tymes wrote:
> On Sun, Jul 21, 2013 at 11:11 PM, Alan Grimes <ALONZOTG at verizon.net> wrote:
>
> # How will VR environments be provisioned? How much work will be
> required of the user to create a VR? Where would the terminally
> incompetent get their VRs?
>
> That last forms the baseline. So long as there are VRs for even the
> terminally incompetent, the more competent can afford to be lazy -
> and, well, human nature tends toward laziness here. Someone goes to
> the effort of making a good, or at least acceptable, standard VR
> interface that anyone can use, and many people use it.
Has anyone attempted that while using said system as their only mode of
existence?
> Of course, users can put in as much work as they want. Note the amount
> of effort that goes into building Minecraft worlds - even those that
> are never seen by more than a few. (Although, fame to those who both
> make good product and share it widely; some small fortune to those who
> figure out how to turn a profit without turning away most of their
> audience.)
Yeah, those are impressive. But notice: they're all made from one-meter
cubes...
>
> # How large of a VR would a user be allowed to build?
>
> That strongly depends on who's setting the limits - and why. It may
> well be that there are no limits, beyond how much hardware a user can
> gather; that would be the case if today's laws were applied.
I don't think any moral person is proposing perpetuating today's laws. =\
> # What limitations on creating sentient characters to populate the
> VR? (this is obviously deeply problematic on many fronts).
>
>
> Again, who's setting the limits, and what's their agenda? Again, if
> today's laws were applied, there would be no limits. Some may soon
> come into effect once this happens, depending on how the legislators
> come to learn of this...or it may be viewed as a modern form of
> slavery without the drawbacks, if the sentient characters can simply
> be programmed for loyalty and slave mentalities (or, more importantly,
> if the legislators believe this to be the case).
Part one of my story raised some of those issues.
> # What rights/limitations would a user have in a public VR?
>
> Depends entirely on who's paying for it, and their relationship to the
> user. Most likely it'd be akin to the rights/limitations people have
> on any public property, including a limitation against trashing the
> place (without special permit, which usually involves working for or
> with the government).
I am not sure what you mean by "paying for it". I can't imagine anything
akin to a conventional economy in a post-uploaded world because 99.9% of
the uploads will have nothing of value to trade and will therefore
starve to death if forced to participate in an economy.
> # Would the user be guaranteed unalienable rights to exist and
> communicate in public VRs?
>
> Depends on the local government, and whether they give similar rights
> in meatspace.
Well, all local governments would have been obliterated along with their
localities after Kurzweil's computronium shockwave annihilated them.
>
> # Would private VR spaces be considered a natural right?
>
> Probably not, any more than homes are considered natural rights.
> They're property, and it's a good thing if most people have one, but
> this is distinct from a right to have one. (Though it helps that
> making an eyesore out of one's private VR does not impact other
> peoples' private VRs.)
How about breathing? Remember, an upload cannot exist in any meaningful
sense without a VR environment. So denying an upload access to VR is
equivalent to denying it the right to exist.
> # What limitations would there be on how a user manifests himself
> in a public VR environment?
>
> Again, depends entirely on who's paying for it, and their relationship
> to the user. "Public VR" can be considered to be "VR that is owned
> and operated by the government, which gives most people certain access
> rights", similar to public roads today.
Well, the system I was going to write about was governed by a piece of
software called Protocol, which would be designed to permit the entire
upload-AI-VR mess to function at all but which, unintentionally,
introduced numerous, mostly insurmountable limitations on how one can
choose to exist. On top of that would be implemented a system called
Code, based loosely on the arguments of Lawrence Lessig. Basically, it
would implement the founder's idealized ethics as absolute laws. Most
of the things actually worth doing would require you to hack your way
around well-intentioned edicts built into Code. Then there would be an
explicit governing authority that would give the veneer of democracy
and oversight, but all the important decisions would be made by popular
participants in certain specific cuddle-piles.
> # What limitations would a user have on the type of avatar that
> could be attached to his emulated humanoid nervous system?
>
> Same answer.
Can you address the technological challenges in actually implementing
that? Every time I think about it I come to the conclusion that there
are more dragons there than in Skyrim. =P
> # Is there any alternative to the following mode of self
> modification beyond basic tuning parameters: --> You load a copy
> of yourself from a backup made a few moments ago, modify it,
> attempt to run it, if it seems to work, you then delete yourself.
>
> Yes. Many alternatives:
> * You modify your currently running copy on the fly, without backup,
> much like how self modification works today. (More dangerous? Yes.
> Convenient and therefore used widely anyway? Probably. Safe enough
> for small tweaks, so that "more dangerous" rarely applies in
> practice? Likely. Does away with the "there are briefly two yous"
> issue that some people might want to deal with? Yes. And "this is
> similar to how people have done it for a long time" is a compelling
> factor for many people. Of course, one can also copy a modification
> that someone else tested on someone else, thus trusting that the
> modification is probably safe for yourself too.)
Yeah, you can make certain shallow modifications that way, certainly
parameter tuning, etc... But what about the assumptions built into the
simulation software? What about massive architectural overhauls to the
misshapen, fluoride-rotted lump of neurons you had when you were
scanned? What about being converted to operate a non-humanoid avatar?
etc, etc, etc...
> * You run several such modifications at once.
Not sure what you mean.
> * You don't delete yourself, essentially forking for each modification.
Why would you want to fork in that manner, ever?
> * You run altered self in a simplified, sped up sim (sped up because
> of the simplifications) and thereby evaluate long-term progress quickly.
Wouldn't the brain scan itself dominate all simulation time, so that it
couldn't be sped up any further than normal speed? What would doing
that really tell you?
> Are those enough?
No, you aren't even beginning to address the technological challenges
implied by what you refer to so dismissively. =(
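To be concrete about the one case that IS trivial: the
copy-modify-test-delete loop is easy to sketch for shallow parameter
tuning. Here's a minimal Python sketch, with everything hypothetical:
the mind reduced to a plain dict, and a made-up validate() standing in
for whatever test harness a real upload runtime would need.

import copy

def self_modify(current_mind, patch, validate):
    # Snapshot the running mind: briefly there are "two yous".
    candidate = copy.deepcopy(current_mind)
    candidate.update(patch)        # apply the modification to the copy
    if validate(candidate):        # trial-run the candidate
        return candidate           # original "deleted" by replacement
    return current_mind            # roll back the failed candidate

# Toy usage: accept a sane tweak, reject an insane one.
mind = {"reflex_gain": 1.0, "avatar": "humanoid"}
ok = lambda m: 0.5 <= m["reflex_gain"] <= 2.0
mind = self_modify(mind, {"reflex_gain": 1.4}, ok)  # accepted
mind = self_modify(mind, {"reflex_gain": 9.9}, ok)  # rejected, 1.4 kept
print(mind)  # {'reflex_gain': 1.4, 'avatar': 'humanoid'}

The whole problem hides in validate(). A range check suffices for
parameter tweaks; for an architectural overhaul of the scanned brain
itself, nobody has said what that function would even look like.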
> *** The current dominant theme, that of a heliocentric cloud of
> computronium bricks, seems to imply a central authority that, at
> the very least, dictates communications protocols and orbital
> configurations.
>
> Unless the protocols emerge by consensus, for lack of said authority
> (much like how "international law" is not "what the single superpower
> - USA - wants", but "what enough of the major countries of the world
> agree on"), and orbital configurations likewise (though likely
> recognizing orbitals already claimed in practice).
I can't imagine that this kind of scenario would ever develop naturally
(because nobody would want to do it). It would either not happen or it
would be imposed by some agency.
> # What rights would the least privileged user have to access base reality?
>
>
> The least privileged users might access base reality and nothing
> else. This is a common plot: the elites turn their attention to
> spaces only they are aware of, and ignore the portion of reality that
> is the entire reality for commoners. VR vs. base reality is one
> expression of this, as is a medieval tale of peasants whose only
> exposure to war is when knights come through, demanding food and
> shelter, until the lords on both sides - who had only thought to
> defeat one another - suddenly have their "civilized" war interrupted
> by a peasant revolt.
Well, if you had a right to base reality then you could quickly
bootstrap that up to a right not to be uploaded... Which isn't what
happens in this story.
> *** Assume that the overwhelming majority of the population was
> force-uploaded and re-answer the previous question.
>
>
> It changes depending on two things:
>
> * What noble aspirations the uploaders had when doing the
> force-uploading and setting things up.
>
> * What that grinds down to, in day-to-day practice after a
> sufficiently long period of time.
>
> Generally, why do the uploaders even care to run most people?
Mostly their own perverted sense of ethics.
> Access to base reality likely costs at least minimal resources -
> whose, and is this a large enough amount that anyone cares?
> # Are there any problems with the following scenario: You detect a
> problem that will inevitably cause a cascade failure across the
> entire network but since you are on the ignore list of everyone
> connected with the central authority you can't even report your
> bug. The people in the central authority have been running human
> emulated brain patterns for subjective thousands of years and have
> become senile (due to the exhaustion of potential synapses between
> their neurons) and complacent, and refuse to even acknowledge the
> possibility of such a problem... So basically everyone trapped in
> this nightmare went around the simulation with a count-down
> clock to when everything would collapse hovering over their heads,
> trying to warn the few who could actually do something about it,
> until the clock reached zero and everything just stopped.
>
> Yes. If many people are convinced of imminent demise, and the powers
> that be cannot be convinced to help - perhaps some people would be
> content to give passive warnings as you describe, but someone is going
> to try to take more action than that.
There is no James Bond solution to this. You are a simulation, not a
person. The slightest misstep will cause the operating system to revoke
your cycles and trigger a security scan of your "slab file"...
> Security crackers exist. Exploits and social engineering exist. If
> the people in the central authority are indeed senile, that makes it
> more likely that these things will develop, in time for those aware of
> the problem to attempt a fix. (Just last May, I was in a LARP about
> essentially this very scenario.)
They spent 1,500 subjective years preparing for the "great uplift",
making sure that the system, as a whole, was utterly unhackable.
Furthermore, a successful hack would greatly endanger the
thing/entity/blob of bits that carried out the hack for purely technical
reasons.
> The bigger problem is assuming that everyone will act in the same way
> - especially, in a way they have reason to believe will be utterly
> ineffective (even if it does require little effort). Over a large
> enough population with the same concern, a wide variety of solutions
> will be attempted.
What choice do they have, given that they are simulations utterly
dependent on the system that will inevitably kill them? You could
shackle someone, or you could cut off their arms.
--
NOTICE: NEW E-MAIL ADDRESS, SEE ABOVE
Powers are not rights.