[ExI] Story part 2 yet again.
Adrian Tymes
atymes at gmail.com
Wed Jul 31 03:03:59 UTC 2013
On Tue, Jul 30, 2013 at 1:07 PM, Alan Grimes <ALONZOTG at verizon.net> wrote:
> Don't take this the wrong way, I'm trying to give you some constructive
> feedback.
>
Odd - you seemed to be the one wanting feedback. Anyway...
> Adrian Tymes wrote:
>
>> On Sun, Jul 21, 2013 at 11:11 PM, Alan Grimes <ALONZOTG at verizon.net> wrote:
>>
>> # How will VR environments be provisioned? How much work will be
>> required of the user to create a VR? Where would the terminally
>> incompetent get their VRs?
>>
>
>> That last forms the baseline. So long as there are VRs for even the
>> terminally incompetent, the more competent can afford to be lazy - and,
>> well, human nature tends toward laziness here. Someone goes to the effort
>> of making a good, or at least acceptable, standard VR interface that anyone
>> can use, and many people use it.
>>
>
> Has anyone attempted that while using said system as their only mode of
> existence?
>
No, because that's a chicken-and-egg problem. How do you provision, or get
something provisioned for you, if you do not yet exist? How can anyone
know whether you are terminally incompetent in such a state?
"Where would the terminally incompetent get their VRs?" implies said
terminally incompetents have some mode of existence other than VR, in which
to seek VR.
>> Of course, users can put in as much work as they want. Note the amount of
>> effort that goes into building Minecraft worlds - even those that are never
>> seen by more than a few. (Although, fame to those who both make good
>> product and share it widely; some small fortune to those who figure out how
>> to turn a profit without turning away most of their audience.)
>>
>
> Yeah, those are impressive. But notice: they're all made from one-meter
> cubes...
>
Yep. Much more fame potential to those who work with more refined
materials.
>> # How large of a VR would a user be allowed to build?
>>
>> That strongly depends on who's setting the limits - and why. It may
>> well be that there are no limits, beyond how much hardware a user can
>> gather; that would be the case if today's laws were applied.
>>
>
> I don't think any moral person is proposing perpetuating today's laws. =\
>
Define "moral person". I believe the majority of people in existence today
would, by default, seek to perpetuate today's laws - possibly with minor
modifications, but close enough that you and I would call them essentially
today's laws. So your statement only holds true if the majority of people
in existence today are not "moral people" - which limits the utility of
that declaration.
While we might hope otherwise, and even work to prevent it, the fact is,
those who will be in charge will not necessarily have morals you and I
might agree with.
>> # What rights/limitations would a user have in a public VR?
>>
>
>> Depends entirely on who's paying for it, and their relationship to the
>> user. Most likely it'd be akin to the rights/limitations people have on
>> any public property, including a limitation against trashing the place
>> (without special permit, which usually involves working for or with the
>> government).
>>
>
> I am not sure what you mean by "paying for it". I can't imagine anything
> akin to a conventional economy in a post-uploaded world because 99.9% of
> the uploads will have nothing of value to trade and will therefore starve
> to death if forced to participate in an economy.
>
Even outside of a conventional economy - VR runs on computers which run on
energy. Where does this energy come from? Who handles protecting the
computers from external debris, and/or repairing the computers? Who
handles expanding the hardware base?
These people have access to base reality. These people have the power to
unplug the VR, or to make physical edits to the memory it runs in. Perhaps
these people pay for their power with labor, or perhaps they are paid by
others who govern. Who prevents these workers - who, again, are in base
reality - from destroying nodes that displease them?
>> # Would the user be guaranteed unalienable rights to exist and
>> communicate in public VRs?
>>
>
>> Depends on the local government, and whether they give similar rights in
>> meatspace.
>>
>
> Well, all local governments would have been obliterated along with their
> localities after Kurzweil's computronium shockwave annihilated them.
>
"Local" refers to the VRs, in this case.
>> # Would private VR spaces be considered a natural right?
>>
>
>> Probably not, any more than homes are considered natural rights. They're
>> property, and it's a good thing if most people have one, but this is
>> distinct from a right to have one. (Though it helps that making an eyesore
>> out of one's private VR does not impact other peoples' private VRs.)
>>
>
> How about breathing? Remember, an upload cannot exist in any meaningful
> sense without a VR environment. So denying an upload access to VR is
> equivalent to denying it the right to exist.
>
Actually, an upload could be shelled out to base reality: put in a robot
that (until someone - possibly its operator - tinkers with it) can only see
and interact with the unsimulated world.
>> # What limitations would a user have on the type of avatar that
>> could be attached to his emulated humanoid nervous system?
>>
>
>> Same answer.
>>
>
> Can you address the technological challenges in actually implementing
> that? Every time I think about it I come to the conclusion that there are
> more dragons there than in Skyrim. =P
>
You're talking uploads, right? Tweak their nervous system equivalents.
>> # Is there any alternative to the following mode of self
>> modification beyond basic tuning parameters: --> You load a copy
>> of yourself from a backup made a few moments ago, modify it,
>> attempt to run it, if it seems to work, you then delete yourself.
>>
>
>> Yes. Many alternatives:
>>
>
>> * You modify your currently running copy on the fly, without backup, much
>> like how self modification works today. (More dangerous? Yes. Convenient
>> and therefore used widely anyway? Probably. Safe enough for small tweaks,
>> so that "more dangerous" rarely applies in practice? Likely. Does away
>> with the "there are briefly two yous" issue that some people might want to
>> deal with? Yes. And "this is similar to how people have done it for a long
>> time" is a compelling factor for many people. Of course, one can also copy
>> a modification that someone else tested on someone else, thus trusting that
>> the modification is probably safe for yourself too.)
>>
>
> Yeah, you can make certain shallow modifications that way, certainly
> parameter tuning, etc... But what about the assumptions built into the
> simulation software? What about massive architectural overhauls to the
> misshapen, fluoride-rotted lump of neurons you had when you were scanned?
> What about being converted to operate a non-humanoid avatar? etc, etc,
> etc...
>
The first one is not "self" modification, in the scenario you're setting up.
The second can, in theory, be done the same way. No matter how unwise or
dangerous or suicidal it may seem, it can be attempted.
The third only matters if the simulation software has limits against that
sort of thing...which, if it's simulating down to the level you suggest, it
probably does not.
>> * You run several such modifications at once.
>>
>
> Not sure what you mean.
>
Test several modifications at the same time. Only give extended runtime to
the one that works best. (You asked for alternatives to testing just one
modification at a time.)
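To make that concrete, here is a toy sketch in Python. Everything in it is
invented for illustration - the upload is a plain dict, and the patch and
scoring functions are stand-ins for whatever a real modification and sanity
check would look like:

import copy

def best_modification(self_image, patches, evaluate):
    # Fork one copy per candidate patch; the original keeps running
    # untouched. Score each fork and promote only the best one to
    # continued runtime.
    candidates = []
    for patch in patches:
        fork = copy.deepcopy(self_image)
        patch(fork)
        candidates.append((evaluate(fork), fork))
    return max(candidates, key=lambda c: c[0])[1]

# Toy usage: two candidate tweaks, judged by a trivial scoring function.
me = {"reflexes": 1.0, "memory": 1.0}
winner = best_modification(
    me,
    patches=[lambda m: m.update(reflexes=m["reflexes"] * 1.2),
             lambda m: m.update(memory=m["memory"] * 1.5)],
    evaluate=lambda m: m["reflexes"] + m["memory"])
print(winner)  # the fork that scored highest

The interesting policy questions - how long a trial is long enough, and what
happens to the forks that lose - are exactly the ones your scenario raises.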
>> * You don't delete yourself, essentially forking for each modification.
>>
>
> Why would you want to fork in that manner, ever?
>
Ah - different sense of "you" here. You and I, the specific people, would
not. You and I are far, far, far from everyone who has ever existed.
"You", as in some other people, might want to do that. Heck, there are
plenty of people who would fork themselves with no modification; there are
tales going back to ancient times of people wishing for that exact thing.
(And in those fictions where they got their wish, not all of them regretted
it.)
>> * You run altered self in a simplified, sped up sim (sped up because of
>> the simplifications) and thereby evaluate long-term progress quickly.
>>
>
> Wouldn't the brain scan itself dominate all simulation time and hence
> couldn't be sped up any further than normal speeds? What would doing that
> really tell you?
>
That's why I said "simplified": not all aspects are in play. Now, whether
you trust that to be an accurate test is another thing...but your original
scenario involved brief testing in the full-up environment, and you can
never be certain that insanity or a sudden halt does not lurk just a little
longer than you tested, so it's not like that provided 100% test coverage
either.
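Back-of-envelope, with every number invented purely for illustration: the
same compute budget buys far more subjective test time at reduced fidelity.

budget = 10_000.0   # core-hours available for testing (assumed)
full_cost = 100.0   # core-hours per subjective hour at full fidelity (assumed)
simple_cost = 0.5   # core-hours per subjective hour, simplified sim (assumed)

print("full fidelity:", budget / full_cost, "subjective hours")
print("simplified:   ", budget / simple_cost, "subjective hours")
# 100 vs. 20,000 subjective hours: you trade accuracy per hour of testing
# for a chance at catching failures that only show up over the long run.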
Are those enough?
>>
>
> No, you aren't even beginning to address the technological challenges
> implied by what you refer to so dismissively. =(
>
Dude, seriously. You asked for "any" alternatives, I provided some.
Doesn't matter if they fully address things, they are alternatives.
And where do you get off thinking I'm being dismissive? Show some respect
to people who are helping you when you ask for help, or people are less
likely to help you in the future.
>> *** The current dominant theme, that of a heliocentric cloud of
>> computronium bricks seems to imply a central authority that, at
>> the very least, dictates communications protocols and orbital
>> configurations.
>>
>
>> Unless the protocols emerge by consensus, for lack of said authority
>> (much like how "international law" is not "what the single superpower -
>> USA - wants", but "what enough of the major countries of the world agree
>> on"), and orbital configurations likewise (though likely recognizing
>> orbitals already claimed in practice).
>>
>
> I can't imagine that this kind of scenario would ever develop naturally
> (because nobody would want to do it). It would either not happen or it
> would be imposed by some agency.
>
You just contradicted yourself. If some agency imposed it, then that
agency wanted to do it.
Further, you later say that those in charge are motivated by their ethics.
If one group is so motivated, why not multiple groups? Multiple,
independent groups that perhaps started locally and never did agree on
everything.
Much like how countries and legal systems were founded around the world,
and the world has not yet merged under a single government. (No matter how
much influence certain entities have over how large an area, for any
specific entity, there is a non-zero portion of Earth's surface over which
it has no jurisdiction, directly or otherwise.)
>> *** Assume that the overwhelming majority of the population was
>> force-uploaded and re-answer the previous question.
>>
>> It changes depending on two things:
>>
>> * What noble aspirations the uploaders had when doing the force-uploading
>> and setting things up.
>>
>> * What that grinds down to, in day-to-day practice after a sufficiently
>> long period of time.
>>
>> Generally, why do the uploaders even care to run most people?
>>
>
> Mostly their own perverted sense of ethics.
>
And what does that shake down to, day-to-day? How big of a monkeysphere do
they manage, and what happens to everyone who isn't in it?
> There is no James Bond solution to this. You are a simulation, not a
> person. The slightest misstep will cause the operating system to revoke
> your cycles and trigger a security scan of your "slab file"...
>
Assuming the operating system, or its operators, registers a misstep.
If you had a "perfect" system, the operators couldn't get senile in the
first place. (Or if they did, the operating system itself would flag their
failure to act on foreseeable harm as errant, and revoke their cycles -
and, of course, promote & train someone else to take over, the OS being
"perfect".)
Since that safeguard isn't present, by definition there is at least one way
to sneak harm through. And if there is one, there are probably more.
> Security crackers exist. Exploits and social engineering exist. If the
>> people in the central authority are indeed senile, that makes it more
>> likely that these things will develop, in time for those aware of the
>> problem to attempt a fix. (Just last May, I was in a LARP about
>> essentially this very scenario.)
>>
>
> They spent 1,500 subjective years preparing for the "great uplift", making
> sure that the system, as a whole, was utterly unhackable. Furthermore, a
> successful hack would greatly endanger the thing/entity/blob of bits that
> carried out the hack for purely technical reasons.
>
And then they got senile. That leads to messing up, no matter how crazy
prepared they used to be.
China, arguably, spent more time than that preparing to dominate the
world...and got their rears handed to them in World War II, as a capstone
to the preceding century.