[ExI] Why socialism and environmentalism (and a lot more) will be impossible

Omar Rahman rahmans at me.com
Wed May 28 16:27:51 UTC 2014


> Date: Wed, 28 May 2014 02:46:02 -0400
> From: Rafal Smigrodzki <rafal.smigrodzki at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: [ExI] Why socialism and environmentalism (and a lot more)
> 	will be	impossible
> Message-ID:
> 	<CAAc1gFjAw7iOakoDQvkYF-9kQo-YKwkhyo-RTygd57CLPrRDFA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> It's interesting to speculate on the evolutionary paths that our successors
> later this century will take as they churn in the computational substrate.
> Yet, I feel that the truth of the future is largely inaccessible to us,
> hidden behind many layers of interactions between computational features of
> the world that will be formed by minds rapidly evolving away from humanity.

Rafal,

I agree with much of what you say. And if these future minds exist many layers deep in some sort of virtual world, I also agree that it is basically impossible to predict what they will be like. In such circumstances they will indeed evolve rapidly away from humanity. Some argue that we already exist in a simulated environment, but theoretical physicists and cosmologists have yet to present a testable consensus view that I, as a non-physicist, can fully comprehend or at least comfortably adopt into my worldview.

Why am I going on about multiple realities or nested layers of reality in simulations? Because, as far as we know, we have one reality and a speed limit of 'c'. The result of this is that we will all be sharing the same environment.

> 
> Just think - if zero knowledge proofs can be used to implement efficient
> minds, the future might belong to reciprocally opaque entities, like poker
> players but even more so. But then, maybe completely reciprocally
> transparent minds might have an advantage by being able to justifiably
> trust each other and thus collaborate better. But then, a transparent mind
> might be more susceptible to viral attacks, so maybe you need opacity but
> maybe you could do with firewalls, whitelists, and remote restore in case
> of infection. But maybe all you need is an opaque manager core and
> single-use minds copied from a library and erased after they do their
> job....
> 

You have speculated along the same lines as many have done. I speculated in a similar manner in my recent post. Process termination will become a thorny issue if the processes develop personalities, worldviews, ethics, etc. The recent thread 'death follows European contact' inspired me to think a bit more about what our 'first contact' with AIs will be like.

> One could go on fantasizing about the shape of minds to come for a long
> time but none of us, not even AI researchers, have enough knowledge to make
> any but the most trivial predictions. However, since the design space of
> minds in general is much larger than the tiny area explored by evolution in
> the making of humans, I am reasonably sure the minds spawned by evolution
> in the computational substrate, under much different pressures, will be
> just too weird to have such human proclivities that produce our -isms.
> 

You're right about some, maybe even many, of our -isms. These entities almost certainly won't have a sex in our sense of the word, and will reproduce through something probably resembling an engineering design process. So sexism would be gone for them. Hooray: if you eliminate the sexes, you can finally eliminate sexism, or at least 'for them'.

How about the environment? Unless this universe collapses there will always be an environment. Environmentalism can't go away.

How about socialism? Unless there is only one entity, yourself, that you are aware of there will always be some sort of society. Socialism can't go away.

I am trying to speak of a socialism which is connected to the environment, because the environment is a REAL and measurable thing. I am speaking against capitalism because it is an UNREAL, if measurable, thing. Money is a very useful tool for facilitating exchanges and nothing more.

Money is a trust symbol. Capitalism is the accumulation of trust symbols for the sake of accumulating trust symbols. Once someone accumulates too many trust symbols there are not enough trust symbols to go around and the system collapses. I think basic human psychology drives the crash cycle in capitalism.
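A toy simulation can illustrate the mechanism I mean, under assumptions I am inventing here purely for illustration: a fixed supply of tokens, random one-token trades, and a single 'hoarder' who accepts payment but never spends. As the tokens pile up with the hoarder, the number of trades the rest of the system can complete collapses.

```python
import random

def simulate(rounds=10000, agents=20, tokens_each=10, hoarder=0, seed=1):
    """Toy exchange economy. Each round a random 'buyer' pays one token
    to a random 'seller'. The hoarder (agent 0) receives but never pays.
    Returns the trade volume in the first and last 10% of rounds."""
    random.seed(seed)
    wealth = [tokens_each] * agents
    volume = []
    for _ in range(rounds):
        buyer, seller = random.sample(range(agents), 2)
        if buyer == hoarder or wealth[buyer] == 0:
            volume.append(0)   # trade fails: buyer can't (or won't) pay
            continue
        wealth[buyer] -= 1
        wealth[seller] += 1
        volume.append(1)
    tenth = rounds // 10
    return sum(volume[:tenth]), sum(volume[-tenth:])

early, late = simulate()
# Trade volume falls sharply once most tokens sit with the hoarder.
```

This is a cartoon, not economics; its only point is that when too many trust symbols sit in one place, the remaining symbols cannot circulate fast enough to keep exchanges going.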

If we manage to move into a post-scarcity economy with entities that have long transaction histories we should be able to move away from the 'prisoner's dilemma' and 'there's a sucker born every minute' capitalist scenario we are in.
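The transaction-history point can be made concrete with the standard iterated prisoner's dilemma (the strategy names and payoff values below are the usual textbook ones, not anything from this thread): when entities remember each other's past behaviour, conditional cooperation outperforms pure defection.

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my score.
# C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strat_a, strat_b, rounds=100):
    """Iterated game: each strategy sees the opponent's full
    transaction history before choosing its move."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)   # decide based on what the other has done
        b = strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(opp_history):
    # Cooperate first, then mirror the opponent's last move.
    return 'C' if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return 'D'

coop, _ = play(tit_for_tat, tit_for_tat)         # 300 each: sustained cooperation
sucker, exploit = play(tit_for_tat, always_defect)  # defector nets only 104
```

Two history-keeping cooperators earn 300 each over 100 rounds, while a pure defector exploiting one of them manages only 104: with long memories, the sucker can only be fleeced once.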

> And there is nothing anybody can do about it. Technology does what
> technology wants. I think I have been lately becoming a techno-fatalist,
> although not in a sad or depressed manner. The future will be very cool,
> with or without beings recognizably human.
> 
> Rafal

You, as a researcher, Rafal, are creating the technology. You can create many things, and I'm sure you could imagine enough research projects to fill up a 'normal' human lifetime. There is nothing inevitable about any particular technology.

I agree the future will be very cool, especially if we can recognise more things as being 'human'.

Best regards,

Omar Rahman
