[ExI] Why socialism and environmentalism (and a lot more) will be impossible
rafal.smigrodzki at gmail.com
Wed May 28 06:46:02 UTC 2014
It's interesting to speculate on the evolutionary paths that our successors
later this century will take as they churn in the computational substrate.
Yet, I feel that the truth of the future is largely inaccessible to us,
hidden behind many layers of interactions between computational features of
the world that will be formed by minds rapidly evolving away from humanity.
Just think: if zero-knowledge proofs can be used to implement efficient
minds, the future might belong to reciprocally opaque entities, like poker
players but even more so. But then, maybe completely reciprocally
transparent minds might have an advantage by being able to justifiably
trust each other and thus collaborate better. But then, a transparent mind
might be more susceptible to viral attacks, so maybe you need opacity, or
maybe you could make do with firewalls, whitelists, and remote restore in
case of infection. But maybe all you need is an opaque manager core and
single-use minds copied from a library and erased after they do their job.
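That last arrangement — an opaque manager spawning disposable workers from
a pristine library copy — can be sketched in a few lines of code. This is
purely illustrative; the class names (WorkerMind, ManagerCore) and the use
of deepcopy as the "copy from a library" step are my own assumptions, not
anything from an actual AI architecture:

```python
# Hypothetical sketch of the "opaque manager + single-use minds" pattern:
# the manager keeps a pristine template it never runs directly, clones a
# fresh worker per task, and discards the clone afterwards, so any
# corruption the worker picks up dies with it.
import copy

class WorkerMind:
    """A single-use mind copied from a library template."""
    def __init__(self):
        self.memory = []          # scratch state, possibly corruptible

    def solve(self, task):
        self.memory.append(task)  # side effects stay inside this copy
        return f"result({task})"

class ManagerCore:
    """Opaque core: only results cross its boundary, never worker state."""
    def __init__(self, template):
        self._template = template  # library copy, never executed directly

    def run(self, task):
        worker = copy.deepcopy(self._template)  # fresh, uninfected instance
        result = worker.solve(task)
        del worker                 # "erase after use": state is discarded
        return result

manager = ManagerCore(WorkerMind())
print(manager.run("classify"))     # each task gets a clean worker
print(manager.run("plan"))         # the template never accumulates state
```

The point of the deepcopy is that the template itself never accumulates
memory across tasks, so infection of one worker cannot propagate to the
next.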
One could go on fantasizing about the shape of minds to come for a long
time, but none of us, not even AI researchers, has enough knowledge to make
any but the most trivial predictions. However, since the design space of
minds in general is much larger than the tiny area explored by evolution in
the making of humans, I am reasonably sure that the minds spawned by
evolution in the computational substrate, under very different pressures,
will be just too weird to share the human proclivities that produce our -isms.
And there is nothing anybody can do about it. Technology does what
technology wants. I think I have lately been becoming a techno-fatalist,
although not in a sad or depressed way. The future will be very cool,
with or without beings recognizably human.