[ExI] Axiological Risk (was ? Risk:)

Anders Sandberg anders at aleph.se
Sat Mar 10 00:38:45 UTC 2012


On 09/03/2012 22:14, The Avantguardian wrote:
> Axiological risk is an interesting point to raise. I can see why a 
> humanist would try using such an argument against life-extension and 
> transhuman tech in general. But I think that the weakness of this 
> argument lies in the false assumptions of the universality and 
> consistency of human values, be they ethical or aesthetic, across 
> space and time. Because axiology is based on a relative value set, 
> the humanist must assume their particular brand of "western mainstream 
> culture" is the human norm and then cloak themselves in armor of 
> "human values". 

Axiology doesn't have to be based on a relative value set. There are 
certainly people around who think there might be a One True Eternal 
Value System that we simply do not yet know, for instance. They might 
say that we cannot evaluate whether there is axiological risk in many 
future scenarios, but that scenarios where humans stop being 
experiencers or evaluators of value (for example, by devolving) would 
represent axiological hazards no matter what the OTEVS turns out to be.

As always, there is plenty of philosophy on the topic.
http://plato.stanford.edu/entries/value-theory/
Whether one finds it trivial hairsplitting or profound insight is a 
very personal matter.

> While I may tip my hat to ethical and aesthetic axiology, I agree with 
> Stefano that in the long term, the only value that truly matters is 
> Darwinian survival value.

But is that good in itself, good for something, or just a neutral state 
of affairs? If the latter, we could just as well choose not to survive; 
then not just axiological xrisks but all xrisks would cease to matter.


-- 
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University
