[ExI] How do we construct workable institutions and ethical behaviors?

Anders Sandberg anders at aleph.se
Tue Dec 6 10:53:11 UTC 2011


John Grigg wrote:
> Anders Sandberg wrote (from the "Wiki entry of Critical Theory of 
> Posthumanism" thread):
> But I do think there is a major challenge to update our enlightenment 
> views to work in a postmodern and indeed posthuman world. If free 
> will, rights, rationality, individuality and species membership are 
> less sharp than assumed by past thinkers, how do we construct workable 
> institutions and ethical behaviors?
> 
> Anders, I wish someone of your caliber would devote their career to 
> the creation of workable institutions and ethical behaviors.

Thanks for the vote of confidence, but this is definitely a many 
geniuses problem. Consider the thought and debate that went into 
inventing our enlightenment concept of morality. Also, it is one of 
those vexed problems where we cannot expect group problem solving to be 
effective, yet it is so interdisciplinary that there are likely no 
experts who could singlehandedly deal with it. So we have to make do 
with a global intellectual debate and hope our brains are sharp enough.


>   I am horrified at the degree of Wall Street & corporate shady 
> behavior (which often gets rewarded, rather than punished), along with 
> the financial seduction of political leaders.  And of course this is 
> not just an American problem.  I see the notion of a "social 
> contract"  rapidly falling apart.

A key problem right now is that our societies have badly bungled the 
principal-agent problem.
https://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem
The incentives have been set up in the wrong way for traders, banks, 
politicians, governments and perhaps everybody else. This in turn might 
be due to ideological blinders of various kinds:

- the assumption that a properly democratically appointed official will 
act rationally and zealously in the public interest (despite plenty of 
evidence about cognitive bias and public choice economics),
- the assumption that markets will self-organize well (despite massive 
government controls, the findings of behavioral economics, and 
social-signalling driven organisations),
- the assumption that decisionmakers even know what they are doing (at 
least in domains like technology, security and finance there is likely a 
case of policy theatre where decisionmakers simply cannot keep up and 
hence make irrelevant decisions), and
- the assumption that voters contribute useful information (ignoring 
that they too are incentive driven, rationally ignorant, and often do 
not apportion praise and blame anywhere close to the right targets).
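The misalignment can be made concrete with a toy sketch. The payoff 
numbers below are hypothetical, chosen purely for illustration: an agent 
rewarded on its own payoff picks a different action than the one its 
principal would want.

```python
# Toy principal-agent model. Each action maps to the value it produces
# for the principal and the payoff it gives the agent. The numbers are
# invented for illustration: think of a trader paid on short-term
# activity ("churn") versus a client better served by "prudent".
actions = {
    "prudent": {"principal": 10, "agent": 2},
    "churn":   {"principal": -5, "agent": 8},
}

def agent_choice(actions):
    """The agent maximizes its own payoff, not the principal's."""
    return max(actions, key=lambda a: actions[a]["agent"])

def principal_best(actions):
    """The action the principal would want chosen."""
    return max(actions, key=lambda a: actions[a]["principal"])

chosen = agent_choice(actions)
ideal = principal_best(actions)
print(chosen, ideal)  # -> churn prudent: the incentives diverge
```

The point of the sketch is structural, not numerical: as long as the 
agent's payoff column is not aligned with the principal's, no amount of 
moralizing about the agent changes the chosen action.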

Note that the above is not the usual indignant moral condemnation of 
somebody behaving badly: the problem is that the cybernetics of the 
system are malfunctioning, not that humans are human. It is easy to 
blame bankers or politicians, but they are not that important.

A scary possibility is that more complex societies might be even less 
able to deal with these problems than ours. Some technologies may fix 
some problems, but they can also add even more to the speed and 
complexity that destabilizes things. It could be that Didier Sornette is 
right about the singularity as an infinite sequence of ever faster stock 
market crashes and rallies converging to a single point...
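For reference, Sornette models such bubbles with a log-periodic power 
law; this is my paraphrase of his formulation, not a formula from this 
thread. The log-price before a crash is fitted as

```latex
\ln p(t) \approx A + B\,(t_c - t)^m
    \left[ 1 + C \cos\big(\omega \ln(t_c - t) + \phi\big) \right]
```

where $t_c$ is the critical time. The oscillations speed up and 
accumulate as $t \to t_c$, which is the sense in which ever faster 
crashes and rallies converge to a single point.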

But I do think there are practical things we can do to improve 
transparency, accountability and freedom of experimentation. That might 
at least help us figure things out a bit better.

-- 
Anders Sandberg,
Future of Humanity Institute 
Oxford Martin School 
Faculty of Philosophy 
Oxford University 



