[ExI] Warren Buffett is worried too and thinks Republicans are "asinine"

Omar Rahman rahmans at me.com
Wed Oct 23 09:42:05 UTC 2013


> 
> Date: Mon, 21 Oct 2013 23:17:23 +0100
> From: Anders Sandberg <anders at aleph.se>
> To: extropy-chat at lists.extropy.org
> Subject: Re: [ExI] Warren Buffett is worried too and thinks
> 	Republicans are "asinine"
> 
> On 21/10/2013 10:18, Omar Rahman wrote:
>> I put to you list members that: the crazed billionaires backing the 
>> Brethren of the Koolaid are in fact far more extropian than us here on 
>> this list. Sitting on top of their mountains of money they can see 
>> further, just as those who stand on the shoulders of giants can see. 
>> They can see the wave robotisation that will drive many jobs out of 
>> the hands of humans. They are the primary beneficiaries of this. It 
>> isn't an academic discussion for them it's a business plan. Anders and 
>> others recently posted information about jobs that will/could be soon 
>> computerised or robotised; egotistical crazed billionaire was not on 
>> any list that I saw. They are in practical terms (far?) closer to the 
>> singularity than us.
> 
> In a sense they are already there: they can pay, and conglomerates of 
> minds will try to solve their problems for them. Conglomerates that are 
> beyond individual human intelligence.
> 
> Being rich in a capitalist economy is a useful state, since it means 
> that you can earn a living just by existing and having certain 
> possessions. In fact, it might be the *only* stable state in 
> sufficiently AI-enriched economies. A socialist would of course try to 
> bring everybody into this state through joint ownership of the means of 
> production. Anarchists hope that having a non-money economy will fix 
> things (which is an interesting claim - I am not entirely convinced 
> mutualist societies are stable in the face of AI).

Your '*only*' makes this an interesting and strong claim. It is true by definition that you won't even be a ('normal' or perhaps 'equal') member of a sufficiently AI-enriched economy without controlling some of these devices. Basically, you won't belong to the 'new' species without an appropriate 'AI-appendage'. 

The heart of the matter is the '*only* stable state' claim. Oddly, for this case I would like a definition of stability that does not reference time, because we do not know on what time scale post-singularity entities will operate. So, what is stability without reference to some specific interval of time? I would propose a notion of connectedness between our internal states and the external universe. Take the example of a lover: a post-singularity entity might conceivably love another for 100 microseconds or 100 megayears, but its stability would be measured by the presence, absence, or, most likely, the amount of love in that entity.

Anarchism is interesting, as it might conceivably 'dissolve' post-singularity entities. I say dissolve because a post-singularity entity (I feel an acronym coming on... PSE perhaps?) could conceivably spawn a self-aware process to manage some situation. In an anarchist entity, this process might refuse to re-merge with the main process and proceed on its own with a share of the resources. What if there are certain problems that are best (or only) managed with self-aware processes, and they consistently refuse to re-merge? Such an entity might 'dissolve' into a cloud of self-aware processes or reach some sort of 'intelligence plateau'.

Aside from that, a 'non-money' economy is for me a 'non-trust-marker' economy, and it will indeed result in anarchy. While trust came before 'trust-markers', and some sort of society without them is possible, it would be deeply personal and would probably end up dividing the universe into camps of 'self', 'family', 'friends', 'lovers', 'enemies', 'food', etc. Our emotions can lead us to wonderful things, but they can also lead us to 'Stockholm syndrome'.

I think we are always going to have 'trust markers' and we will always have people trying to 'trick' or 'play' the system to get trust they don't deserve.

For me the capitalist vs socialist debate can be rephrased in terms of 'trust'.

Capitalists: we should trust those the most who have the most trust symbols. Seems obvious.

Socialists: all people deserve a fair chance to earn trust. Seems obvious.

A competitive measure between the two systems would be which system has the most trust in it. (To anyone at this point who wishes to 'get specific' with 'real world' examples that are probably based on the Cold War... please don't.)

I fall on the side of socialism because on the capitalist side of things we reach a state change once a certain level of wealth is achieved. The rich really are different: they are magnified, extended, and assisted. They have 'private armies', they become 'too big to fail', etc. At some point, when a person or entity is comparatively 'too rich', they are above the law. At that point the opposite of trust occurs: I fear them, just as I would fear standing next to a bull elephant on the African savannah, due to its immense comparative size, or next to a venomous snake, due to its 'special power'.

This is why I fear unfettered capitalism and this is why every government in the world regulates business.

To be fair, the fear of socialism is also easily explained: assuming an equal 'trust' distribution (hahahahhahahaha!), 50% of the population would feel that they were harmed, made 'less trusted' just because they had been 'more trusted', and conversely 50% would be made 'more trusted' just because they had been 'less trusted'. This seems patently unfair, the 'wealth redistribution' nightmare!

This is why many fear unfettered socialism and this is why every government in the world tries to make a 'fair' tax system.

This is my long-winded way of saying that re:

> I am not entirely convinced 
> mutualist societies are stable in the face of AI).

Any 'real' AI will go beyond the 'Turing test' level and become something akin to a large corporation or a country, and will be some sort of 'mutualist society' by definition.

> 
>> Elsewhere I've said on his list that corporations and countries are 
>> like huge mostly analogy AIs. A billionaire or dictator  who 
>> respectively controls one of these corporations  or countries is the 
>> closest facsimile to a post singularity entity that we can see. Of 
>> course to them taxation, national governments, and international 
>> agreements are usually just impediments to their free action. Even the 
>> 'good' egotistical crazed billionaires, think Elon Musk (to be fair 
>> Elon doesn't come off as egotistical even when he makes some sweeping 
>> statement that some past approach or program is doomed to fail) , have 
>> a perspective that might not always line up with the 'little guy'.
> 
> There is a difference between going for the usual power/wealth/status 
> complex and planning for the radical long run. If you think something 
> like an AI/brain emulation singularity is likely you should make sure to 
> own part of it (and sponsor research to make it safe for you) - even if 
> that means fellow billionaires think you are crazy (a surprisingly large 
> number of them are pretty conventional people, it turns out). Same thing 
> for all other "weird" extremes we discuss here, whether positive or 
> negative.
> 
> I like Musk. He was very good at quickly getting to the core of 
> arguments through first principle physics/engineering thinking, and he 
> delivered some relevant xrisk warnings to 10 Downing St.
> 
> 

I am so jealous of Musk, as he goes around like the protagonist of some SF novel or of my every geeky daydream, that I don't know whether to cry over the unrealised dreams of my youth or scrape together the cash to buy one of his cars. 

> -- 
> Anders Sandberg,
> Future of Humanity Institute
> Oxford Martin School
> Faculty of Philosophy
> Oxford University

I apologise if we have dragged this thread too far away from the purportedly (per a worried Warren Buffett) "asinine" nature of Republicans.


Regards,

Omar Rahman
