[ExI] Digital identity
Anders Sandberg
anders at aleph.se
Sun Apr 28 17:29:10 UTC 2013
On 28/04/2013 16:32, BillK wrote:
> On Sun, Apr 28, 2013 at 12:33 PM, Anders Sandberg wrote:
>> I assume you are merely donning some fashionable cynicism here, rather than
>> actually talking political science or sociology?
>>
> No. I have no wish to discuss politics, but I thought that it was
> common knowledge that politicians are regarded as self-serving and
> corrupt. They work for themselves and the bankers and corporation
> lobbyists.
>
> If you dig a bit deeper the reality is far worse.
> How can you tell when a politician is lying?
> It's when they speak.
OK, fashionable cynicism it is. You have clearly not interacted with
governments and policymaking processes, so you do not know what actually
to be critical about. So you just repeat popular sayings.
>> If you believe influence scales linearly or superlinearly with available money,
>> then governments will clearly be able to control the rich guys.
> Heh! :) Tell that story to the unemployed and poor 99% of the population.
I prefer to investigate things that matter. Even if 99% of people do not
understand the power dynamics of social groups or how those dynamics
affect them, it is still a relevant question.
It is also worthwhile to consider corporations as agents with power and
wealth. Many corporations have significant amounts of both, yet their
actual social or political power does not appear to scale in proportion
to that wealth. For example, the software industry has historically been
very weak in lobbying effectiveness compared to other industries.
Overall, it is my considered opinion that power scales sublinearly with
wealth. This might be because power itself seems to scale sublinearly
when groups form, something that is even deliberately built into many
social systems. The reason is that sublinearity stabilizes the coalition
of the most powerful agents: otherwise the subset of its most powerful
members could pool their power and beat the rest. However, the zero-sum
nature of social status, and the fact that access to important people is
a finite resource (the president only gets 24 hours in a day even if he
wants to talk to everybody), mean that there are networking advantages
that let incumbents retain their power, or get things they like done, by
talking to people with executive power.
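To make that intuition concrete, here is a toy model in Python (my own
illustration; the wealth figures and the exponents are made-up
assumptions, nothing measured): a bloc that pools wealth w gets power
w^alpha.

# Toy model: does it pay for the strongest coalition members to pool
# their wealth and turn on the rest? Assumes bloc power = (total
# wealth)**alpha. All numbers and the power law itself are illustrative.

def bloc_power(wealths, alpha):
    """Power of a bloc that pools its members' wealth."""
    return sum(wealths) ** alpha

coalition = [100, 80, 60, 40, 20]      # members' wealth, strongest first
breakaway, rest = coalition[:2], coalition[2:]

for alpha in (0.5, 1.5):               # sublinear vs. superlinear scaling
    attackers = bloc_power(breakaway, alpha)
    # The rest defend either as one pooled bloc or as individuals,
    # whichever leaves them stronger.
    defenders = max(bloc_power(rest, alpha),
                    sum(bloc_power([w], alpha) for w in rest))
    verdict = "breakaway wins" if attackers > defenders else "coalition stable"
    print(f"alpha={alpha}: {attackers:.1f} vs {defenders:.1f} -> {verdict}")

With alpha below one the richest pair is weaker after pooling than the
remaining members, so defection does not pay; above one it always does.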
> I think you underestimate the appeal of having your own Eden. Some
> argue that could be the explanation for the Great Silence.
Yes, and it is a pretty crazy explanation, since it requires *everybody*
and *everything* *everywhere*, regardless of evolutionary and cultural
background, to decide on living in VR. If even a tiny fraction does not
go for Eden, the explanation breaks. It needs a further argument for why
that never happens.
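Just to show how fragile the requirement is, a back-of-the-envelope
calculation (the probabilities and population counts below are arbitrary
placeholders, not estimates): if each of N independent civilizations or
factions picks Eden with probability q, the chance of total silence is
q^N, which collapses for any q meaningfully below 1.

# How unanimous must the retreat into VR be for the Great Silence
# explanation to work? Pure illustration; q and N are arbitrary guesses.

for q in (0.9, 0.99, 0.999999):
    for n in (100, 10_000, 1_000_000):    # independent choosers
        print(f"q={q}, N={n}: P(total silence) = {q ** n:.3g}")

Even at q = 0.999999, a million independent choosers give total silence
a probability of only about 0.37.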
>> The speed depends on available computer power, and this depends on the
>> particular scenario. If the emulation technology arrives first, but
>> computers are not yet cheap/fast enough, uploads will be few and slow,
>> gradually picking up speed. If you have hardware overhang due to late
>> emulation technology, you get fast/many uploads quickly.
>
> Agreed it won't all happen overnight. The sequence is important.
> But the first successful upload / AGI can immediately set about
> improving the computer power and creating improved copies. How quick
> and how many uploads get done is arguable.
Improving computer power is an industry-wide process, requiring at least
something the size of Intel (104,000 employees). Just making a hundred
copies of an engineer is not going to cut it, especially since hardware
improvement requires a very diverse skillset (the people who know how to
crystallize semiconductors are separate from the people who understand
mask design and from the people who know processor architecture). And
real improvements will likely require researching entirely new
technologies. How long does it take to bring your skillset up to exotic
solid state physics, plasmonics or spintronics?
While hard takeoffs are a valid concern, I have always felt that people
underestimate the sheer amount of data and knowledge that has to be
learned in order to improve things. There is a reason for the division of
labor, and I suspect even a bright AGI architecture will be data limited.
Of course, once it has learned enough it might be able to copy itself or
transfer skills, so I am not arguing that it will necessarily be as slow
as humans. Even uploads might speed things up a lot through the
multiplication of human capital. It is just that we do not yet have a
good way of estimating how quick "quick" is in realtime.
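As a rough sketch of why I hesitate to give numbers (every parameter
below is a placeholder guess, not an estimate): suppose mastering a new
hardware field takes ten subjective years of study, plus some lab work
that has to run in realtime.

# Back-of-envelope: wall-clock time for an upload/AGI to master a new
# field, under two bottlenecks. All parameters are illustrative guesses.

SUBJECTIVE_YEARS = 10.0    # study time needed, in subjective years
EXPERIMENT_YEARS = 3.0     # lab work that must run in realtime

for speedup in (1, 100, 10_000):          # emulation speed vs. a human
    compute_limited = SUBJECTIVE_YEARS / speedup
    data_limited = SUBJECTIVE_YEARS / speedup + EXPERIMENT_YEARS
    print(f"speedup {speedup:>6}x: compute-limited {compute_limited:.4f} yr, "
          f"data-limited {data_limited:.2f} yr")

Once the realtime experiments dominate, extra speedup (or extra copies)
stops compressing the timeline; that is the sense in which even a fast
takeoff may be data limited.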
--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University