[ExI] Who is covering corruption in AI?

Tomasz Rola rtomek at ceti.pl
Thu Nov 1 17:00:23 UTC 2012


On Thu, 1 Nov 2012, Kevin G Haskell wrote:

> <This isn't about friendship.  It is about the future of humanity and
> beyond.  If someone has
> stolen money and integrity from the efforts of our small SIAI community,
> then
> we need to know the exact specifics of why, why we should forgive, and
> then how we can make sure it doesn't  happen again.
> 
> If there is something wrong with what is going on with the fundraising of
> the  SIAI, then it needs
> to be known, repaired like a machine, and then ensured that it never
> happens again. If somebody has betrayed our machine goals, then who is it,
> and how do we make sure they no longer pose a threat?

Eh? I don't think this is possible. Whenever there is an effort or 
initiative by some group of humans, and there is an opportunity that could 
benefit one human in the short term while screwing the rest of the group 
(and the whole effort) in the long term, you can safely bet someone will 
take it if you wait long enough. Usually the wait is not very long 
(though I think it is longer in smaller groups).

If you would like to see how far such screwing can go, look to history. 
There were fights over the throne while the Byzantine Empire was sinking 
deeper and deeper (AFAIK, the infighting continued until the very fall). 
The Soviet Union is another great example. Nowadays, tinkering with 
privacy, if successful, will almost certainly benefit a few, but in the 
long term it will convert the rest of humanity into a herd of bipedal 
cattle, which in effect will screw everybody (including the 
beneficiaries, or rather their great-grandchildren). I don't mean 
anything like the vengeance of God; rather, I mean that the only constant 
in the Universe is change, and cattle and dinosaurs are not very good at 
adapting to change.

So, history can teach us (or at least me) that humans screw themselves. 
Of course, on the surface it looks like they were actually acting in 
their best interest. This probably means they were doing their best 
according to game theory: whenever someone acts in such a way, it is 
exactly the best thing they could do in that exact situation (assuming 
they had a good overview of it).
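The game-theoretic point above can be sketched as a prisoner's dilemma. The payoff numbers below are illustrative assumptions, not anything from the post; they just show why defecting is individually "rational" even though everyone defecting is worse for the group:

```python
# Prisoner's dilemma with illustrative (assumed) payoffs.
# payoff[(my_move, their_move)] = my payoff; "C" = cooperate, "D" = defect.
payoff = {
    ("C", "C"): 3,  # both cooperate: good for the group
    ("C", "D"): 0,  # I cooperate, they defect: I get screwed
    ("D", "C"): 5,  # I defect while they cooperate: best short-term deal for me
    ("D", "D"): 1,  # both defect: bad for everybody
}

def best_response(their_move):
    """Pick the move that maximizes my own payoff against a fixed opponent move."""
    return max("CD", key=lambda my_move: payoff[(my_move, their_move)])

# Defection is the dominant strategy: it is the best reply to anything...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...yet mutual defection is worse for the group than mutual cooperation.
def group_total(a, b):
    return payoff[(a, b)] + payoff[(b, a)]

print(group_total("C", "C"), group_total("D", "D"))  # 6 2
```

So each player is "doing their best" locally, and the group outcome is still the worst one available.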

Funny, isn't it?

Now, if only they could see themselves as part of a bigger entity, like a 
world, a civilisation, etc. That would change the whole game-theoretic 
calculus in favour of the group and its efforts. But I have mixed 
feelings about proposing such changes. They smell too much of soviets and 
hive minds. And if they are the future, it is not going to last long. 
Ants don't build rockets; they don't do anything new at all. The Soviets 
fell apart after 70 years of continuous success (and after killing 
millions of their best folk, who would be much needed now - if only they 
still had them, or their students). A global soviet (globsoviet?) would 
last much longer (because of no competition), but the longer it lasted, 
the more frakked humanity would be after its fall (petrified structures 
versus change means their fall is inevitable; it can be mitigated for a 
while with human sacrifices, but that's all).
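The "seeing themselves as part of a bigger entity" idea can also be sketched in the same toy game. Here the identification weight `w` is an assumed parameter I'm introducing for illustration: each player's utility mixes their own payoff with their partner's, and past a certain weight the best response flips from defection to cooperation:

```python
# Illustrative sketch: identifying with the group changes the equilibrium.
# payoff table as in a standard prisoner's dilemma (assumed numbers).
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_response(their_move, w):
    """Best move when my utility = my payoff + w * partner's payoff.

    w = 0.0 is a pure egoist; w = 1.0 values the partner's payoff as
    much as their own (full identification with the group).
    """
    def utility(my_move):
        return payoff[(my_move, their_move)] + w * payoff[(their_move, my_move)]
    return max("CD", key=utility)

# Pure egoists defect no matter what...
assert best_response("C", 0.0) == "D"
assert best_response("D", 0.0) == "D"

# ...but fully group-identified players cooperate no matter what.
assert best_response("C", 1.0) == "C"
assert best_response("D", 1.0) == "C"
```

With `w = 1.0`, mutual cooperation becomes a stable equilibrium, which is exactly the shift the paragraph above describes - and also why it smells of hive minds: the shift only works if the weight is imposed on everyone.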

Humans are poor knobs and wheels in a mechanism. Building flawless 
mechanisms out of humans is a doomed effort. On the other hand, the 
current hyper-individualistic bent, promoting personal success and 
disregarding communities, doesn't look like the way to go either. If you 
really want to engineer a group, either make it into a religious cult or 
make it fault-tolerant. Or both.

In other words, I don't think there is anything you (or anybody) could 
repair. We as a species are bound to oscillate between egoistic and group 
instincts. Sometimes to good effect.

Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.      **
** As the answer, master did "rm -rif" on the programmer's home    **
** directory. And then the C programmer became enlightened...      **
**                                                                 **
** Tomasz Rola          mailto:tomasz_rola at bigfoot.com             **


