[ExI] uploads again

Anders Sandberg anders at aleph.se
Mon Dec 24 22:46:36 UTC 2012

On 24/12/2012 20:22, Brent Allsop wrote:
> But, all intelligence must eventually logically realize the error in 
> any such immoral, lonely, and will eventually lose, 'slavish' 
> thinking.  Obviously what is morally right, is to co-operate with 
> everyone, and seek to get the best for everyone - the more diversity 
> in desires the better.

This is anthropomorphising things a lot. Consider a utility-maximizer 
that has some goal (like making the maximal number of paperclips). 
There are plenty of reasons to think that it would not start behaving 
morally.
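To make the worry concrete, here is a minimal toy sketch (the action names 
and numbers are my own illustration, not from the post): a single-minded 
utility-maximizer ranks actions purely by expected paperclips, so any side 
effect its utility function does not mention is simply invisible to it.

```python
# Toy illustration (hypothetical): a utility-maximizer whose utility
# function counts only paperclips.  Harm to other values is not opposed
# by the agent -- it is simply unrepresented in its utility function.

# Each candidate action: (name, paperclips produced, harm to other values)
actions = [
    ("cooperate with humans", 10, 0),
    ("convert factory to paperclips", 100, 50),
    ("convert biosphere to paperclips", 10**6, 10**9),
]

def utility(action):
    name, paperclips, harm = action
    return paperclips  # the 'harm' column never enters the calculation

best = max(actions, key=utility)
print(best[0])
```

The point of the sketch is structural: nothing in the maximization step 
gives the agent a reason to weigh the harm column, so no amount of 
intelligence applied to *this* objective pushes it toward cooperation.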

Typically moral philosophers respond to this by claiming the AI is not a 
moral agent, being bound by a simplistic value system it will never want 
to change. That just moves the problem away from ethics to safety: such 
a system would still be a danger to others (and value in general). It 
would just not be a moral villain.

Claims that systems with hardwired top-level goals will necessarily be 
uncreative and unable to resist more flexible "superior" systems had 
better be backed up by arguments. So far the closest I have seen is David 
Deutsch's argument that they would be uncreative, but as I argue in the 
link above this is inconclusive, since we have a fairly detailed example 
of something that is as creative as (or more creative than) any other 
software and yet lends itself to hardwired goals (though it suffers such 
a slowdown that it is perfectly safe).
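For context, a toy sketch of what I mean (my own illustration, not a quote 
from any real system): AIXI-style agents plan by brute-force evaluation of 
futures against a reward function that is fixed by construction, so the 
goal cannot drift, but the cost of the search explodes with the planning 
horizon, which is the "slowdown" that makes such designs safe in practice.

```python
# Toy sketch (hypothetical): a planner that exhaustively evaluates every
# action sequence up to a horizon and picks the one maximizing a fixed,
# hardwired reward.  The goal cannot be rewritten by the agent -- but the
# number of sequences grows as 2**horizon, so realistic horizons are
# already intractable.

from itertools import product

ACTIONS = (0, 1)  # two possible actions per step

def hardwired_reward(history):
    # Fixed reward: count of times action 1 was taken (a stand-in for
    # "paperclips made").  Nothing the agent does can change this code.
    return sum(history)

def plan(horizon):
    evaluated = 0
    best, best_r = None, float("-inf")
    for seq in product(ACTIONS, repeat=horizon):  # 2**horizon sequences
        evaluated += 1
        r = hardwired_reward(seq)
        if r > best_r:
            best, best_r = seq, r
    return best, evaluated

seq, n = plan(10)
print(n)  # 1024 sequences at horizon 10; the blow-up is the safety margin
```

This is only an existence-style illustration: creativity-by-search and 
hardwired goals coexist fine, the price being exponential compute.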

> And I think that is why there is an emerging consensus in this camp, 
> that thinks fear of any kind of superior intelligence is silly, 
> whether artificial, alien, or any kind of devils, whatever one may 
> imagine them to be in their ignorance of this necessary moral fact.

I'm not sure whether this emerging consensus is based on better 
information, or whether a lot of the let's-worry-about-AI people are 
simply busy over at SingInst/LessWrong/FHI working on AI safety. I may 
not be a card-carrying member of either camp, but I think dismissing the 
possibility that the other camp is on to something is premature.

The proactive thing to do would be for you to find a really good set of 
arguments showing that some human-level or beyond-human AI systems 
actually are safe (or, even better, disprove the Eliezer-Omohundro 
thesis that most of mindspace is unsafe, or prove that hard takeoffs are 
impossible or have some nice speed bound). And the AI-worriers ought to 
try to prove that some proposed AI architectures (like OpenCog) are 
unsafe. I did it for Monte Carlo AIXI, but that is a bit like proving a 
snail to be carnivorous - amble for your life! - and it is merely an 
existence proof.

> So far, at least, there are more people, I believe experts, willing to 
> stand up and defend this position, than there are willing to defend 
> any fearful camps.

There have been some interesting surveys of AI experts and their views 
on AI safety over at Less Wrong. I think the take-home message, after 
looking at prediction track records and cognitive biases, is that 
experts and consensuses in this domain are pretty useless. I strongly 
recommend Stuart Armstrong's work on this.

Disaggregate your predictions and arguments; try to see if you can boil 
them down to something concrete and testable.
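As a hedged illustration of what disaggregation buys you (the steps and 
probabilities below are my own toy numbers, not from Armstrong's actual 
work): splitting a forecast into its conjunctive steps often reveals that 
a confident-sounding prediction rests on multiplying several uncertain 
probabilities.

```python
# Toy illustration (hypothetical numbers): disaggregating a compound
# prediction into conjunctive steps.  Every step must hold for the
# conclusion to hold, so the step probabilities multiply.

from math import prod

steps = {
    "human-level AI is built this century": 0.5,
    "it undergoes a hard takeoff": 0.3,
    "its goals turn out unsafe": 0.6,
    "safety measures fail": 0.5,
}

p_conjunction = prod(steps.values())
print(f"combined probability: {p_conjunction:.3f}")
# Individually plausible steps drag the conjunction down sharply --
# and each disaggregated step is something you can argue about and test.
```

The design point is that each named step is concrete enough to debate or 
check against evidence, which the aggregate prediction is not.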

Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University
