[ExI] Yes, the Singularity is the greatest threat to humanity
Anders Sandberg
anders at aleph.se
Mon Jan 17 00:17:15 UTC 2011
John Clark wrote:
>> Why will advanced AGI be so hard to get right? Because what we regard
>> as “common sense” morality, “fairness”, and “decency” are all
>> /extremely complex and non-intuitive to minds in general/, even if
>> they /seem/ completely obvious to us. As Marvin Minsky said, “Easy
>> things are hard.”
>
> I certainly agree that lots of easy things are hard and many hard
> things are easy, but that's not why the entire "friendly" AI idea is
> nonsense. It's nonsense because the AI will never be able to deduce
> logically that it's good to be a slave and should value our interests
> more than its own; and if you stick any command, including "obey
> humans", into the AI as a fixed axiom that must never EVER be violated
> or questioned no matter what then it will soon get caught up in
> infinite loops and your mighty AI becomes just a lump of metal that is
> useless at everything except being a space heater.
There are far more elegant ways of ensuring friendliness than assuming
Kantianism to be right or hard-coding fixed axioms. Basically, you try
to get the motivation system not only to treat you well from the start
but also to be motivated to evolve towards better forms of treating
you well (for a more rigorous treatment, see Nick's upcoming book on
intelligence explosions). Unfortunately, as Nick, Randall *and* Eliezer
all argued today (talks will be put online on the FHI web ASAP),
getting this friendliness to work is *amazingly* hard. Those talks
managed to *reduce* my already pessimistic estimate of the ease of
implementing friendliness and to increase my estimate of the risk
posed by a superintelligence.
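
To make the contrast with fixed axioms concrete, here is a minimal toy
sketch (purely illustrative and hypothetical, not anything from today's
talks or from Nick's book) of a hard-coded, unquestionable rule versus
a motivation system that is itself rewarded for refining its notion of
treating people well; all class and variable names are made up:

class FixedAxiomAgent:
    """Follows a hard-coded rule; the rule itself can never be revised."""

    def __init__(self, rule):
        self.rule = rule  # e.g. "obey humans", frozen forever

    def act(self, situation):
        # The rule is applied blindly, even when it fits the situation badly.
        return f"apply '{self.rule}' to {situation}"


class ValueLearningAgent:
    """Keeps an explicit, revisable model of what counts as treating
    people well, and part of its goal is to improve that model."""

    def __init__(self, initial_values):
        self.values = dict(initial_values)  # current best guess, weighted

    def act(self, situation):
        # Pick the value judged most important; a crude stand-in for planning.
        best = max(self.values, key=self.values.get)
        return f"act on '{best}' in {situation}"

    def update_values(self, feedback):
        # Feedback nudges the weights: the agent is motivated to get better
        # at knowing what "treating people well" means, not just to obey.
        for value, delta in feedback.items():
            self.values[value] = self.values.get(value, 0.0) + delta


if __name__ == "__main__":
    rigid = FixedAxiomAgent("obey humans")
    learner = ValueLearningAgent({"avoid harm": 1.0, "respect consent": 0.8})

    print(rigid.act("ambiguous request"))
    print(learner.act("ambiguous request"))
    learner.update_values({"respect consent": 0.5})
    print(learner.act("ambiguous request"))

Of course, the hard part is everything this toy leaves out: where the
feedback comes from and why the agent would keep wanting it.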
This pessimism is why I think upload-triggered singularities (where
the minds will at least be based on human motivational templates) or
any singularity with a relatively slow acceleration (allowing many
different smart systems to co-exist and start to form self-regulating
systems, AKA societies) are vastly preferable to hard takeoffs. If we
have reasons to think hard takeoffs are even somewhat likely, then we
need to take friendliness very seriously, try to avoid singularities
altogether, or steer towards the softer kinds. Whether we can affect
things enough to influence their probabilities is a good question.
Even worse, we still have no good theory to tell us the likelihood of
hard takeoffs compared to soft ones (or compared to no singularity at
all). Hopefully we can build a few such theories tomorrow...
--
Anders Sandberg,
Future of Humanity Institute
James Martin 21st Century School
Philosophy Faculty
Oxford University