[ExI] Yes, the Singularity is the greatest threat to humanity

Giulio Prisco giulio at gmail.com
Mon Jan 17 08:29:59 UTC 2011


In agreement with Anders, I often think the upload path to AGI is among the
most feasible and desirable. A superintelligence based on a human template,
one who remembers having been human, may retain at least some empathy and
compassion.

--
Giulio Prisco
giulio at gmail.com
(39)3387219799
(1)7177giulio
On Jan 17, 2011 1:18 AM, "Anders Sandberg" <anders at aleph.se> wrote:
> John Clark wrote:
>>> Why will advanced AGI be so hard to get right? Because what we regard
>>> as “common sense” morality, “fairness”, and “decency” are all
>>> /extremely complex and non-intuitive to minds in general/, even if
>>> they /seem/ completely obvious to us. As Marvin Minsky said, “Easy
>>> things are hard.”
>>
>> I certainly agree that lots of easy things are hard and many hard
>> things are easy, but that's not why the entire "friendly" AI idea is
>> nonsense. It's nonsense because the AI will never be able to deduce
>> logically that it's good to be a slave and should value our interests
>> more than its own; and if you stick any command, including "obey
>> humans", into the AI as a fixed axiom that must never EVER be violated
>> or questioned no matter what, then it will soon get caught up in
>> infinite loops and your mighty AI becomes just a lump of metal that is
>> useless at everything except being a space heater.
>
> There are far more elegant ways of ensuring friendliness than assuming
> Kantianism to be right or hard-coding fixed axioms. Basically, you try to
> get the motivation system not only to treat you well from the start but
> also to be motivated to evolve towards better forms of treating you well
> (for a more stringent treatment, see Nick's upcoming book on intelligence
> explosions). Unfortunately, as Nick, Randall *and* Eliezer all argued
> today (the talks will be put online on the FHI web ASAP), getting this
> friendliness to work is *amazingly* hard. Those talks managed to
> *reduce* my already pessimistic estimate of the ease of implementing
> friendliness and to increase my estimate of the risk posed by a
> superintelligence.
>
> This is why I think upload-triggered singularities (the minds will be
> based on human motivational templates at least) or any singularity with
> a relatively slow acceleration (allowing many different smart systems to
> co-exist and start to form self-regulating systems AKA societies) are
> vastly preferable to hard takeoffs. If we have reason to think
> hard takeoffs are even somewhat likely, then we need to take
> friendliness very seriously, try to avoid singularities altogether, or
> move towards the softer kinds. Whether we can affect things enough to
> influence their probabilities is a good question.
>
> Even worse, we still have no good theory to tell us the likelihood of
> hard takeoffs compared to soft (and compared to no singularity at all).
> Hopefully we can build a few tomorrow...
>
> --
> Anders Sandberg,
> Future of Humanity Institute
> James Martin 21st Century School
> Philosophy Faculty
> Oxford University
>