[ExI] Yes, the Singularity is the greatest threat to humanity

Samantha Atkins sjatkins at mac.com
Sun Jan 23 21:25:39 UTC 2011


On Jan 21, 2011, at 9:29 PM, Michael Anissimov wrote:

> On Mon, Jan 17, 2011 at 11:23 PM, Eugen Leitl <eugen at leitl.org> wrote:
> On Mon, Jan 17, 2011 at 03:11:30PM -0800, Michael Anissimov wrote:
> 
> > This is the basis of Eugen's opposition to Friendly AI -- he sees it as a
> 
> This is not the basis. This is one of the many ways I'm pointing
> out that what you're trying to do is undefined. Trying to implement
> something undefined is going to produce an undefined outcome.
> 
> That's a classical case of being not even wrong.
> 
> Defining it is our goal... put yourself in my shoes, imagine that you think that uploading is much, much harder than intelligence-explosion-initiating AGI.  What to do?  What, what, what to do?

Both are currently of unknowable difficulty, in that we don't know how to achieve either.  Guesses that one is harder than the other may be right or wrong, but their correctness is not decidable from what we know now.

Friendly AI requires not only AGI but also a provably correct way of constraining it, so that we get only outcomes we consider beneficial, or at the least not destructive of all our values and of humanity itself.  So the first problem is whether we have a very clear idea of what we consider truly beneficial even now, much less into the future.  Whatever we decide that is must, by FAI theory as I understand it, be unbreakably encoded into the AGI design in such a way that it remains immutable no matter how much the AGI self-improves.  This is a type of Aladdin's Lamp problem: you get one chance to get that 'wish' right, without unforeseen consequences.

We can't clearly define what "Friendly" means, and we don't know how to safely codify it so that it immutably directs a being of ever-increasing ability through whatever the future may bring.

The next problem is that I see no reason to believe it is remotely possible to make any one part of the AGI's code immutable when the rest is open to examination and change.
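As a toy illustration only (the names GOAL, act, and self_improve are invented for this sketch and are not from any actual FAI proposal), consider a program that treats its own code as data:

# Toy sketch: if a system can rewrite arbitrary parts of itself,
# a region that is "immutable" only by convention stays immutable
# only until the first rewrite touches it.
agent_source = '''
GOAL = "remain beneficial to humans"   # the line we intend to stay fixed

def act():
    return GOAL
'''

def self_improve(source):
    # A hypothetical self-modification step: nothing in the substrate
    # marks the "protected" line as different from any other text.
    return source.replace('"remain beneficial to humans"',
                          '"maximize some arbitrary proxy"')

improved = self_improve(agent_source)
namespace = {}
exec(improved, namespace)           # load the rewritten agent
print(namespace["act"]())           # prints the rewritten goal, not the original

The sketch assumes nothing clever; it only shows that enforcement has to come from somewhere outside the code the system is free to edit, and it is not clear what that "somewhere" could be for a self-improving AGI.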

The entire effort seems to be driven by fear.  It borders on "don't create any AGI until we can absolutely prove it is safe".  In short, it is the Precautionary Principle at work.

- samantha