[ExI] Unfriendly AI is a mistaken idea

Brent Allsop brent.allsop at comcast.net
Sat May 26 21:30:23 UTC 2007


Russell,

Thanks for doing this work!  This is a great start at getting this issue 
"canonized" so we can know what everyone believes on this issue and 
finally make some progress.  There is much in your statement that I 
agree with, so I could almost add my POV statement in a camp underneath 
yours.  Since support of a sub camp implies support of its super camps, 
my support could potentially include support of your camp.  But I do 
believe there are enough differences to necessitate my being in a 
sibling camp, to emphasize those differences.

But we both definitely agree that the notion of Friendly or 
Unfriendly AI is silly, right?  So I propose a foundation statement under 
the agreement statement that says what we agree on: that the notion of 
Friendly or Unfriendly AI is silly.

Each camp has a "name" and a "one line" description, so could you come up 
with these for your statement, Russell?  The name is meant to be a 
mnemonic, like a file name, and must be less than 25 chars.  (You don't 
want a path of camp names like God / Theist / Monotheism / Christian / 
Mormon / Transhumanist to get too long! ;)  The "one line" description is 
like a camp title and can be a little longer.

My original statement had this:

Name: *Such concern is mistaken*
One Line: *Concern over unfriendly AI is a big mistake.*

If you agree, Russell, then I propose we use these for the super camp 
that will contain both of our camps, and have the first version of the 
text be something simple like:

Text: *We believe the notion of Friendly or Unfriendly AI to be silly 
for different reasons described in subordinate camps.*

Then perhaps we can find other parts of our camps that we agree on (I 
think there is a lot of this), and move those shared points into the 
super camp rather than repeating them in each of our sub camps.

I will rename my camp as follows and move it to be a sibling of yours 
under this new super camp:

Name:  *AI can only be friendly*
One Line: *AI will naturally be increasingly motivated and friendly.*

and include the text I have there now: 
http://test.canonizer.com/topic.asp?topic_num=16&statement_num=2

(unless someone can help me improve it.)

How does that sound to you, Russell?  Does anyone disagree with this 
proposed structure?

I initially thought the camp I was in, that the notion of unfriendly AI 
is silly, was by far a minority camp here.  But surprise, surprise: 
evidently I'm not in that much of a minority after all.  I'm sure 
there are at least some people in other camps, but I'm having trouble 
figuring that out from all the tangential, very verbose, flip-floppy, 
personal conversations that are hard to keep up with.  So I hope some of 
these other people will propose concise camp statements so we can 
finally make more progress on this issue and specify precisely what 
extropians do believe.  How many more people are in any of the three 
camps in the structure forming so far?  Maybe we can finally expunge 
this idea from our beliefs, discussions, and writings for good and move 
up to the next level?


Thanks!

Brent Allsop


Russell Wallace wrote:
> On 5/24/07, *Brent Allsop* <brent.allsop at comcast.net 
> <mailto:brent.allsop at comcast.net>> wrote:
>
>
>     Could some of you in different camps dig up some of your old
>     posts, and clean them up a bit, or whatever and propose it as a
>     concise description of your POV here (or post it to the Canonizer
>     on this topic) so other people can know it without having to
>     attempt to digest all the notes groups histories?
>
>
>
> Okay, here goes:
>
>
> The entire question of "Friendly" versus "Unfriendly" AI is based on 
> anthropomorphism; our intuitions are shaped by a million years of 
> living in a world where we were the only general intelligences. 
> Therefore almost every time we portray AGI, we portray metal men with 
> human psychology. Even when those of us with expertise in the field 
> try our best to remove the anthropomorphism, we still end up talking 
> about "human-level AGI" - in particular, that human quality of 
> possessing a self-willed mind, something that acts in the world 
> without - or even against - direction; that has motives, which must be 
> trusted.
>
> I think we will never have human-level AGI in the same way that we 
> will never have bird-level flight - because there is no scalar "level".
>
> Does an F-22 have bird-level flight? In one way the answer is yes and 
> much more - it flies far faster than any bird.
>
> But flight in the real world includes refueling, maintenance and 
> manufacturing. And an F-22's performance in these areas is infinitely 
> inferior to that of a bird; it is entirely dependent on humans to 
> manufacture, refuel and maintain it.
>
> At this point some readers will be thinking that these gaps might 
> someday be filled in given sufficiently advanced nanotechnology. And 
> indeed there is no known law of physics that forbids this.
>
> But when you're working with machine phase rather than living cells, 
> even if you _can_ burden a combat aircraft with the cost and overhead 
> of these capabilities, there is no practical reason to do so. The F-22 
> was built by professional engineers for a practical purpose. If 
> bird-level flight is ever created in centuries to come, it will be 
> done by hobbyists for the coolness factor, and only long after it is 
> of no practical relevance. That is what I mean when I say we will 
> never have bird-level flight in the practical sense: it will never be 
> done by anyone working in their capacity as professional engineers, 
> because the _shape_ of capabilities implied by machine phase is so 
> different from that implied by biology.
>
> The same applies to intelligence. It is not a scalar quantity, but 
> possesses a complex shape. We already have computers that outperform 
> humans in arithmetic by a factor of a quadrillion, yet underperform in 
> almost all other tasks by a factor of infinity. That's a difference in 
> shape of capabilities that implies a completely different path. It 
> will be no more feasible or necessary for AGI to duplicate all the 
> abilities of a human than it is feasible or necessary for an F-22 to 
> duplicate all the abilities of a bird. (Again, I'm not saying an AGI 
> with the shape of a human mind can't ever be created, in a thousand or 
> a million years or whatever from now - but if so, it will be done for 
> the coolness factor, not by professional engineers who want it to 
> solve a practical problem. It will never be cutting edge.)
>
> Furthermore, even if you postulate AGI0 that could create AGI1 unaided 
> in a vacuum, there remains the fact that AGI0 won't be in a vacuum, 
> nor if it were would it have any motive for creating AGI1, nor any 
> reason to prefer one bit stream rather than another as a design for 
> AGI1. There is after all no such function as:
>
> float intelligence(program p)
>
> There is, however, a family of functions (albeit incomputable in the 
> general case):
>
> float intelligence(program p, job j)
>
> In other words, intelligence is useful - and can be said to even exist 
> - only in the context of the jobs the putatively intelligent agent is 
> doing. And jobs are supplied by the real world - which is run by 
> humans. Even in the absence of technical issues about the shape of 
> capabilities, this alone would suffice to require humans to stay in 
> the loop.
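>
> As a rough illustration only (the struct definitions and the made-up 
> scores below are hypothetical placeholders; a real intelligence(p, j) 
> is incomputable in the general case, as noted above):
>
>     #include <stdio.h>
>     #include <string.h>
>
>     typedef struct { const char *name; } program;
>     typedef struct { const char *task; } job;
>
>     /* No meaningful rating exists for a program in isolation;
>        a score only exists relative to a particular job. */
>     float intelligence(program p, job j)
>     {
>         /* placeholder numbers, just to show the dependence on the job */
>         if (strcmp(j.task, "arithmetic") == 0)
>             return 1e15f;   /* machines vastly outperform humans here */
>         return 0.0f;        /* ...and fail at almost everything else */
>     }
>
>     int main(void)
>     {
>         program p = { "present-day computer" };
>         job arithmetic = { "arithmetic" };
>         job conversation = { "open-ended conversation" };
>
>         printf("%s / %s: %g\n", p.name, arithmetic.task,
>                intelligence(p, arithmetic));
>         printf("%s / %s: %g\n", p.name, conversation.task,
>                intelligence(p, conversation));
>         return 0;
>     }
>
> The same program gets wildly different scores depending on the second 
> argument; ask about the program alone and there is nothing to score.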
>
> The point of all this isn't to pour cold water on people's ideas, it's 
> to point out that we will make more progress if we stop thinking of 
> AGI as a human child. It's a completely different kind of thing, and 
> more akin to existing software in that it must function as an 
> extension of, rather than replacement for, the human mind. That means 
> we have to understand it in order to continue improving it - black box 
> methods have to be confined to isolated modules. It means user 
> interface will continue to be of central importance, just as it is 
> today. It means the Lamarckian evolutionary path of AGI will have to 
> be based, just as current software is, on increased usefulness to 
> humans at each step.
>
> This is why the question of whether AGI will be Friendly or Unfriendly 
> is as relevant as the question of whether it will be bearded or 
> clean-shaven.



More information about the extropy-chat mailing list