[ExI] Unfriendly AI is a mistaken idea

Richard Loosemore rpwl at lightlink.com
Wed May 23 13:30:35 UTC 2007


Brent,

There is another position -- one that I have stated on the Singularity 
and AGI lists -- that is somewhere in between your position and the SIAI 
position.

I think friendliness is an important issue, but the SIAI approach to it 
is extremely naive and unworkable (briefly: it assumes a certain type 
of AI, with a certain type of motivational system, that in practice 
will probably not actually function in a real intelligent system).

An alternative approach to the issue of friendliness would use types of 
motivational system that are more controllable, although less 
mathematically provable.  You can find a short sketch of the idea at:

http://www.mail-archive.com/singularity@v2.listbox.com/msg00316.html

The reason this position is in between yours and the SIAI one is that I 
do believe that the SIAI insistence on provability is an obstruction: 
they will be trying to do this forever, without succeeding, and if the 
rest of us sit around waiting for them to realize that they can never 
succeed, we will waste a ridiculous amount of time.

Also, you suggest that any sufficiently intelligent being would 
inevitably be friendly:  I do not accept that by itself (too easy to 
think of counterexamples), but on the other hand I believe that a 
certain class of intelligent system (with a certain type of motivational 
system) would have a natural tendency toward friendliness.  That is just 
a statement of my general position that friendliness can in fact be 
assured with a particular design.

One way a badly designed system could be unfriendly is just plain, 
common-or-garden madness: unfortunately, I think the conventional 
approach to AI does not degrade gracefully, and for that reason would 
be more prone to madness than other approaches, such as the one I am 
working on.


Richard Loosemore.





Brent Allsop wrote:
> 
> Extropians,
> 
> I’m hesitating to digg the Singularity Institute's new video, but 
> perhaps this hesitation is misplaced? The reason I am hesitating is 
> that it has always troubled me how concerned this community is about 
> “Friendly AI”. It seems this is more or less a “religious” or 
> point-of-view issue, with vastly different opinions on many diverse 
> sides. I’ve tried to bring this issue up before, and I’ve seen others 
> discuss it some, but it would surely be useless to try to glean 
> information from the log files on this issue, right?
> 
> 
> I think it would be great if we could concisely document the beliefs 
> of each camp, the reasons for them, and also quantitatively document 
> just who, and how many, are in each camp. I have much respect for 
> most of you, and I know many of you are far smarter than me in many 
> areas. So it would really help me to be more sure of my beliefs if I 
> could know precisely what the camps are, the arguments for each, and 
> who believes them. This is why I’m trying to build a POV wiki like 
> the Canonizer. I don’t want 50 half-baked testimonials that I have to 
> search the group’s archives for; I want a concise, encyclopedic, 
> easily digestible specification of each camp and a precise indication 
> of who, and how many, are in each.
> 
> 
> So, towards that end, I’m going to throw out a beginning draft of a 
> specification of a topic on this issue. This will include a 
> Wikipedia-like agreement statement containing the facts and 
> information we find we can all agree on. Anything we do not agree on 
> must be moved to sub-POV statements describing the various “camps” on 
> this issue. I’ve included a beginning statement which I hope can 
> evolve to concisely specify what I (and some of you?) believe.
> 
> 
> For those of you who have a different POV, I hope you can also 
> concisely describe what you believe and why you believe it. 
> Hopefully, if I am wrong, such a concise specification, and the 
> ability to see just who believes it, will enable me to finally “see 
> the light” all the sooner, right?
> 
> 
> Could one of you who believes in the importance of concern about 
> “Friendly AI” throw out a beginning statement of what you believe, to 
> get your camp started? And if anyone is in my camp, I’d sure love to 
> know who you are and to have some help in better arguing this point. 
> Then I can get this data wikied into the Canonizer. I’m hoping that 
> with something like this we can start making better progress on such 
> issues, rather than just rehashing the same old arguments in 
> half-baked ways in forums over and over again.
> 
> 
> Topic Name: Friendly AI Importance
> 
> 
> Statement Name: Agreement
> 
> One Line: The importance of Friendly Artificial Intelligence.
> 
> Many Transhumanists, and others, are concerned about the possibility 
> of an unfriendly AI arising in the near future and resulting in our 
> “doom”. Some, such as those involved in the Singularity Institute, 
> are addressing this by actively publicizing their concerns and 
> working on research to ensure a “Friendly AI” is created first.
> 
> 
> Statement name: Such concern is mistaken
> 
> One Line: Concern over unfriendly AI is a big mistake.
> 
> 
> We believe morality (or, for this topic, we will instead use the term 
> “friendliness”) to be congruent with intelligence. In other words, 
> the more intelligent any being is, the friendlier it will be. We 
> believe it is irrational to believe otherwise, for fundamental 
> reasons such as: if you seek to destroy others, you will then be 
> “lonely”, which cannot be as good as being neither lonely nor 
> destructive. It seems rationally impossible for any sufficiently 
> “smart” entity to escape such absolute “friendly” logic.
> 
> 
> If this is true, such concerns would be unnecessarily damaging to 
> movements pushing technological progress, as they would instill 
> unwarranted fear among the technology-fearful and Luddites, who are 
> prone to fear things such as an “Unfriendly AI”.
> 
> 
> Some of our parents attempted to instill primitive religious beliefs 
> and values in us. But when we discovered how wrong some of these 
> values were, we of course, with some effort, “reprogrammed” ourselves 
> to be much better than that. We believe the idea that you could 
> somehow program any kind of restricting “values” into a 
> self-improving AI is as absurd as the idea that parents might be able 
> to “program” their children to never change the values taught to them 
> in their youth, forevermore.
