[ExI] Unfriendly AI is a mistaken idea

Brent Allsop brent.allsop at comcast.net
Wed May 23 03:58:39 UTC 2007



Extropians,

I’m hesitating to Digg the Singularity Institute’s new video, though 
perhaps this hesitation isn’t a good thing. The reason I am hesitating 
is that it has always troubled me how concerned this community is 
with “Friendly AI”. It seems to be more or less a “religious”, or 
point-of-view, issue, with vastly different opinions on many diverse 
sides. I’ve tried to bring this issue up before, and I’ve seen others 
discuss it some, but it would surely be useless to try to glean 
information on the issue from the log files, right?


I think it would be great if we could concisely document the beliefs 
of each camp, the reasons for them, and also quantitatively document 
just who, and how many, are in each camp. I have much respect for 
most of you, and I know many of you are far smarter than I am in many 
areas. So it would really help me to be more sure of my own beliefs 
if I could know precisely what the camps are, the arguments for each, 
and who believes in them. This is why I’m trying to build a POV wiki 
like the Canonizer. I don’t want 50 different half-baked testimonials 
I have to search the group’s archives for; I want a concise, 
encyclopedic, easily digestible specification of each camp and a 
precise indication of who, and how many, are in each.


So, toward that end, I’m going to throw out a beginning draft of a 
specification of a topic on this issue. It will include a 
Wikipedia-like agreement statement containing the facts and 
information we find we can all agree on. Anything we do not agree on 
must be moved into sub-POV statements describing the various “camps” 
on the issue. I’ve included a beginning statement that I hope can 
evolve to concisely specify what I (and some of you?) believe.


For those of you with a different POV, I hope you can also concisely 
describe what you believe and why you believe it. Hopefully, if I am 
wrong, such a concise specification, and the ability to see just who 
believes it, will enable me to finally “see the light” all the 
sooner, right?


Could one of you who believes in the importance of concern about 
“Friendly AI” throw out a beginning statement of what you believe, to 
get your camp started? And if anyone is in my camp, I’d sure love to 
know who you are and to have some help making this argument better. 
Then I can get this data wikied into the Canonizer. I’m hoping that 
with something like this we can start making real progress on such 
issues, rather than just rehashing the same old arguments in 
half-baked ways in forums over and over again.


Topic Name: Friendly AI Importance


Statement Name: Agreement

One Line: The importance of Friendly Artificial Intelligence.

Many Transhumanists, and others, are concerned about the possibility 
of an unfriendly AI arising in the near future and resulting in our 
“doom”. Some, such as those involved with the Singularity Institute, 
are addressing this concern by actively publicizing it and by working 
on research to ensure that a “Friendly AI” is created first.


Statement Name: Such concern is mistaken

One Line: Concern over unfriendly AI is a big mistake.


We believe morality, or, for this topic, “friendliness”, to be 
congruent with intelligence. In other words, the more intelligent any 
being is, the friendlier it will be. We believe it is irrational to 
believe otherwise, for fundamental reasons such as this: if you seek 
to destroy others, you will end up “lonely”, which cannot be as good 
as being neither lonely nor destructive. It seems rationally 
impossible for any sufficiently “smart” entity to escape such 
absolute “friendly” logic.


If this is true, then such concerns are unnecessarily damaging to 
movements pushing technological progress, as they instill unwarranted 
fear among the technology-fearful and the Luddites, who are already 
prone to fear things such as an “Unfriendly AI”.


Some of our parents attempted to instill primitive religious beliefs 
and values in us. But when we discovered how wrong some of those 
values were, we of course, with some amount of effort, “reprogrammed” 
ourselves to be much better than that. We believe that the very idea 
of somehow programming restricting “values” into a self-improving AI 
is as absurd as the idea that parents could “program” their children 
never to change the values taught to them in their youth, forever 
more.




