[ExI] Unfriendly AI is a mistaken idea
Samantha Atkins
sjatkins at mac.com
Mon May 28 05:46:55 UTC 2007
Part of what I wrote on another list today (a thread that started
with some thoughts on a self-improving AGI theorem prover) may
clarify my position further:
"Today we have no real idea how to build a working AGI. We have
theories, some of which seem to people that have studied such things
plausible or at least not obviously flawed - at least to a few of the
qualified people. But we don't have anything working that looks
remotely close. We don't have any software system that self-improves
in really interesting ways except perhaps for some very, very
constrained genetic programming domains. Nada. We don't have
anything that can pick concepts out of the literature at large and
integrate them.
To think that a machine that is a glorified theorem prover will
spontaneously extrapolate all the ways it might get better,
spontaneously sprout perfect general concept-extraction and learning
algorithms, spontaneously come to understand software and hardware
in depth, develop a will to improve that outweighs every other
consideration whatsoever yet never once question that will/goal in
the course of its self-improvement, and somehow get or be given all
the resources needed to eventually convert everything to a highly
efficient computational matrix, is utterly and completely bizarre
when you think about it. It is far more unlikely than gray goo.
Can we stop wasting very valuable time and brains on protecting
against the most unlikely of possibilities and get on with actually
increasing intelligence on this poor besotted rock?"
This is similar to what Russell says, I guess, except that I am far
less pessimistic than he is about the possibility of strong AI in
the near term. However, I believe we can hardly control the
consequences of its arrival for human well-being at all. So why am
I so interested in AI? Because I believe that without a drastic
increase in intelligence on this planet we have little hope at all
of surviving to see the 22nd century. Hopefully much of that
intelligence can come from Intelligence Augmentation of humans and
groups of humans. But that has its own dangers.
So yes, unfriendly AI is a concern, but I consider lack of
sufficient intelligence (and yes, wisdom) to be a far greater and
more pressing concern.
- samantha
On May 27, 2007, at 12:58 PM, Brent Allsop wrote:
>
> Russell,
>
> Thanks for adding your camp and support; it looks great!  Your 1
> hour share of Canonizer LLC has been recorded.  Notice that this
> anti-"Friendly AI" topic has now moved up from the 10th most
> supported topic to #4.
>
> Here is a link to the topic as it is currently proposed:
>
> http://test.canonizer.com/change_as_of.asp?destination=http://test.canonizer.com/topic.asp?topic_num=16&as_of=review/
>
> Samantha,
>
> Thanks for proposing yet another good position statement; it looks
> to me like another good point.  Since you disagree with a notion in
> the super statement - that "Concern Is Mistaken" - we should
> probably restructure things to better accommodate your camp.  You
> don't want to support a sub camp of a camp containing a notion you
> disagree with, and thereby imply support of the (in your POV)
> mistaken notion.
>
> Could we say that one of the points of contention seems to have to
> do with the motivation of future AI?  Russell, you believe that for
> the foreseeable future tools will just do what we program them to
> do, and they will not be motivated, right?  Whereas apparently
> Samantha and I believe tools will be increasingly motivated, and
> thereby differ with you on this belief?
>
> Samantha's and my differences have to do with the morality (or
> friendliness) of such motivations, right?  I believe that, like
> everything else, the morality of this motivation will be improving,
> so it need be of no real concern.  Samantha differs, believing the
> morality may not improve along with everything else, so it could
> indeed be a real concern - right, Samantha?  Yet, since Samantha
> believes that if such were the case there would be nothing we could
> do about it, it isn't worth any effort, hence she is in agreement
> with Russell and me about the lack of worth of any effort toward
> creating a friendly AI at this time?
>
> So does anyone think a restructuring like the following would not
> be a big improvement? Or can anyone propose a better structure?
>
> No benefit for effort on Friendly AI
>     AI will be motivated
>         Everything, including moral motivation, will improve. (Brent)
>         We can’t do anything about it, so don’t waste effort on it. (Samantha)
>     AI will not be motivated (Russell)
>
> Any other POV out there we're still missing before we make a big
> structure change like this?
>
> Thanks for your effort folks! This is really helping me realize
> and understand the important issues in a much more productive way.
>
> Brent Allsop
>
> Samantha Atkins wrote:
>>
>> On May 27, 2007, at 12:59 AM, Samantha Atkins wrote:
>>
>>>
>>> On May 26, 2007, at 2:30 PM, Brent Allsop wrote:
>>>> My original statement had this:
>>>>
>>>> Name: Such concern is mistaken
>>>> One Line: Concern over unfriendly AI is a big mistake.
>>>>
>>>> If you agree, Russell, then I propose we use these for the
>>>> super camp that will contain both of our camps, and have the
>>>> first version of the text be something simple like:
>>>>
>>>> Text: We believe the notion of Friendly or Unfriendly AI to be
>>>> silly for different reasons described in subordinate camps.
>>>>
>>>
>>> Subgroup: No effective control.
>>>
>>> The issue of whether an AI is friendly or not is, once we strip
>>> away anthropomorphism of the AI, an issue of whether the AI is
>>> strongly harmful to our [true] interests or strongly beneficial.
>>> It is a very real and reasonable concern.  However, it is
>>> exceedingly unlikely that we can exert much leverage at all over
>>> the future decisions of a very advanced non-human intelligence,
>>> or over the effects of those decisions on us.  Hence the issue,
>>> while obviously of interest to us, cannot really be resolved.
>>> Concern itself, however, is not a "big mistake".  The mistake is
>>> believing we can actually appreciably guarantee a "friendly"
>>> outcome.
>>>
>>
>> There is also some danger that over-concern with ensuring
>> "Friendly" AI, when we in fact cannot do so, slows down or
>> prevents us from achieving strong AI at all.  If a great influx
>> of substantially greater-than-human intelligence is required for
>> our survival, then postponing AI could itself be a real
>> existential risk, or a failure to avert other existential risks.
>>
>> - s
>>