[ExI] Unfriendly AI is a mistaken idea

Brent Allsop brent.allsop at comcast.net
Mon May 28 18:39:22 UTC 2007


Samantha,

<<<
Can we stop wasting very valuable time and brains on protecting against 
the most unlikely of possibilities and get on with actually increasing 
intelligence on this poor besotted rock?
 >>>

Absolutely!  That is the moral imperative of this exercise.  It is 
immoral to waste time because of what we might all lose through any such 
immoral or mistaken behavior.  Personally, I believe we could 
unnecessarily lose yet another of our family members to rotting in the 
grave for every mistaken hour we spend pushing in the wrong direction, 
and that ain't moral, right?  Not to mention some of the calamities you 
are so rightly concerned about.

If we can get your support, we will be that much more successful at 
finally pushing this besotted rock off the table (especially if no 
Friendly AI people are committed to their camp enough to produce and 
support a competing camp).  These clarifications here are good: they 
indicate that you admit other camps could turn out to be the ones that 
are correct, as do I.  But that isn't the purpose here.  We must select 
the best hypothesis to guide our moral decisions, right?  If I believed 
more in what Russell (or others) was saying, I would make very different 
moral decisions about how to live my life.  But I must select the single 
best and most likely belief as my working hypothesis to influence and 
direct my moral decisions in life.  If new evidence comes in, or I gain 
some new insight based on others' "position statements" convincing me 
there is a better way, then I will admit I was wrong, change camps 
(losing reputation according to good Canonizers), and start living my 
life according to the new working hypothesis.  (And trust the first 
people in that better camp much more the next time around via 
appropriately programmed Canonizers.)  I am always glad there is great 
diversity of camps, so that hopefully all the important bases will be 
covered (that is, with correspondingly and quantitatively less effort 
on the lesser camps...).

So, other than the reservations you seem to have about being absolutely 
right, your preferred camp and current working hypothesis still fit 
into this structure, right?

   1. No benefit for effort on Friendly AI
         1. AI will be motivated
               1. Everything, including moral motivation, will increase.
                  (Brent)
                2. We can’t do anything about it, so don’t waste effort
                   on it. (Samantha)
         2. AI will not be motivated (Russell)

Could we get you to come up with a brief, concise name, a one-line 
summary, and text for your camp?  I'm sure I couldn't get as close as 
you could by trying to sympathetically glean your beliefs from what you 
have said so far, right?  And either way, would you be willing to 
support such a camp, as Russell has supported his, to help finally get 
this rock off the table?

Upward,

Brent Allsop


