[ExI] singularity summit on foxnews

Samantha Atkins sjatkins at mac.com
Fri Sep 14 03:49:06 UTC 2007


Stefano Vaj wrote:
> On 9/11/07, Brent Allsop <brent.allsop at comcast.net> wrote:
>   
>> As a lesser issue, it is still my opinion that you are making a big
>> mistake with what I believe to be mistaken and irrational fear mongering
>> about "unfriendly AI" that is hurting the Transhumanist and strong
>> AI movements.
>>     
>
> I still have to read something clearly stating to whom exactly an
> "unfriendly AI" would be unfriendly and why - but above all why you or
> I should care, especially if we were to be (physically?) dead anyway
> before the coming of such an AI.
> It is not that I think that those questions are unanswerable or that
> it would be impossible to find arguments to this effect, I simply
> think they should be made explicit and opened to debate.
>
>   
You haven't been around long enough to know what the worries are? To 
name a few off the top of my head:

1) AGIs that are smarter and more capable than humans, perhaps by many 
orders of magnitude, will be uncontrollable by us;

2) Assuming a roughly equivalent economic scenario (needing income to 
enjoy much of the good things in life), it is very likely that most or 
all humans will in short order be unemployable, with few if any 
marketable skills;

3) It is quite possible that the AGIs will consider humans irrelevant 
at best, or even a waste of material resources, and might decide we 
are not worth keeping around.

- samantha

