Stefano Vaj wrote:
> On 9/11/07, Brent Allsop <brent.allsop@comcast.net> wrote:
<blockquote type="cite">
<pre wrap="">As a lessor issue, it is still of my opinion that you are making a big
mistake with what I believe to be mistaken and irrational fear mongering
about "unfriendly AI" that is hurting the Transhumanist, and the strong
AI movement.
</pre>
</blockquote>
<pre wrap=""><!---->
I still have to read something clearly stating to whom exactly an
"unfriendly AI" would be unfriendly and why - but above all why you or
I should care, especially if we were to be (physically?) dead anyway
before the coming of such an AI
It is not that I think that those questions are unanswerable or that
it would be impossible to find arguments to this effect, I simply
think they should be made explicit and opened to debate.
</pre>
</blockquote>
You haven't been around long enough to know what the worries are? To
name a few off the top of my head:

1) AGIs that are smarter and more capable than humans, perhaps by many
orders of magnitude, will be uncontrollable by us;

2) Assuming a roughly equivalent economic scenario (needing income to
enjoy most of the good things in life), it is very likely that most or
all humans will in short order be unemployable, with few if any
marketable skills;

3) It is quite possible that AGIs will consider humans irrelevant at
best, and at worst a waste of material resources; they might well
decide we are not worth keeping around.

- samantha