[extropy-chat] hope you can comprehend

KAZ kazvorpal at yahoo.com
Sat May 20 17:58:15 UTC 2006


----- Original Message ----
From: Robert Bradbury <robert.bradbury at gmail.com>
To: ExI chat list <extropy-chat at lists.extropy.org>
Sent: Saturday, May 20, 2006 7:20:48 AM
Subject: Re: [extropy-chat] hope you can comprehend

I missed the last part of that post...

> 4. For example, donating the money to organizations that would limit efforts to 
> create advanced general AIs, which could in turn lead to "SkyNet" which could 
> in turn lead to the end of humanity as we know it.  
 
First, as noted already, "donating" rarely contributes to real advancement.
 
The for-profit guys continue to kick the asses of the non-profit researchers when it comes to knowledge advancement. The illusion that the latter are on the cutting edge comes mainly from their buying the for-profit sector's products and announcing that they're working on some blue-sky project based on them. Imagine if government and non-profit research had been responsible for the advancement of computing, instead of Intel, Motorola, Microsoft, Apple, et cetera. We would probably still have to beg for computer time from big central machines about as fast as a Pentium, with a clunkier interface and a poorer information system. Note the lack of a roman numeral after the word "Pentium".
 
Instead, the researchers buy modern multi-processor AMD/Intel boxes. They build supercomputers out of PC technology... and then base their new projects on that.
 
Of course, one reason is that there's no effective means of measuring how productive non-profits, government agencies, and other extra-market research efforts are. All market-based activity is part of a complex system that measures the value of each of its components to society as a whole. So even though the market is apparently far less directed and "rational", it constantly outproduces even the most careful, advancement-seeking effort that lacks any such objective quality control.
 
> Selecting cryonic suspension involves a rather questionable assumption that the future 
> will be "better".  Who would want to be reanimated if the future sucked?  (Makes 
> one wonder what fraction of cryonic reanimation specifications include "only bring me back *if*" clauses.) 

I still don't see why creating AIs who see our intellects as positively amoebic would even make the end of humanity plausible, much less probable. Their ability to organize and utilize resources would probably be so advanced that they would not see us as any kind of liability or threat. Again I note that pretty much nobody minds the guys living in shacks in Montana, all inefficient and primitive, nor do we worry about killing bacteria out in the wilds of the Amazon because they're using resources in ways different from those we find useful.
 
--
Words of the Sentient:
The young always have the same problem -- how to rebel and conform at the same
time. They have now solved this by defying their parents and copying one
another.         --Quentin Crisp
E-Mail: KazVorpal at yahoo.com
Yahoo Messenger/AIM/AOL: KazVorpal
MSN Messenger: KazVorpal at yahoo.com
ICQ: 1912557
http://360.yahoo.com/kazvorpal
 