[extropy-chat] RE: Singularitarian versus singularity

Samantha Atkins sjatkins at mac.com
Thu Dec 22 06:37:44 UTC 2005


On Dec 21, 2005, at 10:08 PM, gts wrote:

> On Thu, 22 Dec 2005 00:42:30 -0500, Samantha Atkins  
> <sjatkins at mac.com> wrote:
>
>> No human is likely to understand the code after one much less  
>> several optimization passes.
>
> Yes, but how about an open-source module or class, one which is  
> called as the last step before any SAI acts or decides, and which  
> requires any decision to be consistent with a rule something like  
> "minimize human suffering"?

Do you believe any of us are wise enough to implement this rule in
all situations, no matter how complex?  No?  Then how will this
help?  In an even modestly complex decision scenario, human abilities
would be overwhelmed.  Remember, we need the AI to handle challenges
beyond our abilities.  How exactly can a challenge be beyond our own
abilities, yet humans be capable enough to second-guess the AI?
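The guard module gts proposes can be made concrete with a short sketch (all names here are hypothetical and purely illustrative; Python is used only for clarity). Notice that the veto logic itself is trivial to write and audit; the real difficulty, which is Samantha's objection, is that the input it checks, an estimate of the decision's effect on human suffering, is exactly the quantity no human-auditable module can compute for a genuinely complex decision.

```python
# Sketch of a "last step before any SAI acts" filter, per gts's
# suggestion: every candidate action must pass an open, auditable
# check against a rule like "minimize human suffering".
# All names are hypothetical.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    # Estimated change in human suffering if the action is taken;
    # negative means suffering is reduced. Producing this number
    # reliably is the unsolved part of the proposal.
    expected_suffering_delta: float


class GuardModule:
    """Final veto step: approve only actions consistent with the rule."""

    def approve(self, action: Action) -> bool:
        # The auditable rule: never increase expected human suffering.
        return action.expected_suffering_delta <= 0


guard = GuardModule()
print(guard.approve(Action("distribute vaccine", -10.0)))  # True
print(guard.approve(Action("start a war", 1000.0)))        # False
```

The sketch shows why publishing such a class in the public domain settles little: the few lines above are easy to verify, but the module is only as trustworthy as the suffering estimate fed into it, and that estimate comes from the very superintelligent reasoning humans cannot check.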

>
> The code for that class might be understood by humans of realistic  
> intelligence, and be public domain.

Re the above argument: I think this is fantasy.

- samantha
