[ExI] Self improvement

Keith Henson hkeithhenson at gmail.com
Fri Apr 22 15:34:47 UTC 2011


On Fri, Apr 22, 2011 at 2:34 AM, Anders Sandberg <anders at aleph.se> wrote:
> Eugen Leitl wrote:
>>
>> On Thu, Apr 21, 2011 at 09:42:16AM +0100, Anders Sandberg wrote:
>>
>>>
>>> We need a proper theory for this!
>>
>> I am willing to bet good money that there is none. You can use
>> it as a diagnostic: whenever the system starts doing something
>> interesting your analytical approaches start breaking down.
>>
>
> Interesting approach. Essentially it boils down to the "true creativity is
> truly unpredictable" view (David Deutsch seems to hold this too).
>
>
>> As long as we continue to treat artificial intelligence as
>> a scientific domain instead of "merely" engineering, we won't be making
>> progress.
>>
>
> On the other hand "mere" engineering doesn't lend itself well to foresight.
> We get the science as a side effect when we try to understand the results
> (worked for thermodynamics and steam engines) but that is too late for
> understanding the dangers very well.

> "Look! Based on the past data, I can now predict that this kind of AI
> architecture can go FOOM if you run it for more than 48 hours!"
> "Oh, it already did, 35 minutes ago. We left it on over the weekend."
> "At least it won't paperclip us. That is easily proved from the structure of
> the motivation module..."
> "It replaced that one with a random number generator on Saturday for some
> reason."

There is something to be said for keeping the motivation module in ROM or fixed hardware.

I suspect, for speed-of-light reasons, that there will be many AIs
rather than one.
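The speed-of-light point can be made concrete with a back-of-envelope calculation. A sketch (the distances are illustrative, not from the original post): even at light speed, signals crossing planetary distances take tens of milliseconds, so a single tightly coupled mind spanning the globe would think far slower than many local ones.

```python
# Back-of-envelope: one-way signal latency at the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def light_delay_s(distance_m: float) -> float:
    """One-way delay in seconds; an optimistic lower bound on communication latency."""
    return distance_m / C

for label, d in [
    ("across a CPU die (~3 cm)", 0.03),
    ("across a data center (~100 m)", 100.0),
    ("antipodal over Earth's surface (~20,000 km)", 2.0e7),
]:
    print(f"{label}: {light_delay_s(d) * 1e3:.6f} ms")
```

The antipodal case comes out to roughly 67 ms one way, millions of times slower than on-die communication, which is the usual argument that widely separated AI hardware would behave as separate agents.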

If there are, I sure hope they will be friendly toward each other.  It
would be just awful to be caught in a war waged by AIs.

Keith

