[ExI] Self improvement
Anders Sandberg
anders at aleph.se
Fri Apr 22 09:34:24 UTC 2011
Eugen Leitl wrote:
> On Thu, Apr 21, 2011 at 09:42:16AM +0100, Anders Sandberg wrote:
>
>> We
>> need a proper theory for this!
>>
>
> I am willing to bet good money that there is none. You can use
> it as a diagnostic: whenever the system starts doing something
> interesting your analytical approaches start breaking down.
>
Interesting approach. Essentially it boils down to the "true creativity
is truly unpredictable" view (David Deutsch seems to hold this too).
> As long as we continue to treat artificial intelligence as
> a scientific domain instead of "merely" engineering, we won't
> be making progress.
>
On the other hand, "mere" engineering doesn't lend itself well to
foresight. We get the science as a side effect when we try to understand
the results (that worked for thermodynamics and steam engines), but that
comes too late for understanding the dangers very well.
"Look! Based on the past data, I can now predict that this kind of AI
architecture can go FOOM if you run it for more than 48 hours!"
"Oh, it already did, 35 minutes ago. We left it on over the weekend."
"At least it won't paperclip us. That is easily proved from the structure
of the motivation module..."
"It replaced that one with a random number generator on Saturday for
some reason."
--
Anders Sandberg,
Future of Humanity Institute
James Martin 21st Century School
Philosophy Faculty
Oxford University