[ExI] Hard Takeoff
The Avantguardian
avantguardian2020 at yahoo.com
Thu Nov 18 05:11:02 UTC 2010
>From: Michael Anissimov <michaelanissimov at gmail.com>
>To: ExI chat list <extropy-chat at lists.extropy.org>
>Sent: Sun, November 14, 2010 9:52:06 AM
>Subject: [ExI] Hard Takeoff
Michael Anissimov writes:
We have real, evidence-based arguments for an abrupt takeoff. One is that the
human speed and quality of thinking is not necessarily any sort of optimal
thing, thus we shouldn't be shocked if another intelligent species can easily
surpass us as we surpassed others. We deserve a real debate, not accusations of
monotheism.
------------------------------
I have some questions, perhaps naive, regarding the feasibility of the hard
takeoff scenario: Is self-improvement really possible for a computer program?
If this "improvement" is truly recursive, then that implies that it iterates a
function with the output of the function call being the input for the next
identical function call. So the result will simply be more of the same function.
And if the initial "intelligence function" is flawed, then every recursive
iteration of the function will carry the same flaw. So it would not really be
improving qualitatively; it would simply be increasing quantitatively. For
example, if I had two or even four identical brains, none of them might be able
to answer this question, although I might be able to do four other mental tasks
that I am already capable of, all at once.
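
To make that concrete, here is a rough toy sketch (Python, with names I just
made up, not anyone's real design) of "improvement" as bare iteration of one
fixed function; note that the built-in flaw survives every pass:

# "Improvement" as pure iteration of one fixed, flawed function.
def intelligence(state):
    capacity, blind_spots = state
    # Doubles capacity but never notices problems of kind "X" (the flaw).
    return (capacity * 2, blind_spots)

state = (1, {"X"})
for generation in range(4):
    state = intelligence(state)  # output of one call feeds the next call

print(state)  # (16, {'X'}) -- more of the same: bigger, but same blind spot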
On the other hand, if the seed AI is able to actually rewrite the code of its
intelligence function to non-recursively improve itself, how would it avoid
falling victim to the halting problem? If there is no way, even in principle, to
algorithmically determine beforehand whether a given program with a given input
will halt or not, would an AI risk getting stuck in an infinite loop by messing
with its own programming? The halting problem is only defined for Turing
machines, so a quantum computer may overcome it, but I am curious whether any SIAI
people have considered it in their analysis of hard versus soft takeoff.
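
To spell out that worry with another toy sketch (again Python, names invented
by me): there is no general pre-check the seed AI could run on a proposed
rewrite of itself to guarantee the rewrite halts.

def halts(program_source, program_input):
    # Hypothetical halting oracle -- provably impossible in general for
    # Turing-equivalent machines, so this can never actually be filled in.
    raise NotImplementedError("undecidable in general")

def self_rewrite(current_source):
    # The AI proposes a new version of itself (details elided).
    new_source = current_source + "\n# tweaked heuristic"
    # It would like to verify the rewrite terminates before running it...
    if halts(new_source, "some representative task"):
        return new_source
    return current_source  # ...but no such general check exists.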
Stuart LaForge
“To be normal is the ideal aim of the unsuccessful.” -Carl Jung