[extropy-chat] The emergence of AI
Russell Wallace
russell.wallace at gmail.com
Sun Dec 5 02:34:03 UTC 2004
On Sat, 04 Dec 2004 19:03:51 -0500, Eliezer Yudkowsky
<sentience at pobox.com> wrote:
> As for seed AI, recursively self-improving AI, I am not aware of anyone who
> explicitly claims to *specialize* in that except me and Jurgen Schmidhuber.
> So far as I know, Jurgen Schmidhuber has not analyzed the seed AI
> trajectory problem.
Having read LOGI, I'm still curious how you expect to solve the
fundamental problem of knowing what is and isn't an improvement.
Suppose version 1 (V1) of the AI comes up with a version 2 (V2) that
it thinks will be smarter than V1. How does V1 know whether that's
actually true? Trial and error? Or are you hoping (as suggested by a
couple of remarks you made along the way) that it'll be able to use
formal proof?
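For concreteness, here's a minimal sketch of what the "trial and
error" option might look like: V1 only accepts a candidate V2 if it
scores strictly higher on some fixed benchmark suite. All the names
here (propose_successor, benchmark_suite, etc.) are hypothetical,
just to make the shape of the criterion explicit.

    from typing import Callable, List

    def self_improvement_step(
        current_version,                  # V1: the running system
        propose_successor: Callable,      # V1's mechanism for designing V2
        benchmark_suite: List[Callable],  # tasks that return a numeric score
    ):
        # V2, as designed by V1
        candidate = propose_successor(current_version)

        def score(version):
            # Average performance over the suite; assumes each task can be
            # scored independently, which is itself a strong assumption.
            return sum(task(version) for task in benchmark_suite) / len(benchmark_suite)

        # Accept V2 only if it measurably outperforms V1 on the suite.
        if score(candidate) > score(current_version):
            return candidate
        return current_version

Of course, this just pushes the question back a level: "smarter"
collapses into "scores higher on the benchmark", which is exactly the
problem I'm asking about.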
- Russell