[ExI] Limiting factors of intelligence explosion speeds
Eugen Leitl
eugen at leitl.org
Fri Jan 21 10:54:17 UTC 2011
On Thu, Jan 20, 2011 at 09:21:26PM -0500, Richard Loosemore wrote:
> Eugen's comment -- "Run a dog for a gigayear, still no general
> relativity" -- was content-free from the outset.
Perhaps the lack of content is in the observer. Consider upgrading
your parser.
> Whoever talked about building a dog-level AGI?
Canines have limits; human primates have limits. Run a dog for a
gigayear and you still get no general relativity; there is no
guarantee that human-level minds don't face analogous ceilings of
their own.
> If a community of human-level AGIs were available today, and were able
> to do a thousand years of research in one year, that would advance our
> level of knowledge by a thousand years, between now and January 20th
> next year.
The most interesting kind of progress is the kind that pushes past
your limits. Gods make faster and qualitatively different progress
than humans do.
> The whole point of having an intelligence explosion is to make that rate
> of progress possible.
The whole point of an intelligence explosion is a positive feedback
loop in self-enhancement: each round of improvement speeds up the
next.
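As a toy sketch only (every constant below is made up, and the
functional form is just the usual dI/dt = c*I^a caricature of
recursive self-improvement, not anyone's actual model): a < 1 gives
polynomial growth, a = 1 ordinary exponential growth, and a > 1
diverges in finite time. Only the shape of the feedback matters.

# Toy model of a self-enhancement feedback loop (illustrative only;
# the constants c, a, dt and the cap are arbitrary assumptions).
def simulate(a, c=0.1, I=1.0, dt=0.01, t_max=50.0):
    t = 0.0
    while t < t_max and I < 1e12:
        I += c * (I ** a) * dt  # capability feeds its own growth rate
        t += dt
    return t, I

for a in (0.5, 1.0, 1.5):
    t, I = simulate(a)
    print(f"a={a}: I={I:.3g} at t={t:.1f}")

With a = 1.5 the loop hits the cap long before t_max; that
finite-time divergence is what "explosion" means here.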
> What has that got to do with running a dog simulation for a billion years?
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE