[ExI] Self improvement
Eugen Leitl
eugen at leitl.org
Sat Apr 23 17:54:02 UTC 2011
On Sat, Apr 23, 2011 at 02:25:20PM +0100, Anders Sandberg wrote:
> I prefer to approach the whole AI safety thing from a broader
> standpoint, checking the different possibilities. AI might emerge
> rapidly, or more slowly. In the first case it is more likely to be
> unitary since there will be just one system recursively selfimproving,
Remember why Crays looked the way they did? Memory latency. There is no
unitary anything in a relativistic universe; at best you get a
population of clones splitting state in order to deal with locally
differing input.
Hence, diversity.
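The latency point above is just light-speed arithmetic. A rough back-of-envelope sketch (the figures are mine, not from the original post): at a given clock rate, a signal can only cross so much machine per cycle, which is why the Cray was physically compact and why anything larger must tolerate stale remote state.

```python
# Back-of-envelope: maximum one-way signal distance in one clock period.
C = 299_792_458.0  # speed of light in vacuum, m/s (hard upper bound)

def reach_per_cycle(clock_hz: float) -> float:
    """Farthest a signal can travel (meters) within a single clock cycle."""
    return C / clock_hz

# At 1 GHz a signal covers at most ~30 cm per cycle; at 10 GHz, ~3 cm.
# Real on-chip/wire propagation is slower still, so the bound is generous.
for f_hz in (1e9, 1e10):
    print(f"{f_hz / 1e9:g} GHz: {reach_per_cycle(f_hz) * 100:.1f} cm")
```

Anything physically larger than that radius cannot be a single synchronous system; its parts necessarily diverge on local input.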
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE