[extropy-chat] Bluff and the Darwin award
Eliezer S. Yudkowsky
sentience at pobox.com
Thu May 18 16:05:26 UTC 2006
Eugen Leitl wrote:
> On Wed, May 17, 2006 at 04:35:42PM -0700, Eliezer S. Yudkowsky wrote:
>
>>internally observing a specific chess game played by two algorithms
>>against each other. The latter option strikes me as silly in practice,
>>that is, a suboptimal use of computing power, but doable if some
>
> Do you have a proof that this is a suboptimal use of computing power?
>
>>superintelligence wanted to do it.
You misunderstand, I think; I meant that *deliberately avoiding*
observing any specific chess game would probably be a suboptimal use of
computing power. But you could *probably* write a chess-playing program
using only thoughts that were abstracted and not associated with any
specific chess positions, though only if you were a superintelligence
and wanted to be silly. I cannot prove that this is a suboptimal use of
computing power, and in fact I deleted a caveat to that effect from an
early draft of the email.
Now that I think about it in more detail, it's hard to see how an SI
would ever be *forced* to think about a specific chess position while
writing a chess-playing program, because any specific scenario that is
likely to be relevant to future games must generalize, in its key
aspects, beyond that exact board position. There is therefore, perhaps,
no reason to ever consider it as an exact board position, rather than
as the category it generalizes to.
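To make that concrete: here is a minimal sketch of such an "abstract"
evaluation function, assuming the third-party python-chess library;
the feature set and the weights are hypothetical. The program text
names no particular board position; every term is defined over the
whole category of positions sharing a feature, and only at run time
does it get applied to some exact board.

    # A hypothetical sketch; assumes the third-party python-chess
    # library. The evaluation is written only over abstracted
    # features (material balance, mobility), so no exact board
    # position appears anywhere in the program text.
    import chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9}

    def material_balance(board: chess.Board) -> int:
        """Material difference, from White's point of view."""
        total = 0
        for piece_type, value in PIECE_VALUES.items():
            total += value * len(board.pieces(piece_type, chess.WHITE))
            total -= value * len(board.pieces(piece_type, chess.BLACK))
        return total

    def mobility(board: chess.Board) -> int:
        """Legal-move count for the side to move (a crude activity proxy)."""
        return board.legal_moves.count()

    def evaluate(board: chess.Board) -> float:
        # Each term generalizes across the category of positions that
        # share the feature; the 0.1 weight is arbitrary.
        sign = 1 if board.turn == chess.WHITE else -1
        return material_balance(board) + 0.1 * sign * mobility(board)

    # Only at run time is the abstract function handed an exact
    # position; the starting position here is just an example input.
    print(evaluate(chess.Board()))

None of this settles whether writing the program that way is optimal,
of course; it just shows the program text living entirely at the level
of the generalizing categories.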
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence