[extropy-chat] Re: Structure of AI
Eliezer Yudkowsky
sentience at pobox.com
Mon Nov 22 03:31:31 UTC 2004
J. Andrew Rogers wrote:
>
> 1.) There is nothing in "intelligence" that has a time dimension in the
> theoretical. In any finite context, there is no "intelligence per unit
> time" that reflects on the intrinsic intelligence of the system being
> measured. For any time-bounded intelligence metric you can think of,
> there is a "fast and stupid" machine that will appear more intelligent
> than a "slow and smart" machine, for the purposes of black box
> comparison. Of course, the point of AI is to come up with an algorithm
> that will be smart in any context, not to game intelligence metrics.
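The quoted "fast and stupid beats slow and smart under a time budget" claim can be sketched in a few lines. This is a hypothetical illustration, not from the original post: all names and numbers are invented, and the metric is simply "total answer quality accumulated within a fixed wall-clock budget."

```python
# Hypothetical illustration: under a time-bounded black-box metric,
# a fast machine with low-quality answers can outscore a slow machine
# with high-quality answers, even though each individual answer of the
# slow machine is better.

def timed_score(quality_per_answer, seconds_per_answer, budget):
    """Total quality achievable within the time budget."""
    answers_completed = int(budget // seconds_per_answer)
    return answers_completed * quality_per_answer

# "Fast and stupid": quality 1 per answer, 1 second each.
fast_stupid = timed_score(quality_per_answer=1, seconds_per_answer=1, budget=10)
# "Slow and smart": quality 10 per answer, 20 seconds each.
slow_smart = timed_score(quality_per_answer=10, seconds_per_answer=20, budget=10)

print(fast_stupid, slow_smart)  # the fast machine scores 10, the smart one 0
```

With a budget of 10 seconds the smart machine never finishes a single answer, so any metric that only looks at output within the budget ranks the stupid machine higher.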
Nitpicks:
1) The point of AI is to come up with an algorithm that will be smart in
any of the tiny set of contexts that represent low-entropy universes. We
may make this assumption because a maxentropy universe could not contain an
AI. If we do not make this assumption we run into no-free-lunch theorems.
It may not sound practically important (how many maxentropy universes did
we plan to run into, anyway?) but from a theoretical standpoint this is one
hell of a huge nitpick: The real universe is an atypical special case.
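The no-free-lunch point can be verified exhaustively on a toy domain. This is a minimal sketch of my own, not something in the original exchange: over *all* functions from a tiny domain to {0, 1} (a stand-in for a maxentropy universe, where every function is equally likely), any two fixed search orders find equally good values on average.

```python
# No-free-lunch in miniature: averaged over every possible objective
# function f: {0,1,2} -> {0,1}, two different search strategies perform
# identically. Only in a structured (low-entropy) world can one search
# order beat another.
from itertools import product

POINTS = (0, 1, 2)

def avg_best_after_k(order, k):
    """Average best value seen in the first k evaluations of `order`,
    taken over ALL 2**3 functions from POINTS to {0, 1}."""
    total = 0
    functions = list(product([0, 1], repeat=len(POINTS)))
    for values in functions:
        f = dict(zip(POINTS, values))
        total += max(f[x] for x in order[:k])
    return total / len(functions)

for k in (1, 2, 3):
    # Two arbitrary fixed search orders do equally well at every budget k.
    assert avg_best_after_k((0, 1, 2), k) == avg_best_after_k((2, 1, 0), k)
```

Our universe is the atypical case precisely because its regularities let a search algorithm exploit structure that a uniform average over all functions washes out.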
2) The goal of FAI is not the same as the point of AI. The point of AI,
if you implement it successfully, just stabs you. The optimization target
of an FAI is an unusual special case of optimization targets, with complex,
relevant properties.
We must devise an AI that is "smart" according to an unusually difficult
criterion, to operate in an unusually easy universe.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence