[extropy-chat] Eugen Leitl on AI design
Eliezer Yudkowsky
sentience at pobox.com
Wed Jun 2 14:46:55 UTC 2004
Eugen Leitl wrote:
> On Wed, Jun 02, 2004 at 09:40:07AM -0400, Eliezer Yudkowsky wrote:
>
>>>Many orders of magnitude more performance is a poor man's substitute for
>>>cleverness, by doing a rather thorough sampling of a lucky search space.
>>
>>Right. But it automatically kills you. Worse, you have to be clever to
>
> Quite possible. Which is why I'm saying trying to build a human grade
> AI is probably not a very good idea.
Nnnooo... what follows is that sampling a lucky search space using brute
force is a poor idea. Incidentally, if you think this is a poor idea, can
I ask you once again why you are giving the world your kindly advice on how
to do it? (Maybe you're deliberately handing out flawed advice?)
>>realize this. This represents an urgent problem for the human species, but
>>at least I am not personally walking directly into the whirling razor
>>blades, now that I know better.
>
> You're still trying to build an AI, though.
Only white hat AI is strong enough to defend humanity from black hat AI, so
yes.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence