[extropy-chat] Eliezer Yudkowsky on AI design
Eugen Leitl
eugen at leitl.org
Wed Jun 2 15:06:31 UTC 2004
On Wed, Jun 02, 2004 at 10:46:55AM -0400, Eliezer Yudkowsky wrote:
> Nnnooo... what follows is that sampling a lucky search space using brute
> force is a poor idea.
I think any rapid enough positive-autofeedback process is a poor idea, since
it is uncontainable (due to undecidability), but this conversation deja-vus
all over the place. We're not actually arguing, just going through the usual
motions.
> Incidentally, if you think this is a poor idea, can I ask you once again
> why you are giving the world your kindly advice on how to do it? (Maybe
> you're deliberately handing out flawed advice?)
Maybe. Maybe I'm just pointing out some dangerous recipes (metarecipes,
actually), so that we can think about how to prevent them.
> >You're still trying to build an AI, though.
>
> Only white hat AI is strong enough to defend humanity from black hat AI, so
> yes.
If you want to stick to security metaphors, fighting a worm with a
counterworm is a classic textbook Bad Idea. A better approach would be to
build a worm-proof environment.
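In code terms, one reading of "worm-proof environment": confine untrusted
code with externally enforced limits, instead of deploying a counter-agent
in the same medium. A minimal Unix sketch using Python's standard resource
and subprocess modules; the specific limits and the untrusted.py filename
are illustrative assumptions.

    import resource
    import subprocess

    def confine() -> None:
        # Hard caps applied in the child before exec: CPU seconds,
        # address space, and process count. The zero NPROC limit blocks
        # fork bombs and self-replication -- the worm's propagation step.
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
        resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))

    # Containment is a property of the environment, not of any
    # counter-agent: the kernel enforces the limits no matter what
    # the payload tries to do.
    subprocess.run(
        ["python3", "untrusted.py"],  # illustrative payload
        preexec_fn=confine,           # apply limits in the child process
        timeout=10,
    )

The design point: the defense lives outside the attacker's reach, so it does
not have to out-evolve the attack.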
--
Eugen* Leitl <http://leitl.org>
______________________________________________________________
ICBM: 48.07078, 11.61144 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net