[ExI] How could you ever support an AGI?
Henrique Moraes Machado
cetico.iconoclasta at gmail.com
Wed Mar 5 12:47:31 UTC 2008
John Grigg>You are badly anthropomorphizing the AGI. It will most likely
>not have the same biological drives/wiring that you and I have.
>Where is Eliezer Yudkowsky when we need him? lol I think the "whole
>gradual coming into being of AGI combined with the integration of us
>into it" is actually the very unlikely scenario. Purely AGI development
>will definitely progress faster than the machine/biological interfaces
>that you imagine.
We humans tend to anthropomorphize (that's a big word!) everything. We do
it with our cars, with our pets, and with our computers (c'mon, tell me that
you don't think your computer has feelings sometimes... :-)). Anyway, an AI
programmed by humans would almost certainly resemble a human, since the only
reference we have is ourselves.
John>Upgrading animals would be a very cool thing, indeed.
I'd really, really love to see that happen. Is anybody working on it?
John>I would say this is a very big "if." But some say AGI would only have
the motivations which we program into them.
At first, at least. But a self-improving AI could change that.