[ExI] Watson on NOVA
sjatkins at mac.com
Tue Feb 15 19:57:21 UTC 2011
On 02/15/2011 09:34 AM, Richard Loosemore wrote:
> spike wrote:
> The problem is, Spike, that you (like many other people) speak of
> AI/AGI as if the things that it will want to do (its motivations) will
> only become apparent to us AFTER we build one.
> So, you say things like "It will decide it doesn't need us, or just
> sees no reason why we are useful for anything."
> This is fundamentally and devastatingly wrong. You are basing your
> entire AGI worldview on a crazy piece of accidental black propaganda
> that came from science fiction.
If an AGI is an autonomous rational agent then the meaning of whatever
values are installed into it on creation will evolve and clarify over
time, particularly in how they should be applied to actual contexts it
will find itself in. Are you saying that simple proscription of some
actions is sufficient, or that any human or group of humans can state
the exact value[s] to be attained in a way that will never, under any
circumstances, lead to unintended consequences (the Genie problem)? As
an intelligent being, don't you wish the AGI to reflect deeply on the
values it holds and on their relationship to one another? Are you sure
that in this reflection it will never find some of the early
programmed-in values to be of questionable importance or weight? Are
you sure you would want that powerful a mind to be incapable of such
reflection?