[ExI] Watson on NOVA
Richard Loosemore
rpwl at lightlink.com
Tue Feb 15 19:33:37 UTC 2011
spike wrote:
> Richard Loosemore wrote:
>> The problem is, Spike, that you (like many other people) speak of AI/AGI as
>> if the things that it will want to do (its motivations) will only become
>> apparent to us AFTER we build one...
>
> Rather I would say we can't be *completely sure* of its motivations until
> after it demonstrates them.
According to *which* theory of AGI motivation?
Armchair theorizing only, I am afraid. Guesswork.
> But more critically, AGI would be capable of programming, and so it could
> write its own software, so it could create its own AGI, more advanced than
> itself. If we have programmed into the first AGI the notion that it puts
> another species (humans) ahead of its own interests, then I can see it
> creating a next generation of mind-children, which it puts ahead of its own
> interests. It isn't clear to me that our mind-children would put our
> interests ahead of those of our mind-grandchildren, or that our
> mind-great-grandchildren would care about us, regardless of how we
> program our mind-children.
Everything in this paragraph depends on exactly what kind of mechanism
is driving the AGI, but since that is left unspecified, the conclusions
you reach are just guesswork.
In fact, the AGI would be designed to feel empathy *with* the human
species. It would feel itself to be one of us. According to your
logic, then, it would design its children and to do the same. That
leads to a revised conclusion (if we do nothing more than stick to the
simple logic here): the AGI and all its descendents will have the same,
stable, empathic motivations. Nowhere along the line will any of them
feel inclined to create something dangerous.
Richard Loosemore