[ExI] Why care about AI friendliness? (was Re: singularity summit on foxnews)
stefano.vaj at gmail.com
Fri Sep 14 20:27:09 UTC 2007
On 9/14/07, Mike Dougherty <msd001 at gmail.com> wrote:
> If you really do feel that you will be dead before AI is born, then
> you probably shouldn't worry about it. Even if that's true, should
> you expect your children to care?
This is one of the interesting points. Does one mean "your children"
in the literal sense? Or perhaps children who happen to be as
genetically close as possible (say, of your tribe, race or country)?
Or future generations of humankind irrespective of any direct,
albeit vague, genetic connection? And what about "children of the
mind", as AIs might not unreasonably be described?
> If your reference to _physical_
> death implies some kind of uploaded transcendence of your physical
> body, then you have even more to consider of AI - since it will
> probably take some serious intelligence to run your software in a
> machine. Most likely if you can run on a machine there is already a
> non-human agent facilitating your experiences. You would need to be
> emulated, the AI 'hypervisor' will be running natively.
That is, as long as you care to emulate features that used to belong
to your biological self... which in turn may be necessary even for
purely artificial AIs, if we are to consider them "friendly" or
"unfriendly" in any more meaningful sense than the very complex
terrestrial climate system might be. Otherwise, the distinction between
uploaded or emulated humans and purely artificial AIs gets blurred.