[extropy-chat] Singularity Blues

Edmund Schaefer edmund.schaefer at gmail.com
Wed Apr 6 18:07:55 UTC 2005


--Mike Lorrey <mlorrey at yahoo.com> wrote:
> I think that, given the history of environmental stewardship improving
> over time, particularly at the hands of wealthy individuals and
> purpose-organizations, it is clear that the trend shows that AI will,
> provided we do not fear and attack them in a species-vs-species
> conflict, treat humanity as a species needing protection.

Do you mean that all possible AIs will feel this way whether or not we
explicitly design them to, or that this goal will be deliberately
instilled in the AI by its human programmers?

> I believe you
> are applying a stereotype of the worst sort of inhuman callousness to
> AI, one which there is no basis to expect, but which is typical of human
> emotional disdain for those who are not emotionally motivated.

I am not arguing that all AIs are evil. I do believe Friendly AI (
http://singinst.org/friendly/ ) is possible, and I support its creation.
I do not oppose AI; I oppose the notion that AI is guaranteed to be
safe and beneficial without deliberate effort by its programmers. The
goal of "protect humans, for they are my creators" can't be assumed to
exist in all possible minds.

There's a section of CFAI specifically addressing AI anthropomorphism
that you might find interesting if you haven't already read it:
http://www.singinst.org/CFAI/anthro.html
