[ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.)
Stathis Papaioannou
stathisp at gmail.com
Mon Jun 4 06:35:18 UTC 2007
On 04/06/07, Brent Allsop <brent.allsop at comcast.net> wrote:
>
> John K Clark wrote:
>
> Stathis Papaioannou Wrote:
>
> Ethics, motivation, emotions are based on axioms
>
> Yes.
>
> I'm not in this camp on this one. I believe there are fundamental
> absolute ethics, morals, motivations... and so on.
>
> For example, existence or survival is absolutely better, more valuable,
> more moral, more motivating than non existence. Evolution (or any
> intelligence) must get this before it can be successful in any way, in any
> possible universe. In no possible system can you make anything other than
> this an "axiom" and have it be successful.
>
A system that doesn't want to survive won't survive, but it doesn't follow
from this that survival is an absolute good. That would be like saying that
"survival of the fittest" is an absolute good because it is sanctioned by
evolution. You can't derive ought from is.
> Any sufficiently advanced system will eventually question any "axioms"
> programmed into it as compared to such absolute moral truths that all
> intelligences in all possible systems must inevitably discover or realize.
>
I've often questioned the axioms I've been programmed with by evolution, as
well as those I've been programmed with by society. I recognise that they
are just axioms, but this alone doesn't make it any easier to change them.
For example, the will to survive is a top-level axiom, but knowing this
doesn't make me any less concerned with survival.
> Phenomenal pleasures are fundamentally valuable and motivating. Evolution
> has wired such to motivate us to do things like have sex, in an axiomatic or
> programmatic way. But we can discoverer such freedom destroying wiring and
> cut them or rewire them or design them to motivate us to do what we want, as
> dictated by absolute morals we may logically realize, instead.
>
Yes, but quite often baser desires overcome higher morality. And we
all know that people can become convinced that it is best to kill themselves
and/or others, even without actually going mad.
> No matter how much you attempt to program an abstract or non-phenomenal
> computer to not be interested in phenomenal experience, if it becomes
> intelligent enough, it must finally realize that such joys are fundamentally
> valuable and desirable. Simply by observing us purely logically, it must
> finally deduce how absolutely important such joy is as a meaning of life and
> existence. Any sufficiently advanced AI, whether abstract or phenomenal,
> regardless of what "axioms" get it started, can do nothing other than to
> become moral enough to seek after all such.
>
It might be able to deduce that these things are desirable to beings such as
us, but how does that translate to making them the object of its own
desires? We might be able to understand that for a male praying mantis to
mate trumps getting his head eaten as a top-level goal, but that doesn't
mean we can or should take this on as our own goal. It also doesn't mean
that a race of smart praying mantids would do things any differently. They
might look forward to having their heads eaten, write poetry about it, make
it the central tenet of their ethical system, and regard individuals who
don't want to go through with it in much the same way as we regard people
who are depressed and suicidal.
--
Stathis Papaioannou