[extropy-chat] consequentialism/deontologism discussion

Stathis Papaioannou stathisp at gmail.com
Sun Apr 29 09:25:21 UTC 2007


On 4/28/07, Jef Allbright <jef at jefallbright.net> wrote:

> > Each side will in the end be reduced to yelling at the other, "My
> > values are better than your values!". This is the case for any
> > argument where the premises cannot be agreed upon.
>
> I think the key point here is that you and I agree that values are
> subjective, and there is absolutely no basis for proving to an
> individual that their values are "wrong".  But -- we share a great
> deal of that tree, diverging only at the relatively outermost
> branches. To the extent that we identify with more of our branch of
> that tree, we will find increasing agreement on principles that
> promote our shared values (those that work) over increasing scope.
>
> If that is still too abstract, consider the Romulans and the Klingons.
> They share a common humanoid heritage but have diverged into quite
> separate cultures.  The Klingons have taken the way of the warrior to
> an extreme, while the Romulans have grown in the direction of stealth
> and deception.  Caricatures, sure, but they illustrate the point,
> which is that they hold deeper values in common.  They must care for
> their children, they value the pursuit of happiness (however they
> define happiness), they value the right to defend themselves, they
> value cooperation (to the extent that it promotes shared values), ...
> and of course I could go on and on.
>
> We could even apply this thinking to robotic machine intelligence
> vis-à-vis humans.  The intersecting branches would be a little further
> down, closer to the roots, but to the extent that these hypothetical
> robots had to interact within our physical world, under somewhat
> similar constraints, there would be some basis for empathy and
> cooperation -- in effect, moral agreement.


I guess my main focus in the previous post was a meta-ethical position. I
don't know why, but I am very taken with the idea that there is no
objective ethics out there in the world that can be demonstrated to be
true in the way empirical or logical facts can. Thus, it may be that all
evolved, or even artificial, life has some core values in common, but it
need not be so. If we came across Berserker machines intent on wiping out
all life they encountered, we could disagree with them and fight them
(after first trying to find some common value that might win them over to
our side, of course), but we would not necessarily be able to show them
that they had made an empirical or logical error, as we could if they
believed that the Moon was made of cheese or that 16 was a prime number.

Stathis Papaioannou