[ExI] Humanity+ as self guided evolution

Stefan Pernar stefan.pernar at gmail.com
Tue Jan 13 10:23:03 UTC 2009


On Tue, Jan 13, 2009 at 2:45 PM, samantha <sjatkins at mac.com> wrote:

> Stefan Pernar wrote:
>
>> Look at it this way: natural selection determines what can exist. Once you
>> make this your objective in the sense of ensuring continued existence and
>> include others by modifying the objective to ensure continued co-existence
>> so it does not lead to contradictions with Kant's categorical imperative you
>> get the basis for a truly beautiful concept.
>>
>
> Unfortunately I don't think well enough of Kant for this to add to your
> position. I will put to you questions I put to myself quite often.
>

Schopenhauer had a critique of the categorical imperative that I suspect you
would agree with. In essence, he was not thrilled about the rational aspect of
the CI and argued that compassion is the cause of most selflessness/morality,
and rightly so. The evolutionary perspective reconciles the two quite nicely,
actually (more below).



> For in doing so you effectively equate the self with the other.
>

> While I have had similar thoughts/ideas/intuitions, that is not enough. You
> need to show the clear irresistible good of this for all intelligent beings
> involved, even those of quite different levels of intelligence.
>

Totally agree, and I do so in my paper on practical benevolence, which I am
currently expanding into a book-sized work. You can find it at
http://rationalmorality.info/wp-content/uploads/2008/07/Practical-Benevolence-2008-07-15.pdf


> This will oblige you to love the other just like you love yourself.
>

> What for precisely?  What if that other is not remotely my equal in say
> intelligence as would for instance be the case between an unaugmented human
> and an advanced AGI?  Why would the AGI equate the human with itself and
> care as much for this empirically inferior intelligent being as it does for
> itself?  What would it gain by this lovely philosophy? What if the human
> cannot substantially add its being or help it achieve its values at all?  In
> that case why should it care for the human as it does itself?
>

That becomes a design issue for AIs. In my thinking, creating a single AI as a
deus ex machina would be very dangerous. In line with my philosophy, I prefer
an evolutionary approach that I briefly explain at
http://rationalmorality.info/wiki/index.php?title=Guido_Borner_Project

> You will need to be fair because you want to be fair to yourself.
>

> What is fair?  Is it fair to myself to forego my own growth and

 [snip]

Fairness is explained as the rational compromise I describe under section 2.5,
Respect for Others, in my paper.


>
>
>  Altruism becomes egoism with the two concepts becoming meaningless when
>> following this principle for giving yourself up for others becomes the same
>> as giving yourself up for yourself and vice versa.
>>
>
> There is no easily discerned meaning in "giving yourself up for yourself".
>  There is only value and whether you go toward value or away from it.  If
> you give up greater values for lesser ones that is clearly a net loss.
>

See section 2.3, Resolving Moral Paradoxes.


> The reason why compassion evolved as the central theme of all world
> religions during the time of the axial age (see Karen Armstrong's book The
> Great Transformation) is because of its evolutionarily advantageous
> properties.
>

> Compassion is one of many things that came about as evolutionarily
> advantageous.  Understanding and empathy and compassion enough for
> cooperation among roughly equal range intelligent beings is obviously a very
> helpful and good thing for all concerned.  But it should not be hastily
> reified into the end-all and be-all good you seem to be pushing it as.  It is
> not so clear and indeed did not evolve that we have this compassion for
> those beings radically less intelligent than ourselves (or even some that
> are much closer).  Thus it is not clear that this generalizes to a universe
> of radically disparate intelligences.
>

Seeing us as a transitional form that will continue to evolve, I see no
reason why we cannot expand our circle of compassion further. My first
upgrade might very well be a combination of compassion 2.3 and empathy 3.4 -
metaphorically speaking - it won't be as straightforward as that, of course.


>
>
>  Why then shouldn't we make use of this and follow evolutionary concepts in
>> guiding our self modification? What other rational alternatives are there?
>>
>>
> Rationally we have to avoid selection bias and cherry picking that which we
> find most intuitively appealing.
>

Absolutely, and that is what I am trying to avoid by creating a rational
philosophy of morality - precisely in order to be rational about what is good.

-- 
Stefan Pernar
Skype: Stefan.Pernar

