[extropy-chat] A crushing defense of objective ethics. Universal Volition and 'Ought' from 'is'.

The Avantguardian avantguardian2020 at yahoo.com
Thu May 5 08:55:55 UTC 2005


--- Marc Geddes <marc_geddes at yahoo.co.nz> wrote:

> Let's briefly review the reasoning chain:
> 
> Axioms:
> (1)  Meaning comes from minds
> (2)  The ultimate fate of the universe is
> indeterminate

> Deductions:
> (3)  You can't even begin to ask 'why?' without
> presupposing that meaning is important.  Therefore
> all
> ethical systems presuppose meaning as a value.
> 
> (4)  By (1) all meaning comes from minds and by (3)
> all ethics presupposes meaning, it follows that the
> net continued existence of sentient minds is a
> universal good, since a net reduction of sentients
> will result in a net deletion of minds and therefore
> a net deletion of meaning.

Your argument is GREAT up to this point.

> (5)  By (2) the ultimate fate of the universe is
> indeterminate and could possibly be influenced by
> sentient actions.  But if the universe ends, all
> sentient life ends, which by (4) was established as
> an evil.  Since sentient actions could possibly
> influence the ultimate fate of the universe, it
> would be a universal evil for sentients not to try
> to influence the universe in order to increase the
> probability of its continued existence. 

There is a problem here. The indeterminate fate of the
universe cannot simply be analyzed. That is to say, no
sentient mind below godlike intelligence could
predict, even empirically, what impact its actions
would have on the survival time of the universe as a
whole. An immortal AI could perform an action and wait
around until the universe ended to see whether it
ended earlier or later than theory would predict, but
what use would that be?  

> We have deduced a *universal* ethical principle for
> all sentient minds.  A universal volition, if you
> like.  It is:
> 
> Universal Volition:
> 
> 'Take the actions required to increase the
> probability
> of the continued existence of the universe'

No. You could state it as "do not destroy the
universe," but stating it the way you do leads to
conflicts with deduction (4). For example, an AI might
decide that all those sentient minds running around
are dumping too much entropy into the universe and
hastening its demise. It might then take the following
course of action without violating your principle:

1. Download all meaning from all sentient minds.

2. Destroy all sentient minds except oneself.

3. Shut down all non-vital functions to minimize
entropy output.

Thus, the total net meaning of the universe is
conserved AND the universe lasts a lot longer.
     I think you should have kept your principles
centered on meaning. Starting from your axioms and
using your deductions, the "Universal Volition",
"Golden Rule", and "Laws of Robotics" should be:

1. Maximize total meaning.
2. Maximize duration of total meaning.
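
These two principles can be illustrated with a toy sketch. Everything here is hypothetical: the `total_meaning` numbers are invented stand-ins for a quantity nobody knows how to compute, and the only point is that, once such a measure existed, ranking candidate futures by total meaning first and by its duration second would be straightforward:

```python
# Hypothetical sketch only: ranks candidate futures by the two
# proposed principles. The "total_meaning" values are invented
# stand-ins; defining a real meaning measure is the unsolved part.

def rank_key(outcome):
    """Principle 1 (total meaning) dominates; principle 2
    (duration of that meaning) only breaks ties."""
    return (outcome["total_meaning"], outcome["duration"])

# Two made-up futures an AI might compare:
preserve_minds = {"total_meaning": 120, "duration": 1e10}
archive_minds  = {"total_meaning": 100, "duration": 1e12}

best = max([preserve_minds, archive_minds], key=rank_key)
print(best is preserve_minds)  # True: more meaning wins despite shorter duration
```

The tuple ordering in `rank_key` is what encodes the priority of principle 1 over principle 2.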

      By your own arguments above, actions that
promote the survival of the AI, its creators, and
essentially all sentient beings are implicit in these
statements, as is the preservation of literature and
works of art. Moreover, insofar as any sentient mind
can predict how its actions will influence the
survival of the universe as a whole, maximizing the
probability of the universe's continued existence is
implicit as well. This is regardless of whether the
universe itself is sentient or not. 
      Too bad there does not seem to be a tenable way
to define a "meaning" function in computer code.
Anyway, good post. It made me think. :)

 



The Avantguardian 
is 
Stuart LaForge
alt email: stuart"AT"ucla.edu

"The surest sign of intelligent life in the universe is that they haven't attempted to contact us." 
-Bill Watterson


		


