[extropy-chat] Fwd: Extinctions

Anders Sandberg asa at nada.kth.se
Sat Jun 10 23:54:03 UTC 2006


Rafal Smigrodzki wrote:
> Still, I am curious, why would you see an irreversible loss of
> information, in the sense of losing a bug that won't happen again, as
> a loss of value. Does all complex information have value for you per
> se?

Yes.

[ The definition of complex is of course a problem, since obviously
neither classical information theory nor Kolmogorov complexity has exactly
the properties I would like (clearly we don't need more white noise in the
universe). Right now I'm getting optimistic about Giulio Tononi's
information integration theory of consciousness - even if the
consciousness part is wrong, the theory seems to suggest some interesting
directions to go in. Possibly my theory needs a concept of temporal
integration to really work. ]
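
[ A rough illustration of the problem (a minimal sketch, using zlib
compressibility as a crude stand-in for Kolmogorov complexity - not a
serious measure): the proxy tops out on exactly the stuff we care least
about.

import os
import zlib

def incompressibility(data: bytes) -> float:
    """Crude proxy for Kolmogorov complexity: compressed size divided by
    original size. Values near 1.0 mean essentially incompressible."""
    return len(zlib.compress(data, 9)) / len(data)

white_noise = os.urandom(100_000)      # pure randomness, nothing to preserve
repetition = b"tyrannosaurus " * 7143  # pure order, trivially regenerable

print(f"white noise: {incompressibility(white_noise):.2f}")  # ~1.00
print(f"repetition:  {incompressibility(repetition):.2f}")   # ~0.00

It rates noise as maximally complex and repetition as trivial; the
organisms, minds and ecosystems we actually value sit in between, which
is the region a measure like integrated information is trying to pick
out. ]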

> If I want to resurrect the T.Rex from a rotten bone, it's not because
> T.Rex is somehow important in and of itself, but rather because I find
> the notion of making one a stimulating exercise, the kind of genetic
> feat that I would like to fool around with once the important issues
> (i.e. curing aging and disease) are taken care of. The T.Rex would be
> a plaything for me, to be made or unmade as I see fit, and not my ward
> I would be morally bound to take care of.

I do think we have some responsibility for our creations, at least those
of intermediate complexity: complex enough to be morally relevant
entities, but not able to be independent persons. I would base these
responsibilities on reducing the risks of suffering and of loss of
complexity or developmental potential: the creations should not have to
suffer unduly, they should have the chance to develop their nature
(including, of course, an open-ended nature that allows them to change
that nature), and so on. Creating a special-purpose AI interested only in
accounting is OK as long as it is not so complex that it could conceivably
also become interested in other things; at that point we ought to let it
choose its own path instead.


>> And there is no need to keep a sharp line between them if
>> we ask ourselves what particular goals we (and other systems) are aiming
>> for rather than take an all-or-nothing approach saying human goals OK,
>> nonhuman goals not OK.
>
> ### Do bugs have goals? Do ecosystems have goals?
>
> I would ascribe goals only to sentient creatures, and as a libertarian
> I may not transgress against their property rights but this seems to
> be far removed from the question of preservation of non-sentient
> species.

To some extent I would say they have goals, but not in the usual human
sense. Clearly a bug has behavioral programs aimed at certain functions,
so we can quite reasonably speak of a bug's goal of reproduction. That it
is not very flexible and not subject to much rational thinking makes it a
less ethically relevant goal than what occurs in human minds, since human
goals are more highly contingent (humans can decide on nearly anything),
more complex, and amenable to rational change in the light of new
information. An ecosystem has even less in the way of goals. The closest
thing would be homeostatic feedback loops and a general "will to live"
common to all self-reproducing systems.

I agree that these kinds of goals and interests fall outside strict
libertarian ethics. Wiping out a jungle is not a breach of the jungle's
rights, but it can be seen as against its interests and as a morally bad
thing even if it is allowed.


-- 
Anders Sandberg,
Oxford Uehiro Centre for Practical Ethics
Philosophy Faculty of Oxford University




