[extropy-chat] Fwd: Extinctions

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Sun Jun 11 02:39:54 UTC 2006


On 6/10/06, Anders Sandberg <asa at nada.kth.se> wrote:
> Rafal Smigrodzki wrote:
> > Still, I am curious, why would you see an irreversible loss of
> > information, in the sense of losing a bug that won't happen again, as
> > a loss of value. Does all complex information have value for you per
> > se?
>
> Yes.
>
> [ The definition of complex is of course a problem, since obviously
> neither classic information theory nor Kolmogorov complexity has exactly
> the properties I would like (clearly we don't need more white noise in the
> universe). Right now I'm getting optimistic about Giulio Tononi's
> information integration theory of consciousness - even if the
> consciousness part is wrong, the theory seems to suggest some interesting
> directions to go in. Possibly my theory needs a concept of temporal
> integration to really work. ]
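
[ To make the white-noise point concrete: compressed size is a crude,
computable stand-in for Kolmogorov complexity, and by that yardstick
random bytes score as maximally "complex" while highly repetitive text
scores very low. A minimal Python sketch of the comparison - the zlib
proxy and the sample strings are illustrative assumptions only, not
anything from the thread: ]

    import os
    import zlib

    def compression_ratio(data: bytes) -> float:
        # Compressed size / original size: a rough upper-bound proxy for
        # Kolmogorov complexity (lower ratio = more exploitable structure).
        return len(zlib.compress(data, 9)) / len(data)

    structured = b"the quick brown fox jumps over the lazy dog " * 200
    noise = os.urandom(len(structured))

    # Structured text compresses to a tiny fraction of its size;
    # white noise stays near ratio 1.0, i.e. "maximally complex" by this measure.
    print("structured text:", round(compression_ratio(structured), 3))
    print("white noise:    ", round(compression_ratio(noise), 3))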

### Amazing. Would a conversion of everything to computronium
pondering the deepest mathematical truths count as increasing the
complexity of matter?

If so, how would the trivial existence of the Eastern dappled titmouse
fare in comparison? (if such a bird were ever to be discovered)

How would you resolve the conflict between uses of matter that differ
in their level of complexity? Does the less complex one have to yield?

-------------------------------------
> I do think we have a bit of responsibility for our creations, at least
> those of intermediate complexity, such that they can be morally relevant
> entities but are not able to be independent persons. I would base these
> responsibilities on reducing the risks of suffering and loss of complexity
> or developmental potential: the creations should not have to suffer
> unduly, they should have the chance to develop their nature (including, of
> course, an open-ended nature that allows changing their nature) and so on.
> Creating a special-purpose AI only interested in accounting is OK as long
> as it is not so complex that it could conceivably also become interested in
> other things; at that point we ought to let it choose its own path
> instead.

### I would not mind the T. rex being reduced in complexity by being
shot, stuffed, and hung on the living room wall, although I would find
subjecting him to unreasonable suffering wrong (which is why I would
suggest using high-explosive .50-cal rounds for the hunt).

Now, to pay for all that, I'd have no qualms about building an AI that
would find the fulfillment of its existence in balancing my checkbook. I
am rather confused about what you mean here: what counts as "could
conceivably also become interested in other things"?

It would be anthropomorphising to demand rights for computational
devices simply because they are intelligent - I see *desire*, not
intelligence, as the basis for conferring rights (necessary, though not
by itself a sufficient criterion).
--------------------------------
>
> I agree that these kinds of goals and interests are outside strict
> libertarian ethics. Wiping out a jungle is not a breach of the jungle's
> rights, but can be seen as against its interests and a morally bad thing
> even if it is allowed.

### So you say that building a house in a forest is a morally bad
thing? As in, not merely grating against your own personal affection for
the jungle, but in and of itself a Bad Thing, worthy of opprobrium and
sanctions? Because, you know, a moral injunction without sanctions is
just empty talk; a morality is only as serious as the weight of force
seen as justified in upholding it. So I can't resist asking: how much
violence would you condone to prevent the Brazilians from breaching
their jungle's interests in order to build their huts?

Rafal

PS. Sorry for the Socratic cadence of my post.


