[extropy-chat] Fwd: CNN features amazing user with autism

Jef Allbright jef at jefallbright.net
Mon Feb 26 00:42:16 UTC 2007


On 2/25/07, Stathis Papaioannou <stathisp at gmail.com> wrote:
> On 2/26/07, Jef Allbright <jef at jefallbright.net> wrote:
> > While it's fashionable and considered by some to be the height of
> > morality to argue that all preferences are equally valid, it is
> > morally indefensible.
>
> For a start, it's difficult to define morality in any sense come the
> singularity.

I would agree there's virtually zero likelihood of successfully
predicting where specific actions would fall on a scale of morality
post-singularity.  But I think we can speak with increasing confidence
about what principles will apply to decisions about the relative
morality of specific actions--even post-singularity.  It's all about
promoting (qualified) growth.


> If we live on a distributed computer network as near-gods we
> are physically invulnerable and psychologically invulnerable. Physically
> invulnerable short of a planet- or solar system- or galaxy-destroying event;

I don't think invulnerability is ever effectively achievable.
Increasingly complex systems tend to develop increasingly subtle
failure modes, and these will remain a concern.  Threats from outside,
both natural (a gamma-ray burster?) and competitive, will continue to
drive improvements in system design as long as there's an entity
present to care about its own future growth.  If you're referring to
maintaining a static representation of humanity, however, then the
problem becomes a non-issue, since there's no future context to
consider.  In that case you might as well write your data out to
Slawomir's diary and be done.


> psychologically invulnerable because if we don't like the way we feel, we
> can change it. If we suffer it will be because, perversely, we enjoy
> suffering.

I think that claim is conceptually incoherent, but I'll save that for
another discussion.


> It's not even like someone who is depressed and self-harms, or is
> addicted to drugs: they don't really have a choice, but if they could decide
> whether or not to be depressed as easily as they could decide between
> chocolate or vanilla ice-cream, that would be a different matter.

Such considerations are paradoxical unless one adopts a third-person,
systems-oriented POV.


> > It's even more irksome than teleological references to "evolutionary
> > goals and subgoals."
>
>
> Fair enough, evolution doesn't really "want" anything from its creatures.
> However, we do have drives, which boil down to optimising the pleasure/pain
> equation (broadly construed: the pleasure of sleeping in and not going to
> work is outweighed by the pain of explaining my laziness to people and
> running out of money, so I decide to go to work), even if these drives do
> not end up leading to "adaptive" behaviour.

Here's where you and I go around and around due to fundamentally
different models.  To me it makes much better sense to understand the
organism as acting in accord with its nature, with consciousness going
along for the ride.  You continue to keep consciousness in the main
loop.


> The problem is, although we can
> struggle against the drives, which means pushing the pain/pleasure equation
> in a certain direction, we can't arbitrarily and without any fuss just
> decide to change them. If we understood enough about our minds to transfer
> them to computers, and probably well before then, we could do this, and at
> that point the human species as we know it would end.

Yes, well, I agree with your statement within its intended context,
but I would consider humanity to be already significantly defined by
its culture, so you could say that original humanity has already
ended -- or, as I do, view the current phase as just one point along an
extended path of development.

- Jef
