[extropy-chat] Fwd: CNN features amazing user with autism
Stathis Papaioannou
stathisp at gmail.com
Mon Feb 26 08:58:14 UTC 2007
On 2/26/07, Jef Allbright <jef at jefallbright.net> wrote:
> On 2/25/07, Stathis Papaioannou <stathisp at gmail.com> wrote:
> > On 2/26/07, Jef Allbright <jef at jefallbright.net> wrote:
> > > While it's fashionable and considered by some to be the height of
> > > morality to argue that all preferences are equally valid, it is
> > > morally indefensible.
> >
> > For a start, it's difficult to define morality in any sense come the
> > singularity.
>
> I would agree there's virtually zero likelihood of successfully
> predicting where specific actions would fall on a scale of morality
> post-singularity. But I think we can speak with increasing confidence
> about what principles will apply to decisions about the relative
> morality of specific actions--even post-singularity. It's all about
> promoting (qualified) growth.
Post-singularity there would still be scope for moral behaviour towards the
non-singularity universe, but I think the concept of morality within the
singularity collapses. Conceivably one posthuman might try to delete or
alter every copy of another posthuman, but it is difficult to see where the
motivation to do this might come from when you have all the delights of
heaven at your disposal.
> > If we live on a distributed computer network as near-gods we
> > are physically invulnerable and psychologically invulnerable. Physically
> > invulnerable short of a planet- or solar system- or galaxy-destroying
> > event;
>
> I don't think invulnerability is ever effectively achievable.
> Increasingly complex systems tend to develop increasingly subtle
> failure modes, and these will remain a concern. Threats from outside,
> both natural (gamma ray burster?) and competitive will continue to
> drive improvements in system design as long as there's an entity
> present to care about its own future growth. If you're referring to
> maintaining a static representation of humanity, however, then the
> problems become a non-issue since there's no future context to
> consider. In that case you might as well write your data out to
> Slawomir's diary and be done.
I would have thought you could be very safe on a distributed network, far
safer than any single machine or organism could be. This is not the same as
true invulnerability, but the wider the network is distributed, the closer
it comes to this ideal.
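As a rough illustration of why wider distribution helps (and of its limits): if each node hosting a copy fails independently with some probability, the chance of losing every copy at once shrinks exponentially with the number of copies. A minimal sketch, assuming independent node failures, which correlated threats like a galaxy-destroying event would of course violate:

    # Illustrative sketch only: probability that every replica of a
    # distributed mind is lost, assuming each of n hosting nodes fails
    # independently with probability p.
    def p_total_loss(p: float, n: int) -> float:
        """Probability that all n independent replicas fail at once."""
        return p ** n

    for n in (1, 3, 10, 100):
        print(f"{n:3d} replicas -> total-loss probability {p_total_loss(0.1, n):.1e}")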
> > psychologically invulnerable because if we don't like the way we feel, we
> > can change it. If we suffer it will be because, perversely, we enjoy
> > suffering.
>
> I think that claim is conceptually incoherent, but will save that for
> another discussion.
>
>
> > It's not even like someone who is depressed and self-harms, or is
> > addicted to drugs: they don't really have a choice, but if they could decide
> > whether or not to be depressed as easily as they could decide between
> > chocolate or vanilla ice-cream, that would be a different matter.
>
> Such considerations are paradoxical unless one adopts a third-person,
> systems-oriented POV.
You mean that I should look at myself in the third person? I suppose you could describe it
that way, since if I modify my own mind I am both subject and object.
> > > It's even more irksome than teleological references to "evolutionary
> > > goals and subgoals."
> >
> >
> > Fair enough, evolution doesn't really "want" anything from its
> > creatures. However, we do have drives, which boil down to optimising
> > the pleasure/pain equation (broadly construed: the pleasure of sleeping
> > in and not going to work is outweighed by the pain of explaining my
> > laziness to people and running out of money, so I decide to go to
> > work), even if these drives do not end up leading to "adaptive"
> > behaviour.
>
> Here's where you and I go around and around due to fundamentally
> different models. To me it makes much better sense to understand the
> organism acting in accord with its nature, and consciousness going
> along for the ride. You continue to keep consciousness in the main
> loop.
Of course that's what happens: I can hardly decide to act differently from
what my brain tells me to do, nor can my brain act in any way other
than that dictated by physical laws. I know that both free will and its
partial absence - the feeling that I have less control over my behaviour
than I would like - are illusory, because whatever decision I make, I am
bound to make. Nevertheless, I would like to continue having positive
illusions: that I will remain conscious from moment to moment as myself,
that I have control over my actions, that I could have acted differently had
I wanted to. People watch films and convince themselves that the characters
on the screen are not only in continuous motion, but real people about whom
they care, so what's wrong with telling myself a few lies about my own mind?
> > The problem is, although we can struggle against the drives, which
> > means pushing the pain/pleasure equation in a certain direction, we
> > can't arbitrarily and without any fuss just decide to change them. If
> > we understood enough about our minds to transfer them to computers, and
> > probably well before then, we could do this, and at that point the
> > human species as we know it would end.
>
> Yes, well, I agree with your statement within its intended context,
> but I would consider humanity to be already significantly defined by
> its culture, and thus you could say that original humanity has already
> ended, or (as I do) view the current phase as just one point along an
> extended path of development.
OK, but I think the point where we are able to directly access and modify
the source code of our minds will be the most important change since the
development of language.
Stathis Papaioannou