[ExI] Isn't Bostrom seriously bordering on the reactionary?

Giulio Prisco giulio at gmail.com
Tue Jun 14 14:42:40 UTC 2011


I am not really surprised.

2011/6/14 Stefano Vaj <stefano.vaj at gmail.com>:
> http://theeuropean-magazine.com/282-bostrom-nick/283-perfection-is-not-a-useful-concept
>
> "Perfection Is Not A Useful Concept"
> by Nick Bostrom — 13.06.2011
> Nick Bostrom directs the Future of Humanity Institute at Oxford
> University. He talked with Martin Eiermann about existential risks,
> genetic enhancements and the importance of ethical discourses about
> technological progress.
>
>
> The European: I want to start with a quote from your website. You have
> said: “When we are headed the wrong way, the last thing we need is
> progress.” Can we reason about the wrong way without taking concrete
> steps in that direction?
>
> Bostrom: That is a good question. Probably you have to take these
> steps. But they must be small and careful to give us more insight into
> where we should be going.
>
> The European: The idea of practical wisdom. We might need to make
> small mistakes to figure out that there is a better way…
>
> Bostrom: If we develop the ability to think more clearly and to
> understand the world better–which we have to do if we want to figure
> out what is right–then that understanding will also tend to increase
> the pace with which we move. And the better we understand
> technologies, the closer we will be to developing new technologies.
> That practical knowledge is an important part of innovation.
>
> The European: So the primary task is expanding the scope of what we
> think is achievable?
>
> Bostrom: I think that is one thing we need to do if we want to reason
> about the right approach to technological progress. Let me give you a
> concrete example: Let’s assume that we want to think about whether we
> should push for synthetic biology. There will be risks and there will
> be benefits as well. To make a better decision, we need to really
> understand the risks. We might say that there is the potential for
> misuse, for a new generation of biological weapons or other kinds of
> harmful applications. When we have a detailed understanding of the
> risks, we have already taken the first step towards pushing synthetic
> biology in a specific direction. So there is a trade-off: We want to
> be able to describe potential risks with detail and precision, but we
> also don’t want to go too far in a certain direction to gather that
> information because that would make the risks real.
>
> The European: What risks should a society tolerate, and what risks are
> either too high or too complex to live with?
>
> Bostrom: My focus has been on existential risks, which are at the far
> end of the severity spectrum. An existential risk is something that
> could either cause the extinction of intelligent life or the permanent
> destruction of the potential for future desirable development. It
> would be an end to the human story. Obviously, it is important to
> reduce existential risks as much as possible.
>
> The European: Where might those risks arise from?
>
> Bostrom: They could be risks that arise from nature–like asteroids or
> volcanic eruptions–or risks that arise from human activity. All the
> important risks fall into the latter category: they are anthropogenic.
> More specifically, the biggest ones will arise from future
> technological breakthroughs, such as advanced artificial intelligence
> or advanced forms of nanotechnology that could lead to new weapons
> systems. There also might be threats from biotechnology or from new
> forms of surveillance technology and mind control that might enable a
> system of global totalitarian rule. And there will also be risks that
> we haven’t yet thought of.
>
> The European: Are the ethical debates about technological change
> keeping pace with the development of new technologies? In other words,
> are we really thinking about potential risks and unintended
> consequences of progress?
>
> Bostrom: The ethical debates about some of these possibilities are
> just beginning. I introduced the concept of existential risk only in
> 2001. Technological progress, on the other hand, has been around for
> thousands of years. So we are very much starting from behind, but I
> hope we will catch up at a rapid pace. We have to think ethically
> about what we are doing as a species.
>
> The European: Apocalyptic thoughts have been around for thousands of
> years. Thus far, fortunately, they have always been proven wrong. What
> is different about today’s discussions of existential risks?
>
> Bostrom: Historically, the predictions have been groundless. They have
> not been based on science or careful analysis of particular
> technological prospects. During most of human history, we simply did
> not have the ability to destroy the human race, and we probably still
> don’t have that ability today. Even at the peak of the Cold War, a
> nuclear strike would probably not have resulted in human extinction.
> It would have caused massive damage, but it is likely that some groups
> would have survived. Past doomsday prophecies have often relied on
> religious beliefs.
>
> The European: Why are the anthropogenic risks suddenly increasing?
>
> Bostrom: Our long track record of survival–humans have been around for
> about 100,000 years–gives us some assurance that the natural risks
> have been rather small.
>
> If they have not ended human history until now, they are unlikely to
> have that effect in the near future. So the risks we should really
> worry about come from new developments. They introduce new factors
> with a lot of statistical uncertainty, and we cannot be confident that
> their risks are manageable. The potential of human action to do good
> and evil is larger than it has ever been before. We know that we can
> affect the global system. We can travel around the world in a matter
> of hours. We can affect the global climate. World wars have already
> happened. We can already foresee that new technologies might be
> developed in the coming century that would further expand our power
> over nature and over ourselves. We might even be able to change human
> nature itself.
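
[A quick back-of-the-envelope version of the track-record argument
above; the numbers are my own illustration, not from the interview.
Suppose natural extinction events arrive with a constant annual
probability p. Then

  P(survive T years) = (1 - p)^T ≈ exp(-p*T)

If p were 10^-4, surviving T = 100,000 years would have probability
about exp(-10) ≈ 0.00005. That we are still here is therefore strong
evidence that the annual natural risk is far smaller. New technologies
come with no such survival record, which is exactly Bostrom's point
about anthropogenic risk.]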
>
> The European: People also thought that traveling faster than thirty
> miles per hour would lead to insanity, or that nuclear
> explosions would set the atmosphere on fire, or that we might
> accidentally create black holes at particle accelerators.
>
> Bostrom: With trains, there was no discussion of existential risks. In
> the case of nuclear weapons, it was different. The atomic bomb was
> arguably the first human-made existential risk. And the probability of
> a doomsday scenario was considered significant enough that one of the
> scientists of the Manhattan Project did a study. It ultimately came
> to the conclusion that the atmosphere would not explode, and it was
> correct. As the potential for existential risks increases, we must be
> careful to examine the possible consequences of technological
> innovation.
>
> The European: You have already mentioned genetic enhancements. When we
> think about human potential, we often think about our cognitive
> abilities: Our capacity for rational thought is what distinguishes us
> from animals. Is that description incomplete?
>
> Bostrom: It is certainly not complete. But our cognitive abilities
> might be the most important difference between humans and animals;
> they have enabled our language, culture, science and technology, and
> complex social organization. A few differences in brain architecture
> have led to a situation where one species has increasing control over
> all other species on this planet.
>
> The European: When we talk about enhancements, we implicitly talk
> about the idea of perfection: We want to minimize the negative and
> maximize the positive aspects of human existence, to move closer to an
> optimal state. But who would define what constitutes such a state?
>
> Bostrom: I don’t think that perfection is a useful concept. There is
> not necessarily one best form of human existence; perfection might be
> different for different people. But the difficulty or impossibility of
> defining a perfect state should not make us blind to the fact that
> there are better and worse ways of living. It’s common sense that we
> prefer to be healthy rather than sick, for example. We also think that
> we ought to support our children’s development, intellectually and
> physically. We use education to expand our cognitive abilities. We try
> to stay fit and eat healthily to extend our lifespan. We reduce lead in
> tap water because doing so increases intelligence. That toolkit will
> be drastically expanded by technology. I don’t think that there is a
> fundamental moral difference between these old and new ways of
> enhancement.
>
> The European: You have called these traits–healthy, happy lives,
> understanding, good social relations–"intrinsically valuable". They are
> at the core of the ethical justification for transhumanism and genetic
> enhancements. How can we ensure that technological progress does not
> lead to enhancements of traits that are either not desired, or that
> are only conditionally valuable?
>
> Bostrom: We have to distinguish between positional and non-positional
> goods. In economics, a positional good benefits you only because
> others lack it. Height may be an advantage in men, but if everybody
> were three inches taller, nobody would be better off. Attractiveness
> may be another example of a positional good. A gain for one person
> implies a relative loss for others. I would contrast that with a trait
> like health. Your life is better when you are healthy, even if others
> are also healthy. Cognitive enhancements are a complex topic, but they
> have aspects that are intrinsically valuable. It is good if we can
> understand the world better. Arguments against positional goods are
> not arguments against enhancements as such.
>
> The European: There’s the slippery slope argument: Once we decide to
> pursue human enhancements with a certain determination, we have less
> control over the limits of these enhancements. How do you guard
> against unintended consequences?
>
> Bostrom: Yes, unintended consequences are likely to occur.
>
> Right now, there is a lot of research into cosmetics. That’s a
> positional good at best, yet we devote a huge amount of time and money
> to it. There is no moral reason why we should enhance our skin. On the
> other hand, enhancements that could increase our cognitive capacities
> are not really pursued. Partially, that has to do with our regulatory
> framework, which is built on the idea that medicine is all about
> disease. If you want to develop new drugs, you have to show that they
> are safe and effectively treat a disease. So when you want to find
> ways to enhance our brain activity, you perversely have to show that
> we are currently sick and need treatment. You cannot say, “I simply
> want to make this better than before”. We need to remove that stigma.
>
> The European: Michael Sandel writes that there is something valuable
> about accepting biological chance: We should remain humble and accept
> the traits we have been given instead of trying to engage in
> hyper-parenting, genetic enhancements and the like.
>
> Bostrom: The idea of appreciating gifts makes a lot of sense if there
> is someone who is giving you these gifts and might otherwise be
> offended. But if we are talking about a natural condition like cancer
> or malaria, I think we have every reason to reject these “gifts”.
>
> The European: The consequence might be that everyone feels entitled to
> an ever-increasing standard of capacities.
>
> Bostrom: I don’t think it’s bad if more people feel entitled to a good
> life. We should probably encourage that.
>
> The European: Francis Fukuyama has called transhumanism “the greatest
> threat to mankind”. What explains that cultural pessimism?
>
> Bostrom: When we create the technologies to fundamentally change human
> nature, there are great dangers associated with that. It is not clear
> that our wisdom is really up to the task. That’s part of the
> explanation. There is also a certain double standard: We accept
> inventions and innovations of the past, but we tend to be more
> critical towards new developments. If we look at the history of
> medicine, we see that many inventions were condemned and disparaged by
> bioconservatives. Heart transplants were once considered immoral–how
> could you open the chest cavity of one person and transplant the heart
> into the body of another person? Similarly, when anesthesia during
> childbirth came into use, bioconservatives lamented that it ran
> against nature. A woman, they said, was meant to feel pain when giving
> birth. It’s the same story with in-vitro fertilization, when people
> worried about the psychological effects of someone knowing that they
> came from a test tube. When we introduce new biomedical ways of
> manipulating our bodies, there is often an initial, gut-level
> repugnance. Usually, that repugnance dissipates once people become
> familiar with new technologies.
>
> The European: But how do we distinguish progress from good progress?
>
> Bostrom: We need to figure out what concerns are based on irrational
> bias and which ones are not, while weighing those concerns against
> potential benefits. Then we have to consider practicalities and what
> is politically feasible, and to prioritize.
>
> The European: What possibilities for human enhancement do you see as
> especially promising and as least problematic, so that we should
> actually take concrete steps in their direction?
>
> Bostrom: I think it would be great, for example, if we could develop
> at least some mild cognitive enhancements that give us a bit more mental
> energy or combat diseases like Alzheimer’s. In general, though, the
> difficulties of enhancing the capacities of a healthy human being may
> have been underestimated. Humans are very complex evolved systems. If
> we begin to tinker with that and don’t know what we are doing, we are
> likely to mess up and cause side effects that might only become
> evident much later.
>
> The European: And what effect might that have on the probability of
> existential risk?
>
> Bostrom: The wrong kinds of enhancements constitute a kind of
> existential threat. In relation to cognitive enhancements, I believe
> that their net effect on existential risk would be positive. They
> might increase the speed of technological innovation, but they would
> also enhance our capacity to think about potential consequences of
> that innovation. With cognitive enhancements, the gains are likely to
> outweigh the downsides. If one didn’t have that optimism, one would
> have to be consistent and also argue that we should not care about
> lead in our water. We don’t have a reason to assume that the current
> distribution of cognitive abilities is at an optimal level.
>
>
> --
> Stefano Vaj
>