[ExI] Hard Takeoff

Stefano Vaj stefano.vaj at gmail.com
Fri Nov 19 17:49:44 UTC 2010


On 19 November 2010 07:26, spike <spike66 at att.net> wrote:
> Our current efforts might influence the AGI but we have no way to prove it.
> Backing away from the AGI development effort is not really an option, or
> rather not a good one, for without an AGI, time will take us all anyway.  I
> give us a century, two centuries as a one sigma case.

What remains very vague and fuzzy in such discourse is why an
"intelligent" (whatever it may mean...) computer would be more
"dangerous" (whatever it may mean...) per se than a non-intelligent
one of equivalent power.

It is my impression that, beyond the rather unclear definition of
those very concepts, such a view has more to do with some mythical
legacy (Faust, Frankenstein, the Golem, the Alien, etc.) than with
any plausible, critical argument I am currently aware of.

If that were the case, such a concern would not be innocuous, since
it might well end up justifying increased social control and
prohibitionist research policies, and would at the very least
distract from more immediate threats to values that are of the
essence of a transhumanist worldview.

-- 
Stefano Vaj
