[ExI] Organizations to "Speed Up" Creation of AGI?

Stefano Vaj stefano.vaj at gmail.com
Mon Dec 26 18:03:27 UTC 2011


On 26 December 2011 10:56, Anders Sandberg <anders at aleph.se> wrote:
> The AGI threat we at FHI have been concerned with is when we end up with
> "children of the mind" that do not have worthwhile lives.

This, however, is by no means an obvious value reference. As a priority
trumping the survival of existing humans, or the long-term viability of
their clade, or the preservation of the memory of what they were, or the
deployment of their full evolutionary potential, or any "self-overcoming"
and quest-for-greatness ethical imperative, or ideological Darwinism, or
whatever other plausible alternatives you and I could list, it seems to
correspond to a "humanist bias" (only what has a "worthwhile life" in our
own terms counts) which is as arbitrary as a value system in which
anything that might, say, threaten the "supremacy of the white race"
would be an x-risk to be avoided unconditionally, no matter the costs and
the (other) risks.

I am not saying that such a position is absurd, untenable, or even
especially unusual. Only that this mentality used to be the background of
most secular neo-Luddite positions, which posit exactly that anything not
corresponding to one's own more or less parochial idea of a "worthwhile
life" should not really qualify as a fact of any emotional or moral value
for anybody.

I for one would be as little interested in the far-future continued
existence of human beings, or some avatars thereof, merely engaged in
repeating themselves in terms I could currently understand for as long as
the physical constraints allowed, as Nick is in paperclip optimisers. But
hey, I do not take my stance in this respect as a universal truth. Simply
as the mark of a transhumanist inclination. :-)

-- 
Stefano Vaj


