[ExI] Organization to 'Speed Up" Creation of AGI?

Stefano Vaj stefano.vaj at gmail.com
Sun Dec 18 13:58:25 UTC 2011


On 18 December 2011 09:42, Anders Sandberg <anders at aleph.se> wrote:

> I think that is a bit of misrepresentation. I can only speak for us at
> FHI, but we would be fine with faster AGI development if we thought safe
> AGI was very likely or we were confident that it would be developed before
> AGI went critical. An early singularity reduces existential risk from
> everything that could happen while waiting for it. It also benefits
> existing people if it is a positive one.
>

Given that, unless something very dramatic happens, the whole of humankind -
defined as the set of all humans currently alive - faces an obvious
extinction risk, or rather certainty, within a matter of decades, from aging
if nothing else, it has always been unclear to me how FHI can reconcile what
I think is fair to characterise as a utilitarian value system with a primary
concern for whether or not bio-based intelligences dominate the future.

But of course the same question could be asked of the Singularity Institute.

-- 
Stefano Vaj