[ExI] Organization to "Speed Up" the Creation of AGI?
Kevin G Haskell
kgh1kgh2 at gmail.com
Tue Dec 20 04:43:37 UTC 2011
On Sun, 18 Dec 2011 14:58:25 +0100, Stefano Vaj wrote:
On 18 December 2011 09:42, Anders Sandberg <anders at aleph.se> wrote:
(Anders Sandberg's comment)"
> I think that is a bit of a misrepresentation. I can only speak for us at
> FHI, but we would be fine with faster AGI development if we thought safe
> AGI was very likely or we were confident that it would be developed before
> AGI went critical. An early singularity reduces existential risk from
> everything that could happen while waiting for it. It also benefits
> existing people if it is a positive one.
>
(Stefano Vaj's Reply)
>>Given that, unless something very dramatic happens, the whole of humankind -
>>defined as the set of all humans currently alive - is currently confronted
>>with an obvious extinction risk, or rather certainty, in a matter of
>>decades, from aging if nothing else, it has always been unclear to me how
>>FHI can reconcile what I think is fair to characterise as a utilitarian
>>value system with a primary concern for the dominance or not of bio-based
>>intelligences in the future.
>>But of course the same question could be asked of the Singularity
>>Institute.
I share your thoughts on that, Stefano. Well said.
Kevin