[ExI] Organization to "Speed Up" Creation of AGI?
Anders Sandberg
anders at aleph.se
Sun Dec 18 21:05:34 UTC 2011
On 2011-12-18 14:58, Stefano Vaj wrote:
> On 18 December 2011 09:42, Anders Sandberg <anders at aleph.se> wrote:
>
>> I think that is a bit of misrepresentation. I can only speak for us
>> at FHI, but we would be fine with faster AGI development if we
>> thought safe AGI was very likely or we were confident that it would
>> be developed before AGI went critical. An early singularity reduces
>> existential risk from everything that could happen while waiting for
>> it. It also benefits existing people if it is a positive one.
>
>
> Given that, unless something very dramatic happens, the whole of
> humankind - defined as the set of all humans currently alive - is
> confronted with an obvious extinction risk, or rather certainty, within
> a matter of decades, from aging if nothing else, it has always been
> unclear to me how FHI can reconcile what I think is fair to
> characterise as a utilitarian value system with a primary concern for
> whether or not bio-based intelligences dominate the future.
Yes, we usually tend towards the consequentialist side of the ethical
spectrum. But most of us also think future generations and sentient
systems have some or full moral value: discounting people in the future
is the same thing as discounting them in space (given relativity), which
is pretty odd. So the enormous value embodied in possible future
generations - whether humans, posthumans or AIs - matters a lot, and
avoiding putting it at risk through our actions is an important moral
consideration. The threats we must reduce are those that remove value:
extinction, or the permanent curtailment of value. For example, a badly
motivated AGI might both wipe out humanity and be unable ever to achieve
any value of its own. We are much less concerned if humanity invents
successors that gradually take our place and then go on to enrich the
universe with mentalities impossible for humans.
It is individually rational to try to reduce existential risk,
especially if you are signed up for cryonics or think life extension is
likely, since then you will have even more years to be at risk in. Right
now the existential risk is likely below 1% per year, but by our
estimates probably not far below it. Inventing a super-powerful
technology like self-improving AGI, uploading or atomically precise
manufacturing bumps it up, at least for a short time. This means that
if you are a transhumanist who thinks these technologies are likely in
the not-too-distant future, you should expect existential risk to be a
*personal* threat of the same magnitude as many common diseases.
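To make the arithmetic behind that claim concrete, here is a minimal
Python sketch, assuming a constant annual risk; the rates and time
horizons below are purely illustrative placeholders, not actual
estimates:

def cumulative_risk(p_per_year, years):
    """Chance of at least one existential catastrophe over `years`,
    assuming a constant, independent risk of p_per_year each year."""
    return 1.0 - (1.0 - p_per_year) ** years

# Illustrative rates (0.1%, 0.5%, 1% per year) and horizons (a remaining
# lifespan, a life-extended span, a cryonics-scale wait).
for p in (0.001, 0.005, 0.01):
    for years in (30, 100, 300):
        print(f"{p:.1%}/yr over {years:3d} yrs: "
              f"{cumulative_risk(p, years):.0%} cumulative risk")

Even at 0.5% per year the cumulative figure over a century comes out
around 40%, which is roughly why a long personal time horizon puts
existential risk in the same ballpark as common diseases.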
I think a brain emulation based singularity is safer than an AGI one,
and hence I would prefer it to come first. Others in the office argue
that while friendly AGI might be hard to achieve, once we have it we are
much safer from the risks of uploading, and hence it is to be preferred
over the scenario where we first get uploading and then AGI. Same thing
with nanotechnology. But the rational choice depends a lot on what
probability estimates you have...
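As a toy illustration of how the preferred ordering flips with the
numbers, here is a sketch using made-up transition survival
probabilities; every figure is an assumption chosen only for
illustration, not anyone's actual estimate:

# Made-up survival probabilities for each transition, in each context.
p = {
    ("uploads", "unaided"): 0.95,    # assume the emulation transition is fairly safe
    ("agi", "unaided"): 0.80,        # assume the unaided AGI transition is riskier
    ("agi", "after uploads"): 0.90,  # assume uploads help somewhat with AGI safety
    ("uploads", "after agi"): 0.99,  # assume friendly AGI makes uploading very safe
}

def survival(order):
    """Probability of getting through both transitions for a given ordering."""
    first, second = order
    return p[(first, "unaided")] * p[(second, "after " + first)]

for order in (("uploads", "agi"), ("agi", "uploads")):
    print(order, round(survival(order), 3))

With these numbers uploads-first wins (0.855 vs 0.792); raise the
unaided-AGI figure to 0.95 and the ordering reverses, which is exactly
the point about depending on probability estimates.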
Gambling with the future of Earth-originating civilization is so fun,
isn't it?
(My own strategy is to talk to as many AI researchers as possible and
get them thinking in constructive ways. Stopping research has never been
an option, but it might get smarter.)
--
Anders Sandberg
Future of Humanity Institute
Oxford University