[ExI] Organizations to "Speed Up" the Creation of AGI?

Kevin G Haskell kgh1kgh2 at gmail.com
Tue Dec 20 05:35:49 UTC 2011


On Sun, 18 Dec 2011 22:05:34 +0100, Anders Sandberg wrote:

>(Anders Sandberg reply to Kevin George Haskell):

>  I think that is a bit of misrepresentation. I can only speak for us
>  at FHI, but we would be fine with faster AGI development if we
>  thought safe AGI was very likely or we were confident that it would
>  be developed before AGI went critical. An early singularity reduces
>  existential risk from everything that could happen while waiting for
>  it. It also benefits existing people if it is a positive one.

> Also

>> (Reply from Stefano Vaj to Anders Sandberg)
>
>
> Given that, unless something very dramatic happens, the whole of
> humankind - defined as the set of all humans currently alive - is
> currently confronted with an obvious extinction risk, or rather
> certainty, within a matter of decades, from aging if nothing else, it
> has always been unclear to me how FHI can reconcile what I think it is
> fair to characterise as a utilitarian value system with a primary
> concern for whether or not bio-based intelligences dominate the future.

>>Yes, we usually tend towards the consequentialist side of the ethical
>>spectrum. But most of us also think future generations and sentient
>>systems have some or full value: discounting people in the future is the
>>same thing as discounting them in space (given relativity) and pretty
>>odd. So this means that the enormous value embodied in possible future
>>generations - whether humans, posthumans or AIs - matters a lot, and
>>avoiding risking it by our actions is an important moral consideration.

>>The threats we must reduce are those that remove value: extinction or
>>permanent curtailment of value. For example, badly motivated AGI might
>>both wipe out humanity and be unable to ever achieve any value of its
>>own. We are much less concerned if humanity invents successors that
>>gradually take our place and then go on to enrich the universe with
>>impossible-to-human mentalities.

While the concern is valid, how would FHI and like-minded groups go about
ensuring that once AGI is created, whether in 10 years or 200, this new
species will be anything other than what it wants to be, and will not do
whatever it wants to whatever existing species, whether human or
Transhuman, still has much lower levels of speed, awareness, and power?

>>It is individually rational to try to reduce existential risk,
>>especially if you are signed up for cryonics or think life extension is
>>likely, since then you will have even more years to be at risk in. Right
>>now the existential risk is likely below 1% per year, but by
>>our estimates probably not far below it. Inventing a super-powerful
>>technology like self-improving AGI, uploading or atomically precise
>>manufacturing, bumps it up - at least for a short time. This means that
>>if you are a transhumanist who thinks these technologies are likely in
>>the not too far future, you should consider existential risk to be a *personal*
>>threat on the same magnitude as many common diseases.

I would be interested in how you can quantify the existential risk as
being roughly 1% per year. How can one quantify existential risks, both
those known to mankind and those as yet unknown, within the next second,
never mind the next year, and never mind with a specific percentage?
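(For what it's worth, the arithmetic behind treating a per-year figure as a
personal threat is simple enough, if one assumes a constant rate and
independent years: at 1% per year, the chance of an existential catastrophe
at some point over the next 50 years would be 1 - (1 - 0.01)^50, or roughly
39%. That is presumably why you compare it to common diseases. My question
is about where the 1% figure itself comes from, not about the
multiplication.)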

As someone who considers himself a Transhumanist, I come to exactly the
opposite conclusion from the one you gave: I think that by focusing on
health technologies and uploading as quickly as possible, we give humanity,
and universal intelligence, a greater chance of lasting longer as a
species, of being 'superior' before the creation of AGI, and perhaps of
merging with a new species of our own creation that will 'allow' us to
evolve with it/them perpetually, or at least protect us from the many
existential threats that are already plentiful.

>>I think a brain emulation based singularity is safer than an AGI one,
>>and hence I would prefer it to come first. Others in the office argue
>>that while friendly AGI might be hard to achieve, once we have it we are
>>much safer from the risks of uploading, and hence it is to be preferred
>>over the scenario where we first get uploading and then AGI. Same thing
>>with nanotechnology. But the rational choice depends a lot on what
>>probability estimates you have...

Companies like IBM have promised to complete brain emulation within 10
years because of competitive concerns, and plenty of other companies and
countries are pouring massive amounts of money into the same goal for the
same reason. It is highly probable that those same companies and countries
are also pouring ever larger sums into developing AGI, especially since
many of the technologies overlap. If brain emulation is achieved in 10
years or less, then AGI can't be far behind.

Still, I can't really see how waiting for brain emulation will somehow keep
us safer as a species once AGI is actually developed. What factors are
being used in the numbers game that you mentioned? Do we have any idea,
even among those working most closely on these projects, whether a numbers
game is even possible, especially since many of these projects will be
approached from different angles and with a great deal of secrecy?
Leap-frogging through unexpected breakthroughs is bound to happen, and to
speed up as we approach the Singularity, isn't it?

What is the general thinking about why we need to wait for full-brain
emulation before we can start uploading our brains (and hopefully bodies)?
Even if we must wait, is the idea that if we can create artificial brains
patterned on each of our individual brains, so that we can have a precise
upload, the AGIans will somehow take a different view of what they will
choose to do with a fully Transhumanist species?

>>Gambling with the future of Earth-originating civilization is so fun,
>>isn't it?

Pardon?

>>(My own strategy is to talk to as many AI researchers as possible and
>>get them thinking in constructive ways. Stopping research has never been
>>an option, but it might get smarter.)

>>--
>>Anders Sandberg
>>Future of Humanity Institute
>>Oxford University

When you said 'constructive' and 'smarter,' don't you mean 'slower and more
cautious?'  I don't mean to put words in your mouth, but I don't see what
else you could mean.

May I ask whether you've been polling these researchers, or have a general
idea of what percentage of those working on AGI fall into each of the four
options I presented? (I expect, of course, that since they are working on
its creation, few are likely to support either the stop or the reverse
option, but rather one of the other two choices: go slower or speed up.)

Thanks,
Kevin George Haskell
C.H.A.R.T.S
(Capitalism, Health, Age-Reversal, Transhumanism, and Singularity)