[ExI] Organization to "Speed Up" Creation of AGI?

Stefano Vaj stefano.vaj at gmail.com
Mon Dec 19 16:04:33 UTC 2011


On 18 December 2011 22:05, Anders Sandberg <anders at aleph.se> wrote:

> But most of us also think future generations and sentient systems have
> some or full value: discounting people in the future is the same thing as
> discounting them in space (given relativity) and pretty odd. So this means
> that the enormous value embodied in possible future generations - whether
> humans, posthumans or AIs - matters a lot, and avoiding risking it by our
> actions is an important moral consideration.
>
> The threats we must reduce are those that remove value: extinction or
> permanent curtailment of value. For example, badly motivated AGI might both
> wipe out humanity and be unable to ever achieve any value of its own.


This corresponds roughly to my understanding of the FHI's stance, and, yes, for a
utilitarian I suspect that discounting the "happiness", "well-being" or
"pleasure-pain balance" contribution of future entities is somewhat odd
(even though this position is not without paradoxes of its own, such as
making birth control immoral unless it can be satisfactorily shown that
additional births would create more suffering than the opposite, whatever
that may be).

I come from a very different perspective, so all this does not concern me
too much personally, but one point that I do find perplexing is this
content-rich idea of "value", whereby a utilitarian could and should
recognise as "successors", and be content with them, only those entities to
which "value" can be attached in human, if not squarely humanist, terms (a
position epitomised by the quite "racist" language of Stross's characters
about the so-called Vile Offspring).

Interestingly, this is also at odds with the kind of animalism espoused by,
e.g., David Pearce, which axiologically might otherwise not be so far from
Bostrom's ethical views.

> We are much less concerned if humanity invents successors that gradually
> take our place and then go on to enrich the universe with
> impossible-to-human mentalities.
>

I wonder, however, whether this can be generalised to the transhumanist camp
at large, since some statements, including passing remarks on mailing lists,
seem to suggest otherwise.

> It is individually rational to try to reduce existential risk, especially
> if you are signed up for cryonics or think life extension is likely, since
> then you will have even more years to be at risk in. Right now the
> existential risk per year is likely below 1% per year, but by our estimates
> probably not far below it. Inventing a super-powerful technology like
> self-improving AGI, uploading or atomically precise manufacturing, bumps it
> up - at least for a short time. This means that if you are a transhumanist
> who thinks these technologies are likely in the not too far future you
> should expect existential risk to be a *personal* threat of the same magnitude as
> many common diseases.


This is probably the crux of the issue. I am basically persuaded that our
personal chances of survival over a seventy- or eighty-year horizon are
vanishingly small unless dramatic changes take place (any significant
lifespan extension itself requires the development and adoption of dangerous
technologies, and cryonics remains at best a stop-gap measure). I am a
transhumanist in the sense that I would like to defy such a destiny, even
though in my case personal survival, or for that matter "the well-being
balance in the cosmos", is not really the entire story.

Accordingly, it seems that the real choice is between, say, sitting on the
deck of the Titanic with a 99.999% chance of drowning within five hours, and
diving for a lifeboat, a move which has, say, a 50% chance of killing you
within a handful of seconds.
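
To put rough numbers on both Anders' ~1%-per-year figure and the analogy
above, here is a minimal back-of-the-envelope sketch in Python (the horizon
and the rates are purely illustrative assumptions of mine, not anyone's
estimate):

annual_risk = 0.01   # assumed constant, the ~1%/year order of magnitude quoted above
years = 70           # assumed personal horizon

# Chance of living through 70 years of a constant 1%/year background risk
p_survive_background = (1 - annual_risk) ** years
print(f"Survive {years} years at 1%/year: {p_survive_background:.1%}")   # ~49.5%

# The Titanic trade-off: near-certain drowning if you stay, a coin flip if you dive
p_survive_staying = 1 - 0.99999   # 0.001%
p_survive_diving = 0.50           # 50%
print(f"Stay on deck: {p_survive_staying:.3%} vs dive for the lifeboat: {p_survive_diving:.0%}")

Even a "mere" 1% per year compounds to roughly even odds over a lifetime,
which is the arithmetic behind treating it as a personal threat.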

Now, I maintain that the kind of risk aversion that certainly dictates the
first behaviour to those adhering to contemporary mainstream ideology in
Western countries, besides reflecting ethologically the literal and
civilisational ageing of those societies, is deeply influenced by moral
biases of monotheistic origin, according to which a drastic difference would
exist between identically catastrophic events depending on whether they are
the product of human decisions or of some alleged impersonal "providence" or
"necessity", be it religious or secular. See the statistics on the huge
differences in the sacrifices the average US citizen would be ready to make
to fight Global Warming depending on whether it is anthropic or non-anthropic
in nature (something which, strictly speaking, is absolutely irrelevant to
its consequences).

> I think a brain emulation based singularity is safer than an AGI one, and
> hence I would prefer it to come first.
>

My own bet, but you probably know it by now, is that we will not recognise
an AGI as such anyway unless it *is* a functional, ethological emulation of
a brain, at however high or low a level. And that, in any event, a
pure-silicon system is neither more nor less dangerous than a fyborg
composed of a biological brain with an equivalent computing power at its
fingertips. Biological/Darwinian features can certainly be emulated, but
specialised "human peripherals" are so plentiful and cheap that their full
emulation is mainly an interesting scientific exercise, much like running
graphics programs on a CPU.

-- 
Stefano Vaj