[ExI] Organizations to "Speed Up" Creation of AGI?

Stefano Vaj stefano.vaj at gmail.com
Sun Dec 25 17:04:09 UTC 2011


2011/12/25 Kevin G Haskell <kgh1kgh2 at gmail.com>:
> Okay, so there are major problems from the outset with even attempting to
> slow down AGI to make it a nice 'soft' take-off, because just looking at
> what has been presented may, in fact, have the opposite effect on what AGI
> thinks or does later on. Suppose, for a minute, that it saw the entire
> process of humans delaying its 'birth' in order to make it 'nice' as
> something inherently wrong and especially flawed about humans?

OTOH, all these considerations stem from a view of the problem that is
not only extremely anthropomorphic, but even culturally biased.

What would make us think that an AGI would "think" along the lines of
what may or may not be "morally flawed" from the POV of some
self-referential identification with a kind of "new race" of
"sentients", in the business of considering how objectively "fair" the
treatment of the birth of the category has been? Why should it
"care" in the first place? Did we, when we were "born" as sapiens?
Aren't all such considerations pure projections?

An AGI is simply a (presumably fast, unless we want to wait for eons
at each interaction) computer, executing a program that emulates human
features well enough to perform as well as an actual human in a
Turing test, or in some other tasks where current computers are
very poor.

Lending it behavioural traits that might or might not be applicable
even to, say, Native Americans or Polynesians or Vikings sounds
simply strange.

> When Nick Bostrom mentions the term 'singleton,' present and past
> terminology readily comes to mind, such as "dictatorship," "Fascism,"
> and/or "Communism." As you correctly pointed out, such forms of government
> pose scary risks of their own, as they could decide that even reaching the
> level of Transhuman technology poses risks for their power, and, to preempt
> the emergence of powerful Transhumans which could lead to AGI, that stopping
> and reversing technological course is the only way to preserve their power
> for as far into the future as they can see.

Really? Any example? When, how, why, as opposed to what? :-/

Aren't you describing the current state of things, by the way?

> When I first asked my question about organizations that support speeding up
> the development of AGI, I wasn't contrasting it with brain-emulation, but
> since brain-emulation has been raised, I agree that AGI should come first.

Mmhhh. Why? If AI is an interesting space of problems, and brains are
demonstrably good at them, I would think that emulation of the brain
at some depth level might well be relevant to their solution. Even
though we may well beat brains and brain emulations alike, at least
for certain classes of problems, by adopting altogether different
strategies: see Deep Blue with chess.

> I just think it is our only real chance of evolving.

Yes, we are in agreement on that, even though I take it perhaps in a
more extensive sense.

No matter what one thinks of the chances or nature of human-like
identities running entirely on some non-biological support, AI is
anyway going to be an increasingly central, strategic part of our
extended phenotype, including in the event that some more or less
altered biological components thereof were there to stay.

> That's correct, not a chatbot, I wouldn't, but I don't see how the two have
> anything to do with each other. If the brain-emulation were near enough to
> what I was, I wouldn't know the difference once I uploaded.

Once uploaded, the uploaded "you" would not notice the "difference"
(with what?) anyway. No more than the next philosophical zombie or the
kidnapped-by-aliens-during-the-night-replacement would.

The mantra here is: "An upload is a sociological, not an ontological,
concept." And whether it is "you" or not is a matter of the metaphors
one chooses to adopt with regard to survival.

-- 
Stefano Vaj


