[ExI] Wired article on AI risk

Stefano Vaj stefano.vaj at gmail.com
Fri May 25 13:39:23 UTC 2012


On 24 May 2012 21:29, Kelly Anderson <kellycoinguy at gmail.com> wrote:

> Ok Stefano, now you're playing in my sandbox... :-)
>

Hey, happy to hear that.

> Computational equivalence means simply that one machine possesses the
> same capacity as another to execute instructions. It says nothing of
> speed. Even more importantly, it says nothing of memory. When you say
> two machines are computationally equivalent, you aren't saying that
> any program that runs on one will also run on the other because the
> memory requirements could greatly outstrip the capacity of one or the
> other of the machines.
>

My understanding is exactly that. A Power Macintosh could emulate an Intel
PC because, given unlimited time and unlimited memory, anything that is a
"universal computer" can emulate any other system at all - even though, for
quantum processors, this may hold only in theory.
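
To make the point concrete, here is a minimal sketch of what emulation
amounts to (my own toy example, with a made-up three-opcode instruction
set, nothing to do with real PowerPC or x86 instructions): a
fetch-decode-execute loop over the guest's program, reproducing its
effects on a simulated memory. Anything that can run such a loop, given
enough time and memory, can emulate the guest:

    # Toy fetch-decode-execute loop: the "host" (Python here) steps
    # through a "guest" program and reproduces its effect on a
    # simulated memory. The opcodes are invented for illustration.
    def emulate(program, memory):
        pc = 0  # guest program counter
        while pc < len(program):
            op, a, b = program[pc]
            if op == "SET":                          # memory[a] = constant b
                memory[a] = b
            elif op == "ADD":                        # memory[a] += memory[b]
                memory[a] += memory[b]
            elif op == "JNZ" and memory[a] != 0:     # jump to b if memory[a] != 0
                pc = b
                continue
            pc += 1
        return memory

    # Guest program: compute 2**4 by repeated doubling.
    prog = [
        ("SET", 0, 1),   # mem[0] = result = 1
        ("SET", 1, 4),   # mem[1] = loop counter = 4
        ("SET", 2, -1),  # mem[2] = -1, used to decrement
        ("ADD", 0, 0),   # result += result (double it)
        ("ADD", 1, 2),   # counter -= 1
        ("JNZ", 1, 3),   # loop back while counter != 0
    ]
    print(emulate(prog, {}))   # -> {0: 16, 1: 0, 2: -1}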


> To take Newton's approach of limits, imagine a machine with a total
> memory (Disk and RAM) of only 10 bytes. Do you think you could
> implement universal computation on that? I think not. The
> computational equivalence math assumes infinite memory. Given infinite
> memory and near infinite time, yes, an Atari 800 XL could simulate a
> human brain, but it would be like watching a redwood tree grow, so
> it's of little practical use.
>

Absolutely. I am not sure whether a Chinese Room is a universal computer,
but if it is, the answer to Searle's objection is that the intelligence it
would exhibit, given the appropriate software, is counterintuitive not for
any structural reason, but simply because a real-world implementation would
require multiples of the age of the universe for each interaction.

> > He makes a distinction between "predictable" (a program enumerating the
> > powers of 2), "chaotic" (a program calculating the effect of butterfly
> > wings on hurricanes at the other end of the world), and "truly
> > unpredictable", which remains fully deterministic, but where the only
> > way to calculate the final state is to run the program step by step to
> > the end.
>
> Ok. Not sure how it applies, but I get that.
>

In this sense, an organic brain, human or otherwise, would be
"unpredictable" and yet fully deterministic, not because of some
mysterious "free will", but in the same way as innumerable other physical
processes, and even a few very simple computer programs. A classic,
anthropomorphic, Turing-qualified AGI would certainly exhibit the same
property, but Wolfram's point is that this property is nothing special: it
is actually shared by the vast majority of possible processes and
phenomena. The contrary opinion arises from a selection bias, whereby
scientific thought has so far concentrated exclusively on the subset of
problems amenable to an easy algorithmic solution, and generalised the
idea that this must be true for all or almost all problems (the human mind
perhaps exceptionally excluded, for metaphysical reasons).
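
For what it is worth, Wolfram's standard illustration of such "truly
unpredictable" behaviour is the Rule 30 cellular automaton: the update
rule fits in one line, yet no known shortcut predicts the pattern at step
n faster than actually running the n steps. A minimal sketch (my own toy
code, using periodic boundaries for simplicity):

    # Rule 30: a cell's next state depends on itself and its two
    # neighbours; the 8 possible neighbourhoods index the bits of
    # the number 30 (binary 00011110).
    def rule30_step(cells):
        n = len(cells)
        return [(30 >> (cells[(i - 1) % n] * 4      # left neighbour
                        + cells[i] * 2              # the cell itself
                        + cells[(i + 1) % n])) & 1  # right neighbour
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1               # start from a single "on" cell
    for _ in range(15):         # the only known way to know step 15
        cells = rule30_step(cells)   # ...is to compute all 15 steps
        print("".join(".X"[c] for c in cells))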

> > OK, as for their personal skin, they are under the much more immediate
> > threat of being dead *anyway* within less than a century on average, and
> > in most cases much sooner, unless something drastic happens. As for their
> > progeny, we have to define first what "progeny" is. Immediate children?
> > Genetic successors? Co-specifics? "Children of the mind"? I am not sure
> > there is any final reason to opt for one definition or another, but I
> > have an answer for each.
>
> Yes, I realize it is a complex subject. But most proles care more for
> their DNA progeny than the children of humanity's mind.
>

This would indeed have little to do with "x-risk" rhetoric, because every
human who is not a direct descendant of the individual concerned does not
qualify. And while children share 50% of the genetic endowment of each
parent, grandchildren already share only 25%, and so on. Accordingly, in
terms of "caring for one's DNA", AGIs that increased the life of one's
children by 10%, at the price of the immediate extinction of the rest of
humankind and of the extinction of the individual's own offspring only a
few generations later, should be heartily welcomed by said proles.
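
(The arithmetic, for what it is worth: expected relatedness to a
descendant n generations down is 0.5**n, e.g.:

    # Expected shared genetic endowment halves each generation.
    for n in range(1, 6):
        print(f"generation {n}: {0.5 ** n:.3%}")   # 50%, 25%, 12.5%, ...

so the "DNA stake" in any distant posterity vanishes quite quickly.)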

OTOH, humans easily extend their "survival" instinct to adoptive children,
to disciples, and to collective identities (say, France, or the Catholic
Church, or the family business, or the communist party), and this even
when it comes at a cost to their own personal interest or that of their
immediate biological children; so not only is this not the ethical POV of
the AGI doom-mongers, but it has little to do with real-world priority
scales...

> > I have some views about that as well, but perhaps this will bring us too
> > far... :-)
>
> LOL... If we are uncomfortable in any way having the elites run the
> world today, then how much more uncomfortable will we be about it when
> the elites are not even human?
>

Let us say that I am a British lady who is uncomfortable with the rule of
Queen Elizabeth II. Should I really be concerned about how much more
uncomfortable I will be when the monarch is "not even a woman"? :-)

-- 
Stefano Vaj