[ExI] Wired article on AI risk

Kelly Anderson kellycoinguy at gmail.com
Tue May 29 23:18:05 UTC 2012


On Fri, May 25, 2012 at 7:39 AM, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 24 May 2012 21:29, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>>
>> Ok Stefano, now you're playing in my sandbox... :-)
>
> Hey, happy to hear that.
>
>> Computational equivalence means simply that one machine possesses the
>> same capacity as another to execute instructions. It says nothing of
>> speed. Even more importantly, it says nothing of memory. When you say
>> two machines are computationally equivalent, you aren't saying that
>> any program that runs on one will also run on the other because the
>> memory requirements could greatly outstrip the capacity of one or the
>> other of the machines.
>
> My understanding is exactly that. A Power Macintosh could emulate an Intel
> PC because given unlimited time and unlimited memory anything that is a
> "universal computer" can emulate any system at all - even though this may be
> true for quantum processors only in theory.

Let's not get into quantum processors; that will just make my brain
hurt. Of course it holds only in theory there, because quantum
computers themselves exist only in theory.

But it is not true that they can all run the same programs, because
some programs require more memory than a given machine has. And if
the program is required to run in real time (or the plane will crash,
for example), then the equivalence is only helpful in theory, not in
practice.

And of course many things the brain does must be accomplished in real
time (or you will fall down, or walk into something).
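
Just to make the "equivalence says nothing about speed or memory"
point concrete, here is a toy sketch of my own (not from the article):
a few lines of Python are enough to interpret a tiny universal
language (Brainfuck), but the tape is finite and every emulated step
costs many host steps.

# Toy illustration: Python emulating a tiny universal language.
# Universality is about capability; the finite tape and the
# interpretation overhead are where memory and speed limits bite.
def run_bf(code, input_bytes=b""):
    tape = [0] * 30000              # finite memory, unlike the idealized model
    ptr = pc = 0
    inp, out = list(input_bytes), []
    stack, jumps = [], {}
    for i, c in enumerate(code):    # pre-match the loop brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == '>':   ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(tape[ptr])
        elif c == ',': tape[ptr] = inp.pop(0) if inp else 0
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return bytes(out)

print(run_bf("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
             ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.")
      .decode())                    # prints "Hello World!"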

>> To take Newton's approach of limits, imagine a machine with a total
>> memory (Disk and RAM) of only 10 bytes. Do you think you could
>> implement universal computation on that? I think not. The
>> computational equivalence math assumes infinite memory. Given infinite
>> memory and near infinite time, yes, an Atari 800 XL could simulate a
>> human brain, but it would be like watching a redwood tree grow, so
>> it's of little practical use.
>
> Absolutely. I am not sure whether a Chinese Room is a universal computer,
> but if it is, the answer to Searle's objection is that its intelligence,
> given the right software, seems counterintuitive not for any structural
> reason, but simply because a real-world implementation would require
> multiples of the age of the universe for each interaction.

Yes, exactly. So the gap between theory and practice is greater in
practice than it is in theory.
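
To put a rough number on how big that gap is for the Atari example
(these are loose assumptions of mine, not established figures): take
the oft-quoted ~10^16 operations per second for brain-equivalent
computation and something like half a million instructions per second
for the 800 XL's 1.79 MHz 6502:

# Back-of-the-envelope only; both constants are loose assumptions.
BRAIN_OPS_PER_SEC = 1e16        # commonly quoted brain-equivalence estimate
ATARI_INSTR_PER_SEC = 5e5       # rough throughput of a 1.79 MHz 6502

slowdown = BRAIN_OPS_PER_SEC / ATARI_INSTR_PER_SEC
years_per_brain_second = slowdown / (3600 * 24 * 365)
print("slowdown ~%.0e x, ~%.0f years per simulated second"
      % (slowdown, years_per_brain_second))   # ~2e+10 x, ~634 years

Centuries of Atari time per second of brain time: redwood trees grow
faster than that.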

>> > He makes a distinction between "predictable" (a program enumerating the
>> > powers of 2), chaotic (a program calculating the effect of butterfly
>> > wings on hurricanes at the other end of the world), and "truly
>> > unpredictable", which remains fully deterministic, but where the only
>> > way to calculate the final state is to run the program step by step to
>> > the end.
>>
>> Ok. Not sure how it applies, but I get that.
>
>
> In this sense, an organic brain, human or otherwise, would be
> "unpredictable", and yet fully deterministic, not because of some mysterious
> "free will", but similarly to innumerable other physical processes, and even
> to a few very simple computer programs. A classic, anthropomorphic,
> Turing-qualified AGI would certainly exhibit the same property, but
> Wolfram's point is that this property is nothing special and is actually
> shared by the vast majority of possible processes and phenomena, the
> contrary opinion arising from a selection bias: scientific thought has
> so far concentrated exclusively on the subset of problems amenable to an
> easy algorithmic solution, and generalised the idea that this must be true
> for all or almost all problems (the human mind perhaps exceptionally
> excluded for metaphysical reasons).

Ok. I actually get that. Sure, absolutely right. It is emotionally
difficult to give up free will. Maybe this framing will make the sting
of losing it less sharp.
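
For anyone who wants to see the "deterministic yet unpredictable"
class Wolfram is talking about, his favourite example is the Rule 30
cellular automaton. A quick sketch of my own (illustrative only):
every step is fixed by a trivial rule, yet as far as anyone knows the
only way to learn the center column far down is to actually run it.

# Rule 30: each new cell = left XOR (center OR right). Fully
# deterministic, but (as far as anyone knows) the center column can
# only be found by running the automaton step by step -- Wolfram's
# "computational irreducibility".
def rule30_center_column(steps):
    cells = {0: 1}                      # start from a single "on" cell
    column = []
    for _ in range(steps):
        column.append(cells.get(0, 0))
        new = {}
        for i in range(min(cells) - 1, max(cells) + 2):
            left = cells.get(i - 1, 0)
            center = cells.get(i, 0)
            right = cells.get(i + 1, 0)
            new[i] = left ^ (center | right)
        cells = new
    return column

print(rule30_center_column(32))         # looks random, but is not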

>> > OK, as for their personal skin, they are under the much more concrete
>> > threat of being dead *anyway* within less than a century on average, and
>> > in most cases much sooner, unless something drastic happens. As for their
>> > progeny, we have to define first what "progeny" is. Immediate children?
>> > Genetic successors? Co-specifics? "Children of the mind"? I am not sure
>> > there is any final reason to opt for one definition or another, but I
>> > have an answer for each.
>>
>> Yes, I realize it is a complex subject. But most proles care more for
>> their DNA progeny than for the children of humanity's mind.
>
> This would indeed have little to do with "x-risk" rhetoric, because any
> human who is not a direct descendant of the individual concerned would not
> qualify.

We share DNA as a species, and my Darwin-Dawkins-given right to
selfishness for my genes extends, in some measure, to all humans.

> And while children share 50% of the genetic endowment of each
> parent, grandchildren already share only 25%, and so on. Accordingly, in
> terms of "caring for one's DNA", AGIs that increase the lifespan of one's
> children by 10% at the price of the immediate extinction of the rest of
> humankind, and of the extinction of the individual's own offspring a few
> generations later, should be heartily welcomed by the said proles.

I'm not sure of that... maybe you need to hone your marketing message... LOL
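
For the record, the arithmetic Stefano is leaning on is just
relatedness halving with each generation of descent (a standard kin
selection figure, stated here only as the rough rule of thumb):

# Coefficient of relatedness to a descendant n generations down: 0.5**n
for n in range(1, 6):
    print(n, 0.5 ** n)      # 1: 0.5 (child), 2: 0.25 (grandchild), ...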

> OTOH, humans easily extend their "survival" instinct to adoptive children,
> to disciples, and to collective identities (say, France, or the catholic
> church, or the family business, or the communist party), and this even when
> it has a cost for their own personal interest or that of their immediate
> biological children; so not only is this not the ethical POV of the AGI
> doom-mongers, it also has little to do with real-world priority scales...

I think we will be able to extend our "survival" instincts to NBEs
too. Honestly. Hopefully, we will see.

>> > I have some views about that as well, but perhaps this will bring us too
>> > far... :-)
>>
>> LOL... If we are uncomfortable in any way having the elites run the
>> world today, then how much more uncomfortable will we be about it when
>> the elites are not even human?
>
> Let us say that I am a British lady who is uncomfortable with the rule of
> Queen Elizabeth II. Should I really be concerned about how much more
> uncomfortable I will be when the monarch is "not even a woman"? :-)

Not sure of your point exactly... but it was a rhetorical question.

-Kelly


