[ExI] Wired article on AI risk

Kelly Anderson kellycoinguy at gmail.com
Thu May 24 19:29:48 UTC 2012


On Wed, May 23, 2012 at 4:45 PM, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 23 May 2012 21:38, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>>
>> > Absolutely. In fact, I contend that given that beyond a very low
>> > level of complexity
>>
>> I grant this if the "low" level of complexity is far above the levels
>> we have achieved today.
>
>
> Not at all. From a cellular automaton to a Turing machine to the original
> 1980 PC, every system exhibiting the property of universal computation is
> beyond that level. See http://www.wolframscience.com/nksonline/section-12.1.

Ok Stefano, now you're playing in my sandbox... :-)

Computational equivalence means simply that one machine can, in
principle, compute anything another can. It says nothing about speed.
Even more importantly, it says nothing about memory. When you say
two machines are computationally equivalent, you aren't saying that
any program that runs on one will also run on the other, because the
memory requirements could greatly outstrip the capacity of one or the
other of the machines.
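
To make this concrete, here's a toy sketch in Python (mine, not
anything from the thread; run_bf, tape_cells, and the demo program are
all names and numbers I invented). Brainfuck is Turing-complete given
an unbounded tape, so an interpreter for it is "computationally
equivalent" to any PC -- and yet the very same program succeeds or
fails depending only on how much memory we grant it:

def run_bf(program, tape_cells=30000):
    """Minimal Brainfuck interpreter with a finite, configurable tape."""
    tape = [0] * tape_cells          # the machine's entire memory
    ptr = pc = 0                     # data pointer, program counter
    out = []
    stack, jumps = [], {}            # pre-match the loop brackets
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == '>':
            ptr += 1
            if ptr >= tape_cells:    # equivalent "in principle" is not
                raise MemoryError("tape exhausted")  # enough in practice
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]           # jump past the loop
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]           # jump back to the loop head
        pc += 1
    return ''.join(out)

# Print 'H', walk 100 cells to the right, print another 'H'.
prog = '+' * 72 + '.' + '>' * 100 + '+' * 72 + '.'
print(run_bf(prog, tape_cells=30000))   # "HH" -- plenty of memory
try:
    print(run_bf(prog, tape_cells=10))  # same program, tiny memory
except MemoryError as e:
    print("10-cell machine:", e)        # tape exhausted

Same instruction set, same program, different memory: one machine
finishes and the other can't.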

To take Newton's approach of arguing from limits, imagine a machine
with a total memory (disk and RAM) of only 10 bytes. Do you think you
could implement universal computation on that? I think not. The
computational equivalence math assumes unbounded memory. Given
unbounded memory and near-infinite time, yes, an Atari 800 XL could
simulate a human brain, but it would be like watching a redwood tree
grow, so it's of little practical use.
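
Back-of-the-envelope, with both constants loudly flagged as loose
assumptions (estimates of the brain's raw rate vary by orders of
magnitude; I'm taking a commonly cited high-end figure, and being
generous to the 6502):

BRAIN_OPS_PER_SEC = 1e16   # assumed: rough high-end estimate for a brain
ATARI_OPS_PER_SEC = 1e6    # assumed: ~1.79 MHz 6502, ~1e6 simple ops/sec

slowdown = BRAIN_OPS_PER_SEC / ATARI_OPS_PER_SEC     # factor of 1e10
years_per_brain_second = slowdown / (3600 * 24 * 365)
print(f"~{years_per_brain_second:.0f} years per second of brain time")

That comes out to roughly 300 years of Atari time per second of brain
time, and that ignores memory entirely. Redwoods grow faster.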

>> But if the task is broadly defined, such as Darwinian "fitness within
>> a given environment" you can get pretty close to what lay people call
>> intelligence. If such fitness is in the environment of Wall Street,
>> then there are machines that are already smarter than most people.
>
> Absolutely, as are a number of very simple biological organisms.

If what you are saying is that there are simple biological organisms
that are better suited to a particular environment than we are, then
yes, by all means... One need go no further than extremophiles.

>> I'm not sure this is the main interest, but it is certainly an
>> important one. I think another main problem for AIs to solve is the
>> optimal employment of all available resources, which probably isn't
>> best accomplished with human-like intelligence, but something that is
>> far more capable in statistics. Human thought patterns are downright
>> contraindicated when compared with cold statistical analysis.
>
> OTOH, this requires processing power, but not necessarily any kind of
> anthropomorphic, "classic AI" intelligence.

I think the kinds of problems I'm thinking about would be best handled
by a hybrid intelligence: a human-like pattern-recognition module
linked tightly to a statistical engine of immense power.

>> > that is anecdotal, but eloquent.
>>
>> I'm about a third of the way through Wolfram's NKS right now, so I may
>> not have gotten the full force of his argument yet... but the main
>> thing that I get from him so far is that you can't predict the outcome
>> of chaotic systems from the initial state. I guess that's what you're
>> saying too.
>
> He makes a distinction between "predictable" (a program enumerating
> the powers of 2), "chaotic" (a program calculating the effect of
> butterfly wings on hurricanes at the other end of the world), and
> "truly unpredictable", which remains fully deterministic, but where
> the only way to calculate the final state is to run the program step
> by step to the end.

Ok. Not sure how it applies, but I get that.
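
For anyone who wants to see that third category in action, rule 30 is
Wolfram's canonical example. Here's my own few-line Python sketch
(rule30_step and the grid size are mine): a completely deterministic
one-line update rule, yet as far as anyone knows the only way to learn
what the pattern looks like at step N is to run all N steps:

def rule30_step(cells):
    # Rule 30: new cell = left XOR (center OR right), wrapping at ends.
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                     # a single black cell in the middle
for _ in range(16):
    print(''.join('#' if c else '.' for c in cells))
    cells = rule30_step(cells)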

>> But most computer science, as it is practiced today, is highly
>> predictable.
>
> Yes, even extremely simple programs can be shown to be fully unpredictable,
> that is, able to generate arbitrary degrees of complexity, even though they
> do not exhibit any especially intelligent behaviours.

Complexity and intelligence are not the same thing... I agree with
that. The digits of Pi are terribly complex but not at all
intelligent.
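
Case in point: those digits come out of a few lines of integer
arithmetic. This is Gibbons' unbounded spigot algorithm for pi, as I
remember it (my transcription, so any bugs are mine):

def pi_digits():
    # Generate decimal digits of pi one at a time, using only integers.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n                # the next digit is safe to emit
            q, r, n = (10 * q, 10 * (r - n * t),
                       (10 * (3 * q + r)) // t - 10 * n)
        else:                      # not enough precision yet; refine
            q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                (q * (7 * k + 2) + r * l) // (t * l),
                                k + 1, l + 2)

gen = pi_digits()
print(''.join(str(next(gen)) for _ in range(20)))   # 31415926535897932384

A trivially simple, fully deterministic rule, output that looks
statistically random as far as anyone has checked, and not a shred of
intelligence anywhere in it.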

>> I think most of the discussion takes place from the perspective of
>> unenhanced human beings. Asked succinctly: will the resources made
>> available to unenhanced human beings in the future be sufficient for
>> their needs (including survival into the indefinite future) and many
>> of their wants?
>
> "Unenhanced human beings" are going to create new entities, who will demand
> careful programming

Or training... which is different in a subtle way.

> at the beginning, and then will become more and more
> essential to the working of their societies, and eventually marginalise them
> altogether, while they gradually go extinct. This is called the succession
> of generations, and has gone on since human beings came first into
> existence. What else is new?

This clearly predates humans, and will survive us too. I hope.

>> Bostrom's existential risk applies to "earth-originated intelligence",
>> but most proles are more concerned about their skin and that of their
>> progeny.
>
> OK, as for their personal skin, they are under the much more immediate
> threat of being dead *anyway* within less than a century on average,
> and in most cases much sooner, unless something drastic happens. As
> for their progeny, we have to define first what "progeny" is.
> Immediate children? Genetic successors? Co-specifics? "Children of the
> mind"? I am not sure there is any final reason to opt for one
> definition or another, but I have an answer for each.

Yes, I realize it is a complex subject. But most proles care more for
their DNA progeny than for the children of humanity's mind. There may
be a few AI researchers who would not fit this characterization, but
they are outliers.

I can accept that we will inevitably be out-evolved by our creations,
but it took me a long time to get there.

>> It's the same problem we have convincing them that they are actually
>> better off letting the 1% run the world rather than the 99%.
>
> I have some views about that as well, but perhaps this will bring us too
> far... :-)

LOL... If we are uncomfortable in any way having the elites run the
world today, then how much more uncomfortable will we be when the
elites are not even human? Or will it be like an autonomous vehicle:
more trustworthy than a chauffeur?

-Kelly


