[ExI] Wired article on AI risk

Stefano Vaj stefano.vaj at gmail.com
Wed May 23 22:45:17 UTC 2012


On 23 May 2012 21:38, Kelly Anderson <kellycoinguy at gmail.com> wrote:

> > Absolutely. In fact, I contend that given that beyond a very low level of
> > complexity
>
> I grant this if the "low" level of complexity is far above the levels
> we have achieved today.
>

Not at all. From a cellular automaton to a Turing machine to the original
PCs of the early 1980s, every system exhibiting the property of universal
computation is beyond that level. See
http://www.wolframscience.com/nksonline/section-12.1.
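
To give a concrete sense of how low that threshold sits: rule 110, a
one-dimensional cellular automaton whose entire "program" is a single
eight-bit lookup table, was proved Turing-universal by Matthew Cook. A toy
sketch in Python (my own illustration, nothing canonical about the names):

RULE = 110   # the whole "program": one byte

def step(cells):
    # Apply rule 110 to one generation (boundaries fixed at 0).
    padded = [0] + cells + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

cells = [0] * 40 + [1]            # a single live cell on the right
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)

A few lines of housekeeping around one byte of rule, and the system is
already above the universality bar.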


> But if the task is broadly defined, such as Darwinian "fitness within
> a given environment" you can get pretty close to what lay people call
> intelligence. If such fitness is in the environment of Wall Street,
> then there are machines that are already smarter than most people.
>

Absolutely, as are a number of very simple biological organisms.


> I'm not sure this is the main interest, but it is certainly an
> important one. I think another main problem for AIs to solve is the
> optimal employment of all available resources, which probably isn't
> best accomplished with human-like intelligence, but something that is
> far more capable in statistics. Human thought patterns are downright
> contraindicated when compared with cold statistical analysis.
>

OTOH, this requires processing power, but not necessarily any kind of
anthropomorphic, "classic AI" intelligence.
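
As a toy illustration of my own (not any actual trading system): a textbook
mean-variance allocation is nothing but linear algebra over return
statistics, with no anthropomorphic "understanding" anywhere in the loop.

import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(250, 4))   # fake daily returns

mu = returns.mean(axis=0)             # expected return per asset
cov = np.cov(returns, rowvar=False)   # risk model: covariance matrix

# The unconstrained mean-variance optimum is proportional to inv(cov) @ mu;
# normalise the weights to sum to 1 (sign issues ignored in this toy).
weights = np.linalg.solve(cov, mu)
weights /= weights.sum()
print(weights)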


> > that is anecdotal, but eloquent.
>
> I'm about a third of the way through Wolfram's NKS right now, so I may
> not have gotten the full force of his argument yet... but the main
> thing that I get from him so far is that you can't predict the outcome
> of chaotic systems from the initial state. I guess that's what you're
> saying too.
>

He makes a distinction between "predictable" (a program enumerating the
powers of 2), "chaotic" (a program calculating the effect of a butterfly's
wings on hurricanes at the other end of the world), and "truly
unpredictable", meaning programs that remain fully deterministic, but where
the only way to calculate the final state is to run the program step by
step to the end.
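
A toy sketch in Python (my own examples, not Wolfram's code) of the three
categories:

# 1. "Predictable": a closed form lets you jump straight to step n.
def power_of_two(n):
    return 2 ** n          # shortcut exists, no simulation needed

# 2. "Chaotic": fully deterministic, but any rounding of the initial
#    condition destroys long-range prediction (the butterfly effect).
def logistic(x, steps=50, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(logistic(0.3), logistic(0.3 + 1e-12))   # nearby starts fully diverge

# 3. "Truly unpredictable": rule 30 is deterministic, yet no known
#    shortcut beats actually running every step.
def rule30_center(steps):
    cells = {0: 1}                    # sparse row, absent cells are 0
    for _ in range(steps):
        lo, hi = min(cells) - 1, max(cells) + 1
        triples = {i: (cells.get(i - 1, 0), cells.get(i, 0),
                       cells.get(i + 1, 0)) for i in range(lo, hi + 1)}
        cells = {i: (30 >> (a * 4 + b * 2 + c)) & 1
                 for i, (a, b, c) in triples.items()}
    return cells.get(0, 0)

print(rule30_center(100))   # the only way to get here is all 100 steps

For the first, arithmetic replaces simulation; for the second, determinism
survives but prediction does not; for the third, prediction would require
doing the computation itself, which is Wolfram's "computational
irreducibility".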

> But most computer science, as it is practiced today, is highly
> predictable.


Yes, and yet even extremely simple programs can be shown to be fully
unpredictable, that is, able to generate arbitrary degrees of complexity,
even though they do not exhibit any especially intelligent behaviours.


> I think most of the discussion takes place from the perspective of
> unenhanced human beings. Asked succinctly, Will the resources made
> available to unenhanced human beings in the future be sufficient for
> their needs (including survival into the indefinite future) and many
> of their wants?
>

"Unenhanced human beings" are going to create new entities, who will demand
careful programming at the beginning, and then will become more and more
essential to the working of their societies, and eventually marginalise
them altogether, while they gradually go extinct. This is called the
succession of generations, and has gone on since human beings came first
into existence. What else is new?

> Bostrom's existential risk applies to "earth-originated intelligence",
> but most proles are more concerned about their skin and that of their
> progeny.


OK, as for their personal skin, they are under the far more immediate threat
of being dead *anyway* within less than a century on average, and in most
cases much sooner, unless something drastic happens. As for their progeny,
we first have to define what "progeny" means. Immediate children? Genetic
successors? Co-specifics? "Children of the mind"? I am not sure there is any
final reason to opt for one definition or another, but I have an answer for
each.

> It's the same problem we have convincing them that they are actually
> better off letting the 1% run the world rather than the 99%.
>

I have some views about that as well, but perhaps this will bring us too
far... :-)

-- 
Stefano Vaj