[ExI] Wired article on AI risk

Kelly Anderson kellycoinguy at gmail.com
Wed May 23 19:38:12 UTC 2012


On Wed, May 23, 2012 at 7:58 AM, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 22 May 2012 18:23, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>> Ok, I agree with that. Though it is also true that faster processing
>> is equivalent, in some sense, to higher intelligence.
>
> Absolutely. In fact, I contend that given that beyond a very low level of
> complexity

I grant this if the "low" level of complexity is far above the levels
we have achieved today.

> there is no qualitative difference in the capacity of
> information-processing systems, the only plausible definition of
> intelligence is "speed in executing a given program". This suggests that:
> i) all systems achieving the level of universal computation are in a way
> "intelligent";

Clearly.

> ii) it does not make sense to measure the intelligence of a given system
> except with reference to a given task.

But if the task is broadly defined, such as Darwinian "fitness within
a given environment", you can get pretty close to what lay people call
intelligence. If the environment in question is Wall Street, then
there are machines that are already smarter than most people.

>> That's possible, though that does not imply that classic AI has no
>> practical applications; it does.
>
> Agreed, one being that mentioned below.
>
>> > - The main real interest of AIs is the emulation of actual individuals
>>
>> i.e. uploading. Specific individuals.
>
>
> Exactly.

I'm not sure this is the main interest, but it is certainly an
important one. I think another main problem for AIs to solve is the
optimal employment of all available resources, which probably isn't
best accomplished with human-like intelligence, but by something far
more capable at statistics. Human thought patterns are downright
contraindicated when compared with cold statistical analysis.

>> > - AIs are by definition possible, most of them being OTOH very likely to
>> > work more slowly, or at least less efficiently, than organic brains.
>>
>> I would agree with that... though it is something of a matter of
>> faith or lack thereof.
>
> As to the first part, I think I have persuasive arguments (in a nutshell: if
> the universe with all its content can be emulated by any given system -
> although it is possible that a quantum processor may be required for
> practical purposes - this applies as well to any of its parts, including
> organic brains).

I personally agree with you, though I don't think it's proven to the
average prole.

> For the second, I think that the evidence indicating that is anecdotal,
> but eloquent.

We'll see. Time will tell for sure.

>> > computers of equivalent processing powers
>>
>> The issue with AI isn't that it is dangerous, but rather that, by its
>> very nature, it is not as predictable as a programmed computer. Yes,
>> programmed computers with bugs can cause airplanes to crash, but it is
>> unlikely that a stupid computer of today is going to rise up and take
>> over the world. Yet just such things are possible with AGI. If you can
>> counter this argument, I'm certainly interested in what you would have
>> to say.
>
> There again, I think that Wolfram is right in remarking that everything is
> "programmed" after a fashion, the only difference being that for a very
> small subset thereof we have an algorithmic trick to access the state of
> the system without running it step by step to the end.

I'm about a third of the way through Wolfram's NKS right now, so I may
not have gotten the full force of his argument yet... but the main
thing that I get from him so far is that you can't predict the outcome
of chaotic systems from the initial state. I guess that's what you're
saying too.

> For the very large majority of systems, however, including most non-organic
> ones, we simply have to do that, and in that sense they are "unpredictable".
> A system need not be "intelligent" in any classic AI sense to fall into the
> latter category, since many cellular automata already do.

But most computer science, as it is practiced today, is highly
predictable. It doesn't use Wolfram's approach. Indeed, I have seen
nothing thus far in Wolfram that is practical. (Again, I'm only a third
of the way through at this point, so I reserve the right to change my
mind as I finish; it is a VERY big book... LOL)
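
To make the "running it step by step" point concrete, here is a quick toy
sketch in Python (my own illustration, not anything taken from NKS): Rule
30, the elementary cellular automaton Wolfram uses as his poster child for
irreducibility. As far as anyone knows, there is no shortcut for predicting
row N without computing every row before it.

RULE = 30  # encodes the next cell value for each 3-cell neighborhood

def step(cells):
    # Advance one generation; the row wraps around at the edges.
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right
        new.append((RULE >> idx) & 1)
    return new

cells = [0] * 31
cells[15] = 1  # start from a single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)

As far as anyone has shown, the only way to know what the sixteenth row
looks like is to compute the fifteen rows before it, which I take to be
exactly the kind of system you are talking about.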

>> > - In the relevant literature, the terms "friendly", "danger",
>> > "comparative
>> > risk", "humanity", etc. can be deconstructed as ill-defined concepts
>> > based
>> > on a number of assumptions that do not really bear closer inspection and
>> > represent only monotheistic archetypes under a thin secular veneer.
>>
>> I see where you are coming from there. I don't think "unpredictable"
>> is in this same category.
>
> No, in fact the issues are "what is a danger?", "a danger for whom?", "whose
> 'existence' are we speaking of when we say 'x-risks'?", "what adds to what
> risk and what is the alternative?", "why should one care?", etc.

I think most of the discussion takes place from the perspective of
unenhanced human beings. Asked succinctly: will the resources made
available to unenhanced human beings in the future be sufficient for
their needs (including survival into the indefinite future) and many
of their wants?

> The best that
> has been produced is the more or less implicit utilitarianism of Bostrom,
> but while being ethical utilitarians is not mandated by any law or
> cogent philosophical reason, even there a number of choices and assumptions
> which are pretty arbitrary in nature can easily be identified, IMHO.

I don't feel qualified to discuss this point.

>> I also don't see how what you say so strongly contradicts what was in
>> the Wired article. What in that article do you strenuously disagree
>> with?
>
>
> If anything, the vision of AIs suggested therein and the idea that we should
> be concerned about a related x-risk.

Bostrom's existential risk applies to "earth-originated intelligence",
but most proles are more concerned about their own skin and that of
their progeny. Those are not the same things at all. People are worried
about how they are going to earn money to buy bread. That is a personal
existential risk, not a global one.

Societal behavior emerges from the acts of individual agents (proles),
and it will be hard to convince the unwashed masses that the better
future lies in their not being the most intelligent kids on the block.
It seems to interfere with their economic well-being. So the trick is
figuring out how to convince them that this is so.

It's the same problem we have convincing them that they are actually
better off letting the 1% run the world rather than the 99%. The OWS
movement basically says that we should let the least capable people
have as much say in how to run the world as the most capable. That is
a recipe for disaster. It will be no different when the 1% is more
intelligent rather than merely richer, since the two are functionally
correlated.

I for one welcome our new computer overlords...

-Kelly



