[ExI] Wired article on AI risk
stefano.vaj at gmail.com
Wed May 23 13:58:33 UTC 2012
On 22 May 2012 18:23, Kelly Anderson <kellycoinguy at gmail.com> wrote:
> > Yes, the software is not only important, but essential, because if we
> > accept the Principle of Computational Equivalence, there is basically nothing
> > to say on any given system other than the program executed and the
> > performance it exhibits in executing it.
> Ok, I agree with that. Though it is also true that faster processing
> is equivalent, in some sense, to higher intelligence.
Absolutely. In fact, I contend that given that beyond a very low level of
complexity there is no qualitative difference in the capacity of
information-processing systems, the only plausible definition of
intelligence is "speed in executing a given program". This suggests that:
i) all systems achieving the level of universal computation are in a way
equivalent;
ii) it does not make sense to measure the intelligence of a given system
except with reference to a given task.
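The "intelligence as speed" definition can be illustrated with a toy example (mine, not from the thread): two implementations that compute exactly the same function, and so are equivalent as "programs", yet differ enormously in execution speed.

```python
def fib_slow(n):
    # Naive recursion: exponential time, but extensionally the
    # same program as the fast version below.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

def fib_fast(n):
    # Linear-time iteration computing the identical function.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Identical input/output behaviour; only the speed differs,
# which on the view above is the only meaningful axis of comparison.
assert all(fib_slow(n) == fib_fast(n) for n in range(20))
```

On this view, asking which of the two is "more intelligent" only makes sense relative to the task of producing Fibonacci numbers quickly.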
> > In English it probably sounds even a little more difficult than in the
> > original version, but it does present a number of firm, and quite radical,
> > conclusions.
> > Inter alia:
> > - Intelligence (as, for instance, in "IQ" or in "intelligent response")
> > has little or nothing to do with "classic" AI.
> That's possible, though that does not imply that classic AI has no
> practical applications; it does.
Agreed, one being that mentioned below.
> > - The main real interest of AIs is the emulation of actual individuals,
> > i.e. uploading. Specific individuals.
> > - AIs are by definition possible, most of them being OTOH very likely to
> > work more slowly, or at least less efficiently, than organic brains.
> I would agree with that... though it is something of a matter of
> faith or lack thereof.
As to the first part, I think I have persuasive arguments (in a nutshell: if
the universe with all its content can be emulated by any given system -
although it is possible that a quantum processor would be required for
practical purposes - this applies as well to any of its parts, including
organic brains). As for the second, I think the evidence pointing that
way is anecdotal, but eloquent.
> > computers of equivalent processing powers
> The issue with AI isn't that it is dangerous, but rather that by its very
> nature it is not as
> predictable as a programmed computer. Yes, programmed computers with
> bugs can cause airplanes to crash, but it is unlikely that a stupid
> computer of today is going to rise up and take over the world. Yet
> just such things are possible with AGI. If you can counter this
> argument, I'm certainly interested in what you would have to say.
There again, I think that Wolfram is right in remarking that everything is
"programmed" after a fashion, the only difference being that for a very
small subset of such systems we have an algorithmic shortcut to the state of
the system without running it step by step to the end.
For the very large majority of systems, however, including most non-organic
ones, we simply have to do that, and in that sense they are
"unpredictable". A system need not be "intelligent" in any classic AI
sense to fall into the latter category; many cellular automata already do.
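The cellular-automaton point can be made concrete. A minimal Python sketch (illustrative, not from the post) of Wolfram's Rule 30: each new cell is the old left neighbour XORed with the OR of the old centre and right cells. No general shortcut is known for its centre column; the only known way to learn the centre cell at step t is to actually run all t steps.

```python
def rule30_step(cells):
    """Apply one step of Rule 30, growing the row by one cell on each side."""
    padded = [0, 0] + cells + [0, 0]
    # New cell = left XOR (centre OR right), the Rule 30 update.
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def center_column(steps):
    """Centre cell at each step, starting from a single live cell.

    There is no known closed form: we must iterate the system itself.
    """
    row = [1]
    out = [row[0]]
    for _ in range(steps):
        row = rule30_step(row)
        out.append(row[len(row) // 2])
    return out

print(center_column(7))  # first values of the centre column: [1, 1, 0, 1, 1, 1, 0, 0]
```

In Wolfram's terms the system is computationally irreducible: "predicting" it just is running it, which is the sense of "unpredictable" used above.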
> > - In the relevant literature, the terms "friendly", "danger", "comparative
> > risk", "humanity", etc. can be deconstructed as ill-defined concepts
> > on a number of assumptions that do not really bear closer inspection and
> > represent only monotheistic archetypes under a thin secular veneer.
> I see where you are coming from there. I don't think "unpredictable"
> is in this same category.
No; in fact the issues are "what is a danger?", "a danger for whom?",
"whose 'existence' are we speaking of when we say 'x-risks'?", "what adds to
what risk, and what is the alternative?", "why should one care?", etc. The
best that has been produced is the more or less implicit utilitarianism of
Bostrom; but being ethical utilitarians is not mandated by any law or
cogent philosophical reason, and even there a number of choices and
assumptions which are pretty arbitrary in nature can easily be identified.
> I also don't see how what you say so strongly contradicts what was in
> the Wired article. What in that article do you strenuously disagree
> with?
If anything, the vision of AIs suggested therein, and the idea that we
should be concerned about a related x-risk.