[ExI] Wired article on AI risk

Kelly Anderson kellycoinguy at gmail.com
Tue May 22 16:23:54 UTC 2012


On Tue, May 22, 2012 at 8:53 AM, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 22 May 2012 05:45, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>>
>> My favorite quote from the article is: "the space beyond human
>> intelligence is vast"... I'm not sure how anyone could disagree with
>> that one... LOL!
>
> Indeed. Even a pocket calculator, for instance, outperforms an unaided
> human mind when dealing with complex arithmetic...

You are a very reasonable person, Stefano. :-)

>> > As far as I can tell, my rebuttal of the relevant assumptions and
>> > platitudes
>> > still stands (English translation by Catarina Lamm still available
>> > full-text
>> > online at http://www.divenire.org/articolo_versione.asp?id=1).
>>
>> Stefano, this article seems to downplay the importance of getting the
>> software right. I don't think we know the algorithms for
>> "intelligence" in the sense of general learning mechanisms that can
>> grok language and parse visual scenes, for example. While
>> understanding the neural layout of a fruit fly is a great step
>> forward, I don't know how much "intelligence" you get from that.
>> Scaling up a fruit fly brain doesn't give you human level
>> intelligence, if that's what you were implying.
>
>
> First of all, thank you for your interest.

I'm always up for learning something new.

> Yes, the software is not only important, but essential, because if we accept
> the Principle of Computational Equivalence, there is basically nothing else
> to say about any given system other than the program executed and the
> performance it exhibits in executing it.

OK, I agree with that, though it is also true that faster processing is,
in some sense, equivalent to higher intelligence.

> Accordingly, to emulate a fruit fly brain you have to emulate its overall
> internal working, and to emulate a human you have to do just the same. Only,
> while doing that with a (relatively) low-level, bottom-up approach leads in
> both cases to finite problems, the first task is much simpler.

Meaning that it is simpler to emulate a fruit fly than a human? Yes,
if that's what you mean, surely that is the case!

>> Honestly, I found the Wired article to be far more readable and
>> understandable (whether more believable or not, I cannot say). I don't
>> know if your paper was published for some journal of philosophy, but I
>> found it to be a bit hard to read in that it used a LOT of big words,
>> and referred to a LOT of external literature (some of which I've
>> actually read and still didn't get the reference) and didn't seem to
>> draw a whole lot of firm conclusions.
>
> Why, my piece *is* an academic essay for a philosophical journal, and in
> English it probably sounds even a little more difficult than in the original
> version, but it does present a number of firm, and quite radical,
> conclusions.

I am not a philosopher, but I like philosophy.

> Inter alia:
> - Intelligence (as, for instance, in "IQ" or in "intelligent response") has
> little or nothing to do with "classic" AI.

That's possible, though it does not imply that classic AI has no
practical applications; it does.

> - The main real interest of AIs is the emulation of actual individuals

i.e. uploading. Specific individuals.

> - AIs are by definition possible, most of them being, OTOH, very likely to
> work more slowly, or at least less efficiently, than organic brains.

I would agree with that... though it is something of a matter of
faith, or lack thereof.

> - AIs are by definition not distinguishable from fyborgs with equivalent
> processing power at their fingertips.

Fair enough.

> - AIs are by definition neither more (nor less!) dangerous than "stupid"
> computers of equivalent processing power

As a computer scientist I would have to disagree slightly with this, at
least given the way we use "stupid" computers today. The issue with AI
isn't that it is dangerous per se, but rather that, by its very nature, it
is not as predictable as a conventionally programmed computer. Yes,
programmed computers with bugs can cause airplanes to crash, but it is
unlikely that a stupid computer of today is going to rise up and take over
the world. Yet just such things are possible with AGI. If you can counter
this argument, I'm certainly interested in what you would have to say.

> - "Friendly" AIs is a contradiction in terms.

Yes, understood.

> - In the relevant literature, the terms "friendly", "danger", "comparative
> risk", "humanity", etc. can be deconstructed as ill-defined concepts based
> on a number of assumptions that do not really bear closer inspection and
> represent only monotheistic archetypes under a thin secular veneer.

I see where you are coming from there. I don't think "unpredictable"
is in the same category.

> Hey, if somebody is interested I would be very happy to elaborate on any of
> those points and more... :-)

Just on the "no more dangerous" point...

I also don't see how what you say so strongly contradicts what was in
the Wired article. What in that article do you strenuously disagree
with?

-Kelly


