[ExI] Wired article on AI risk

Stefano Vaj stefano.vaj at gmail.com
Tue May 22 14:53:33 UTC 2012


On 22 May 2012 05:45, Kelly Anderson <kellycoinguy at gmail.com> wrote:

> My favorite quote from the article is: "the space beyond human
> intelligence is vast"... I'm not sure how anyone could disagree with
> that one... LOL!
>

Indeed. Even a pocket calculator, for instance, outperforms an unaided
human mind when dealing with complex arithmetic...


>  > As far as I can tell, my rebuttal of the relevant assumptions and
> platitudes
> > still stands (English translation by Catarina Lamm still available
> full-text
> > online at http://www.divenire.org/articolo_versione.asp?id=1).
>
> Stefano, this article seems to downplay the importance of getting the
> software right. I don't think we know the algorithms for
> "intelligence" in the sense of general learning mechanisms that can
> grok language and parse visual scenes, for example. While
> understanding the neural layout of a fruit fly is a great step
> forward, I don't know how much "intelligence" you get from that.
> Scaling up a fruit fly brain doesn't give you human level
> intelligence, if that's what you were implying.
>

First of all, thank you for your interest.

Yes, the software is not only important but essential: if we accept the
Principle of Computational Equivalence, there is basically nothing to say
about any given system other than the program it executes and the
performance it exhibits in executing it.

Accordingly, to emulate a fruit fly brain you have to emulate its overall
internal workings, and to emulate a human brain you have to do just the
same. While a (relatively) low-level, bottom-up approach makes both of them
finite problems, the first is a much simpler one.


> Honestly, I found the Wired article to be far more readable and
> understandable (whether more believable or not, I cannot say). I don't
> know if your paper was published for some journal of philosophy, but I
> found it to be a bit hard to read in that it used a LOT of big words,
> and referred to a LOT of external literature (some of which I've
> actually read and still didn't get the reference) and didn't seem to
> draw a whole lot of firm conclusions.
>

Why, my piece *is* an academic essay for a philosophical journal, and in
English it probably sounds even a little more difficult than in the
original version; but it does present a number of firm, and quite radical,
conclusions.

Inter alia:
- Intelligence (as, for instance, in "IQ" or in "intelligent response") has
little or nothing to do with "classic" AI.
- The main real interest of AIs lies in the emulation of actual individuals.
- AIs are by definition possible; most of them, OTOH, are very likely to
work more slowly, or at least less efficiently, than organic brains.
- AIs are by definition indistinguishable from fyborgs with equivalent
processing power at their fingertips.
- AIs are by definition neither more (nor less!) dangerous than "stupid"
computers of equivalent processing power.
- A "friendly" AI is a contradiction in terms.
- In the relevant literature, the terms "friendly", "danger", "comparative
risk", "humanity", etc. can be deconstructed as ill-defined concepts based
on a number of assumptions that do not really bear closer inspection and
represent only monotheistic archetypes under a thin secular veneer.

Hey, if somebody is interested, I would be very happy to elaborate on any of
those points and more... :-)

-- 
Stefano Vaj