[ExI] Wired article on AI risk

Kelly Anderson kellycoinguy at gmail.com
Tue May 22 03:45:40 UTC 2012

On Mon, May 21, 2012 at 6:03 AM, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 21 May 2012 11:31, Aleksei Riikonen <aleksei at iki.fi> wrote:
>> This recent Wired article on AI risk was rather ok:
>> http://www.wired.co.uk/news/archive/2012-05/17/the-dangers-of-an-ai-smarter-than-us
> Nothing really new, and nothing I could agree with less.

My favorite quote from the article is: "the space beyond human
intelligence is vast"... I'm not sure how anyone could disagree with
that one... LOL!

> As far as I can tell, my rebuttal of the relevant assumptions and platitudes
> still stands (English translation by Catarina Lamm still available full-text
> online at http://www.divenire.org/articolo_versione.asp?id=1).

Stefano, this article seems to downplay the importance of getting the
software right. I don't think we know the algorithms for
"intelligence" in the sense of general learning mechanisms that can
grok language and parse visual scenes, for example. While
understanding the neural layout of a fruit fly is a great step
forward, I don't know how much "intelligence" you get from that.
Scaling up a fruit fly brain doesn't give you human level
intelligence, if that's what you were implying.

I think you are correct that the brain does not have "irreducible
peculiarities", but we still have an awful lot to learn.

Honestly, I found the Wired article to be far more readable and
understandable (whether more believable or not, I cannot say). I don't
know if your paper was published in some journal of philosophy, but I
found it a bit hard to read: it used a LOT of big words, referred to a
LOT of external literature (some of which I've actually read and still
didn't get the reference), and didn't seem to draw a whole lot of firm
conclusions. I'm not sure I understood the points you were making for
the most part, which is sad because I'd really like to understand your
point of view better.

By the way, a typo in your article: "weak in a Turing text" should be
"weak in a Turing test".

You know I'm sympathetic; I just couldn't understand a lot of what was
in your paper.


More information about the extropy-chat mailing list