[ExI] Wired article on AI risk
brent.allsop at canonizer.com
Wed May 23 04:22:31 UTC 2012
I'm glad you think this Wired article is only "rather ok". In my
opinion, it was clueless, immoral fear mongering, further contributing to
what I believe is guaranteed to be the greatest threat to humanity. I've
started a survey topic in which I state my reasons for believing so (see:
http://canonizer.com/topic.asp/13/2). It'd be great to know what all of
you think is the most significant threat to humanity.
The topic on the stupidity of concern over "Friendly AI" has less
consensus, so far, than any other topic on Canonizer.com, but the
consensus is still with the non-fear-mongering camp (see:
http://canonizer.com/topic.asp/16/3). It'd sure be great if this and
the above survey were a bit more comprehensive, and if more of you would
take a moment to participate. It takes far less time than reiterating
all those naive arguments everyone keeps repeating, ad infinitum, for
years on end, as is happening yet again, here and now. For how many years?
Did anyone notice that the only two "highly ranked" comments on that
Wired article are not fear-mongering comments?
Anyone want to bet on what the emerging expert consensus in this survey
topic on the most significant threats to humanity will turn out to be,
after another year, or after ten years?
It'd sure be great to know, consistently, concisely, and quantitatively,
what all of you think, so we can significantly amplify the wisdom of
this crowd on this topic, instead of eternally rehashing all these naive,
clueless, mistaken arguments, over and over again, year after year,
after year. Am I the only one who gets tired of all this?
Has anyone noticed how we no longer have these kinds of eternally
repetitive and very painful arguments, like we once did here and in every
other transhumanist forum, on the topic of qualia? And notice that the
qualophobes who once dominated those silly, naive discussions no longer
drown out the expert consensus? Why do you think that is? I know some
of you dislike Canonizer.com because of things like that, but do you
think such hate is justified, or is the wisdom of this crowd finally
being significantly amplified above the clueless and mistaken
arguments, at least on this topic? (See the significant consensus
camp: http://canonizer.com/topic.asp/88/6, which continues to extend its
lead over all other theories.)
On 5/21/2012 3:31 AM, Aleksei Riikonen wrote:
> This recent Wired article on AI risk was rather ok: