[ExI] nick's book being sold by fox news
brent.allsop at canonizer.com
Mon Oct 27 21:42:01 UTC 2014
On Mon, Oct 27, 2014 at 12:17 PM, Asdd Marget <alex.urbanec at gmail.com> wrote:
> AI could be incredibly dangerous if we don't get it right, I don't think
> anyone could argue against that but I never see these types of articles
> discuss the methods and models we are attempting to develop for "Friendly
> AI." In my opinion, we should be working harder on concepts like
> Yudkowsky's Coherent Extrapolated Volition (
> https://intelligence.org/files/CEV.pdf) to ensure we aren't simply ending
> our species so early in our life cycle.
You said "I don't think anyone could argue against that," but in my
opinion just the opposite is true. This is exactly the kind of bleating
that herds tend to focus on, despite there being zero good arguments to
support such fears of AI. In my opinion, Canonizer.com is good at
filtering out the kinds of things the herd takes for good arguments when
in reality they are not, and do not stand up to expert review. When people
try to state their arguments concisely and attempt to build any kind of
expert consensus supporting them, weak arguments quickly get weeded out,
or even self-censored (i.e., they never reach the stage where someone is
willing to submit them to Canonizer.com).
Meanwhile, good arguments that do stand up to expert and peer review
quickly rise to the top, and people are much more motivated to do the work
to get those arguments canonized.
The "The importance of Friendly Artificial Intelligence" survey topic at
Canonizer.com is a good example of this, in my opinion. It is almost as
good an example as all the bleating noise about consciousness we once had
on all the transhumanist forums, before that finally got canonized,
amplifying everyone's wisdom on the topic. It is so nice now that all the
bleating about consciousness is gone.
There are very strong arguments for why concern over AI is simply dumb, a
complete waste of time, and, so far at least, there is more consensus for
these arguments than for those of the fear-mongering camps.
And of course, you can tell which camp I am firmly in, so surely this has
biased my view; take everything I say with a grain of salt.
But along with that, if you do have ANY good arguments for why we should
fear AI, for any reason, and/or you see ANY mistakes in the leading
consensus arguments that have been canonized to date, please make some
effort to point them out, and do more than just bleating this kind of
assertion noise with nothing to back it up. In other words, if you think
there is something to what you are thinking and bleating here, please
canonize it, to help everyone become better educated, rather than just
adding more bleating noise. This is definitely a critically important
topic that could use some significant amplification of the wisdom of the
bleating crowd through concise and quantitative communication.