[ExI] uploads again
Anders Sandberg
anders at aleph.se
Tue Dec 25 10:30:22 UTC 2012
On 2012-12-25 01:09, Brent Allsop wrote:
> "I think the take home message is, after looking at prediction track
> records and cognitive bias, that experts and consensuses in this domain
> are pretty useless."
>
> It sounds like you are expecting expert consensus (at least anything
> that can be determined via a top-down survey or any kind of traditional
> prediction system) to be a reliable measure of truth.
Quite the opposite. See Stuart's survey: there was no difference between
the consensus of experts and that of amateurs, nor between current
predictions and predictions already known to be false. We simply have
evidence that people do not know what they are talking about in this
domain.
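To make that concrete, here is a minimal sketch (with invented numbers,
not Stuart's actual data) of how one could test whether expert and
amateur timeline predictions differ at all:

    # Hypothetical numbers, purely for illustration: test whether
    # expert and amateur predictions of AI arrival dates differ.
    from scipy.stats import mannwhitneyu

    expert_years = [2030, 2045, 2050, 2070, 2100]   # illustrative only
    amateur_years = [2025, 2040, 2055, 2075, 2090]  # illustrative only

    stat, p = mannwhitneyu(expert_years, amateur_years,
                           alternative="two-sided")
    print(f"U = {stat}, p = {p:.3f}")  # large p: no detectable difference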
> If any such argument is really good (able to convince others), someone
> should be able to concisely describe it, and we should be able to
> rigorously measure how many people agree that it is a good argument,
> relative to other arguments and other points of view. And we should be
> able to track this in real time.
I think real time is less important than rigor: is this a strong or a
weak argument? The number of people is an unreliable detector, since a
very good argument might be very hard for most people to understand.
Mapping arguments well would be extremely useful. But it is not enough
to look at the sizes of camps; you need to check the logic of the
claims - and that is very hard to do, since most arguments are informal
messes.
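A toy sketch of why the two come apart (all names and numbers
hypothetical): a conclusion only follows if every premise holds, no
matter how many supporters the camp has.

    # Toy sketch: camp size and soundness are independent properties.
    from dataclasses import dataclass, field

    @dataclass
    class Argument:
        conclusion: str
        premises: list = field(default_factory=list)  # (statement, holds) pairs
        supporters: int = 0

        def is_sound(self) -> bool:
            # Sound only if every premise actually holds.
            return all(holds for _, holds in self.premises)

    popular = Argument(
        conclusion="Sufficiently intelligent AI will be cooperative",
        premises=[("High intelligence implies convergent goals", False)],
        supporters=1000,
    )
    print(popular.supporters, popular.is_sound())  # 1000 False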
> You pointed out that I should: "find a really good set of arguments
> that shows that some human-level or beyond AI systems actually are
> safe". And in my opinion, the argument that I'm presenting is that
> any sufficiently intelligent agent (around human level) will realize
> that the best, most moral thing to do is to cooperatively work with
> everyone to get everything for all, or at least as much of it as
> possible. And, at least at Canonizer.com, there are more people who
> think the same way than in any other camp, as is concisely and
> quantitatively represented in a way that nobody can deny.
See my response to Giulio: I think this is an erroneous argument, and I
have strong counterexamples against it (brief sketches in the
paperclipper post; see Nick's paper on the orthogonality thesis and the
various follow-up tech reports) - but a lot of people find it
intuitively appealing. This is why I am skeptical of looking at camp
sizes as evidence for correctness. If that were reliable, we should all
become Catholics.
> This kind of open survey system isn't about determining truth. It's all
> about communicating in concise and quantitative ways, so the best
> theories and arguments can quickly rise and be recognized above all the
> mistaken and repetitive childish bleating noise.
The problem is that unless you have a feedback mechanism that somehow
rewards correct argumentation, the noise can easily overwhelm your
system, thanks to the usual dreary sociological and cognitive-bias
factors.
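As a rough sketch of one such mechanism: score each participant's past
predictions with a proper scoring rule like the Brier score, so that
calibrated forecasters rise and confident noise sinks. The track
records below are made up for illustration.

    # Brier score: a proper scoring rule for probability forecasts.
    # Lower is better; overconfident wrong forecasts are punished hard.
    def brier_score(track_record):
        """track_record: list of (forecast probability, outcome 0/1)."""
        return sum((p - o) ** 2 for p, o in track_record) / len(track_record)

    careful = [(0.8, 1), (0.3, 0), (0.6, 1)]    # hypothetical forecaster
    loud    = [(0.99, 0), (0.95, 0), (0.9, 1)]  # hypothetical forecaster
    print(brier_score(careful))  # ~0.10: rewarded for calibration
    print(brier_score(loud))     # ~0.63: penalized despite confidence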
> It's about having a
> bottom-up system with a focus on building consensus, and finding out
> exactly what others are having problems with
This is where I think Canonizer is important. If you check out Stuart's
talk, you will find his recipes for improving expert performance and how
to say something useful about AI by disaggregating claims. That seems to
fit in perfectly with your vision.
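Disaggregation could look something like this rough sketch, where the
probabilities are placeholders and independence between the subclaims
is assumed purely for illustration:

    # Break one gut-level forecast into separate subclaims, estimate
    # each, and combine (assuming independence, for simplicity only).
    subclaims = {
        "hardware sufficient by 2050": 0.8,
        "algorithms sufficient by 2050": 0.5,
        "enough sustained funding and effort": 0.7,
    }
    p_combined = 1.0
    for claim, p in subclaims.items():
        p_combined *= p
    print(f"combined estimate: {p_combined:.2f}")  # 0.28 given these numbers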
--
Anders Sandberg
Future of Humanity Institute
Oxford University