[extropy-chat] singularity conference at stanford

Metavalent Stigmergy metavalent at gmail.com
Mon May 15 14:53:06 UTC 2006


On 5/14/06, "Hal Finney" <hal at finney.org> wrote:

> I was especially impressed with Eliezer's talk.

Agreed.  Great summary, Hal.

> Nick Bostrom quoted a study which asked people to guess at factual
> answers by giving a range of possibilities ...

Thank you for taking the time to clarify.  I may have completely
misinterpreted this.  I had a bit of a hard time following Dr. Bostrom
and understood his summary comment to mean the opposite of what you
describe quite well and in great detail.

I understood his final comment to mean that we greatly
*under*estimate the chances of particular outcomes; that his study
subjects essentially said, "I'm 98% certain that such-and-such will
*not* happen," and yet 42.6% of the time the event *did* happen.
Implicitly, in that case, they had given the event a 2% chance of
happening and had greatly underestimated that chance.  I could be
completely wrong, but that's the way my scribbled notes read.  As the
owner of an unaugmented brain, these days I tend to trust what I
wrote down more than what I think I recall. :)  Can anyone mediate
this one and help us get closer to the truth?  I looked for Dr.
Bostrom's slides, but they don't seem to be readily available.
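
In the meantime, here's a quick back-of-the-envelope simulation in
Python.  Every number in it is invented for illustration -- it has
nothing to do with Bostrom's actual study data -- but it shows how a
subject whose stated "98%" interval is really much too narrow can
miss at a rate in the neighborhood of the 42.6% in my notes:

    import random

    # Toy illustration of overconfident interval estimates.  A subject
    # claims a 98% confidence interval for a standard normal quantity,
    # but the interval actually given (half-width 0.80) covers only
    # about 58% of cases.
    random.seed(1)

    def miss_rate(n_questions=100000, half_width=0.80):
        misses = 0
        for _ in range(n_questions):
            true_value = random.gauss(0.0, 1.0)
            if abs(true_value) > half_width:
                misses += 1
        return misses / n_questions

    print("Stated miss rate: 2%")
    print("Actual miss rate: %.1f%%" % (100 * miss_rate()))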

> Of course, the speakers were not exactly a model of diversity, and
> probably an even better estimate could have come from the audience.
> I wish they had been polled as well.  You could have everybody stand up,
> then say to remain standing if you think human-level AI will occur before
> 2100, then before 2070, 2050, 2030, and so on.  At some point it will be
> roughly clear when half the audience sits down, and that's your estimate.

Great idea.  I too would have loved to see this and agree that it
may have been the most valuable bit of data gained from the entire
day.
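
Just for fun, here is a small Python sketch of the procedure.  The
audience estimates are entirely made up; the point is that the year
at which about half the room sits down lands near the median
estimate:

    # Hypothetical audience estimates (invented for illustration) of
    # the year human-level AI arrives.
    audience = [2025, 2030, 2045, 2050, 2060, 2070, 2085, 2100, 2150, 2200]

    def standing_poll(estimates, years=(2100, 2070, 2050, 2030, 2010)):
        # Everyone stands; a person sits once the announced year drops
        # to or below their personal estimate.  Return the first year
        # at which no more than half the room remains standing.
        total = len(estimates)
        for year in years:
            standing = sum(1 for e in estimates if e < year)
            print("Before %d? %d/%d still standing" % (year, standing, total))
            if standing <= total / 2:
                return year
        return years[-1]

    print("Rough median estimate: before", standing_poll(audience))

On that made-up list it stops at 2070, reasonably close to the true
median of 2065 -- about as much precision as a room full of people
standing and sitting can deliver.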

> But it may be that, even though they don't disagree about the facts,
> they should still argue as if they did disagree.  Argumentation brings
> out issues and directions of analysis that might not appear if they
> just exchanged the minimal amount of data necessary to reach agreement.
> It is a rich form of communication which can produce higher quality
> agreement than would occur otherwise.

Couldn't agree more.

On 5/15/06, Russell Wallace <russell.wallace at gmail.com> wrote:

>  I never thought I'd wish for the day when the "need a fantasy to latch
> onto" crowd just waited for the mothership to beam them up. Being a
> pessimist only means _most_ of your surprises are pleasant ones.

I don't quite follow your comment, Russell.  If you are saying that
increasingly impressive life extension is a fantasy, or that the
continual improvement of AI is a fantasy, I'd be interested to learn
what data or personal observations you use to form those particular
characterizations.  Not saying you're wrong, just interested in how
you got there.

I'd never thought of this before, but as Hal suggested, I probably do
tend to contribute a somewhat more hopeful public vote while retaining
a more conservative internal estimate of future progress.  Perhaps as
our personalities develop, we somehow gain a subconscious sense of
"what kind of vote is needed from me" in order to better hone the
crowd's wisdom.  Over time, I seem to have most often weighed in on
the enthusiastic side, at least outwardly, perhaps to balance the
perceived negativity of the crowd.

My internal philosophy is something akin to "hope and work toward the
best while expecting and preparing for the worst."  Although a gross
oversimplification, it is a fairly effective hedge against both
unbridled optimism (hypomanic fantasy?) and demoralizing pessimism
(general dysthymia?).



