[extropy-chat] Re: SIAI: Donate Today and Tomorrow

Brett Paatsch bpaatsch at bigpond.net.au
Fri Oct 22 21:59:00 UTC 2004


Hal Finney wrote:

> I don't know what the answer is; probably the reasons are complex.
> But I think Damien has a good part of the truth when he says that the
> reason people don't donate is because ultimately they don't think it's
> money well spent.  They don't think that the threat is imminent, or they
> don't think that your group will solve the problem.

This IS the case for me. I don't think the threat is imminent, and I don't
think that, even if it is, your group in particular will solve the problem.

I do think Eliezer is one of the more rational, and to me therefore likeable,
people around. But that is not enough reason for me to donate money.
The essay was enough for me to take another quick look at the linked
singularity site. There I saw 21 definitions of Friendly AI, which did not
augur well, in my view, for an ability to produce one result: a friendly AI.

> I suspect that rather than spending so much time explaining to us how
> irrational we all are, you would be better off considering your own
> strategy.  What are your goals?  Your milestones?  Your deliverables?
> The essays are good, I guess, but they don't seem to obviously move
> things forward.  It would be helpful if you could point to something
> tangible that shows that you are not just a net.crackpot amusing himself.
> Get into a position where you can go to donors and say, if we get this
> much money, then in six months we will achieve these milestones, and in
> 12 months these additional ones.  I'll bet you'd do better soliciting
> donations with that approach than what you are doing now.

I think Hal is "absolutely right!" :-) Except that he apparently donated
BEFORE asking for goals and milestones.

I read a well-written essay but didn't see a clear, easy link to "the problem"
that was being proposed as needing to be solved, and my predisposition to
the view that a friendly AI cannot be built that would be friendly to everyone,
rather than just reflecting the biases of its designer, was not removed in the
time I was willing to commit.

If I were convinced there was an urgent, serious problem, then that would
be step one accomplished. It would have little bearing on step two, which
would be the consideration of *how* in fact the problem was going to be
solved.

As a rational person (in my view, anyway), I'd have to be persuaded that
the *how* was viable, and a generally favourable impression of Eliezer
would not bear on that consideration of either the problem or the how.

It would only bear on my willingness to take *some* time to have a look
when I am inclined to think there is not an overwhelming threat. If there
is a fairly well-written, concise statement of the problem and the proposed
solution somewhere, I'd like to see a link to it/them and not have to go
looking.

Brett
