[extropy-chat] SIAI: Donate Today and Tomorrow

Eliezer Yudkowsky sentience at pobox.com
Fri Oct 22 10:00:31 UTC 2004


Damien Broderick wrote:
> At 04:59 AM 10/21/2004 -0400, Eliezer wrote:
> 
>> If you ask people how much they're willing to pay for the entire human 
>> species to survive, most of them name the amount of money in their 
>> pockets, minus whatever they need to pay for their accustomed lunch.  
>> If it was *only* their own life at stake, not them plus the rest of 
>> the human species, they'd drop everything to handle it.
> 
> Aw, come on. It's the plausibility of the threat.

No, Damien, it is not.  I wrote that essay after conversing with many, many 
people who seemed to consider UFAI a plausible existential risk and who 
were quite kind and rational folks.  People capable of dealing with 
probabilistic futures and evaluating scientific arguments.  People who 
would open doors for a stranger with an armful of groceries.  People who 
nonetheless sat back and cheered for SIAI without trying to leap into the 
silver screen.  I realize there are people who do not agree with the 
reasoning and assign it a probability of essentially zero.  There is no 
puzzle in the psychology of those people, nor was my email addressed to them.

> (If they were sufficiently gullible. If they were sufficiently 
> desperate. If I could absolutely prove to their satisfaction the truth 
> of my hard-to-credit claim.)

I'm not gullible, nor do I encourage gullibility in others.  There are 
people who think it's okay to prey upon weakness in a good cause.  I 
disagree, oppose myself to that darkness, and I do my best to ensure that 
reading one of my essays makes people stronger of mind whether they agree 
or disagree.  I include tidbits of science and theory-of-rationality that 
will feed a hungry mind regardless of whether the main point is accepted or 
rejected.

I am dealing with a major existential risk, one that we appear to lose by 
default if nothing is done.  If that doesn't qualify as "sufficiently 
desperate" I don't know what does.

The idea that absolute proof is required to deal with an existential risk 
is another case of weird psychology.  Would you drive in a car that had a 
10% chance of crashing on every trip?  There's no *absolute proof* that 
you'll crash, so you can safely ignore the threat, right?  If people 
require absolute proof for existential risks before they'll deal with them, 
while reacting very badly to a 1% personal risk, that asymmetry is weird 
psychology in need of explaining.
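
To make the arithmetic concrete, here is a minimal sketch of how a 10% 
per-trip risk compounds, assuming (a simplifying assumption, not stated 
above) that each trip's crash chance is independent:

    # Cumulative chance of at least one crash, assuming a 10% crash
    # probability independently on each trip (illustrative assumption).
    per_trip_risk = 0.10

    for trips in (1, 5, 7, 20):
        # P(at least one crash) = 1 - P(no crash on any single trip)
        cumulative = 1 - (1 - per_trip_risk) ** trips
        print(f"{trips:2d} trips: {cumulative:.0%} chance of at least one crash")

After seven trips the odds of having crashed at least once pass 50%; nobody 
waits for absolute proof before taking that seriously.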

Making comparisons to the Heaven-for-Everyone Institute is silly.  What, 
just because people in the past made false claims of flight, are the Wright 
Brothers physically prevented from ever constructing a device that will 
fly?  The territory of reality can't possibly threaten us because past maps 
raised false alarms?  Let's not forget that the boy who cried wolf didn't 
cause wolves to stop existing.  I didn't choose that those others should 
cry wolf, and I have to do my best to rally the villagers despite the 
damage.  SIAI is making an ordinary, rational case for the seriousness of 
an existential risk and a strategy for dealing with it.  Previous 
irrational claims for existential risks, large benefits, etc., are not 
evidence against this; invalid reasoning is simply eliminated from the pool 
of arguments and does not count one way or the other.  The world's greatest 
fool may have said at some point that the sun is shining, but that doesn't 
eliminate the physical possibility of day.

Otherwise you follow a strategy which *guarantees* that if reality *ever 
does* throw an existential risk at you, you will do *nothing*, because once 
upon a time some other guy was fooled.  As we all know, there's nothing 
worse in this world than losing face.  The most important thing in an 
emergency is to look cool and suave.  That's why, when Gandalf first 
suspected that Frodo carried the One Ring, he had to make *absolutely sure* 
that his dreadful guess was correct, interrogating Gollum, searching the 
archives at Gondor, before carrying out the tiniest safety precaution. 
Like, say, sending Frodo to Rivendell the instant the thought crossed his 
mind.  What weight the conquest of all Middle-Earth, compared to the 
possibility of Gandalf with egg on his face?  And the interesting thing is, 
I've never heard anyone else notice that there's something wrong with this. 
It just seems like ordinary human nature.  Tolkien isn't depicting 
Gandalf as a bad guy.  Gandalf is just following the ordinary procedure of 
taking no precaution against an existential risk until it has been 
confirmed with absolute certainty, lest face be lost.

I don't think it's a tiny probability, mind you.  I've already identified 
the One Ring to my satisfaction.  But even if you don't know the One Ring 
on sight, Damien, even if you think you know better than I, please grant me 
a probability high enough that you don't want to actively get in my way 
while I'm working.

Let me emphasize again that if you choose not to donate, you have no need 
to justify that choice to me, or to anyone.  If you're satisfied with your 
choice, do it without apology.  If your choice doesn't satisfy you, change 
it to a choice that does.  But for the love of cute kittens!  You have no 
need whatsoever to post your justifications to this or any other mailing 
list!  Unless you think that every transhumanist movement should remain 
tiny and helpless forever and that the best way to achieve this is by 
starting an argument whenever one of them tries to gain momentum.  Think of 
how nice it would be if, instead of arguments, we saw some replies from new 
donors.  Wouldn't that be a warm and fuzzy feeling?  Transhumanists showing 
that they can do more than argue about the future?  It is in your best 
interest that others donate to SIAI even if you don't, so please don't 
discourage them.

I swear, it's worse than herding cats.  At least cats would obey Natasha. 
But we know we're cats, and we can, if we choose, try to think through the 
question of how rational self-aware cats can cooperate and not forever 
remain tiny compared to flying-saucer cults.  Now I enjoy a bit of sarcasm 
in the course of rational criticism as much as anyone, but *not* when a 
transhumanist organization is in the middle of launching a new effort or 
project.  If you must criticize (the rationality part goes without saying), 
that's the time for polite, constructive criticism, phrased so as not to 
actively discourage new activists.  Save the delicious sarcasm for 
the next argument about politics.  Please.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


