[extropy-chat] SIAI: Donate Today and Tomorrow

Eliezer Yudkowsky sentience at pobox.com
Thu Oct 21 08:59:18 UTC 2004


The Singularity Institute is approaching the end of its five-year 
provisional tax-exempt period.  At the end of this year, December 31st 
2004, the IRS will request information from us that they then use to 
determine whether the Singularity Institute will get permanent status as a 
public charity.  The critical test they apply is called the "public 
support" test, and like most of what the IRS does, it's complicated. 
Roughly speaking, the IRS adds up all the donations we receive in a 
four-year period, takes the grand total, and divides by 50; this amount (2% 
of total support) is the most of any one individual's donation that counts 
as "public support".  The rest of the donations from that individual, 
anything over and above 2% of your total support over that four-year 
period, are nonpublic support.

Leaving aside the exact details of the calculation, the IRS asks whether a 
nonprofit is a *publicly* supported organization - whether their donations 
come from a broad base, or a few big donors.

Right now, most of SIAI's support comes from a few big donors.  To pass the 
public support test automatically, we would need 33.3% public support 
(1/3).  At present, according to our calculations, we're at roughly 25% 
public support.  This doesn't automatically fail us, but it does mean that 
the IRS applies something called the "facts and circumstances" test, which 
we might not pass - or the IRS might demand unworkable changes in our 
governing structure, like having members of the Board appointed by public 
officials (yes, this is part of the "facts and circumstances" test).  In 
the worst case the IRS might determine that we were a private foundation, 
which would severely cripple SIAI.  If we can get 6 *new* donors - not 
existing major donors - to donate $5000 apiece, we should pass 
automatically.  If we can't do that, the IRS still pays attention to how 
much public support we *do* have, how many donors we have, and from how 
wide a base.
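
For readers who like to see the arithmetic, here is a minimal sketch of 
the 2% cap and the one-third threshold in code - with made-up donation 
figures for illustration, not SIAI's actual books:

    # Illustrative sketch of the public support test (hypothetical
    # numbers, not SIAI's actual figures).  Each donor's countable
    # "public support" is capped at 2% of total support over the
    # four-year period.
    def public_support_fraction(donations):
        """donations: list of per-donor totals over the four-year period."""
        total = sum(donations)
        cap = 0.02 * total                      # the 2% cap
        public = sum(min(d, cap) for d in donations)
        return public / total

    # Two large donors plus forty donors at $500 apiece:
    donors = [100_000, 50_000] + [500] * 40
    frac = public_support_fraction(donors)
    print(f"public support: {frac:.1%}")              # ~15.8%
    print(f"passes 1/3 test automatically: {frac >= 1/3}")

Notice how the two large gifts count only up to the 2% cap apiece, so a 
broad base of small donors moves the fraction far more than its dollar 
total would suggest.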

The Singularity Institute also currently has $7,690 unfilled in the 
Fellowship Challenge - yes, $7,690 unfilled; Brian Cartmell increased the 
total Challenge Grant to $15,000.  So if you donate now, and earmark your 
donation for the Fellowship Challenge, Cartmell will match your donation.

Even if you can only afford a hundred bucks, the Fellowship Challenge will 
still match the donation; it will still be one more person who donated; and 
it will still help show the IRS that we have a broad base of public 
support.  In short, now - yes, *now* - is a good time.  And we need *new*
donors.  That means *you*.  Not the people who already donated more than 
$5000 over 2001-2004, *you*.

The Singularity Institute now announces a 72-hour donations campaign, 
starting at 5AM Eastern Time on Thursday, October 21st, and continuing 
until 5AM Eastern Time, Sunday October 24th.  The title of the campaign is 
"Donate Today and Tomorrow".  The 72-hour time limit is intended to give 
people "today and tomorrow" even if they don't check their email more than 
once a day.  The theme of this campaign is donating early and fast, so if 
you've already decided to send us a check for $5000, you can go ahead and 
mail it right now, then continue reading at your leisure.  Otherwise, a 
somewhat longish essay follows, meant for people who think that SIAI is a 
fun, neat, cool idea, but who haven't gotten around to donating yet.  If 
you've never heard of us, don't start here - visit our website at 
http://singinst.org/.

--- Today And Tomorrow ---

Once upon a time, long ago and far away, ever so long ago...

In the beginning, saith Richard Dawkins, the world was populated by stable 
things:  Patterns that arose frequently, or patterns that lasted a long 
time.  Raindrops that fell from the sky, or mountains that rose from the 
crust.  This was the era of chemistry, the era of dynamic stability, an age 
before optimization.  By chance a star might have dynamics that let it burn 
longer; and you would be more likely to see such a long-burning star than 
to see an unlucky star that went nova shortly after coalescing.  But the 
star has no properties that are there *because* they help the star burn 
longer.  A star may have intricate stellar dynamics, sunspots and corona 
and flares - but a star is not optimized, has no complex functional design. 
  A star is an accident, something that just happens when interstellar dust 
comes together.  In the beginning, the universe was populated by accidents.

Today we see many patterns, from butterflies to kittens, with far more 
complexity than mere stars or raindrops.  Not that stars are simple, but 
stars can't compare to the intricate order of a kitten.  The kitten arises 
from a different kind of pattern-maker, a different source of structure. 
When we look around us, we find not just stable accidents, like mountains, 
or frequent accidents, like water molecules, but also patterns that copy 
themselves successfully.  The process that structures these patterns is 
called evolution, and evolution produces complex structure enormously 
faster than accident alone.  The first ten billion years of the universe 
were relatively boring, with little real complexity, because there was no 
way for complexity to accumulate.  If by luck one star burns unusually 
long, that makes it no more likely that future stars will also last longer 
- the successful pattern isn't copied into other stars.  Evolution builds 
on its successes and turns them into bigger and more complex successes.  In 
an information-theoretic sense, there might be less complexity in the 
entire Milky Way outside Earth than there is in a single butterfly.

Once upon a time, currently thought to be around 3.85 billion years ago, 
the era of accidents ended.  Perhaps by sheer chance the processes of 
chemistry coughed up a single self-replicating molecule.  Perhaps the 
transition was more subtle, an attractor in a swirling loop of catalytic 
chemistry.  Even if it was most exceedingly unlikely for any given accident 
of chemistry to create a replicating pattern, it only had to happen once. 
So it all began, once upon a time, ever so long ago: the ultimate 
grandparent of all life, the very first replicator.  From our later 
perspective, that first replicator must have looked like one of the oddest 
and most awkward things in the universe - a replicator with the pattern of 
a stable thing; a reproducing structure that arose by accident, without 
evolutionary optimization.  The era of stable things ended with a special 
stable thing that was also a replicator, and only in this way could the era 
of biology bootstrap into existence.  Had you been around at the time, you 
might have mistaken the first replicator for an exceptionally stable thing 
that just happened to be very common in the primordial soup.  A great 
success as stable things go, but nothing fundamentally different.

The true nature of the next era would not become clear until you saw the 
second replicator ever to exist, your penultimate grandparent, an optimized 
pattern that would never have arisen in the primordial soup without a 
billion copies of that first replicator around to potentially mutate.  It 
was the second replicator that was a new kind of structure, a pattern that 
would never be found in a world of stable things alone.

And what a strange sight a human must be - an intelligence inscribed 
entirely by evolution, a mind not constructed by another mind.  We are one 
of the oddest and most awkward sights in the universe - an intelligence 
with the pattern of an evolved thing.  Like that first replicator, we only 
had to evolve once, but we did have to evolve.  The transition from biology 
to intelligence, from survival of the fittest to recursive 
self-improvement, could never have happened otherwise.

Even today, if you look into the core of the replicators in today's world, 
you can see the trace of that distant stable thing in their ancestry.  What 
a miracle, what fortuity, that Urey and Miller's experiment of running 
electricity through a primordial atmosphere of methane and ammonia should 
produce the amino acids making up all proteins!  What fortuity, that later 
experiments based on new models of the primordial atmosphere produced DNA 
bases as well!  How lucky!  No, it is not luck at all; of course the 
elements of biology are molecules that would tend to arise by chance 
chemistry on primordial Earth.  How else would life get started?  Strange, 
that RNA should be capable of both carrying information, and twisting up 
into catalysts and enzymes.  What a fortuitous property, in a molecule so 
close to DNA!  But it is not luck; how else would you expect life to start, 
but with a molecule capable of both carrying information and carrying out 
chemical operations?  And because that pattern of RNA was there at the 
beginning, it still forms part of the uttermost core, tiny strands of 
single-stranded RNA ubiquitous in the chemistry of the cell.  Look at the 
pattern of any living thing in the world and you can see that there must 
have been a stable thing in its ancestry, far far back, ever so long ago.

And someday, if we do our jobs right, our distant grandchildren will study 
evolutionary biology, and shake their heads in wonder.  For though it is 
understandable enough that kindness should beget kindness and loving minds 
construct more loving minds, it is passing strange that the simple 
equations of evolutionary biology should do likewise.  Who would ever have 
thought that natural selection, bloody in tooth and claw, should give rise 
to a sense of fun?  Who would expect that the winning alleles would embody 
the love of beauty, taking joy in helping others, the ability to be swayed 
by moral argument, the wish to be better people?

Evolution is an optimization process devoid of conscience and compassion. 
And yet we have conscience.  We have compassion.  Alleles for conscience 
and compassion somehow drove competing alleles to extinction.  I do not 
wish to hint at mystery where no mystery exists: science does have a good 
idea as to how alleles for mercy wiped out the genetic competition.  All 
our human natures are patterns that evolution can produce, and furthermore, 
patterns that carry the unique characteristic signature of evolution.  The 
original archetype of Romeo and Juliet would not arise without sexual 
reproduction.  Some emotions are less obvious at first blush than others, 
but there is always a reason, and there is always evolution's unique 
signature.  That evolution coughed up friendship is amazing, but not 
mysterious.

I still nominate the evolution of kindness as the most wonderful event in 
the history of the universe.  The light that is born into our species is a 
precious thing, which must not be lost.

Less surprising is that bloodlust, hatred, prejudice, tribalism, and 
rationalization are products of evolution.  They are not humane, but they 
are human.  They are part of us, not by our choice, but by evolution's 
design, and the heritage of billions of years of lethal combat.  That is 
the tragedy of the human condition, that we are not what we wish we were. 
There shall come a time, I think, when humanity sees itself reflected, and 
burns the darkness clean.  Yet the same impartial optimization process 
inscribed both our light and our darkness.  The forces that first 
constructed intelligence were without intelligence.  The dynamics that 
birthed morality were without morality.

So it all began, once upon a time, long ago and far away, ever so long ago, 
in an age so distant that only a tiny handful of minds in all the countless 
galaxies remember...

But that is for the future.  First we must survive this century.

It is not exactly trivial to try to master fully the art of Friendly AI 
before anyone succeeds in cobbling together a half-understood hack that 
recursively self-improves.  Creating Friendly AI is an engineering 
challenge, and it will take full-time people and specialized skills, and 
donors to support them.  It has to be done fast, because we need a 
difficult cure to arrive before an easy disease.  I still think it's 
possible to win, though the hour grows later.  But you can't change the 
future by relaxing and letting things go on the way they have.  You can't 
change the future by doing whatever you planned to do anyway with a newer 
and cleverer excuse.  Changing the future takes people willing to change 
even themselves, if that is the price demanded.

And even among those who understand what is at stake, it seems that the 
most common reaction is to sit back and cheer the Singularity Institute on, 
the way one might cheer for a football team - passively.  Maybe it's 
watching movies that engrains the habit into people's minds.  They would 
look at you oddly, in the movie theatre, if when the world was threatened 
you tried to jump into the silver screen to help.  Maybe it's watching 
movies that teaches people that when the fate of the human species is at 
stake, the thing to do is wait with bated breath for the good guys to win, 
as they always do.  Too few seem to realize that the outcome is yet in 
doubt, and that they might have their own parts in the unfinished tale, 
waiting for them to leap onto the stage.

There's research in social psychology, starting in 1968 with a famous 
series of experiments by Latane and Darley, on the phenomenon now known as 
the "bystander effect".  When more people are present at an emergency, it 
can reduce the probability that *anyone* will help.  One of Latane and 
Darley's original experiments had subjects fill out questionnaires in a 
room that began to fill with smoke.  In one condition, the subject was 
alone.  In another condition, three subjects were in the room.  In the 
final condition, one subject and two confederates of the experimenters 
(apparently other students) were in the room.  75% of the subjects who 
were alone calmly noticed the
smoke and left the room to report it.  When three subjects were in a room 
together, only 38% of the time did anyone leave to report the smoke.  When 
a subject was placed with two confederates who deliberately ignored the 
smoke, the subject reported the smoke only 10% of the time.  Other 
experiments in Latane and Darley's original series included the 
experimenter apparently breaking a leg and a student apparently having a 
seizure.  In every case, the subjects who were alone reacted to the 
emergency faster than the subjects who witnessed the emergency in a group.
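
A quick back-of-envelope check shows how large this effect is.  If the 
three-person groups had behaved like three independent individuals, each 
acting at the 75% "alone" rate, someone would have reported the smoke 
almost every time:

    # Back-of-envelope check on the smoke-filled-room numbers above.
    p_alone = 0.75                  # chance a lone subject reports the smoke
    p_any_of_three = 1 - (1 - p_alone) ** 3
    print(f"expected if independent: {p_any_of_three:.1%}")   # 98.4%
    print("observed in three-person groups: 38%")
    # The gap between ~98% and 38% is the bystander effect in action.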

Since 1968, Latane and Darley's original experiments have been extensively 
replicated, varied, tested in real-world conditions, et cetera, and the 
original result still holds: people in groups are much slower to react to 
emergencies than individuals, if they react at all.  The current consensus 
in social psychology is that the bystander effect stems from two primary 
causes, diffusion of responsibility and a phenomenon called "pluralistic 
ignorance".  Diffusion of responsibility works like this:  If three people 
are present, each one thinks, "Well, *I* don't need to do anything, those 
other two will handle it," and no one does anything.  People who are alone 
know that they alone are responsible, and if they don't do something, no 
one will - and, yes, that does make a huge, experimentally verifiable 
difference; by the numbers above, one person alone is nearly twice as 
likely (75% vs. 38%) to act in an emergency as a group of three people. 
"Pluralistic ignorance" is 
the name that social psychologists give to group underreaction:  When 
people in groups see an emergency, they look around to see if anyone else 
is responding.  If no one else is responding, they assume it's not a real 
emergency.  The problem is that *before* people have decided something is 
an emergency, they instinctively try to appear calm and unconcerned while 
they dart their eyes about to see if anyone else is responding - and what 
they see are other people appearing calm and unconcerned.

If you ever find yourself with a broken leg or some other emergency, and 
you're unlucky enough to have many people about, social psychologist Robert 
Cialdini recommends that you point at *one particular* person and tell him 
or her to call 911, or carry out some other definite action.  Singling out 
a particular person reduces the diffusion of responsibility, and once a 
single person steps up to help, other people may also stop ignoring the 
emergency.

Experimental manipulations that reduce group apathy and the bystander 
effect include (1) a bystander who thinks that the emergency requires 
intervention from them *personally*, (2) bystanders with considerable 
training in emergency intervention, and, last but not least, (3) bystanders 
who know about the bystander effect.  So the next time you're in a large 
group when someone has a heart attack, don't try to look calm and 
unconcerned while you dart your eyes around to see if anyone else is 
panicking.  Take out your cellphone, call 911, point to other people 
specifically and recruit them to help you.  And yes, it has to be you, 
because only you know about the bystander effect.

The bystander effect takes odd forms when it comes to people 
procrastinating about when to donate to the Singularity Institute.  My 
current observation is that most nondonors do honestly plan to donate, 
eventually, just as soon as X - where X varies from person to person, but always 
lies at least a year in the future.  Non-donors also expect that lots of 
other people are donating.  This is not so.  Rather, lots of other people 
expect that lots of other people are donating.  Also interesting is the way 
people choose X, the future condition that will finally allow them to 
donate.  High school students say they want to wait until they can get into 
college on a scholarship.  People in college on scholarships want to wait 
until they graduate and get jobs.  People with jobs want to wait until they 
pay off their student loans.  People whose student loans are paid off, want 
to wait another five years until their stock options vest.  People whose 
stock options have vested, want to wait until they can support both the 
Singularity Institute and the startup venture they're currently funding...

Meanwhile, the Singularity Institute recently received, in the mail, an 
envelope with no return address, containing an anonymous donation for ten 
dollars.  We know, because he or she told us so, that whoever sent this 
donation is a high school student - and that's all we know.  On the 
shoulders of such people rests the fate of the world.

Maybe someday this person will show up on the SL4 mailing list.  Maybe 
we'll find out who sent that letter after the Singularity.  Maybe the donor 
will never choose to reveal himself or herself.  Maybe someday there will 
be a monument to this person, the Anonymous Donor, and it will say:  "We 
never knew who the one was, or why the one helped us, but when the one was 
needed, the one was there."  I wonder how many names will be on the other 
monument, the monument to that entire band which once conducted the last 
defense of humankind.  Fewer than ten thousand names?  Fewer than a 
thousand?  At the end of 2003 the roster of donors held fewer than a 
hundred names.  Let us 
be optimistic, and hope there will be five thousand and twenty-four names 
on this monument, and that they put forth enough effort to win.  Five 
thousand and twenty-four names would still be fewer than one in a million, 
and there would never be any more.  Somewhere on that monument will be the 
name of someone who donated a single dollar, and humanity will be glad that 
he did, for it is strange enough that there are only five thousand and 
twenty-four names on that monument; how much sadder if there should be only 
five thousand and twenty-three.  For as long as Earth-originating 
intelligent life continues, that monument shall exist, and it shall still 
have only five thousand and twenty-four names...

And I had this thought, and I wondered how many SIAI volunteers would have 
their names on the monument, and I knew that at the present rate it would 
be fewer than one in ten.  As a wise volunteer recently observed - I can't 
find the quote, so I'm paraphrasing - "The problem is that we're 
indoctrinated into believing that you can make a big difference by helping 
out just a little.  But the sad truth is that you can't do AI on two hours 
a month."  These are words of deep wisdom, and I wish I could remember who 
said them (if you're reading this, write me).  As Christine Peterson of 
Foresight said on a similar occasion for similar reasons, donating is a 
*lot* more helpful than volunteering.  I'm sorry, but that's the way it is. 
  If you're waiting for the Singularity Institute to come up with a 
desperately urgent problem that can be solved with ten hours of Flash 
coding over five months, you'll probably wait forever.

I had that thought about the monument, and I wondered what it would be like 
to spend the next several centuries explaining that, yes, you were one of 
the tiny fraction of humanity that knew the Singularity Institute existed, 
and you were on the volunteers list, and you even hung out on the SIAI 
volunteers IRC channel, but you never actually got around to donating a 
hundred bucks and that's why your name isn't on the monument.

So I mentioned this thought in the #siaiv IRC channel, for it seemed to me 
like a dreadful and possible doom against which people ought to be warned. 
  And someone who was not a donor said: but we aren't doing it for the 
fame.  And I replied: it's all well and good to act on pure altruism if you 
can attain that level, but there's something wrong with deriding fame-lust 
when fame-lust would produce pragmatically better behavior.  For one 
motivated by lust for fame would send in the hundred bucks, and that is 
more help than receiving nothing from an altruist.

I probably should have phrased that rejoinder more tactfully.  I was 
feeling a tad frustrated at the time.  But tactless or not, it happens to 
be true.

When I started up the Singularitarian movement, I wished (in my youthful 
idealism) to appeal to pure altruism, unmixed with lesser motives like the 
lure of everlasting fame.  My original reasoning was that we might all zip 
off directly to superintelligence without lingering in the human spaces 
where things like monuments made sense, and for this reason I could promise 
nothing for the future.  Today I do not think I would choose such rapid 
growth.  I think I will prefer to grow at whatever healthy rate keeps my 
mind from going stale, and smell the roses along the way.  But I might be 
mistaken.  There may be no monument.  There certainly won't be a monument 
if the human species dies without ever having its chance at the light.  The 
future is uncertain and I cannot honestly promise anything; and so it 
disturbs me to even offer pleasant prospects, because there is something in 
human nature that makes us treat prospects as if they were promises. 
Today, it still strikes me as wrong and perhaps dangerous to tell someone 
that they should donate to SIAI for the uncertain prospect of fame.  But I 
would also be deeply annoyed if the human species died off because its 
defenders were too proud to stoop to pointing out some of the specific 
impure motives that of course should not motivate you to help save the 
world.  Right now, most people are putting off donating into the 
indefinite future, and that's not an acceptable outcome.  It means we're 
doing 
something wrong, and there's something about our strategy that we need to 
reconsider, and maybe this is it.

I worry about the evolutionary psychology of the bystander effect.  People 
dealing with emergencies in groups stand by and do nothing.  Now experiment 
also shows that people who find themselves the sole source of help in 
emergencies, *do* act.  This implies that ancestors who acted when alone 
did not do consistently worse than ancestors who walked away.  Maybe the 
ancestors who walked away had lousy reputations and no one wanted to be 
their friends.  Maybe the ancestors who helped, tended to end up helping 
someone who was more likely than average to share the helpful allele.  The 
point is that individual helping behavior was not selected out.  So why the 
bystander effect?  If the selection pressure favors (or at least doesn't 
punish) acting in emergencies when you're standing there alone, why would 
this change if three people are present?

My thought is that the presence of a group creates an arms race between 
alleles in which the goal is to avoid being the first person to help. 
Suppose that we start out with an allele A for helping someone in trouble 
right away.  Allele A might maintain itself in the gene pool because 
spatial variance in the distribution of allele A meant that an ancestor who 
carried allele A and encountered someone in need of emergency assistance, 
or a threat to the tribe, usually helped beneficiaries with a higher 
proportion of A alleles than the general population pool.  For whatever 
reason, allele A hasn't been driven to extinction.  But now suppose that 
there's a group of three people watching someone in need of help.  We'll 
call the person who needs helping Harry.  Suppose one of the group of three 
has an alternative allele B that runs the algorithm, "Wait 20 seconds, then 
help Harry."  If all three people present carry allele B, then Harry is 
still helped, albeit after a delay of 20 seconds.  If one or more of the 
others present has allele A that helps immediately, then the A-carriers 
bear most of the cost of helping, while the B-carriers freeload.  This 
holds true whether Harry carries allele A or B.
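
If it helps to see the freeloading logic laid out, here is a toy sketch of 
that story - illustrative only, with made-up payoff numbers, not a real 
population-genetics model:

    import itertools

    HELP_COST = 3.0   # fitness cost paid by whoever ends up helping Harry

    def cost_shares(group):
        """group: tuple of 'A'/'B' alleles for three bystanders.
        A helps immediately; B waits 20 seconds and helps only if no
        A-carrier has already stepped in.  Returns each bystander's cost."""
        first_helpers = [i for i, g in enumerate(group) if g == 'A']
        if first_helpers:
            # A-carriers jump in first and split the whole cost.
            share = HELP_COST / len(first_helpers)
            return [share if i in first_helpers else 0.0
                    for i in range(len(group))]
        # All-B group: everyone helps after the delay, splitting the cost.
        return [HELP_COST / len(group)] * len(group)

    for group in itertools.product('AB', repeat=3):
        print(group, cost_shares(group))
    # In every mixed group, the B-carriers pay nothing while the A-carriers
    # pay everything; Harry gets helped either way.  That asymmetry is the
    # arms race toward being the *last* to help.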

Perhaps the bystander effect results from an evolutionary arms race to not 
be the *first* to help.  Even if helping someone in need tended to be a net 
evolutionary benefit in the ancestral environment, if there happened to be 
a *group* present, there might have been a fitness advantage to not being 
the *first* to help.  There would have been an arms race between alleles, a 
race of apathy and delay and hoping that someone else would handle the 
problem instead.  And this arms race has no obvious upper bound.  Latane 
and Darley performed their original series of experiments in the aftermath 
of the Kitty Genovese incident in Queens, New York, 1964.  Kitty Genovese 
was stabbed, raped, robbed, and murdered over the course of half an hour. 
Later investigations showed that more than 38 people had witnessed parts of 
the attack.  None called police.  In 1995, Deletha Word was beaten with a 
tire iron on a bridge in Detroit; she jumped from the bridge to escape and 
died; none of the people crossing the bridge that morning stopped to save 
her.  And though these are but anecdotes, they are anecdotes which 
illustrate solid experimental results.  Individuals help, and people in 
groups hang back and wait for someone else to help.

The thought also occurred to me that if you help everyone in the entire 
human species, it is an evolutionary null-op.  Evolution runs on allele 
substitution rates in a population pool.  What matters isn't whether you 
reproduce, it's whether you outreproduce peers who carry different alleles 
- whether an allele increases its proportion in the gene pool and 
eventually becomes fixed.  A benefit that everyone in your species shares 
equally, benefits all alleles equally, and hence is an evolutionary 
null-op; it produces zero genetic information.  Luckily, human beings are 
adaptation-executers, not fitness-maximizers.  There were no existential 
risks in the ancestral environment, nor any Friendly AIs, no way for an 
individual to harm or benefit the entire human species at once.  Our 
psychologies are already inscribed, solely by those selection pressures 
that acted on our ancestors.  The psychology of existential risk is likely 
to fit the "threat to the tribe" template, a problem that was around in 
ancestral times and that involved noticeable selection pressures.
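
The null-op claim itself is one line of algebra.  In a minimal haploid, 
one-locus toy model (my illustration, with made-up fitness numbers):

    # Minimal sketch of "a benefit to everyone is an evolutionary null-op"
    # (haploid one-locus model; fitness numbers are made up).
    def next_freq(p, w_A, w_B):
        """One generation of selection; p = current frequency of allele A."""
        w_bar = p * w_A + (1 - p) * w_B         # mean fitness
        return p * w_A / w_bar

    p = 0.10
    print(next_freq(p, w_A=1.1, w_B=1.0))  # benefit to A-carriers only: rises
    print(next_freq(p, w_A=1.1, w_B=1.1))  # same benefit to everyone: stays 0.10
    # A factor that multiplies every allele's fitness equally cancels out of
    # the ratio, leaving allele frequencies - and hence evolution - untouched.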

(Unfortunately, "threats to the tribe" tended to be those evil Other Guys 
from the Tribe That Lives Across The Water, not hard-to-understand threats 
like recursively self-improving optimization processes.  Which is why it's 
so difficult to keep people focused on boring old saving the world instead 
of fun politics.  The sad truth is that if the Singularity were 
recognizably a Democrat or Republican project, we'd get a lot more funding.)

I've done some math, and I have not yet found any obvious evolutionary 
reason why an action that benefits both others and yourself should create 
less selection pressure favoring your alleles than an action that benefits 
only yourself.  (We assume the same individual cost and the same individual 
benefit in both cases, the only question being whether others receive 
duplicates of the individual benefit.)  But I intend to keep looking for 
explanatory math, because there's a difference of *psychology* that is 
downright bizarre.  If you ask how much people are willing to pay not to 
get shot, they name the entire amount of money in their bank account.  If 
you ask people how much they're willing to pay for the entire human species 
to survive, most of them name the amount of money in their pockets, minus 
whatever they need to pay for their accustomed lunch.  If it was *only* 
their own life at stake, not them plus the rest of the human species, 
they'd drop everything to handle it.  There's an eerie echo here with the 
observation that anything that benefits or harms the entire human species 
is an evolutionary null-op.  But I looked, and I didn't find any plausible 
direct connection, so it probably happens for other reasons.  Maybe 
something that threatens everyone is something that someone else might 
handle - so hang back and wait another 20 seconds, or another 20 years.

This kind of evolutionary arms race between individuals can promote alleles 
to universality that cause major group disasters.  Individual selection can 
promote alleles to universality that result in the extinction of the entire 
species.  Natural selection exercises no foresight, no extrapolation, does 
not ask "if this goes on".  Natural selection is a tautology: alleles that 
increase their proportions in the gene pool become universal.  Often the 
winning alleles look to a human like clever design, but sometimes the same 
math can promote downright suicidal alleles.  George Williams's classic 
"Adaptation and Natural Selection" (published in 1966 and still an 
excellent read) debunked the then-popular notion that evolution worked for 
the good of species or ecologies.  Williams discusses how individual 
selection can create group disasters or, indeed, wipe out an entire 
species.  Happens all the time, apparently.

I don't think that procrastination deserves the death penalty.  But I'm 
human and I have wacky human notions about mercy, kindness, second chances, 
fair warnings, consequences proportional to acts.  Maybe Nature has other 
ideas.

So before that arms race of individual procrastination causes a species 
catastrophe, I want to get past this weird psychology that distinguishes 
between benefits to only yourself, and benefits to you *plus* everyone 
else.  It really bugs me that if there was some kind of legitimate reason 
why the Singularity Institute *had* to build a Friendly AI that benefited 
only SIAI donors, we'd probably have a lot more donors.  It's the *same 
benefit*!  Does it not count if other people get it too?

Of course you want to help.  It's not like you're a bad person or anything. 
  But there are these perfectly reasonable reasons why it makes sense to 
wait another year before helping, that somehow don't apply to buying a 
movie ticket or a cheeseburger.  Even though, when you think about it, it's 
not the same order of personal benefit we're talking about here. 
Evolutionary psychology is subtle and sometimes downright stupid when it 
messes with your head.

Here's a thought experiment:  If I offered people, for ten dollars, a pill 
that let them instantly lose five pounds of fat or gain five pounds of 
muscle, they'd buy it, right?  They'd buy it today, and not sometime in the 
indefinite future when their student loans were paid off.  Now, why do so 
few people get around to sending even ten dollars to the Singularity 
Institute?  Do people care more about losing five pounds than the survival 
of the human species?  For that matter, do you care more about losing five 
pounds than you care about extending your healthy lifespan, or about not 
dying of an existential risk?  When you make the comparison explicitly, it 
sounds wrong - but how do people behave when they consider the two problems 
in isolation?  People spend more on two-hour movies than they ever get 
around to donating to the Singularity Institute.  Cripes, even in pure 
entertainment we provide a larger benefit than that!

The two questions seem to be handled by different areas of the brain. 
There are self-benefiting actions, that we go out and do right now using 
any necessary resources.  And there's philanthropy, which we'll get to at 
some point in the indefinite future, if there are any free resources that 
we aren't using for something else.  And, as only the Singularity Institute 
could demonstrate, this difference in emotional psychology doesn't even 
seem to depend on whether the philanthropic benefit that lands on *you 
personally* is *larger* than the selfish benefit.  If I had an fMRI machine 
I could probably show that the two questions activate different brain 
areas.  One emotional module procrastinates indefinitely, hoping that 
someone else will do it instead.  The other emotional module goes out and 
does it right away before anyone else gets there first.

If you knew you were going to get your name on a monument and get awed 
looks at social functions for the next thousand years, you'd probably make 
certain you seized the moment and sent in *some* kind of donation. 
Preferably one you weren't embarrassed to talk about a hundred years later, 
but yeah, fifty bucks if it came down to that, just to make sure you sent 
in *something*.  For the honor of the human species, to bump the count up 
to five thousand and twenty-five.  Because it sure would be embarrassing to 
forget, and not get around to it in time, and spend the next thousand years 
kicking yourself.  And, y'know, according to my current understanding, this 
scenario isn't really all that unlikely.  So if fame is what it takes to 
get you moving, then by all means go for it.  But whatever it takes to kick 
yourself out of bystander mode, please do!  For it is also awful to forget 
and not get around to it in time, if what's at stake is the survival of 
humankind, and you don't have a thousand years to kick yourself afterward 
because the human species lost.  And yet somehow the psychology seems to be 
different; for in the first case people do it today, and in the second case 
they plan to do it next year.

There is a failure I would warn you against, a bug in the human 
architecture.  Since the Singularity Institute booted up, I have observed 
this surprising fact: donors donate, and nondonors don't.  People who 
donate this year will, very likely, donate again next year.  People who 
plan to donate next year will, next year, still be planning to donate next 
year.  If you want to give to the Singularity Institute someday, the best 
thing you can do to ensure that is to pull up 
http://singinst.org/donate.html and send a hundred dollars, right now. 
That makes you a donor instead of a non-donor.  But if you donate only a 
hundred dollars, won't that prevent you from donating five thousand 
dollars, which you were planning to do any time now?  No.  Non sequitur. 
Just because you've donated a hundred dollars doesn't mean you can't donate 
more.  What it does is transform you from a non-donor into a donor.  That 
is progress, for donors donate, and non-donors don't.

The title of this message is "Donate Today and Tomorrow".  The usual motto 
for fighting procrastination is "today, not tomorrow".  Yet it seems to me 
that this is not the way the human mind works.  It's either "today and 
tomorrow", or "neither today nor tomorrow".  The difficult part of keeping 
the Singularity Institute alive isn't finding people who want us to win, 
it's getting you to do something about it - to throw that mental switch 
from "someday in the indefinite future" to "in the next 48 hours after 
reading this message".

Yes, the Singularity Institute is running a *very brief* fundraising 
campaign.  Otherwise everyone waits until the last minute, hoping someone 
else will donate, and then they forget.  Our new campaign lasts 72 hours 
from the sender's timestamp on this message, terminating at 5AM Eastern 
Time, Sunday, October 24th.  Hopefully this will give almost everyone a 
chance to donate in the next 48 hours after reading this message.  No one 
finds out how much someone else donated until afterward.  So act now, 
before it's too late.  Just like real life, compressed into a slightly 
shorter timescale.

If you didn't even see this message until too late, I suppose you could 
take 24 hours from whenever you read it.  A day may not seem like a large 
unit of time, but it's made up of hours, and an hour is made up of minutes. 
  Even a minute is enough time to think, if you think now instead of later. 
  Athletes and police officers and martial artists make huge choices in 
seconds because they don't think of themselves as slow.   What are you 
waiting for?  You're faster than this.  Don't think you are, know you are.

You can always donate afterward too, of course.  Donate today and tomorrow.

But now would be a good time.  Now is when the Challenge Grant runs.  Now 
is when the IRS review of our public charity status approaches.  The next 
72 hours is when we're running our "Donate Today and Tomorrow" campaign. 
If you can't be a major donor, we're looking for a typical donation of one 
hundred dollars.  Everyone who reads this and hasn't already donated, 
please.  More than a hundred dollars would be wonderful.  If it helps, 
think of the monument and what you'll be telling people at social functions 
for the next thousand years.  But we need a broader donor base, so please 
send something.  If you just can't handle a hundred dollars, and you're at 
least as well off as that unknown high school student, match that 
ten-dollar donation.  Think of it as voting to tell the IRS that you 
approve of the Singularity Institute and you want SIAI to have permanent 
public charity status.  Think of it as making sure that the human species 
doesn't end up with an embarrassingly small monument.  Think of it as 
letting us know that you exist and you care and we don't have to do this alone.

It's your responsibility, you, yes, you personally.  If I could insert 
<your name here> into this email I'd do it.  There are six billion people 
in this world.  An infinitesimal fraction have the faintest inkling of 
what's at stake.  If you don't step up, that's it.  No one will.  It's you 
or no one.

This essay began when someone wrote to me, explaining why he didn't think 
his potential donations were important:  "Trying to support the Institute 
financially, I believe that I wouldn't be able to offer much more than 
other professional contributors (you know, contributors that are 
professionals)."  That was the anvil that broke the camel's back.  Only 8 
donors in the history of the Singularity Institute have given more than 
$5000.  Right now it doesn't take much to make you a big name in the 
history of the Singularity Institute.  Being a major donor may not seem 
like a glamorous, rare part to play in the unfolding of history, but it 
*is*.  And if it weren't glamorous, that wouldn't make the tiniest 
difference.  It's probably going to take something like ten major, steady 
repeat donors for each full-time specialist on the Friendly AI programming 
team.  That's what it takes to get the job done.

When this essay was done, I sent it to an SIAI volunteer, a non-donor who'd 
been hanging around since 2001.  He read it all, offered a number of 
comments and suggestions for getting past the bystander effect, and then 
casually added:  "But I have an active reason for not donating regularly." 
  When I was done banging my head against the keyboard, I asked why.  This 
was an error: his reasons started to get more elaborate as he explained. 
That's another failure mode - developing intricate justifications for not 
donating.  If you decide not to donate, leave your options open for the 
future.  Don't expend great effort trying to justify your decision to 
yourself, lest you succeed.  You have no need to justify your decision to 
me, or to anyone.  If you dislike your choice, change it!  If you approve 
your choice, do it without apology!  That volunteer did realize, after 
cogitating further, that despite his reasons it didn't make sense to donate 
*nothing* - he signed up for a twenty-dollar monthly donation.  And I 
rejoiced, for there was one more donor, and a monthly donor beside.  One 
more when the IRS asks how many people care about this effort.  One more 
who might donate more someday.  One more who will be able to say afterward: 
  "I was there."  I want everyone's name on the monument.  Seriously.

People assume that someone else is taking care of it.  That professional 
contributors are donating.  They're not!  Other people are not taking care 
of it for you, and you shouldn't wish that!  This is *your* chance to make 
a difference!  Not someone else, you!  You you you!  You, born into this 
generation; you, one of the first intelligent beings ever to exist; you, 
one of only six billion sentients that can potentially intervene in this 
matter, out of all the countless minds that will someday (we hope) come 
into existence.  And as if that isn't enough, you're among the tiny 
fraction that knows what's going on and is in a position to do something 
about it.  That's as targeted as it gets, the finger pointing to you and 
you alone.  You can't get any more personally responsible than that.  No 
more diffusion of responsibility.  No more bystander apathy.  This problem 
landed in *your* lap.

What if you want to throw that mental switch from "someday" to "now", but 
you're having trouble actually doing it?  I know the feeling.  Here's my 
suggestion:  If you're a potential major donor, pick up your checkbook and 
write out a check for at least one thousand dollars, now.  There's nothing 
irrevocable about that.  Later you can rip up the check and not mail it, or 
rip it up and write a check for five thousand instead.  But perform the 
action.  Break the mental inertia.  Give yourself a little reminder to 
stare at you.  If it stares at you for too long, rid yourself of the 
problem by tearing it up or mailing it.  If you're thinking of a lesser 
donation and you haven't gotten around to it, open up 
http://singinst.org/donate.html, and let it stare at you until you decide 
how to get rid of it.  But do something *now*.  Defy paralysis.  Take the 
first step, and the second step will be easier.  The scariest part isn't 
leaping onstage - it's standing up in the audience.


Yours,
Eliezer Yudkowsky,
for the Singularity Institute for Artificial Intelligence, Inc.


