[extropy-chat] SIAI seeking seed AI programmer candidates

Eliezer Yudkowsky sentience at pobox.com
Wed Jun 2 10:27:40 UTC 2004


Adrian Tymes wrote:
> 
> You misread.  If the project - that is, if *YOUR*
> project - goes nowhere, well, there are others trying
> to build friendly (and technically even Friendly,
> although they haven't formally checked their goals
> against your specification yet) AIs, some of whom have
> a lot more resources and a lot higher chance of
> success.

Many projects have a lot more resources.  Some may even have a fair chance 
of success on the deadly part of the problem.  I'm not aware of one other 
AI project out there that even tries to rise to the challenge of FAI, 
Adrian, not one.

> (For instance, my major project at the moment is
> learning nanolithography so I can help advance the
> state of the art towards Drexlerian ideals in ways
> that make off-Earth pre-Singularity enclaves of
> humanity more likely... and, along the way, tap
> certain quantum physics phenomena to make a lot
> more energy available to mankind.  Success is not
> guaranteed, but there doesn't seem to be anyone else
> in the world trying my exact route, yet it does seem
> to be possible given the tools now available.  A large
> spike in environmentally clean domestic energy
> generation would probably free up all kinds of
> resources, some of which would likely wind up with
> organizations like SIAI, not to mention reduce several
> factors which negatively impact our lifespans.  BTW,
> this is not a call for assistance - I do not presently
> see how anyone not already on or near my path could
> reasonably help in the near future - just an example.)

Sounds like fuzzy strategic thinking.  Nanotech -> nanocomputers -> 
brute-force AI -> Earth go poof.  Getting off Earth seems highly unlikely 
to help you escape a UFAI.  Trickle-down theories of solving the 
Singularity, such as creating new energy resources or arguing over who 
should be in the White House, I now formally declare to be tempting 
distractions, and I ask that everyone ignore them.

> And then there is the problem of relying
> exclusively on what is essentially volunteer labor,
> even the full-time volunteers you requested (no
> guarantee of good pay - nor, by implication, of any
> contractually enforceable steady pay - means
> essentially volunteer labor).  Even the most gung-ho
> volunteer today may become disillusioned if the
> project does not change the world within the next few
> years (and there are very few who would put the
> Singularity this side of 2010).
> 
> Basically, you need to study what happens to most
> efforts like the one you propose (*IGNORE* the ends -
> they don't matter except as a circumstance until after
> the project succeeds, no matter how important they are
> - and focus on the means), and see why they tend to
> fail.  Specifically, look at all-volunteer efforts
> with no practical ("commercial") applications or focus
> in the next several years, and at the corresponding
> potential degeneration of motivation in those who sign
> up (again, regardless of their present feelings: being
> constantly hungry, or otherwise resource-depleted,
> tends to invoke hardwired overrides in the human
> psyche - and until your project succeeds or someone
> else beats you to making an AI, you'll have to use
> human beings).

I wasn't planning to let our programmers starve.  Still, starving would be 
better than being dead, and anyone who does not realize this does not 
belong on the project.

I'd like a salary of $100K/year too.  I plan to ask for it, for the reasons 
you mention.  Meanwhile, here I am working.  It depletes my mental energy. 
I do it anyway.  Anissimov is correct, Adrian; it would be desirable to
offer the programmers good salaries, but anyone who's only willing to work 
if they're comfortably well-paid just doesn't get the point.  Yes, I know 
how important it is to be comfortably well-paid.  The point stands.

And now I see that I've spent four minutes writing this email, during which 
400 people died, so back to work.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


