[extropy-chat] SIAI seeking seed AI programmer candidates

Adrian Tymes wingcat at pacbell.net
Wed Jun 2 07:51:33 UTC 2004


--- Michael Anissimov <michael at acceleratingfuture.com> wrote:
> Adrian Tymes wrote:
> >(first and foremost, what happens to me - and my other
> >financial concerns - if the project goes nowhere
>
> If no FAI project ever goes anywhere, then someone eventually
> builds a self-improving UFAI (or engages in nanowar), and you,
> your financial concerns, and your personal concerns all go
> *poof* in one fell swoop.

You misread.  If the project - that is, if *YOUR*
project - goes nowhere, well, there are others trying
to build friendly (and technically even Friendly,
although they haven't formally checked their goals
against your specification yet) AIs, some of whom have
a lot more resources and a lot higher chance of
success.

> >...but this answer simply does NOT suffice.  I suspect
> >the same is true for any programmer of my caliber or
> >higher, which only reinforces things.  (Creating
> >Friendly AI by myself?  Unlikely, at best.  Creating
> >Friendly AI with a lot of help of my caliber?  That's
> >starting to become possible.)
>
> That's why we're seeking *Singularitarian*
> (http://yudkowsky.net/sing/principles.html) programmers of
> extremely high caliber.  They would be in it for the
> saving-the-world aspect.  A successful Singularity would
> bring about an immense amount of material abundance.

What of the person who agrees with your goals, and in
particular wishes to help bring about the Singularity ASAP, but
whose opinion is that joining your team is not the best route
to achieve it?  (At least, among the routes presently available
to said person.)  According to that link, said person would
seem to be a Singularitarian, and thus exactly who you say
you're looking for.

I'm pointing out a problem with your means to the end, not the
end.  And, again, I mean no insult by any of this.  It's just
that an honest evaluation requires me to put humility aside and
acknowledge that there is a chance, however small, that my
efforts could help reshape the world in positive ways.  Not
only have I been trying to steer my career to maximize those
odds, but from some points of view I have even had a little
success already (by helping to make practical the
neuron/silicon interfaces being experimented with in many labs
today).  That, combined with my experience in programming,
cognitive/brain science, and most of the other areas you list,
makes me seem to be exactly who you're trying to recruit - yet
not only do I reject your current offer, I suspect that anyone
in my approximate position would do likewise.

> >Sorry, but the same judgement you call for
> >to guide the project through, tells me there are far
> >less risky (personally and for all of humanity) paths
> >to reach the same end of a Friendly AI, and they don't
> >(presently) involve me dedicating my working hours to
> >non-payers like you.
>
> Where *do* they involve dedicating your hours?

On projects that have a greater chance of sustaining
me in the short term, and/or add small but definite
steps towards the future we wish.  Bootstrapping, as
it were.

(For instance, my major project at the moment is learning
nanolithography so I can help advance the state of the art
towards Drexlerian ideals, in ways that make off-Earth
pre-Singularity enclaves of humanity more likely...and, along
the way, tap certain quantum physics phenomena to make a lot
more energy available to mankind.  Success is not guaranteed,
but there doesn't seem to be anyone else in the world trying my
exact route, yet it does seem to be possible given the tools
now available.  A large spike in environmentally clean domestic
energy generation would probably free up all kinds of
resources, some of which would likely wind up with
organizations like SIAI, not to mention reduce several factors
which negatively impact our lifespans.  BTW, this is not a call
for assistance - I do not presently see how anyone not already
on or near my path could reasonably help in the near future -
just an example.  Another example is contract programming, with
little significance other than paying the bills while I tackle
the above effort.  Eventually, I'd prefer to have the main
effort also pay the bills, but I do not perceive that I am
nearly far enough along yet to where I could likely get grants
or similar assistance.)

> Are there faster tracks to Friendly AI than the one SIAI is
> currently on?

In short?  Yes.  (At least, measured by expected time
to success, weighing all the outcomes with their
probabilities.)

> SIAI was formed with the specific goal of getting to FAI as
> fast as possible, so it is our responsibility to seek out
> faster means if they are available.  What are your ideas?
> Incremental bootstrapping with commercial approaches sounds
> more credible on the surface, but it isn't likely to get us
> to FAI faster than an exclusive focus, if that's what you
> were thinking of.

I disagree.  Incremental bootstrapping will get you the
resources you need to proceed to the next stage, and to the
next stage after that.  Trying to do it all at once may seem
faster, but giant projects of that kind, in other domains, have
a history of failures which you show no sign of addressing for
this application (or even of being aware of, though I do give
you the benefit of the doubt there), so historically the odds
are fantastically tiny that you will create the first seed AI
*BY THAT METHOD*.  You don't even have much chance of a partial
success which others can build on (which, given the outcomes
here, could arguably be credited as a full, if delayed,
success).  Bootstrapping has historically proven to give a much
greater chance of success, with fewer total resources consumed
(the most important resource in this case being time) before
the initially desired goal is eventually achieved.  (I've seen
this time and time again, and I'm seeing it yet again in my
nano development.  I could detail that story if you want, but
that might diverge from the topic a bit too much.)

And then there is also the problem of relying exclusively on
essentially volunteer labor, even full-time volunteers like
those you requested (no guarantee of good pay - or, by
implication, of any contractually enforceable steady pay -
means essentially volunteer labor).  Even the most gung-ho
volunteer today may start to become disillusioned if the
project does not change the world within the next few years
(and there are very few who would put the Singularity this side
of 2010).

Basically, you need to study what happens to most efforts like
the one you propose (*IGNORE* the ends - they don't matter
except as a circumstance until after the project succeeds, no
matter how important they are - and focus on the means), and
see why they tend to fail.  Specifically, study all-volunteer
efforts with no practical ("commercial") applications or focus
in the next several years, and the corresponding potential
decay of motivation in those who sign up (again, regardless of
their present feelings: being constantly hungry, or otherwise
resource-depleted, tends to invoke hardwired overrides in the
human psyche - and until your project succeeds or someone else
beats you to making an AI, you'll have to use human beings).


