[extropy-chat] Singularity economic tradeoffs

Eugen Leitl eugen at leitl.org
Mon Apr 19 12:15:59 UTC 2004


On Sun, Apr 18, 2004 at 10:30:12PM -0400, Dan Clemmensen wrote:

> When decision criteria are presented to a human in an understandable 
> form, the human can make decisions very rapidly. The decisions a racing 

True. Very useful, using humans as behaviour fitness evaluators.
Cost-effective, too (especially if you can get online gamers to pay
for the privilege, or the military to fund the combat simulations).
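
In code, that idea is just an evolutionary loop with a human in the
scoring slot. A minimal sketch in Python (the behaviour representation
and the mutation step are made-up placeholders):

import random

def mutate(candidate):
    # Hypothetical mutation step: jitter one behaviour parameter.
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.uniform(-0.5, 0.5)
    return child

def human_fitness(candidate):
    # The human IS the fitness function: show the behaviour, ask for a score.
    print("candidate parameters:", [round(x, 2) for x in candidate])
    return float(input("rate this behaviour 0-10: "))

best = [0.0, 0.0, 0.0]              # seed behaviour
best_score = human_fitness(best)
for _ in range(20):                 # about as many rounds as a human tolerates
    child = mutate(best)
    score = human_fitness(child)
    if score > best_score:          # (1+1) evolutionary selection
        best, best_score = child, score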

> car driver, a fighter pilot, or a video gamer makes are a case in 
> point. This is decision-making on an entirely different level than the 
> one we generally think of in the business world. A programmer currently 

The human visual system has been evolutionarily optimized to process
specific complex stimuli very rapidly. It is not obvious that we can state
real-world problems in such terms. It is also not obvious that the
transcoding is not a harder task than the original problem.

I can kill rogue processes or shoot inodes using the command line, or a
DOOM environment. While the latter is more fun, the CLI is quicker.
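
The CLI half of that comparison fits in a dozen lines. A sketch assuming
a Unix ps(1); the 90% CPU threshold is an arbitrary stand-in for "rogue":

import os
import signal
import subprocess

CPU_LIMIT = 90.0   # percent; arbitrary definition of "rogue"

# One ps call: pid, %cpu and command name, header suppressed.
out = subprocess.check_output(["ps", "-eo", "pid=,pcpu=,comm="], text=True)
for line in out.splitlines():
    pid, pcpu, comm = line.split(None, 2)
    if float(pcpu) > CPU_LIMIT:
        print(f"killing {comm} (pid {pid}, {pcpu}% cpu)")
        os.kill(int(pid), signal.SIGTERM)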

> makes a few creative decisions per day at the most. The rest of the 
> programming job consists of implementing those decisions by writing, 
> compiling, debugging, and releasing code.

Human programmers have to code the algorithms explicitly. As long as they
have to do that, instead of just specifying boundary conditions or a
fitness function, especially in informal ways, their productivity has a
ceiling. Adding more people doesn't help beyond a very small team size.
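
To make the contrast concrete: in the declarative style the human writes
only the spec, and a generic engine does the coding. A toy sketch, with a
hypothetical random-swap hill climber standing in for the engine:

import random

def spec(xs):
    # All the "programmer" supplies: score how sorted the list is.
    return sum(1 for a, b in zip(xs, xs[1:]) if a < b)

def solve(fitness, state, steps=10_000):
    # Generic engine, reusable for any spec: propose a swap, keep it
    # if the spec likes the result at least as much.
    for _ in range(steps):
        i, j = random.sample(range(len(state)), 2)
        cand = list(state)
        cand[i], cand[j] = cand[j], cand[i]
        if fitness(cand) >= fitness(state):
            state = cand
    return state

print(solve(spec, random.sample(range(8), 8)))   # converges on [0, 1, ..., 7]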

A system programmed by "give me a control program for a car factory!" is
AI-complete, of course (unless it's just NLP, pulling a template from a
blueprint library, and filling in the blanks by grilling the user).
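
The non-AI-complete version is almost embarrassingly simple. A sketch,
with all identifiers invented and the "NLP" reduced to a substring match:

from string import Template

# The "blueprint library": canned skeletons with blanks.
BLUEPRINTS = {
    "car factory": Template(
        "while line.running():\n"
        "    chassis = line.take()\n"
        "    robot.weld(chassis, torque=$torque)\n"
        "    robot.paint(chassis, colour='$colour')\n"
    ),
}

def control_program(request):
    # "NLP": find which blueprint the request mentions. No understanding.
    task = next(k for k in BLUEPRINTS if k in request.lower())
    # Grill the user for every blank in the template.
    answers = {"torque": input("weld torque? "),
               "colour": input("paint colour? ")}
    return BLUEPRINTS[task].substitute(answers)

print(control_program("Give me a control program for a car factory!"))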
 
> We are at different levels here. I'm not talking about IT departments. 
> I'm talking about an individual hacker, or perhaps a researcher. I'm 
> also not talking about top-down "productivity" tools like Rational Rose. 
> I'm talking more of bottom-up tools like the various IDEs.

IDEs make debugging simpler. They allow people who would otherwise be too
poor at programming to moonlight as programmers. They don't address the
problem of code writing code, beyond trivial templates.
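
"Trivial templates" meaning roughly this, the accessor-stamping every IDE
offers as code generation (a sketch; pure text substitution, no
understanding of the code it emits):

ACCESSOR = """\
    def get_{name}(self):
        return self._{name}

    def set_{name}(self, value):
        self._{name} = value
"""

def generate_class(cls_name, fields):
    # Stamp one accessor pair per field, concatenate, done.
    body = "".join(ACCESSOR.format(name=f) for f in fields)
    return f"class {cls_name}:\n{body}"

print(generate_class("Invoice", ["amount", "currency", "due_date"]))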
 
> People insist on trying to computerize the wrong parts of the problem. 
> You speak of planning, scheduling, and understanding. I don't want the 
> computer to do these tasks, at least not initially. I want the computer 
> to do the mundane stuff, it can ask ME to do the planning. If the 
> computer element cannot "understand the intent" it should ask me. The 

DWIM NLP is AI-complete, and dialogue systems are annoying as hell,
especially given that they're so braindead it hurts to read the
transcripts.
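
For the record, the entire trick behind most of those transcripts is
keyword-and-template matching in the ELIZA tradition. A sketch (patterns
and canned replies invented for illustration):

import re

# Keyword patterns with canned reflections: no model of meaning at all.
RULES = [
    (re.compile(r"\bi want (.+)", re.I), "Why do you want {0}?"),
    (re.compile(r"\bbecause\b", re.I),   "Is that the real reason?"),
    (re.compile(r".*"),                  "Please go on."),
]

def reply(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())

print(reply("I want a control program for my factory"))
# -> "Why do you want a control program for my factory?"
#    (pronoun not even flipped: "my" should have become "your")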

> computer can indeed "weight the implications" of certain decisions, by 
> simply executing all the branches and displaying the results. However if 
> such an activity is too expensive, the computer should tell me and ask 
> for another decision.

I fail to see how that's useful beyond being very neat for debugging
(prototypes of such tools exist).
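
For what it's worth, the debugging-tool version is a few lines. A sketch
of the execute-all-branches protocol, with wall-clock time as the only
cost model (all names hypothetical):

import time

def weigh(options, run_branch, budget_s=1.0):
    # Execute every branch of the decision and collect the outcomes;
    # if a branch blows the budget, punt the decision back to the human.
    results = {}
    for opt in options:
        t0 = time.monotonic()
        results[opt] = run_branch(opt)
        if time.monotonic() - t0 > budget_s:
            return None, f"branch {opt!r} exceeded {budget_s}s; please decide"
    return results, None

# Toy decision: pick a buffer size. The "branch" is a stand-in simulation.
def simulate(buffer_size):
    return sum(i % buffer_size for i in range(100_000))

outcomes, complaint = weigh([64, 256, 4096], simulate)
print(complaint or outcomes)    # display the results, let the human choose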
 
> I have seen a five-year-old child making decisions at a high level in 
> real time, on the soccer field, with two days of training. The kid will 

I would really start worrying if we had an AI good enough to impersonate a
toddler (in physical space, of course).

> obviously not make soccer decisions at the same level as a professional, 
> but the level is still higher than that of a computer. This particular 
> kid has never received formal training in abstract reasoning. However, 
> for the sake of argument, let's assume it does in fact require a trained 
> decision-maker. Given that we only need one of these people to bootstrap 
> the SI, this is not much of a barrier. Please consider the amount of time 
> it takes a new player to become proficient at a complex video game. A 
> game player is essentially doing nothing but making decisions.

Yes, AI sucks. Until it stops sucking, quite suddenly. Game over.
 
> It appears that we are in complete agreement. The seed SI has a human 
> component, this seed SI bootstraps, and eventually (within a week in my 
> scenario) reaches a level at which the human component is no longer 
> useful.  My point is that un-augmented humans do not need to create an 
> AI to reach this point.

Can you restate your point? It doesn't follow, at least not from the
argument above.

-- 
Eugen* Leitl <http://leitl.org>
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net