[extropy-chat] singularity conference at stanford
Metavalent Stigmergy
metavalent at gmail.com
Mon May 15 21:21:34 UTC 2006
On 5/15/06, Eugen Leitl <eugen at leitl.org> wrote:
> > I don't quite follow your comment, Russell. If you are saying that
> > increasingly impressive life extension is a fantasy, or that the
>
> Where do you see an impressive life extension, in people?
> What we see is that CR appears to work, and that someday
> (not necessarily this decade) there might be a drug to
> mimic the effects of CR with tolerable side effects.
> Might. We don't know for sure yet.
Caloric restriction (CR) works well in mice, and humans and mice are
close genetic cousins. That said, you are right that it's still an
unknown. I try not to rely too much on such things in my own
forecasting, although they certainly do influence my thinking.
I don't recall which speaker showed the slide, but it was along the
lines that in 1900 the average life expectancy was something like 48
years, and in 2000 closer to 78. Other sources vary, from 45 in 1900
to 73 in 2000. I interpret anything even close to that range as a
well-established trajectory of impressive life extension.
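To make the trend concrete, here's a trivial back-of-envelope
calculation in C. The input figures are just the from-memory numbers
quoted above, so treat the output with the same skepticism:

#include <stdio.h>

int main(void) {
    const double le_1900 = 48.0, le_2000 = 78.0;  /* the slide's figures */
    const double gain = le_2000 - le_1900;        /* 30 years gained     */
    const double rate = gain / (2000.0 - 1900.0); /* ~0.3 years per year */
    printf("gain = %.0f years; rate = %.2f years per calendar year\n",
           gain, rate);
    return 0;
}

The lower range (45 to 73) works out nearly the same, at about 0.28
years gained per calendar year; either way, a steady climb across the
century.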
> Why should AI and nano land by 2050-2070? It's not that there's
> a train schedule, or even a roadmap with milestones to
> check off. (If there is, I must have missed my copy).
Again, Eliezer hit the nail on the head here. AI is not a monolithic
entity that will arrive at the train station on time, or at *any*
time, for that matter. There are many possible AIs in the larger
"mind space," as he put it. Some AIs will emerge before others; one
would expect the less capable, of necessity, to precede the more
capable.
That said, I also believe that a certain amount of hand-waving and
guesstimating is, for better or worse, a valid part of the scientific
method, however disparaged its use in service of digging at a debate
opponent. Any hypothesis is just that: an educated guess. A
hypothesis *is* a highly refined form of hand-waving, but hopefully
it's smart hand-waving. Personally, I think we witnessed some fairly
well-educated people making some fairly well-educated guesses. But
then, I tend to give the panelists a good deal of slack. It's not at
all easy to sit up there and be put upon a pedestal, particularly for
the mature and responsible guru who knows the extent to which the
pedestal is a grand illusion. :)
> > continual improvement of AI is a fantasy, I'd be interested to learn
>
> I see no continual improvement in AI which will lead to
> robust, natural intelligence. A qualitative discontinuity
> is needed. Just adding more patchwork to the quilt won't be
> enough.
Agreed, and you make a super important point. For my part, I came
away with the sense of Eliezer's many-minds AI model, if you will.
There will not be one "AI" mind that solves everything all at once.
Rather, one *kind* of AI mind could focus on medical diagnosis, as
Carlos Feder has worked on for many years. Just because one or more
of us have not yet directly observed the continual improvement does
not mean that it isn't out there, undiscovered or unpublished.
> The bad thing about discontinuities is that they're so hard to predict.
True. On the other hand, an advance will "suddenly" appear and many
will mistakenly label it a discontinuity. However, like so many
scientific advances, it will only be a discontinuity in terms of
publicity and general awareness. Long periods of unacknowledged toil
are practically the central cliché of breakthrough innovation.
> > what data or personal observations you use to form those particular
> > characterizations. Not saying you're wrong, just interested in how
> > you got there.
>
> Don't get me wrong, I like optimism. But you can get hurt by being
> too optimistic, which is why I wonder how you arrived at your assessment.
Nice lexical parry and reversal. :) My assessment algorithm is
definitely not a pure science! In many areas of life and business, I
tend to gravitate toward difficult but solvable problems (in my own
subjective estimation); the kind of problems that are A) clearly
discernible as desirable problems to solve (in my own subjective
estimation) and B) accompanied by well-defined desired outcomes (in
my own subjective estimation). From there, my own interests and
forecasts generally narrow through a two-stage process: roughly
sorting by the delta between A and B (in perceived time, effort, and
available resources), then filtering through a mishmash of acquired
intuition and putative insight, as sketched below.
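Here's a hedged sketch of that two-stage narrowing in C. Every name
and number in it (problem_t, its fields, the 0.5 threshold, the
sample problems) is a hypothetical illustration of the process, not a
real methodology:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

typedef struct {
    const char *name;
    double desirability;    /* A: how clearly desirable the problem is    */
    double outcome_clarity; /* B: how well-defined the desired outcome is */
    double intuition;       /* acquired intuition / putative insight      */
} problem_t;

/* Stage 1: rough sort by the delta between A and B, smallest first. */
static int by_delta(const void *pa, const void *pb) {
    const problem_t *a = pa, *b = pb;
    double da = fabs(a->desirability - a->outcome_clarity);
    double db = fabs(b->desirability - b->outcome_clarity);
    return (da > db) - (da < db);
}

/* Stage 2: filter through the intuition mishmash; keep the survivors. */
static size_t narrow(problem_t *ps, size_t n, double threshold) {
    qsort(ps, n, sizeof *ps, by_delta);
    size_t kept = 0;
    for (size_t i = 0; i < n; i++)
        if (ps[i].intuition >= threshold)
            ps[kept++] = ps[i];
    return kept;
}

int main(void) {
    problem_t ps[] = {
        { "medical diagnosis AI", 0.9, 0.8, 0.7 },
        { "robust general AI",    0.9, 0.3, 0.6 },
        { "CR-mimetic drug",      0.8, 0.7, 0.4 },
    };
    size_t kept = narrow(ps, sizeof ps / sizeof ps[0], 0.5);
    for (size_t i = 0; i < kept; i++)
        printf("worth pursuing: %s\n", ps[i].name);
    return 0;
}

The design choice is deliberate: the sort only ranks, and the filter
only vetoes; neither stage pretends to more precision than the inputs
deserve.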
It's not all that pretty in code, but I don't think that it's really
all that unique as an iterative discovery process:
int WTFDWK(void) {
    /* Each pass adds data, understanding, and insight -- and,
       reliably, mistakes. */
    int objective_data = 0, subjective_understanding = 0;
    int intuitive_insights = 0, mistakes = 0;

    mistakes++; objective_data++;           /* read   */
    mistakes++; subjective_understanding++; /* reason */
    mistakes++; intuitive_insights++;       /* intuit */
    mistakes++;                             /* always */

    /* Real-life forecasts = objective data + subjective
       understanding + intuitive insights + mistakes. */
    return objective_data + subjective_understanding
         + intuitive_insights + mistakes;
}
WTFDWK is a double-entendre reference to the foregoing process and to
"What the #$*! Do We Know?", the whimsical film of the same name
[http://www.imdb.com/title/tt0399877/], which some group members might
find an entertaining diversion.
This deep, whimsical respect for conventional thinking and processes
drives some friends and colleagues crazy, because I just can't seem to
land squarely in the purely-data-driven camp. I trust the data, I try
to start with as much data as possible, and I ultimately rely upon the
data as the best objective measure of outcomes. Yet the real world
isn't even close to a purely data-dependent domain, however much we
might like it to be so. That's what makes *simulating* and
*anticipating* the real world so difficult!
In sum, technology and trend forecasting reminds me somewhat of
flying. When I get behind the flight yoke, I set out with a solid
understanding of the physical *laws* of flight, and once aloft, I
place firm reliance upon visual and instrument sources of flight
*data*: I keep a very close eye on engine power, airspeed, and flight
attitude, and follow all air traffic controller instructions. Yet
very often, I ultimately trust my seat-of-the-pants sense of reality
in the most crucial situations, especially take-offs and landings. I
can't even begin to count the cases in which strict data dependence
and logic would have landed me straight in the ditch!
General aviation pilots must know what the laws of aerodynamics tell
us about how the plane is *supposed* to behave, but we must react in
real time to what it *actually* does, particularly when in ground
effect during take-off and landing, which is different and
inspirational every single time. I've come to suspect that one reason
some very, very smart people just don't pick up flying is that they
can't let go of what the laws and the data say is *supposed* to
happen, or what should *logically* happen, and so they freeze up and
don't respond to what is *really* happening.
One might suggest that all human progress is an inherently risky yet
invigorating flight of fancy, constrained by a set of highly
deterministic physical laws and kept aloft by an abiding sense of
wonder and curiosity about what *might* be possible just beyond the
next discernible horizon.