[extropy-chat] The emergence of AI
Eliezer Yudkowsky
sentience at pobox.com
Sun Dec 5 00:03:51 UTC 2004
Robin Hanson wrote:
> Virtually no
> established experts in related fields (i.e., economic growth, artificial
> intelligence, ...)
What a coincidence that economics is so closely related to the recursive
self-improvement trajectories of Artificial General Intelligence. Amazingly
enough, just about every person who contacts me seems to have specialized
in a field that lets them make pronouncements about matters of AGI.
As Wil Holland wrote in the foreword to his book "Universal Artificial
Intelligence" (no, it's not worth reading):
"From many years of working in the construction industry and learning the
age-old craftsman's technique of building a structure one block at a time,
I have inadvertently stumbled upon a method of semantic interpretation that
applies in any situation..."
It would seem that Artificial Intelligence is an even easier art to acquire
than managing a government, writing legislation, or maintaining
international diplomatic relations. I take it you've never run into people
who think that their own field of knowledge easily generalizes to making
pronouncements on specific questions in economics?
As for seed AI, that is, recursively self-improving AI: I am not aware of
anyone who explicitly claims to *specialize* in it except me and Jürgen
Schmidhuber. So far as I know, Schmidhuber has not analyzed the seed AI
trajectory problem.
> see this path, or even recognize you as presenting an
> interesting different view they disagree with, even though you have for
> years explained it all in hundreds of pages of impenetrable prose,
> building very little on anyone else's closely related research, filled
> with terminology you invent.
Oh, come now. That accusation may justly be leveled at "General
Intelligence and Seed AI" or "Creating Friendly AI", but I think "Levels of
Organization in General Intelligence", to which I referred Finney, deserves
better than that.
> Are there no demonstration projects you could
> build as a proof of concept of your insights?
When my insights reach the point of apparent *completeness* with respect to
simple problems, that is, when it feels like I know how to write a simple
demo, I may (or may not) write one, depending on whether that seems like
the fastest route to expanding the project.
I think you underestimate the amount of massive overkill needed in the
theoretical understanding department before I can offhandedly write a
simple demo AI.
> Wouldn't it be worth it
> to take the time to convince at least one or two people who are
> recognized established experts in the fields in which you claim to have
> new insight,
I was not aware that humanity presently boasted *any* recognized,
established experts in the field of either Artificial General Intelligence
or recursive self-improvement trajectories.
So far as the academic field of AI is concerned, the status of the problem
of Artificial Intelligence is "unsolved - no established fundamental theory".
> so they could integrate you into the large intellectual
> conversation?
http://www.sl4.org/archive/0410/10025.html
*Regardless* of what I said there, it is worth noting that as far as I
know, LOGI is the *only* academically published paper (appearing in
_Artificial General Intelligence_, Springer-Verlag 2005) *explicitly*
analyzing seed AI self-improvement trajectories - regardless of what other
published analyses may be claimed to "easily generalize" to that
unprecedented and bizarre scenario. It's amazing how much stuff out there
is "closely related" to AI; you'd think we'd have solved the problem by now.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence