[ExI] Watson On Jeopardy.

Richard Loosemore rpwl at lightlink.com
Thu Feb 17 01:13:59 UTC 2011

Kelly Anderson wrote:
>>> On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote:
>>> So I repeat my previous request, please tell us all about the wonderful AI
>>> program that you have written that does things even more intelligently than
>>> Watson.
>> Done:  read my papers.
> I've done that. At least all the papers I could find online. I have
> not seen in your papers anything approaching a utilitarian algorithm,
> a practical architecture or anything of the sort. Do you have a
> working program that does ANYTHING? You have some fine theories
> Richard, but theories that don't lead to some kind of productive
> result belong in journals of philosophy, not journals of computer
> science. You have some very interesting philosophical ideas, but I
> haven't seen anything in your papers that rises to the level of
> computer science.
>> Questions?  Just ask!
> What is the USEFUL and working application of your theories?
> Show me the beef!

So demanding, some people.  ;-)

If you have read McClelland and Rumelhart's two-volume "Parallel 
Distributed Processing", and if you have then read my papers, and if you 
are still so much in the dark that the only thing you can say is "I 
haven't seen anything in your papers that rises to the level of computer 
science," then, well...

(And, in any case, my answer to John Clark was as facetious as his 
question was silly.)

At this stage, what you can get is a general picture of the background 
theory.  That is readily obtainable if you have a good knowledge of (a) 
computer science, (b) cognitive psychology and (c) complex systems.  It 
also helps, as I say, to be familiar with what was going on in those PDP 
volumes.

Do you have a fairly detailed knowledge of all three of these areas?

Do you understand where McClelland and Rumelhart were coming from when 
they talked about the relaxation of weak constraints, and about how a 
lot of cognition seemed to make more sense when couched in those terms? 
Do you also follow the line of reasoning that interprets M & R's 
subsequent pursuit of non-complex models as a mistake?  And the 
implication that there is a class of systems that are as yet unexplored, 
doing what they did but using a complex approach?
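For readers unfamiliar with the idea, here is a minimal, hypothetical sketch of what "relaxation of weak constraints" means in the connectionist sense. This is my own toy illustration, not code from M & R or from Loosemore's papers: symmetric weights encode soft constraints between binary units, and repeated asynchronous updates settle the network into a state that satisfies as many constraints as possible (i.e., a local minimum of a Hopfield-style energy). The specific network (one evidence unit supporting hypothesis A, which competes with hypothesis B) and all weights and biases are invented for the example.

```python
import random

def settle(weights, bias, steps=200, seed=0):
    """Asynchronously update binary units until the network relaxes.

    Each update flips one unit toward whatever state better satisfies
    the weighted (weak) constraints impinging on it.
    """
    rng = random.Random(seed)
    n = len(bias)
    state = [rng.choice([0, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        net_input = bias[i] + sum(weights[i][j] * state[j] for j in range(n))
        state[i] = 1 if net_input > 0 else 0
    return state

def energy(weights, bias, state):
    """Hopfield energy: lower means more weak constraints are satisfied."""
    n = len(state)
    pair = sum(weights[i][j] * state[i] * state[j]
               for i in range(n) for j in range(i + 1, n))
    return -pair - sum(b * s for b, s in zip(bias, state))

# Toy network: unit 0 is a piece of evidence, unit 1 is hypothesis A
# (supported by the evidence), unit 2 is a rival hypothesis B.
# A and B weakly inhibit each other; the evidence weakly excites A.
W = [[0,  3,  0],
     [3,  0, -2],
     [0, -2,  0]]
b = [1.0, 0.0, 0.8]

final = settle(W, b)
# The network relaxes to: evidence on, A on, B suppressed.
```

The point of the example is that no unit "decides" anything globally; the interpretation emerges from many soft, local constraints settling together, which is the flavor of computation M & R argued underlies much of cognition.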

Put all these pieces together and we have the basis for a dialog.

But ...  demanding a finished AGI as an essential precondition for 
behaving in a mature way toward the work I have already published...?  I 
don't think so.  :-)

Richard Loosemore
