[ExI] Watson On Jeopardy

Samantha Atkins sjatkins at mac.com
Tue Feb 15 19:28:23 UTC 2011


On 02/15/2011 08:10 AM, David Lubkin wrote:
> What I'm curious about is to what extent Watson learns from his mistakes.
> Not by his programmers adding a new trigger pattern or tweaking
> parameters, but by learning processes within Watson.
>

I am not an expert on learning algorithms, but feedback from a 
negative result can be used to prune subsequent, sufficiently similar 
searches.
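A minimal sketch of that pruning idea (all names and the similarity measure are my own illustration, not anything known about Watson's internals): remember queries that produced wrong answers, and skip new candidates that look too much like a known failure.

```python
# Hypothetical sketch: prune candidate searches that closely resemble
# past failures. Similarity here is naive token overlap (Jaccard);
# a real system would use something far richer.
def similarity(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

class PruningSearcher:
    def __init__(self, threshold=0.8):
        self.failures = []          # queries that led to wrong answers
        self.threshold = threshold  # how similar is "too similar"

    def record_failure(self, query):
        self.failures.append(query)

    def should_try(self, query):
        # Skip a candidate if it is nearly identical to a known failure.
        return all(similarity(query, f) < self.threshold
                   for f in self.failures)

s = PruningSearcher()
s.record_failure("capital city of australia sydney")
print(s.should_try("capital city of australia sydney area"))  # pruned: False
print(s.should_try("largest moon of saturn"))                 # allowed: True
```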


> Most successful people and organizations view their mistakes as a
> tremendous opportunity for improvement. After several off-by-one
> errors in my code, I realize I am prone to those errors, and specially
> check for them. When I see repeated misunderstanding of the referent
> of pronouns, I add the practice of pausing a conversation to clarify who
> "they" refers to where it's ambiguous.

It is not a good idea to extrapolate directly from what a human would 
do to what may or may not be programmed into Watson, or to what is 
and is not currently programmable as a form of learning.

>
> Limited to Jeopardy, it isn't always clear what kind of question a
> category calls for. Champion players will immediately discern why a
> question was ruled wrong and adapt their game on the fly.

Yes, and the same comment applies.

>
> Parenthetically, there is a divide in competitions between playing the
> game and playing your opponent. Take chess. Some champions
> make the objectively best move. Emanuel Lasker chose "lesser"
> moves that he calculated would succeed against *that* player.
> Criticized for it, he'd point out that he won the game, didn't he?
>
> I wonder how often contestants deliberately don't press their buzzer
> because they assess that one of their opponents will think they know
> the answer but will get it wrong.

I very much doubt that Watson includes this level of modelling, 
successfully guessing other players' likely success on a particular 
question.   It would be really impressive if it did, and I would be 
very interested in the algorithms employed to make it possible.

>
> Tie game. $1200 clue. I buzz, get it right, $1200. I wait, Spike buzzes,
> gets it right, I'm down $1200. Spike buzzes, gets it wrong, I answer,
> I'm up $2400. No one buzzes, I've lost a chance to be up $1200.

I would expect Watson to answer only when its computed probability of 
being correct was sufficiently high.
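The quoted scenario suggests the simplest form such a rule could take (my illustration only, with a made-up threshold knob; nothing here reflects Watson's actual decision logic): since a wrong answer costs the clue's value, the expected value of buzzing with confidence p on a clue worth v is p*v - (1-p)*v, which is positive only when p > 0.5.

```python
# Hypothetical sketch of a confidence-threshold buzz decision.
# A wrong answer costs the clue's value, so:
#     EV(buzz) = p * v - (1 - p) * v
# which is positive only when p > 0.5. A real system would also fold
# in game state and opponent behavior; this ignores both.
def should_buzz(confidence, value, min_confidence=0.6):
    expected_value = confidence * value - (1 - confidence) * value
    return expected_value > 0 and confidence >= min_confidence

print(should_buzz(0.9, 1200))   # expected gain, above threshold: True
print(should_buzz(0.3, 1200))   # expected loss: False
```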

>
> I suspect that it doesn't happen very often because of the pressure of
> the moment. (I know contestants but asking them wouldn't answer
> the question.) If so, that's another way for Watson to have an edge.
>
> (Except that last night showed that Watson doesn't yet know what
> the other players' answers were. Watson 2.0 would listen to the game.
> Build a profile of each player. Which questions they buzzed on, how
> long it took, how long it took after buzzing for them to speak their
> answer, voice-stress analysis of how confident they sounded, how
> correct the answer was. (Essentially part of what an expert poker
> player does.)

It would be a fun research project to build that correlation set and 
tweak its predictive abilities.
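The core of that correlation set might start as something like this (a toy sketch under my own assumptions; the player name and statistics kept are illustrative, not drawn from any real system): track each player's buzz rate and accuracy, then use the running frequencies as crude predictions.

```python
# Hypothetical sketch of the opponent profile suggested above: per
# player, count clues seen, buzzes, and correct answers, and estimate
# how likely they are to buzz and be right on the next clue.
from collections import defaultdict

class OpponentModel:
    def __init__(self):
        self.stats = defaultdict(
            lambda: {"clues": 0, "buzzes": 0, "correct": 0})

    def observe(self, player, buzzed, correct=False):
        s = self.stats[player]
        s["clues"] += 1
        if buzzed:
            s["buzzes"] += 1
            if correct:
                s["correct"] += 1

    def p_buzz(self, player):
        s = self.stats[player]
        return s["buzzes"] / s["clues"] if s["clues"] else 0.0

    def p_correct_given_buzz(self, player):
        s = self.stats[player]
        return s["correct"] / s["buzzes"] if s["buzzes"] else 0.0

m = OpponentModel()
m.observe("Ken", buzzed=True, correct=True)
m.observe("Ken", buzzed=True, correct=False)
m.observe("Ken", buzzed=False)
print(m.p_buzz("Ken"))                # 2 buzzes / 3 clues
print(m.p_correct_given_buzz("Ken"))  # 1 correct / 2 buzzes
```

Timing, voice stress, and the rest of the signals mentioned above would just be more columns in the same per-player record.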

- s





More information about the extropy-chat mailing list