[ExI] Zen19 is now at 5-dan in Go

Kelly Anderson kellycoinguy at gmail.com
Fri Feb 24 18:22:41 UTC 2012


On Fri, Feb 24, 2012 at 4:10 AM, Eugen Leitl <eugen at leitl.org> wrote:
> http://blog.printf.net/articles/2012/02/23/computers-are-very-good-at-the-game-of-go
>
> Computers are very good at the game of Go
>
> Posted by Chris Ball Thu, 23 Feb 2012 13:07:00 GMT
>
> MIT Speed Go tournament, Jan 2012
>
> There's an attraction between computer programmers and the Asian game of Go.
> I think there's a lot to like about the game — it has very simple rules, high
> complexity (it's "deeper" than chess) and pleasing symmetry and aesthetics. I
> think the real reason programmers are so drawn to it might be a little more
> self-involved, though: being good at things that computers aren't good at
> tends to make programmers happy, and computers are terrible at Go.[1]

That's one reason I like Go so much. :-)  It is a pride thing. I am a
mediocre Go player myself, ranked at 11k on IGS, a level that takes
years to reach but is nowhere near competitive. Computers passed me a
few years back.

> Or at least, that's the folk knowledge that's been true for most of my life.
> Computers have always been worse than an average amateur with a few years of
> experience, and incomparably bad to true professionals of the game. Someone I
> was talking to brought up the ineptitude of computers at Go a few days ago,
> talking about new ideas for CAPTCHAs: "just make the human solve Go
> problems", they said, and you're done; computers can't do that, right?

Except that most humans couldn't solve most Go problems... LOL. And
any Go problem a computer could verify would, by definition, be the
kind a computer could solve... :-)

Computers have had a mathematically perfect endgame for over 15
years; the algorithm is published in a book (Berlekamp and Wolfe's
Mathematical Go). Just when the endgame begins is, of course, an
interesting question. :-)

> So, I've enjoyed this feeling of technological superiority to computers as
> much as anyone, and it hurts me a little to say this, but here I go: the idea
> that computers are bad at Go is not remotely true anymore. Computers are
> excellent at Go now.

Arrrgh!!!! Soon, they will be good at programming too, I fear! :-)

> To illustrate this, there's some history we should go
> into:
>
> Back in 1997 — the year that Deep Blue beat chess world champion Garry
> Kasparov for the first time — Darren Cook asked Computer Go enthusiasts for
> predictions on when computers would get to shodan (a strong amateur level) and
> when they would beat World Champion players. John Tromp, an academic researcher
> and approximately shodan-level amateur, noticed the optimism of the guesses
> and wondered aloud whether the bets would continue to be optimistic if money
> were on the line, culminating in:
>
>    John Tromp: "I would happily bet that I won't be beaten in a 10 game
> match before the year 2011."
>
>
> Darren Cook took the bet for $1000, and waited until 2010 before conducting
> the match against the "Many Faces of Go" program: Tromp won by 4 games to 0.

Many Faces of Go isn't all that good... :-)

> That seemed to settle things, but the challenge was repeated last month, from
> January 13th-18th 2012, and things happened rather differently. This time
> Tromp was playing a rising star of a program named Zen19, which won first
> place at the Computer Go Olympiad in 2009 and 2011. The results are in, and:
>
> Zen19 won by 3 games to 1. (For more reading on the challenge, see David
> Ormerod's page or Darren Cook's.)

Ouch.

> Beating an amateur shodan-level player is shocking by itself — that's my
> strength too, and I wasn't expecting a computer to be able to beat me anytime
> soon — but that isn't nearly the limit of Zen19's accomplishments: its
> progress on the KGS Go server looks something like this:

Is KGS an alternative to IGS?

> Year    KGS Rank
> 2009    1-dan
> 2010    3-dan
> 2011    4-dan
> 2012    5-dan
>
> To put the 5-dan rank in perspective: amongst the players who played American
> Go Association rated games in 2011, there were only 105 players who were
> 6-dan and above.[2] This suggests that there are only around 100 active
> tournament players in the US who are significantly stronger than Zen19. I'm
> sure I'll never become that strong myself.[3]

Me neither, no way. Unless I'm enhanced with a Zen19 module in my brain... LOL.

> Being able to gain four dan-level ranks in three years is incredible, and
> there's no principled reason to expect that Zen19 will stop improving — it
> seems to have aligned itself on a path where it just continues getting better
> the more CPU time you throw at it, which is very reminiscent of the story
> with computer chess. Even more reminiscent (and frustrating!) is the
> technique used to get it to 5-dan. Before I explain how it works, I'll
> explain how an older Go program worked, using my favorite example: NeuroGo.
>
> NeuroGo dates back to 1996, and has what seems like a very plausible design:
> it's a hierarchical set of neural networks, containing a set of "Experts"
> that each get a chance to look at the board and evaluate moves. It's also
> possible for an expert to override the other evaluators — for example, a
> "Life and Death Expert" module could work out whether there's a way to "kill"
> a large group, and an "Opening Expert" could play the first moves of the game
> where balance and global position are most important. This seems to provide a
> nice balance to the tension of different priorities when considering what to
> play.
>
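
That expert-override scheme is easy to picture. Here is a minimal
sketch of the control flow in Python, with method names I invented
for illustration (NeuroGo's real implementation surely differs):

def evaluate_move(board, move, experts):
    # Each expert is a module (e.g. a neural network) that scores a
    # candidate move. If one expert claims authority over this
    # position (say, a life-and-death expert that has read out a
    # kill), its verdict overrides everyone else; otherwise the
    # experts' opinions are simply averaged.
    for expert in experts:
        if expert.claims_authority(board):
            return expert.score(board, move)
    return sum(e.score(board, move) for e in experts) / len(experts)
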
> Zen19, on the other hand, incorporates almost no knowledge about how to make
> strong Go moves! It's implemented using Monte Carlo Tree Search, as are all
> of the recent strong Go programs. Monte Carlo methods involve, at the most
> basic level, choosing between moves by generating many thousands of random
> games that stem from each possible move and picking the move that leads to
> the games where you have the highest score; you wouldn't expect such a random
> technique to work for a game as deep as Go, but it does. This makes me sad
> because while I wasn't foolish enough to believe that humans would always be
> better at Go than computers, I did think that the process of making a
> computer that is very good at Go might be equivalent to the process of
> acquiring a powerful understanding of how human cognition works; that the
> failure of brute-force solutions to Go would mean that we'd need a way to
> approximate how humans approach Go before we'd start to be able to beat
> strong human players reliably by implementing that same approach in silico.

I often wondered the same thing. But "intelligence" appears to have
different paths... that is, the same level of performance at a task
can be achieved by very different means.
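
To make the Monte Carlo idea concrete, here's a toy sketch in Python.
It is not Zen19's actual code (that's closed-source); the Board
methods are hypothetical names I made up for illustration:

import random

def random_playout(board, player):
    # Play uniformly random legal moves until the game ends, then
    # report whether `player` won. Board.legal_moves(), .play(),
    # .is_over(), and .winner() are placeholder method names.
    while not board.is_over():
        board = board.play(random.choice(board.legal_moves()))
    return board.winner() == player

def monte_carlo_move(board, player, playouts=1000):
    # Score each candidate move by the fraction of random games won
    # from the resulting position, and pick the best-scoring move.
    best_move, best_rate = None, -1.0
    for move in board.legal_moves():
        wins = sum(random_playout(board.play(move), player)
                   for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move

Real programs like Zen19 put a tree search on top of this: instead of
sampling every move equally, they grow a tree and spend more playouts
on the moves that are winning so far (the UCT formula is the usual
choice), but the flat version above is the core of the trick.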

> I think that programs like Zen19 have actually learned even less about how to
> play good Go than computer chess programs have learned about how to play good
> chess; at least the chess programs contain heuristics about how to play
> positionally and how to value different pieces (in the absence of overriding
> information like a path to checkmate that involves sacrificing them). This
> lack of inbuilt Go knowledge shows up in Zen19's games — it regularly makes
> moves that look obviously bad, breaking proverbs about good play and stone
> connectivity, leaving you scratching your head at how it's making decisions.
> You can read a commented version of one of its wins against Tromp at
> GoGameGuru, or you could even play against it yourself on KGS.

Fun, will try it.

> Update: Matthew Woodcraft comments below that Zen19 does contain significant
> domain knowledge about Go. It's hard to know exactly how much; it's
> closed-source.
>
> Zen19 is beating extremely strong amateurs, but it hasn't beaten
> professionals in games with no handicap yet. That said, now that we know that
> Zen19 is using Monte Carlo strategies, the reason why it seems to be getting
> stronger as it's fed more CPU time is revealed: these strategies are the most
> obviously parallelizable algorithms out there, and for all we know this exact
> version of Zen19 could end up becoming World Champion if a few more orders of
> magnitude of CPU time were made available to it.
>
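It's easy to see why it parallelizes so well: every playout is
independent, so you can farm them out across cores or machines with
almost no coordination. A rough sketch, reusing the hypothetical
random_playout from my earlier example (board_after_move and player
are placeholders for a real position):

from multiprocessing import Pool

def batch_win_rate(args):
    # Run n independent random playouts from one candidate position.
    board, player, n = args
    return sum(random_playout(board, player) for _ in range(n)) / n

# Split 100,000 playouts of a single candidate move across 8 worker
# processes; each worker only needs a copy of the starting position,
# so adding CPUs scales the playout count almost linearly.
with Pool(processes=8) as pool:
    rates = pool.map(batch_win_rate,
                     [(board_after_move, player, 12500)] * 8)
win_rate = sum(rates) / len(rates)
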
> Which would feel like a shame, because I was really looking forward to seeing
> us figure out how brains work.
>
> [1] I think this is probably why I've never been interested in puzzles like
> Sudoku; I can't escape the feeling of "I could write a Perl script that does
> this for me". If I wouldn't put up with such manual labor in my work life,
> why should I put up with it for fun?

I like Sudoku, it relaxes my mind. But Go, on the other hand, gets my
brain highly involved.

> [2] Here is the query I used to come up with the 105 number.
>
> [3] In fact, KGS ranks are stronger than the same-numbered AGA rank, so the
> correct number of active players in the US who are stronger than Zen19 may be
> even smaller. Being stronger than US players isn't the same as being stronger
> than professional players, though — there are many players that are much
> stronger than amateur 5-dan in Asia, because there are high-value tournaments
> and incentives to dedicate your life to mastering Go there that don't exist
> elsewhere.

Yes... Here in Utah, we have one of the 5-dan amateurs, and he is a
transplant from Asia. :-)

I haven't been active in local Go in a long, long time.

-Kelly



