[ExI] Zen19 is now at 5-dan in Go
Eugen Leitl
eugen at leitl.org
Fri Feb 24 11:10:22 UTC 2012
http://blog.printf.net/articles/2012/02/23/computers-are-very-good-at-the-game-of-go
Computers are very good at the game of Go
Posted by Chris Ball Thu, 23 Feb 2012 13:07:00 GMT
[Photo: MIT Speed Go tournament, Jan 2012]
There's an attraction between computer programmers and the Asian game of Go.
I think there's a lot to like about the game — it has very simple rules, high
complexity (it's "deeper" than chess) and pleasing symmetry and aesthetics. I
think the real reason programmers are so drawn to it might be a little more
self-involved, though: being good at things that computers aren't good at
tends to make programmers happy, and computers are terrible at Go.[1]
Or at least, that's the folk knowledge that's been true for most of my life.
Computers have always been worse than an average amateur with a few years of
experience, and nowhere near the level of true professionals of the game.
Someone I
was talking to brought up the ineptitude of computers at Go a few days ago,
talking about new ideas for CAPTCHAs: "just make the human solve Go
problems", they said, and you're done; computers can't do that, right?
So, I've enjoyed this feeling of technological superiority to computers as
much as anyone, and it hurts me a little to say this, but here I go: the idea
that computers are bad at Go is not remotely true anymore. Computers are
excellent at Go now. To illustrate this, there's some history we should go
into:
Back in 1997 — the year that Deep Blue beat chess world champion Garry
Kasparov for the first time — Darren Cook asked Computer Go enthusiasts for
predictions of when computers would reach shodan (a strong amateur level) and
when they would beat World Champion players. John Tromp, an academic researcher
and approximately shodan-level amateur, noticed the optimism of the guesses
and wondered aloud whether the predictions would stay so optimistic if money
were on the line, culminating in:
John Tromp: "I would happily bet that I won't be beaten in a 10 game
match before the year 2011."
Darren Cook took the bet for $1000, and waited until 2010 before conducting
the match against the "Many Faces of Go" program: Tromp won by 4 games to 0.
That seemed to settle things, but the challenge was repeated last month, from
January 13th-18th 2012, and things happened rather differently. This time
Tromp was playing a rising star of a program named Zen19, which won first
place at the Computer Go Olympiad in 2009 and 2011. The results are in, and:
Zen19 won by 3 games to 1. (For more reading on the challenge, see David
Ormerod's page or Darren Cook's.)
Beating an amateur shodan-level player is shocking by itself — that's my
strength too, and I wasn't expecting a computer to be able to beat me anytime
soon — but that isn't nearly the limit of Zen19's accomplishments: its
progress on the KGS Go server looks something like this:
Year    KGS Rank
2009    1-dan
2010    3-dan
2011    4-dan
2012    5-dan
To put the 5-dan rank in perspective: amongst the players who played American
Go Association rated games in 2011, there were only 105 players rated 6-dan or
above.[2] This suggests that there are only around 100 active tournament
players in the US who are significantly stronger than Zen19. I'm sure I'll
never become that strong myself.[3]
Being able to gain four dan-level ranks in three years is incredible, and
there's no principled reason to expect that Zen19 will stop improving — it
seems to have settled onto a path where it just keeps getting better
the more CPU time you throw at it, which is very reminiscent of the story
with computer chess. Even more reminiscent (and frustrating!) is the
technique used to get it to 5-dan. Before I explain how it works, I'll
explain how an older Go program worked, using my favorite example: NeuroGo.
NeuroGo dates back to 1996, and has what seems like a very plausible design:
it's a hierarchical set of neural networks, containing a set of "Experts"
that each get a chance to look at the board and evaluate moves. It's also
possible for an expert to override the other evaluators — for example, a
"Life and Death Expert" module could work out whether there's a way to "kill"
a large group, and an "Opening Expert" could play the first moves of the game
where balance and global position are most important. This seems like a nice
way to resolve the tension between competing priorities when deciding what to
play.
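To make that committee-of-experts design a bit more concrete, here is a rough
Python sketch of the pattern described above. All the names are hypothetical
and the board object is assumed; NeuroGo's actual neural networks are not
reproduced here.

    class Expert:
        # One evaluator in the committee; subclasses supply the real logic.
        def evaluate(self, board, move):
            # Score `move` on `board`; higher is better. Default: no opinion.
            return 0.0
        def override(self, board):
            # Return a forced move, or None to defer to the committee.
            return None

    def choose_move(board, experts, legal_moves):
        # Any expert (say, a life-and-death reader) may force a move outright.
        for expert in experts:
            forced = expert.override(board)
            if forced is not None:
                return forced
        # Otherwise pick the move with the best combined evaluation.
        return max(legal_moves,
                   key=lambda m: sum(e.evaluate(board, m) for e in experts))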
Zen19, on the other hand, incorporates almost no knowledge about how to make
strong Go moves! It's implemented using Monte Carlo Tree Search, as are all
of the recent strong Go programs. Monte Carlo methods involve, at the most
basic level, choosing between moves by generating many thousands of random
games stemming from each candidate move and picking the move whose random
games score best for you on average; you wouldn't expect such a random
technique to work for a game as deep as Go, but it does. This makes me sad
because while I wasn't foolish enough to believe that humans would always be
better at Go than computers, I did think that the process of making a
computer that is very good at Go might be equivalent to the process of
acquiring a powerful understanding of how human cognition works; that the
failure of brute-force solutions to Go would mean that we'd need a way to
approximate how humans approach Go before we'd start to be able to beat
strong human players reliably by implementing that same approach in silico.
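To make the basic Monte Carlo idea above concrete, here is a minimal Python
sketch of "flat" playouts. The board interface (copy, legal_moves, play,
is_over, score_for) is assumed rather than real, and strong programs like
Zen19 layer a full tree search on top of this idea instead of using it
directly.

    import random

    def random_playout(board, player):
        # Play uniformly random legal moves until the game ends, then return
        # the final score from `player`'s point of view (e.g. +1 win, -1 loss).
        board = board.copy()
        while not board.is_over():
            board.play(random.choice(board.legal_moves()))
        return board.score_for(player)

    def choose_move(board, player, playouts_per_move=1000):
        # For each candidate move, run many random playouts and keep the move
        # whose playouts score best in total (equivalently, on average).
        best_move, best_total = None, float("-inf")
        for move in board.legal_moves():
            after = board.copy()
            after.play(move)
            total = sum(random_playout(after, player)
                        for _ in range(playouts_per_move))
            if total > best_total:
                best_move, best_total = move, total
        return best_move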
I think that programs like Zen19 have actually learned even less about how to
play good Go than computer chess programs have learned about how to play good
chess; at least the chess programs contain heuristics about how to play
positionally and how to value different pieces (in the absence of overriding
information like a path to checkmate that involves sacrificing them). This
lack of inbuilt Go knowledge shows up in Zen19's games — it regularly makes
moves that look obviously bad, breaking proverbs about good play and stone
connectivity, leaving you scratching your head at how it's making decisions.
You can read a commented version of one of its wins against Tromp at
GoGameGuru, or you could even play against it yourself on KGS.
Update: Matthew Woodcraft comments below that Zen19 does contain significant
domain knowledge about Go. It's hard to know exactly how much; it's
closed-source.
Zen19 is beating extremely strong amateurs, but it hasn't beaten
professionals in games with no handicap yet. That said, now that we know that
Zen19 is using Monte Carlo strategies, the reason it keeps getting stronger as
it's fed more CPU time becomes clear: each random playout is independent of the
others, so these strategies parallelize almost trivially, and for all we know
this exact
version of Zen19 could end up becoming World Champion if a few more orders of
magnitude of CPU time were made available to it.
Which would feel like a shame, because I was really looking forward to seeing
us figure out how brains work.
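As an aside on that parallelization point: because each random playout is
independent, the flat Monte Carlo sketch above can be spread across worker
processes with very little extra code. This is only an illustrative sketch; it
assumes the same hypothetical board interface (and that boards can be pickled).

    from multiprocessing import Pool

    def score_move(args):
        # Independent unit of work: run `n` random playouts after `move`,
        # reusing random_playout() from the earlier sketch.
        board, player, move, n = args
        after = board.copy()
        after.play(move)
        return move, sum(random_playout(after, player) for _ in range(n))

    def choose_move_parallel(board, player, playouts_per_move=1000, workers=8):
        tasks = [(board, player, move, playouts_per_move)
                 for move in board.legal_moves()]
        with Pool(workers) as pool:
            results = pool.map(score_move, tasks)  # playouts run in parallel
        return max(results, key=lambda r: r[1])[0]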
[1]: I think this is probably why I've never been interested in puzzles like
Sudoku; I can't escape the feeling of "I could write a Perl script that does
this for me". If I wouldn't put up with such manual labor in my work life,
why should I put up with it for fun?
[2]: Here is the query I used to come up with the 105 number.
[3]: In fact, KGS ranks are stronger than the same-numbered AGA rank, so the
correct number of active players in the US who are stronger than Zen19 may be
even smaller. Being stronger than US players isn't the same as being stronger
than professional players, though — there are many players that are much
stronger than amateur 5-dan in Asia, because there are high-value tournaments
and incentives to dedicate your life to mastering Go there that don't exist
elsewhere.