[ExI] DeepMind Wins Pivotal Second Game In Match With Go Grandmaster

BillK pharos at gmail.com
Thu Mar 10 11:49:40 UTC 2016


After more than four hours of tight play and a rapid-fire endgame,
Google’s artificially intelligent Go-playing computer system has won a
second contest against grandmaster Lee Sedol, taking a
two-games-to-none lead in their historic best-of-five match in
downtown Seoul.
<http://www.wired.com/2016/03/googles-ai-wins-pivotal-game-two-match-go-grandmaster/>

Quote:

A New Autonomy

This is particularly true of AlphaGo, which is driven so heavily by
machine learning—technologies that allow it to learn tasks largely on
its own. Hassabis and his team originally built AlphaGo using what are
called deep neural networks, vast networks of hardware and software
that loosely mimic the web of neurons in the human brain. Essentially, they
taught AlphaGo to play the game by feeding thousands upon thousands of
human Go moves into these neural networks.

But then, using a technique called reinforcement learning, they
matched AlphaGo against itself. By playing match after match on its
own, the system could learn to play at an even higher level—perhaps at
a level that eclipses the skills of any human. That’s why it produces
such unexpected moves.

During the match, the commentators even invited DeepMind research
scientist Thore Graepel onto their stage to explain the system’s
rather autonomous nature. “Although we have programmed this machine to
play, we have no idea what moves it will come up with,” Graepel said.
“Its moves are an emergent phenomenon from the training. We just
create the data sets and the training algorithms. But the moves it
then comes up with are out of our hands—and much better than we, as Go
players, could come up with.”
------------

This really sounds like a big leap forward in AI.
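
For anyone curious what those two training stages look like in code,
here is a toy sketch in Python/NumPy. To be clear, this is just my own
illustration of the pipeline the article describes (supervised
imitation of human moves, then reinforcement learning from self-play),
not DeepMind's code: the 9x9 board, the single linear layer, the
REINFORCE-style update, and the random stand-in data are all
simplifying assumptions.

import numpy as np

BOARD = 9 * 9                   # toy 9x9 board; the real game is 19x19
W = 0.01 * np.random.randn(BOARD, BOARD)  # linear policy: position -> move logits
LR = 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# --- Stage 1: supervised learning, imitating a human move ---
position = np.random.randn(BOARD)      # stand-in for a real board position
human_move = np.random.randint(BOARD)  # stand-in for the expert's move
probs = softmax(W @ position)
grad = np.outer(probs, position)       # gradient of cross-entropy wrt W...
grad[human_move] -= position           # ...raises P(human_move), lowers the rest
W -= LR * grad

# --- Stage 2: reinforcement learning from self-play (REINFORCE-style) ---
# Play out a toy "game", then reinforce the chosen moves if they won
# (reward +1) and discourage them if they lost (reward -1).
game = []
for _ in range(10):
    position = np.random.randn(BOARD)        # stand-in position
    probs = softmax(W @ position)
    move = np.random.choice(BOARD, p=probs)  # sample a move from the policy
    game.append((position, probs, move))
reward = np.random.choice([1.0, -1.0])       # stand-in game outcome
for position, probs, move in game:
    grad = np.outer(probs, position)         # gradient of -log P(move) wrt W
    grad[move] -= position
    W -= LR * reward * grad                  # winner's moves reinforced, loser's discouraged

The real system replaces the single matrix here with deep
convolutional networks, adds a separate value network to judge
positions, and wraps the whole thing in Monte Carlo tree search, but
imitate-then-self-play is the core training loop being described.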


BillK



