[ExI] DeepMind wins Game1 in Go championship Match

BillK pharos at gmail.com
Wed Mar 9 20:45:44 UTC 2016


On 9 March 2016 at 20:12, Dave Sill  wrote:
> If all you want is a system that can play chess or go, that's clearly
> doable. But these are more simulated intelligence than artificial
> intelligence. Produce a system that can be taught any game the way a human
> learns it, and can learn to play it well via playing and studying the game,
> and *that* will be AI. It doesn't have to be a terribly complex game, and
> the level of play doesn't have to equal human masters: just the general
> ability to learn a game and improve its play will be tremendously more
> impressive than DeepMind.
>

AlphaGo (DeepMind's program) *did* teach itself to play Go better
— first from records of human games, then by playing against itself.

Deep Blue (the chess program) did a brute-force look-ahead search,
evaluating enormous numbers of possible positions. But you can't do
that with Go: there are far too many possible moves at each turn, so
the search tree blows up much faster.
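For a rough sense of the scale involved (a back-of-the-envelope sketch, assuming the commonly cited average branching factors of about 35 legal moves per turn in chess and about 250 in Go):

```python
# Back-of-the-envelope game-tree sizes. The branching factors are
# commonly cited rough averages (~35 for chess, ~250 for Go), not
# exact values.
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def tree_size(branching: int, plies: int) -> int:
    """Leaf positions in a uniform look-ahead tree of the given depth."""
    return branching ** plies

for plies in (4, 8):
    print(f"{plies} plies: chess ~{tree_size(CHESS_BRANCHING, plies):.1e}, "
          f"go ~{tree_size(GO_BRANCHING, plies):.1e}")
```

Even at a modest 8-ply look-ahead, the Go tree is billions of times larger than the chess tree, which is why brute force alone doesn't work.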

This post explains the techniques used.
<http://googleresearch.blogspot.co.uk/2016/01/alphago-mastering-ancient-game-of-go.html>

Now that they have neural networks suggesting promising moves (a
"policy" network) and then 'value-judging' the resulting positions (a
"value" network), these techniques can be applied to other domains.
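The combination can be sketched in miniature. This is a toy illustration of the two-network idea, not AlphaGo's actual tree search; the "game" here is a deliberately trivial stand-in (the state is a number, a move adds 1, 2, or 3 to it), and all the function names are hypothetical:

```python
# Toy sketch: a "policy" narrows the candidate moves, and a "value"
# scores positions at the search horizon instead of playing every
# game out to the end. Both are trivial stand-ins for real networks.

def policy(state: int) -> list:
    """Stand-in policy network: candidate moves, best guesses first."""
    return [3, 2, 1]

def value(state: int) -> float:
    """Stand-in value network: score a position without further search."""
    return float(state)

def apply_move(state: int, move: int) -> int:
    return state + move

def search(state: int, depth: int, top_k: int = 2) -> float:
    """Depth-limited search over only the policy's top-k suggestions."""
    if depth == 0:
        return value(state)             # value net replaces a full playout
    candidates = policy(state)[:top_k]  # policy net prunes the branching
    return max(search(apply_move(state, m), depth - 1, top_k)
               for m in candidates)

print(search(0, depth=2))
```

The point is that the search only ever visits top_k**depth positions rather than the full tree, and never has to play a game to its end to score a line, which is what makes deep look-ahead tractable in Go.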

"I'm sorry, Dave. I'm afraid I can't do that".

BillK



More information about the extropy-chat mailing list