[ExI] DeepMind wins Game1 in Go championship Match
0.20788 at gmail.com
Thu Mar 10 13:42:49 UTC 2016
> If all you want is a system that can play chess or go, that's clearly
> doable. But these are more simulated intelligence than artificial
> intelligence. Produce a system that can be taught any game the way a human
> learns it, and can learn to play it well via playing and studying the game,
> and *that* will be AI. It doesn't have to be a terribly complex game, and
> the level of play doesn't have to equal human masters: just the general
> ability to learn a game and improve its play will be tremendously more
> impressive than DeepMind.
So, what you want is a system that, without knowing anything in
advance about Go, can be taught the rules via natural language and
vision, and can then begin playing and improving its play, maybe even
incorporating further knowledge and analysis of the game through its
natural language interface, e.g. books and instruction, as well as its
own "experience" and processing. Okay, that sounds like it might be
within reach of current technology. Of course, such a system would not
start out playing at master level and maybe never would without the
help of AlphaGo's team tricking it up in every way they can think of.
But neither do most humans.
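To make "improving its play via playing" concrete: the learning half of that picture, stripped of the natural-language and vision front end, is roughly tabular reinforcement learning. Here is a minimal sketch, not anything resembling AlphaGo's actual method: a Q-learning agent that starts knowing only the legal moves of a toy game (subtraction-game Nim, take 1-3 objects, taking the last one wins) and improves purely by playing against a random opponent. All names and parameter values here are illustrative choices, not anything from the original discussion.

```python
import random

def legal_moves(pile):
    """Subtraction-game Nim: take 1-3 objects; whoever takes the last one wins."""
    return range(1, min(3, pile) + 1)

def train(episodes=40000, eps=0.3, seed=1):
    """Tabular Q-learning against a random opponent, with count-averaged updates.

    The agent is given only the rules (legal_moves) and the win/loss signal;
    everything else it picks up from its own "experience" of play.
    """
    rng = random.Random(seed)
    Q, N = {}, {}  # Q: (pile, move) -> estimated value; N: visit counts
    for _ in range(episodes):
        pile = rng.randint(1, 12)
        while pile > 0:
            moves = list(legal_moves(pile))
            # epsilon-greedy: usually play the current best guess, sometimes explore
            if rng.random() < eps:
                move = rng.choice(moves)
            else:
                move = max(moves, key=lambda a: Q.get((pile, a), 0.0))
            nxt = pile - move
            if nxt == 0:
                target = 1.0            # we took the last object: win
            else:
                nxt -= rng.choice(list(legal_moves(nxt)))  # random opponent reply
                if nxt == 0:
                    target = -1.0       # opponent took the last object: loss
                else:
                    # bootstrap from our own current estimate of the next position
                    target = max(Q.get((nxt, a), 0.0) for a in legal_moves(nxt))
            n = N[(pile, move)] = N.get((pile, move), 0) + 1
            old = Q.get((pile, move), 0.0)
            Q[(pile, move)] = old + (target - old) / n  # running average of targets
            pile = nxt
    return Q

def best_move(Q, pile):
    """After training, play greedily from the learned table."""
    return max(legal_moves(pile), key=lambda a: Q.get((pile, a), 0.0))
```

After enough self-play the agent rediscovers the optimal policy (leave the opponent a multiple of four), despite never having been told it. The point is only that "learns from play" is a mechanizable notion, not that this trivial game says anything about Go.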
> No, that's not what I mean. Building a table of all of the possible
> tic-tac-toe games and playing perfectly by consulting it isn't
> intelligence. Intelligence is determining that by constructing such a table
> one can play perfectly.
It seems perfectly reasonable to me to call a look-up table a form of
intelligence, since it can produce appropriate behavior in response to
arbitrarily complicated situations, provided those situations stay
within the scope of the table's validity. But obviously, this is an
extremely brittle form of intelligence which does not include any
dynamic learning or discovery, and will fail outside its narrow scope.
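The tic-tac-toe table in the quoted passage is small enough to exhibit in full. A hedged sketch of the distinction being argued: a memoized minimax fills in the value of every reachable position once (the "intelligent" construction step), after which "play" is nothing but table lookup. The 9-character string representation is just one arbitrary encoding.

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, as index triples.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)  # the cache *is* the lookup table
def value(board, player):
    """Game value for `player` to move: +1 win, 0 draw, -1 loss (perfect play)."""
    if winner(board) is not None:
        return -1          # the previous player just completed a line
    if ' ' not in board:
        return 0           # full board, no line: draw
    opp = 'O' if player == 'X' else 'X'
    return max(-value(board[:i] + player + board[i+1:], opp)
               for i, c in enumerate(board) if c == ' ')

def table_move(board, player):
    """Play purely by consulting the table: pick a value-maximizing square."""
    opp = 'O' if player == 'X' else 'X'
    return max((i for i, c in enumerate(board) if c == ' '),
               key=lambda i: -value(board[:i] + player + board[i+1:], opp))
```

Once `value` has been called on the empty board, every reachable position sits in the cache and `table_move` never computes anything new; it exhibits the brittleness described above, since the table says nothing about 4x4 boards or any other game.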
> human-level intelligence is much, much more than that. The goalposts
> haven't moved, we just haven't made any real progress in AI.
Who is saying this is human-level intelligence in what we consider a
"general" scope, i.e. the actual scope of human intelligence? It is
human-level intelligence in a very narrow scope. Really, are you going
to be the last person still insisting "we just haven't made any real
progress in AI"?