[ExI] alpha zero

Dylan Distasio interzone at gmail.com
Thu Dec 7 19:29:38 UTC 2017


Spike-

There is a lot of detail in the paper.  Training was not done on
extravagant hardware.  If I'm reading it right, it was done on one PC using
4 TPUs (these are custom ASICs from Google that are very good at running
TensorFlow, which is their deep learning framework).

You can read more about what a TPU actually is here if you're interested;
in short, it's custom hardware optimized for running neural nets:
https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu

It took 9 hours to train on 44 million self-play chess games.
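
To give a feel for the core idea (a program improving purely by playing
against itself, with no human games as input), here's a toy sketch in
Python.  To be clear, this is not AlphaZero's actual algorithm, which pairs
a deep network with Monte Carlo tree search; it's just a tabular value
function learning tic-tac-toe from self-play:

import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

V = {}  # board string -> estimated probability that 'X' wins

def value(b):
    return V.get(''.join(b), 0.5)

def choose(b, player, eps=0.1):
    # epsilon-greedy: usually pick the move with the best value estimate
    moves = [i for i, c in enumerate(b) if c == ' ']
    if random.random() < eps:
        return random.choice(moves)
    best = max if player == 'X' else min  # X maximizes, O minimizes
    return best(moves, key=lambda m: value(b[:m] + [player] + b[m+1:]))

def self_play(alpha=0.2):
    b, player, seen = [' '] * 9, 'X', []
    while True:
        b[choose(b, player)] = player
        seen.append(''.join(b))
        w = winner(b)
        if w or ' ' not in b:
            z = 1.0 if w == 'X' else (0.0 if w == 'O' else 0.5)
            for s in seen:  # nudge every visited state toward the outcome
                V[s] = V.get(s, 0.5) + alpha * (z - V.get(s, 0.5))
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(20000):
    self_play()

After a few tens of thousands of games it plays sensibly without ever
having seen a human move; the same "zero human knowledge" idea, minus the
deep net and the search.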

If you were so inclined, you could lease 4 TPUs from Google via their cloud
platform and get similar results.  This is readily available, relatively
modest hardware, not a supercomputer or a massively parallel architecture.
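
A rough sketch of what driving a leased TPU looks like from code, assuming
a recent TensorFlow; 'my-tpu' is a hypothetical name for a TPU node you
would create with Google's cloud tooling:

import tensorflow as tf

# Connect to a leased Cloud TPU and wrap model building in a TPU
# distribution strategy.  'my-tpu' is a placeholder node name.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer='adam', loss='mse')
# model.fit(...) would now run its training steps on the TPU cores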

In fact, if you have a good gaming PC with a high-end Nvidia GPU, you would
be surprised at what you can do with deep learning out of the box.
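
For instance, here's a minimal sanity check (assuming you have CUDA and the
drivers installed); if TensorFlow sees the GPU, the little training run
below lands on it automatically:

import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # should list your GPU

# Train a tiny dense net on MNIST as a smoke test
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=2, batch_size=128)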

Amazon Web Services also offers plug-and-play stock deep learning Linux
images paired with the highest-end Nvidia GPUs; I have used these to train
my own nets.
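
Here's a minimal sketch of spinning one up with boto3, assuming your AWS
credentials are configured.  The AMI ID and key pair name are placeholders;
look up the current Deep Learning AMI for your region:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
resp = ec2.run_instances(
    ImageId='ami-XXXXXXXX',     # placeholder: current Deep Learning AMI ID
    InstanceType='p3.2xlarge',  # one NVIDIA V100 GPU
    KeyName='my-keypair',       # placeholder: your EC2 key pair
    MinCount=1,
    MaxCount=1,
)
print(resp['Instances'][0]['InstanceId'])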

The combination of easily available cloud-based machine learning tools is
what is going to usher in the revolution in this space.  In addition,
Google is starting to allow one machine learning process to automatically
generate the best iteration it can find of another machine learning
process.  It's called AutoML and is something to watch:
https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html
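
As far as I know you can't call AutoML itself from code yet, but the
underlying idea (one learning process searching over configurations of
another) is easy to illustrate with a toy random search using scikit-learn:

import random
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
best_score, best_cfg = 0.0, None
for _ in range(10):
    # sample a random network configuration
    cfg = {
        'hidden_layer_sizes': tuple(random.choice([32, 64, 128])
                                    for _ in range(random.randint(1, 3))),
        'alpha': 10 ** random.uniform(-5, -2),
    }
    score = cross_val_score(MLPClassifier(max_iter=300, **cfg),
                            X, y, cv=3).mean()
    if score > best_score:
        best_score, best_cfg = score, cfg
print('best config found:', best_cfg, 'score:', best_score)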

There is so much power available cheaply, and so many great open source
implementations of this type of technology, just waiting for the next
person to get creative and have a breakthrough.


On Thu, Dec 7, 2017 at 1:53 PM, spike <spike66 at att.net> wrote:

> *From:* extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] *On
> Behalf Of* spike
> *Subject:* Re: [ExI] alpha zero
>
> *From:* extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org] *On
> Behalf Of* John Clark
>
> >>> …I still haven’t convinced myself it is true.
>
> >> …If this is a hoax it's a very elaborate one the likes of which we
> haven't seen since the cold fusion fiasco.
>
> …John K Clark
>
> > …Ja I could have clarified my doubt a bit.  I don’t suspect an
> intentional hoax, rather something they neglected to tell us…spike
>
> Further clarification, since I want to make very clear I am not accusing
> the Alpha Zero guys of hoaxing us, nor am I accusing ChessNews of
> intentionally misleading reporting.
>
> The analogy is more like this: consider a time when something big happened
> and you were there, then the newspaper reported it.  You read the story as
> written and say oooooh no, that isn’t what happened there at all.  The
> facts are all correct as written, but the story paints a very different
> picture from what really took place.  The reporters weren’t there; they
> talked to some people and wrote it up the best they could, but it just
> wasn’t right.
>
> I suspect what we are seeing here is unintentionally misleading reporting,
> with some key aspects missing or accidentally reported incorrectly.  For
> instance, suppose the company somehow came into possession of a million
> nodes computing in parallel for a day, then collected the results.  The
> story was written by a chess guy who may or may not understand all the
> technical details.
>
> I think Alpha Zero is impressive as all hell, but I ha’ me doots they
> somehow managed to get this much better than Stockfish in a day.  As
> written, I am confident there is something wrong or accidentally
> misleading: probably the use of massively parallel computing resources, or
> the use of some known-good standard chess engine whose parameters were
> tuned by the results of a massively parallel effort.  Dunno.  Something is
> missing here.  Or accidentally overstated.
>
> This is a heady trip for those of us into computer chess, for a good
> computer chess program teaches us to play better chess.  As we program
> computers, computers program us.
>
> I welcome a detailed technical paper by the Alpha Zero people; then we can
> compare the ChessNews article with the Alpha Zero paper.
>
> spike
>