[ExI] Superhuman Poker

Dylan Distasio interzone at gmail.com
Thu Jul 18 16:04:26 UTC 2019


On Thu, Jul 18, 2019 at 11:58 AM John Clark <johnkclark at gmail.com> wrote:

> On Thu, Jul 18, 2019 at 10:43 AM Dave Sill <sparge at gmail.com> wrote:
>
>
> *> Pluribus isn't modifying its own code. When I said it'd say "I just
>> pick the statistically best play", that was overly simplified. It's more
>> like "I pick the statistically best play and continually look at my
>> previous play and try different things and adjust the probabilities so I
>> can do better next time".*
>>
>
> Dave, a program is just code. If a program has changed its behavior then
> the code must have changed. If a human didn't change the code and the
> program received no new input from the outside world, then by process of
> elimination it must have been the program itself that changed the code.
> And if that change resulted in it making more money playing poker, then
> the program has become more intelligent.
>
>
John, deep learning and reinforcement learning algorithms don't work the
same way as classical programs.  As Dave mentioned, the code for these
algorithms absolutely does not change between iterations.  The statistical
model does: weights within the model change based on feedback from each
learning iteration, but the code itself remains untouched.
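
To make that concrete, here's a toy sketch in Python.  It is not
Pluribus's actual code (Pluribus builds on a far more elaborate form of
Monte Carlo counterfactual regret minimization); this is just plain regret
matching for rock-paper-scissors against a made-up opponent mix, chosen
for illustration.  The point is that the functions below never change
while it runs; only the numbers in regret_sum and strategy_sum do.

import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
regret_sum = [0.0] * ACTIONS    # "weights" that accumulate feedback
strategy_sum = [0.0] * ACTIONS  # running total for the average strategy

def current_strategy():
    # Play each action in proportion to its positive accumulated regret.
    positive = [max(r, 0.0) for r in regret_sum]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # uniform until regrets accumulate

def payoff(a, b):
    # +1 if action a beats b, -1 if it loses, 0 on a tie.
    return [0, -1, 1, 1, 0, -1, -1, 1, 0][a * 3 + b]

def train(iterations, opponent=(0.4, 0.3, 0.3)):
    for _ in range(iterations):
        strategy = current_strategy()
        my_action = random.choices(range(ACTIONS), weights=strategy)[0]
        opp_action = random.choices(range(ACTIONS), weights=opponent)[0]
        actual = payoff(my_action, opp_action)
        for a in range(ACTIONS):
            # Regret = how much better action a would have done than
            # what we actually played.  This is the only "learning".
            regret_sum[a] += payoff(a, opp_action) - actual
            strategy_sum[a] += strategy[a]

train(100000)
total = sum(strategy_sum)
print([s / total for s in strategy_sum])  # drifts toward paper-heavy play

Run it and print regret_sum partway through training: the values drift
around from iteration to iteration while the source stays byte-for-byte
identical.  That's the distinction Dave and I are drawing.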

