[ExI] AlphaZero

Darin Sunley dsunley at gmail.com
Fri Dec 8 02:17:14 UTC 2017


I strongly suspect that, when we get around to training neural nets both
to perform particular complex tasks and to introspect on precisely how
they perform those tasks, the neural net's introspection reports, like
human introspection reports, will bear no resemblance whatsoever to what
is going on down at the neuron level.

On Thu, Dec 7, 2017 at 7:10 PM, John Clark <johnkclark at gmail.com> wrote:

> On Thu, Dec 7, 2017 at 7:32 PM, Dave Sill <sparge at gmail.com> wrote:
>
>
>> AlphaGo has no understanding of the games it plays. It can't explain
>> its strategy.
>>
>
> A human player couldn't explain exactly why he made the move he did
> rather than one of the astronomical number of other moves he didn't
> even consider; he would just say, "From experience I know that when the
> board is in this general sort of position only a small number of moves
> are even worth considering, and I had a good feeling about one of them,
> so I made it." I imagine AlphaGo would say much the same thing. After
> all, if a human genius could explain exactly how he does what he does,
> we could just follow his advice and we'd all be geniuses too. But a
> human genius doesn't understand exactly how his mind works, and neither
> would a smart program.
>
>
>> that's excessively anthropomorphic
>>
>
> That is not a dirty word; I think we should anthropomorphise things if
> they are as intelligent as we are.
>
> John K Clark