[ExI] alpha zero

William Flynn Wallace foozler83 at gmail.com
Fri Dec 8 14:18:46 UTC 2017


John wrote: After all, if a human genius could explain exactly how he does
what he does, we could just follow his advice and we'd all be geniuses
too. But a human genius doesn't understand exactly how his mind works, and
neither would a smart program.
-------------------
Two things. One - no one can state what process preceded a behavior,
whether physical or mental - it's all unconscious.  I disagree with John in
that a genius is probably using some mental process that is not available to
the rest of us, whatever it may be.  Could anyone be trained to be a Tesla?
Nope.

Two - if we really want to make an AI perform like a person, we will have
to figure out some way to install emotions in it.  Studies show that all the
decisions we make are in part emotional.  Just look at how apparently
intelligent people can believe the incredible things that they do - read
about Newton, for example, and his wild ideas.

I think what we want in an AI is totally rational thinking - no judgment as
to whether a decision is 'liked' or 'fun'.  Emotions are just too
variable, changing by the moment according to who knows what 'logic'.   A
policeman shoots a man running away from him and nearly instantly regrets
it.  We certainly don't want decisions like that from an AI.  (That officer
just got 20 years in prison.)

With incredibly advanced technology, we might be able to follow the path
of a thought or action in a brain, noting which parts were involved and in
what order.  That's the what.  The why, the how, the meaning of it - totally
hidden.  How would we get another brain or a computer to do that?  Impossible.

So I ask - do we really want to model an AI after a human brain?  I say
no.  Human thinking is just far too irrational.

bill w

On Thu, Dec 7, 2017 at 8:10 PM, John Clark <johnkclark at gmail.com> wrote:

> On Thu, Dec 7, 2017 at 7:32 PM, Dave Sill <sparge at gmail.com> wrote:
>
>
>> AlphaGO has no understanding of the games it plays. It can't explain its
>> strategy
>>
>
> A human player couldn't explain exactly why he made the move he did
> rather than the astronomical number of other moves he didn't even consider;
> he would just say that, from experience, he knows when the board is in this
> general sort of position only a small number of moves are even worth
> considering, and he had a good feeling about one of them, so he made it. I
> imagine AlphaGO would say much the same thing. After all, if a human genius
> could explain exactly how he does what he does we could just follow his
> advice and we'd all be geniuses too. But a human genius doesn't understand
> exactly how his mind works and neither would a smart program.
>
>
>>  that's excessively anthropomorphic
>>
>
> That is not a dirty word; I think we should anthropomorphise things if
> they are as intelligent as we are.
>
> John K Clark
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>

