[ExI] alpha zero
William Flynn Wallace
foozler83 at gmail.com
Thu Dec 7 22:45:10 UTC 2017
Teaching yourself to be the best in the world at Chess and Go and Shogi is
pretty intelligent.  john
Here's my question: did the program do anything different from what a
person could do if his mind could work that fast? I think it's likely that
we don't know this. We don't know the qualitative differences between the
computer and a human chess player. Or do we?
bill w
On Thu, Dec 7, 2017 at 4:30 PM, John Clark <johnkclark at gmail.com> wrote:
> On Thu, Dec 7, 2017 at 2:02 PM, Dylan Distasio <interzone at gmail.com>
> wrote:
>>> If you can teach yourself to be the best in the world at some complex
>>> task without "thought" then what's the point of "thought"? Who needs it?
>> It's not needed as I'm defining it (human-level intelligence combined
>> with consciousness).
>
> I'm far, far more interested in intelligence than consciousness. If the
> machine isn't conscious, that's its problem, not mine. But what makes you
> think the machine isn't conscious?
>
>
>> whatever that is, but I think we're relatively good at identifying it
>
>
> I can directly detect consciousness only in myself. I have a hypothesis
> that others of my species are conscious too, but not all the time: not when
> they are sleeping or under anesthesia or dead. My hypothesis is that other
> people are only conscious when they behave intelligently. Teaching yourself
> to be the best in the world at Chess and Go and Shogi is pretty
> intelligent.
>
>> I will give you a real-world example of why these networks don't think,
>> and why thought is important. I'm going to shift to image recognition for
>> the example. It is very easy to game these machine-learning systems with
>> an adversarial attack that shifts pixel information in a way essentially
>> undetectable to the human eye but that will cause the system to
>> misidentify a turtle as a gun (for example).
>>
>
> Humans sometimes misidentify images too, and unlike people, computers are
> getting better at image recognition every day.
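The pixel-level attack Dylan describes can be sketched on a toy model. This is a hedged illustration under loud assumptions: a hypothetical linear classifier, not any real vision system, with an FGSM-style sign-gradient step just large enough to cross the decision boundary while each pixel changes by only a tiny amount.

```python
# Sketch of an FGSM-style adversarial perturbation on a hypothetical toy
# linear classifier (illustrative only; real attacks target deep networks).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # toy classifier weights
x = rng.normal(size=100)   # an "image" flattened to a pixel vector

def predict(v):
    """Class 1 if the linear score is positive, else class 0."""
    return 1 if w @ v > 0 else 0

# The gradient of the score with respect to the input is just w, so stepping
# every pixel slightly against sign(w) pushes the score toward the decision
# boundary. Pick eps just large enough to cross it.
score = w @ x
eps = 1.1 * abs(score) / np.sum(np.abs(w))
x_adv = x - np.sign(score) * eps * np.sign(w)

print(predict(x), predict(x_adv))      # the predicted label flips
print(np.max(np.abs(x - x_adv)))       # yet no pixel moved by more than eps
```

The point of the sketch is the asymmetry Dylan is gesturing at: a perturbation bounded by a small eps per pixel can flip the output, because the step is aligned with the model's gradient rather than with anything a human eye tracks.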
>
>
>> The point of thought is to be able to generalize and make decisions with
>> sometimes very limited information based on experience and imagination.
>> This system is capable of nothing like that.
>>
>
> The system had no information to work with at all except the basic rules
> of Chess, and that is as little information as you can get. And it wasn't
> a specialized Chess program as Deep Blue was 20 years ago; the same
> program could generalize enough to teach itself to be the best in the
> world at Go and Shogi too.
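The learn-from-nothing-but-the-rules idea can be sketched with a much humbler tabula-rasa self-play loop. This is a toy under loud assumptions: a Nim-style counting game and tabular Monte-Carlo updates, nothing like AlphaZero's network or tree search, just the same start-from-the-rules, improve-by-playing-yourself shape.

```python
# Toy tabula-rasa self-play: the agent knows only the rules of a Nim-style
# game and improves by playing against itself (illustrative sketch only).
import random

random.seed(0)
N = 21        # start with 21 stones; take 1-3 per turn; taking the last wins
Q = {}        # Q[(stones, move)] -> estimated value for the player to move

def moves(n):
    return [m for m in (1, 2, 3) if m <= n]

def pick(n, eps):
    """Epsilon-greedy move choice from the current Q table."""
    if random.random() < eps:
        return random.choice(moves(n))
    return max(moves(n), key=lambda m: Q.get((n, m), 0.0))

for _ in range(20000):                 # self-play episodes
    n, history = N, []
    while n > 0:
        m = pick(n, eps=0.2)
        history.append((n, m))
        n -= m
    # The player who took the last stone won; rewards alternate sign as we
    # walk back through the alternating movers.
    reward = 1.0
    for s, m in reversed(history):
        old = Q.get((s, m), 0.0)
        Q[(s, m)] = old + 0.1 * (reward - old)
        reward = -reward

# With 3 stones left, taking all 3 wins immediately; self-play discovers that.
best_from_3 = max(moves(3), key=lambda m: Q.get((3, m), 0.0))
print(best_from_3)   # 3
```

Both sides of the self-play use the same improving Q table, so the "opponent" gets harder as the agent does, which is the feature John is pointing at: no hand-fed game knowledge beyond the rules themselves.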
>
>
>> It is still very brittle outside of the goal it has been trained on. It
>> would need to be retrained for each new goal.
>
> No, it trained itself; that's what's so impressive.
>
>
>
>> Deep learning neural nets appear to bear little resemblance to how
>> biological nervous systems actually work.
>
> As far as Chess, Go, and Shogi are concerned, it works far better than
> biological nervous systems.
>
>
>> I would still argue that this is very far from strong AI.
>
>
> Teaching yourself to become the best in the world in less than a day sure
> doesn't seem very far from strong AI to me.
>
> John K Clark
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat