[ExI] alpha zero

Dylan Distasio interzone at gmail.com
Thu Dec 7 22:54:52 UTC 2017


I think if you want to call this intelligence, it is completely alien to
human thought.  Of course, I may be wrong in that assertion, but based on
what we know about real neurons and the number of connections back and
forth between different ones (and the increasingly important roles of various
helper cell types that were originally thought to be relatively
unimportant), I don't think deep learning networks are anywhere close to
how human beings learn and think.  I'm not making a judgement on which is
more effective all things being equal, just that they are likely very
different processes.

These networks figure out the importance of and relationships between
inputs by attempting to minimize a mathematical loss function.  It seems
unlikely that this is how wetware works, although I imagine there are
overlapping network/information-theory effects that are above my pay grade
to speak intelligently about.
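To make the loss-function point concrete, here is a minimal illustrative sketch (a toy, not any particular framework): a single weight is fit by gradient descent on a mean-squared-error loss, which is the basic mechanism these networks use to "figure out" relationships between inputs and outputs.

```python
# Toy example of learning by loss minimization: fit y = w * x to data
# generated with a true weight of 3.0, by gradient descent on MSE.
# All names and values here are invented for illustration.

data = [(x, 3.0 * x) for x in range(1, 6)]  # (input, target) pairs

def mse_loss(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def mse_grad(w):
    # d/dw of mean((w*x - y)^2) = mean(2 * x * (w*x - y))
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

w = 0.0            # start knowing nothing
lr = 0.01          # learning rate
for _ in range(500):
    w -= lr * mse_grad(w)   # step against the gradient of the loss

print(round(w, 3))  # prints 3.0
```

The network never "understands" the relationship; it just follows the loss gradient downhill until the error is as small as it can make it.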

Google has recently done some nice work visualizing how neural networks
operate, trying to pull back the curtain a bit, but I think any strong AI
that evolves from these deep learning/reinforcement algos will not be
thinking via the same underlying processes that we use.  This may have some
very large implications, some of which might not be so good for us.

On Thu, Dec 7, 2017 at 5:45 PM, William Flynn Wallace <foozler83 at gmail.com>
wrote:

> Teaching yourself to be the best in the world at Chess, Go and Shogi is
> pretty intelligent.  john
>
>
> Here's my question:  did the program do anything different from what a
> person could do if his mind could work that fast?  I think it's likely that
> we don't know this.  We don't know the qualitative differences between the
> computer and a human chess player.  Or do we?
>
> bill w
>
> On Thu, Dec 7, 2017 at 4:30 PM, John Clark <johnkclark at gmail.com> wrote:
>
>> On Thu, Dec 7, 2017 at 2:02 PM, Dylan Distasio <interzone at gmail.com>
>> wrote:
>>
>>>> If you can teach yourself to be the best in the world at some complex
>>>> task without "thought" then what's the point of "thought"? Who needs it?
>>>>
>>>
>>> It's not needed as I'm defining it (human-level intelligence combined
>>> with consciousness).
>>>
>>
>> I'm far, far more interested in intelligence than consciousness. If the
>> machine isn't conscious, that's its problem, not mine. But what makes you
>> think the machine isn't conscious?
>>
>>
>>> whatever that is, but I think we're relatively good at identifying it
>>
>>
>> I can directly detect consciousness only in myself. I have a hypothesis
>> that others of my species are conscious too, but not all the time: not
>> when they are sleeping or under anesthesia or dead. My hypothesis is that
>> other people are only conscious when they behave intelligently. Teaching
>> yourself to be the best in the world at Chess and Go and Shogi is pretty
>> intelligent.
>>
>>> I will give you a real-world example of why these networks don't think,
>>> and why thought is important. I'm going to shift into image recognition
>>> for the example. It is very easy to game these machine learning systems
>>> with an adversarial attack that shifts pixel information in a way that is
>>> essentially undetectable to the human eye but that will cause the system
>>> to misidentify a turtle as a gun (for example).
>>>
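[A minimal sketch of the kind of adversarial attack described above, in the spirit of the fast-gradient-sign method; the toy linear classifier, its weights, and the turtle/gun framing here are invented for illustration. Each "pixel" is nudged by a tiny bounded amount in the direction that most increases the classifier's error, flipping its decision even though the image barely changes.]

```python
# Toy adversarial attack: a linear "classifier" says "turtle" when its
# score is positive. For a linear score the gradient with respect to each
# pixel is just that pixel's weight, so stepping each pixel by
# -eps * sign(weight) lowers the score as fast as the budget allows.

weights = [0.3, -0.25, 0.2, -0.15]      # hypothetical classifier weights

def score(pixels):
    return sum(w * p for w, p in zip(weights, pixels))

image = [0.5, 0.5, 0.5, 0.5]            # scores positive: "turtle"
assert score(image) > 0

eps = 0.06                              # perturbation budget per pixel

def sign(v):
    return 1.0 if v > 0 else -1.0

adversarial = [p - eps * sign(w) for p, w in zip(image, weights)]

# No pixel moved by more than eps, yet the classification flips.
assert score(adversarial) < 0
```

The perturbation here is 0.06 per pixel on inputs of 0.5, far below what a human would notice in a real image, yet it is enough to push the score across the decision boundary.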
>>
>> Humans sometimes misidentify images too, and unlike people, computers are
>> getting better at image recognition every day.
>>
>>
>>> The point of thought is to be able to generalize and make decisions,
>>> sometimes with very limited information, based on experience and
>>> imagination. This system is capable of nothing like that.
>>>
>>
>> The system had no information to work with at all except the basic rules
>> of Chess, and that is as little information as you can get. And it wasn't
>> a specialized Chess program as Deep Blue was 20 years ago; the same
>> program could generalize enough to teach itself to be the best in the
>> world at Go and Shogi too.
>>
>>
>>> It is still very brittle outside of the goal it has been trained on. It
>>> would need to be retrained for each new goal,
>>
>> No, it trained itself; that's what's so impressive.
>>
>>
>>
>>> Deep learning neural nets appear to bear little resemblance to how
>>> biological nervous systems actually work.
>>
>> As far as Chess, Go and Shogi are concerned, it works far better than
>> biological nervous systems.
>>
>>
>>> I would still argue that this is very far from strong AI.
>>
>>
>> Teaching yourself to become the best in the world in less than a day sure
>> doesn't seem very far from strong AI to me.
>>
>> John K Clark
>>
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>>
>
>

