[ExI] alpha zero

William Flynn Wallace foozler83 at gmail.com
Fri Dec 8 01:39:28 UTC 2017


Stathis wrote:  It may also be that trying to achieve true "understanding"
is a red herring - behaving *as if* it understands is sufficient, and at
bottom is what humans do.

---------
This is my nit to pick with philosophers.  They attack some concept, and if
someone can come up with any counterexample, no matter how trivial or how
absurd the hypothetical situation behind it, then they agree that they have
to try again.

So they tend never to conclude anything.  But we have to go on living, so
we take 'good enough for who/what it's for' data, apply it, and deal with
the good and the bad effects.

In a sense, error variance, a strong force, will always be with us.
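
A quick illustration in Python (the numbers are invented): treat every
observation as a true score plus random error.  Averaging many observations
tames the error in the estimated mean, but the spread across individual
measurements never goes away.

    import random
    import statistics

    # Toy model: observed score = true score + measurement error.
    true_value = 100.0
    observations = [true_value + random.gauss(0, 5) for _ in range(10000)]

    print(statistics.mean(observations))   # ~100: averaging helps
    print(statistics.stdev(observations))  # ~5: the error variance remains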

bill w

On Thu, Dec 7, 2017 at 7:12 PM, Stathis Papaioannou <stathisp at gmail.com>
wrote:

>
>
> On 8 December 2017 at 11:57, William Flynn Wallace <foozler83 at gmail.com>
> wrote:
>
>> I suspect that if we were to look at what philosophers say about it, they
>> would tell us that they really did not know for sure what the word 'know'
>> means.  Only the toad knows (Alice in Wonderland).
>>
>> We may never know how the unconscious works.  It is not meant (whatever
>> that word means) to be conscious.  Duh.
>>
>> "It's as if they are doing this when they think."  This will be as close
>> as we can get.  A model.
>>
>> So it may be that studying people's minds so that we can program
>> computers to copy the way they work is not the best strategy to advance
>> computer thinking.
>>
>
> It may also be that trying to achieve true "understanding" is a red
> herring - behaving *as if* it understands is sufficient, and at bottom is
> what humans do.
>
>
>> I suspect the Singularity is pretty far off.
>>
>> bill w
>>
>> On Thu, Dec 7, 2017 at 6:32 PM, Dave Sill <sparge at gmail.com> wrote:
>>
>>> On Thu, Dec 7, 2017 at 5:30 PM, John Clark <johnkclark at gmail.com> wrote:
>>>>
>>>> On Thu, Dec 7, 2017 at 2:02 PM, Dylan Distasio <interzone at gmail.com>
>>>> wrote:
>>>>
>>>>> Deep learning neural nets appear to bear little resemblance to how
>>>>> biological nervous systems actually work.
>>>>
>>>> As far as Chess, Go and Shogi are concerned, it works far better than
>>>> biological nervous systems.
>>>>
>>>
>>> Yes, in simple, well-defined domains. Computers are incredibly fast at
>>> math, but that doesn't mean they're math geniuses. I can't do billions of
>>> floating point operations per second, but I can explain to a child, in
>>> terms it will understand, what "addition" means. A CPU has no
>>> understanding of what it does. Likewise, AlphaGo has no understanding of
>>> the games it plays. It can't explain its strategy--it has none; it just
>>> "knows" what usually works--and even that is excessively anthropomorphic:
>>> it knows nothing, it just does what it was programmed to do.
>>>
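
To make Dave's point concrete, here is a toy sketch in Python.  It is
nothing like AlphaGo's real architecture--the states, moves, and weights
below are invented for illustration--but it shows the sense in which a
learned policy is just a bag of numbers mapping positions to move
preferences, with no strategy anywhere in it to inspect or explain.

    import random

    # A toy "policy": move weights as if learned from self-play
    # (values invented for this example).  Nothing in this structure
    # represents a plan or a reason--only numbers reflecting what
    # usually worked during training.
    policy_weights = {
        ("empty_board", "center"): 0.9,
        ("empty_board", "corner"): 0.7,
        ("empty_board", "edge"): 0.2,
    }

    def choose_move(state, legal_moves):
        # Score each legal move by its learned weight and sample in
        # proportion; unseen moves get a small default weight.
        scores = [policy_weights.get((state, m), 0.1) for m in legal_moves]
        return random.choices(legal_moves, weights=scores)[0]

    # The program "prefers" the center, but ask it *why* and there is
    # nothing to extract except the weights themselves.
    print(choose_move("empty_board", ["center", "corner", "edge"]))

Ask this sketch for its strategy and the only honest answer is the weights
dict; scale that up to millions of parameters and you have Dave's point.
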
>>> It's a clever and useful technique, but it's a far cry from a general
>>> intelligence that can interact directly with a world where the rules
>>> aren't all known, communicate with other intelligent entities, evaluate
>>> novel situations, and solve complex problems.
>>>
>>> -Dave
>>>
>>
>
>
> --
> Stathis Papaioannou
>