[ExI] Neural networks score higher than humans in reading and comprehension test

William Flynn Wallace foozler83 at gmail.com
Wed Jan 17 01:15:12 UTC 2018


I am eager for AI systems to get smarter than humans.  Then we can see
which political party they join, see if they switch parties as they
self-train and get smarter, etc.  That would be a kick: we could set some
of them to get all their information from CNN, another from Reason.com,
another from Fox, make bets on their scores from I-Side-With, that kinda
thing.  That would be a hoot.



spike


The problem is that who is smart should be determined by who makes the most
correct predictions.  That's how we judge a theory, right?


Being economics-blind, I cannot judge their predictions, but I do know that
every year they seem to appear in Congress trying to explain why their
predictions from last year didn't work out.  Then they give a new set of
predictions.  I have often said that the worst mistake is to keep repeating
your mistakes.


But, you say, who else would they listen to?  Somebody with a better track
record.  That, of course, will change every year, or quarter, or month...


The Law of Unintended Consequences fouls everything.  There are so many
variables when you are dealing with worldwide things like economics.  Even a
40-acre farmer has too many to make successful predictions every year.


So, for people, or for AIs, please define 'smart'.  Compared to the 'real
world', chess is simple.  Give an AI a problem like this: what will happen
to the economy as a result of the new tax laws?  Ask a panel of economists
the same question.  Now that's a game I'd like to see played.  I predict
that both will lose, giving poor predictions.


Did you ever notice that 'now then...' is completely absurd?


bill w

On Tue, Jan 16, 2018 at 4:50 PM, spike <spike66 at att.net> wrote:

>
>
>
>
> *On Behalf Of *John Clark
>
> *Subject:* Re: [ExI] Neural networks score higher than humans in reading
> and comprehension test
>
>
>
> On Tue, Jan 16, 2018 at 2:55 PM, Dylan Distasio <interzone at gmail.com>
> wrote:
>
>
>
> > …in theory, adversarial attacks can be used to game machine-learning
> systems in a black-hat hacking sort of way.  You could trick the system
> into doing exactly what you want it to just by feeding it bad data for
> ill-gotten gain.
>
>
>
> >…It's certainly a good thing that, unlike computers, it's impossible to
> fool human beings, otherwise we could end up with a president who was not
> only crazy but also stupid....  John K Clark
>
>
>
>
>
> I am eager for AI systems to get smarter than humans.  Then we can see
> which political party they join, see if they switch parties as they
> self-train and get smarter, etc.  That would be a kick: we could set some
> of them to get all their information from CNN, another from Reason.com,
> another from Fox, make bets on their scores from I-Side-With, that kinda
> thing.  That would be a hoot.
>
>
>
> spike
>
>
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>
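Dylan's point about adversarial attacks is easy to demonstrate on a toy model.
For a linear classifier, the fast gradient sign method (FGSM) nudges every
input feature by a small amount in the direction that hurts the score most,
which can flip the prediction.  A minimal sketch; the weights and input
below are invented for illustration, and real attacks target trained networks:

```python
import numpy as np

# Toy linear classifier: score = w . x + b, predict class 1 if score > 0.
# These weights are made up for illustration, not from any trained model.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A "clean" input the model classifies as class 1 (score = 0.75).
x = np.array([0.6, 0.2, 0.3])

# FGSM: for a linear model, the gradient of the score with respect to the
# input is just w, so stepping each feature by -epsilon * sign(w) lowers
# the score as fast as possible under a fixed per-feature budget.
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)

# Each feature moved by at most 0.4, yet the prediction flips to class 0.
print(predict(x), predict(x_adv))  # prints "1 0"
```

The same idea scales to deep networks, where the gradient is computed by
backpropagation instead of being a constant weight vector.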