[ExI] inference paradigm in ai

William Flynn Wallace foozler83 at gmail.com
Mon Sep 4 16:29:27 UTC 2017


spike wrote: the AI would need to learn all the human foibles we have so
long struggled and mostly failed to overcome, such as… well, do I really
need to offer a list of human foibles?

-------

bill w: Yes, please. Just categories will be OK, such as cognitive errors
and emotions clouding judgment.

---------------

spike wrote: The AI we can create would then be like us only more so.

----------

bill w: I think it would be 'less so' if you have removed the foibles.


Inference from context: if there are AI experts reading this, can I get a
simple answer? Consider word senses: for some words there are dozens, or
even hundreds, of definitions (like 'set'). How does an AI tell one sense
from another, when in some contexts two synonyms are interchangeable and in
others they are not? I reckon there's not a simple answer here.


bill w
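
One classical handle on the synonym question is word-sense disambiguation:
pick the sense whose dictionary gloss shares the most words with the
surrounding context. A minimal sketch using NLTK's classic Lesk
implementation (assuming nltk is installed and its WordNet data has been
downloaded; classic Lesk is crude and often wrong, but it shows the idea):

    # Disambiguate the many senses of "set" with classic Lesk (NLTK).
    # Setup assumed: pip install nltk, then nltk.download('wordnet')
    # and nltk.download('omw-1.4') in a Python session.
    from nltk.wsd import lesk

    for sentence in ("the sun set behind the hills",
                     "he bought a chess set with ivory pieces"):
        context = sentence.split()
        sense = lesk(context, "set")  # gloss with most context overlap
        print(sentence, "->", sense.name(), ":", sense.definition())

Modern systems swap the gloss overlap for contextual embeddings, but the
principle is the same: the surrounding words vote on the sense, and two
synonyms are interchangeable roughly when their typical contexts overlap.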





On Mon, Sep 4, 2017 at 8:48 AM, spike <spike66 at att.net> wrote:

> AI hipsters among us, perhaps you can offer guidance on an insight:
> point me to where I can learn more, etc.
>
> We have some common software paradigms that the computer science people
> study in college. The usual list, already known back when I was in
> college, is the one I remembered by means of the mnemonic IF-POODLES:
>
> Imperative
> Functional
> Procedural
> Object Oriented
> Declarative
> Logic Executable
> Symbolic
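>
> To make the contrast concrete, here is the same toy computation in the
> first two of those styles (a sketch, with Python standing in for the
> classroom languages):
>
>     # Imperative: explicit state, mutated step by step in a loop.
>     total = 0
>     for n in range(10):
>         total += n * n
>
>     # Functional: no mutation, just a composed expression.
>     total_fn = sum(n * n for n in range(10))
>
>     assert total == total_fn == 285  # sum of squares of 0..9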
>
> Then the engineering students were taught the Imperative paradigm in the
> form of FORTRAN 77 and sent off to work a hopeless 9 to 5 until dead.  The
> rest of it was the domain of those ethereal ivory-tower computer scientists
> who were introduced to the other six common paradigms and sent off to work
> a hopeful 9 to 5 until they retired wealthy at age 40.
>
> Well, sometimes it worked that way, but in any case, for some time I have
> thought that our fondest AI notions were still somehow missing a paradigm,
> that none of these usual suspects were adequate for making the kinds of
> inferences we think of as intelligence.  For instance, consider the
> following short passage:
>
> “Last call, drink up!” rose above the din.
>
> “Do you have any particular specialty?”
>
> “Indeed madam: gynecology, sub-specialty maximally invasive procedures.”
>
> OK there you have three short sentences which allow the reader to infer a
> lot of information: boozy schmoozy singles bar, quarter to 2am, guy who may
> or may not be a doctor putting the moves on some open-minded maiden who
> appears receptive.
>
> Is that approximately what you read from it?  Note that the passage
> doesn’t say a word (directly) with regard to the time, the profession, the
> setting, but we can figure out what is going on there, even if we have no
> direct first hand experience (I have never been in a boozy-schmoozy bar
> (never mind still awake at closing time (but I read about it.)))
>
> Thought experiment: take a collection of 40-year-olds, give them all IQ
> tests, and remove everyone who scored over 85. Now take a bunch of
> 10-year-olds, IQ-test them, and remove all who score below about 115. OK,
> two groups, dumb adults and smart kids: give them math and reading tests.
> The smart kids (as a group) generally outperform the dumb adults. Agree?
>
> OK, now show both groups the Last Call passage. Nearly all the dumb
> adults know what is going on there, and almost none of the book-smart
> kids do.
>
> OK, so what is the difference between the groups that allows the
> book-dumb adults to easily see what the book-smart kids do not? The
> grown-ups can make inferences based on experience. The kids lack the
> necessary experience.
>
> So if we are to get AI to ever achieve the I, we need to somehow give
> them experience. They cannot effectively infer anything without
> experience, regardless of how advanced their computing skills. We need
> an eighth programming paradigm, which I will call Inference-enabled.
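>
> What might inference-enabled code look like? A minimal sketch, with an
> invented "experience" table and setting labels (toy word counting, not a
> real proposal): the program stores which words co-occurred with which
> settings in past experience, then lets a new passage vote.
>
>     # Toy "inference from experience": score candidate settings by how
>     # often the passage's words were seen with each setting before.
>     # The experience data below is invented, for illustration only.
>     import re
>     from collections import Counter, defaultdict
>
>     def words(text):
>         return re.findall(r"[a-z]+", text.lower())
>
>     # Invented prior "experience": (passage, setting) pairs.
>     experience = [
>         ("last call, drink up! the bartender shouted over the din",
>          "bar at closing time"),
>         ("the gynecology resident scrubbed in for the procedure",
>          "hospital"),
>     ]
>
>     cues = defaultdict(Counter)          # setting -> cue-word counts
>     for passage, setting in experience:
>         cues[setting].update(words(passage))
>
>     def infer_setting(passage):
>         # One point per remembered co-occurrence of a passage word.
>         scores = {s: sum(c[w] for w in words(passage))
>                   for s, c in cues.items()}
>         return max(scores, key=scores.get)
>
>     print(infer_setting("Last call, drink up! rose above the din"))
>     # -> bar at closing time
>
> The setting is never named in the query; it falls out of remembered
> associations, which is the point: no experience table, no inference.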
>
> In order to enable inference, we need to give the AI experience, and the
> only experience we have to offer is our own, which means the AI would need
> to learn all the human foibles we have so long struggled and mostly failed
> to overcome, such as… well, do I really need to offer a list of human
> foibles?
>
> This line of reasoning leads me to the discouraging conclusion that even
> if we manage to create inference-enabled software, we would need to train
> it by having it read our books (how else?  What books?) and if so, it would
> become as corrupt as we are.  The AI we can create would then be like us
> only more so.
>
> Damn.
>
> spike