[ExI] inference paradigm in ai

spike spike66 at att.net
Mon Sep 4 13:48:01 UTC 2017

AI hipsters among us: perhaps you can offer guidance on an insight, point
me to where I can learn more, etc.

 

We have some common software paradigms the computer science people study
in college.  The usual list was already known back when I was in college;
I remembered it using the mnemonic IF-POODLES (a rough code sketch of a
few of these follows the list):

 

Imperative

Functional

Procedural

Object Oriented

Declarative

Logic Executable

Symbolic
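
For the non-CS folks, here is a rough sketch of how a few of those styles
differ in practice.  Python stands in for all of them only because it can
imitate several styles in one file; the task (sum of the squares of the
even numbers) is a made-up example, nothing deep:

# The same toy task written in a few of the styles above.
nums = [1, 2, 3, 4, 5, 6]

# Imperative / procedural: spell out each step, mutate a running total.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n
print(total)  # 56

# Functional: compose expressions, no mutation.
print(sum(map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums))))  # 56

# Declarative-ish: say what you want, let the comprehension do the looping.
print(sum(n * n for n in nums if n % 2 == 0))  # 56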

 

Then the engineering students were taught the Imperative paradigm in the
form of FORTRAN 77 and sent off to work a hopeless 9 to 5 until dead.  The
rest of it was the domain of those ethereal ivory-tower computer scientists
who were introduced to the other six common paradigms and sent off to work a
hopeful 9 to 5 until they retired wealthy at age 40.

 

Well, sometimes it worked that way, but in any case, for some time I have
thought that our fondest AI notions were still somehow missing a paradigm,
that none of these usual suspects were adequate for making the kinds of
inferences we think of as intelligence.  For instance, consider the
following short passage:

 

"Last call, drink up!" rose above the din. 

 

"Do you have any particular specialty?"  

 

"Indeed madam: gynecology, sub-specialty maximally invasive procedures."

 

OK there you have three short lines of dialogue which allow the reader to
infer a lot of information: a boozy-schmoozy singles bar, a quarter to
2 am, a guy who may or may not be a doctor putting the moves on some
open-minded maiden who appears receptive.

 

Is that approximately what you read from it?  Note that the passage
doesn't say a word (directly) about the time, the profession, or the
setting, but we can figure out what is going on there, even if we have no
direct firsthand experience (I have never been in a boozy-schmoozy bar
(never mind still awake at closing time (but I read about it))).

 

Thought experiment: take a collection of 40-year-olds, give them all IQ
tests, and remove everyone who scored over 85.  Now take a bunch of
10-year-olds, IQ test them, and remove all who score below about 115.  OK
two groups: dumb adults and smart kids; give both groups math and reading
tests.  The smart kids (as a group) generally outperform the dumb adults.
Agree?

 

OK now show both groups the Last Call passage.  Nearly all the dumb
adults know what is going on there, and almost none of the book-smart
kids do.

 

OK so what is the difference between the groups that allows the book-dumb
adults to easily see what the book-smart kids do not?  The grown-ups can
make inferences, based on experience.  The kids lack the necessary
experience.

 

So if we are to get AI to ever achieve the I, we need to somehow give it
experience.  It cannot effectively infer anything without experience,
regardless of how advanced its computing skill.  We need an eighth
programming paradigm I will call Inference-enabled.
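
I don't pretend to know what that paradigm would actually look like, but
just to make the idea concrete, here is a deliberately crude toy sketch in
Python.  Everything in it (the names, the contents of the EXPERIENCE
table) is made up for illustration; the point is only that the inference
is exactly as good as the stored experience, and none of the paradigms
above gives us a principled way to fill that table.

# Toy "inference-enabled" sketch: a table of experiences mapping cues to
# associations, and a lookup that tries to read between the lines of the
# Last Call passage.  All contents are invented for illustration.

EXPERIENCE = {
    "last call": ["bar at closing time", "around 2 am", "patrons are well lubricated"],
    "drink up": ["bar setting", "people have been drinking for a while"],
    "any particular specialty": ["small talk between strangers", "one of them claims a profession"],
    "gynecology": ["claims to be a physician", "in this setting, innuendo"],
}

def infer_context(passage):
    """Return whatever the experience table lets us infer from the passage."""
    text = passage.lower()
    inferences = []
    for cue, associations in EXPERIENCE.items():
        if cue in text:
            inferences.extend(associations)
    return inferences

passage = ('"Last call, drink up!" rose above the din. '
           '"Do you have any particular specialty?" '
           '"Indeed madam: gynecology, sub-specialty maximally invasive procedures."')

for inference in infer_context(passage):
    print(inference)

# With an empty EXPERIENCE table (the book-smart kid, or any of today's
# programs), nothing comes out at all -- which is the whole point.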

 

In order to enable inference, we need to give the AI experience, and the
only experience we have to offer is our own, which means the AI would need
to learn all the human foibles we have so long struggled and mostly failed
to overcome, such as... well, do I really need to offer a list of human
foibles?

 

This line of reasoning leads me to the discouraging conclusion that even
if we manage to create inference-enabled software, we would need to train
it by having it read our books (how else?  What books?), and if so, it
would become as corrupt as we are.  The AI we can create would then be
like us, only more so.

 

Damn.  

 

spike

 
