[ExI] AI & education

Adrian Tymes wingcat at pacbell.net
Fri Apr 2 15:50:51 UTC 2010


--- On Thu, 4/1/10, spike <spike66 at att.net> wrote:
> We have worked our way into a situation where all the classroom
> material must be broken down into testable skills, so it
> encourages memorization of algorithms, while functionally
> discouraging actual thought.

A thought: for decades (possibly over a century by now),
people have redefined the boundaries of what counts as
"artificial intelligence", primarily based on what computers
were able to do.

Logic?  Computers can do that better than humans, so that is
not really artificial intelligence.

Math?  Computers can do that better than humans, so that is
not really artificial intelligence.

Chess?  Computers can do that better than humans, so that is
not really artificial intelligence.

Art?  Maybe, but computers have been making progress in
understanding that.  Take animation, for example: creating
the frames in between keyframes is obviously mechanical
and left to computers at most professional studios these
days, but it is believed to take true intelligence to make
those keyframes.

And so on.  Now, what is "actual thought"?

Logic?  That can be reduced to tests and memorization of
algorithms, so that is not actual thought.

Math?  That can be reduced to tests and memorization of
algorithms, so that is not actual thought.

Chess?  That can be reduced to tests and memorization of
algorithms, so that is not actual thought.

Art?  Maybe, but testing has been making progress in
quantifying that.  Take animation, for example: creating
the frames in between keyframes is obviously mechanical
and left to computers at most professional studios these
days, but it is believed to take actual thought to make
those keyframes.
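
(To make concrete just how mechanical that in-between step is,
here is a minimal sketch, assuming two hand-made keyframes are
given as matching lists of point positions.  All names and
numbers are invented for illustration; real studio tools are far
more sophisticated, but the core of naive in-betweening is just
interpolation.)

def inbetween(key_a, key_b, t):
    """Linearly interpolate point positions between two keyframes.

    key_a, key_b: lists of (x, y) tuples describing the same points.
    t: fraction of the way from key_a to key_b, between 0.0 and 1.0.
    """
    return [
        (ax + (bx - ax) * t, ay + (by - ay) * t)
        for (ax, ay), (bx, by) in zip(key_a, key_b)
    ]

# Three evenly spaced in-between frames from two keyframes.
key_a = [(0.0, 0.0), (1.0, 2.0)]
key_b = [(4.0, 0.0), (5.0, 6.0)]
frames = [inbetween(key_a, key_b, t) for t in (0.25, 0.5, 0.75)]
print(frames)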

That point aside: without those tests, how do we know how
effective a teacher (or school) is?  A need arose to measure
this, after decades of blatantly obvious but anecdotal examples
of ineffective teaching had come to light.  Without measurement,
there is no fair way to judge that one teacher is effective
while another is not.  With measurement, the worst of this
inequity has started to be identified, and corrective measures
taken.  (Granted, this measurement needs to take into account
factors like the ability and willingness of the students to
learn, which are in turn affected by things like their home
environment.  These factors can be identified and controlled
for, and they do not undercut the basic point.)  Yes, there is
an element of "for the children", a phrase often used to
camouflage moral panics that do not actually help children; but
this movement seems to be directly benefiting them.
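
(As one illustration of what "controlled for" could mean here: a
minimal sketch, assuming, hypothetically, that we know each
student's prior score and a home-environment index in addition
to which teacher they were assigned.  The teacher indicator and
the background covariates enter the same regression, so the
estimated teacher effect is net of those factors.  All data and
names below are invented.)

import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical student-background covariates.
prior_score = rng.normal(50, 10, n)
home_index = rng.normal(0, 1, n)
teacher_b = rng.integers(0, 2, n)      # 1 if assigned to teacher B

# Simulated outcome: mostly background, plus a small teacher effect.
score = (0.8 * prior_score + 3.0 * home_index
         + 2.0 * teacher_b + rng.normal(0, 5, n))

# Regress the outcome on the teacher indicator AND the covariates.
X = np.column_stack([np.ones(n), teacher_b, prior_score, home_index])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print("estimated teacher-B effect, net of background: %.2f" % coef[1])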

> I really started to understand this five years ago when I was
> taking a spacecraft feedback controls class at the U.  The final
> was a take-home, all materials open, all resources open, one week
> to do it.  My wife and I each used over 50 hours to do that exam,
> taking a couple days off of work.  Four problems: choose any one
> and solve it.  My test writeup was over 80 pages, Shelly's was a
> little longer.  It occurred to me then that this is actually the
> right way to do engineering education and test competence, for it
> encouraged creative problem solving, rather than memorization of
> shortcuts and misleading oversimplifications such as the
> traditional 95%ile criterion for statistical significance.

And there's the rub: how do we quantify creativity?  There are
ways to do even that; divergent-thinking tests, for instance,
score responses on measures like fluency and originality.
Perhaps those ways need to be better defined and more widely
applied.  But then creativity, or at least the portion of it
that is thereby routinely tested, would no longer be actual
thought, would it?



