[ExI] i got friends in low places

spike at rainier66.com spike at rainier66.com
Thu Aug 25 23:52:37 UTC 2022

From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of Adrian Tymes via extropy-chat
Sent: Thursday, 25 August, 2022 4:12 PM
To: ExI chat list <extropy-chat at lists.extropy.org>
Cc: Adrian Tymes <atymes at gmail.com>
Subject: Re: [ExI] i got friends in low places

On Thu, Aug 25, 2022 at 2:46 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org <mailto:extropy-chat at lists.extropy.org> > wrote:

Do think about my previous post and offer me an explanation for how an enormous Kalman filter-based weatherman (weather…thing?) is not AI.

>…Depends on the definition of AI that you are using.  Is it a true Scotsman? 

Adrian, do allow me to press a little harder.  Again the real message is in the last two paragraphs.

Perhaps you recall (I sure do) back in the olden days when human intelligence was defined by foresight leading to the creation of a tool.  In the 1960s, researchers confidently opined that humans were the only beasts that make tools.  But… what about chimps who select a twig, strip off its leaves, and use it to fish termites out of a stump?

Once the researchers grudgingly admitted that OK, primates make tools too, we recognized that all kinds of beasts make tools.  Watch this little guy and explain why this isn’t an example of tool making:

https://www.reddit.com/r/nextfuckinglevel/comments/wt91kp/this_clever_little_caterpillar_makes_a_leaf_tent/

OK then, certainly there is a distinction in degree: we primates make particle accelerators, the caterpillar made a leaf tent.  Both make tools; humans make cooler ones.

OK then, but we had to move those goal posts pretty dang far, ja?  

What if we recognize that Kalman filters are a kind of AI in the sense that the caterpillar is a tool maker?  All the Kalman filter can do is offer some insight on one very specific task: hurricane prediction, drought prediction, prime numbers and so on.  They aren’t really smart, just good at some things.  We humans are smart at everything, so how secure we feel up here at the top of the intelligence food chain.

But wait.  How many among us know how to take a bunch of sea surface measurements and estimate the coming month’s accumulated cyclonic energy?  I know of software which can do that.  It can’t do anything else, wouldn’t know how to wipe its own butt (if it had one), but it can do the ACE thing way better than any human.  It learned how by measuring, calculating, refining, observing, self-modifying, repeat repeat repeat.
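That measure-refine-repeat loop is easy to sketch.  Here is a toy scalar Kalman filter in Python, with made-up numbers and nothing to do with the actual ACE software, estimating one constant from noisy measurements:

```python
import random

# Toy scalar Kalman filter: estimate a constant from noisy measurements.
# All numbers here are hypothetical, chosen only for illustration.
random.seed(42)
true_value = 10.0      # the quantity being estimated
meas_var = 4.0         # R: variance of each noisy measurement
estimate = 0.0         # initial guess
variance = 100.0       # P: uncertainty of the current estimate

for _ in range(100):
    z = true_value + random.gauss(0.0, meas_var ** 0.5)  # measure
    gain = variance / (variance + meas_var)              # how much to trust z
    estimate += gain * (z - estimate)                    # refine the estimate
    variance *= 1.0 - gain                               # uncertainty shrinks

print(estimate, variance)
```

Each pass through the loop is one measure-and-refine step; the shrinking variance is the filter’s growing confidence in its one narrow answer.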

How many humans know how to estimate the time interval over which the next record prime number will be discovered?  Well, OK, I do, so bad example, but in general these Kalman filter-based algorithms can take a super-specialized question, run 24/7, and never get tired, bored, distracted, sexually aroused, or any of the stuff that keeps us humans from figuring things out.  We can set arbitrarily many processors working on various things, then… we load the resulting covariance matrices from the various tasks and end up with a computer which knows a lot of stuff but isn’t actually learning anything.
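The covariance-loading step can be sketched too.  Assuming two independent specialist estimates of the same state (hypothetical numbers, a simplified stand-in for whatever a real system would do), inverse-covariance weighting fuses them into one estimate that is more certain than either input:

```python
import numpy as np

# Two independent estimates of the same 2-D state, each with its own
# covariance matrix (all values hypothetical).
x1 = np.array([1.0, 2.0]); P1 = np.diag([4.0, 1.0])
x2 = np.array([1.4, 1.8]); P2 = np.diag([1.0, 4.0])

# Inverse-covariance (information) weighting: more certain sources count more.
I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
P_fused = np.linalg.inv(I1 + I2)         # smaller than either P1 or P2
x_fused = P_fused @ (I1 @ x1 + I2 @ x2)  # weighted combined estimate

print(x_fused, np.diag(P_fused))
```

The fused result holds better numbers than either source, but nothing in this step resembles learning; it is bookkeeping over covariances.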

If all that is correct, then the processors running the Kalman filters are intelligent but know only one thing.  The (what would you call it?) computer which receives the accumulated knowledge (the covariance matrices) knows a lot of stuff but isn’t intelligent.

Doesn’t that seem paradoxical?  

spike
