[ExI] i got friends in low places

spike at rainier66.com spike at rainier66.com
Thu Aug 25 21:37:25 UTC 2022


...> On Behalf Of BillK via extropy-chat
Subject: Re: [ExI] i got friends in low places

On Thu, 25 Aug 2022 at 20:19, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
> >...Humans somehow manage to demote software we understand to
> non-intelligence without simultaneously demoting ourselves to 
> non-intelligence.  This alone is sufficient evidence of our actual 
> intelligence.
>
> spike
> _______________________________________________


>...Oh, Nooooo!  I’ve just realised that you have got me chatting about the weather.

>...<https://www.bbc.com/future/article/20151214-why-do-brits-talk-about-the-weather-so-much>
----------------

BillK

_______________________________________________


Oy vey, my efforts are in vain, at least temporarily.  The second-to-last paragraph below is the relevant point, for the industrious, time-challenged, gainfully employed among us.

BillK, I am not particularly interested in weather, but it occurred to me that one of my old favorite mathematical tools could (perhaps) be used to make useful prognostications about a topic which has proven notoriously difficult to predict.

Over the years, weather predictions have become far more accurate, because we have more instruments in orbit and on the surface upon which to base our predictions.  Our ability to multiply enormous matrices has improved over the years and decades, which is required if one is using Kalman filters.  Into the covariance matrix go all observable metrics.  Then we calculate correlation coefficients for each pair of observables, which is roughly (n^2)/2 coefficients (exactly (n^2-n)/2 if one ignores the 1s on the main diagonal), then there are multiplies and determinants and so forth, so it is a tooonnnnn of calculations.
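To make the counting concrete, here is a small sketch (with made-up data standing in for real weather observables) of going from a covariance matrix to correlation coefficients and tallying how many unique ones there are:

```python
import numpy as np

# Hypothetical data: rows = time steps, columns = n observables.
rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 5))     # 5 made-up weather observables

cov = np.cov(data, rowvar=False)      # n x n covariance matrix
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)       # correlation matrix, 1s on the diagonal

n = cov.shape[0]
unique_off_diag = (n * n - n) // 2    # distinct pairs, ignoring the diagonal 1s
```

For n = 5 that works out to 10 distinct pair-wise coefficients, and the count grows roughly as n²/2, which is why the calculation load balloons as instruments are added.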

However... over time, we accumulate more and more data, so the covariance matrix gets better (recall that the covariance matrix starts off life as the nxn identity matrix I sub n, which is the most ignorant matrix possible).  Then at every time step, the prediction is compared with the actual observation, correlations are found, and the covariance matrix gets smarter.  And more experienced.  Then smarter, then... repeat until... when?

OK then.  BillK and other AI hipsters among us, do please explain why that wouldn't be an example of a learning algorithm, because it looks to me like it learned.  And if so, is not the ability to learn a critical requirement of any definition of intelligence?

And if so... could we not refute one of Eliezer Yudkowsky's most fundamental items in his AI canon, the one I have always found dubious: that true AI is not dependent upon calculation speed and memory capacity?

Please, AI hipsters among us, if I have misunderstood Eliezer's contention (which I mighta (or it has become outdated since he was expounding the theory 20 yrs ago)) do set me straight.  I contended back in the 90s that our increasing capacity to calculate would eventually enable AI.  Eliezer opined that it would not, or would not by itself.

OK then, what if we have an enormous Kalman filter looking at weather (since that is what started the discussion)?  Over time we have certainly acquired increasing capacity to multiply enormous matrices and calculate determinants, as well as to store the enormous piles of data required to calculate correlation coefficients, ja?  So now... we can do math that we just couldn't do 20 yrs ago, because we hadn't the computing horsepower then.
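A back-of-envelope sketch of why horsepower matters here: a naive dense multiply of two n-by-n matrices costs about 2·n³ floating-point operations (n² output entries, each an n-term dot product of n multiplies and n adds), so every doubling of the observable count makes each covariance update roughly eight times more expensive:

```python
def matmul_flops(n: int) -> int:
    # Naive dense n x n matrix multiply: n*n output entries,
    # each costing n multiplies + n adds, hence ~2*n**3 flops.
    return 2 * n ** 3

ratio = matmul_flops(2000) / matmul_flops(1000)   # doubling n -> 8x the work
```

That cubic blowup is exactly the kind of wall that 1990s hardware hit and today's hardware walks through.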

As more instruments are added to the observation pool, the software can add them into its covariance matrix (the software can already do that, no magic required), and it can throw out observables which it finds to have no significant correlation with anything (BillK's garden butterflies, for instance: the row and column for British butterflies would be near-zeros all the way down and all the way across, so throw it out).  Over time, that matrix would accumulate better, higher-confidence measures of ever-more-relevant observable variables, and the result would be better predictions.  It gets smarter and better.
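The butterfly-pruning step is easy to sketch too.  Here the three columns (two correlated "temperature sensors" and a pure-noise "butterfly count") and the 0.2 threshold are all made-up illustrations; the idea is just to drop any observable whose strongest correlation with everything else is indistinguishable from zero:

```python
import numpy as np

# Hypothetical observables: two correlated temperature sensors
# plus one column of pure noise (the garden butterflies).
rng = np.random.default_rng(2)
t = rng.normal(size=1000)                    # underlying true temperature
obs = np.column_stack([
    t + rng.normal(0.0, 0.1, 1000),          # temperature sensor A
    t + rng.normal(0.0, 0.2, 1000),          # temperature sensor B
    rng.normal(size=1000),                   # butterfly count: pure noise
])

corr = np.corrcoef(obs, rowvar=False)
threshold = 0.2                              # assumed significance cutoff
off_diag = corr - np.eye(corr.shape[0])
keep = np.max(np.abs(off_diag), axis=0) > threshold   # any strong correlation?
pruned = corr[np.ix_(keep, keep)]            # drop the butterfly row and column
```

The butterfly column's correlations hover around ±0.03 (sampling noise at 1000 samples), so it falls below the threshold and gets thrown out, while the two sensors keep each other in.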

BillK, any AI hipsters, or anyone else here: please can you explain to me (in terms even a controls engineer can understand) how that algorithm is not learning and is not AI?  Because it sure looks like AI to me.  It is learning how to do something, in a way not all that terribly different from how we carbon-units learn.

spike

