[ExI] i got friends in low places

Gadersd gadersd at gmail.com
Fri Aug 26 00:12:07 UTC 2022


The Kalman filter algorithm you described is indeed a learning algorithm. It improves its predictions as more data come along and is therefore a simple form of AI. However, the domain of problems it can solve is quite limited compared to more advanced techniques such as machine learning. I agree with BillK that even machine learning has a limited domain. Machine learning essentially works by fitting a function onto massive amounts of data, manipulating the function’s parameters until the desired behavior is achieved. In a sense, machine learning memorizes the data, but it is able to “read between the lines” to handle new input it has never seen before, provided that input is similar to the data it was trained on. Machine learning algorithms struggle to handle data that is too different from what they have seen before. This lack of generalization ability is one of the greatest weaknesses of machine learning.
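
To make that “fitting a function by manipulating its parameters” idea concrete, here is a minimal sketch in Python; the toy data and the linear model are illustrative choices, assuming nothing beyond numpy:

    import numpy as np

    # Toy data: noisy samples of an underlying linear function.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)

    # Model: f(x) = a*x + b.  "Learning" here is adjusting a and b
    # until the predictions match the data.
    a, b = 0.0, 0.0
    lr = 0.1
    for _ in range(500):
        err = a * x + b - y
        a -= lr * 2 * np.mean(err * x)   # gradient of mean squared error wrt a
        b -= lr * 2 * np.mean(err)       # gradient of mean squared error wrt b

    print(a, b)  # converges near 3.0 and 0.5

Ask the fitted model about an x near the training range and it interpolates well; ask about x = 1000 and the answer is untrustworthy. That is the generalization weakness in miniature.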

If AI that can theoretically solve any problem and has limitless generalization ability is what you are looking for, then something radically different is needed. True generalized AI is actually a solved problem and we have the equations, but very few people seem to be aware of this. I won’t reveal the details because I like my secrets, though the equations are not hard to find if one searches diligently enough.

In any case, I suspect that machine learning is good enough to carry us forward for a while. It can even reach human level if artificial neural network architectures are further improved and generalized (our brains are neural networks, after all).

Eliezer is quite intelligent and knowledgeable, so I am very confused if he implied that processing speed and memory don’t have a significant positive effect on AI. That would be a blatantly false claim that spits in the face of all current AI research. I wonder if there is a mistake in your memory, or perhaps Eliezer’s grasp of AI 20 years ago was much weaker than it is now.

> On Aug 25, 2022, at 5:37 PM, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> ...> On Behalf Of BillK via extropy-chat
> Subject: Re: [ExI] i got friends in low places
> 
> On Thu, 25 Aug 2022 at 20:19, spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> 
>>> ...Humans somehow manage to demote software we understand to
>> non-intelligence without simultaneously demoting ourselves to 
>> non-intelligence.  This alone is sufficient evidence of our actual 
>> intelligence.
>> 
>> spike
> 
> 
>> ...Oh, Nooooo!  I’ve just realised that you have got me chatting about the weather.
> 
>> ...<https://www.bbc.com/future/article/20151214-why-do-brits-talk-about-the-weather-so-much>
> ----------------
> 
> BillK
> 
> 
> 
> Oy vey, my efforts are in vain, at least temporarily.  The second-to-last paragraph below is the relevant point, for the industrious, time-challenged, gainfully employed among us.
> 
> BillK, I am not particularly interested in weather, but it occurred to me that one of my old favorite mathematical tools could be used (perhaps) to make useful prognostications about a topic which has proven notoriously difficult to predict.
> 
> Over the years, weather predictions have become far more accurate, because we have more instruments in orbit and on the surface upon which to base our predictions.  Our ability to multiply enormous matrices has also improved over the years and decades, which is required if one is using Kalman filters.  Into the covariance matrix go all observable metrics.  Then we calculate correlation coefficients for each pair of observables, which by symmetry is (n^2+n)/2 unique coefficients (or (n^2-n)/2 if one wishes to ignore the 1s on the main diagonal; for n = 100 observables that is 4,950 of them), then there are multiplies and determinants and so forth, so it is a tooonnnnn of calculations.
> 
> However... over time, we accumulate more and more data, so the covariance matrix gets better (recall that the covariance matrix starts off life as the n-by-n identity matrix I_n, which is the most ignorant matrix possible).  Then at every time step, the prediction is compared with the actual, correlations are found, and the covariance matrix gets smarter.  And more experienced.  Then smarter, then... repeat until... when?
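
For the curious, that predict/compare/update cycle is easy to sketch. Below is a minimal linear Kalman filter in Python with the covariance starting as the identity matrix; the model matrices F, H, Q, R and the random measurements are placeholder assumptions, not a real weather model:

    import numpy as np

    n = 4                 # number of observables in the state
    F = np.eye(n)         # state-transition model (placeholder: identity)
    H = np.eye(n)         # observation model (placeholder: observe state directly)
    Q = 0.01 * np.eye(n)  # assumed process-noise covariance
    R = 0.10 * np.eye(n)  # assumed measurement-noise covariance

    x = np.zeros(n)       # initial state estimate
    P = np.eye(n)         # covariance starts life as I_n: maximally ignorant

    def kalman_step(x, P, z):
        # Predict: propagate the state and let the uncertainty grow.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: compare the prediction with the actual measurement z.
        innovation = z - H @ x_pred
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ innovation
        P_new = (np.eye(n) - K @ H) @ P_pred   # uncertainty shrinks
        return x_new, P_new

    # One measurement in, one better estimate out, every time step.
    for z in np.random.default_rng(1).normal(size=(100, n)):
        x, P = kalman_step(x, P, z)

Each measurement shrinks the covariance P a little, and that shrinking uncertainty is exactly the "gets smarter" behavior described above.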
> 
> OK then.  BillK and other AI hipsters among us, please do explain why that wouldn't be an example of a learning algorithm, because it looks to me like it learned.  And if so, is not the ability to learn a critical requirement of the definition of intelligence?
> 
> And if so... could we not refute one of Eliezer Yudkowsky's most fundamental items in his AI canon, the one I have always found dubious: that true AI is not dependent upon calculation speed and memory capacity?
> 
> Please, AI hipsters among us, if I have misunderstood Eliezer's contention (which I mighta, or his position may have become outdated since he was expounding the theory 20 yrs ago), do set me straight.  I contended back in the 90s that our increasing capacity to calculate would eventually enable AI.  Eliezer opined that it would not, or at least would not by itself.
> 
> OK then, what if we have an enormous Kalman filter looking at weather (since that is what started the discussion)?  Over time we have certainly acquired increasing capacity to multiply enormous matrices and calculate determinants, as well as to store the enormous piles of data required to calculate correlation coefficients, ja?  So now... we can do math that we just couldn't do 20 yrs ago, because we hadn't the computing horsepower then.
> 
> As more instruments are added to the observation pool, the software can add them into its covariance matrix (the software can already do that, no magic required), and it can throw out observables which it finds to have no significant correlation with anything (BillK's garden butterflies, for instance: the row and column for British butterflies would be near-zero all the way down and all the way across, so throw it out).  Over time, that matrix would accumulate better, higher-confidence measures of ever-more-relevant observable variables, and the result would be better predictions.  It gets smarter and better.
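
That throw-it-out step can also be sketched in a few lines of numpy; the 0.05 threshold and the shape of the observations array are arbitrary illustrative assumptions:

    import numpy as np

    def prune_uncorrelated(data, threshold=0.05):
        # data: (time_steps, n_observables) array, observables in columns.
        corr = np.corrcoef(data, rowvar=False)
        off_diag = np.abs(corr - np.eye(corr.shape[0]))  # zero out the diagonal 1s
        keep = off_diag.max(axis=1) >= threshold  # any meaningful correlation at all?
        return data[:, keep], keep

    # Usage: butterfly-style observables, correlated with nothing, produce a
    # near-zero row and column and get dropped.
    # pruned, kept = prune_uncorrelated(observations)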
> 
> BillK, any AI hipsters or anyone else here, please can you explain to me (in terms even a controls engineer can understand) how that algorithm is not learning and is not AI?  Because it sure looks like AI to me.  It is learning how to do something, in a way not all that terribly different from how we carbon-units learn.
> 
> spike
> 
> 
