[ExI] i got friends in low places

BillK pharos at gmail.com
Thu Aug 25 22:21:02 UTC 2022


On Thu, 25 Aug 2022 at 22:47, spike jones via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
>  Do think about my previous post and offer me an explanation for how an enormous Kalman filter-based weatherman (weather…thing?) is not AI.  Looks to me like it checks off all the boxes, which would require us to dig up and move the goalpost yet again.
>
> We might as well just leave the AI goalpost on the back of the truck just to save time.
>
> spike
> _______________________________________________


It is an optimising problem-solver, but only for the specific problem
it was designed for, much like the machine-learning programs that play
Go or chess.
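
To make that concrete, here is a minimal sketch (in Python) of the kind
of Kalman filter update that sits at the heart of such a weather
predictor. It is not spike's actual system; the one-dimensional state
and the noise values are illustrative assumptions. Notice that the whole
thing is a fixed estimation recipe for one pre-specified model, with no
way to step outside that model.

# Minimal 1-D Kalman filter sketch (illustrative values only).

def kalman_step(x_est, p_est, z, q=0.01, r=0.5):
    """One predict/update cycle for a scalar random-walk state.

    x_est, p_est : prior estimate and its variance
    z            : new noisy measurement (e.g. a temperature reading)
    q, r         : assumed process and measurement noise variances
    """
    # Predict: random-walk model, so the estimate carries over and
    # its uncertainty grows by the process noise.
    x_pred = x_est
    p_pred = p_est + q

    # Update: blend prediction and measurement, weighted by the gain.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # corrected estimate
    p_new = (1 - k) * p_pred           # reduced uncertainty
    return x_new, p_new

# Feed in some noisy "temperature" readings and watch the estimate settle.
x, p = 20.0, 1.0   # assumed initial guess and variance
for z in [20.3, 19.8, 20.9, 20.1, 20.4]:
    x, p = kalman_step(x, p, z)
    print(f"z={z:5.1f}  estimate={x:5.2f}  variance={p:.3f}")
-------------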

Ben Goertzel has just written 'Three Viable Paths to True AGI', which
discusses the requirements for AGI.

<https://bengoertzel.substack.com/p/three-viable-paths-to-true-agi>
Quote:
The deep neural nets and other ML algorithms that are absorbing most
of the AI world’s attention today are, in my own view, fundamentally
unsuited for the creation of human-level AGI.
As I noted in my last blog post, the absolute upper bound that
these deep nets or any vaguely similar methods could be sensibly hoped
to achieve would be what I’d call “closed-ended quasi-AGI” systems
which could imitate a lot of human behaviors — but which, due to the
fundamental lack of ability to innovate, abstract or generalize, would
be incapable of addressing difficult unsolved science and engineering
problems, or of performing the self-modification and self-improvement
needed to serve as seed AIs and launch a Singularity.
-------------

ML programs are very useful algorithms, but they are not AI.


BillK


