[ExI] Will the Singularity take an unexpected path?
rpwl at lightlink.com
Wed Sep 12 14:10:32 UTC 2007
> An AI professor in the UK has made a presentation that sounded unusual
> to me. He sees narrow AI proliferating wildly, until we are
> surrounded by thousands of invisible helpers, all doing their own tiny
> tasks.
> Quote -
> He believes that we are now seeing the emergence of Assistive
> Intelligence which can be characterized as a different kind of AI.
> 'These results can be seen everywhere,' he says. 'Rather than being
> conscious brains in a box, as Hollywood would have it, they are in
> fact small pieces of adaptive and flexible software that help drive
> our cars, diagnose disease and provide opponents in computer games.'
> And he sees this as a trend that will continue. 'There will be
> micro-intelligences all around us – systems that are very good and
> adaptive at particular tasks, and we will be immersed in environments
> stuffed full of helpful devices.'
> This is a form of human augmentation that I haven't heard before.
> More like augmenting the environment around humans, so that the matter
> around us is gradually becoming intelligent. Quite a thought.
There is nothing different in this: it is the same old Narrow AI
message: let's *brand* it with the word "intelligence" even if we
cannot really build an intelligence yet.
If you take some human knowledge and freeze it into a machine, what
you've got is ... a piece of technology. For example, a governor (one of
those devices on old steam engines, with four metal balls attached to a
shaft, flying outward to regulate the shaft's speed) is a human idea
about controlling spin that has been frozen into a piece of hardware: a
person could move the balls out and in to damp the spin of the shaft,
but the mechanism is designed to do that control automatically.
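The governor point can be made concrete in a few lines of code. This is
a minimal sketch (the function name, gain, and constants are all
hypothetical, chosen for illustration): a fixed proportional-control
rule, "frozen" into software, that pushes shaft speed toward a set
point without any understanding of what a shaft or a speed is.

```python
def governor_step(speed: float, target: float, gain: float = 0.5) -> float:
    """One tick of a proportional speed governor.

    The "frozen knowledge" is the single rule below: correct a fixed
    fraction of the error on every tick. The code embodies a human
    insight about control; it does not reason about spin at all.
    """
    error = target - speed
    return speed + gain * error


# Shaft starts too fast; the frozen rule damps it toward 100.0.
speed = 120.0
for _ in range(10):
    speed = governor_step(speed, target=100.0)
# After ten ticks the speed has converged close to the set point.
```

The point of the sketch is that the mechanism works exactly as well as
the insight frozen into it, and no better: change the task and the rule
does not adapt, because the understanding lived in the designer, not in
the device.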
So it is with all the Narrow AI mechanisms: they lack the fundamental
feature of true intelligence (the ability to build representations of
any aspect of the world, sight unseen, then use those representations in
the pursuit of sophisticated, adaptable goals), so instead they just
freeze a piece of knowledge that has been acquired by a real
intelligence, together with perhaps a smattering of adaptability (so
they can refine the human-built frozen knowledge just a little).
This announcement from Nigel Shadbolt is just more puffery from the
Narrow AI crowd, pretending that *lots* of Narrow AI will somehow be the
same as real AI (artificial general intelligence, AGI).
Problem is, you can go around freezing bits of human intelligence until
the cows come home, but why would a million pieces of frozen
knowledge add up to an adaptable, general intelligence that creates
its own knowledge? Not gonna happen, sorry.
They were saying the same thing back in the Knowledge Based Systems
period of AI, and FWIW Shadbolt came to prominence in that era.