[extropy-chat] Jeff Hawkins and AI

Neil Halelamien neuronexmachina at gmail.com
Sat Mar 26 00:35:11 UTC 2005


On Fri, 25 Mar 2005 05:17:11 -0700, Keith Henson
<hkhenson at rogers.com> wrote:
> So they have done 1/400 of the cortical area.  I sure wonder what the
> "cycle" metric is.  Assuming they have come close to human rates, and that
> they were not using a top-of-the-line supercomputer, we are only a few
> years from human-level AI based on a natural model.
> 
> Anyone have a copy of the paper they are about to present?

(variant of a post I made elsewhere)

It took some searching, but I managed to find the research page for
Dileep George, one of the co-founders and the chief engineer. His page
has links to source code for his visual recognition system, although I
haven't had a chance to evaluate it yet:

http://www.stanford.edu/~dil/invariance/

Last weekend he organized a workshop on invariant representations in
vision at Cosyne, one of the major computational neuroscience
conferences. The list of abstracts is a pretty good read:

http://www.stanford.edu/~dil/cosyne05/index.html

George and Hawkins are also publishing a paper in the proceedings of
an upcoming neural network conference. Here's the relevant info:

http://www.stanford.edu/~dil/invariance/Download/GeorgeHawkinsIJCNN05.pdf

Title: A Hierarchical Bayesian Model of Invariant Pattern Recognition
in the Visual Cortex

Dileep George and Jeff Hawkins, Stanford University and Redwood
Neuroscience Institute
Accepted for publication in the proceedings of the International
Joint Conference on Neural Networks (IJCNN '05).

Abstract: We describe a hierarchical model of invariant visual pattern
recognition in the visual cortex. In this model, the knowledge of how
patterns change when objects move is learned and encapsulated in terms
of high probability sequences at each level of the hierarchy.
Configuration of object parts is captured by the patterns of
coincident high probability sequences. This knowledge is then encoded
in a highly efficient Bayesian Network structure. The learning
algorithm uses a temporal stability criterion to discover object
concepts and movement patterns. We show that the architecture and
algorithms are biologically plausible. The large scale architecture of
the system matches the large scale organization of the cortex and the
micro-circuits derived from the local computations match the
anatomical data on cortical circuits. The system exhibits invariance
across a wide variety of transformations and is robust in the presence
of noise. Moreover, the model also offers alternative explanations for
various known cortical phenomena.
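The temporal-stability idea in the abstract can be illustrated with a toy
sketch (my own simplified illustration, not code from the paper): patterns
that reliably occur in adjacent time steps are assumed to arise from the
same moving object, so they get merged into one "object" group. The
function name, threshold parameter, and union-find merging are all my
choices for the sketch:

```python
from collections import defaultdict

def temporal_groups(stream, threshold=2):
    """Group symbols that frequently occur in adjacent time steps.

    A toy version of a temporal-stability criterion: symbols that
    reliably follow one another in the input stream are merged into
    the same group ("object concept").
    """
    # Count how often each unordered pair of symbols appears in
    # adjacent time steps.
    counts = defaultdict(int)
    for a, b in zip(stream, stream[1:]):
        if a != b:
            counts[frozenset((a, b))] += 1

    # Union-find: merge symbols whose adjacency count clears the
    # threshold, so transitively linked symbols share one group.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)

    for pair, n in counts.items():
        if n >= threshold:
            a, b = tuple(pair)
            union(a, b)

    # Collect the final groups, sorted for deterministic output.
    groups = defaultdict(set)
    for x in set(stream):
        groups[find(x)].add(x)
    return sorted(map(sorted, groups.values()))
```

For example, a stream that alternates a/b for a while and then c/d would
yield the groups [a, b] and [c, d]: each pair is "temporally stable" even
though nothing marks them as related in any single frame. The real model
does this hierarchically over sequences rather than single symbols, and
encodes the result in a Bayesian network.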
