[ExI] The symbol grounding problem in strong AI

Stefano Vaj stefano.vaj at gmail.com
Tue Dec 22 14:18:58 UTC 2009


2009/12/19 Gordon Swobe <gts_2000 at yahoo.com>:
> In other words you want to think that if something causes X to do Y then we can assume X actually knows how to do Y.
>
> That idea entails panpsychism;

Or the opposite.

Meaning that "psychism" is simply a high-level description which
becomes useful when some processes are complicated enough, but does
not involve any underlying mystical phenomenon of a different standing
from what ordinarily happens in nature.

You might be interested in glancing at the principle of computational
equivalence in Wolfram's A New Kind of Science...
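(The canonical illustration of that principle is the elementary
cellular automaton Rule 110: an update rule that fits in a single byte
yet is known to be Turing-complete. A minimal sketch, just to make the
point concrete -- the grid width, starting pattern, and wrap-around
boundary are arbitrary choices, not anything from Wolfram's text:

```python
# Rule 110: an elementary cellular automaton. Each cell looks at its
# left neighbour, itself, and its right neighbour (3 bits), and the
# rule number's binary digits give the next state for each of the 8
# possible neighbourhoods. Despite this triviality, Rule 110 is
# Turing-complete.

def step(cells, rule=110):
    """One synchronous update of an elementary CA (wrapping edges)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Running it shows the familiar left-growing triangular pattern; the
point is only that "complicated enough" behaviour needs no machinery
beyond a rule this small.)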

What remains is simply a matter of different performances in the
execution of some software, our brain being obviously rather well
optimised (within the constraints dictated by the need to develop it
only through evolutionary methods) to do what it does.

-- 
Stefano Vaj
