[ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
rpwl at lightlink.com
Sat Feb 5 16:23:30 UTC 2011
Damien Broderick wrote:
> On 2/4/2011 2:29 PM, Richard Loosemore wrote:
>> A human-like cognitive system running on a computer has nothing whatever
>> to do with darwinian evolution. It is not a "darwinian machine" because
>> that phrase "darwinian machine" is semantically empty. There is no such
>> property "darwinian" that can be used here, except the trivial property
>> "Darwinian" == "System that resembles, in structure, another system
>> that was originally designed by a darwinian process"
>> That definition is trivial because nothing follows from it.
> I take it you're not impressed by the quite clearly darwinian models
> sketched by, say, Calvin or Edelman? I find their ideas quite
> provocative and what follows from them is a novel explanation of
> cognition and inventiveness. It might be wrong, and maybe by now has
> been proved to be wrong, but I haven't seen those refutations. What were they?
Well, unfortunately, there are several meanings of "darwinian" in play here.
In the Edelman sense, as I understand it, "darwinian" actually means
something close to "complex adaptive system", because he is talking
about (mainly) an explanation for morphogenesis in the brain. Now, I
have no quarrel with that aspect of Edelman's work ... but where I do
have difficulty is in seeing an explanation for high-level functionality,
like cognition, in that approach. I think that Edelman (like many
neuroscientists) starts handwaving when he wants to make the
connection upward to cognitive-level goings-on.
I confess I have not gone very deeply into Edelman: I drilled down
far enough to get the feeling that sudden, unsupported leaps were being
made into psychology, and then I stopped. I would have to go back and
reread him to give you a more detailed answer.
But even then, the overall tenor of his approach is still "How did this
machine come to get built?" rather than "How does this machine actually
work, now that it is built?"
The one exception would be -- of course -- anything that has to do with
the acquisition and development of concepts. Now, if he can show that
concept learning involves some highly complex, self-modifying, recursive
machinery (i.e. something like a darwinian process), then I would say
YAY! and thoroughly agree... this is very much along the same lines that
I pursue. However, notice that there are still reasons to shy away
from the label "darwinian", because it is not clear that this is anything
more than a complex system. A darwinian system is certainly a complex
system, but it is also more specific than that: it involves sex
and babies. Neurons don't have sex or babies.
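To make that distinction concrete: what "darwinian" adds beyond "complex
adaptive system" is a specific loop of replication with variation plus
selection. Here is a minimal, purely illustrative sketch of such a loop
in Python (the function, its parameters, and the toy fitness measure are
my own inventions for illustration, not anything drawn from Edelman or
Calvin):

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=0):
    """Minimal darwinian loop: selection plus replication with variation.

    A merely 'complex adaptive' system can lack exactly these two
    ingredients; this sketch is what the stricter label commits you to.
    """
    rng = random.Random(seed)
    # Random initial population of bit-string "genomes".
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives to reproduce.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Replication with variation: each child is a mutated parent copy.
        children = []
        for parent in parents:
            child = parent[:]
            i = rng.randrange(genome_len)
            child[i] ^= 1  # single point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits in the genome ("ones-max").
best = evolve(fitness=sum)
```

Nothing in this sketch says anything about how a motivational mechanism
behaves, which is exactly the point being argued above: the label names
a construction process, not a functional account.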
So, to be fair, I will admit that the distinction between "How did this
machine come to get built?" and "How does this machine actually work,
now that it is built?" becomes rather less clear when we are talking
about concept learning (because concepts play a role that fits somewhere
between structure and content).
But -- and this is critical -- it is a long, long stretch to go from the
existence of complex adaptive processes in the concept learning
mechanism, to the idea that the system is "darwinian" in any sense that
allows us to make concrete statements about the system's functioning.
Which brings me back to my comment to Stefano. Even if Edelman and
others can extend the use of the term "darwinian" so it can be made to
describe the processes of morphogenesis and concept development, I still
say that the term has no force, no impact, on issues such as the
behavior of a putative "motivational mechanism". I am still left with
an "And that is saying ... what, exactly?" feeling.