[extropy-chat] limits of computer feeling

Jef Allbright jef at jefallbright.net
Tue Mar 20 16:52:32 UTC 2007


On 3/20/07, Keith Henson <hkhenson at rogers.com> wrote:
> At 08:55 PM 3/19/2007 -0500, you wrote:
> >At 12:11 PM 3/20/2007 +1100, Stathis wrote:
> >
> > >What would it mean to abrogate evolution? Arguably it has already
> > >happened: we are more concerned with our happiness, which for
> > >evolution is just a means to an end, rather than, for example,
> > >maximising family size.
> >
> >*Not* "maximizing", unless you add situational provisos (we're K, not
> >r). "Optimizing" might be better, but that's dangerously
> >teleological. "Good-enough-izing" is what I'd call it.
>
> In the EEA it was maximizing, but not family size; it was maximizing the
> number of surviving, reproducing children.  To that end hunter-gatherer
> peoples practice infanticide when a just-born younger sib would threaten
> the survival of an older but still-nursing child.

Am I the only one who feels something akin to the screeching of
fingernails on a blackboard when someone blithely ascribes actions to
"goals" that would require an impossibly objective point of view?
Yes?  Then I'll try to keep my comments to a minimum.

"To that end hunter gatherer peoples practice infanticide..." implies
a teleological purpose that clearly isn't.  Suggest: "Therefore/For
that reason (not purpose) hunter-gatherer peoples practice
infanticide..."

It's the same kind of implicit context confusion that leads to
perennial misunderstanding of consciousness, free will, and the like,
of goals/supergoals in AI, and to much of the PHIL101 discussion that
often dominates these lists.

Arghh!  ;-)

- Jef


