[ExI] puzzle - animal consciousness

Mike Dougherty msd001 at gmail.com
Tue May 20 13:04:31 UTC 2014


On Tue, May 20, 2014 at 5:33 AM, Anders Sandberg <anders at aleph.se> wrote:

> The point of the dog example is to make it clearer what reasoning is, and
> whether it has anything to do with an internal language, intuitions or
> deduction. It seems that real dogs are merely interesting examples, while
> the deep question is whether there is an innate intuition or deduction that
> if (A or B or C) is true, ((not A) and (not B)) imply C. Even if dogs did
> behave like in the experiment it might be due to a different mechanism,
> like running mental models of possible worlds, updating their likelihoods
> based on accumulated evidence, and acting when the likelihood becomes
> concentrated enough in one possibility: no deduction needed (at this point
> a philosopher will complain that my Bayesian Beagle is actually equivalent
> to his Deductive Doberman, since Bayes rule implicitly contains the above
> deduction; much barking will ensue).
>

So if the Bayesian Beagle and the Deductive Doberman behave exactly the same way,
are they interchangeable with respect to this question?  Can they both be
replaced by a robot with a program that implements either algorithm?  If
the robot isn't "thinking" then the dogs aren't thinking either.  Oh right,
the robot externalized the thinking to the programmer.  Where does it end?
We can keep moving the intelligence wherever is convenient if we're
willing to contrive some method to convey that intelligence all the way
through to a choice of path.

Thinking about that program, I imagine (sketching in Python):

if scent_on("A"): follow_path("A")
elif scent_on("B"): follow_path("B")
else: follow_path("C")

The above program works, but I would never implement something like that.
Instead:

if scent_on("A"): follow_path("A")
elif scent_on("B"): follow_path("B")
elif scent_on("C"): follow_path("C")
else: make_new_plan()  # no scent anywhere: figure out a new plan

This second program is more robust: it checks for scent on C instead of
assuming it, and it has a fallback for the case where no path has scent at all.
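For contrast, here is a minimal sketch (my own Python; the scent likelihoods
and decision threshold are invented for illustration) of what the Bayesian
Beagle's mechanism might look like: keep a probability over the three possible
worlds, update it with Bayes' rule on every sniff, and act as soon as the
belief concentrates enough on one path.  There is no deduction step anywhere,
yet it ends up choosing the same path as the deductive programs above.

# A toy Bayesian Beagle. The likelihoods (0.9 / 0.1) and the
# threshold are made up; only the update rule itself is Bayes'.

def update(prior, likelihoods):
    # Bayes' rule: posterior is proportional to prior * likelihood.
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

def choose_path(sniffs, threshold=0.8):
    # Start indifferent between the three possible worlds:
    # "the rabbit went down A", "... down B", "... down C".
    belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
    for path, scent_found in sniffs:
        # P(this observation | rabbit went down h): strong scent on
        # the rabbit's actual path, weak false signals elsewhere.
        likelihoods = {
            h: (0.9 if scent_found else 0.1) if h == path
            else (0.1 if scent_found else 0.9)
            for h in belief
        }
        belief = update(belief, likelihoods)
        # Act once the probability mass concentrates on one path.
        best = max(belief, key=belief.get)
        if belief[best] >= threshold:
            return best
    return max(belief, key=belief.get)  # evidence ran out: best guess

print(choose_path([("A", True)]))                 # -> A
print(choose_path([("A", False), ("B", False)]))  # -> C

From the outside, these runs are indistinguishable from the if/else version.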

The interchangeability of observed behavior in this puzzle reminds me of
Searle's Chinese Room.  I wonder if these puzzles are all just primers for
ultimately asking whether any of the participants in the conversation are
thinking, to establish a baseline for the question 'what IS thinking?'