[extropy-chat] Creating software with qualia

Jef Allbright jef at jefallbright.net
Wed Nov 30 21:44:31 UTC 2005


Hal -

You've provided a powerful example of the kind of detailed, expanded
presentation that is necessary to have a hope of achieving broad
understanding.

I thought I implied everything you said in the three paragraphs I
posted near the beginning of this discussion.  ;-)

One suggestion:  I kept stumbling over your use of "computational" to
describe the more subjective model.  It seems to me that the other
model was just as computational, but within the domain of physics. 
Might it be more useful to refer to them as the "physical" model and
the "intentional" model?

- Jef


On 11/30/05, "Hal Finney" <hal at finney.org> wrote:
> One thing that strikes me about the qualia debate and the philosophical
> literature on the topic is that it is so little informed by computer
> science.  No doubt this is largely because the literature is old
> and computers are new, but at this point it would seem appropriate to
> consider computer models of systems that might be said to possess qualia.
> I will work out one example here.
>
> Let's suppose we are going to make a simple autonomous robot.  It needs
> to be able to navigate through its environment and satisfy its needs
> for food and shelter.  It has sensors which give it information on the
> external world, and a goal-driven architecture to give structure to
> its actions.  We will assume that the robot's world is quite simple and
> doesn't have any other animals or robots in it, other than perhaps some
> very low-level animals.
>
> One of the things the robot needs to do is to make plans and consider
> alternative actions.  For example, it has to decide which of several
> paths to take to get to different grazing grounds.
>
> In order to equip the robot to solve this problem, we will design it
> so that it has a model of the world around it.  This model is based
> on its sensory inputs and its memory, so the model includes objects
> that are not currently being sensed.  One of the things the robot
> can do with this model is to explore hypothetical worlds and actions.
> The model is not locked into conformance with what is being observed,
> but it can be modified (or perhaps copies of the model would be modified)
> to explore the outcome of various possible actions.  Such explorations
> will be key to evaluating different possible plans of actions in order
> to decide which will best satisfy the robot's goals.
>
> This ability to create hypothetical models in order to explore alternative
> plans requires a mechanism to simulate the outcome of actions the robot
> may take.  If the robot imagines dropping a rock, it must fall to the
> ground.  So the robot needs a physics model that will be accurate enough
> to allow it to make useful predictions about the outcomes of its actions.
>
> This physics model doesn't imply Newton's laws, it can be a much simpler
> model, what is sometimes called "folk physics".  It has rules like: rocks
> are hard, leaves are soft, water will drown you.  It knows about gravity
> and the strength of materials, and that plants grow slowly over time.
> It mostly covers inanimate objects, which largely stay where they
> are put, but may have some simple rules for animals, which move about
> unpredictably.
>
> Using this physics model and its internal representation of the
> environment, the robot can explore various alternative paths and decide
> which is best.  Let us suppose that it is choosing between two paths
> to grazing grounds, but it knows that one of them has been blocked by
> a fallen tree.  It can consider taking that path, and eventually coming
> to the fallen tree.  Then it needs to consider whether it can get over,
> or around, or past the tree.
>
> Note that for this planning process to work, another ingredient is
> needed besides the physics model.  The model of the environment must
> include more than the world around the robot.  It must include the robot
> itself.  He must be able to model his own motions and actions through
> the environment.  He has to model himself arriving at the fallen tree
> and then consider what he will do.
>
> Unlike everything else in the environment, the model of the robot is
> not governed by the physics model.  As he extrapolates future events,
> he uses the physics model for everything except himself.  He is not
> represented by the physics model, because he is far too complex.  Instead,
> we must design the robot to use a computational model for his own actions.
> His extrapolations of possible worlds use a physics model for everything
> else, and a computational model for himself.
>
> It's important that the computational model be faithful to the robot's
> actual capabilities.  When he imagines himself coming to that tree, he
> needs to be able to bring his full intelligence to bear in solving the
> problem of getting past the tree.  Otherwise he might refuse to attempt
> a path which had a problem that he could actually have solved easily.
> So his computational model is not a simplified model of his mind.
> Rather, we must architect the robot so that his full intelligence is
> applied within the computational model.
>
> That is not a particularly difficult task from the software engineering
> perspective.  We just have to modularize the robot's intelligence,
> problem-solving and modelling capabilities so that they can be brought
> to bear in their full force against simulated worlds as well as real ones.
> It is not a hard problem.
>
> I am actually glossing over the true hard problem in designing a robot
> that could work like this.  As I have described it, this robot is capable
> of evaluating plans and choosing the one which works best.  What I have
> left off is how he creates plans and chooses the ones that make sense
> to fully model and evaluate in this way.  This is an unsolved problem
> in computer science.  It is why our robots are so bad.
>
> Ironically, the process I have described, of modelling and evaluation,
> is only present in the highest animals, yet is apparently much simpler
> to implement in software than the part we can't do yet.  Only humans,
> and perhaps a few animals to a limited extent, plan ahead in the manner
> I have described for the robot.  There have been many AI projects built
> on planning in this manner, and they generally have failed.  Animals
> don't plan but they do OK because the unsolved problem, of generating
> "plausible" courses of action, is good enough for them.
>
> This gap in our robot's functionality, while of great practical
> importance, is not philosophically important for the point I am going
> to make.  I will focus on its high-level functionality of modelling the
> world and its own actions in that world.
>
> To jump ahead a bit, the fact that two different kinds of models - a
> physical model for the world, and a computational model for the robot -
> are necessary to create models of the robot's actions in the world is
> where I will find the origins of qualia.  Just as we face a paradox
> between a physical world which seems purely mechanistic, and a mental
> world which is lively and aware, the robot also has two inconsistent
> models of the world, which he will be unable to reconcile.  And I would
> also argue that this use of dual models is inherent to robot design.
> If and when we create successful robots with this ability to plan,
> I expect that they will use exactly this kind of dual architecture for
> their modelling.  But I am getting ahead of the story.
>
> Let us now imagine that the robot faces a more challenging environment.
> He is no longer the only intelligent actor.  He lives in a tribe of
> other robots and must interact with them.  We may also fill his world
> with animals of lesser intelligence.
>
> Now, to design a robot that can work in this world, we will need to
> improve it over the previous version.  In particular, the physics model
> is going to be completely ineffective in predicting the actions of other
> robots in the world.  Their behaviors will be as complex and unpredictable
> as the robot's own.  They can't be modelled like rocks or plants.
>
> Instead, what will be necessary is for the robot to be able to apply his
> own computational model to other agents besides himself.  Previously, his
> model of the world was entirely physical except for a sort of "bubble of
> non-physicality" which was himself as he moved through the model.  Now he
> must extend his world to have multiple such bubbles, as each other robot
> entity will be similarly modelled by a non-physics model, instead using a
> computational one.
>
> This is going to be challenging for us, the architects, because
> modelling other robots computationally is harder than modelling the
> robots' own future actions.  Other robots are much more different than
> the future robot is.  They may have different goals, different physical
> characteristics, and be in very different situations.  So the robot's
> computational model will have to be more flexible in order to make
> predictions of other robot's actions.  The problem is made even worse
> by the fact that he would not know a priori just what changes to make in
> order to model another robot.  Not only must he vary his model, he has to
> figure out just how to vary it in order to produce accurate predictions.
> The robot will be engaged in a constant process of study and analysis
> to improve his computational models of other robots in order to predict
> their actions better.
>
> One of the things we will let the robots do is talk.  They can exchange
> information.  This will be very helpful because it lets them update their
> world models based on information that comes from other robots, rather
> than just their own observations.  It will also be a key way that robots
> can attempt to control and manipulate their environment, by talking to
> other robots in the hopes of getting them to behave in a desired way.
>
> For example, if this robot tribe has a leader who chooses where they will
> graze, our robot may hope to influence this leader's choice, because
> perhaps he has a favorite food and he wants them to graze in the area
> where it is abundant.  How can he achieve this goal?  In the usual way,
> he sets up alternative hypothetical models and considers which ones
> will work best.  In these models, he considers various things he might
> say to the leader that could influence his choice of where to graze.
> In order to judge which statements would be most effective, he uses
> his computational model of the leader in order to predict how he will
> respond to various things the robot might say.  If his model of the
> leader is good, he may be successful in finding something to say that
> will influence the leader and achieve the robot's goal.
>
> Clearly, improving computational models of other robots is of high
> importance in such a world.  Likewise, improved physics models will also
> be helpful in terms of finding better ways to influence the physical
> world.  Robots who find improvements in either of these spheres may be
> motivated to share them with others.  A robot who successfully advances
> the tribe's knowledge of the world may well gain influence as "tit for
> tat" relationships of social reciprocity naturally come into existence.
>
> Robots would therefore be constantly on the lookout for observations and
> improvements which they could share, in order to improve their status
> and become more influential (and thereby better achieve their goals).
> Let's suppose, as another example, that a robot discovers that the
> tribe's leader is afraid of another tribe member.  He finds that such a
> computational model does a better job of predicting the leader's actions.
> He could share this with another tribe member, benefitting that other
> robot, and thereby gaining more influence over them.
>
> One of the fundamental features of the robot's world is that he has
> these two kinds of models that he uses to predict actions, the physics
> model and the computational model.  He needs to be able to decide which
> model to use in various circumstances.  For example, a dead or sleeping
> tribe member may be well handled by a physics model.
>
> An interesting case arises for lower animals.  Suppose there are lizards
> in the robot's world.  He notices that lizards like to lie in the sun,
> but run away when a robot comes close.  This could be handled by a
> physics model which just describes these two behaviors as characteristics
> of lizards.  But it could also be handled by a computational model.
> The robot could imagine himself lying in the sun because he likes its
> warmth and it feels good.  He could imagine himself running away because
> he is afraid of the giant-sized robots coming at him.  Either model
> works to some degree.  Should a lizard be handled as a physical system,
> or a computational system?
>
> The robot may choose to express this dilemma to another robot.
> The general practice of offering insights and information in order
> to gain social status will motivate sharing such thoughts.  The robot
> may point out that some systems are modelled physically and some, like
> other robots, are modelled computationally.  When they discuss improved
> theories about the world, they have to use different kinds of language
> to describe their observations and theories in these areas.  But what
> about lizards, he asks.  It seems that a physics model works OK for
> them, although it is a little complex.  But they could also be handled
> with a computational model, although it would be extremely simplified.
> Which is best?  Are lizards physical or computational entities?
>
> I would suggest that this kind of conversation can be realistically mapped
> into language of consciousness and qualia.  The robot is saying, it is
> "like something" to be you or me or some other robot.  There is more
> than physics involved.  But what about a lizard?  Is it "like something"
> to be a lizard?  What is it like to be a lizard?
>
> Given that robots perceive this inconsistency and paradox between their
> internal computational life and the external physical world, that they
> puzzle over where to draw the line between computational and physical
> entities, I see a close mapping to our own puzzles.  We too ponder over
> the seeming inconsistency between a physical world and our mental lives.
> We too wonder how to draw the line, as when Nagel asks, what is it like
> to be a bat.
>
> In short I am saying that these robots are as conscious as we are, and
> have qualia to the extent that we do.  The fact that they are able and
> motivated to discuss philosophical paradoxes involving qualia makes the
> point very clearly and strongly.
>
> I may be glossing over some steps in the progress of the robots' mental
> lives, but the basic paradox is built into the robot right from the
> beginning, when we were forced to use two different kinds of models
> to allow him to do his planning.  Once we gave the robots the power of
> speech and put them into a social environment, it was natural for them
> to discover and discuss this inconsistency in their models of the world.
> An alien overhearing such a conversation would, it seems to me, be as
> justified in ascribing consciousness and qualia to robots as it would
> be in concluding that human beings had the same properties.
>
> As to when the robot achieved his consciousness, I suspect that it also
> goes back to that original model.  Once he had to deal with a world that
> was part physical and part mental, where he was able to make effective
> plans and evaluate them, he already had the differentiation in place
> that we experience between our mental lives and the physical world.
>
> Hal
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo/extropy-chat
>



More information about the extropy-chat mailing list