[ExI] Semiotics and Computability

Mike Dougherty msd001 at gmail.com
Thu Feb 18 01:13:34 UTC 2010


On Tue, Feb 16, 2010 at 5:56 AM, Stathis Papaioannou <stathisp at gmail.com> wrote:
> I have proposed the example of a brain which has enough intelligence
> to know what the neurons are doing: "neuron no. 15,576,456,757 in the
> left parietal lobe fires in response to noradrenaline, then breaks
> down the noradrenaline by means of MAO and COMT", and so on, for every
> brain event. That would be the equivalent of the man in the CR: there
> is understanding of the low level events, but no understanding of the
> high level intelligent behaviour which these events give rise to. Do
> you see how there might be *two* intelligences here, a high level and
> a low level one, with neither necessarily being aware of the other?

I had a thought to add the following twist to the CR: the man in the
box has no knowledge of the symbols he's manipulating on his first
day on the job.  Over time, he notices a correlation between certain
values in his lookup table(s) and the food slot opening so a tray can
be slid in...  I understand the man in the room is a metaphor for
rules-processing by rote, but what if we take the literal approach
that he IS a man - even a supremely gifted intellectual who is
informed that these symbols will eventually reveal the means by which
he can escape?  This scenario segues into the boxing problem of
keeping a recursively self-improving AI constrained by 'friendliness'
or some other artificially imposed bound.  (I understand that FAI is
about being inherently friendly and remaining friendly after infinite
recursion.)

So, assuming the man in the box has an infinite supply of pen and
paper with which to keep notes on the relationship between input and
output (as well as his lookup table for I/O transformations) - does
it change the thought experiment considerably if there is motivation
to escape the room by learning how to manipulate the symbols?
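
To make that concrete, here is a toy sketch - purely illustrative,
nothing in Searle's setup specifies this, and the RULES table, the
notes dict, and the food_slot_opened flag are all invented for the
example - of rote lookup running alongside the man's private tally:

# Toy model: rote symbol lookup plus the man's pen-and-paper notes.
RULES = {"squiggle": "squoggle", "blob": "splotch"}   # his lookup table
notes = {}                                            # tally of inputs that preceded food

def room(symbol, food_slot_opened):
    reply = RULES.get(symbol, "?")    # rule-following by rote
    if food_slot_opened:              # note which inputs co-occur with the tray
        notes[symbol] = notes.get(symbol, 0) + 1
    return reply

The notes give him statistics about which symbols "pay off" without
ever giving him their meanings, which is the gap the escape-motivated
version of the experiment asks about.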


