[ExI] Semiotics and Computability
Christopher Luebcke
cluebcke at yahoo.com
Thu Feb 18 01:43:17 UTC 2010
It's also worth considering in what way teaching the man to follow all the rules for manipulating symbols differs from teaching him Chinese. It may be different at the start, but I suspect that, if successful, it amounts to the same thing in the end.
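To make "all the rules for manipulating symbols" concrete, here is a toy
sketch in Python. The rulebook entries are invented placeholders, not
anything from Searle; a real rulebook would have to be vastly larger and
track conversational state:

    # The operator is a pure lookup: he matches shapes, not meanings.
    # Every entry here is an invented placeholder for illustration.
    RULEBOOK = {
        "ni hao": "ni hao",        # a greeting, returned in kind
        "ni chi le ma": "chi le",  # "have you eaten?" -> "I have"
    }

    def operator(symbols):
        # An unrecognized input gets a stock reply ("I don't
        # understand"), exactly as a rule could specify.
        return RULEBOOK.get(symbols, "ting bu dong")

    print(operator("ni hao"))  # "ni hao", with no understanding anywhere

The interesting limit is when the flat table is replaced by rules rich
enough to track context and history; whether internalizing those rules
differs from knowing Chinese is exactly the question.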
----- Original Message ----
> From: Mike Dougherty <msd001 at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Sent: Wed, February 17, 2010 5:13:34 PM
> Subject: Re: [ExI] Semiotics and Computability
>
> On Tue, Feb 16, 2010 at 5:56 AM, Stathis Papaioannou wrote:
> > I have proposed the example of a brain which has enough intelligence
> > to know what the neurons are doing: "neuron no. 15,576,456,757 in the
> > left parietal lobe fires in response to noradrenaline, then breaks
> > down the noradrenaline by means of MAO and COMT", and so on, for every
> > brain event. That would be the equivalent of the man in the CR: there
> > is understanding of the low level events, but no understanding of the
> > high level intelligent behaviour which these events give rise to. Do
> > you see how there might be *two* intelligences here, a high level and
> > a low level one, with neither necessarily being aware of the other?
>
> I had a thought to add the following twist to the CR: the man in the box
> has no knowledge of the symbols he's manipulating on his first day on
> the job. Over time, he notices a correlation between certain values in
> his lookup table(s) and the food slot opening and a tray being slid in...
> I understand the man in the room is a metaphor for rules-processing by
> rote, but what if we take the literal approach that he IS a man -
> even a supremely gifted intellectual who is informed that eventually
> these symbols will reveal the means by which he can escape? This
> scenario segues into the boxing problem of keeping a recursively
> improving AI constrained by 'friendliness' or some other artificially
> added bounds. (I understand that FAI is about being inherently
> friendly and remaining friendly after infinite recursion.)
>
> So assuming the man in the box has an infinite supply of pen and paper
> with which to keep notes on the relationship between input and output (as
> well as his lookup table for I/O transformations) - does it change the
> thought experiment considerably if there is motivation for escaping
> the room by learning how to manipulate the symbols?
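On Stathis's two-level point: it can be made concrete with a toy network
in which each unit follows a purely local threshold rule while the
network as a whole computes XOR. Nothing in the unit-level rule mentions
XOR, and the weights below are hand-picked for illustration:

    def unit(inputs, weights, threshold):
        # Low level: fire iff the weighted sum of inputs reaches the
        # threshold. This rule is all any single unit "knows".
        return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

    def network(a, b):
        # High level: an OR unit and a NAND unit feeding an AND unit
        # compute XOR, though no unit represents that fact.
        h1 = unit([a, b], [1, 1], 1)      # OR
        h2 = unit([a, b], [-1, -1], -1)   # NAND
        return unit([h1, h2], [1, 1], 2)  # AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, network(a, b))  # prints the XOR truth table

Neither level needs to represent the other: the unit rule is blind to
the truth table, and the truth table says nothing about thresholds.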