[ExI] The second step towards immortality

Ben bbenzai at yahoo.com
Sat Jan 11 12:01:34 UTC 2014


Anders Sandberg <anders at aleph.se> wrote:

 >On 09/01/2014 21:18, Ben wrote:
 >> To me, this sounds analogous to "one day, someone will get so good at
 >> simulating music using digital code that it could convince some people
 >> that it really /is/ music (even if the writers know that it is just a
 >> pile of clever code)!!.  What a jape!"
 >>
 >> Something about ducks sounding and looking like ducks comes to mind.
 >>
 >> (IOW, an emulation of an information process is an information process)


 >Yes, but can you tell it apart from a simulation?
 >
 >I can construct a function f(x,y) that produces x+y for a lot of values
 >you care to test, but actually *isn't* x+y [*]. Without looking under
 >the hood they cannot be told apart. Same thing for any information
 >process.
 >
 >If what you care about is the stuff coming out of the black box, then
 >what matters is whether there are any relevant differences between the
 >output and what it should be. But sometimes we care about how the stuff
 >is made.


 >Even most Strong AI proponents [**] think that a Turing-test-succeeding
 >stimulus-response lookup table is neither conscious nor intelligent,
 >despite being (by definition) able to convince the interlocutor
 >indefinitely.
 >

 >[**] I admit, I am not entirely sure anymore. I thought it was obvious,
 >but David Chalmers made me doubt whether causal relatedness is actually
 >necessary for consciousness or not. If it isn't, then lookup tables
 >might be conscious after a fashion. Or the sum total consciousness
 >expressed by all possible interactions with the table already exists or
 >existed when it was calculated.


"Without looking under the hood they cannot be told apart."
This is exactly my point.  Looking under the hood is irrelevant for 
anyone except those who want to build one.

I don't think we can assume that a stimulus-response lookup table, for 
instance, is capable of producing behaviour that simulates consciousness 
in the real world (as a thought-experiment, fine, but the question of 
whether such a theoretical lookup table would be conscious doesn't 
matter, as it couldn't exist[*]).  There must be many types of process 
that can't do it.  But for the ones that can, all we can do in the end 
is apply the same test that we apply to other people, and assume that 
any system that produces behaviour fully consistent with conscious 
experience is in fact having conscious experiences, at least for the 
time being.  If it starts to produce inconsistent behaviour, we can 
then assume that it's not what we thought, and is either faulty, an 
automaton, or perhaps an alien mind that we can't make sense of (see 
'faulty' ;>).

Re. the Turing Test, I don't think we should be relying on it as a good 
indicator of conscious thought.  We already know that some chatbots can 
pass it, under limited circumstances, and teenagers can fail it.  I 
don't buy into the notion that any current chatbot is conscious, or 
(tempting though it may be) that teenagers are not.

I don't understand your 'nontrivial' example.  It does add the two 
numbers (int z = x + y;), then it goes into a loop which goes round a 
few times until w is equal to 1, then it returns z.  So all we get is 
a delay in returning the answer.  If the answer is big enough, it may 
take a very long time to return it (disregarding things like the 
maximum size of an integer on the system executing the function), but 
it always does the calculation.  What have I missed?
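
For concreteness, the shape I'm reading into it is roughly this (my 
own sketch, assuming a Collatz-style loop; it isn't your actual [*] 
code):

    /* Sketch of the kind of function described above: it genuinely
       computes x+y, then spins in a loop until w reaches 1, and only
       then hands back the sum.  The Collatz-style loop body is my
       assumption; I'm also assuming positive inputs, since for z <= 0
       this particular loop would never reach 1. */
    int f(int x, int y)
    {
        int z = x + y;          /* the actual addition */
        int w = z;
        while (w != 1) {        /* goes round until w is equal to 1 */
            if (w % 2 == 0)
                w = w / 2;
            else
                w = 3 * w + 1;
        }
        return z;               /* the answer, after the delay */
    }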


Spike wrote:

 >Yeeeeeaaano.
 >
 >If we wanted to take the time, we could create a big lookup table in excel
 >that would sound a lot like a human trying to convince another human
 >it is a human.

Nope, we couldn't.  Not in practice.  Not one that would work.  It's not 
just a matter of time, but the sheer number of entries needed. You'd get 
combinatorial explosion sooner than you could say it, and all the 
spreadsheets on all the computers in all the planets in the universe 
wouldn't be anywhere near enough.  Not even a teensy fraction of near 
enough.
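
A back-of-the-envelope sketch of the problem (every figure below is an 
assumption picked purely for illustration):

    #include <math.h>
    #include <stdio.h>

    /* Rough size of a lookup table keyed on whole conversation
       histories.  Vocabulary size and exchange length are assumed
       figures, not measurements of anything. */
    int main(void)
    {
        double vocab = 1e4;    /* assume a 10,000-word vocabulary     */
        double words = 100.0;  /* assume a 100-word exchange          */
        double rows = words * log10(vocab); /* log10 of table entries */

        printf("entries needed ~ 10^%.0f\n", rows);      /* 10^400   */
        printf("atoms in the observable universe ~ 10^80\n");
        return 0;
    }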

I grant that, in a very limited domain of knowledge, for a very limited 
amount of time, you might get away with it (this is what chatbots do), 
but it wouldn't fool anyone for long.


[*]  This is an interesting topic in itself:  When does a theoretical 
possibility become invalid because it's not actually a real 
possibility?  Can you conclude that horses are capable of moving stars 
because a theoretical /big enough/ horse would be able to?  A 
consciousness-simulating lookup table runs into practical problems 
simply because it would have to be bigger than the universe.  Apart from 
there not being enough particles in existence to build it, there are 
theoretical objections to it working anyway, like the speed of light.  
So even theoretically, it's an invalid thought-experiment.  Or is it?


