[ExI] The digital nature of brains (was: digital simulations)
    Gordon Swobe
    gts_2000 at yahoo.com
    Sun Jan 31 17:25:45 UTC 2010
    
    
  
--- On Sat, 1/30/10, Eric Messick <eric at m056832107.syzygy.com> wrote:
> In the referenced paper, Searle says that weak AI would be
> a useful tool for understanding intelligence, while strong AI would
> duplicate intelligence.  
We might reasonably attribute intelligence to both strong and weak AI systems. However, for a system to have strong AI, it must also have intentional states: conscious thoughts, beliefs, hopes, desires, and so on. It must have a subjective conscious mind in the sense that you, Eric, have a mind.
> I claim (and I expect you would dispute) that an accurate
> neural level simulation of a healthy human brain would constitute 
> strong AI.
I dispute that, yes, if the simulation consists of software running on hardware.
> Assuming that such a simulation accurately reproduced
> responses of an intelligent human (it passes the Turing Test), 
> I'm going to guess that you'd grant it weak AI status, but not strong 
> AI status.
Right.
 
> Furthermore, you seem to be asserting that no test based on
> its behavior could ever convince you to grant it strong
> status.
Right. Such a system might at first fool me into believing it had strong AI status. However, I would discover the deception if I examined its inner workings and found the architecture of a software/hardware system running formal programs, as such systems exist today. I would then demote the system to weak AI status.
 
> Let's go a step further and place the computer running this
> simulation within the skull of the person we have duplicated,
> replacing their brain.  It's connected with all of the neurons which
> used to feed into the brain.
> Now, what you have is a human body which behaves completely
> normally.
Still weak AI.
> I present you with two humans, one of which has had this
> operation performed, and the other of which hasn't.  Both claim
> to be the one who hasn't, but of course one of them is lying 
> (or perhaps mistaken).
> 
> How could you tell which is which?
Exploratory surgery.
> This is of course a variant of the classic Turing Test, and
> we've already stipulated that this simulation passes the Turing
> Test.
> 
> So, can you tell the difference?
I can't tell the difference from their external behavior, but I can discover it with a bit of surgery plus some philosophical arguments.
> Or do you claim that it will always be impossible to create
> such a simulation in the first place?  No, wait, you've
> already said that systems that pass the Turing Test will be possible, 
> so you're no longer claiming that it is impossible.  Do you want to
> change your mind on that again?
Excuse me? I never argued for the impossibility of such systems, and I have not "changed my mind" about this. I wonder now whether I can count on you for an honest discussion.
What I have claimed several times is that the Turing Test will give false positives for the simulation.
-gts
      
    
    