<html><body><div style="color:#000; background-color:#fff; font-family:times new roman, new york, times, serif;font-size:12pt"><div><span><br></span></div><div><span style="font-size: 12pt;">On Sun, May 5, 2013 at 9:25 AM, Gordon <</span><a ymailto="mailto:gts_2000@yahoo.com" href="mailto:gts_2000@yahoo.com" style="font-size: 12pt;">gts_2000@yahoo.com</a><span style="font-size: 12pt;">> wrote:</span><br></div><div style="font-family: 'times new roman', 'new york', times, serif; font-size: 12pt;"><div style="font-family: 'times new roman', 'new york', times, serif; font-size: 12pt;"><div class="y_msg_container"><br></div><div class="y_msg_container">>> That looks like a contradiction to me. How is the reproduction of<br>>> conscious-like behavior different from the reproduction of the functional<br>>> behavior of a system that we know to be conscious?<br><br>>If we make a computer that behaves intelligently we cannot be
sure<br>>that it is conscious. However, if we make a computer that replicates the<br>>function of a human brain, replacing neurons with artificial neurons<br>>that respond to inputs in the same way and produce similar outputs,<br>> then we can be sure that the resulting hybrid has the same<br>> consciousness as the original all-biological brain. </div><div class="y_msg_container"><br></div><div class="y_msg_container">I think that where functionalism is concerned, it applies at any level. An artificial brain that functions like an organic brain is as good as an assembly of artificial neurons that functions like an assembly of organic neurons. You probably cannot have one without the other.</div><div class="y_msg_container"><br></div><div class="y_msg_container">Gordon <br><br></div> </div> </div> </div></body></html>