[ExI] Strong AI Hypothesis: logically flawed?
Anders Sandberg
anders at aleph.se
Sat Sep 27 21:58:02 UTC 2014
Dan <danust2012 at gmail.com>, 27/9/2014 1:53 AM:

Again, recall the argument I offered for their core argument:
1. Intelligence supervenes on a physical process in a biological brain.
2. That physical process can be made to happen in something other than a biological brain.
3. When you have done this you have made an artificial intelligence.
Assuming there's no flaw in this (again, rather loose) argument's "logic," it seems the easiest way to attack it is to show either that intelligence doesn't supervene on physical processes (even allowing Roderick's "quibble"), or that it does but can't happen in anything other than a biological brain (or a biological entity).
No to the first branch. Even if you find intelligence supervening on non-physical systems, that doesn't tell you whether physical systems can sustain it (maybe humans have immortal souls doing their thinking, but zorgons do all their thinking in their silicon brains, which are easily copied into computers). You need something stronger, like an argument that intelligence cannot be sustained by physical systems - which is your second branch.
It is interesting to consider what kind of input or evidence would be good for convincing us that strong AI is impossible. Clever philosophical arguments rarely seem to do it. Decades of failure are obviously some evidence, but how much depends on whether one thinks research has actually been looking at the relevant things or has just gone down totally wrong approaches (I guess Minsky thinks this is true). A demonstration that quantum mechanics or ectoplasm is necessary for human intelligence would just shift the goal to using something like those media for AI. My guess is that real anti-strong-AI evidence would be conceptually strong insights into cognitive science or philosophy of mind that actually seem to tell us something relevant about the nature of intelligence and imply some problem with strong AI. An analogy would be the discovery of the chemical elements showing that transmutation was impossible (and, with further refinement in nuclear theory, demonstrating that it *was* possible but pointless).
Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University