[ExI] Strong AI Hypothesis: logically flawed?

Dan dan_ust at yahoo.com
Fri Sep 26 23:44:49 UTC 2014


This is one of my recent salvos in a discussion Nick's book -- or a review of it -- touched off on the Yahoo group LeftLibertarian2. I thought it might be of interest. The other person referred to is Jeff Olson, an active member of that group who's also published a novel dealing with AI.

By the way, do any of you have any comments to make on what I call the "core argument" for strong AI? (Scroll down; it's near the end.)


Dan 

On Friday, September 26, 2014 3:08 PM, "Dan dan_ust at yahoo.com [LeftLibertarian2]" <LeftLibertarian2 at yahoogroups.com> wrote:

[Snip] The actual way to demonstrate that something is not logically impossible is to show that nothing on a priori grounds makes it impossible. This doesn't mean, by the way, that one might not be mistaken -- perhaps one has overlooked some aspect of the thing that would make it impossible. But then the mistake wouldn't lie in one's overall view of possibility as such; it would lie in the specific case being ill defined, not in poor logic.

To put it as pithily as possible: not being able to rule something out as impossible is the definition of its being possible. (It doesn't, however, tell us anything about its likelihood. Again, in common parlance, people often use "possible" to mean something that has a vanishingly small likelihood -- as if there were a scale of possible < likely/probable < actual/necessary. But possible as we're discussing it here really just means something is not impossible, and only tells us that the likelihood or probability (which are [sometimes] distinguished) is not zero -- just as impossible only tells us it is zero.)
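To render that gloss in symbols (my notation, and only a restatement of the claim above, not a theorem -- writing Pr for whatever likelihood assignment one has in mind):

  \[
  \neg\Diamond p \;\Rightarrow\; \Pr(p) = 0,
  \qquad
  \Diamond p \;\Rightarrow\; \Pr(p) > 0
  \]

(Strictly, a probabilist would quibble that possible events can still get probability zero in continuous settings; the point here is only that possibility talk fixes the zero/nonzero boundary, not a magnitude.)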

Now you could argue that if someone holds there's no demonstration of logical impossibility, this implies logical necessity. That would be an error in modal logic as I understand it. That something's logically necessary, of course, implies it's not logically impossible, but the reverse is not so. For instance, it's logically possible that a third-party candidate could win the US presidential race in 2016. (Okay, it's probably more likely that machines will not only start thinking but become our friends too in the same year. :) But that this is possible doesn't mean it's necessary (or inevitable, as I think it might be put in temporal modal logic).
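For the record, that modal point is one line of textbook modal logic (this is just axiom D and the failure of its converse; nothing original here):

  \[
  \Box p \;\to\; \Diamond p \quad \text{(necessity entails possibility),}
  \qquad
  \Diamond p \;\not\vdash\; \Box p
  \]

The third-party win is the countermodel for the converse: a \(\Diamond p\) with no \(\Box p\).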

But let me go further than this. I think the case can be made that those arguing for strong AI (the usual term for what you call "True AI" or "Actual AI"; I just want to avoid multiplying terms here for what I feel is the same notion) are not merely arguing that their concept is not a priori nonsense, but that it's nomologically possible -- in the sense that it fits what they believe are the known laws of nature. Again, recall what I offered as their core argument:

1. Intelligence supervenes on a physical process in the biological brain.

2. That physical process can be made to happen in something other than a biological brain.

3. When you have done this you have made an artificial intelligence.
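One loose way to regiment that (my symbols, not anyone's official formulation: P for the relevant physical process, Bio(x) for "x is a biological brain," Realizes(x, P) for "P occurs in x," and assuming premise 1 holds of necessity):

  \[
  \begin{aligned}
  &\text{P1 (supervenience):} && \Box\,\forall x\,\big(\mathrm{Realizes}(x,P) \to \mathrm{Int}(x)\big)\\
  &\text{P2 (multiple realizability):} && \Diamond\,\exists x\,\big(\neg\mathrm{Bio}(x) \wedge \mathrm{Realizes}(x,P)\big)\\
  &\text{C:} && \Diamond\,\exists x\,\big(\neg\mathrm{Bio}(x) \wedge \mathrm{Int}(x)\big)
  \end{aligned}
  \]

So regimented, the argument is valid, which is why any attack has to land on a premise -- which is just the point of the next paragraph.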

Assuming there's no flaw in this (again, rather loose) argument's "logic," it seems the easiest way to attack it is to show either that intelligence doesn't supervene on physical processes (even allowing Roderick's "quibble") or that it does but can't happen in anything other than a biological brain (or a biological entity). In other words, to either refute physicalism (which, again, does have its serious critics) or to refute multiple realizability (in some form; there's a growing literature on this, but, like physicalism, multiple realizability has its serious critics too). I don't think either falls prey to your attack.

I'll go one further. I've met AI enthusiasts who posit that strong AI is an empirical claim which they believe will be proved true in the next few decades. (Long enough for many of us to be dead, I take it, so no egg on their faces if they're wrong -- unless you want to pull them out of their cryo-tanks and smear it on them:). But they admit it might not come to pass. Is that a reasonable or unreasonable position for them to hold?

And my guess is that some of the strong AI types who would go further than this -- seeing it as inevitable -- will, should they live long enough without seeing it come to pass, no doubt change their minds. (Just as, I trust, you'll change yours should strong AI really come about.)

Regards, 

Dan



