<div dir="ltr"><br>Hey Gordon,<br><br>It’s great to hear from you again! And thanks for starting up this great discussion on the most important of all topics (surely where the next greatest scientific discovery could come from). And it’s great to see all the other old guys back! Stathis, Anders, John, Eugen…<br>
<br>But, Gordon, as always, I’m really struggling to know what, exactly, it is you believe. People like Stathis have ‘Canonized’ their views, so I know concisely what they think, who else thinks like them, and how many other experts agree with them. When someone like Stathis says he’s in the Functional Property Dualism camp, I know exactly what he believes and who else agrees with him, and communication can finally take place: no hours of fuss and muss like what is already going on in this thread, with everyone failing to communicate and mostly talking past each other in infinitely repetitive yes/no/yes/no ways. When I listen to what you say, I’m not even sure whether you agree with the general, near-unanimous expert consensus “Representational Qualia Theory” stuff (<a href="http://canonizer.com/topic.asp/88/6">http://canonizer.com/topic.asp/88/6</a>). Sometimes I think we agree on many things, but I can never know for sure, because most of what you say is so nebulous.<br>
<br>I love what Eugen said about the Stanford philosophy page’s entry on the word “Intentionality”, how it makes his “skin crawl”, and all the great, right-on reasons he gave for why it is so bad. None of that is falsifiable or testable, and all of the behavior described there can be duplicated by abstract machines, as long as the representations are interpreted correctly. Using terms like that just makes communication with hard scientists impossible.<br>
<br>The stuff at Canonizer.com has progressed significantly since the last time we talked. In addition to now having Canonized Camps from Lehar, Chalmers, Hameroff, and Smythies, even Daniel Dennett has helped canonize his latest theory, which he calls “Predictive Bayesian Coding Theory” (notice that even he has abandoned, as falsified, his previous “multiple drafts” theory): <a href="http://canonizer.com/topic.asp/88/21">http://canonizer.com/topic.asp/88/21</a><br>
<br>So my question to you, Gordon, is: what, exactly, do you think about consciousness? Will it be possible to build an artificial machine, in any way, that has what you think is important, even if it is not an abstracted digital computer? And can you state what you believe concisely, so that others can know what you are saying, enough so that others who think similarly can say they agree with you? Are there any camps at Canonizer.com close to what you believe? If not, which camp is the closest? Have you found anyone, even a philosopher, who believes the same as you do?<br>
<br>I also like the way Eugen ridicules “philosophers”. Philosophers are always talking about non-falsifiable stuff, and they never have any way (i.e., hard science) of convincing everyone else to think the way they think we should. Canonizer.com is not philosophy; it is theoretical science, all about what kind of hard evidence would falsify your particular theory and convert you to mine. And it is all about rigorously measuring when scientific data, or good rational arguments, come along strong enough to falsify camps in the eyes of their former supporters.<br>
<br><br>Oh and Stathis said:<br><br><<<<<br>There is an argument from David Chalmers which proves that computers<br>can be conscious assuming only that (a) consciousness is due to the<br>brain and (b) the observable behaviour of the brain is computable.<br>
Consciousness need not be defined for the purpose of the argument<br>other than vaguely: you know it if you have it. This makes the<br>argument robust, not dependent on any particular philosophy of mind or<br>other assumptions. The argument assumes that consciousness is NOT<br>
reproducible by a computer and shows that this leads to absurdity. As<br>far as I am aware no-one has successfully challenged the validity of<br>the argument.<br><br><a href="http://consc.net/papers/qualia.html">http://consc.net/papers/qualia.html</a><br>
>>><br><br>Stathis, I think we’ve made much progress on this issue since you were last involved in the conversation. I’m looking forward to seeing whether we can now convince you that we have, and that there is a real possibility of a solution to the Chalmers conundrum you believe still exists.<br>
<br>I, for one, am in the camp that believes there is an obvious problem with Chalmers’ “fading/dancing qualia” argument, and that once you understand this, the “hard problem” goes away: it becomes objectively, simulatably, sharably (as in effing of the ineffable) soluble, no fuss, no muss, all falsifiably or scientifically demonstrably so (to the convincing of all experts). Having an experience like “oh, THAT is what your redness is like; I’ve never experienced anything like that before in my life, and I was that kind of ‘redness zombie’ until now” will certainly falsify most bad theories out there today, especially all the ones that predict that kind of sharing or effing of the ineffable will never be possible.<br>
<br>Brent Allsop<br><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Apr 24, 2013 at 8:50 AM, Eugen Leitl <span dir="ltr"><<a href="mailto:eugen@leitl.org" target="_blank">eugen@leitl.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On Wed, Apr 24, 2013 at 03:57:09PM +0200, Anders Sandberg wrote:<br>
<br>
> Eugen, part of this is merely terminology. Power in philosophy is<br>
<br>
</div>Yes, I realize that some of it is jargon. However (and not for<br>
lack of trying) I have yet to identify a single worthwhile<br>
concept coming out of that field, particularly in the theory of mind.<br>
<br>
You used to be a computational neuroscientist before you<br>
became a philosopher (turncoat! boo! hiss!). What is your professional<br>
opinion about the philosophy of mind subdiscipline?<br>
<div class="im"><br>
> something different than in physics, just as it means something very<br>
> different in sociology or political science.<br>
><br>
> Then again, I am unsure if intentionality actually denotes anything, or<br>
> whether it denotes a single something. It is not uncontroversial even<br>
> within the philosophy of mind community.<br>
><br>
><br>
>> "The word itself, which is of medieval Scholastic origin,"<br>
>> ah, so they admit it's useless.<br>
><br>
> Ah, just like formal logic. Or the empirical method.<br>
<br>
</div>Ah, but philosophy begat natural philosophy, aka the sciences.<br>
Unfortunately, the field itself never progressed much beyond<br>
its origins. The more’s the pity when a stagnant field is<br>
chronically prone to arrogant pronouncements about disciplines<br>
its practitioners don’t feel they need any domain knowledge in.<br>
<div class="im"><br>
>> See, something is fishy with your concept of consciousness. If we look<br>
>> at it as the ability to process information, suddenly we're starting to<br>
>> get somewhere.<br>
><br>
> Maybe. Defining information and processing is nearly as tricky. Shannon<br>
> and Kolmogorov don't get you all the way, since it is somewhat<br>
> problematic to even define what the signals are.<br>
><br>
> Measurability is not everything. There are plenty of risks that do not<br>
> have well defined probabilities, yet we need and can make decisions<br>
> about them with above chance success. The problem with consciousness,<br>
> intentionality and the other theory of mind "things" is that they are<br>
> subjective and private - you cannot compare them between minds.<br>
<br>
</div>I really like that the Si elegans has identified the necessity of<br>
a behavior library.<br>
<div class="HOEnZb"><div class="h5">_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</div></div></blockquote></div><br></div>