From eric at m056832107.syzygy.com Mon Feb 1 01:03:29 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 1 Feb 2010 01:03:29 -0000 Subject: [ExI] How to ground a symbol In-Reply-To: <975270.46265.qm@web36504.mail.mud.yahoo.com> References: <20100131230539.5.qmail@syzygy.com> <975270.46265.qm@web36504.mail.mud.yahoo.com> Message-ID: <20100201010329.5.qmail@syzygy.com> Gordon: >This kind of processing goes on in every software/hardware system. Yes, and apparently you didn't understand me. I already addressed this issue later in the same message. It's at a different layer of abstraction. It's fine to ignore parts of messages that you agree with. It's disingenuous to act as though a point hadn't been raised when you're actually ignoring it. >> Come back after you've written a neural network >> simulator and trained it to do something useful. > >Philosophers of mind don't care much about how "useful" it may seem. While I haven't actually written a neural network simulator, I have written quite a few programs that are of similar levels of complexity. I know from experience that things which seem simple, clear, and well defined when thought about in an abstract way are in fact complex, muddy, and ill-defined when one actually tries to implement them. Until such a system has been shown to do something useful, it's probably incomplete, and any intuition learned from writing it may well be useless. That's why I stipulated usefulness. >I think artificial neural networks show great promise as decision > making tools. Natural ones do too. >But 100 billion * 0 = 0. But 100,000,000,000 * 0.000,000,000,01 = 1. Your argument depends on the axiomatic assumption that the level of understanding in a single simulated neuron is *exactly* zero. Even the tiniest amount of understanding in a programmed device (like a thermostat) devastates your argument. So you cling to the belief that understanding must be a binary thing, while the universe around you continues to work by degrees instead of absolutes. Yes, philosophy deals with absolutes, but where it ignores shades of gray in the real world it gets things horribly wrong. -eric From gts_2000 at yahoo.com Mon Feb 1 01:47:46 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 31 Jan 2010 17:47:46 -0800 (PST) Subject: [ExI] multiple realizability In-Reply-To: <20100201010329.5.qmail@syzygy.com> Message-ID: <418148.11027.qm@web36501.mail.mud.yahoo.com> --- On Sun, 1/31/10, Eric Messick wrote: >> This kind of processing goes on in every >> software/hardware system. > > Yes, and apparently you didn't understand me. I > already addressed this issue later in the same message. > It's at a different layer of abstraction. The layer of abstraction does not matter to me. What does matter is the extent to which the system has supposed mental operations comprised of computational processes operating over formal elements, i.e., to what extent it operates by formal programs. To that extent, in my view, the system lacks a mind. One can conceive of an "artificially" constructed neural network that is in every respect identical to a natural brain, in which case that machine has a mind. So let's be clear: my objection is not that strong AI cannot happen. It is that it cannot happen in software/hardware systems, networked or stand-alone. To make my point even more clear: I reject the doctrine of multiple realizability.
I do not believe we can extract the mind from the neurological material that causes the subjective mental phenomena that characterize it, as if one could put a mind on a massive floppy disk and then load that "mental software" onto another substrate. I reject that idea as nothing more than a 21st century version of Cartesian mind/matter dualism. The irony is that people who don't understand me call me the dualist, and suggest that I, rather than they, posit the existence of some mysterious mental substance that exists distinct from brain matter. I hope Jeff Davis catches this message. -gts From stathisp at gmail.com Mon Feb 1 08:52:56 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 1 Feb 2010 19:52:56 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <491598.82004.qm@web36508.mail.mud.yahoo.com> References: <491598.82004.qm@web36508.mail.mud.yahoo.com> Message-ID: 2010/2/1 Gordon Swobe : >> He is the whole system, but his intelligence is only a >> small and inessential part of the system, as it could easily >> be replaced by dumber components. > > Show me who or what has conscious understanding of the symbols. The intelligence created by the system has understanding. >> It's irrelevant that the man doesn't really >> understand what he is doing. The ensemble of neurons doesn't >> understand what it's doing either, and they are the whole system too. > > I have no objection to your saying that neither the system nor anything contained in it has conscious understanding, but in that case you need to understand that you don't disagree with me; you don't believe in strong AI any more than I do. The system has understanding, but no part of the system either separately or taken as an ensemble has understanding. I've tried to explain this giving several variations on the CRA, none of which you have directly responded to, so here they are again: Suppose that each neuron has sufficient intelligence for it to know how to do its job. No neuron understands language, but the person does. There are many tiny specialised intelligences and one large general intelligence, and the two don't communicate. This is analogous to the extended CR. Suppose that the neurons are connected as one entity with sufficient intelligence to know when to make its constituent parts fire. This entity doesn't understand language, but the person does. There are two intelligences, one specialised and one general, and the two don't communicate. This is analogous to the CR. Suppose there are several men in the extended CR all doing their bit manipulating symbols. The men don't understand language, but the entity created by the system does. There are several small specialised intelligences (their general intelligence is not put to use) and one large general intelligence, and the two don't communicate. This is analogous to a normal brain.
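A toy sketch in Python may make this concrete (the weights below are invented for the illustration, and nothing here is meant as a model of real neurons): no single threshold unit computes XOR, but the wired-up ensemble does.

    def unit(w1, w2, bias):
        # A dumb threshold "neuron": outputs 1 iff w1*x + w2*y + bias > 0.
        return lambda x, y: 1 if w1 * x + w2 * y + bias > 0 else 0

    or_unit = unit(1, 1, -0.5)     # fires if either input fires
    nand_unit = unit(-1, -1, 1.5)  # fires unless both inputs fire
    and_unit = unit(1, 1, -1.5)    # fires only if both inputs fire

    def system(x, y):
        # XOR lives in the wiring, not in any individual unit.
        return and_unit(or_unit(x, y), nand_unit(x, y))

    for x in (0, 1):
        for y in (0, 1):
            print(x, y, system(x, y))  # third column reads 0, 1, 1, 0

Each unit is a tiny specialised "intelligence" that only knows its own job; the XOR behaviour belongs to the ensemble alone.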
-- Stathis Papaioannou From stathisp at gmail.com Mon Feb 1 10:04:03 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 1 Feb 2010 21:04:03 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <732400.27938.qm@web36501.mail.mud.yahoo.com> References: <20100131182926.5.qmail@syzygy.com> <732400.27938.qm@web36501.mail.mud.yahoo.com> Message-ID: On 1 February 2010 06:22, Gordon Swobe wrote: > --- On Sun, 1/31/10, Eric Messick wrote: >> This was the start of a series of posts where you said that >> someone with a brain that had been partially replaced with >> programmatic neurons would behave as though he was at least partially >> not conscious. You claimed that the surgeon would have to >> replace more and more of the brain until he behaved as though he was >> conscious, but had been zombified by extensive replacement. > > Right, and Stathis' subject will eventually pass the TT just as your subject will in your thought experiment. But in both cases the TT will give false positives. The subjects will have no real first-person conscious intentional states. I think you have tried very hard to avoid discussing this rather simple thought experiment. It has one premise, call it P: P: It is possible to make artificial neurons which behave like normal neurons in every way, but lack consciousness. That's it! Now, when I ask if P is true you have to answer "Yes" or "No". Is P true? OK, assuming P is true, what happens to a person's behaviour and to his experiences if the neurons in a part of his brain with an important role in consciousness are replaced with these artificial neurons? I'll answer the first part for you: his behaviour must remain unchanged. It must remain unchanged because the artificial neurons behave in a perfectly normal way in their interactions with normal neurons, sensory organs and effector organs, according to P. If they don't, then P is false, and you said that P is true. Can you see a way that I haven't seen whereby it might *not* be a contradiction to claim that the person's neurons will behave normally but the person will behave differently? OK, the person's behaviour remains unchanged, by definition if P is true. What about his experiences? The classic example here is visual perception. If P is true, then the person would go blind; but if P is true, he is also forced to behave as if he has normal vision. So internally, either he must not notice that he is blind, or he must notice that he is blind but be unable to communicate it. The latter is impossible for the same reasons as it is impossible that his behaviour changes: the neurons in his brain which do the thinking are also constrained to behave normally. That leaves the first option, that he goes blind but doesn't notice. If this idea is coherent to you, then you have to admit that you might right now be blind and not know it. However, you have clearly stated that you think this is preposterous: a zombie doesn't know it's a zombie, but you know you're not a zombie, and you would certainly know if you suddenly went blind (as a matter of fact, some people *don't* recognise when they go blind - it's called Anton's syndrome - but these people also behave abnormally, so they aren't zombies or partial zombies). Where does that leave you? I think you have to say you were mistaken in saying P is true. It isn't possible to make artificial neurons which behave like normal neurons in every way but lack consciousness. Can you see another way out that I haven't seen?
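For what it's worth, P is easy to state in code. Here is a minimal sketch (the class names and the toy integrate-and-fire rule are invented for the example, not a claim about real neurons): two components with different internals but identical observable behaviour, which is all that P requires.

    class BioNeuron:
        # Toy leaky integrate-and-fire rule with floating point internals.
        def __init__(self):
            self.potential = 0.0
        def step(self, stimulus):
            self.potential = self.potential * 0.9 + stimulus
            if self.potential >= 1.0:
                self.potential = 0.0
                return 1  # spike
            return 0

    class ArtificialNeuron:
        # The same rule with different internals: scaled integers.
        def __init__(self):
            self.p = 0  # potential scaled by 1000
        def step(self, stimulus):
            self.p = self.p * 9 // 10 + int(stimulus * 1000)
            if self.p >= 1000:
                self.p = 0
                return 1  # spike
            return 0

    inputs = [0.3, 0.4, 0.5, 0.1, 0.9, 0.2]
    a, b = BioNeuron(), ArtificialNeuron()
    print([a.step(s) for s in inputs])  # [0, 0, 1, 0, 0, 1]
    print([b.step(s) for s in inputs])  # [0, 0, 1, 0, 0, 1]

A test that looks only at the spike outputs cannot tell the two apart: that is what "behaves like a normal neuron in every way" means, and it is why the rest of the brain, which sees only the outputs, must carry on exactly as before.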
-- Stathis Papaioannou From stefano.vaj at gmail.com Mon Feb 1 10:16:10 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 1 Feb 2010 11:16:10 +0100 Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <20100129192646.5.qmail@syzygy.com> References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <20100129192646.5.qmail@syzygy.com> Message-ID: <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com> On 29 January 2010 20:26, Eric Messick wrote: > Meaning is attached to word symbols when the word symbols are > associated with sense symbols, not with other word symbols. Not all symbols are words - and in fact the word "three" can be associated with the number "3" - but "sense symbols" sounds like a dubious and redundant concept. -- Stefano Vaj From gts_2000 at yahoo.com Mon Feb 1 12:53:49 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 1 Feb 2010 04:53:49 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <641209.68117.qm@web36506.mail.mud.yahoo.com> --- On Mon, 2/1/10, Stathis Papaioannou wrote: > The system has understanding, but no part of the system > either separately or taken as an ensemble has understanding. > I've tried to explain this giving several variations on the > CRA, none of which you have directly responded to Because that answer doesn't make any sense to me, Stathis. Looks like you want to skirt the issue by asserting that the system understands things that the man, *considered as the system*, does not understand. You do this by imagining a fictional third entity that you call the "ensemble of neurons" that exists independently of the system. But the ensemble is the system. Did you read the actual target article? Notice that the system AND the neurons "taken as an ensemble" understand the stories in English but they do not understand the stories in Chinese. Please explain why the ensemble and the system understand English but not Chinese. Why the difference? -gts From stathisp at gmail.com Mon Feb 1 13:14:09 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 2 Feb 2010 00:14:09 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <641209.68117.qm@web36506.mail.mud.yahoo.com> References: <641209.68117.qm@web36506.mail.mud.yahoo.com> Message-ID: On 1 February 2010 23:53, Gordon Swobe wrote: > --- On Mon, 2/1/10, Stathis Papaioannou wrote: > >> The system has understanding, but no part of the system >> either separately or taken as an ensemble has understanding. >> I've tried to explain this giving several variations on the >> CRA, none of which you have directly responded to > > Because that answer doesn't make any sense to me, Stathis. Looks like you want to skirt the issue by asserting that the system understands things that the man, *considered as the system*, does not understand. You do this by imagining a fictional third entity that you call the "ensemble of neurons" that exists independently of the system. But the ensemble is the system. Could you respond to the specific examples I have used to demonstrate this apparently non-obvious point? The neurons do not understand language, they probably don't "understand" anything, and if they got together on a day off to talk about it they still wouldn't understand anything. And yet acting in concert, they produce this new entity, the person, who does understand language.
Note that it works both ways: the person, who is very much more intelligent than the neurons, doesn't have a clue what is going on in his head when he thinks either. It's his head, so how is this possible? > Did you read the actual target article? Notice that the system AND the neurons "taken as an ensemble" understand the stories in English but they do not understand the stories in Chinese. Please explain why the ensemble and the system understand English but not Chinese. Why the difference? You have to acknowledge that there are different levels of abstraction. The man understands English but that's completely irrelevant to his mechanistic symbol manipulation. It could be that a lone clever neuron in his frontal lobe understands Russian and recites Pushkin while squirting its neurotransmitters, but that has nothing to do with the man understanding Russian, since it does not in any way impact on the operation of his language centre; and conversely, the clever Russian-speaking neuron does not necessarily have any idea what the man is up to, nor any knowledge of English or Chinese. -- Stathis Papaioannou From gts_2000 at yahoo.com Mon Feb 1 13:29:15 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 1 Feb 2010 05:29:15 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <452387.36160.qm@web36507.mail.mud.yahoo.com> --- On Mon, 2/1/10, Stathis Papaioannou wrote: >> Right, and Stathis' subject will eventually pass the > TT just as your subject will in your thought experiment. But > in both cases the TT will give false positives. The subjects > will have no real first-person conscious intentional > states. > > I think you have tried very hard to avoid discussing this > rather simple thought experiment. It has one premise, call it P: I didn't avoid anything. We went over it a million times. :) > P: It is possible to make artificial neurons which behave > like normal neurons in every way, but lack consciousness. P = true if we define behavior as you've chosen to define it: the exchange of certain neurotransmitters into the synapses at certain times and other similar exchanges between neurons. I reject as absurd for example your theory that a brain the size of Texas constructed of giant neurons made of beer cans and toilet paper will have consciousness merely by virtue of those beer cans squirting neurotransmitters betwixt themselves in the same patterns that natural neurons do. I also reject, in the first place, your implied assumption that the neuron is necessarily the atomic unit of the brain. > OK, assuming P is true, what happens to a person's > behaviour and to his experiences if the neurons in a part of his > brain with an important role in consciousness are replaced with these > artificial neurons? As I explained many times, because your artificial neurons will not help the patient have complete subjective experience, and because experience affects behavior in healthy people, the surgeon will need to keep reprogramming the artificial neurons and most likely replacing and reprogramming other neurons until finally at long last he creates a patient that passes the Turing test. But that patient will not have any better quality consciousness than he started with, and may become far worse off subjectively by the time the surgeon finishes, depending on facts about neuroscience that in 2010 nobody knows. Eric offered a more straightforward experiment in which he simulated the entire brain.
You complicate the matter by doing partial replacements, but the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens. -gts From stathisp at gmail.com Mon Feb 1 14:28:04 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 2 Feb 2010 01:28:04 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <452387.36160.qm@web36507.mail.mud.yahoo.com> References: <452387.36160.qm@web36507.mail.mud.yahoo.com> Message-ID: 2010/2/2 Gordon Swobe : >> P: It is possible to make artificial neurons which behave >> like normal neurons in every way, but lack consciousness. > > P = true if we define behavior as you've chosen to define it: the exchange of certain neurotransmitters into the synapses at certain times and other similar exchanges between neurons. Yes, that would be one aspect of the behaviour that needs to be reproduced. > I reject as absurd for example your theory that a brain the size of Texas constructed of giant neurons made of beer cans and toilet paper will have consciousness merely by virtue of those beer cans squirting neurotransmitters betwixt themselves in the same patterns that natural neurons do. That is a consequence of functionalism but at this point functionalism is assumed to be wrong. All we need is artificial neurons that fit inside the head (which excludes structures the size of Texas) and can fool their neighbours into thinking they are normal neurons. > I also reject, in the first place, your implied assumption that the neuron is necessarily the atomic unit of the brain. OK, P can be made even more general by replacing "neuron" with "component". The component could be subneuronal in size or a collection of multiple neurons. It just has to behave normally in relation to its neighbours. >> OK, assuming P is true, what happens to a person's >> behaviour and to his experiences if the neurons in a part of his >> brain with an important role in consciousness are replaced with these >> artificial neurons? > > As I explained many times, because your artificial neurons will not help the patient have complete subjective experience, Yes, that's an essential part of P: no subjective experiences. > and because experience affects behavior in healthy people, the surgeon will need to keep re-programming the artificial neurons and most likely replacing and reprogramming other neurons until finally at long last he creates a patient that passes the Turing test. But that patient will not have any better quality consciousness than he started with, and may become far worse off subjectively by the time the surgeon finishes, depending on facts about neuroscience that in 2010 nobody knows. But how? We agreed that the artificial components BEHAVE NORMALLY. That is their essential feature, apart from lacking consciousness. You remove any normal component whatsoever, drop in the replacement, and the behaviour of the whole brain MUST remain unchanged, or else the replacement component is not as assumed. I can't believe that you don't see this, and, after being inconsistent, being disingenuous is the worst sin you can commit in philosophical discussions. > Eric offered a more straightforward experiment in which he simulated the entire brain.
You complicate the matter by doing partial replacements, but the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens. You keep repeating it but it doesn't make it so. I have assumed that what you are saying is true and tried to show you that it leads to an absurdity, but you respond by saying that if A behaves exactly the same as B then A does not behave exactly the same as B, and carry on as if no-one will notice the problem with this! -- Stathis Papaioannou From bbenzai at yahoo.com Mon Feb 1 14:15:33 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 1 Feb 2010 06:15:33 -0800 (PST) Subject: [ExI] extropy-chat Digest, Vol 77, Issue 1 In-Reply-To: Message-ID: <681115.27739.qm@web113615.mail.gq1.yahoo.com> > From: Gordon Swobe > To: ExI chat list > Subject: Re: [ExI] How to ground a symbol > Message-ID: <589903.82027.qm at web36507.mail.mud.yahoo.com> > Content-Type: text/plain; charset=iso-8859-1 > > --- On Sun, 1/31/10, Ben Zaiboc > wrote: > > > In future, whenever the system sees a rose, it will > know > > whether it's a red rose or not, because there'll be a > part > > of its internal state that matches the symbol "Red". > > The system you describe won't really "know" it is red. It > will merely act as if it knows it is red, no different from, > say, an automated camera that acts as if it knows the light > level in the room and automatically adjusts for it. Please explain what "really knowing" is. I'm at a loss to see how something that acts exactly as if it knows something is red can not actually know that. In fact, I'm at a loss to see how that sentence can even make sense. You're claiming that something which not only quacks and looks like, but smells like, acts like, sounds like, and is completely indistinguishable down to the molecular level from, a duck, can in fact not be a duck. That if you discover that the processes which give rise to the molecules and their interactions are due to digital information processing, then, suddenly, no duck. Ben Zaiboc From bbenzai at yahoo.com Mon Feb 1 14:28:43 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 1 Feb 2010 06:28:43 -0800 (PST) Subject: [ExI] multiple realizability In-Reply-To: Message-ID: <629922.52960.qm@web113610.mail.gq1.yahoo.com> Gordon Swobe declared: > The layer of abstraction does not matter to me. Well, if that's the case, all your philosophising avails you nothing. At all. Levels of abstraction are vitally important, and if you dismiss them as irrelevant, you're chucking not just the baby, but the whole universe out with the bathwater. If you honestly think levels of abstraction irrelevant, then everything is just a vast sea of gluons and quarks (or something even lower down), and there is no such thing as matter, planets, stars, water, trees, or people. If levels of abstraction are irrelevant, you don't exist. Ben Zaiboc From hkeithhenson at gmail.com Mon Feb 1 17:28:56 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 1 Feb 2010 10:28:56 -0700 Subject: [ExI] Glacier Geoengineering Message-ID: On Mon, Feb 1, 2010 at 5:00 AM, Alfio Puglisi wrote: > On Sun, Jan 31, 2010 at 10:52 AM, Keith Henson wrote: > >> The object is to freeze a glacier to bedrock. snip > > Temperatures at the glacier-bedrock interface can be amazingly high.
This > article talks about bedrock *welding* with temperatures higher than 1,000 > Celsius: > > http://jgs.lyellcollection.org/cgi/content/abstract/163/3/417 > > I guess the energy comes from the potential energy of the ice sliding down > the terrain. True. The article makes the point that it happened in a very short time in a small volume though. > This is only enough to take out the heat coming out of the earth. Probably >> need it somewhat >> larger to pull the huge masses of ice in a few decades down to a >> temperature where they would flow much slower. >> > > If one also needs to remove the heat generated gravitationally, this could > be potentially much larger than just the Earth's heat flux. Good point. Let's put numbers on it. Take a square km of ice a km deep. Consider the case of it sliding at 10 m/year down a 10 m/km (1%) slope, so the ice drops 0.1 m vertically per year. So the energy release would be Mgh: 1,000 kg/cubic meter x 10^9 cubic meters x 9.8 m/s^2 x 0.1 m = 9.8 x 10^11 J. That is released over a year, so divide by the seconds in a year, 3.15 x 10^7, for ~3.1 x 10^4 watts, which is 31 kW. So for this case of a fairly fast moving glacier, gravity released heat would be of the same order as the geo heat. Of course the heat from this motion would stop if the glacier was frozen to the bedrock. Keith From jonkc at bellsouth.net Mon Feb 1 17:04:58 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 1 Feb 2010 12:04:58 -0500 Subject: [ExI] How not to make a thought experiment (was: How to ground a symbol) In-Reply-To: <304772.53589.qm@web36501.mail.mud.yahoo.com> References: <304772.53589.qm@web36501.mail.mud.yahoo.com> Message-ID: On Jan 31, 2010, Gordon Swobe wrote: > Let me know what you think. > http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php More of the same. You ask us to imagine a room too large to fit into the observable universe and then say that it acts intelligently but "obviously" it doesn't understand anything. You just refuse to consider two possibilities: 1) That you don't fully understand understanding as well as you think you do. 2) Even if you don't understand how it could understand, the room could still understand. In fact if Darwin is right (and there is an astronomical amount of evidence that he is) then that room MUST have consciousness despite your or my lack of comprehension of the mechanics of it all. And even if Darwin is not right, every one of your arguments against consciousness existing in a robot could just as easily be used to argue against consciousness existing in your fellow human beings; but for some reason you seem unenthusiastic in pursuing that line of thought. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Mon Feb 1 18:13:12 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 1 Feb 2010 19:13:12 +0100 Subject: [ExI] Understanding is useless In-Reply-To: <165704.91501.qm@web36502.mail.mud.yahoo.com> References: <65FB1FA7-9C42-47BB-A32A-5B9B2C771FF9@bellsouth.net> <165704.91501.qm@web36502.mail.mud.yahoo.com> Message-ID: <580930c21002011013m7eb5e8b8r483ea67c719b304f@mail.gmail.com> On 29 January 2010 20:25, Gordon Swobe wrote: > Some people here might even call me a chauvinist of sorts for daring to claim that computers don't understand their own words. I suppose typewriters and cell phones should have civil rights too. Why, do you suggest that unconscious human beings should lose their own?
;-) -- Stefano Vaj From eric at m056832107.syzygy.com Mon Feb 1 18:14:30 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 1 Feb 2010 18:14:30 -0000 Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com> References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <20100129192646.5.qmail@syzygy.com> <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com> Message-ID: <20100201181430.5.qmail@syzygy.com> Stefano: >Eric: >> Meaning is attached to word symbols when the word symbols are >> associated with sense symbols, not with other word symbols. > >Not all symbols are words - and in fact the word "three" can be >associated with the number "3" - but "sense symbols" sounds like a >dubious and redundant concept. I should probably explain what I mean by the phrase "sense symbols". As a brain thinks, we can consider it as activating and processing sequences of sets of symbols. This is analogous to a CPU having various bit patterns active on internal busses, with the bit patterns representing symbols. Some of the symbols in the brain map 1 to 1 with words in a spoken language, and we would refer to them as word symbols. Other brain symbols appear within the brain as a direct result of the stimulation of sensory neurons in the body, and this is what I mean by a "sense symbol". It's basically the internal representation of directly sensed external events. Actually, I was partially mistaken in saying that meaning cannot be attached to a word by association with other words. A definition could associate a new word with a set of old words, and if all of the old words have meanings (by being grounded or by association) the new one can acquire meaning as well. -eric From Frankmac at ripco.com Mon Feb 1 18:54:46 2010 From: Frankmac at ripco.com (Frank McElligott) Date: Mon, 1 Feb 2010 13:54:46 -0500 Subject: [ExI] war is peace Message-ID: <003f01caa370$0a889bb0$ad753644@sx28047db9d36c> The largest exporter of oil is Russia, more than the Saudis. Yet there was only one bidder, my my, a dream auction. Is this the real world we live in, or are we back in the days of Sherwood Forest with Robin Hood and the sheriff? The Wizard of Russia By Michael Bohm Michael Bohm is opinion page editor of The Moscow Times. A year after former Yukos CEO Mikhail Khodorkovsky was arrested on fraud charges, Baikal Finance Group, a mysterious company with a share capital of only 10,000 rubles ($330), acquired Yukos' largest subsidiary, Yuganskneftegaz, for $9.3 billion in an "auction" consisting of only one bidder. After Yuganskneftegaz was sold four days later to state-controlled Rosneft, Andrei Illarionov, economic adviser to then-President Vladimir Putin, called the state expropriation of Yukos "the Biggest Scam of the Year" in his annual year-end list of Russia's worst events. When Illarionov announced his 2009 list in late December, he should have added another award and given it to Putin: "the Best PR Project of the Decade." The Yukos scam was "legal nihilism" par excellence, but most Russians have a completely different version of the event. The Kremlin's 180-degree PR spin on the Yukos nationalization should be a case study for any nation aspiring to create a Ministry of Truth. As Putin explained in his December call-in show, the Yukos affair was not government expropriation at all, but a way to give money that Yukos "stole from the people" back to the people by helping them buy new homes and repair old ones.
Putin, it turns out, is also Russia's Robin Hood. War is peace. Ignorance is strength. Oh, by the way, Obama's job program is going to cost 100 billion; again, another Robin Hood. :) Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Feb 1 21:59:53 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 1 Feb 2010 16:59:53 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <394481.10295.qm@web36506.mail.mud.yahoo.com> References: <394481.10295.qm@web36506.mail.mud.yahoo.com> Message-ID: <1CC02E2F-6A82-4B99-A0B4-39BF26253BEC@bellsouth.net> On Jan 31, 2010, Gordon Swobe wrote: > digital models of human brains will have the real properties of natural brains if and only if natural brains already exist as digital objects You've said that before and when you did I said brains are not important, minds are, and minds are digital although they are not objects. To save time and avoid needless wear and tear on electrons, the next time you have the urge to repeat that same remark yet again let's adopt the convention of you just saying "41" and my retort to your remark will be "42". > Philosophers of mind don't care much about how "useful" it may seem. And that's why philosophers of mind have never produced anything useful and probably never will; computer programmers have, mathematicians have, but philosophers of mind not so much. > They do care if it has a mind capable of having conscious intentional states: Unfortunately that is all philosophers of mind care about; if they spent just a little time considering what the mind in question actually does, regardless of what "intentional state" it is in, they would be much more successful. If they spent time taking a high school biology class they would be even better off. But they dislike getting their hands dirty conducting experiments other than the thought kind, and considering actual evidence is even more disagreeable to them. Darwin contributed astronomically more to understanding what the mind is than any philosopher of mind that ever lived. And these two-bit philosophers act as if they've never heard of him; they deserve our contempt. > Stathis. Looks like you want to skirt the issue by asserting that the system understands things that the man, *considered as the system*, does not understand. Some might think that it was outrageous enough to propose a thought experiment that contained a room larger than the observable universe and that operated so slowly that the 13.7 billion year age of the universe is not nearly enough time for it to complete a single action, and then to confidently proclaim exactly what this bizarre amalgamation can and cannot understand; but no, Searle was just getting warmed up. Calling his next step ridiculous doesn't capture its true nature, it's more like ridiculous to the ridiculous power. Piling absurdity on top of absurdity, he now wants us to think about a "man" who "internalized" this contraption that is far too large and far too slow to fit in our universe. I don't know what sort of entity could do that and I would be a fool to claim to know what that vastly improbable something could and couldn't do, and so would you, and so would Searle. I do know one thing, whatever it is you can bet your life that it isn't a man. > The system you describe won't really "know" it is red. It will merely act as if it knows it is red. Einstein didn't understand physics, he just acted like he understood physics.
Tiger Woods didn't understand how to play golf, he just acted like he understood how to play golf. I've said it before and I'll say it again: understanding is useless! John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Feb 1 23:47:10 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 01 Feb 2010 17:47:10 -0600 Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism" Message-ID: <4B6767FE.8080304@satx.rr.com> http://www.foxnews.com/story/0,2933,584500,00.html * Witches, Druids and pagans rejoice! The Air Force Academy in Colorado is about to recognize its first Wiccan prayer circle, a Stonehenge on the Rockies that will serve as an outdoor place of worship for the academy's neo-pagans.* Wiccan cadets and officers on the Colorado Springs base have been convening for over a decade, but the school will officially dedicate a newly built circle of stones on about March 10, putting the outdoor sanctuary on an equal footing with the Protestant, Catholic, Jewish and Buddhist chapels on the base. "When I first arrived here, Earth-centered cadets didn't have anywhere to call home," said Sgt. Robert Longcrier, the lay leader of the neo-pagan groups on base. "Now, they meet every Monday night, they get to go on retreats, and they have a stone circle." Academy officials had no tally of the number of Wiccan cadets at the school of 4,500, but said they had been angling to set up a proper space since the academic year began. "That's one of the newer groups," said John Van Winkle, a spokesman for the academy. "They've had a worship circle on base for some time and we're looking to get them an official one." The Air Force recognizes several distinct forms of neo-paganism, including Dianic Wicca, Seax Wicca, Gardnerian Wicca, shamanism and Druidism, according to Pagan groups that track the information. It isn't nearly as comprehensive when it comes to sects within other religions. The academy still does not recognize, for instance, the massive gulfs between Catholics with guilt problems and those without; or the distinct practices of Jews who keep kosher, those who eat bacon, and those who secretly wish they could. Since a 2004 survey of cadets on the base revealed dozens of instances of harassment and intolerance, superintendent Michael Gould has made religious tolerance a priority. Yet Van Winkle, the academy spokesman, said he could not confirm whether the school's superintendent or senior staff would attend the dedication ceremony. "(We) haven't gotten that far yet: First we have to get a date, and then once we get a date for the dedication ceremony we'll see who's going to be available for it," he told FoxNews.com. "Once we get a date that's going to be the real driving force for who's going to attend." From msd001 at gmail.com Tue Feb 2 03:46:15 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 1 Feb 2010 22:46:15 -0500 Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism" In-Reply-To: <4B6767FE.8080304@satx.rr.com> References: <4B6767FE.8080304@satx.rr.com> Message-ID: <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com> On Mon, Feb 1, 2010 at 6:47 PM, Damien Broderick wrote: > "When I first arrived here, Earth-centered cadets didn't have anywhere to > call home," said Sgt. Robert Longcrier, the lay leader of the neo-pagan > groups on base. Earth-centered cadets... didn't have anywhere... to call home. Is this in comparison to space cadets?
Or is it illustrating a problem with location or availability of communications equipment? Or maybe it's about the alienation of earth-centered cadets feeling isolated... from their earthican center? > "Now, they meet every Monday night, they get to go on retreats, and they > have a stone circle." On Monday night, the earth-centered cadets go on retreats to a stone circle? > Academy officials had no tally of the number of Wiccan cadets at the school > of 4,500, but said they had been angling to set up a proper space since the > academic year began. I can tell you the "angling" of a stone circle is 360 degrees, no matter how many bi-sects there are. Or maybe while on their Monday night retreats they go fishing? I'm not sure how that is a productive way to get work done. > "That's one of the newer groups," said John Van Winkle, a spokesman for the > academy. "They've had a worship circle on base for some time and we're > looking to get them an official one." What criteria are used to make a circle official? Is a qualified someone going to measure diameter and circumference to a high degree of precision before making a declaration? > The Air Force recognizes several distinct forms of neo-paganism, including > Dianic Wicca, Seax Wicca, Gardnerian Wicca, shamanism and Druidism, > according to Pagan groups that track the information. That's pretty impressive considering most of the time members of these groups can hardly recognize each other. > It isn't nearly as comprehensive when it comes to sects within other > religions. The academy still does not recognize, for instance, the massive > gulfs between Catholics with guilt problems and those without; or the > distinct practices of Jews who keep kosher, those who eat bacon, and those > secretly wish they could. And what would these groups be "officially" recognized as? Whole Guilt Catholics vs. Skim Catholics, or Bacon Jews vs. Fakin' Bacon Jews? > "(We) haven't gotten that far yet: First we have to get a date, and then > once we get a date for the dedication ceremony we'll see who's going to be > available for it," he told FoxNews.com. > > "Once we get a date that's going to be the real driving force for who's > going to attend." Much like high school students deciding if they'll attend a sophomore prom... I wonder if we could get the Air Force to recognize our Holy HotTub-based religion and declare an officially sanctioned meeting place on base? You know, to be fair and completely "tolerant." From thespike at satx.rr.com Tue Feb 2 03:54:15 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 01 Feb 2010 21:54:15 -0600 Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism" In-Reply-To: <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com> References: <4B6767FE.8080304@satx.rr.com> <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com> Message-ID: <4B67A1E7.9020905@satx.rr.com> On 2/1/2010 9:46 PM, Mike Dougherty quoth: >said John Van Winkle did make me wonder about a leg-pull... but he pops up on Google more than once. From moulton at moulton.com Tue Feb 2 05:42:01 2010 From: moulton at moulton.com (moulton at moulton.com) Date: 2 Feb 2010 05:42:01 -0000 Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism" Message-ID: <20100202054201.55588.qmail@moulton.com> Here is some background info: http://www.nytimes.com/2005/06/04/national/04airforce.html http://www.militaryreligiousfreedom.org/ It looks like things are getting better than they were a few years ago.
From gts_2000 at yahoo.com Tue Feb 2 14:43:10 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 2 Feb 2010 06:43:10 -0800 (PST) Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <20100201181430.5.qmail@syzygy.com> Message-ID: <969214.43756.qm@web36506.mail.mud.yahoo.com> --- On Mon, 2/1/10, Eric Messick wrote: > Actually, I was partially mistaken in saying that meaning > cannot be attached to a word by association with other words. I think you make an excellent observation here, Eric. The mere association of a symbol to another symbol does not give either symbol meaning. Symbols have derived intentionality, whereas people who use symbols have intrinsic intentionality. I'll try to explain what I mean... Compare: 1) Jack means that the moon orbits the earth. 2) The word "moon" means a large object that orbits the earth. In the scene described in 1), Jack means something by the symbol "moon". He has intrinsic intentionality. He has a conscious mental state in which *he means* to communicate something about the moon. In sentence 2), we (English speakers of the human species) attribute intentionality to the symbol "moon", as if the symbol itself has a conscious mental state similar to the one Jack had in 1). We imagine for the sake of convenience that symbols mean to say things by themselves. We often speak of words and other symbols this way, treating them as if they have conscious mental states, as if they really do mean to tell us what they mean. We anthropomorphize our language. The above might seem blindingly obvious (I hope so) but it has bearing on the symbol grounding question. Symbols have meaning only in the minds of conscious agents; that is, the apparent intentionality of words is derived from conscious intentional agents who actually do the meaning. -gts From bbenzai at yahoo.com Tue Feb 2 15:08:38 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 2 Feb 2010 07:08:38 -0800 (PST) Subject: [ExI] Glacier Geoengineering In-Reply-To: Message-ID: <847034.14281.qm@web113617.mail.gq1.yahoo.com> I need to ask a question here, please indulge me if the answer should be obvious: What's the point of sticking glaciers to their bedrock? Also, if you're going to build up stupendous amounts of potential energy like this, you'd better have a good scheme for dealing with it when it finally breaks loose. Hm, maybe not. The frozen-to-bedrock layer will just become the new bedrock, and you'll be back to square one, surely? Ben Zaiboc From bbenzai at yahoo.com Tue Feb 2 15:10:42 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 2 Feb 2010 07:10:42 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <263261.91221.qm@web113605.mail.gq1.yahoo.com> Gordon wrote: "the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens" This is a good example of a "straw man" argument. You are misrepresenting the claim that some formal programs can cause minds as a claim that *all* formal programs *must* cause minds. This is (or should be) obvious nonsense. As many people now have said, directly and indirectly, many times, it's not the 'formal programness' that's important. That is completely irrelevant. What's important is information processing of a particular kind.
This could be implemented by a biological system, an electronic or electromechanical system, a purely chemical system, a nanomechanical system or indeed by a massive array of beer cans and string. The fact that you find beer cans and string an unlikely substrate for intelligence is beside the point (I find it unlikely too, but for entirely different reasons, to do with practicality, not theoretical possibility). These 'formal programs' that you keep going on about are just one subset among a large set of possible information processing systems that can give rise to minds, if set up and run in the right way. Ben Zaiboc From stefano.vaj at gmail.com Tue Feb 2 18:57:38 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 2 Feb 2010 19:57:38 +0100 Subject: [ExI] 1984 and Brave New World In-Reply-To: <12411.23612.qm@web27003.mail.ukl.yahoo.com> References: <12411.23612.qm@web27003.mail.ukl.yahoo.com> Message-ID: <580930c21002021057x1cb3d14ai15b3cdafd0d9282a@mail.gmail.com> On 31 January 2010 16:09, Tom Nowell wrote: > Brave New World reflects the utopian thinking of those who believed a technocratic elite could bestow happiness for all, and its focus on biological engineering of people and society reflects the early 20th century eugenicists. In a time when people were publicly advocating the sterilisation of undesirable types, and where people were using dubious biology to push forward their own political views, Huxley warns us of one way in which this could end up. Mmhhh. Where is the "warning"? Huxley does seem to see the Brave New World as the unavoidable destination of the societal goals worth pursuing. And where is "eugenics", at least in a transhumanist sense? The different castes of BNW are kept as stable as possible, no effort to improve, enhance or change their genetic makeup is in place. -- Stefano Vaj From gts_2000 at yahoo.com Tue Feb 2 23:44:33 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 2 Feb 2010 15:44:33 -0800 (PST) Subject: [ExI] meaning & symbols In-Reply-To: Message-ID: <816357.45313.qm@web36501.mail.mud.yahoo.com> --- On Tue, 2/2/10, Spencer Campbell wrote: > According to Eric, association is the sole factor giving > many symbols meaning in human minds. The only prerequisite is that at > least one symbol in the web has meaning intrinsically; that is to > say, it is a sense symbol. Meaning can effectively be shared between > symbols, and is not diluted in the process. I think I misread Eric's sentence. Thanks for pointing that out. In any case I do not believe there exists any such thing as a "sense symbol". Organisms with highly developed nervous systems create and ponder mental abstractions, aka symbols, about sense data and about other abstractions. Simple organisms on the order of, say, fleas have eyes and other sense organs, so it seems likely that they have awareness of sense data. But because they lack a well developed nervous system it seems very improbable to me that they can do much in the way of forming symbols to represent that data. I also do not believe any symbol of any kind can have "intrinsic meaning". Meaning always arises in the context of a conscious mind. X means Y only according to some conscious Z. In casual conversation we sometimes speak about words as if they mean something, but they do not actually mean anything. Conscious agents mean things and they use words to convey their meanings.
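The point can be demonstrated in a few lines of code (the toy lexicon below is invented purely for illustration). A program can chase associations between word-symbols forever, and at no point does anything in it mean the moon:

    # Pure symbol-to-symbol association: tokens defined only by other tokens.
    lexicon = {
        "moon": ["large", "object", "orbits", "earth"],
        "earth": ["planet", "we", "live", "on"],
        "orbits": ["moves", "around"],
    }

    def expand(word, depth=2):
        # Replace each word by its associated words, to a given depth.
        if depth == 0 or word not in lexicon:
            return [word]
        out = []
        for w in lexicon[word]:
            out.extend(expand(w, depth - 1))
        return out

    print(expand("moon"))
    # ['large', 'object', 'moves', 'around', 'planet', 'we', 'live', 'on']

The program shuffles tokens according to rules; whatever meaning appears in the output exists only in the mind of the conscious agent who reads it.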
-gts From possiblepaths2050 at gmail.com Wed Feb 3 01:01:25 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 2 Feb 2010 18:01:25 -0700 Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism" In-Reply-To: <20100202054201.55588.qmail@moulton.com> References: <20100202054201.55588.qmail@moulton.com> Message-ID: <2d6187671002021701v18de857bs184fa3e66265410f@mail.gmail.com> I hope the Air Force Academy got a handle on the serious rape problem they had, not so many years ago. John On Mon, Feb 1, 2010 at 10:42 PM, wrote: > > Here is some background info: > http://www.nytimes.com/2005/06/04/national/04airforce.html > http://www.militaryreligiousfreedom.org/ > > It looks like things are getting better than they were a few years ago. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Wed Feb 3 02:32:09 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 2 Feb 2010 18:32:09 -0800 (PST) Subject: [ExI] meaning & symbols In-Reply-To: Message-ID: <230474.84371.qm@web36502.mail.mud.yahoo.com> --- On Tue, 2/2/10, Spencer Campbell wrote: > In the second paragraph I almost jumped on you again for > misusing the concept of abstraction, but then I noticed you said "and > about other" rather than "and other". You weren't saying that sense data > are abstractions, if I understand correctly. Right. > When we get to the third paragraph, however, it sounds as > if you believe that mankind discovered symbols rather than > invented them. No. Not sure why you would say that. I certainly do not believe we discover symbols. We create them. > Here's the thing: the very idea of a symbol is, in and of > itself, an abstraction. I suspect it's possible to form a > coherent model of the mind (by today's standards) without ever > mentioning symbols or anything like them. It may not be a particularly > elegant model, but it would work as well as any other. No matter what you may choose to call them, people do speak and understand word-symbols. The human mind thus "has semantics" and any coherent model of it must explain that plain fact. The computationalist model fails to explain that plain fact. On that model minds do no more than run programs and programs do no more than manipulate symbols according to rules of syntax. Nothing in the model explains how the mind can have conscious understanding of the symbols it manipulates. To make the model coherent, its proponents must introduce a homunculus: an observer/user of the supposed brain/computer who sees and understands the meanings of the symbols. But the homunculus fallacy proves fatal to the theory: How does the homunculus understand the symbols if not by some means other than computation? And if that's so then why did we say the mind exists as a computer in the first place? > You're correct in saying that sense symbols do not exist, > but only insofar as there aren't any symbols which DO exist. Hmm, I count 21 word-symbols in that sentence of yours. > All I meant by "intrinsic" meaning was that some symbols in > the field of all available within a given Z are meaningful > irrespective of any other symbols. Eric explains that this is so > because they are invoked directly by incoming sensory data: I see > a dog, I think a dog symbol.
Yes, but when I look inside your head I see nothing even remotely resembling a digital computer. Instead I see a marvelous product of biological evolution. -gts From pharos at gmail.com Wed Feb 3 09:04:36 2010 From: pharos at gmail.com (BillK) Date: Wed, 3 Feb 2010 09:04:36 +0000 Subject: [ExI] meaning & symbols In-Reply-To: <230474.84371.qm@web36502.mail.mud.yahoo.com> References: <230474.84371.qm@web36502.mail.mud.yahoo.com> Message-ID: On 2/3/10, Gordon Swobe wrote: > Yes, but when I look inside your head I see nothing even remotely > resembling a digital computer. Instead I see a marvelous product > of biological evolution. > > Yes. You have told us all at great length that you very strongly believe that only human brains (and other things which must be almost identical to human brains) can do the magic human 'consciousness' thing. That's fine, you are allowed to believe anything you like, but it is only a belief that you cannot 'prove' is correct. And much reasoning has been produced to show that it is probably a mistaken belief. We shall just have to wait until weak AI computers develop (probably using new designs and different programming techniques) into machines that apparently have strong AI, are more intelligent than humans and have a type of 'consciousness'. When these machines are out exploring the universe and reporting back to the remaining humans trapped on earth who are being looked after by similar intelligent machines, I fully expect you to say 'But they're not *really* conscious'. BillK From stefano.vaj at gmail.com Wed Feb 3 13:04:44 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 3 Feb 2010 14:04:44 +0100 Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism" In-Reply-To: <4B6767FE.8080304@satx.rr.com> References: <4B6767FE.8080304@satx.rr.com> Message-ID: <580930c21002030504m701fe685k6f0e955f191467b5@mail.gmail.com> On 2 February 2010 00:47, Damien Broderick wrote: > http://www.foxnews.com/story/0,2933,584500,00.html > > * Witches, Druids and pagans rejoice! The Air Force Academy in Colorado is > about to recognize its first Wiccan prayer circle, a Stonehenge on the > Rockies that will serve as an outdoor place of worship for the academy's > neo-pagans.* > > Good news... Even though I am somewhat diffident of the orthodoxy of US neopagans. ;-) -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Feb 3 13:09:05 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 3 Feb 2010 14:09:05 +0100 Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <20100201181430.5.qmail@syzygy.com> References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <20100129192646.5.qmail@syzygy.com> <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com> <20100201181430.5.qmail@syzygy.com> Message-ID: <580930c21002030509m60aebf03s28104ee60540978b@mail.gmail.com> On 1 February 2010 19:14, Eric Messick wrote: > Some of the symbols in the brain map 1 to 1 with words in a spoken > language, and we would refer to them as word symbols. Other brain > symbols appear within the brain as a direct result of the stimulation > of sensory neurons in the body, and this is what I mean by a "sense > symbol". > So, is the integer "3" a word symbol or a sense symbol? And what about the ASCII decoding of a byte? Or the rasterisation of the ASCII symbol? And what difference exactly would it make?
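A couple of lines of Python make the question concrete (a toy illustration, nothing more): the same byte answers to all of these descriptions at once.

    b = 0x33                 # one byte
    print(b)                 # read as an integer: 51
    print(chr(b))            # read as an ASCII character: '3'
    print(int(chr(b)))       # read as the number that character names: 3
    print(format(b, '08b'))  # read as a bit pattern: 00110011

Which "symbol" it is seems to depend entirely on the reader, not on the byte.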
-- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Feb 3 13:15:28 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 3 Feb 2010 14:15:28 +0100 Subject: [ExI] war is peace In-Reply-To: <003f01caa370$0a889bb0$ad753644@sx28047db9d36c> References: <003f01caa370$0a889bb0$ad753644@sx28047db9d36c> Message-ID: <580930c21002030515y34e2ec6ag9742e901071242b4@mail.gmail.com> 2010/2/1 Frank McElligott > The Yukos scam was "legal nihilism" par excellence, but most Russians have > a completely different version of the event. The Kremlin's 180-degree PR > spin on the Yukos nationalization should be a case study for any nation > aspiring to create a Ministry of Truth. As Putin explained in his December > call-in show, the Yukos affair was not government expropriation at all, but > a way to give money that Yukos "stole from the people" back to the people by > helping them buy new homes and repair old ones. Putin, it turns out, is also > Russia's Robin Hood. War is peace. Ignorance is strength. > > I am really confused. Do you maintain that they were wrong to change their views? Or that they should have left Yukos in the hands it had fallen in? And why? -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Wed Feb 3 13:52:04 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 3 Feb 2010 05:52:04 -0800 (PST) Subject: [ExI] meaning & symbols In-Reply-To: Message-ID: <910459.5460.qm@web113613.mail.gq1.yahoo.com> Gordon Swobe wrote: > I do not believe there exists any such thing as > a "sense symbol". > > Organisms with highly developed nervous systems create and > ponder mental abstractions, aka symbols, about sense data > and about other abstractions. > > Simple organisms on the order of, say, fleas have eyes and > other sense organs, so it seems likely that they have > awareness of sense data. But because they lack a well > developed nervous system it seems very improbable to me that > they can do much in the way of forming symbols to represent > that data. OK obviously this word 'symbol' needs some clear definition. I would use the word to mean any distinct pattern of neural activity that has a relationship with other such patterns. In that sense, sensory symbols exist, as do (visual) word symbols, (auditory) word symbols, concept symbols, which are a higher-level abstraction from the above three types, and hundreds of other types of 'symbol', representing all the different patterns of neural activity that can be regarded as coherent units, like emotional states, memories, linguistic units (nouns, verbs, etc.), and their higher-level 'chunks' (birdness, the concept of fluidity, etc.), and so on. But that's just me. Maybe I'm overstretching the use of the word. What do other people mean by the word 'symbol', in this context? Gordon points out that they are all meaningless in themselves, only taking on a meaning in the context of a system that can be called a conscious mind. I'm not sure if the 'conscious' part is necessary, though. In any event, the 'meaning' arises as a result of the interaction of the symbols, grounded in the system's interaction with its environment. To say that an organism's 'hunger', which results in it finding and consuming food, is meaningless unless the organism is conscious, is rather a silly statement, and calls into question what we mean by 'meaning'. 
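For the sake of argument, here is 'hunger' sketched in a dozen lines of Python (every name in it is invented for the illustration): an internal state that systematically mediates between sensing and feeding, in an entity nobody would call conscious.

    import random

    energy = 5
    for step in range(20):
        energy -= 1                    # metabolism drains the internal state
        if energy < 3:                 # "hunger": the state crosses a threshold
            if random.random() < 0.5:  # "foraging" sometimes succeeds
                energy += 4            # "eating" restores the state
                print(step, "ate; energy now", energy)
            else:
                print(step, "searched and found nothing")
        if energy <= 0:
            print(step, "starved")
            break

Whether one wants to say that the energy variable 'means' anything to this little loop is exactly the question at issue.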
Ben Zaiboc From jonkc at bellsouth.net Wed Feb 3 15:32:24 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Feb 2010 10:32:24 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <969214.43756.qm@web36506.mail.mud.yahoo.com> References: <969214.43756.qm@web36506.mail.mud.yahoo.com> Message-ID: On Feb 2, 2010, Gordon Swobe wrote: > The mere association of a symbol to another symbol does not give either symbol meaning. > Symbols have derived intentionality, whereas people who use symbols have intrinsic intentionality. Broken down to its smallest component parts, a symbol is something that consistently and systematically changes the state of the symbol reader. A Turing Machine does this when it encounters a zero or a one, and a punch card reader does this when it encounters a hole. You demand an explanation of human-style intentionality and say, correctly, that the examples I cite are far less complex and awe-inspiring, but if they were just as mysterious they wouldn't be doing their job. I honestly don't know what you want: you say you want an explanation, but when one is provided and it's split into parts small enough to comprehend, you say "I understand that, so it can't be the explanation". Your retort is always "I don't understand that" or "I do understand that, so 'obviously' that can't be right". Even in theory I don't see how any explanation would satisfy you. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Feb 3 16:13:18 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Feb 2010 11:13:18 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <230474.84371.qm@web36502.mail.mud.yahoo.com> References: <230474.84371.qm@web36502.mail.mud.yahoo.com> Message-ID: On Feb 2, 2010, at 9:32 PM, Gordon Swobe wrote: > The human mind thus "has semantics" and any coherent model of it must explain that plain fact. But before that can happen you must explain what you mean by explain. > > The computationalist model fails to explain that plain fact. It explains it beautifully according to my understanding of the word. You want a theory that is simultaneously completely understandable and utterly mysterious, so naturally you have been disappointed. > On that model minds do no more than run programs and programs do no more than manipulate symbols according to rules of syntax. Correct me if I'm wrong, but I seem to think you may have said something along those lines before, and I think I even remember people bringing up very good counterarguments against that argument that you have steadfastly ignored. > Nothing in the model explains how the mind can have conscious understanding of the symbols it manipulates. True, they are not comprehensible and incomprehensible at the same time. > I look inside your head I see nothing even remotely resembling a digital computer. Then why are people spending hundreds of millions of dollars building digital computers that simulate larger and larger chunks of neurons? > Instead I see a marvelous product of biological evolution. How can you dare use the word "Evolution"!? YOUR VIEWS ARE 100% INCOMPATIBLE WITH EVOLUTION! John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ablainey at aol.com Wed Feb 3 19:54:58 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Wed, 03 Feb 2010 14:54:58 -0500 Subject: [ExI] meaning & symbols In-Reply-To: <910459.5460.qm@web113613.mail.gq1.yahoo.com> Message-ID: <8CC7321E5DB89A5-D54-12CB@webmail-d081.sysops.aol.com> -----Original Message----- Ben Zaiboc wrote >OK obviously this word 'symbol' needs some clear definition. >I would use the word to mean any distinct pattern of neural activity that has a >relationship with other such patterns. In that sense, sensory symbols exist, as >do (visual) word symbols, (auditory) word symbols, concept symbols, which are a >higher-level abstraction from the above three types, and hundreds of other types >of 'symbol', representing all the different patterns of neural activity that can >be regarded as coherent units, like emotional states, memories, linguistic units >(nouns, verbs, etc.), and their higher-level 'chunks' (birdness, the concept of >fluidity, etc.), and so on. > >But that's just me. Maybe I'm overstretching the use of the word. > >What do other people mean by the word 'symbol', in this context? > >Gordon points out that they are all meaningless in themselves, only taking on a >meaning in the context of a system that can be called a conscious mind. > >I'm not sure if the 'conscious' part is necessary, though. In any event, the >'meaning' arises as a result of the interaction of the symbols, grounded in the >system's interaction with its environment. > >To say that an organism's 'hunger', which results in it finding and consuming >food, is meaningless unless the organism is conscious, is rather a silly >statement, and calls into question what we mean by 'meaning'. > >Ben Zaiboc I agree. The problem is that we are using linguistic symbols to which we give our own personal meaning to debate a system that we do not fully understand and of which we cannot effectively articulate our personal view. I would go along with the notion that there are sense symbols and many other kinds. So in that context of "symbols" I don't think consciousness is necessary. Certainly not at a self-awareness level. Does this exclude intelligence? I think our definitions need some tweaking. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Wed Feb 3 22:05:18 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 3 Feb 2010 14:05:18 -0800 (PST) Subject: [ExI] meaning & symbols In-Reply-To: Message-ID: <837227.95744.qm@web36508.mail.mud.yahoo.com> --- On Wed, 2/3/10, BillK wrote: > You have told us all at great length that you very > strongly believe that only human brains (and other things which must > be almost identical to human brains) can do the magic human > 'consciousness' thing. I have no interest in magic. I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents (semantics). Idealistic dreamers here on ExI take offense. Sorry about that. -gts From gts_2000 at yahoo.com Wed Feb 3 23:08:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 3 Feb 2010 15:08:34 -0800 (PST) Subject: [ExI] meaning & symbols In-Reply-To: Message-ID: <642330.6805.qm@web36501.mail.mud.yahoo.com> --- On Wed, 2/3/10, John Clark wrote: > Broken down to its smallest component parts, a symbol is something that > consistently and systematically changes the state of the symbol reader. Many things aside from symbols can consistently and systematically change the state of the symbol reader.
This Wikipedia definition seems better: "A symbol is something such as an object, picture, written word, sound, or particular mark that represents something else by association, resemblance, or convention." -gts From hkeithhenson at gmail.com Thu Feb 4 00:26:16 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 3 Feb 2010 17:26:16 -0700 Subject: [ExI] Glacier Geoengineering Message-ID: On Wed, Feb 3, 2010 at 5:00 AM, Ben Zaiboc wrote: > I need to ask a question here, please indulge me if the answer should be obvious: > > What's the point of sticking glaciers to their bedrock? To slow them down. That way they don't run off into the sea or down to lower altitudes where they melt. > Also, if you're going to build up stupendous amounts of potential energy like this, you'd better have a good scheme for dealing with it when it finally breaks loose. > > Hm, maybe not. The frozen-to-bedrock layer will just become the new bedrock, and you'll be back to square one, surely? No, they will still move, but much slower. Ice is like cold tar: the colder you get it, the slower it moves. Keith From ddraig at gmail.com Tue Feb 2 06:03:23 2010 From: ddraig at gmail.com (ddraig) Date: Tue, 2 Feb 2010 17:03:23 +1100 Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism" In-Reply-To: <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com> References: <4B6767FE.8080304@satx.rr.com> <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com> Message-ID: On 2 February 2010 14:46, Mike Dougherty wrote: > I can tell you the "angling" of a stone circle is 360 degrees, no > matter how many bisects there are. Maybe the stones lean inwards? Or outwards? Maybe the whole thing is designed to be in a slow process of collapse, until you end with something looking like a stone circle made of dominos? >> "That's one of the newer groups," said John Van Winkle, a spokesman for the >> academy. "They've had a worship circle on base for some time and we're >> looking to get them an official one." > What criteria is used to make a circle official? Circles are, officially, circular, I believe. > That's pretty impressive considering most of the time members of these > groups can hardly recognize each other. Sure they can. They are fat, and dress in black. > I wonder if we could get the Air Force to recognize our Holy > HotTub-based religion and declare an officially sanctioned meeting > place on base? You know, to be fair and completely "tolerant." Works for me. Is the HPS cute? Dwayne -- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e... http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From lacertilian at gmail.com Mon Feb 1 00:54:57 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 31 Jan 2010 16:54:57 -0800 Subject: [ExI] How to ground a symbol In-Reply-To: <975270.46265.qm@web36504.mail.mud.yahoo.com> References: <20100131230539.5.qmail@syzygy.com> <975270.46265.qm@web36504.mail.mud.yahoo.com> Message-ID: Gordon Swobe : >Eric Messick : >> The animations and other text at the site all indicate that >> this is the type of processing going on in Chinese rooms. > This kind of processing goes on in every software/hardware system. No, it doesn't. That's only the result of the processing. I went over this before. The processing itself is so spectacularly more fine-grained that thinking about it as an "if this input, then this output" rule is outright fallacious.
Yes, you put that input in; yes, you get that output out; but between these two points, a universe is created and destroyed. Gordon Swobe : >Eric Messick : >> Come back after you've written a neural network >> simulator and trained it to do something useful. > > Philosophers of mind don't care much about how "useful" it may seem. They do care if it has a mind capable of having conscious intentional states: thoughts, beliefs, desires and so on as I've already explained. The point isn't to have a useful product, it's to demonstrate a minimal comprehension of how neural network simulations work. You left out the crux of what Eric said: "Then we'll see if your intuition still says that computers can't understand anything." Getting a neural network simulation to do anything useful is sufficiently difficult that you will necessarily learn something about them in the process, and this may change your intuitive impression of what a computer is capable of. Besides, we don't care what philosophers of mind think. We care what computers think. Regrettably, we are forced to talk to the former in order to learn about the latter. From lacertilian at gmail.com Mon Feb 1 02:47:55 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 31 Jan 2010 18:47:55 -0800 Subject: [ExI] multiple realizability In-Reply-To: <418148.11027.qm@web36501.mail.mud.yahoo.com> References: <20100201010329.5.qmail@syzygy.com> <418148.11027.qm@web36501.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > The layer of abstraction does not matter to me. Bad move. You've been attacked before on the basis that you have trouble comprehending the importance of abstraction. How far down through the layers one can go before new complexities cease to emerge is a tremendous component of the argument against formal programs being capable of creating consciousness. To prove this is trivial. All I have to do is invoke a couple of black boxes: One box contains my brain, and another box contains a standard digital computer running the best possible simulation of my brain. Both brains begin in exactly the same state. A single question is sent into each box at the same moment, and the response coming out of the other side is identical. This is the highest level of abstraction, turning whole brains into, essentially, pseudo-random number generators. They carry states; an input is combined with the state through a definite function; the state changes; an output is produced. Gordon has said before that in situations like these, it is impossible to determine whether or not either box has consciousness without "exploratory surgery". I assume Gordon is at least as good a surgeon as Almighty God, and has unlimited computing power available to analyze the resulting data instantaneously. The point is that such surgery is precisely the sort of process which reduces the level of abstraction. A crash course may be in order. You are given ten thousand people. You ask, "How many have blue eyes?". The number splits into two, becoming less abstract. You ask, "How many are taller than I am?". Now there are four numbers, and one quarter the abstraction. Eventually any question you ask will be redundant, as you will have split the population into ten thousand groups of one. But there is still some abstraction left: people are not fundamental particles. So you ask enough questions to uniquely identify every proton, neutron, electron, and any other relevant components. 
Yet still your description is abstract, because you've only differentiated the particles: you haven't determined their exact locations in space. And here, in a universe equipped with the Heisenberg uncertainty principle, we find that you can't. The description is still abstract. It can be made less so, as we expend greater and greater sums of energy to pin down ever more precise configurations of human beings, but to eliminate abstraction entirely would require infinite energy. In this thread Gordon explicitly rejects the notion that a mind can be copied, in whole, to another substrate without a catastrophic interruption in subjective experience. I agree with this, but I think it's for a completely different reason. I can't say for sure because his clarification made things less clear. Proposition A: a machine operating by formal programs cannot replicate the human mind. Proposition B: a neural network could conceivably replicate the human mind. Logical Conclusion: an individual human mind cannot be extracted from its neurological material. This does not appear to follow, unless you were counting artificial neural networks as "neurological material". I understood you to mean the specific neurons responsible for instantiating the mind in question originally. By my understanding, that one experiment in which you replace each individual neuron with an artificial duplicate, one by one, would preserve the same conscious mind you started with. Actually I am kind of counting on this last point being true, so I have a vested interest in finding out whether or not it is. If you can convince me of my error before I actually act on it, Gordon, I would appreciate it. For the record, I am a dualist in the sense that I believe minds are distinct entities from brains, as well as that programs are distinct entities from computers. However, I do not believe that minds or programs are composed of a "substance" in any sense. Both are insubstantial. Software (which I say includes minds) is one layer of abstraction higher than its supporting hardware (which I say includes brains), and therefore one order of magnitude less "real". I'm not sure what the radix is for that order of magnitude, but I am absolutely confident that it is exactly one order! From lacertilian at gmail.com Mon Feb 1 19:15:11 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 1 Feb 2010 11:15:11 -0800 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: References: <20100131182926.5.qmail@syzygy.com> <732400.27938.qm@web36501.mail.mud.yahoo.com> Message-ID: Stathis Papaioannou : > P: It is possible to make artificial neurons which behave like normal > neurons in every way, but lack consciousness. > > That's it! Now, when I ask if P is true you have to answer "Yes" or > "No". Is P true? Yes. But not for any reason relevant to the discussion. The proposition doesn't illustrate your point. Ordinary neurons behave normally without producing consciousness all the time! This state can be produced with trivial effort: either fall asleep, faint, or get somebody to knock you upside the head. Presto. An entire unconscious brain, neurons and all. Request that you clarify the constraints of the experiment. Now, for the other thing that bothered me... Gordon Swobe : > P = true if we define behavior as you've chosen to define it: the exchange of certain neurotransmitters into the synapses at certain times and other similar exchanges between neurons. Stathis has not chosen to define behavior this way. 
Stathis Papaioannou : > Yes, that would be one aspect of the behaviour that needs to be reproduced. See, he's talking about behavior in full: walking, talking, thinking, everything. I don't know why he didn't come right out and say that when obviously it's a point of contention. I had to deduce it from this cryptic reply. It seems as if Gordon believes behavior has nothing to do with consciousness, and Stathis believes consciousness is produced as a direct result of behavior. Further, that the quantity of consciousness is proportional to the intelligence of that behavior. I'd be interested to hear from each of you a description of what would constitute the simplest possible conscious system, and whether or not such a system would necessarily also have intelligence or understanding. I haven't been able to figure out exactly what any of these three words mean to either of you. I am pretty sure, however, that you each have radically different definitions. From lacertilian at gmail.com Mon Feb 1 19:49:23 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 1 Feb 2010 11:49:23 -0800 Subject: [ExI] extropy-chat Digest, Vol 77, Issue 1 In-Reply-To: <681115.27739.qm@web113615.mail.gq1.yahoo.com> References: <681115.27739.qm@web113615.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : >Gordon Swobe : >> The system you describe won't really "know" it is red. It >> will merely act as if it knows it is red, no different from, >> say, an automated camera that acts as if it knows the light >> level in the room and automatically adjusts for it. > Please explain what "really knowing" is. > > I'm at a loss to see how something that acts exactly as if it knows something is red can not actually know that. In fact, I'm at a loss to see how that sentence can even make sense. Like so many other things, it depends on the method of measurement. Gordon did not describe any such thing, but we can assume he had at least a vague notion of one in mind. It actually is possible to get that paradoxical result, and in fact it's easy enough that examples of it are widespread in reality. See: public school systems the world over, and their obsessive tendency to test knowledge. It's alarmingly easy to get the right answer on a test without understanding why it's the right answer, but a certain mental trick is required to notice when this happens. Basically, you have to understand your own understanding without falling into an infinite recursion loop. Human beings are naturally born with that ability, but most people lose it in school because they learn (incorrectly) that understanding doesn't make a difference. Ben Zaiboc : > You're claiming that something which not only quacks and looks like, but smells like, acts like, sounds like, and is completely indistinguishable down to the molecular level from, a duck, can in fact not be a duck. That if you discover that the processes which give rise to the molecules and their interactions are due to digital information processing, then, suddenly, no duck. This is the standard method of measurement in philosophy: omniscience. The only problem is, omniscience tends to break down rather rapidly when confronted with questions about subjective experience. If you do manage to pry a correct answer from your god's-eye view, it will typically be paradoxical, ambiguous, or both. Works great for ducks, though, and brains by extension.
If you assume the existence of consciousness in a given brain, and then you perfectly reconstruct that brain elsewhere on an atomic level, the copy must necessarily also have consciousness. But then you have to ask whether or not it's the same consciousness, and, in my case, I'm forced to conclude that the copy is identical, but distinct. In the next moment, the two versions will diverge, ceasing to be identical. So far so good. However, Gordon usually does not begin with a working consciousness: he tries to construct one from scratch, and he finds that when he uses a digital computer to do so, he fails. I'm not sure yet whether this is a fundamental limitation built into how digital computers work, or if Gordon is just a really bad programmer. I tend to believe the latter. Gordon believes the former, so he's extended the notion to situations in which we DO begin with a working consciousness and then try to move it to another medium. Hope that elucidates matters for you. Also, that it's accurate. From lacertilian at gmail.com Tue Feb 2 18:03:01 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 2 Feb 2010 10:03:01 -0800 Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <969214.43756.qm@web36506.mail.mud.yahoo.com> References: <20100201181430.5.qmail@syzygy.com> <969214.43756.qm@web36506.mail.mud.yahoo.com> Message-ID: On Tue, Feb 2, 2010 at 6:43 AM, Gordon Swobe wrote: > --- On Mon, 2/1/10, Eric Messick wrote: > >> Actually, I was partially mistaken in saying that meaning >> cannot be attached to a word by association with other words. > > I think you make an excellent observation here, Eric. The mere association of a symbol to another symbol does not give either symbol meaning. > > Symbols have derived intentionality, whereas people who use symbols have intrinsic intentionality. I'll try to explain what I mean... > > Compare: > > 1) Jack means that the moon orbits the earth. > > 2) The word "moon" means a large object that orbits the earth. > > In the scene described in 1), Jack means something by the symbol "moon". He has intrinsic intentionality. He has a conscious mental state in which *he means* to communicate something about the moon. > > In sentence 2), we (English speakers of the human species) attribute intentionality to the symbol "moon", as if the symbol itself has a conscious mental state similar to the one Jack had in 1). We imagine for the sake of convenience that symbols mean to say things about themselves. We often speak of words and other symbols this way, treating them as if they have conscious mental states, as if they really do mean to tell us what they mean. We anthropomorphize our language. > > The above might seem blindingly obvious (I hope so) but it has bearing on the symbol grounding question. Symbols have meaning only in the minds of conscious agents; that is, the apparent intentionality of words is derived from conscious intentional agents who actually do the meaning. > > -gts Gordon Swobe : >> Actually, I was partially mistaken in saying that meaning >> cannot be attached to a word by association with other words. > > I think you make an excellent observation here, Eric. The mere association of a symbol to another symbol does not give either symbol meaning. This is exactly what Eric did not say.
The whole paragraph, in case you missed it, was: Eric Messick : > Actually, I was partially mistaken in saying that meaning cannot be > attached to a word by association with other words. A definition > could associate a new word with a set of old words, and if all of the > old words have meanings (by being grounded or by association) the new > one can acquire meaning as well. According to Eric, association is the sole factor giving many symbols meaning in human minds. The only prerequisite is that at least one symbol in the web has meaning intrinsically; that is to say, it is a sense symbol. Meaning can effectively be shared between symbols, and is not diluted in the process. From lacertilian at gmail.com Wed Feb 3 00:16:42 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 2 Feb 2010 16:16:42 -0800 Subject: [ExI] meaning & symbols In-Reply-To: <816357.45313.qm@web36501.mail.mud.yahoo.com> References: <816357.45313.qm@web36501.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > In any case I do not believe there exists any such thing as a "sense symbol". > Organisms with highly developed nervous systems create and ponder mental abstractions, aka symbols, about sense data and about other abstractions. > Simple organisms on the order of, say, fleas have eyes and other sense organs, so it seems likely that they have awareness of sense data. But because they lack a well developed nervous system it seems very improbable to me that they can do much in the way of forming symbols to represent that data. In the second paragraph I almost jumped on you again for misusing the concept of abstraction, but then I noticed you said "and about other" rather than "and other". You weren't saying that sense data are abstractions, if I understand correctly. Nothing to disagree with there. When we get to the third paragraph, however, it sounds as if you believe that mankind discovered symbols rather than invented them. Here's the thing: the very idea of a symbol is, in and of itself, an abstraction. I suspect it's possible to form a coherent model of the mind (by today's standards) without ever mentioning symbols or anything like them. It may not be a particularly elegant model, but it would work as well as any other. So, it's really just a matter of convenience to talk about symbols instead of synapses. Fleas have synapses, if fewer than we do, so if we wanted to we could easily say that they form and use symbols (blood, not-blood) within their puny flea-minds. We wouldn't be wrong. You're correct in saying that sense symbols do not exist, but only insofar as there aren't any symbols which DO exist. Gordon Swobe : > I also do not believe any symbol of any kind can have "intrinsic meaning". Meaning always arises in the context of a conscious mind. X means Y only according to some conscious Z. You're right, of course. It was a poor choice of words. I was trying to convey Eric's theory, which I mostly agree with, in as lazy a manner as possible. All I meant by "intrinsic" meaning was that some symbols in the field of all available within a given Z are meaningful irrespective of any other symbols. Eric explains that this is so because they are invoked directly by incoming sensory data: I see a dog, I think a dog symbol. I have no control over whether or not this happens, except to avoid looking at dogs. It's impossible to perceive, or even conceive, a discrete object without simultaneously attaching a symbol to it. Or, if you prefer, grounding a symbol on it.
(Assuming that we're considering information processing in terms of symbols.) From lacertilian at gmail.com Wed Feb 3 03:44:54 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 2 Feb 2010 19:44:54 -0800 Subject: [ExI] meaning & symbols In-Reply-To: <230474.84371.qm@web36502.mail.mud.yahoo.com> References: <230474.84371.qm@web36502.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > No matter what you may choose to call them, people do speak and understand word-symbols. The human mind thus "has semantics" and any coherent model of it must explain that plain fact. I still can't see how the computationalist model fails here, but, more significantly, I can't see why you think it does. Maybe if I went back through the archives and read this whole discussion from the start, but even I don't have that much free time. Gordon Swobe : >Spencer Campbell : >> You're correct in saying that sense symbols do not exist, >> but only insofar as there aren't any symbols which DO exist. > Hmm, I count 21 word-symbols in that sentence of yours. And I count 23, because those apostrophes denote points where I've smashed two discrete words together to save space. I could also get 25 by treating "insofar" as the full three words it's composed of. Then again, it's really just an arbitrary convention that allows me to do this, so if I change the convention I could just as easily count "in saying that" (ins'ingt), "but only" ('tonly), and "insofar as there" (you get the idea) as single word-symbols as well. There's also an uncompressed "do not" in there, and the concept of sense symbols might catch on to such an extent that we start talking about sensesymbols instead! So I might just as well say that there are only 14 word symbols in that sentence. Then again, I only chose those particular words because it struck me that I almost always put them together in just those sequences. I could make any two words into one, if I don't care about efficiency. Therefore, I could squeeze the whole sentence into a single, magnificently specific word that I'll never, ever have a chance to use again. Conclusion: spaces are not (aren't) the be-all end-all demarcation method of choice, and this is why the word counter in my command-line shell comes up with a slightly different answer than the one built into Google Docs. By the first method this message weighs in at 368 words, whereas the second confidently gives me a figure of 371. From lacertilian at gmail.com Wed Feb 3 16:36:02 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 3 Feb 2010 08:36:02 -0800 Subject: [ExI] meaning & symbols In-Reply-To: <910459.5460.qm@web113613.mail.gq1.yahoo.com> References: <910459.5460.qm@web113613.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : > OK obviously this word 'symbol' needs some clear definition. > I would use the word to mean any distinct pattern of neural activity that has a relationship with other such patterns. > But that's just me. Maybe I'm overstretching the use of the word. > What do other people mean by the word 'symbol', in this context? About the same. It's a problematic definition in that *distinct* patterns of neural activity are hard to come by, but I can't do any better.
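Still, the association picture can be made concrete. A toy sketch in Python, on Eric's view as I've described it (the web, the symbols and the 'grounded' set are all invented for illustration):

# A web of symbols. Three are 'sense symbols', grounded directly in
# (stand-in) sensory channels; the rest can only inherit meaning
# through chains of association.
associations = {
    "furry":   set(),              # grounded: touch
    "bark":    set(),              # grounded: hearing
    "horn":    set(),              # grounded: sight
    "dog":     {"furry", "bark"},
    "pet":     {"dog"},
    "horse":   {"furry"},
    "unicorn": {"horse", "horn"},  # never grounded directly
    "jabberwock":   {"bandersnatch"},  # a closed loop, grounded in nothing
    "bandersnatch": {"jabberwock"},
}
grounded = {"furry", "bark", "horn"}

def meaningful(symbol, seen=None):
    """True if some chain of associations reaches a grounded symbol."""
    seen = set() if seen is None else seen
    if symbol in grounded:
        return True
    seen.add(symbol)
    return any(meaningful(s, seen)
               for s in associations.get(symbol, ())
               if s not in seen)

for s in associations:
    print(s, meaningful(s))

Note that 'unicorn' comes out meaningful purely by association, without ever being grounded itself, while the closed 'jabberwock' loop never does: exactly the behaviour the theory predicts.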
From lacertilian at gmail.com Wed Feb 3 16:44:16 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 3 Feb 2010 08:44:16 -0800 Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <969214.43756.qm@web36506.mail.mud.yahoo.com> Message-ID: John Clark : > Your retort is always I don't understand that or I do understand that so > "obviously" that can't be right. Even in theory I don't see how any > explanation would satisfy you. Right now I'm thinking the only way to do it is by forming an unbreakable line of similarity between Turing machines and human brains. Not an easy task for the same reason you hinted at: one is very simple and easy to understand, whereas the other is very complex and difficult to understand. Basically, it all depends on what Gordon thinks is the simplest conceivable object capable of intentionality. From lacertilian at gmail.com Thu Feb 4 00:30:40 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 3 Feb 2010 16:30:40 -0800 Subject: [ExI] meaning & symbols In-Reply-To: <642330.6805.qm@web36501.mail.mud.yahoo.com> References: <642330.6805.qm@web36501.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Many things aside from symbols can consistently and systematically change the state of the symbol reader. This isn't even remotely true of all symbol readers, real and imaginary. Maybe you're talking about the human mind specifically. But, if not: Turing machines are not real things, but they are symbol readers. If we imagine a Turing machine whose state can be changed, consistently and systematically, by anything aside from symbols, we are not really imagining a Turing machine anymore. Gordon Swobe : > This Wikipedia definition seems better: > "A symbol is something such as an object, picture, written word, sound, or particular mark that represents something else by association, resemblance, or convention." More generally accurate, yes, but in this context not nearly as useful as Ben Zaiboc's definition. Wikipedia's does not come anywhere near explaining how one symbol can dynamically give rise to a chain of other symbols, which, to my thinking, is the very essence of thought. My guess is that no one here believes meaning can exist outside of thought, or at least a thought-like process. The only question is how thought-like the process has to be. From gts_2000 at yahoo.com Thu Feb 4 01:34:32 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 3 Feb 2010 17:34:32 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <428505.81133.qm@web36503.mail.mud.yahoo.com> --- On Mon, 2/1/10, Stathis Papaioannou wrote: >> I reject as absurd for example your theory that a >> brain the size of Texas constructed of giant neurons made of >> beer cans and toilet paper will have consciousness merely by >> virtue of those beer cans squirting neurotransmitters >> betwixt themselves in the same patterns that natural neurons >> do. > That is a consequence of functionalism but at this point > functionalism is assumed to be wrong. ?? Can a conscious Texas-sized brain constructed out of giant neurons made of beer cans and toilet paper exist as a possible consequence of your brand of functionalism? Or not?
-gts From stathisp at gmail.com Thu Feb 4 03:03:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 4 Feb 2010 14:03:08 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <428505.81133.qm@web36503.mail.mud.yahoo.com> References: <428505.81133.qm@web36503.mail.mud.yahoo.com> Message-ID: On 4 February 2010 12:34, Gordon Swobe wrote: > --- On Mon, 2/1/10, Stathis Papaioannou wrote: > >>> I reject as absurd for example your theory that a >>> brain the size of Texas constructed of giant neurons made of >>> beer cans and toilet paper will have consciousness merely by >>> virtue of those beer cans squirting neurotransmitters >>> betwixt themselves in the same patterns that natural neurons >>> do. >> >> That is a consequence of functionalism but at this point >> functionalism is assumed to be wrong. > > ?? > > Can a conscious Texas-sized brain constructed out of giant neurons made of beer cans and toilet paper exist as a possible consequence of your brand of functionalism? Or not? It would have to be much, much larger than Texas if it was to be human equivalent and it probably wouldn't be physically possible due (among other problems) to loss of structural integrity over the vast distances involved. However, theoretically, there is no problem if such a system is Turing-complete and if the behaviour of the brain is computable. As for the "??": I have ASSUMED that functionalism is wrong, i.e. that it is possible to make a structure which behaves like a brain but lacks consciousness, to see where this leads. I have shown (with your help) that it leads to a contradiction, e.g. "the structure both does and does not behave exactly like a normal brain", which implies that the original assumption must be FALSE. It is like assuming that sqrt(2) is rational, and then showing that this leads to a contradiction, which implies that sqrt(2) is not rational. -- Stathis Papaioannou From avantguardian2020 at yahoo.com Thu Feb 4 06:33:39 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 3 Feb 2010 22:33:39 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <969214.43756.qm@web36506.mail.mud.yahoo.com> Message-ID: <317129.32350.qm@web65616.mail.ac4.yahoo.com> ----- Original Message ---- > From: Spencer Campbell > To: ExI chat list > Sent: Wed, February 3, 2010 8:44:16 AM > Subject: Re: [ExI] How not to make a thought experiment > > John Clark : > > Your retort is always I don't understand that or I do understand that so > > "obviously" that can't be right. Even in theory I don't see how any > > explanation would satisfy you. > > Right now I'm thinking the only way to do it is by forming an > unbreakable line of similarity between Turing machines and human > brains. Not an easy task for the same reason you hinted at: one is > very simple and easy to understand, whereas the other is very complex and > difficult to understand. > > Basically, it all depends on what Gordon thinks is the simplest > conceivable object capable of intentionality. If you equate intentionality with consciousness, one is left with the result that individual cells (of all types) are conscious. This is because cells demonstrate intentionality. It is one of the lesser-known hallmarks of life. Survival is intentional and anything that left survival strictly to chance would quickly be weeded out by natural selection. One can clearly see that in this video posted earlier by Spike.
http://www.youtube.com/watch?v=JnlULOjUhSQ The white blood cell is clearly *intent* on eating the bacterium. And the bacterium is clearly *intent* on evading the threat to its existence. Therefore a bacterium is the simplest conceivable object that I am confident is capable of intentionality. Although viruses, being far simpler, may possibly also display intentionality, if you interpret trying to hijack cells and evade the immune response as hallmarks of "intention". With regard to the ongoing discussion, I think it may be an important first step to try to program a computer to be unequivocally "alive", even on the level of a bacterium. It would be far simpler than trying to create a "brain" from scratch and would lend a great deal of support to the functional case. Not to mention it would disprove vitalism once and for all, which would be a feather in the cap of functionalism. Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From bbenzai at yahoo.com Thu Feb 4 09:35:52 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 4 Feb 2010 01:35:52 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <496675.14673.qm@web113614.mail.gq1.yahoo.com> Spencer Campbell wrote: > From my perspective, Gordon has been very consistent > when it comes to > what will and will not pass the Turing test. His arguments, > implicitly > or explicitly, state that the Turing test does not measure > consciousness. This is one point on which he and I agree. The Turing test was designed to answer the question "can machines think?". It doesn't measure consciousness directly (we don't know of anything that can), but it does measure something which can only be the product of consciousness: the ability of a system to convince a human that it is itself human. This is equivalent to convincing them that it is conscious. If this wasn't the case, people would have no real reason to believe that other people were conscious. For this reason, I'd say that anything which can convincingly pass the Turing test should be regarded as conscious. Obviously, you'd want to take this seriously, and not be satisfied with a five-minute conversation. It'd have to be over a period of time, involving many different domains of knowledge before you'd be fully convinced, but if and when you were convinced that you were actually talking to a human, you'd have to admit that either you think you were talking to a conscious being, or that you think other humans aren't conscious. Ben Zaiboc From jameschoate at austin.rr.com Thu Feb 4 10:25:00 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 4 Feb 2010 10:25:00 +0000 Subject: [ExI] The digital nature of brains In-Reply-To: <496675.14673.qm@web113614.mail.gq1.yahoo.com> Message-ID: <20100204102500.86W8S.511470.root@hrndva-web26-z02> No it does not. It is a test which asks if a human being can tell the difference through a remote communications channel between a machine and a human. It says absolutely nothing about intelligence, thinking, or anything like that with regard to machines. These sorts of claims demonstrate that the claimant has an inverted understanding of the issue. The Turing Test has one, and only one outcome...to measure the limits of human ability. ---- Ben Zaiboc wrote: > The Turing test was designed to answer the question "can machines think?".
-- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From stathisp at gmail.com Thu Feb 4 11:15:11 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 4 Feb 2010 22:15:11 +1100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: References: Message-ID: On 31 January 2010 14:07, Spencer Campbell wrote: > Stathis Papaioannou : >>Gordon Swobe : >>> A3: syntax is neither constitutive of nor sufficient for semantics. >>> >>> It's because of A3 that the man in the room cannot understand the symbols. I started the robot thread to discuss the addition of sense data on the mistaken belief that you had finally recognized the truth of that axiom. Do you recognize it now? >> >> No, I assert the very opposite: that meaning is nothing but the >> association of one input with another input. You posit that there is a >> magical extra step, which is completely useless and undetectable by >> any means. > > Crap! Now I'm doing it too. This whole discussion is just an absurdly > complex feedback loop, neither positive nor negative. It will never > get better and it will never end. Yet the subject matter is > interesting, and I am helpless to resist. > > First, yes, I agree with Stathis's assertion that association of one > input with another input, or with another output, or, generally, of > one datum with another datum, is the very definition of meaning. > Literally, "A means B". This is mathematically equivalent to, "A > equals B". Smoke equals fire, therefore, if smoke is true or fire is > true then both are true. This is very bad reasoning, and very human. > Nevertheless, we can say that there is a semantic association between > smoke and fire. > > Of course the definitions of semantics and syntax seem to have become > deranged somewhere along the lines, so someone with a different > interpretation of their meaning than I have may very well leap at the > chance to rub my face in it here. This is a risk I am willing to take. > > So! > > To see a computer's idea of semantics one might look at file formats. > An image can be represented in BMP or PNG format, but in either case > it is the same image; both files have the same meaning, though the > manner in which that meaning is represented differs radically, just as > 10/12 differs from 5 * 6^-1. > > Another source might be desktop shortcuts. You double-click the icon > for the terrible browser of your choice, and your computer takes this > to mean instead that you are double-clicking an EXE file in a > completely different place. Note that I could very naturally insert > the word "mean" there, implying a semantic association. > > Neither of these are nearly so human a use of semantics, because the > relationship in each case is literal, not causal. However, it is still > semantics: an association between two pieces of information. > > Gordon has no beef with a machine that produces intelligent behavior > through semantic processes, only with one that produces the same > behavior through syntax alone. > > At this point, though, his argument becomes rather hazy to me. How can > anything even resembling human intelligence be produced without > semantic association? 
> > A common feature in Searle's thought experiments, and in Gordon's by > extension, is that there is a very poor description of the exact > process by which a conversational computer determines how to respond > to any given statement. This is necessary to some extent, because if > anyone could give a precise description of the program that passes the > Turing test, well, they could just write it. > > In any case, there's just no excuse to describe that program with > rules like: if I hear "What is a pig?" then I will say "A farm > animal". Sure, some people give that response to that question some of > the time. But if you ask it twice in a row to the same person, you > will get dramatically different answers each time. It's a gross > oversimplification, but I'm forced to admit that it is technically > valid if one views it only as what will happen, from a very high-level > perspective, if "What is a pig?" is the very next thing the Chinese > Room is asked. A whole new lineup of rules like that would have to be generated after each response. Not a very practical solution. > Effective, but not efficient. > > However, it seems to me that even if we had the brute processing power > to implement a system like that while keeping it realistically > quick-witted, it would still be impossible to generate that rule > without the program containing at least one semantic fact, namely, > "pig = farm animal". > > The only part syntactical rules play in this scenario is to insert the > word "a" at the beginning of the sentence. Syntax is concerned only > with grammatical correctness. Using syntax alone, one might imagine > that the answer would be "a noun": the place at which "pig" occurs in > the sentence implies that the word must be a noun, and this is as > close as a syntactical rule can come to showing similarity between two > symbols. If the grammar in question doesn't explicitly provide > categories for symbols, as in English, then not even this can be done, > and a meaningful syntax-based response is completely impossible. > > I started on this message to point out that Stathis had completely > missed the point of A3, but sure enough I ended up picking on Searle > (and Gordon) as well. > > In the end, I would like to make the claim: syntax implies semantics, > and semantics implies syntax. One cannot find either in isolation, > except in the realm of one's imagination. Like so many other divisions > imposed between natural (that is, non-imaginary) phenomena, this one > is valid but false. I'm not completely sure what you're saying in this post, but at some point the string of symbol associations (A means B, B means C, C means D...) is grounded in sensory input. Searle would say that there needs to be an extra step whereby the symbol so grounded gains "meaning", but this extra step is not only completely mysterious, it is also completely superfluous, since every observable fact about the world would be the same without it. It's like claiming that a subset of humans have an extra dimension of meaning, meaning*, which is mysterious and undetectable, but assuredly there, making their lives richer. -- Stathis Papaioannou From stefano.vaj at gmail.com Thu Feb 4 11:59:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 4 Feb 2010 12:59:50 +0100 Subject: [ExI] Personal conclusions Message-ID: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> On 3 February 2010 23:05, Gordon Swobe wrote: > I have no interest in magic.
I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents (semantics). Yes, this is clear by now. The bunch of threads of which Gordon Swobe is the star, which I have admittedly followed on and off, also because of their largely repetitive nature, have been interesting, albeit disquieting, for me. Not really because of hearing him reiterate innumerable times that for whatever reason he thinks that (organic? human?) brains, while obviously sharing universal computation abilities with cellular automata and PCs, would on the other hand somewhat escape the Principle of Computational Equivalence. But because so many of the people who have engaged in the discussion of the point above, while they may not believe any more in a religious concept of "soul", seem to accept without a second thought that some very poorly defined Aristotelian essences would per se exist corresponding to the symbols "mind", "consciousness", "intelligence", and that their existence in the sense above would even be an a priori, not really open to analysis or discussion. Now, if this is the case, I sincerely have trouble finding a reason why we should not accept, on an equal basis, the article of faith that Gordon Swobe proposes as to the impossibility for a computer to exhibit the same. Otherwise, we should perhaps reconsider a little: not really the AI research programmes in place, but rather, say, the Vienna Circle, Popper or Dennett. -- Stefano Vaj From stathisp at gmail.com Thu Feb 4 12:07:17 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 4 Feb 2010 23:07:17 +1100 Subject: [ExI] Personal conclusions In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: On 4 February 2010 22:59, Stefano Vaj wrote: > But because so many of the people who have engaged in the discussion of > the point above, while they may not believe any more in a religious > concept of "soul", seem to accept without a second thought that some > very poorly defined Aristotelian essences would per se exist > corresponding to the symbols "mind", "consciousness", "intelligence", > and that their existence in the sense above would even be an a priori, > not really open to analysis or discussion. Probably you and I believe the same things about "mind", "consciousness" etc., but we use different words. -- Stathis Papaioannou From stefano.vaj at gmail.com Thu Feb 4 12:16:45 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 4 Feb 2010 13:16:45 +0100 Subject: [ExI] The digital nature of brains In-Reply-To: <496675.14673.qm@web113614.mail.gq1.yahoo.com> References: <496675.14673.qm@web113614.mail.gq1.yahoo.com> Message-ID: <580930c21002040416o54ebd748rceedf0dabe607034@mail.gmail.com> On 4 February 2010 10:35, Ben Zaiboc wrote: > For this reason, I'd say that anything which can convincingly pass the Turing test should be regarded as conscious. In fact, I suspect that anything that can convincingly pass the Turing test is simply conscious *by definition*, because it is the test that we routinely apply to check whether the system we are in touch with is conscious or not (say, when trying to decide whether some human being is asleep or dead). The simple question is: should something, in addition to being able to perform as well as the average adult, alert human being in a Turing test, have blue eyes, flesh limbs, a hairy head or a liver to qualify as "conscious"?
If we try to analyse any "intuition" we may have in this sense, any such intuition evaporates quickly enough. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Thu Feb 4 12:24:24 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 4 Feb 2010 13:24:24 +0100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: References: Message-ID: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> On 4 February 2010 12:15, Stathis Papaioannou wrote: > I'm not completely sure what you're saying in this post, but at some > point the string of symbol associations (A means B, B means C, C means > D...) is grounded in sensory input. Defined as? > Searle would say that there needs > to be an extra step whereby the symbol so grounded gains "meaning", > but this extra step is not only completely mysterious, it is also > completely superfluous, since every observable fact about the world > would be the same without it. > Which sounds pretty equivalent to saying that it does not exist, if one accepts that one's "world" is simply the set of all observable phenomena, and that a claim pertaining to the existence of something is meaningful only if it can be disproved.
I've been pondering this issue, and it's possible that there's a way around the problem of confirming that consciousness can run on artificial neurons without actually removing existing natural neurons, and condemning the subject to death if it turns out to be untrue. I'm thinking of an 'mind extension' scenario, where you attach these artificial neurons (or their software equivalent) to an existing brain using neural interfaces, in a configuration that does something useful, like giving an extra sense or an expanded or secondary short-term memory (of course all this assumes good neural interface technology, working artificial neurons and a better understanding of mental architecture than we have just now). Let the user settle in with the new part of their brain for a while, then they should be able to tell if they 'inhabit' it or if it's just like driving a car: it's something 'out there' that they are operating. If they feel that their consciousness now partly resides in the new brain area, it should be possible to duplicate all the vital brain modules and selectively anaesthetise their biological counterparts without any change in subjective experience. If the person says "Hang on, I blanked out there" for the period of time the artificial brain parts were operating on their own, we would know that they don't support conscious experience, and the person could say 'no thanks' to uploading, with their original brain intact. The overall idea is to build extra room for the mind to expand into, and see if it really has or not. If the new, artificial parts actually don't support consciousness, you'd soon notice. If they do, you could augment your brain to the point where the original was just a tiny part, and you wouldn't even miss it when it eventually dies off. Ben Zaiboc From bbenzai at yahoo.com Thu Feb 4 13:13:12 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 4 Feb 2010 05:13:12 -0800 (PST) Subject: [ExI] multiple realizability In-Reply-To: Message-ID: <764802.35206.qm@web113618.mail.gq1.yahoo.com> I suspect that it's ignorance of the importance of levels of abstraction that can lead to ideas like "minds can come from neural networks, but not from digital programs". All you need to see is that a digital program can implement a neural network at a higher level of abstraction to demolish this idea. That's an over-simplification of course, because the digital program/s would more likely implement a set of software objects that interact to implement individual neural nets that interact to implement sets of information processing mechanisms that interact to create a mind. That's 5 levels of abstraction in my probably over-simplistic concept of the process. There may well be several more in a realistic implementation. Ben Zaiboc From gts_2000 at yahoo.com Thu Feb 4 13:47:00 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 05:47:00 -0800 (PST) Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> Message-ID: <595536.39512.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/4/10, Stefano Vaj wrote: Stathis wrote: >> Searle would say that there >> needs to be an extra step whereby the symbol so grounded gains >> "meaning", but this extra step is not only completely mysterious, it >> is also completely superfluous, since every observable fact about >> the world would be the same without it. 
No, he would remind you of the obvious truth that there exist facts in the world that have subjective first-person ontologies. We can know those facts only in the first person, but they have no less reality than those objective third-person facts that as you say "would be the same without it".

Real subjective first-person facts of the world include one's own conscious understanding of words.

Stefano wrote:
> Which sounds pretty equivalent to saying that it does not
> exist,

I think you want to deny the reality of the subjective. I don't know why.

-gts

From alfio.puglisi at gmail.com Thu Feb 4 13:56:39 2010
From: alfio.puglisi at gmail.com (Alfio Puglisi)
Date: Thu, 4 Feb 2010 14:56:39 +0100
Subject: [ExI] New NASA plans
Message-ID: <4902d9991002040556x5a5407c1r7a8e0bfee32f401a@mail.gmail.com>

Does anyone know if this article from the Economist about Obama's plans for NASA: http://www.economist.com/sciencetechnology/displayStory.cfm?story_id=15449787&source=features_box_main is at all accurate? The overall tone is more positive than I expected... in particular, the elimination of "cost-plus" contracts seems a big step in cleaning things up. And, well, I'm a huge fan of SpaceX :-)

Alfio

From gts_2000 at yahoo.com Thu Feb 4 13:32:16 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 05:32:16 -0800 (PST)
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: Message-ID: <854080.27226.qm@web36504.mail.mud.yahoo.com>

--- On Wed, 2/3/10, Stathis Papaioannou wrote:

>> Can a conscious Texas-sized brain constructed out of
>> giant neurons made of beer cans and toilet paper exist as a
>> possible consequence of your brand of functionalism? Or
>> not?
>
> It would have to be much, much larger than Texas if it was
> to be human equivalent and it probably wouldn't be physically possible due
> (among other problems) to loss of structural integrity over the
> vast distances involved. However, theoretically, there is no
> problem if such a system is Turing-complete and if the behaviour of
> the brain is computable.

Okay, I take that as a confirmation of your earlier assertion that brains made of beer cans and toilet paper can have consciousness provided those beer cans squirt the correct neurotransmitters between themselves at the correct times. This suggests to me that your ideology has a firmer grip on your thinking than does your sense of the absurd, and that no reductio ad absurdum argument will find traction with you.

I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. :-)

-gts

From bbenzai at yahoo.com Thu Feb 4 14:26:44 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 4 Feb 2010 06:26:44 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To: Message-ID: <438813.76903.qm@web113617.mail.gq1.yahoo.com>

jameschoate at austin.rr.com claimed:

> ---- Ben Zaiboc
> wrote:
>
> > The Turing test was designed to answer the question
> "can machines think?".
> No it does not. It is a test which asks if a human being can
> tell the difference through a remote communications channel
> between a machine and a human.
>
> It says absolutely nothing about intelligence, thinking, or
> anything like that with regard to machines. These sorts of
> claims demonstrate that the claimant has an inverted
> understanding of the issue. The Turing Test has one, and
> only one outcome...to measure the limits of human ability.
Well, we're talking about different things. I said "it was designed to..", and you replied "no it does not". Both of these can be true.

The test was intended to test the abilities of a machine to convince a human, not to test the abilities of the human. Of course that may well be one of its side effects! (apparently a disturbingly high proportion of people - mostly teenagers I think - are convinced by some chatbots)

Ben Zaiboc

From stefano.vaj at gmail.com Thu Feb 4 14:30:37 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 4 Feb 2010 15:30:37 +0100
Subject: [ExI] Semiotics and Computability (was: The digital nature of brains)
In-Reply-To: References: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> Message-ID: <580930c21002040630k1d8931dapd63f2ef62ff51491@mail.gmail.com>

On 4 February 2010 14:03, Stathis Papaioannou wrote:
> 2010/2/4 Stefano Vaj :
> > On 4 February 2010 12:15, Stathis Papaioannou
> wrote:
> >>
> >> I'm not completely sure what you're saying in this post, but at some
> >> point the string of symbol associations (A means B, B means C, C means
> >> D...) is grounded in sensory input.
> >
> > Defined as?
>
> Input from the environment. "Chien" is "hund", "hund" is "dog", and
> "dog" is the furry creature with four legs and a tail, as learned by
> English speakers as young children.

Mmhhh. "Dog" is a sound perceived with one's ears, subvocalised or represented by the appropriate characters in a given typeface; Pluto may be a drawing or icon of such an animal; the bits by which he is rasterised are another symbol thereof; the pixels of the image of an actual dog on somebody's retina are another symbol thereof. Symbols all the way down, all of them "sensorial" after a fashion, for us exactly as for any other system.

OTOH, inputs and interfaces are of course crucial to the definition of a given system. Mr. Jones is different from young baby Brown who is different from a bat who is different from a PC with a SCSI scanner which is different from an I-Phone...

-- Stefano Vaj

From bbenzai at yahoo.com Thu Feb 4 14:35:27 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 4 Feb 2010 06:35:27 -0800 (PST)
Subject: [ExI] The simplest possible conscious system
In-Reply-To: Message-ID: <820642.67186.qm@web113613.mail.gq1.yahoo.com>

The simplest possible conscious system

Spencer Campbell asked:
> I'd be interested to hear from each of you a description of what would
> constitute the simplest possible conscious system, and whether or not
> such a system would necessarily also have intelligence or
> understanding.

Hm, interesting challenge.

I'd probably define Intelligence as problem-solving ability, and Understanding as the association of new 'concept-symbols' with established ones.

I'd take "Conscious" to mean "Self-Conscious" or "Self-Aware", which almost certainly involves a mental model of one's self, as well as an awareness of the environment, and one's place in it.

I'd guess that the simplest possible conscious system would have an embodiment ('real' or virtual) within an environment, sensors, actuators, the ability to build internal representations of both its environment and itself, and by implication some kind of state memory (a toy sketch of what I mean follows below). Hm, maybe we already have conscious robots, and don't realise it!

This is just a stab in the dark, though; I may be way off.
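Something like this Python toy, maybe (every name in it is my own invention, and it isn't remotely conscious; it just labels the moving parts: sensors, actuators, state memory, and models of world and self):

# A deliberately minimal "embodied agent": it senses, remembers,
# models its environment, and keeps a crude model of itself.
class MinimalAgent:
    def __init__(self):
        self.world_model = {}   # internal representation of the environment
        self.self_model = {}    # internal representation of the agent itself
        self.memory = []        # state memory

    def sense(self, percept):
        # sensor input updates both the world model and the memory
        self.world_model.update(percept)
        self.memory.append(percept)

    def reflect(self):
        # the self-model tracks the agent's own state, not the world's
        self.self_model["memories"] = len(self.memory)
        self.self_model["knows_world"] = bool(self.world_model)

    def act(self):
        # actuator: a trivial policy driven by the self-model
        return "explore" if not self.self_model.get("knows_world") else "rest"

agent = MinimalAgent()
agent.reflect()
print(agent.act())               # "explore": it knows that it knows nothing
agent.sense({"light": "bright"})
agent.reflect()
print(agent.act())               # "rest"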
As for possessing intelligence and understanding, the simplest possible conscious system almost certainly wouldn't have much of either, although by my definitions above, there would have to be *some* of both. Just not very much (it would need Intelligence only if it's going to try to survive). Ben Zaiboc From gts_2000 at yahoo.com Thu Feb 4 15:00:22 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 07:00:22 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: <20100204102500.86W8S.511470.root@hrndva-web26-z02> Message-ID: <325635.76699.qm@web36508.mail.mud.yahoo.com> Spencer: > How would one determine, in practice, whether or not any > given information processor is a digital computer? I would start by looking for the presence of real physical digital electronic hardware and syntactic instructions that run on it. In some cases you will find those instructions in the software. In other cases you will find them coded into the hardware or firmware. Another way to answer your question: If you find yourself wanting to consult a philosopher about whether a given entity might in some sense exist at some level of description as a digital computer then most likely it's not really a digital computer. :) >> Is it accurate to say that two digital computers, >> networked together, may themselves constitute a larger digital computer? Sure. >> Is the Internet a digital computer? Or, equivalently, >> depending on your definition of the Internet: is the Internet a >> piece of software running on a digital computer? I see the internet as a network of computers that run software. You could consider it one large computer if you like. >> Finally, would you say that an artificial neural >> network is a digital computer? Software implementations of artificial neural networks certainly fall under the general category of digital computer, yes. However in my view no software of any kind can cause subjective experience to arise in the software or hardware. I consider it logically impossible that syntactical operations on symbols, whether they be 1's and 0's or Shakespeare's sonnets, can cause the system implementing those operations to have subjective mental contents. The upshot is that 1) strong AI on digital computers is false, and 2) the human brain does something besides run programs, assuming it runs programs at all. -gts From jonkc at bellsouth.net Thu Feb 4 15:47:02 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 4 Feb 2010 10:47:02 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <317129.32350.qm@web65616.mail.ac4.yahoo.com> References: <969214.43756.qm@web36506.mail.mud.yahoo.com> <317129.32350.qm@web65616.mail.ac4.yahoo.com> Message-ID: <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net> On Feb 4, 2010, The Avantguardian wrote: > a bacterium is the simplest concievable object that I am confident is capable of intentionality. Stripped to its essentials intentionality means someone or something that can change its internal state, a state that predisposes it to do one thing rather than another; or at least that's what I mean by the word. I like it because it lacks circularity. So I would say that a punch card reader is simpler than a bacterium and it has intentionality. A Turing Machine is even simpler and it has intentionality too. 
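To make that concrete, here is a minimal Turing machine sketched in Python (the one-rule state table and the tape encoding are invented purely for illustration):

# A Turing machine stripped to its essentials: a tape, a head,
# and a state table that changes the machine's internal state.
def run(rules, tape, state="A", head=0, steps=10):
    cells = dict(enumerate(tape))    # sparse tape, blank = 0
    for _ in range(steps):
        symbol = cells.get(head, 0)
        write, move, state = rules[(state, symbol)]
        cells[head] = write          # the "intentional" state change
        head += move
        if state == "HALT":
            break
    return cells

# One rule: in state A over a blank, write a 1 and move right.
rules = {("A", 0): (1, +1, "A"), ("A", 1): (1, +1, "HALT")}
print(run(rules, [0, 0, 0]))

Change the state table and you change what the machine is predisposed to do next; nothing else in the machinery needs to know why.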
Granted this underlying mechanism may seem a bit mundane and inglorious, but that's in the very nature of explanations; presenting complex and mysterious things in the smallest possible chunks in a way that is easily understood.

Gordon would disagree with me because for him intentionality means having consciousness, and having consciousness means having intentionality. A circle has no end so that may be why his thread has been going on for so long with no end in sight.

John K Clark

From jameschoate at austin.rr.com Thu Feb 4 16:43:16 2010
From: jameschoate at austin.rr.com (jameschoate at austin.rr.com)
Date: Thu, 4 Feb 2010 16:43:16 +0000
Subject: [ExI] The digital nature of brains
In-Reply-To: <438813.76903.qm@web113617.mail.gq1.yahoo.com> Message-ID: <20100204164316.1CI9P.454213.root@hrndva-web06-z02>

This is a perfect example of my 'understanding inversion' claim...

First, we're not talking about different things. The Turing Test was suggested, not 'designed', as it's not an algorithm or mechanism. At best it's a heuristic. If you read Turing's papers and the period documentation, the fundamental question is 'can the person tell the difference?'. If the answer is 'no', the -pre-assumptive claim- is that some level of 'intelligence' has been reached in AI technology. Exactly what that level is, is never defined specifically by the original authors. The second and follow-on generations of AI researchers have interpreted it to mean that AI has intelligence in the human sense. I would suggest, strongly, that this is a cultural 'taboo' that differentiates main stream from perceived cranks.

The way you flip the meaning of 'can the person tell the difference' to 'machine to convince' is specious and moot. The important point is the human not being able to tell the difference. You say it is not meant to test the ability of humans, but it is the humans who -must be convinced-.

I would say you're trying to massage the test to fit a preconceived cultural desire and not a real technical benchmark. It's about validating human emotion and not mechanical performance.

---- Ben Zaiboc wrote:
> Well, we're talking about different things. I said "it was designed to..", and you replied "no it does not". Both of these can be true.
>
> The test was intended to test the abilities of a machine to convince a human, not to test the abilities of the human. Of course that may well be one of its side effects! (apparently a disturbingly high proportion of people - mostly teenagers I think - are convinced by some chatbots)

-- -- -- -- --
Venimus, Vidimus, Dolavimus

jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center

Adapt, Adopt, Improvise
-- -- -- --

From jameschoate at austin.rr.com Thu Feb 4 16:48:30 2010
From: jameschoate at austin.rr.com (jameschoate at austin.rr.com)
Date: Thu, 4 Feb 2010 16:48:30 +0000
Subject: [ExI] The simplest possible conscious system
In-Reply-To: <820642.67186.qm@web113613.mail.gq1.yahoo.com> Message-ID: <20100204164830.VFDFM.454315.root@hrndva-web06-z02>

I would agree, however there are a couple of issues that must be addressed before it becomes meaningful.

First, what is 'conscious'? That definition must not use human brains as an axiomatic measure.
Otherwise we're arguing in circles and making an axiomatic assumption that humans are somehow fundamentally gifted with a singular behavior. This destroys our test on several levels. The point being that the theoretic structure must demonstrate that human thought is conscious and not be assumptive on that point. We can't use an a priori assumption that we are conscious; that we think we are does not make it so.

---- Ben Zaiboc wrote:
> The simplest possible conscious system
>
> Spencer Campbell asked:
>
> > I'd be interested to hear from each of you a description of what would
> > constitute the simplest possible conscious system, and whether or not
> > such a system would necessarily also have intelligence or
> > understanding.
>
> Hm, interesting challenge.

-- -- -- -- --
Venimus, Vidimus, Dolavimus

jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center

Adapt, Adopt, Improvise
-- -- -- --

From jonkc at bellsouth.net Thu Feb 4 16:51:20 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 4 Feb 2010 11:51:20 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <642330.6805.qm@web36501.mail.mud.yahoo.com> References: <642330.6805.qm@web36501.mail.mud.yahoo.com> Message-ID:

On Feb 3, 2010 Gordon Swobe wrote:

> Many things aside from symbols can consistently and systematically change the state of the symbol reader.

Like what? And if it consistently and systematically changes the state of the symbol reader exactly what additional quality do these "many things" have that disqualifies them as being symbols?

> This wikipedia definition seems better:
> "A symbol is something such as an object, picture, written word, sound, or particular mark that represents something else by association

Rather like a hole in a particular place on a punch card, and its association to a particular column in a punch card reader.

> I take that as a confirmation of your earlier assertion that brains made of beer cans and toilet paper can have consciousness provided those beer cans squirt the correct neurotransmitters between themselves at the correct times.

ABSOLUTELY!

> This suggests to me that your ideology has a firmer grip on your thinking than does your sense of the absurd, and that no reductio ad absurdum argument will find traction with you.

Before you use a reductio ad absurdum argument you must be certain it's logically contradictory, just being odd is not good enough.

> I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them.

It seems odd to us for beer cans and toilet paper to be conscious but in a beer can world it would seem equally odd for 3 pounds of grey goo to be conscious. Neither is logically contradictory.

> I have no interest in magic.

I'm sure you tell yourself that and I'm sure you believe it, but I don't believe it. Grey goo has magic but beer can computers and toilet paper don't, despite all the talk of semantics and syntax that is the heart of your argument.

> I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents

And how did you learn of this very interesting fact? You certainly didn't prove it mathematically or find it in the fossil record, you must have learned of it magically. A magic stronger than Darwin.
John K Clark

> -gts

From kanzure at gmail.com Thu Feb 4 17:50:01 2010
From: kanzure at gmail.com (Bryan Bishop)
Date: Thu, 4 Feb 2010 11:50:01 -0600
Subject: [ExI] Blue Brain Project film preview
Message-ID: <55ad6af71002040950o43a4825xa44e985b385c002c@mail.gmail.com>

Noah Hutton is making a documentary: http://thebeautifulbrain.com/2010/02/bluebrain-film-preview/

"We are very proud to present the world premiere of BLUEBRAIN - Year One, a documentary short which previews director Noah Hutton's 10-year film-in-the-making that will chronicle the progress of The Blue Brain Project, Henry Markram's attempt to reverse-engineer a human brain. Enjoy the piece and let us know what you think."

There's a longer video that explains what he's up to.

The Emergence of Intelligence in the Neocortical Microcircuit http://video.google.com/videoplay?docid=-2874207418572601262&ei=lghrS6GmG4jCqQLA1Yz7DA

- Bryan http://heybryan.org/ 1 512 203 0507

From lacertilian at gmail.com Thu Feb 4 18:12:27 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 10:12:27 -0800
Subject: [ExI] Personal conclusions
In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID:

Stefano Vaj :
> Not really to hear him reiterate innumerable times that for whatever
> reason he thinks that (organic? human?) brains, while obviously
> sharing universal computation abilities with cellular automata and
> PCs, would on the other hand somewhat escape the Principle of
> Computational Equivalence.

Yeah... yeah. He doesn't seem like the type to take Stephen Wolfram seriously. I'm working on it. Fruitlessly, maybe, but I'm working on it. Getting some practice in rhetoric, at least.

Stefano Vaj :
> ... very poorly defined Aristotelic essences would per se exist
> corresponding to the symbols "mind", "consciousness", "intelligence" ...

Actually, I gave a fairly rigorous definition for intelligence in an earlier message. I've refined it since then:

The intelligence of a given system is inversely proportional to the average action (time * work) which must be expended before the system achieves a given purpose, assuming that it began in a state as far away as possible from that purpose. (A toy numerical sketch follows below.)

(As I said before, this definition won't work unless you assume an arbitrary purpose for the system in question. Purposes are roughly equivalent to attractors here, but the system may itself be part of a larger system, like us. Humans are tricky: the easiest solution is to say they swap purposes many times a day, which means their measured intelligence would change depending on what they're currently doing. Which is consistent with observed reality.)

I can't give similarly precise definitions for "mind" or consciousness, and I wouldn't be able to describe the latter at all. Tentatively, I think consciousness is devoid of measurable qualities. This would make it impossible to prove its existence, which to my mind is a pretty solid argument for its nonexistence. Nevertheless, we talk about it all the time, throughout history and in every culture. So even if it doesn't exist, it seems reasonable to assume that it is at least meaningful to think about.
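Here's the toy numerical sketch (the trial protocol and every name in it are mine, purely illustrative):

# "Action" in the sense above: time multiplied by work.
def action(elapsed_time, work_done):
    return elapsed_time * work_done

# trials: (time, work) pairs, each measured from a worst-case
# starting state to achievement of the assumed purpose.
def intelligence(trials):
    mean_action = sum(action(t, w) for t, w in trials) / len(trials)
    return 1.0 / mean_action

# A system that reaches its purpose faster and cheaper scores higher:
print(intelligence([(2.0, 5.0), (4.0, 5.0)]))  # 1/15, about 0.067
print(intelligence([(1.0, 2.0), (1.0, 4.0)]))  # 1/3, about 0.333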
Stefano Vaj :
> Now, if this is the case, I sincerely have trouble finding a
> reason why we should not accept, on an equal basis, the article of
> faith that Gordon Swobe proposes as to the impossibility for a
> computer to exhibit the same.

Your argument runs like this: We have assumed at least one truth a priori. Therefore, we should assume all truths a priori.

No, sorry. Doesn't work that way.

All logic is, at base, fundamentally illogical. You begin by assuming something for no logical reason whatsoever, and attempt to redeem yourself from there. That doesn't mean reasoning is futile. There's a big difference between a logical assumption (which doesn't exist) and a rational assumption (which does).

Accepting at face value that we have minds, intelligence, and consciousness, is perfectly rational. Accepting at face value that computers can not, is not. I can't say exactly why you should believe either of these statements, of course. They aren't in the least bit logical. Make of them what you will.

I have to go eat breakfast.

From jonkc at bellsouth.net Thu Feb 4 18:32:10 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 4 Feb 2010 13:32:10 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <595536.39512.qm@web36508.mail.mud.yahoo.com> References: <595536.39512.qm@web36508.mail.mud.yahoo.com> Message-ID: <4529BD17-B295-4C3A-B869-2C19DDC12F88@bellsouth.net>

On Feb 4, 2010, Gordon Swobe wrote:

> he [Searle] would remind you of the obvious truth that there exist facts in the world that have subjective first-person ontologies. We can know those facts[...]

What's with this "we" business? You can know certain subjective facts about the universe from direct experience, and that outranks everything else, even logic. But you have no reason to think that I or anybody else or a computer could do that too; and yet you do think so, at least for the first two: you think other people are conscious when they are able to act intelligently. You do this because, like me, you couldn't function if you thought you were the only conscious being in the universe. But every one of the arguments you have used against the existence of consciousness in computers could just as easily be used to argue against the existence of consciousness in your fellow human beings, and you have never done so. You could also use your arguments to try to show that even you are not conscious, but as I say direct experience outranks everything else; and you have no reason to believe that other people who act intelligently are fundamentally different from anything else that acts intelligently.

John K Clark

> only in the first person, but they have no less reality than those objective third-person facts that as you say "would be the same without it".
>
> Real subjective first-person facts of the world include one's own conscious understanding of words.
>
> Stefano wrote:
>> Which sounds pretty equivalent to saying that it does not
>> exist,
>
> I think you want to deny the reality of the subjective. I don't know why.
>
> -gts
From avantguardian2020 at yahoo.com Thu Feb 4 18:24:10 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Thu, 4 Feb 2010 10:24:10 -0800 (PST)
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net> References: <969214.43756.qm@web36506.mail.mud.yahoo.com> <317129.32350.qm@web65616.mail.ac4.yahoo.com> <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net> Message-ID: <565163.72063.qm@web65609.mail.ac4.yahoo.com>

From: John Clark
>To: ExI chat list
>Sent: Thu, February 4, 2010 7:47:02 AM
>Subject: Re: [ExI] How not to make a thought experiment
>
>On Feb 4, 2010, The Avantguardian wrote:
>
>a bacterium is the simplest conceivable object that I am confident is capable of intentionality.
>
>Stripped to its essentials intentionality means someone or something that can change its internal state, a state that predisposes it to do one thing rather
>than another; or at least that's what I mean by the word. I like it because it lacks circularity.

While I understand your dislike of circularity, the definition you give is far too broad. Almost everything has an internal state that can be changed. The discovery of this and the mathematics behind it made Ludwig Boltzmann famous. A rock has a temperature which is an "internal state". If the temperature of the rock is higher than that of its surroundings, its internal state predisposes the rock to cool down. FWIW evolution by natural selection is based on a circular argument as well. Species evolve by the differential survival and reproduction of the fittest members of the species. What is fitness? Those adaptations that allow members of a species to survive and reproduce.

>So I would say that a punch card reader is simpler than a bacterium and it has intentionality. A Turing Machine is even simpler and it has intentionality
>too.

While I will not discount the possibility that in the future a sufficiently complex program running on a computer may exhibit life or consciousness, that program does not currently exist. Currently the "intentionality" of existing software is completely explicit and vicarious. That is to say that all software currently in existence exhibits only the intentionality of the programmer and not any native or implicit intentionality of its own. By the same token, a mouse trap exhibits explicit intentionality as well, but lacks implicit intentionality. That is, we would say the mouse trap is *intended* to catch a mouse but we would not say the mouse trap is *intent* on catching a mouse. Now some people may think that is true of bacteria as well, but we laugh at intelligent design, don't we?

>Granted this underlying mechanism may seem a bit mundane and inglorious, but that's in the very nature of explanations; presenting complex and
>mysterious things in the smallest possible chunks in a way that is easily understood.

The way of reductionism is fraught with the peril of oversimplification. You can reduce an automobile to quarks but that doesn't give you any insight as to how an automobile works.

>Gordon would disagree with me because for him intentionality means having consciousness, and having consciousness means having intentionality.

Then Gordon must accept that a bacterium is conscious. I, however, would say that implicit intentionality is necessary for consciousness but not sufficient.

>A circle has no end so that may be why his thread has been going on for so long with no end in sight.
One can extrapolate insufficient data into any conclusion one likes. Two given points can lie on a straight line or on a drawing of a unicorn. Neither of these is likely the truth. Which is why I prefer empirical science to philosophy. I think experimentation is the only hope of settling this argument.

Stuart LaForge

"Never express yourself more clearly than you think." - Niels Bohr

From aware at awareresearch.com Thu Feb 4 18:30:39 2010
From: aware at awareresearch.com (Aware)
Date: Thu, 4 Feb 2010 10:30:39 -0800
Subject: [ExI] Personal conclusions
In-Reply-To: References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID:

On Thu, Feb 4, 2010 at 10:20 AM, Aware wrote:
> It's simply and necessarily how any system refers to references to
> itself. Yes, it's recursive, and therefore unfamiliar and unsupported
> by a language and culture that evolved to deal with relatively shallow
> context and linear relationships of cause and effect. Meaning is not
> as perceived by the observer, but in the response of the observer,
> determined by its nature within a particular context.

I left out a key word, sorry. Should have been:

> It's simply and necessarily how any system refers to references to
> itself. Yes, it's recursive, and therefore unfamiliar and unsupported
> by a language and culture that evolved to deal with relatively shallow
> context and linear relationships of cause and effect. Meaning is not
> as perceived by the observer, but in the observed response of the observer,
> determined by its nature within a particular context.

- Jef

From rpwl at lightlink.com Thu Feb 4 18:18:17 2010
From: rpwl at lightlink.com (Richard Loosemore)
Date: Thu, 04 Feb 2010 13:18:17 -0500
Subject: [ExI] ANNOUNCE: New "Artificial General Intelligence" discussion list on Google Groups
In-Reply-To: <55ad6af71002040950o43a4825xa44e985b385c002c@mail.gmail.com> References: <55ad6af71002040950o43a4825xa44e985b385c002c@mail.gmail.com> Message-ID: <4B6B0F69.8090709@lightlink.com>

In response to the imminent closure of the AGI discussion list, I just set this up as an alternative:

http://groups.google.com/group/artificial-general-intelligence

Full name of the group is "Artificial General Intelligence" but the short name is AGI-group. (Note that there is already a google group called Artificial General Intelligence, but it appears to be spam-dead.) Its purpose is to encourage polite and well-informed discussion, so it will be moderated to that effect.

Allow me to explain my rationale. In the past I felt like posting substantial content to the AGI list because it seemed that there were some people who were well-informed enough to engage in discussion. These days, the noise level is so high that I have no interest, because I know that the people who would give serious thought to real issues are just not listening anymore.

I understand that Ben Goertzel is trying to solve this by setting up the H+ forum on AGI. I wish him luck in this, of course, and I myself have joined that forum and will participate if there is useful material there. But I also prefer the faster, easier format of a discussion list WHEN THAT LIST IS CONTROLLED.

Consider this to be an experiment, then. If it works, it works. If not, then not.

Anyone can join. But if there are people who (a) send ad hominem remarks (b) rant on about fringe topics (c) persistently introduce irrelevant material ... they will first be subjected to KILLTHREADs, and then if it does not stop they will be banned.
This process will be escalated slowly, and anything as drastic as a ban will be preceded by soliciting the opinions of the group if it is a borderline case.

Wow! That sounds draconian! Who is to say what is "fringe" and what is way out there, but potentially valuable? Well, the best I can offer is this. I have over 25 years' experience of research in AI, physics and psychology, and I have also investigated other "fringe" areas like scientific parapsychology, so I consider myself very tolerant when it comes to new ideas (after all, I have some outlier ideas of my own), but also savvy enough to know when someone is puncturing the envelope, rather than pushing it.

So here goes. You are all invited to join at the above address. For the serious people: let's try to establish a standard early on.

Richard Loosemore

From aware at awareresearch.com Thu Feb 4 18:20:45 2010
From: aware at awareresearch.com (Aware)
Date: Thu, 4 Feb 2010 10:20:45 -0800
Subject: [ExI] Personal conclusions
In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID:

On Thu, Feb 4, 2010 at 3:59 AM, Stefano Vaj wrote:
> On 3 February 2010 23:05, Gordon Swobe wrote:
>> I have no interest in magic. I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents (semantics).
>
> Yes, this is clear by now.
>
> The bunch of threads of which Gordon Swobe is the star, which I have
> admittedly followed on and off, also because of their largely
> repetitive nature, have been interesting, albeit disquieting, for me.

Interesting to me too, as an example of our limited capability at present, even among intelligent, motivated participants, to effectively specify and frame disparate views and together seek a greater, unifying context which either resolves the apparent differences or serves to clarify them.

> Not really to hear him reiterate innumerable times that for whatever
> reason he thinks that (organic? human?) brains, while obviously
> sharing universal computation abilities with cellular automata and
> PCs, would on the other hand somewhat escape the Principle of
> Computational Equivalence.

Gordon exhibits a strong reductionist bent; he seems to believe that Truth WILL be found if only one can see closely and precisely enough into the heart of the matter. Ironically, to the extent that he parrots Searle, his logic is impeccable, but he went seriously off that track when engaging with Stathis in the neuron-replacement thought experiment.

Most who engage in this debate fall into the same trap of defending functionalism, and this is where the Chinese Room Argument gets most of its mileage, but functionalism, materialism and computationalism are really not at issue. Searle quite clearly and coherently shows that syntax DOES NOT entail semantics, no matter how detailed the implementation.

So at the sophomoric level representative of most common objections, the debate spins around and around, as if Searle were denying functionalist, materialist, or computationalist accounts of reality. He's not, and neither is Gordon. The point is that there's a paradox. [And paradox is always a matter of insufficient context. In the bigger picture, all the pieces must fit.]

John Clark jumps in to hotly defend Truth and his simple but circular view that consciousness is a Fact: it obviously arrived via Evolution, thus Evolution is the key. And how dare you deny Evolution--or Truth!?
Stathis patiently (he has plenty of patients, as well as patience) rehashes the defense of functionalism which needs no defending, and although Gordon occasionally asserts that he doesn't disagree (on this) he doesn't go far enough to acknowledge and embrace the apparent truth of functionalist accounts WHILE highlighting the ostensible paradox presented by Searle.

Eric and Spencer jump in (late in the game, if a merry-go-round can be said to have a "later" point in the ride) and contribute the next layer after functionalism: If we accept that we have "consciousness", "and unquestionably we do", and we accept materialist, functionalist, computationalist accounts of reality, then the answer is not to be found in the objects being represented, but in the complex associations between them. They too are correct (within their context) but their explanation only raises the problem another level, no closer to resolution.

> But because so many of the people having engaged in the discussion of
> the point above, while they may not believe any more in a religious
> concept of "soul", seem to accept without a second thought that some
> very poorly defined Aristotelic essences would per se exist
> corresponding to the symbols "mind", "consciousness", "intelligence",
> and that their existence in the sense above would even be an a priori
> not really open to analysis or discussion.

Yes, many of our "rationalist" friends decry belief in a soul, but passionately defend belief in an essential self--almost as if their self depended on it. And along the way we get essential [qualia, experience, intentionality, free-will, meaning, personal identity...] and paradox. And despite accumulating evidence of the incoherence of consciousness, with all its gaps, distortions, fabrication and confabulation, we hang on to it, and decide it must be a very Hard Problem. Thus inoculated, and fortified by the biases built in to our language and culture, we know that when someone comes along and says that it's actually very simple, cf. Dennett, Metzinger, Pollack, Buddha..., we can be sure, even though we can't make sense of what they're saying, that they must be wrong.

A few deeper thinkers, aiming for greater coherence over greater context, have suggested that either all entities "have consciousness" or none do. This is a step in the right direction. Then the question, clarified, might be decided in simple information-theoretic terms. But even then, more often they will side with Panpsychism (even a rock has consciousness, but only a little) than face the possibility of non-existence of an essential experiencer.

> Now, if this is the case, I sincerely have trouble finding a
> reason why we should not accept, on an equal basis, the article of
> faith that Gordon Swobe proposes as to the impossibility for a
> computer to exhibit the same.
>
> Otherwise, we should perhaps reconsider a little not really the AI
> research programmes in place, but rather, say, the Circle of Vienna,
> Popper or Dennett.

Searle is right, in his logic. Wrong, in his premises. No formal syntactic system produces semantics. Further, to the extent that the human brain is formally described, no semantics will be found there either. We never had it, and don't need it. "It" can't even be defined in functional terms. The notion is incoherent, despite the strength and seductiveness of the illusion. It's simply and necessarily how any system refers to references to itself.
Yes, it's recursive, and therefore unfamiliar and unsupported by a language and culture that evolved to deal with relatively shallow context and linear relationships of cause and effect. Meaning is not as perceived by the observer, but in the response of the observer, determined by its nature within a particular context.

Yes, it may feel like a direct attack on the sanctity of Self, but it's not. It destroys nothing that ever existed, and opens up thinking on agency just as valid, extending beyond the boundaries of the cranium, or the skin, or the organism plus its tools, or ...

Oh well. Baby steps...

- Jef

From lacertilian at gmail.com Thu Feb 4 18:57:01 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 10:57:01 -0800
Subject: [ExI] The digital nature of brains
In-Reply-To: <325635.76699.qm@web36508.mail.mud.yahoo.com> References: <20100204102500.86W8S.511470.root@hrndva-web26-z02> <325635.76699.qm@web36508.mail.mud.yahoo.com> Message-ID:

Gordon Swobe wrote:
> Spencer:
>> Finally, would you say that an artificial neural network is a digital computer?
>
> Software implementations of artificial neural networks certainly fall under the general category of digital computer, yes.

I could easily have guessed you would say that, but my question pertains to hard, non-simulated artificial neural networks.

This brings up another point of interest: you seem to place computer programs within the category of digital computers. This isn't how I use the term. I would say: Firefox is not a digital computer; it is instantiated by a digital computer. All computers are physical objects in reality; if they are not, they should be explicitly designated as virtual computers.

As a side note, are all computers effectively digital computers? It'd save me some time, and the Internet some bandwidth, if so. Personally I could go either way. When I want to be fully inclusive, I usually say "information processor", which denotes brains just as well as laptops.

From gts_2000 at yahoo.com Thu Feb 4 19:09:39 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 11:09:39 -0800 (PST)
Subject: [ExI] Principle of Computational Equivalence
In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: <578968.53151.qm@web36506.mail.mud.yahoo.com>

--- On Thu, 2/4/10, Stefano Vaj wrote:

> Not really to hear him reiterate innumerable times that for
> whatever reason he thinks that (organic? human?) brains, while
> obviously sharing universal computation abilities with cellular
> automata and PCs, would on the other hand somewhat escape the Principle
> of Computational Equivalence.

I see no reason to consider the so-called Principle of Computational Equivalence of philosophical interest with respect to natural objects like brains.

Given a natural entity or process x and a computation of it c(x), it does not follow that c(x) = x. It does not matter whether x = an organic apple or an organic brain: c(x) = x iff x = a true digital artifact.

It seems to me that we have no reason to suppose, except as a matter of religious faith, that any x in the natural world actually exists as a digital artifact.

For example we might in principle create perfect computations of hurricanes. It would not follow that hurricanes do computations.
-gts

From hkeithhenson at gmail.com Thu Feb 4 19:09:49 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 4 Feb 2010 12:09:49 -0700
Subject: [ExI] Space based solar power again
Message-ID:

(reply to a discussion on another list about power satellites)

How you get the energy down from GEO is a problem with a couple of known solutions. What has to be solved is getting the parts to GEO (or the parts to LEO and the whole thing to GEO if you build it in LEO).

Even at a million tons per year (what's needed for a decent sized SBSP project) the odds are against the cost being low enough for power satellites to make sense (i.e., undercut coal and nuclear) if you try to transport the parts with chemical rockets. You either have to go to some non-reaction method (magnet launcher, cannon, launch loop or space elevator), or you have to go to an exhaust velocity higher than what the energy of chemical fuels will give you.

The non-reaction methods are extremely difficult engineering problems, partly because we live at the bottom of a dense atmosphere, partly because of the extreme energy needed.

The rule of thumb from the rocket equation is that mass ratio 3 will get the vehicle up to the exhaust velocity and a mass ratio 2 will get it to a bit under 0.7 of the exhaust velocity. (That's just delta-V = Ve * ln(mass ratio): ln 3 is about 1.1 and ln 2 is about 0.69.) Beyond mass ratio 3 the payload fraction rapidly goes to zero.

So to get to LEO on a mass ratio 3 means an average exhaust velocity of around 9.5 km/sec. The Skylon gets about 10.5 km/sec equivalent Ve in air-breathing mode. Laser-heated hydrogen will give up to 9.8 km/sec.

So much for the physics, on to the engineering! :-)

Keith

From lacertilian at gmail.com Thu Feb 4 19:11:07 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 11:11:07 -0800
Subject: [ExI] Mind extension
In-Reply-To: <767464.48754.qm@web113618.mail.gq1.yahoo.com> References: <767464.48754.qm@web113618.mail.gq1.yahoo.com> Message-ID:

Ben Zaiboc :
> (lots and lots of neat stuff)

Ever since Stathis put me on the spot to state my feelings on uploading and "brain prostheses", and in fact for several years prior, I've been thinking about pretty much the same thing. At the time I sent my response I actually thought this was what he was talking about, but later decided he probably meant a full brain transplant.

Judging by the craziness Alex is pulling off with that EPOC EEG, I'm thinking it would be trivially easy to add new "wings" to our brains. We might be able to do it right now, with lab-grown neurons, if we can figure out a way to increase skull capacity.

I hadn't considered a few of the tests you propose, though. Specifically, temporarily "turning off" the organic hemisphere to see if the synthetic hemisphere keeps working. I can imagine a lot of problems with putting that into practice. Certainly, we couldn't even try it until we start creating neurons via building rather than via growing.

Hmm!

From Frankmac at ripco.com Fri Feb 5 19:14:07 2010
From: Frankmac at ripco.com (Frank McElligott)
Date: Fri, 5 Feb 2010 14:14:07 -0500
Subject: [ExI] war is peace
Message-ID: <004001caa697$66d15db0$ad753644@sx28047db9d36c>

In Russia, strong leaders are held in high esteem. If you want to take over an oil company, just throw the CEO in jail for tax evasion; the people will say he deserved it and that our Putin knows best. Then a "legal" auction with a single bidder is created so the state can take over the company, and that is fine too: it was legal, wasn't it?
Russia is a different place: rules are set by strong leaders and the people go along with them. What is legal is what Putin says is legal.

As an example from the US: if Obama decided that Goldman Sachs's bonus plan was out of the realm of what was good for the country, he could arrest the CEO, stopping him from doing God's work, and then have the government take over the company, thus screwing the shareholders out of hundreds of millions. Sorry, that's a bad example; I should have used AIG instead. Last year that's what the US Gov't did with AIG, except they have not arrested anyone YET.

What I could have used was the EU taking over the books of Greece, because we all know the Greek Gov't is corrupt and for the good of the EU we must stop those Greeks from cheating. If Greece falls, so does Spain, so does Portugal, and must I say Italy as well?

Greece is in trouble because they lied, AIG was in trouble on account of their greed, and in Russia it's tax problems. If your Government tells you it is right, you accept it, as we did with Bush and his Weapons of Mass Destruction; in Russia they are no different, nor in the EU for that matter.

So my friend, War is Peace, here in the US, in Russia, and now even in the Eurozone.

Hope that helps

Frank

From lacertilian at gmail.com Thu Feb 4 19:19:47 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 11:19:47 -0800
Subject: [ExI] multiple realizability
In-Reply-To: <764802.35206.qm@web113618.mail.gq1.yahoo.com> References: <764802.35206.qm@web113618.mail.gq1.yahoo.com> Message-ID:

Ben Zaiboc :
> I suspect that it's ignorance of the importance of levels of abstraction that can lead to ideas like "minds can come from neural networks, but not from digital programs". All you need to see, to demolish this idea, is that a digital program can implement a neural network at a higher level of abstraction.

Yep. But, I'm sure Gordon has already been there. My guess is he took a plane instead of walking or driving, though, and most likely missed all of the cultural flavor by persistently following a tour guide. I wouldn't be surprised if he just stayed in a hotel the whole time.

More as an experiment than anything else, I've been trying to figure out how to take him step-by-step into Abstraction City and show him everything he missed. Right now I'm stuck on black boxes. We have a long way to go.

From lacertilian at gmail.com Thu Feb 4 19:43:45 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 11:43:45 -0800
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <854080.27226.qm@web36504.mail.mud.yahoo.com> References: <854080.27226.qm@web36504.mail.mud.yahoo.com> Message-ID:

Gordon Swobe :
> Stathis Papaioannou :
>> ... loss of structural integrity over the
>> vast distances involved. However, theoretically, there is no
>> problem if such a system is Turing-complete and if the behaviour of
>> the brain is computable.
> I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. :-)

Whenever I've seen Stathis use the term "an absurdity", I've mentally translated it to "a paradox". A beer can brain is absurd, but not paradoxical. An unconscious brain which is exactly identical to a conscious brain is paradoxical, but not absurd.
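In symbols (my own shorthand, only a sketch): a reductio is the inference

$$(P \Rightarrow (Q \wedge \neg Q)) \vdash \neg P$$

so it only bites when assuming P yields a genuine contradiction. Deriving "the brain is made of beer cans" is mere strangeness; deriving "the components both do and do not behave like neurons" is a real Q-and-not-Q.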
From bbenzai at yahoo.com Thu Feb 4 20:07:37 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 4 Feb 2010 12:07:37 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To: Message-ID: <356658.52755.qm@web113601.mail.gq1.yahoo.com>

jameschoate at austin.rr.com wrote:

---- Ben Zaiboc wrote:
>
> > Well, we're talking about different things. I
> said "it was designed to..", and you replied "no it does
> not". Both of these can be true.
>
> This is a perfect example of my 'understanding inversion'
> claim...
>
> First, we're not talking about different things. The Turing
> Test was suggested, not 'designed', as it's not an algorithm
> or mechanism. At best it's a heuristic. If you read Turing's
> papers and the period documentation, the fundamental question
> is 'can the person tell the difference?'. If the answer is
> 'no', the -pre-assumptive claim- is that some level of
> 'intelligence' has been reached in AI technology. Exactly
> what that level is, is never defined specifically by the
> original authors. The second and follow-on generations of AI
> researchers have interpreted it to mean that AI has
> intelligence in the human sense. I would suggest, strongly,
> that this is a cultural 'taboo' that differentiates main
> stream from perceived cranks.
>
> The way you flip the meaning of 'can the person tell the
> difference' to 'machine to convince' is specious and moot.
> The important point is the human not being able to tell the
> difference. You say it is not meant to test the ability of
> humans, but it is the humans who -must be convinced-.
>
> I would say you're trying to massage the test to fit a
> preconceived cultural desire and not a real technical
> benchmark. It's about validating human emotion and not
> mechanical performance.

Um, I think 'understanding inversion' is right. I don't actually understand what you're trying to say.

Ben Zaiboc

From gts_2000 at yahoo.com Thu Feb 4 20:39:00 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 4 Feb 2010 12:39:00 -0800 (PST)
Subject: [ExI] Personal conclusions
In-Reply-To: Message-ID: <469838.61714.qm@web36504.mail.mud.yahoo.com>

--- On Thu, 2/4/10, Aware wrote:

> So at the sophomoric level representative of most common
> objections, the debate spins around and around, as if Searle were
> denying functionalist, materialist, or computationalist accounts
> of reality. He's not, and neither is Gordon.

On the contrary, I most certainly do deny the functionalist and computationalist (but not so much the materialist) accounts of reality.

By the way, to make things as clear as mud: 1) computationalism is a species of functionalism, not a theory that competes with it as suggested, and 2) functionalism is not about making artificial neurons, per se, and 3) nobody here in recent months has articulated a true functionalist or functionalist/computationalist account of mind or reality.

-gts

From lacertilian at gmail.com Thu Feb 4 21:22:22 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 13:22:22 -0800
Subject: [ExI] Semiotics and Computability (was: The digital nature of brains)
In-Reply-To: References: Message-ID:

Stathis Papaioannou wrote:
> I'm not completely sure what you're saying in this post, but at some
> point the string of symbol associations (A means B, B means C, C means
> D...) is grounded in sensory input.

I'm talking about syntax and semantics, but especially syntax. In the context of this discussion, you're making a statement about semantics.
One assumption (or conclusion, it's hard to tell) made by the notorious Gordon Swobe is that digital computers are capable of syntax, but not of semantics. I made that post to explore the question of whether or not that's even possible in theory.

If I was vague and difficult to understand (I was), that might be due to the fact I have a very fuzzy idea of what Gordon means when he talks about syntax, and his is the definition I tried to use. I wouldn't describe your typical CPU as performing syntactical operations normally, but here I would do so without hesitation.

From lacertilian at gmail.com Thu Feb 4 21:41:48 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 4 Feb 2010 13:41:48 -0800
Subject: [ExI] Semiotics and Computability
Message-ID:

Gordon Swobe :
> Stathis wrote:
>> Searle would say that there
>> needs to be an extra step whereby the symbol so grounded gains
>> "meaning", but this extra step is not only completely mysterious, it
>> is also completely superfluous, since every observable fact about
>> the world would be the same without it.
>
> No, he would remind you of the obvious truth that there exist facts in the world that have subjective first-person ontologies. We can know those facts only in the first person, but they have no less reality than those objective third-person facts that as you say "would be the same without it".

You're both wrong! Only I am right! Me!

From my limited research, it appears Searle has never said anything about some unknown extra step necessary to produce meaning. If you think his arguments imply any such thing, that's your extrapolation, not his.

The Chinese room argument isn't chiefly about meaning: it's about understanding. They're extremely different things. We take meaning as input and output, or at least feel like we do, but we simply HAVE understanding. And no, it isn't a substance. It's a measurable phenomenon. Not easily measurable, but measurable nonetheless.

Secondly, "facts with subjective first-person ontologies" is a nightmarishly convoluted phrase. Does the universe even have facts in it, technically speaking? I suppose what I'm meant to do is pick a component of my subjective experience, say, my headache, and call it a fact. Then I say the fact of my headache has a subjective first-person ontology. But that's redundant: all subjective things are first-person things, and vice-versa. And "ontology" actually means "the study of existence". I don't think the fact of my headache has any kind of study, let alone such an esoteric one.

Gordon must have meant "existence", not "ontology". Searle uses that same terminology. It makes things terribly difficult. So to say something has "subjective first-person ontology" really means it "exists only for the subject". There are facts (my headache) which exist only for the subject (me). Ah! Now it makes sense. I even have a word for facts like that: "delusions".

It's a low blow, I know. It shouldn't be, but it is. Really, it just means we're too hard on especially delusional people. We need delusions in order to function. They aren't inherently bad. Who was it that wrote the paper describing how a delusion of self is unavoidable when implementing a general-purpose consciousness such as myself? I liked that paper. It appealed to my nihilistic side, which is also the rest of me.

Ugh, this is going to drive me crazy. I have to remember some keywords to search for. He used a very specific term to refer to that delusion.
"Distributed agent" was used in the paper, I think, but not the message that linked to the paper... From gts_2000 at yahoo.com Thu Feb 4 22:17:15 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 14:17:15 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <938103.15549.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/4/10, Spencer Campbell wrote: > Secondly, "facts with subjective first-person ontologies" > is a nightmarishly convoluted phrase. Sorry for the jargon. > Does the universe even have facts in > it, technically speaking? I suppose what? I'm meant to > do is pick a component of my subjective experience, say, my headache, > and call it a fact. Exactly right. > Then I say the fact of my headache has a subjective > first-person ontology. Yes. > But that's redundant: all subjective things are first-person > things, and vice-versa. And "ontology" actually means "the > study of existence". I make the distinction because subjective experiences exist both epistemically and ontologically in the first person; that is, we can do epistemic investigations of their causes (why do you have a headache?) in the same sense that we do epistemic investigations of any third-person objective phenomena, and they also have their *existence* in the first-person and thus a first-person ontology. Some people especially my materialist friends seem wont to deny the first-person ontology of consciousness. They deny its existence altogether and attempt to reduce it to something "material", not realizing that in doing so they use the same dualistic vocabulary as do those with whom they want to disagree. Theirs is an over-reaction; we can keep consciousness as a real phenomenon without bringing Descartes back from the dead. -gts From lacertilian at gmail.com Thu Feb 4 22:24:59 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 4 Feb 2010 14:24:59 -0800 Subject: [ExI] The simplest possible conscious system In-Reply-To: <820642.67186.qm@web113613.mail.gq1.yahoo.com> References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : > Hm, interesting challenge. > > I'd probably define Intelligence as problem-solving ability, and > Understanding as the association of new 'concept-symbols' with established ones. > > I'd take "Conscious" to mean "Self-Conscious" or "Self-Aware", which almost certainly involves a mental model of one's self, as well as an awareness of the environment, and one's place in it. Somehow I was expecting people to radically disagree on these definitions, but you actually have very similar conceptions of consciousness, intelligence and understanding to my own. Understanding is notably different in my mind, though: I'd say to have a mental model of a thing is to understand that thing. Symbols don't really enter into it, except that we use them as shorthand to refer to understood models. The more similarly your model behaves to the target system, the better you understand that system! Ben Zaiboc : > I'd guess that the simplest possible conscious system would have an embodiment ('real' or virtual) within an environment, sensors, actuators, the ability to build internal representations of both its environment and itself, and by implication some kind of state memory. ?Hm. maybe we already have conscious robots, and don't realise it! I can conceive of a disembodied consciousness, interacting with its environment only through verbal communication, which would be simpler. Top that! 
: > I would agree, however there are a couple of issues that must be addressed before it becomes meaningful. > > First, what is 'conscious'? That definition must not use human brains as an axiomatic measure. I agree. The only problem is that, if consciousness exists, any English definition of it would at least be inaccurate, if not outright incorrect. We can only approximate the speed of light using feet, but we can describe it exactly with meters. I'm not even sure if consciousness is better considered as a binary state, present or absent, or if we should be talking about degrees of consciousness. Certainly, intelligence and understanding are both scalar quantities. Is the same true of consciousness? My current theory is that consciousness requires recursive understanding: that is, understanding of understanding. Meta-understanding. I don't know if it exhibits any emergent properties over and above that, though, or if there are any other prerequisites. From stathisp at gmail.com Thu Feb 4 22:27:52 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 09:27:52 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <854080.27226.qm@web36504.mail.mud.yahoo.com> References: <854080.27226.qm@web36504.mail.mud.yahoo.com> Message-ID: On 5 February 2010 00:32, Gordon Swobe wrote: > --- On Wed, 2/3/10, Stathis Papaioannou wrote: > >>> Can a conscious Texas-sized brain constructed out of >>> giant neurons made of beer cans and toilet paper exist as a >>> possible consequence of your brand of functionalism? Or >>> not? >> >> It would have to be much, much larger than Texas if it was >> to be human equivalent and it probably wouldn't be physically possible due >> (among other problems) to loss of structural integrity over the >> vast distances involved. However, theoretically, there is no >> problem if such a system is Turing-complete and if the behaviour of >> the brain is computable. > > Okay, I take that as a confirmation of your earlier assertion that brains made of beer cans and toilet paper can have consciousness provided those beer cans squirt the correct neurotransmitters between themselves at the correct times. This suggests to me that your ideology has a firmer grip on your thinking than does your sense of the absurd, and that no reductio ad absurdum argument will find traction with you. > > I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. :-) I take it that you are aware of the concept of "Turing equivalence"? It implies that if a digital computer can have a mind, then any Turing equivalent machine can also have a mind. If a beer can computer is Turing equivalent then you don't gain anything philosophically by pointing to it and saying that it's "absurd"; that's more like a politician's subterfuge than a philosophical argument. The absurdity I was referring to, on the other hand, is logical contradiction. Spencer Campbell suggested that these may not be the same thing but that is what I meant; see http://en.wikipedia.org/wiki/Proof_by_contradiction. The logical contradiction is the claim that, for example, artificial brain components can be made which both do and do not behave exactly the same as normal neurons. Not even God can make it so that both P and ~P are true; however, God could easily make a beer can and toilet paper computer or a Chinese Room. It is a difference in kind, not a difference in degree.
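The appeal to Turing equivalence above can be demonstrated in miniature. A sketch, with an invented rule table and two toy "substrates" (nothing here comes from the thread): the same abstract machine is realised once on a Python list and once on a sparse dictionary, and the behaviour is identical because it is fixed by the rule table, not by the medium.

RULES = {  # (state, symbol) -> (write, move, next_state)
    ("scan", "a"): ("b", 1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}

def run_on_list(tape_str):
    # Substrate 1: the tape is a contiguous Python list.
    tape, head, state = list(tape_str) + ["_"], 0, "scan"
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

def run_on_dict(tape_str):
    # Substrate 2: the tape is a sparse dictionary -- beer cans on a field.
    tape = {i: c for i, c in enumerate(tape_str)}
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

print(run_on_list("aaa"), run_on_dict("aaa"))  # bbb bbb

Whether consciousness follows the behaviour across substrates is of course exactly what is in dispute; the sketch only shows that the behaviour itself carries over.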
-- Stathis Papaioannou From stathisp at gmail.com Thu Feb 4 22:43:49 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 09:43:49 +1100 Subject: [ExI] Mind extension In-Reply-To: <767464.48754.qm@web113618.mail.gq1.yahoo.com> References: <767464.48754.qm@web113618.mail.gq1.yahoo.com> Message-ID: On 5 February 2010 00:38, Ben Zaiboc wrote: > I've been pondering this issue, and it's possible that there's a way around the problem of confirming that consciousness can run on artificial neurons without actually removing existing natural neurons, and condemning the subject to death if it turns out to be untrue. > > I'm thinking of a 'mind extension' scenario, where you attach these artificial neurons (or their software equivalent) to an existing brain using neural interfaces, in a configuration that does something useful, like giving an extra sense or an expanded or secondary short-term memory (of course all this assumes good neural interface technology, working artificial neurons and a better understanding of mental architecture than we have just now). Let the user settle in with the new part of their brain for a while, then they should be able to tell if they 'inhabit' it or if it's just like driving a car: it's something 'out there' that they are operating. > > If they feel that their consciousness now partly resides in the new brain area, it should be possible to duplicate all the vital brain modules and selectively anaesthetise their biological counterparts without any change in subjective experience. > > If the person says "Hang on, I blanked out there" for the period of time the artificial brain parts were operating on their own, we would know that they don't support conscious experience, and the person could say 'no thanks' to uploading, with their original brain intact. > > The overall idea is to build extra room for the mind to expand into, and see if it really has or not. If the new, artificial parts actually don't support consciousness, you'd soon notice. If they do, you could augment your brain to the point where the original was just a tiny part, and you wouldn't even miss it when it eventually dies off. An important point is that if you noticed a difference, not only would that mean the artificial parts don't support normal consciousness, it would also mean the artificial parts do not exactly reproduce the objectively observable behaviour of the natural neurons. -- Stathis Papaioannou From stathisp at gmail.com Thu Feb 4 22:53:23 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 09:53:23 +1100 Subject: [ExI] The digital nature of brains In-Reply-To: <325635.76699.qm@web36508.mail.mud.yahoo.com> References: <20100204102500.86W8S.511470.root@hrndva-web26-z02> <325635.76699.qm@web36508.mail.mud.yahoo.com> Message-ID: On 5 February 2010 02:00, Gordon Swobe wrote: > Software implementations of artificial neural networks certainly fall under the general category of digital computer, yes. However in my view no software of any kind can cause subjective experience to arise in the software or hardware. I consider it logically impossible that syntactical operations on symbols, whether they be 1's and 0's or Shakespeare's sonnets, can cause the system implementing those operations to have subjective mental contents. Let's be clear: it is not LOGICALLY impossible that syntax can give rise to meaning.
There is no LOGICAL contradiction in the claim that when a symbol is paired with a particular type of input, then that symbol is grounded, and grounding of the symbol is sufficient for meaning. You don't like this idea because you have a view that there is a mysterious extra layer to provide meaning, but that is a claim about the way the world is (one that is not empirically verifiable), not a LOGICAL claim. -- Stathis Papaioannou From ablainey at aol.com Thu Feb 4 23:13:42 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Thu, 04 Feb 2010 18:13:42 -0500 Subject: [ExI] Pig Symbol In-Reply-To: References: <20100204102500.86W8S.511470.root@hrndva-web26-z02> <325635.76699.qm@web36508.mail.mud.yahoo.com> Message-ID: <8CC7406D403F237-55FC-520D@webmail-m027.sysops.aol.com> Following on from Symbols, AI, and especially the robot seeing a pig. Here is a website that pretends to be a personality test based upon your drawing of a pig. There have been over 2 million pigs drawn so far. It seems to me that you could get some interesting insight into AI image recognition by feeding in 2 million+ drawings of pigs. The site owner is asking for suggestions of what to do with the drawings. Does anyone have an AI that needs some training data? Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Thu Feb 4 23:18:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 15:18:34 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <523017.16837.qm@web36507.mail.mud.yahoo.com> --- On Thu, 2/4/10, Stathis Papaioannou wrote: >> Software implementations of artificial neural networks >> certainly fall under the general category of digital >> computer, yes. However in my view no software of any kind >> can cause subjective experience to arise in the software or >> hardware. I consider it logically impossible that >> syntactical operations on symbols, whether they be 1's and >> 0's or Shakespeare's sonnets, can cause the system >> implementing those operations to have subjective mental >> contents. > > Let's be clear: it is not LOGICALLY impossible that syntax > can give rise to meaning. I think it is logically impossible. > There is no LOGICAL contradiction in the > claim that when a symbol is paired with a particular type of input, > then that symbol is grounded, and grounding of the symbol is > sufficient for meaning. I take it that on your view a picture dictionary understands the nouns for which it has pictures, since it "pairs" its word-symbols with sense-data, grounding the symbols in the same way that a computer + webcam can pair and ground symbols. How about a lunch menu? Does it understand sandwiches? :-) -gts From eric at m056832107.syzygy.com Fri Feb 5 00:10:05 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 5 Feb 2010 00:10:05 -0000 Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <580930c21002030509m60aebf03s28104ee60540978b@mail.gmail.com> References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <20100129192646.5.qmail@syzygy.com> <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com> <20100201181430.5.qmail@syzygy.com> <580930c21002030509m60aebf03s28104ee60540978b@mail.gmail.com> Message-ID: <20100205001005.5.qmail@syzygy.com> Stefano writes: >So, is the integer "3" a word symbol or a sense symbol? The integer 3 is a concept for which a brain probably has a symbol.
That symbol will be distinct from the symbol for the word "three", and both are distinct from the impressions (represented by sense symbols) generated when someone views a hand with three fingers held up. All those symbols are related to each other, and activation of any one is likely to make it easier to activate any of the others. > And what about the ASCII decoding of a byte? I'm not sure exactly what you're asking here. ASCII maps byte values to stereotypical glyphs, so I'm assuming you're referring to the glyph '3' as a decoding of the byte value 0x33. When you look at that glyph, a particular sense symbol will be activated, which will likely lead to activation of the corresponding concept and word symbols mentioned above. > Or the rasterisation of the ASCII symbol? Again, I'm not sure exactly what you're getting at. Is that rasterisation what shows up on your video monitor when the computer displays the '3' glyph? I could think about the concept of that occurring, or I could look at the result (see above). >And what difference would exactly make? Not much, really. They're just names for things, so we can talk about them. The brain probably uses similar mechanisms to process all those symbols. That processing is likely confined to different areas of the brain for each type of symbol, though. I don't think anyone knows yet how the brain does any of this processing. We don't even know much about how the symbols might be encoded, although theories do exist. I happen to like William Calvin's theory as presented in "The Cerebral Code": http://williamcalvin.com/bk9/ I don't think we're yet to the point where we can put that theory to the test. We do know a good deal about the low level processing, but things get complicated as we climb the abstraction ladder. -eric From eric at m056832107.syzygy.com Fri Feb 5 00:25:12 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 5 Feb 2010 00:25:12 -0000 Subject: [ExI] meaning & symbols In-Reply-To: <910459.5460.qm@web113613.mail.gq1.yahoo.com> References: <910459.5460.qm@web113613.mail.gq1.yahoo.com> Message-ID: <20100205002512.5.qmail@syzygy.com> Ben writes: >OK obviously this word 'symbol' needs some clear definition. > >I would use the word to mean any distinct pattern of neural activity > that has a relationship with other such patterns. In that sense, > sensory symbols exist, as do (visual) word symbols, (auditory) word > symbols, concept symbols, which are a higher-level abstraction from > the above three types, and hundreds of other types of 'symbol', > representing all the different patterns of neural activity that can > be regarded as coherent units, like emotional states, memories, > linguistic units (nouns, verbs, etc.), and their higher-level > 'chunks' (birdness, the concept of fluidity, etc.), and so on. This sounds exactly like what I mean when I use the term "symbol" in this context. The question came up about how hard it might be to tease apart *distinct* patterns of neural activity. I agree that this is likely to be tricky. I expect many symbols will be active in a brain at the same time, and differentiating them could be hard. They may change representation with the brain region they are active in. I do expect that a symbol is simpler than a global neural firing pattern, though. If a firing pattern in one part of the brain triggers a similar firing pattern in another part of the brain, is the same symbol active in both areas, or are there two distinct symbols? 
I don't think we have a good enough handle on this to answer such questions yet. -eric From possiblepaths2050 at gmail.com Fri Feb 5 00:50:11 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 4 Feb 2010 17:50:11 -0700 Subject: [ExI] "Supreme Court Allows Corporations To Run For Political Office, " Onion parody article Message-ID: <2d6187671002041650i3cb96621u60890b5970ebbef4@mail.gmail.com> This is both funny and creepy... http://www.theonion.com/content/news_briefs/supreme_court_allows John : ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From aware at awareresearch.com Fri Feb 5 02:33:29 2010 From: aware at awareresearch.com (Aware) Date: Thu, 4 Feb 2010 18:33:29 -0800 Subject: [ExI] Personal conclusions In-Reply-To: <469838.61714.qm@web36504.mail.mud.yahoo.com> References: <469838.61714.qm@web36504.mail.mud.yahoo.com> Message-ID: On Thu, Feb 4, 2010 at 12:39 PM, Gordon Swobe wrote: > On the contrary, I most certainly do deny the functionalist and computationalist, (but not so much the materialist), accounts of reality. Well, sometimes in effect you do; sometimes you don't. You seem to enjoy the polemics more than you do the opportunity to encompass a greater context of understanding. > By the way, to make things as clear as mud: 1) computationalism is a species of functionalism, not a theory that competes with it as suggested, Seems to me that { Materialism { Functionalism { Computationalism}}}. Your "clear as mud" is clearly appropriate. > and 2) functionalism is not about making artificial neurons, per se, Stathis would argue, I think, that such was the point of that part of his discussion with you. > and 3) nobody here in recent months has articulated a true functionalist or functionalist/computationalist account of mind or reality. My central point (were you paying attention?) is that there can be no "true functionalist/computationalist account of mind..." notwithstanding the legitimacy of all three of these isms in their appropriate contexts. Finally, Gordon, I'd like to thank you for your characteristically thorough and thoughtful reply to my comments... - Jef From x at extropica.org Fri Feb 5 03:08:29 2010 From: x at extropica.org (x at extropica.org) Date: Thu, 4 Feb 2010 19:08:29 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On Thu, Feb 4, 2010 at 1:41 PM, Spencer Campbell wrote: > Who was it that wrote the paper describing how a delusion of self is > unavoidable when implementing a general-purpose consciousness such as > myself? I liked that paper. It appealed to my nihilistic side, which > is also the rest of me. > > Ugh, this is going to drive me crazy. I have to remember some keywords > to search for. He used a very specific term to refer to that delusion. > "Distributed agent" was used in the paper, I think, but not the > message that linked to the paper... Pollock, JL, Ismael J. 2006. Knowledge and reality; So You Think You Exist? In Defense of Nolipsism :35-62. I've posted it to this list twice now. This is the first indication I've seen that anyone read it. 
- Jef From ablainey at aol.com Fri Feb 5 08:24:01 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Fri, 05 Feb 2010 03:24:01 -0500 Subject: [ExI] "Supreme Court Allows Corporations To Run For Political Office, " Onion parody article In-Reply-To: <2d6187671002041650i3cb96621u60890b5970ebbef4@mail.gmail.com> References: <2d6187671002041650i3cb96621u60890b5970ebbef4@mail.gmail.com> Message-ID: <8CC7453B455DDF3-2B10-C068@webmail-d005.sysops.aol.com> Funny and true. I can't see how it is different from reality. This is much more apparent in the UK, where each political party is a registered business, with its corporate logo and company rules. Even one of the police forces is now allegedly a subsidiary of IBM. Politics and authority have always been big business. -----Original Message----- From: John Grigg To: ExI chat list ; World Transhumanist Association Discussion List ; transfigurism Sent: Fri, 5 Feb 2010 0:50 Subject: [ExI] "Supreme Court Allows Corporations To Run For Political Office, " Onion parody article This is both funny and creepy... http://www.theonion.com/content/news_briefs/supreme_court_allows John : ) _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Fri Feb 5 09:59:50 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 20:59:50 +1100 Subject: [ExI] Principle of Computational Equivalence In-Reply-To: <578968.53151.qm@web36506.mail.mud.yahoo.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> <578968.53151.qm@web36506.mail.mud.yahoo.com> Message-ID: On 5 February 2010 06:09, Gordon Swobe wrote: > --- On Thu, 2/4/10, Stefano Vaj wrote: > >> Not really to hear him reiterate innumerable times that for >> whatever reason he thinks that (organic? human?) brains, while >> obviously sharing universal computation abilities with cellular >> automata and PCs, would on the other hand somewhat escape the Principle >> of Computational Equivalence. > > I see no reason to consider the so-called Principle of Computational Equivalence of philosophical interest with respect to natural objects like brains. > > Given a natural entity or process x and a computation of it c(x) it does not follow that c(x) = x. It does not matter whether x = an organic apple or an organic brain. > > c(x) = x iff x = a true digital artifact. It seems to me that we have no reason to suppose except as a matter of religious faith that any x in the natural world actually exists as a digital artifact. > > For example we might in principle create perfect computations of hurricanes. It would not follow that hurricanes do computations. Gordon, that is all true, but sometimes even a bad copy of an object does perform the same function as the object. For example, a ball may fly through the air like an apple even though it isn't an apple and lacks many of the other properties of an apple. The claim is not that a computer will be *identical* with the brain but that it will reproduce the intelligence of the brain and, as a corollary, the consciousness of the brain, which it turns out (from a logical argument that you can't or won't follow or even attempt to rebut) is impossible to disentangle from the intelligence.
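The ball/apple point can be put in code: a function "sees" only the properties it reads, so two very different objects can be equivalent with respect to that function. A small sketch, with invented objects and numbers, illustrative only:

def flight_range(obj, speed=10.0, g=9.8):
    # Range of a projectile launched at 45 degrees; this function reads
    # only the aerodynamic property and ignores everything else.
    return (speed ** 2 / g) * (1.0 - obj.get("drag", 0.0))

apple = {"drag": 0.1, "edible": True}
ball = {"drag": 0.1, "edible": False}

assert flight_range(apple) == flight_range(ball)  # identical in flight...
assert apple["edible"] != ball["edible"]  # ...different in what flight never consults

By analogy, the claim is that a computer need only reproduce the properties of the brain that intelligence actually consults, not every property of the organ.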
-- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 10:14:01 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 21:14:01 +1100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: <595536.39512.qm@web36508.mail.mud.yahoo.com> References: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> <595536.39512.qm@web36508.mail.mud.yahoo.com> Message-ID: On 5 February 2010 00:47, Gordon Swobe wrote: > --- On Thu, 2/4/10, Stefano Vaj wrote: > > Stathis wrote: >>> Searle would say that there >>> needs to be an extra step whereby the symbol so grounded gains >>> "meaning", but this extra step is not only completely mysterious, it >>> is also completely superfluous, since every observable fact about >>> the world would be the same without it. > > No, he would remind you of the obvious truth there exist facts in the world that have subjective first-person ontologies. We can know those facts only in the first-person but they have no less reality than those objective third-person facts that as you say "would be the same without it". > > Real subjective first-person facts of the world include one's own conscious understanding of words. I don't deny subjective experience but I deny that when I understand something I do anything more than associate it with another symbol, ultimately grounded in something I have seen in the real world. That would seem necessary and sufficient for understanding, and for the subjective experience of understanding, such as it is. Searle is postulating an extra layer over and above this which is completely useless. What's to stop us postulating even more layers: people with red hair have understanding*, which stands in relation to understanding as understanding stands in relation to mere symbol-association. Of course the redheads don't behave any differently and don't even know they are any different, but when they use a word they experience something which non-redheads could never even imagine. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 10:30:40 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 21:30:40 +1100 Subject: [ExI] The digital nature of brains In-Reply-To: <523017.16837.qm@web36507.mail.mud.yahoo.com> References: <523017.16837.qm@web36507.mail.mud.yahoo.com> Message-ID: On 5 February 2010 10:18, Gordon Swobe wrote: > I take it that on your view a picture dictionary understands the nouns for which it has pictures, since it "pairs" its word-symbols with sense-data, grounding the symbols in the same way that a computer + webcam can pair and ground symbols. > > How about a lunch menu? Does it understand sandwiches? :-) No, they're not intelligent. Your argument here is equivalent to me pointing to an inert lump of matter in order to demonstrate that matter is incapable of thinking. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 11:35:26 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 22:35:26 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On 5 February 2010 08:41, Spencer Campbell wrote: > From my limited research, it appears Searle has never said anything > about some unknown extra step necessary to produce meaning. If you > think his arguments imply any such thing, that's your extrapolation, > not his. The Chinese room argument isn't chiefly about meaning: it's > about understanding. They're extremely different things.
We take > meaning as input and output, or at least feel like we do, but we > simply HAVE understanding. > > And no, it isn't a substance. It's a measurable phenomenon. Not easily > measurable, but measurable nonetheless. By definition it isn't measurable, since (according to Searle and Gordon) it would be possible to perfectly reproduce the behaviour of the brain, but leave out understanding. It is only possible to observe behaviour, so if behaviour is separable from understanding, you can't observe it. I'm waiting for Gordon to say, OK, I've changed my mind, it is *not* possible to reproduce the behaviour of the brain and leave out understanding, but he just won't do it. -- Stathis Papaioannou From pharos at gmail.com Fri Feb 5 12:35:10 2010 From: pharos at gmail.com (BillK) Date: Fri, 5 Feb 2010 12:35:10 +0000 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On 2/5/10, Stathis Papaioannou wrote: > By definition it isn't measurable, since (according to Searle and > Gordon) it would be possible to perfectly reproduce the behaviour of > the brain, but leave out understanding. It is only possible to observe > behaviour, so if behaviour is separable from understanding, you can't > observe it. I'm waiting for Gordon to say, OK, I've changed my mind, > it is *not* possible to reproduce the behaviour of the brain and leave > out understanding, but he just won't do it. > > "You cannot reason people out of a position that they did not reason themselves into." -- Ben Goldacre (Bad Science) Gordon is listening to a voice in his head that tells him that 'It *must* be this way. It just *must*!' And you can't argue with that. True-believer syndrome is an expression coined by M. Lamar Keene to describe an apparent cognitive disorder characterized by believing in the reality of paranormal or supernatural events after one has been presented overwhelming evidence that the event was fraudulently staged. BillK From gts_2000 at yahoo.com Fri Feb 5 13:05:35 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 05:05:35 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <426324.97073.qm@web36507.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: > By definition it isn't measurable, since (according to > Searle and Gordon) it would be possible to perfectly reproduce the > behaviour of the brain, but leave out understanding. Does your watch understand what time it is, Stathis? No, of course not. But yet it tells you the correct time anyway, *as if* it had understanding. Amazing! -gts From gts_2000 at yahoo.com Fri Feb 5 13:23:33 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 05:23:33 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <211266.88332.qm@web36506.mail.mud.yahoo.com> --- On Fri, 2/5/10, BillK wrote: > Gordon is listening to a voice in his head that tells him... I barely find time to respond to sincere posts from people like Stathis. I have no time to respond to childish insults from the peanut gallery. Please stop.
-gts From stathisp at gmail.com Fri Feb 5 13:25:24 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 00:25:24 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <426324.97073.qm@web36507.mail.mud.yahoo.com> References: <426324.97073.qm@web36507.mail.mud.yahoo.com> Message-ID: On 6 February 2010 00:05, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >> By definition it isn't measurable, since (according to >> Searle and Gordon) it would be possible to perfectly reproduce the >> behaviour of the brain, but leave out understanding. > > Does your watch understand what time it is, Stathis? No, of course not. But yet it tells you the correct time anyway, *as if* it had understanding. Amazing! The watch obviously does not behave exactly like a human, so there is no reason why it should understand time in the same way as a human does. The point is that it is impossible to make a brain or brain component that behaves *exactly* like the natural equivalent but lacks understanding. Note that this says nothing about programs or computers: it is impossible through *any means whatsoever* to make such a device, even if you could invoke magic. -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Feb 5 13:31:16 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 05:31:16 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <213871.74812.qm@web36502.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> I take it that on your view a picture dictionary >> understands the nouns for which it has pictures, since it >> "pairs" its word-symbols with sense-data, grounding the >> symbols in the same way that a computer + webcam can pair >> and ground symbols. > >> How about a lunch menu? Does it understand sandwiches? > :-) > > No, they're not intelligent. Your argument here is > equivalent to me pointing to an inert lump of matter in order to > demonstrate that matter is incapable of thinking. True or false, Stathis: When a program running on a digital computer associates a sense-datum (say, an image of an object taken with its web-cam) with the appropriate word-symbol, the system running that program has now by virtue of that association grounded the word-symbol and now has understanding of the meaning of that word-symbol. -gts From sparge at gmail.com Fri Feb 5 13:37:31 2010 From: sparge at gmail.com (Dave Sill) Date: Fri, 5 Feb 2010 08:37:31 -0500 Subject: [ExI] The digital nature of brains In-Reply-To: <213871.74812.qm@web36502.mail.mud.yahoo.com> References: <213871.74812.qm@web36502.mail.mud.yahoo.com> Message-ID: On Fri, Feb 5, 2010 at 8:31 AM, Gordon Swobe wrote: > > True or false, Stathis: > > When a program running on a digital computer associates a sense-datum (say, an image of an object taken with its web-cam) with the appropriate word-symbol, the system running that program has now by virtue of that association grounded the word-symbol and now has understanding of the meaning of that word-symbol. That depends entirely upon the nature of the program. -Dave From gts_2000 at yahoo.com Fri Feb 5 13:45:26 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 05:45:26 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <591415.38933.qm@web36503.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> Does your watch understand what time it is, Stathis? >> No, of course not.
But yet it tells you the correct time >> anyway, *as if* it had understanding. Amazing! > > The watch obviously does not behave exactly like a human, > so there is no reason why it should understand time in the same way as > a human does. My point here is that intelligent behavior does not imply understanding. We can construct robots that behave intelligently like humans but which have no subjective understanding of anything whatsoever. We've already started doing so in limited areas. It's only a matter of time before we do it in a general sense (weak AGI). > The point is that it is impossible to make a brain or > brain component that behaves *exactly* like the natural > equivalent but lacks understanding. Not impossible at all! Weak AI that passes the Turing test is entirely possible. It will just take a lot of hard work to get there. -gts From stathisp at gmail.com Fri Feb 5 13:45:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 00:45:12 +1100 Subject: [ExI] The digital nature of brains In-Reply-To: <213871.74812.qm@web36502.mail.mud.yahoo.com> References: <213871.74812.qm@web36502.mail.mud.yahoo.com> Message-ID: On 6 February 2010 00:31, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >>> I take it that on your view a picture dictionary >>> understands the nouns for which it has pictures, since it >>> "pairs" its word-symbols with sense-data, grounding the >>> symbols in the same way that a computer + webcam can pair >>> and ground symbols. >> >>> How about a lunch menu? Does it understand sandwiches? >> :-) >> >> No, they're not intelligent. Your argument here is >> equivalent to me pointing to an inert lump of matter in order to >> demonstrate that matter is incapable of thinking. > > True or false, Stathis: > > When a program running on a digital computer associates a sense-datum (say, an image of an object taken with its web-cam) with the appropriate word-symbol, the system running that program has now by virtue of that association grounded the word-symbol and now has understanding of the meaning of that word-symbol. Does an amoeba have an understanding of "food" when it makes an association between the relevant chemotactic signals and the feeling it gets when it engulfs the morsel? You might say "no, the amoeba and this behaviour is too simple". Yet it's from compounding such simple behaviours that we get human level intelligence. The computer behaviour you described is even simpler than that of the amoeba, so you would have to grant the amoeba understanding before considering the possibility that the computer has understanding. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 13:51:14 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 00:51:14 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <591415.38933.qm@web36503.mail.mud.yahoo.com> References: <591415.38933.qm@web36503.mail.mud.yahoo.com> Message-ID: On 6 February 2010 00:45, Gordon Swobe wrote: >> The point is that it is impossible to make a brain or >> brain component that behaves *exactly* like the natural >> equivalent but lacks understanding. > > Not impossible at all! Weak AI that passes the Turing test is entirely possible. It will just take a lot of hard work to get there. Yes, but then when pressed you say that such a brain or brain component would *not* behave exactly like the natural equivalent!
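The point about compounding simple behaviours can be shown in a few lines. Each unit below is as dumb as a chemotactic trigger, yet wiring three of them together yields a discrimination (XOR) that no single unit can make. The weights and thresholds are invented for illustration, not taken from anything in the thread:

def unit(weights, bias):
    # A threshold unit: fires iff the weighted input exceeds the bias.
    return lambda inputs: int(sum(w * i for w, i in zip(weights, inputs)) > bias)

or_unit = unit([1, 1], 0.5)
and_unit = unit([1, 1], 1.5)
nand_unit = lambda inputs: 1 - and_unit(inputs)

def xor(a, b):
    # Compound behaviour: "a or b, but not both".
    return and_unit([or_unit([a, b]), nand_unit([a, b])])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # last column reads 0, 1, 1, 0

Nothing in the sketch settles whether understanding accumulates along with the behaviour; that is exactly what is under dispute.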
-- Stathis Papaioannou From gts_2000 at yahoo.com Fri Feb 5 14:01:50 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 06:01:50 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <805514.15115.qm@web36501.mail.mud.yahoo.com> --- On Fri, 2/5/10, Dave Sill wrote: >> True or false, Stathis: >> >> When a program running on a digital computer associates >> a sense-datum (say, an image of an object taken with its >> web-cam) with the appropriate word-symbol, the system >> running that program has now by virtue of that association >> grounded the word-symbol and now has understanding of the >> meaning of that word-symbol. > > That depends entirely upon the nature of the program. I see. So then let us say programmer A writes a program that fails but that programmer B writes one that succeeds. What programming tricks did B use such that his program instantiated an entity capable of having subjective understanding of words? (And where can I find him? I want to hire him.) -gts From gts_2000 at yahoo.com Fri Feb 5 14:18:40 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 06:18:40 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <459214.1690.qm@web36507.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> True or false, Stathis: >> >> When a program running on a digital computer associates >> a sense-datum (say, an image of an object taken with its >> web-cam) with the appropriate word-symbol, the system >> running that program has now by virtue of that association >> grounded the word-symbol and now has [conscious] >> understanding of the meaning of that word-symbol. > > Does an amoeba have an understanding of "food" when it > makes an association between the relevant chemotactic signals No, amoebas have nothing I mean by consciousness. Is my statement above true or false, Stathis? I added the word "conscious" to make my meaning even more clear. I ask you these T/F questions to try to find out exactly what you think. -gts From sparge at gmail.com Fri Feb 5 14:19:21 2010 From: sparge at gmail.com (Dave Sill) Date: Fri, 5 Feb 2010 09:19:21 -0500 Subject: [ExI] The digital nature of brains In-Reply-To: <805514.15115.qm@web36501.mail.mud.yahoo.com> References: <805514.15115.qm@web36501.mail.mud.yahoo.com> Message-ID: On Fri, Feb 5, 2010 at 9:01 AM, Gordon Swobe wrote: > > I see. So then let us say programmer A writes a program that fails but that programmer B writes one that succeeds. > > What programming tricks did B use such that his program instantiated an entity capable of having subjective understanding of words? (And where can I find him? I want to hire him.) I don't think that achieving intelligence will be the result of "programming tricks". I also don't think it'll be a one-man effort, and I'm pretty sure it hasn't been done yet. -Dave From gts_2000 at yahoo.com Fri Feb 5 14:35:44 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 06:35:44 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <860799.26946.qm@web36506.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: > You might say "no, the amoeba and this behaviour is too simple". Yet > it's from compounding such simple behaviours that we get human level > intelligence. I do not care how the computer behaves. Does it have conscious understanding of the meaning of the word by virtue of having associated it with an image file of the object represented by the word?
I think you know the answer. -gts From gts_2000 at yahoo.com Fri Feb 5 14:12:52 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 06:12:52 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <369222.38842.qm@web36505.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> Not impossible at all! Weak AI that passes the Turing >> test is entirely possible. It will just take a lot of hard >> work to get there. > > Yes, but then when pressed you say that such a brain or > brain component would *not* behave exactly like the natural > equivalent! I've said that such an artificial neuron/brain will require a lot of work before it behaves like the natural equivalent. This is why the surgeon in your thought experiment must keep replacing and re-programming your artificial neurons until finally he creates a patient that passes the TT. -gts From stathisp at gmail.com Fri Feb 5 14:42:09 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 01:42:09 +1100 Subject: [ExI] The digital nature of brains In-Reply-To: <459214.1690.qm@web36507.mail.mud.yahoo.com> References: <459214.1690.qm@web36507.mail.mud.yahoo.com> Message-ID: On 6 February 2010 01:18, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >>> True or false, Stathis: >>> >>> When a program running on a digital computer associates >>> a sense-datum (say, an image of an object taken with its >>> web-cam) with the appropriate word-symbol, the system >>> running that program has now by virtue of that association >>> grounded the word-symbol and now has [conscious] >>> understanding of the meaning of that word-symbol. >> >> Does an amoeba have an understanding of "food" when it >> makes an association between the relevant chemotactic signals > > No, ameobas have nothing I mean by consciousness. > > Is my statement above true or false, Stathis? I added the word "conscious" to make me meaning even more clear. > > I ask you these T/F questions to try find out exactly what you think. False, as it is for the amoeba doing the same thing. To have human level consciousness and understanding it has to have human level intelligence! You seem to dismiss this obvious point for computers, yet you're not nearly so generous with bestowing consciousness on non-human organisms as consistency would require. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 14:50:40 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 01:50:40 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <369222.38842.qm@web36505.mail.mud.yahoo.com> References: <369222.38842.qm@web36505.mail.mud.yahoo.com> Message-ID: On 6 February 2010 01:12, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >>> Not impossible at all! Weak AI that passes the Turing >>> test is entirely possible. It will just take a lot of hard >>> work to get there. >> >> Yes, but then when pressed you say that such a brain or >> brain component would *not* behave exactly like the natural >> equivalent! > > I've said that such an artificial neuron/brain will require a lot of work before it behaves like the natural equivalent. This is why the surgeon in your thought experiment must keep replacing and re-programming your artificial neurons until finally he creates a patient that passes the TT. 
You agree that the artificial neuron will perfectly replicate the behaviour of the natural neuron it replaces, and in the same breath you say that the brain will start behaving differently and the surgeon will have to make further adjustments! Do you really not see that this is a blatant contradiction? -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Feb 5 15:00:11 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 07:00:11 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <204176.88244.qm@web36508.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: > > --- On Fri, 2/5/10, Stathis Papaioannou > wrote: > > > >>> Not impossible at all! Weak AI that passes the > Turing > >>> test is entirely possible. It will just take a > lot of hard > >>> work to get there. > >> > >> Yes, but then when pressed you say that such a > brain or > >> brain component would *not* behave exactly like > the natural > >> equivalent! > > > > I've said that such an artificial neuron/brain will > require a lot of work before it behaves like the natural > equivalent. This is why the surgeon in your thought > experiment must keep replacing and re-programming your > artificial neurons until finally he creates a patient that > passes the TT. > > You agree that the artificial neuron will perfectly > replicate the behaviour of the natural neuron it replaces, and in the > same breath you say that the brain will start behaving differently and > the surgeon will have to make further adjustments! Do you really not > see that this is a blatant contradiction? I think you've misrepresented or misunderstood me here. Where in the same breath did I say these things? In your thought experiment, the artificial program-driven neurons will require a lot of work for the same reason that programming weak AI will require a lot of work. We're not there yet, but it's within the realm of programming possibility. -gts From rpwl at lightlink.com Fri Feb 5 15:03:25 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 05 Feb 2010 10:03:25 -0500 Subject: [ExI] Symbol Grounding [WAS Re: The digital nature of brains] In-Reply-To: <805514.15115.qm@web36501.mail.mud.yahoo.com> References: <805514.15115.qm@web36501.mail.mud.yahoo.com> Message-ID: <4B6C333D.7090205@lightlink.com> Gordon Swobe wrote: > --- On Fri, 2/5/10, Dave Sill wrote: > >>> True or false, Stathis: >>> >>> When a program running on a digital computer associates a >>> sense-datum (say, an image of an object taken with its web-cam) >>> with the appropriate word-symbol, the system running that program >>> has now by virtue of that association grounded the word-symbol >>> and now has understanding of the meaning of that word-symbol. >> That depends entirely upon the nature of the program. > > I see. So then let us say programmer A writes a program that fails > but that programmer B writes one that succeeds. > > What programming tricks did B use such that his program instantiated > an entity capable of having subjective understanding of words? (And > where can I find him? I want to hire him.) [I doubt that you could afford me, but I am open to pleasant surprises.] As to the earlier question, you are asking about the fundamental nature of "grounding". Since there is a huge amount of debate and confusion on the topic, I will save you the trouble of searching the mountain of prior art and come straight to the answer. 
If a system builds a set of symbols that purport to be "about" things in the world, then the only way to decide if those symbols are properly grounded is to look at (a) the mechanisms that build those symbols, (b) the mechanisms that use those symbols (to do, e.g., thinking), (c) the mechanisms that adapt or update the symbols over time, (d) the interconnectedness of the symbols. If these four aspects of the symbol system are all coherently engaged with one another, so that the building mechanisms generate symbols that the deployment mechanisms then use in a way that is consistent, and the development mechanisms also modify the symbols in a coherent way, and the connectedness makes sense, then the symbols are grounded. The key to understanding this last paragraph is that Harnad's contention was that, as a purely practical matter, this kind of global coherence can only be achieved if ALL the mechanisms are working together from the get-go .... which means that the building mechanisms, in particular, are primarily responsible for creating the symbols (using real world interaction). So the normal way for symbols to get grounded is for there to be meaningful "pickup" mechanisms that extract the symbols autonomously, as a result of the system interacting with the environment. But notice that pickup of the trivial kind you implied above (the system just has an object detector attached to its webcam, and a simple bit of code that forms an association with a word) is not by itself enough to satisfy the requirements of grounding. Direct pickup from the senses is a NECESSARY condition for grounding, it is not a SUFFICIENT condition. Why not? Because if this hypothetical system is going to be intelligent, then you need a good deal more than just the webcam and a simple association function - and all that other machinery that is lurking in the background has to be coherently connected to the rest. Only if the whole lot is built and allowed to develop in a coherent, autonomous manner, can the system be said to be grounded. So, because you only mentioned a couple of mechanisms at the front end (webcam and association function) you did not give enough information to tell if the symbols are grounded or not. The correct answer was, then, "it depends on the program". The point of symbol grounding is that if the symbols are connected up by hand, the subtle relationships and mechanism-interactions are almost certainly not going to be there. But be careful about what is claimed here: in principle someone *could* be clever enough to hand-wire an entire intelligent system to get global coherence, and in that case it could actually be grounded, without the symbols being picked up by the system itself. But that is such a difficult task that it is for all practical purposes impossible. Much easier to give the system a set of mechanisms that include the pickup (symbol-building) mechanisms and let the system itself find the symbols that matter. It is worth noting that although Harnad did not say it this way, the problem is really an example of the complex systems problem (cf my 2007 paper on the subject). Complex-system issues are what make it practically impossible to hand-wire a grounded system. You make one final comment, which is about building a system that has a "subjective" understanding of words. That goes beyond grounding, to philosophy of mind issues about subjectivity. 
A properly grounded system will talk about having subjective comprehension or awareness of meanings, not because it is grounded per se, but because it has "analysis" mechanisms that adjudicate on subjectivity issues, and these mechanisms have systemic issues that give rise to subjectivity. For more details about that, see my 2009 paper on Consciousness, which was given at the AGI conference last year. Richard Loosemore From gts_2000 at yahoo.com Fri Feb 5 15:12:42 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 07:12:42 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <800656.29106.qm@web36502.mail.mud.yahoo.com> I think you've just confused consciousness with behavior that seems as-if conscious, and that we need to be more precise in our language. I absolutely disagree with your contention that things (neurons, brains, whatever) that behave as-if they have consciousness must by virtue of that fact have consciousness. -gts - From pharos at gmail.com Fri Feb 5 15:23:05 2010 From: pharos at gmail.com (BillK) Date: Fri, 5 Feb 2010 15:23:05 +0000 Subject: [ExI] Semiotics and Computability In-Reply-To: <211266.88332.qm@web36506.mail.mud.yahoo.com> References: <211266.88332.qm@web36506.mail.mud.yahoo.com> Message-ID: On 2/5/10, Gordon Swobe wrote: > I barely find time to respond to sincere posts from people like Stathis. > I have no time to respond to childish insults from the peanut gallery. > Please stop. > > I see it more as a statement of fact rather than an insult. You and responders to you have produced over 500 messages going round in circles while you keep repeating the same *belief*, that it is impossible for computers to ever understand anything. Your thought experiments and strawmen arguments get wilder and wilder as you desperately try to repeat the same *belief* using different words. Which in turn causes more confusion as it appears that you might be saying something different, when you're not. Stathis (and others) arguments have certainly clarified the situation so that working towards creating human level (and greater) intelligence in computers appears a worthwhile objective. For that we should thank him. BillK From stathisp at gmail.com Fri Feb 5 15:25:34 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 02:25:34 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <204176.88244.qm@web36508.mail.mud.yahoo.com> References: <204176.88244.qm@web36508.mail.mud.yahoo.com> Message-ID: On 6 February 2010 02:00, Gordon Swobe wrote: > I think you've misrepresented or misunderstood me here. Where in the same breath did I say these things? > > In your thought experiment, the artificial program-driven neurons will require a lot of work for the same reason that programming weak AI will require a lot of work. We're not there yet, but it's within the realm of programming possibility. The artificial neurons (or subneuronal or multineuronal structures, it doesn't matter) exhibit the same behaviour as the natural equivalents, but lack consciousness. That's all you need to know about them: you don't have to worry how difficult it was to make them, just that they have been made (provided it is logically possible). Now it seems that you allow that such components are possible, but then you say that once they are installed the rest of the brain will somehow malfunction and needs to be tweaked. 
That is the blatant contradiction: if the brain starts behaving differently, then the artificial components lack the defining property you agreed they have. -- Stathis Papaioannou From aware at awareresearch.com Fri Feb 5 15:51:59 2010 From: aware at awareresearch.com (Aware) Date: Fri, 5 Feb 2010 07:51:59 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On Fri, Feb 5, 2010 at 4:35 AM, BillK wrote: > On 2/5/10, Stathis Papaioannou wrote: >> By definition it isn't measurable, since (according to Searle and >> Gordon) it would be possible to perfectly reproduce the behaviour of >> the brain, but leave out understanding. It is only possible to observe >> behaviour, so if behaviour is separable from understanding, you can't >> observe it. I'm waiting for Gordon to say, OK, I've changed my mind, >> it is *not* possible to reproduce the behaviour of the brain and leave >> out understanding, but he just won't do it. > > "You cannot reason people out of a position that they did not reason > themselves into." > -- Ben Goldacre (Bad Science) Ironically, nearly EVERYONE in this discussion is defending the "obvious, indisputable, common-sense position" that this [qualia | consciousness | meaning | intentionality...(name your 1st-person essence)] actually exists as an ontological attribute of certain systems. It's strongly reminiscent of belief in phlogiston or élan vital, but so much trickier because of the epistemological factor. Nearly everyone here, with righteous rationality, is defending a position they did not reason themselves into, even though, when pressed, they will admit they don't know how to model it or even clearly define it. Gordon presents Searle's argument, and no one here gets that the logic is right, but the premise is wrong--because they are True Believers sharing that premise. The "consciousness" you're looking for--that you assume drives your thinking and receives your experience--doesn't exist. The illusion of an essential self (present always only when the system asks) is simply the necessary behavior of any system referring to references to itself. - Jef From jonkc at bellsouth.net Fri Feb 5 16:04:28 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 5 Feb 2010 11:04:28 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <565163.72063.qm@web65609.mail.ac4.yahoo.com> References: <969214.43756.qm@web36506.mail.mud.yahoo.com> <317129.32350.qm@web65616.mail.ac4.yahoo.com> <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net> <565163.72063.qm@web65609.mail.ac4.yahoo.com> Message-ID: <3B0B158F-40D3-4F38-A7FD-BFD538221EFB@bellsouth.net> Me: >> Stripped to its essentials intentionality means someone or something that can change its internal state, a state that predisposes it to do one thing rather than another; or at least that's what I mean by the word. I like it because it lacks circularity. >> The Avantguardian > While I understand your dislike of circularity, the definition you give is far too broad. Almost everything has an internal state that can be changed. The discovery of this and the mathematics behind it made Ludwig Boltzmann famous. You are making my case for me! Certainly looking at things that way proved to be very productive indeed for Mr. Boltzmann. > A rock has a temperature which is an "internal state". If the temperature of the rock is higher than that of its surroundings, its internal state predisposes the rock to cool down.
I have no problem with that, if it's good enough for Boltzmann it's good enough for me. > evolution by natural selection is based on a circular argument as well. Species evolve by the differential survival and reproduction of the fittest members of the species. The circularity is in the redundant nature of your sentence not in the Theory of Evolution; it could be better stated just by saying "Species evolve by differential survival". And even then it would only be 50% true because it says nothing about random mutation. And speaking of Evolution, I have pointed out many times that Gordon's ideas are totally incompatible with Darwin's, nobody has disputed this but they just shrug and continue to argue about some arcane point in his latest thought experiment. I don't get it. > all software currently in existence exhibits only the intentionality of the programmer and not any native or implicit intentionality of its own. You are advising us in the above to get right back on the good old circular express; I believe that looking at things that way will bring us about as much enlightenment as Gordon has given us in the last month or so. I also think you are making an error in assuming that intentionality is an all or nothing thing. Yes a Turing Machine finding a zero or a one may seem simple and un-mysterious compared with our deepest desires, but as I said before that is in the very nature of explanations, or at least it is of good ones. > The way of reductionism is fraught with the peril of oversimplification. For some reason nowadays it's very fashionable to bad mouth reductionism, but it is at the heart of nearly every scientific discovery made in the last 500 years; waiting until you understand everything before you try to understand anything has not proven to be productive. If you refuse to break down consciousness into smaller easier to understand parts you are doomed to circularity as Gordon has ably demonstrated. > I prefer empiricaI science to philosophy. Me too. > I think experimentation is the only hope of settling this argument. But that would not change Gordon's mind, he specifically said that no matter what a robot did, no matter how brilliantly it behaved he would not treat it as conscious because... well... because it's a robot. What really got me was that the other day he had the gall to mention the word "Evolution". John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Feb 5 17:04:27 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 5 Feb 2010 12:04:27 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <591415.38933.qm@web36503.mail.mud.yahoo.com> References: <591415.38933.qm@web36503.mail.mud.yahoo.com> Message-ID: <8829C060-4B3A-43D2-A1A1-6BB31AD968D3@bellsouth.net> Since my last post Gordon Swobe has posted 13 times: > I do not care how the computer behaves. Yes Gordon I know, you don't care about behavior and that of course means you don't care about Evolution either, and I have a real problem with that because, even if your thought experiments were superb rather than absurd, real experiments outrank thought experiments. > My point here is that intelligent behavior does not imply understanding. That is 100% incompatible with Darwin's ideas. > We can construct robots that behave intelligently like humans but which have no subjective understanding of anything whatsoever. That is 100% incompatible with Darwin's ideas. > Does your watch understand what time it is, Stathis? 
No, of course not. Yet another wonderful example of how not to make a thought experiment. You want to investigate a property, so you set up a thought experiment. You have absolutely no way of measuring or even detecting this property, so you just arbitrarily state that this property does or does not exist in this thought experiment, and then claim to have proven something profound about the existence or nonexistence of that property. Don't you think that's just a bit ridiculous? John K Clark From carlosehuerta at gmail.com Fri Feb 5 17:23:49 2010 From: carlosehuerta at gmail.com (Carlos Huerta) Date: Fri, 5 Feb 2010 12:23:49 -0500 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: <246CA158-8B20-47A6-AA68-D482AA749F78@gmail.com> Hi, is there somewhere I can find this (Pollock, JL, Ismael J. 2006. Knowledge and reality; So You Think You Exist? In Defense of Nolipsism) paper for online reading? Or more of JL Pollock's work? Thanks Get Technical On Feb 4, 2010, at 10:08 PM, x at extropica.org wrote: > On Thu, Feb 4, 2010 at 1:41 PM, Spencer Campbell > wrote: >> Who was it that wrote the paper describing how a delusion of self is >> unavoidable when implementing a general-purpose consciousness such as >> myself? I liked that paper. It appealed to my nihilistic side, which >> is also the rest of me. >> >> Ugh, this is going to drive me crazy. I have to remember some >> keywords to search for. He used a very specific term to refer to that >> delusion. "Distributed agent" was used in the paper, I think, but not the >> message that linked to the paper... > > Pollock, JL, Ismael J. 2006. Knowledge and reality; So You Think You > Exist? In Defense of Nolipsism: 35-62. > > I've posted it to this list twice now. This is the first indication > I've seen that anyone read it. > > - Jef From pharos at gmail.com Fri Feb 5 19:00:54 2010 From: pharos at gmail.com (BillK) Date: Fri, 5 Feb 2010 19:00:54 +0000 Subject: [ExI] Semiotics and Computability In-Reply-To: <246CA158-8B20-47A6-AA68-D482AA749F78@gmail.com> References: <246CA158-8B20-47A6-AA68-D482AA749F78@gmail.com> Message-ID: On 2/5/10, Carlos Huerta wrote: > > Hi, is there somewhere I can find this (Pollock, JL, Ismael J. 2006. > Knowledge and reality; So You Think You > Exist? In Defense of Nolipsism) paper for online reading? Or more of JL > Pollock's work? > Isn't Google marvelous? ;) BillK From jrd1415 at gmail.com Fri Feb 5 20:50:34 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 5 Feb 2010 13:50:34 -0700 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On Thu, Feb 4, 2010 at 8:08 PM, wrote: > Pollock, JL, Ismael J. 2006. Knowledge and reality; So You Think You > Exist? In Defense of Nolipsism: 35-62. > > I've posted it to this list twice now. This is the first indication > I've seen that anyone read it.
Can be found at: http://oscarhome.soc-sci.arizona.edu/ftp/PAPERS/Nolipsism.pdf Best, jeff davis From steinberg.will at gmail.com Fri Feb 5 21:15:32 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Fri, 5 Feb 2010 16:15:32 -0500 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: References: <854080.27226.qm@web36504.mail.mud.yahoo.com> Message-ID: <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> Neurons also encode information based on the relative strengths of connections with adjacent neural structures, as well as in their propensities towards different underlying currents. It seems like a lot of you, perhaps due to the polarizing atmosphere of the land of GetSwobia, have too easily discounted legitimate logic and mathematics. Now--the Chinese Room is an absolutely ridiculous analogy and Searle should be demeaned for it, as should anyone who takes it too seriously, because semantics ARE NOT STATIC. I do not understand how anyone can begin to make any arguments centering around this sort of ill-thought-out, sophomoric idea. It's as if Searle simply took the first thought experiment he could think of, immediately deciding, without considering reality, that the rules of understanding could be "booked"--simply an untruth. Really, the man should have a second set of books which give him rules for erasing and writing in new rules in his first set, and a third set which tells him how to edit the second, and on and on until we asymptotically approach sets of books for which I/O is practically meaningless. Now, in truth, the books of the mind are not truly leveled like this but probably exist on multiple levels at once, with different types of information in the brain having widespread effects on many other types. Even sets of remarkably few neurons will demonstrate very, very complicated recursions. If each neuron has its own rules for varying connections based on input and prior connection strength (much like the rules for a TM), the fact that it can change its own rules perhaps lends itself to the idea of mental non-computability, at least in today's sense of the word. Swobe is still wrong, but brains aren't Turing equivalent because the brain does NOT remain a constant T(n) but instead is composed of innumerable modular T(x); T(y); T(z); each is constantly changing the T-value of itself and adjacent virtual machines. Each module has in it some semblance of UTM-ness allowing it to read others, perhaps owing to a greater mental structure of which we are not yet aware. I understand the physicalist's desire to immediately quash all notions of noncomputability, but this is the same sort of blind partisanship that, if continued, will prevent us from truly learning how we think. A static TM is a limited concept. Understanding of the brain will dictate our need to branch out and explore self-modifying Turturingmachineing Machines... From pharos at gmail.com Fri Feb 5 22:13:35 2010 From: pharos at gmail.com (BillK) Date: Fri, 5 Feb 2010 22:13:35 +0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> References: <854080.27226.qm@web36504.mail.mud.yahoo.com> <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> Message-ID: On 2/5/10, Will Steinberg wrote: > Even sets of remarkably few neurons > will demonstrate very, very complicated recursions.
If each neuron has its > own rules for varying connections based on input and prior connection > strength (much like the rules for a TM), the fact that it can change its own > rules perhaps lends itself to the idea of mental non-computability, at least > in today's sense of the word. > > Swobe is still wrong, but brains aren't Turing equivalent because the brain > does NOT remain a constant T(n) but instead is composed of innumerable > modular T(x); T(y); T(z); each is constantly changing the T-value of itself > and adjacent virtual machines. Each module has in it some semblance of > UTM-ness allowing it to read others, perhaps owing to a greater mental > structure of which we are not yet aware. > > I understand the physicalist's desire to immediately quash all notions of > noncomputability, but this is the same sort of blind partisanship that, if > continued, will prevent us from truly learning how we think. A static TM is > a limited concept. Understanding of the brain will dictate our need to > branch out and explore self-modifying Turturingmachineing Machines... > > I strongly agree with this comment. That's why an eternity ago (was it really only a month ago?) I said that our present digital computers didn't work the same way as the brain. My attempt at a description was:- The brain is more like an analogue computer. It is not like a digital computer that runs a program stored in memory. The brain *is* the program and *is* the computer. And it is a constantly changing analogue computer as it grows new paths and links. There are no brain programs that resemble computer programs stored in a coded format, since all the programming and all the data is built into neuronal networks. If you want to get really complicated, you can think of the brain as multiple analogue computers running in parallel, processing different functions, all growing and changing and passing signals between themselves. We may need a new generation of a different kind of computer to generate this 'consciousness'. It is a different question whether we need this 'consciousness' in our intelligent computers. ------------------ BillK From stefano.vaj at gmail.com Fri Feb 5 22:44:07 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 5 Feb 2010 23:44:07 +0100 Subject: [ExI] Space based solar power again In-Reply-To: References: Message-ID: <580930c21002051444p34551596o15c5545ff6ec648a@mail.gmail.com> On 4 February 2010 20:09, Keith Henson wrote: > Even at a million tons per year (what's needed for a decent sized SBSP > project) the odds are against the cost being low enough for power > satellites to make sense (i.e., undercut coal and nuclear) if you try > to transport the parts with chemical rockets. > > You either have to go to some non reaction method, magnet launcher, > cannon, launch loop or space elevator, or you have to go to an exhaust > velocity higher than what the energy of chemical fuels will give you. Or, be it just in theory, you can go with some nuclear-reaction, Project Orion-like method, which would in my understanding be much easier to implement at the current state of the art than any of the alternatives. -- Stefano Vaj From bbenzai at yahoo.com Sat Feb 6 08:37:55 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 6 Feb 2010 00:37:55 -0800 (PST) Subject: [ExI] Nolopsism In-Reply-To: Message-ID: <243641.74772.qm@web113601.mail.gq1.yahoo.com> Carlos Huerta asked: > Hi, is there somewhere I can find this (Pollock, JL, Ismael > J. 2006. > Knowledge and reality; So You Think You > Exist?
In Defense of Nolipsism) paper for online reading? > Or more of > JL Pollock's work? > Thanks Yes, you use a search engine ;> First on the list after searching for 'Nolipsism' in Scroogle: http://www.u.arizona.edu/~jtismael/nolipsism.pdf BTW, this is a great paper. I've been reading through it, and so far, it seems to make perfect sense. The basic idea is simple and elegant, and as far as I can see, completely solves all these circular discussions we've been having on this list. I'm sure certain parties wouldn't agree with that opinion though! I'm pretty impressed. Ben Zaiboc From stathisp at gmail.com Sat Feb 6 09:09:36 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 20:09:36 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> References: <854080.27226.qm@web36504.mail.mud.yahoo.com> <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> Message-ID: 2010/2/6 Will Steinberg : > Swobe is still wrong, but brains aren't Turing equivalent because the brain > does NOT remain a constant T(n) but instead is composed of innumerable > modular T(x); T(y); T(z); each is constantly changing the T-value of itself > and adjacent virtual machines. Each module has in it some semblance of > UTM-ness allowing it to read others, perhaps owing to a greater mental > structure of which we are not yet aware. A Turing machine is limited in that it does not handle dynamic interaction with an environment, as brains and digital computers do. However, all digital computers are said to be Turing emulable, because any computation the computer can do a Turing machine could also do. A brain could be emulated on a digital computer provided that there is nothing in the physics of the brain that is not computable. An example of non-computable brain physics would be processes that require actual real numbers or a solution of the halting problem in order to model them. Absent such complications, the brain (or any other part of the universe) could be modelled by any digital computer with the right program and enough memory. -- Stathis Papaioannou From stefano.vaj at gmail.com Sat Feb 6 09:48:41 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Feb 2010 10:48:41 +0100 Subject: [ExI] war is peace In-Reply-To: <004001caa697$66d15db0$ad753644@sx28047db9d36c> References: <004001caa697$66d15db0$ad753644@sx28047db9d36c> Message-ID: <580930c21002060148p119e4ef9h8926a0590fdbf843@mail.gmail.com> 2010/2/5 Frank McElligott : > If your Government tells you it is right, you accept it, as we did with Bush > and his Weapons of Mass Destruction, in Russia they are no different, or in > EU for that matter. Mmhhh, just for the sake of discussion, there is a difference. If government X says that WMDs exist in a given place or that global warming exists and is largely anthropic, either it is empirically true or it is not. If a given government wants to nationalise a given company, and makes use of its power (or implements legislation granting it the power) to do so, it is hard to say that such nationalisation is not "legal". At most, it may not be "right" according to one's socio-political views. But there again, there are a lot of illegal activities (say, writing Galileo's Dialogue Concerning the Two Chief World Systems in 1632, or starting the US war of independence) that one is ready to condone, if one likes or approves of them.
This is why, being a jurist myself, I am especially wary of invoking legality or illegality as a value judgment... :-) -- Stefano Vaj From stefano.vaj at gmail.com Sat Feb 6 10:02:49 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Feb 2010 11:02:49 +0100 Subject: [ExI] The simplest possible conscious system In-Reply-To: <820642.67186.qm@web113613.mail.gq1.yahoo.com> References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> Message-ID: <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com> On 4 February 2010 15:35, Ben Zaiboc wrote: > I'd guess that the simplest possible conscious system would have an embodiment ('real' or virtual) within an environment, sensors, actuators, the ability to build internal representations of both its environment and itself, and by implication some kind of state memory. Hm, maybe we already have conscious robots, and don't realise it! If "conscious" is taken to mean "exhibiting the same information-processing features as an average adult, healthy, alert, educated human being", the theoretical answer for me corresponds to the question "what is the simplest possible universal computer". And the answer is discussed quite in depth in A New Kind of Science. OTOH, most universal computers which are in fact not human beings, including all those that are much simpler than brains, would be monstrously inefficient in performing such a task. So, if you are referring to something which may pass a Turing test without its opponent (or perhaps the universe...) dying of old age between one question and its answer, the requirements would of course be much stricter. -- Stefano Vaj From stefano.vaj at gmail.com Sat Feb 6 10:15:21 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Feb 2010 11:15:21 +0100 Subject: [ExI] Personal conclusions In-Reply-To: References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: <580930c21002060215t5cacb91fqe4555291e02cea4f@mail.gmail.com> On 4 February 2010 19:12, Spencer Campbell wrote: > The intelligence of a given system is inversely proportional to the > average action (time * work) which must be expended before the system > achieves a given purpose, assuming that it began in a state as far > away as possible from that purpose. I would say it sounds like a good one to me, in particular since it is not a black-and-white one and does not invoke metaphysical, ineffable entities. What about "performance in the execution of a given kind of data processing"? (A toy sketch of the metric follows at the end of this message.) > This would make it impossible to prove its existence, which to my mind > is a pretty solid argument for its nonexistence. Nevertheless, we talk > about it (consciousness) all the time, throughout history and in every culture. So > even if it doesn't exist, it seems reasonable to assume that it is at > least meaningful to think about. Absolutely. In fact, there are a lot of useful and perfectly legitimate concepts which do not correspond to "entities". If I say "horizon", "beauty", "computation", "popular will", "sleep", everybody knows what I am talking about, even though nobody, except perhaps Plato, thinks that they have to be something you can rap your knuckles against.
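Here, as promised, is a rough Python sketch of Spencer's metric. It is a toy under loudly stated assumptions: every name in it is invented for the example, the "system" is a trivial policy walking an integer state towards zero, and "work" is crudely taken to be one unit per time step.

import random

def action_to_goal(policy, start, goal=0, work_per_step=1.0, horizon=10000):
    # Total action (time * work) expended before the purpose is achieved.
    state, t, action = start, 0, 0.0
    while state != goal and t < horizon:
        state = policy(state)
        t += 1
        action += work_per_step  # assumption: constant work per step
    return action

def intelligence(policy, far_starts):
    # Inverse of the average action over maximally distant start states.
    avg = sum(action_to_goal(policy, s) for s in far_starts) / len(far_starts)
    return 1.0 / avg

smart = lambda s: s - 1 if s > 0 else s + 1   # heads straight for zero
dumb = lambda s: s + random.choice((-1, 1))   # wanders at random

far_starts = [100, -100]  # "as far away as possible" in this toy range
print(intelligence(smart, far_starts))  # the purposeful walker scores higher
print(intelligence(dumb, far_starts))   # the random walker scores lower

Note how this comes apart from raw "performance": the random walker can take its steps as fast as it likes and still score badly, because speed at data processing is not progress towards a purpose.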
-- Stefano Vaj From bbenzai at yahoo.com Sat Feb 6 12:41:15 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 6 Feb 2010 04:41:15 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <375474.21539.qm@web113618.mail.gq1.yahoo.com> Gordon Swobe wrote: >>> True or false, Stathis: >>> >>> When a program running on a digital computer associates >>> a sense-datum (say, an image of an object taken with its >>> web-cam) with the appropriate word-symbol, the system >>> running that program has now by virtue of that association >>> grounded the word-symbol and now has understanding of the >>> meaning of that word-symbol. >> >> That depends entirely upon the nature of the program. > I see. So then let us say programmer A writes a program that fails but that programmer B writes one that succeeds. Hang on, how would you know? What test would you use to determine whether programmer B's program has understanding whereas A's hasn't? > I do not care how the computer behaves. Does it have conscious > understanding of the meaning of the word by virtue of having > associated it with an image file of the object represented > by the word? Well, if you disregard its behaviour, how can you know what it's doing? Whether or not it's having 'conscious understanding' must surely be reflected in its behaviour, and observing its behaviour is the only kind of test that can be done. If you say "we don't need to look at behaviour, we can look at its structure instead", that presupposes that we know what kinds of structure do and don't give rise to conscious understanding, and as we are creating these systems to investigate this in the first place, we would be assuming the very thing we want to prove. There'd be no point in doing the experiment. That's not science. It may be philosophy, but it's definitely not science. Ben Zaiboc From bbenzai at yahoo.com Sat Feb 6 12:39:06 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 6 Feb 2010 04:39:06 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <484433.35998.qm@web113602.mail.gq1.yahoo.com> BillK wrote: > > ... an eternity ago (was it really only a month > ago?) I said > that our present digital computers didn't work the same way > as the brain. Is anybody actually claiming that? I certainly wouldn't. Computers don't work the same way as traffic, but that doesn't stop us from using them to model traffic. This is why levels of abstraction are so crucial. It doesn't matter that digital computers don't work the same way as the brain. What matters is that digital computers can create virtual objects that do (or objects that create objects that create objects that do, etc.). Ben Zaiboc From stefano.vaj at gmail.com Sat Feb 6 16:08:06 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Feb 2010 17:08:06 +0100 Subject: [ExI] Nolopsism In-Reply-To: <243641.74772.qm@web113601.mail.gq1.yahoo.com> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> Message-ID: <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> On 6 February 2010 09:37, Ben Zaiboc wrote: > BTW, this is a great paper. I've been reading through it, and so far, it seems to make perfect sense. The basic idea is simple and elegant, and as far as I can see, completely solves all these circular discussions we've been having on this list. I'm sure certain parties wouldn't agree with that opinion though! Indeed. Even though he is too pessimistic in saying that we "cannot accept that we cannot exist".
We should say "we cannot avoid making use of reflexive indicators", but there are plenty of other useful, understandable and practical concepts to which no "essence" really corresponds. Why should one's "self" be an exception? Most of dualism can easily be reduced to linguistic short-circuits and paradoxes... -- Stefano Vaj From natasha at natasha.cc Sat Feb 6 17:21:31 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 6 Feb 2010 11:21:31 -0600 Subject: [ExI] Polytopia Interview with Natasha Message-ID: http://spacecollective.org/projects/Polytopia/ Or link to article here: http://spacecollective.org/Wildcat/5527/The-Audacious-beauty-of-our-future-Natasha-VitaMore-an-interview Natasha Vita-More From gts_2000 at yahoo.com Sat Feb 6 19:27:40 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 11:27:40 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <156490.82842.qm@web36504.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> In your thought experiment, the artificial >> program-driven neurons will require a lot of work for the >> same reason that programming weak AI will require a lot of >> work. We're not there yet, but it's within the realm of >> programming possibility. > > The artificial neurons (or subneuronal or multineuronal > structures, it doesn't matter)... If it doesn't matter, then let's keep it straightforward and refer to artificial brains rather than to artificial neurons surgically inserted into the midst of natural neurons. This will eliminate a lot of uncertainties that arise from the present state of ignorance about neuroscience. > exhibit the same behaviour as the natural equivalents, > but lack consciousness. In my view an artificial brain can exhibit the same intelligent behaviors as a natural brain without having subjective mental states, where we define behavior as, for example, acts of speech. > That's all you need to know about them: you don't have to worry how > difficult it was to make them, just that they have been made (provided > it is logically possible). Now it seems that you allow that such > components are possible, but then you say that once they are installed > the rest of the brain will somehow malfunction and needs to be tweaked. > That is the blatant contradiction: if the brain starts behaving > differently, then the artificial components lack > the defining property you agreed they have. As above, let's save a lot of confusion and speak of brains rather than individual neurons. -gts From lacertilian at gmail.com Sat Feb 6 20:09:01 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 12:09:01 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: Stathis Papaioannou : >Spencer Campbell : >> They're extremely different things. We take >> meaning as input and output, or at least feel like we do, but we >> simply HAVE understanding. >> >> And no, it isn't a substance. It's a measurable phenomenon. Not easily >> measurable, but measurable nonetheless. > > By definition it isn't measurable, since (according to Searle and > Gordon) it would be possible to perfectly reproduce the behaviour of > the brain, but leave out understanding.
It is only possible to observe > behaviour, so if behaviour is separable from understanding, you can't > observe it. I'm waiting for Gordon to say, OK, I've changed my mind, > it is *not* possible to reproduce the behaviour of the brain and leave > out understanding, but he just won't do it. Unfortunately for both you and Gordon, each of you is right in this case. Define understanding as I do: to understand a system is to have a model of that system in your mind, which entails the ability to correctly guess past or future states of the system based on an assumed state, or the consequences of an interaction between this and another understood system. It's easy to see how this definition covers understanding things like weather patterns, but it also applies in some unexpected ways. I understand English. I can guess what will happen in your mind when you read this sentence; it'll be a pretty inaccurate guess, by any objective measure, but it will be of a higher quality than pure chance would predict by many, many orders of magnitude. To define understanding in terms of associations between symbols does not make sense to me. I understand that dogs are canines. This has no relationship whatsoever to my understanding of dogs; I can only make that statement based on my understanding of English. It's more a fact about words than it is about animals. Returning to the original point: Stathis is correct in saying that understanding has an effect on behavior, and Gordon is correct in saying that intelligent behavior does not imply understanding. I can argue these points further if they aren't obvious, but to me they are. It should be possible, theoretically, to perfectly reproduce human behavior without reproducing a lick of human understanding. But this isn't entirely true. We can set up an experiment in which the (non-understanding) robot does exactly the same thing as the human, but if we observed the human and robot in their natural environments for a couple of years it would soon become obvious that they approach the world in radically different ways, even if their day-to-day behavior is nearly indistinguishable. (The robot I'm thinking of would be built to "understand" the world right off the bat, rather than learning about things as it goes along, as we do.) From gts_2000 at yahoo.com Sat Feb 6 20:11:56 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 12:11:56 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <22844.25525.qm@web36508.mail.mud.yahoo.com> --- On Fri, 2/5/10, Aware wrote: > Ironically, nearly EVERYONE in this discussion is defending > the "obvious, indisputable, common-sense position" that this > [qualia | consciousness | meaning | intentionality...(name your > 1st-person essence)] actually exists as an ontological attribute of > certain systems. Spencer mentioned a head-ache as an example of something I would call a fact of reality that exists with a first-person ontology. > It's strongly reminiscent of belief in phlogiston... Have you ever had a head-ache, Jef? How about a tooth-ache? It seems to me that these kinds of phenomena really do exist in the world. I actually had a tooth extracted two weeks ago, and I can tell you that few things had more reality to me then than the experience of the tooth-ache that precipitated my desire to see the dentist. Subjective experiences such as these differ from such phenomena as mountains and planets only insomuch as they have first-person rather than third-person ontologies.
My dentist agrees that tooth-aches really do exist, and so does the Bayer company. I consider myself a materialist, but in the reaction against mind/matter dualism some of my fellow materialists (e.g., Dennett) go overboard and irrationally deny the plain facts of subjective experience. They try to explain it away in third-person terms, fearing that any recognition of the mental will place them in the same camp with Descartes. They don't understand that in so doing they embrace and acknowledge Descartes' dualistic vocabulary. -gts From gts_2000 at yahoo.com Sat Feb 6 20:55:38 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 12:55:38 -0800 (PST) Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: Message-ID: <684806.48339.qm@web36503.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: > I don't deny subjective experience but I deny that when I > understand something I do anything more than associate it with another > symbol, ultimately grounded in something I have seen in the real > world. That would seem necessary and sufficient for understanding, and > for the subjective experience of understanding, such as it is. When I asked you about a digital computer that did exactly that, you acknowledged that said computer lacked conscious understanding of the symbol and went off on a tangent about amoebas. So then it seems that first you say these sorts of associations are necessary and sufficient for subjective experience of understanding, but then you don't. re: the amoeba As I use the word "consciousness", I believe the amoeba has none whatsoever. This unconscious creature exhibits intelligent behavior, but because it has no nervous system, I doubt very seriously that it has any conscious experience of living. It looks for food intelligently in the same sense that your watch tells the time intelligently and in the same sense in which weak AI systems may one day have the intelligence needed to pass the Turing test; that is, it has intelligence but no consciousness. -gts From aware at awareresearch.com Sat Feb 6 21:05:58 2010 From: aware at awareresearch.com (Aware) Date: Sat, 6 Feb 2010 13:05:58 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <22844.25525.qm@web36508.mail.mud.yahoo.com> References: <22844.25525.qm@web36508.mail.mud.yahoo.com> Message-ID: On Sat, Feb 6, 2010 at 12:11 PM, Gordon Swobe wrote: > Have you ever had a head-ache, Jef? How about a tooth-ache? It seems to me that these kinds of phenomena really do exist in the world. > > I actually had a tooth extracted two weeks ago, and I can tell you that few things had more reality to me then than the experience of the tooth-ache that precipitated my desire to see the dentist. Subjective experiences such as these differ from such phenomena as mountains and planets only insomuch as they have first-person rather than third-person ontologies. My dentist agrees that tooth-aches really do exist, and so does the Bayer company. > > I consider myself a materialist, but in the reaction against mind/matter dualism some of my fellow materialists (e.g., Dennett) go overboard and irrationally deny the plain facts of subjective experience. They try to explain it away in third-person terms, fearing that any recognition of the mental will place them in the same camp with Descartes. They don't understand that in so doing they embrace and acknowledge Descartes' dualistic vocabulary.
Gordon, you presented the ostensible puzzle of Searle's Chinese Room, in which YOU are left facing a paradox. I contributed a very simple, clear and coherent (but perhaps jarringly non-intuitive) resolution to your paradox. A resolution that you're unable to accept due to your discomfort with the notion that there is no ESSENTIAL Gordon Swobe to experience ESSENTIAL qualia, despite my reassurances that this in no way denies the very real Gordon Swobe and his experiences as we AND YOU know them. A resolution that I've lived with for nearly thirty years now; one that flipped my world-view inside-out, leaving everything the same but simpler (no singularity of Self) and that costs nothing, while providing a more coherent basis for reasoning and extrapolation. Fine, enjoy your faith in the illusion, and live with the paradox. In everyday life, as long as you're not, for example, trying in vain to find a way to physically implement the qualia you imagine to exist, you should have little trouble. Your limited view does get in the way of more advanced thinking on the topic of agency and its role in metaethics, which I consider crucial to the ongoing growth of what matters to us as a society, but hey, you've got lots of company. This is a very old argument, and all the necessary pieces of the puzzle are strewn about you. If you use all the pieces, they fit together only one way. - Jef From gts_2000 at yahoo.com Sat Feb 6 22:01:52 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 14:01:52 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <264701.53414.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/6/10, Aware wrote: >> Have you ever had a head-ache, Jef? How about a >> tooth-ache? It seems to me that these kinds of phenomena >> really do exist in the world. >> >> I actually had a tooth extracted two weeks ago, and I >> can tell you that few things had more reality to me then >> than the experience of the tooth-ache that precipitated my >> desire to see the dentist. Subjective experiences such as >> these differ from such phenomena as mountains and planets >> only insomuch as they have first-person rather than >> third-person ontologies. My dentist agrees that tooth-aches >> really do exist, and so does the Bayer company. > >> I consider myself a materialist, but in the reaction >> against mind/matter dualism some of my fellow materialists >> (e.g., Dennett) go overboard and irrationally deny the plain >> facts of subjective experience. They try to explain it away >> in third-person terms, fearing that any recognition of the >> mental will place them in the same camp with Descartes. They >> don't understand that in so doing they embrace and >> acknowledge Descartes' dualistic vocabulary. > > Gordon, you presented the ostensible puzzle of Searle's > Chinese Room, in which YOU are left facing a paradox. > > I contributed a very simple, clear and coherent (but > perhaps jarringly non-intuitive) resolution to your paradox. > > A resolution that you're unable to accept due to your > discomfort with the notion that there is no ESSENTIAL Gordon Swobe to > experience ESSENTIAL qualia, despite my reassurances that this in no > way denies the very real Gordon Swobe and his experiences as we AND > YOU know them.
> > A resolution that I've lived with for nearly thirty years > now; one that flipped my world-view inside-out, leaving everything > the same but simpler (no singularity of Self) and that costs nothing, > while providing a more coherent basis for reasoning and > extrapolation. > > Fine, enjoy your faith in the illusion, and live with the > paradox. In everyday life, as long as you're not, for example, trying > in vain to find a way to physically implement the qualia you imagine > to exist, you should have little trouble. Your limited view > does get in the way of more advanced thinking on the topic of agency and > its role in metaethics, which I consider crucial to the ongoing growth > of what matters to us as a society, but hey, you've got lots of > company. > > This is a very old argument, and all the necessary pieces > of the puzzle are strewn about you. If you use all the > pieces, they fit together only one way. I'll ask again: have you ever had a tooth-ache? -gts From avantguardian2020 at yahoo.com Sat Feb 6 22:41:03 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 6 Feb 2010 14:41:03 -0800 (PST) Subject: [ExI] Nolopsism In-Reply-To: <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> Message-ID: <641575.48388.qm@web65616.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stefano Vaj > To: ExI chat list > Sent: Sat, February 6, 2010 8:08:06 AM > Subject: Re: [ExI] Nolopsism > > On 6 February 2010 09:37, Ben Zaiboc wrote: > > BTW, this is a great paper. I've been reading through it, and so far, it > seems to make perfect sense. The basic idea is simple and elegant, and as far > as I can see, completely solves all these circular discussions we've been having > on this list. I'm sure certain parties wouldn't agree with that opinion though! > > Indeed. Even though he is too pessimistic in saying that we "cannot > accept that we cannot exist". We should say "we cannot avoid making > use of reflexive indicators", but there are plenty of other useful, > understandable and practical concepts to which no "essence" really > corresponds. Why should one's "self" be an exception? Most of dualism > can easily be reduced to linguistic short-circuits and paradoxes... Philosophically, nolipsism bears some resemblance to Buddhism, which is fine from a spiritual point of view. E.g. why fear death when there is no "me" to die, etc. Being an attorney, however, I am sure you are aware of the legal can of worms nolipsism opens up. Human rights are tied to identity. If "I" don't exist, then stealing my stuff or even murdering me is a victimless crime. Doesn't make for a happy outcome in my opinion, especially for libertarians. Probably why the authors back-pedalled from their claims in the conclusion.
From lacertilian at gmail.com Sun Feb 7 00:11:08 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 16:11:08 -0800 Subject: [ExI] The simplest possible conscious system In-Reply-To: <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com> References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com> Message-ID: Stefano Vaj : > If "conscious" is taken to mean "exhibiting the same information-processing > features as an average adult, healthy, alert, educated > human being", the theoretical answer for me corresponds to the question > "what is the simplest possible universal computer". Of course "conscious" is not taken to mean that here, nor anything like that, if only for the reasons pointed out by James Choate two or three days ago. Humans are ludicrously complex examples of consciousness (assuming they are conscious at all), and almost completely useless for finding an answer to the question. Unless, of course, we can incrementally subtract non-vital functions, paring down a prototypical human being to nothing more than a consciousness-generating machine. Then we have to think about how humans do it at all. This is an excellent line of inquiry to pursue. Most awake people are conscious, and most asleep people are unconscious. Agreed? Agreed. What about other states? Peculiar cases may be illustrative here. Is a lucid dreamer conscious? Any other dreamer? A blacked-out drunk? A hypnosis subject? An acid head? I've heard of mental states available in meditation that eradicate the subject-object duality, and others that are devoid of all content save for vast diffuse consciousness. The latter is obviously conscious, by definition, but I don't know about the former; if I am meditating on an idol, and then perceive myself to be identical with the idol, am I conscious? Is the idol? There aren't a lot of sharp lines to be drawn here. This much is obvious. Aside from that, I have very little to contribute. Can any experienced psychonauts lend some anecdotal evidence here? From lacertilian at gmail.com Sun Feb 7 00:34:53 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 16:34:53 -0800 Subject: [ExI] Personal conclusions In-Reply-To: References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: Aware : > And despite accumulating evidence of the incoherence of consciousness, > with all its gaps, distortions, fabrication and confabulation, we hang > on to it, and decide it must be a very Hard Problem. Thus inoculated, > and fortified by the biases built into our language and culture, we > know that when someone comes along and says that it's actually very > simple, cf. Dennett, Metzinger, Pollack, Buddha..., we can be sure, > even though we can't make sense of what they're saying, that they must > be wrong. > > A few deeper thinkers, aiming for greater coherence over greater > context, have suggested that either all entities "have consciousness" > or none do. This is a step in the right direction. Then the > question, clarified, might be decided in simply information-theoretic > terms. But even then, more often they will side with Panpsychism > (even a rock has consciousness, but only a little) than to face the > possibility of non-existence of an essential experiencer. Considering the thread's subject, it seems safe to burn some bytes on personal information. So: I subscribe to panexperientialism myself.
Either everything has subjective experience, or nothing does. Unfortunately this doesn't help me at all when faced with a question like, "is a human more conscious than a pig more conscious than a fly more conscious than a rock?". I want to say yes, really I do, but at the moment I just can't! I see no reason whatsoever why certain amounts or types of information processing should "attract" excess consciousness, if, like me, you want to treat it as a fundamental property of matter. So, my contributions to the discussion are probably incremental at best. I have stepped far enough back to understand that I understand nothing, and just barely further. Dennett, Metzinger, Pollack, Buddha (Gautama?). I have some books to put on my reading list. From lacertilian at gmail.com Sun Feb 7 01:15:55 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 17:15:55 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <264701.53414.qm@web36506.mail.mud.yahoo.com> References: <264701.53414.qm@web36506.mail.mud.yahoo.com> Message-ID: Quoting from the "Personal conclusions" thread, because, on reflection, it seems more relevant over here. Stefano Vaj : >Spencer Campbell : >> The intelligence of a given system is inversely proportional to the >> average action (time * work) which must be expended before the system >> achieves a given purpose, assuming that it began in a state as far >> away as possible from that purpose. > I would say it sounds like a good one to me, in particular since it is not > a black-and-white one and does not invoke metaphysical, ineffable > entities. What about "performance in the execution of a given kind of > data processing"? Then you crash straight into the concept of FLOPS, and all the terrible awful difficulties it entails. "Performance" is not well-defined with respect to computing, or at least not to the extent you'd expect, and I shudder to think of how one would go about distinguishing "a given kind of data processing" from any other kind. I might give that definition to some strange thing like "cognitive excellence", which no one ever talks about outside of circles like this one, but certainly not to intelligence. A total idiot can learn to multiply large numbers quickly. The modern computer: a total idiot with astounding cognitive excellence. Gordon Swobe : > Stathis Papaioannou : >> I don't deny subjective experience but I deny that when I >> understand something I do anything more than associate it with another >> symbol, ultimately grounded in something I have seen in the real >> world. That would seem necessary and sufficient for understanding, and >> for the subjective experience of understanding, such as it is. > When I asked you about a digital computer that did exactly that, you acknowledged that said computer lacked conscious understanding of the symbol and went off on a tangent about amoebas. > > So then it seems that first you say these sorts of associations are necessary and sufficient for subjective experience of understanding, but then you don't. For once, I agree unequivocally with Gordon Swobe. I'm not sure how to feel about that! Ambivalent? Nonplussed? I think I'll go with indifferent. If Stathis continues to conflate intelligence, understanding, consciousness, and, worst of all, symbolic association, I may have a lasting position in the Searle-Gordon camp. It's a shame I believe formal programs are perfectly capable of reproducing human subjective experience. Otherwise, I'd fit in just fine. x at extropica.org : > Pollock, JL, Ismael J. 2006.
Knowledge and reality; So You Think You > Exist? In Defense of Nolipsism: 35-62. > > I've posted it to this list twice now. This is the first indication > I've seen that anyone read it. > > - Jef Yes! Thank you. Both for reminding me and for posting it to begin with. I already agreed with the premise, so no tectonic shift of world-view occurred, but hearing some coherent theories as to WHY we must believe ourselves to be real, objective, unchanging entities, not necessarily corresponding to any physical structure, was very rewarding nonetheless. From gts_2000 at yahoo.com Sun Feb 7 01:18:12 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 17:18:12 -0800 (PST) Subject: [ExI] The simplest possible conscious system In-Reply-To: Message-ID: <820785.29759.qm@web36504.mail.mud.yahoo.com> --- On Sat, 2/6/10, Spencer Campbell wrote: > I've heard of mental states available in meditation that > eradicate the subject-object duality, and others that are devoid of > all content save for vast diffuse consciousness. The latter is obviously > conscious, by definition, but I don't know about the former; if I am > meditating on an idol, and then perceive myself to be identical with the > idol, am I conscious? Is the idol? I have since the 70s practiced transcendental meditation. Occasionally while meditating my mind seems to, as you say, eradicate the subject-object duality. However, I can only infer this indirectly, and only after the fact. During those actual moments I have awareness of nothing at all. I don't believe in qualia, as such, because the idea implies the possibility of consciousness-without-an-object or consciousness-sans-qualia. Such states seem to me impossible both in theory and in practice. Instead I believe one experiences various qualities or aspects of one's own unified field of consciousness. -gts From spike66 at att.net Sun Feb 7 04:37:34 2010 From: spike66 at att.net (spike) Date: Sat, 6 Feb 2010 20:37:34 -0800 Subject: [ExI] valentines day advice In-Reply-To: <641575.48388.qm@web65616.mail.ac4.yahoo.com> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: <9EF6A887A5D648958A8E14DBE5F0912F@spike> In the midst of all the heavy technical discussion of minds and consciousness, do let me here interject a bit of useful advice for the lads as we approach that emotional minefield known as Valentine's Day. Perhaps you have already chosen a gift for your sweethearts, but this one can be in addition to that, and will only cost you about 20 bucks. It might even get you a get-out-of-jail-free card for your next romantic screwup. Buy your sweetheart a DVD of the Celtic Woman singing Songs From the Heart. Take my word for it, me lads. Then watch it with her. Be prepared, for at some time during the performance you can count on her asking some form of the question "Which of them is the most beautiful?" You will at that time respond instantly with "You are, my dear." This will likely be followed with a less-than-fully-sincere version of "Bullshit, now which one?" At this time you must stick with your original story like a thrice-convicted felon caught with a smoking pistol in his hand, by uttering "All are beautiful of course, but the radiance of your beauty exceeds them all, as the noonday sun outshines the stars!"
We know of course that if measured at equal distance, perhaps as many as half the stars do in fact outshine the sun, for the sensible radiation is proportional to the inverse square of the distance. (From here the sun looks roughly ten billion times brighter than Sirius, but only because Sirius sits over half a million times farther away.) This is a detail that you and I can keep between just us lads, shall we? Good. Practice the delivery until you can do it without derisive laughter, on your first (and only) try. You can do it: I did, and I was making it up as I went along. But of course I benefit from many years of dedicated practice at this sort of thing. Hey, I too benefit from the inverse square law, being no Rock Hudson myself. You don't even lie, exactly: your own sweetheart does in fact outshine even the stunning Lisa Kelly, assuming Lisa is home in Ireland while you sit beside your sweetheart in, say, Australia, or on Neptune. Now get out there and buy that DVD, and practice your lines. You have only one chance at it. Good luck! spike From lacertilian at gmail.com Sun Feb 7 05:31:02 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 21:31:02 -0800 Subject: [ExI] The simplest possible conscious system In-Reply-To: <820785.29759.qm@web36504.mail.mud.yahoo.com> References: <820785.29759.qm@web36504.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > I have since the 70s practiced transcendental meditation. Occasionally while meditating my mind seems to, as you say, eradicate the subject-object duality. However, I can only infer this indirectly, and only after the fact. During those actual moments I have awareness of nothing at all. Gordon! I never knew. My opinion of you is again on the upward part of its fluctuation. This seems to me a good argument for the idea that all consciousness is that of a subject being aware of an object. I'd have said that before, but my source regarding that particular state (from Ken Wilber, I think) wasn't very specific on the matter. The vast-consciousness-without-content state does seem to contradict this theory, though, from what little I know about it. I've heard it described (probably by Ken Wilber again) as a void which is only aware of itself. You could interpret that as saying that there technically is a subject-object duality in the moment, but both positions are occupied by the same thing and the thing in question actually isn't anything. Precisely as nolipsism predicts! And Buddhism, and so on. Does any of this help us to construct the simplest possible conscious system? Self-awareness seems to be a good description of consciousness, but awareness isn't exactly understanding. I'm not sure what sort of mechanism is responsible for awareness. You could claim that a Turing machine is aware of the symbol it's currently reading, and only that symbol. Logically, then, a Turing machine that does nothing but read a tape of symbols denoting itself should be conscious (a toy sketch of such a machine follows below). The symbol grounding problem again: how to cause a symbol to denote anything at all? General consensus dictates that some sort of interaction with the environment is necessary. It's obvious to me that this works when taken to the extremely sophisticated level of human awareness, but I would be hard-pressed to define an exact point at which the unconsciousness of an ungrounded Turing machine is replaced by the consciousness of an egotistic Spencer Campbell. Attaching a webcam to associate images with symbols (using complex object recognition software, of course), which are then fed to the machine on tape, does not seem sufficient to produce consciousness even if you point the camera at a mirror.
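For concreteness, here is that self-reading machine as a minimal Python sketch. Every detail is invented for the example: a one-state machine whose tape holds nothing but a crude binary encoding of its own rule table, which it scans forever.

# Toy illustration only: every name here is made up for the example.
# (state, symbol) -> (next state, head move). One state, two symbols.
rules = {('scan', '0'): ('scan', 1),
         ('scan', '1'): ('scan', 1)}

# The tape "denotes" the machine: a binary encoding of its own rules.
tape = list(''.join(format(b, '08b') for b in repr(rules).encode()))

state, head = 'scan', 0
for _ in range(32):                  # run a few steps
    symbol = tape[head % len(tape)]  # the one symbol it is "aware" of
    state, move = rules[(state, symbol)]
    head += move
# Nothing here distinguishes a tape that denotes the machine itself from
# a tape of arbitrary noise, which is the grounding problem in miniature.

The sketch at least makes the difficulty vivid: the self-reference lives entirely in our description of the tape, not in anything the machine does with it.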
Yet I have no good reason to believe the webcam arrangement isn't sufficient. Sheer anthropocentric prejudice makes me say that such a system is incapable of awareness: the Swobe Fallacy. So, I haven't managed to convince myself that a system simpler than a disembodied verbal AI (discussed previously) is capable of consciousness. It must be, though, if I can remain conscious even with duct tape over my mouth. Calling the potential to communicate a feature of consciousness would be extraordinarily counterintuitive, at best. Basically I am talking to myself at this point. Do all possible consciousnesses do that? Hmm. From lacertilian at gmail.com Sun Feb 7 05:54:47 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 21:54:47 -0800 Subject: [ExI] Nolopsism In-Reply-To: <641575.48388.qm@web65616.mail.ac4.yahoo.com> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: The Avantguardian : > Philosophically, nolipsism bears some resemblance to Buddhism, which is fine from a spiritual point of view. E.g. why fear death when there is no "me" to die, etc. Being an attorney, however, I am sure you are aware of the legal can of worms nolipsism opens up. Human rights are tied to identity. If "I" don't exist, then stealing my stuff or even murdering me is a victimless crime. Doesn't make for a happy outcome in my opinion, especially for libertarians. Probably why the authors back-pedalled from their claims in the conclusion. Hi, my name is Spencer Campbell; I will be your Stefano Vaj for tonight. Are victimless crimes morally, ethically, and legally acceptable? They ARE crimes, so, no to the last one. The first two are arguable. I feel confident that a coherent system of law could be made without the assumption that any selves exist for it to protect. It would essentially treat people as highly valuable property, no different from houses or cars, owned by entities just as imaginary as corporations. Really it could only streamline everything. We should do this. We should do this right now. From nanite1018 at gmail.com Sun Feb 7 06:23:39 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Sun, 7 Feb 2010 01:23:39 -0500 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: <915E9809-F351-43BE-A6A5-10E53227BD42@GMAIL.COM> > Are victimless crimes morally, ethically, and legally acceptable? They > ARE crimes, so, no to the last one. The first two are arguable. I feel > confident that a coherent system of law could be made without the > assumption that any selves exist for it to protect. It would > essentially treat people as highly valuable property, no different > from houses or cars, owned by entities just as imaginary as > corporations. > > Really it could only streamline everything. We should do this. We > should do this right now. -Spencer Campbell Problem: Who do you punish? This imaginary entity that damaged the property of another imaginary entity? If you do it like that, then I don't see any difference between that and a legal system based on actual "selves." And without a victim, there is no crime.
I can't see the purpose of law without individual rights as its basis (rights based on principles derived from the nature of human beings), and if you eliminate the individual, you'll have a hard time justifying anything, ultimately. Corporations are entities made up of people, and they are created and owned and controlled by people. Hence a crime against a corporation is a crime against a group of people (the owners or employees). Without individuals, you can't, say, make laws based on happiness, or prosperity, or anything else, because all of those reference individuals and minds. And obviously "rights" go right out the window. I'll finish reading the article and probably get back later. Joshua Job nanite1018 at gmail.com From stathisp at gmail.com Sun Feb 7 07:09:30 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 7 Feb 2010 18:09:30 +1100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: <684806.48339.qm@web36503.mail.mud.yahoo.com> References: <684806.48339.qm@web36503.mail.mud.yahoo.com> Message-ID: On 7 February 2010 07:55, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >> I don't deny subjective experience but I deny that when I >> understand something I do anything more than associate it with another >> symbol, ultimately grounded in something I have seen in the real >> world. That would seem necessary and sufficient for understanding, and >> for the subjective experience of understanding, such as it is. > > When I asked you about a digital computer that did exactly that, you acknowledged that said computer lacked conscious understanding of the symbol and went off on a tangent about amoebas. > > So then it seems that first you say these sorts of associations are necessary and sufficient for subjective experience of understanding, but then you don't. These sorts of associations are the basic stuff of which understanding is made, but obviously there are degrees of understanding, involving complex syntax and multiple associations. A human's understanding, perception and intelligence stand in the same relationship to a simple computer system's as they do to a simple organism's. > re: the amoeba > > As I use the word "consciousness", I believe the amoeba has none whatsoever. This unconscious creature exhibits intelligent behavior, but because it has no nervous system, I doubt very seriously that it has any conscious experience of living. It looks for food intelligently in the same sense that your watch tells the time intelligently and in the same sense in which weak AI systems may one day have the intelligence needed to pass the Turing test; that is, it has intelligence but no consciousness. The amoeba is not only less conscious than a human, it is also less intelligent. Do you think it is just a coincidence that intelligence and consciousness seem to be directly proportional? A neuron is not essentially different from an amoeba, except in the fact that it cooperates with other neurons to process information that the individual neuron does not understand (rather like the man in the Chinese Room). It is this activity which gives rise to intelligence and consciousness, not anything to do with the biology of the neuron itself.
The biology of the neuron is akin to the workings of an internal combustion engine in a car: it is essential to make the car go and any significant problem with it will make the car stop, but if you replaced the whole thing with an electric motor and battery system of similar characteristics the car would go just as well. -- Stathis Papaioannou From stathisp at gmail.com Sun Feb 7 08:25:43 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 7 Feb 2010 19:25:43 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <156490.82842.qm@web36504.mail.mud.yahoo.com> References: <156490.82842.qm@web36504.mail.mud.yahoo.com> Message-ID: On 7 February 2010 06:27, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >>> In your thought experiment, the artificial >>> program-driven neurons will require a lot of work for the >>> same reason that programming weak AI will require a lot of >>> work. We're not there yet, but it's within the realm of >>> programming possibility. >> >> The artificial neurons (or subneuronal or multineuronal >> structures, it doesn't matter)... > > If it doesn't matter, then let's keep it straightforward and refer to artificial brains rather than to artificial neurons surgically inserted into the midst of natural neurons. This will eliminate a lot of uncertainties that arise from the present state of ignorance about neuroscience. It is a basic requirement of the experiment that the brain replacement be *partial*. This is in order to demonstrate that there is a problem with the idea that a brain part could have normal behaviour but lack consciousness. Once it is demonstrated that the brain parts must have consciousness, it should then be obvious that an entirely artificial brain made out of these parts will also be conscious. It is true that we don't at present have the capability to make such artificial brains or neurons, but I have asked you to assume that we do. Surely this is no more difficult than imagining the Chinese Room! >> exhibit the same behaviour as the natural equivalents, >> but lack consciousness. > > In my view an artificial brain can exhibit the same intelligent behaviors as a natural brain without having subjective mental states, where we define behavior as, for example, acts of speech. > >> That's all you need to know about them: you don't have to worry how >> difficult it was to make them, just that they have been made (provided >> it is logically possible). Now it seems that you allow that such >> components are possible, but then you say that once they are installed >> the rest of the brain will somehow malfunction and needs to be tweaked. >> That is the blatant contradiction: if the brain starts behaving >> differently, then the artificial components lack >> the defining property you agreed they have. > > As above, let's save a lot of confusion and speak of brains rather than individual neurons. Is there anyone out there still following this thread who is confused by my description of the thought experiment or doesn't understand its rationale? Please email me off list if you prefer.
-- Stathis Papaioannou From stathisp at gmail.com Sun Feb 7 08:32:23 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 7 Feb 2010 19:32:23 +1100 Subject: [ExI] The simplest possible conscious system In-Reply-To: <820785.29759.qm@web36504.mail.mud.yahoo.com> References: <820785.29759.qm@web36504.mail.mud.yahoo.com> Message-ID: On 7 February 2010 12:18, Gordon Swobe wrote: > I don't believe in qualia, as such, because the idea implies the possibility of consciousness-without-an-object or consciousness-sans-qualia. Such states seem to me impossible both in theory and in practice. Instead I believe one experiences various qualities or aspects of one's own unified field of consciousness. There isn't any context in which you could use "qualia" where "experience" or "perception" would not do as well. -- Stathis Papaioannou From bbenzai at yahoo.com Sun Feb 7 10:53:22 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 7 Feb 2010 02:53:22 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <455971.33560.qm@web113605.mail.gq1.yahoo.com> Gordon Swobe wrote: --- On Sat, 2/6/10, Aware wrote: > >> A resolution that you're unable to accept due to your >> discomfort with the notion that there is no ESSENTIAL Gordon Swobe to >> experience ESSENTIAL qualia, despite my reassurances that this in no >> way denies the very real Gordon Swobe and his experiences as we AND >> YOU know them. ... >> This is a very old argument, and all the necessary pieces >> of the puzzle are strewn about you. If you use all the >> pieces, they fit together only one way. > I'll ask again: have you ever had a tooth-ache? Gordon, your repeated question shows that you're either ignoring or not getting the point of Jef's reply. If it makes no sense to you, please say so, don't just regurgitate the same question that he is replying to. On the other hand, if you're simply ignoring things that you don't like reading, what's the point of continuing the conversation? Ben Zaiboc From bbenzai at yahoo.com Sun Feb 7 11:04:36 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 7 Feb 2010 03:04:36 -0800 (PST) Subject: [ExI] Nolopsism In-Reply-To: Message-ID: <370344.96441.qm@web113608.mail.gq1.yahoo.com> The Avantguardian wrote: > Philosophically nolipsism bears some resemblance to > Buddhism which is fine from a spiritual point of view. > E.g. why fear death when there is no "me" to die, etc.? > Being an attorney however, I am sure you are aware of > the legal can of worms nolipsism opens up. Human rights > are tied to identity. If "I" don't exist, then > stealing my stuff or even murdering me is a victimless crime. > Doesn't make for a happy outcome in my opinion, > especially for libertarians. > Probably why the authors back-pedalled from their > claims in the conclusion. Philosophically, it may be as you say. Practically, though, it's not really that useful because it makes no actual difference to the way we regard things like fear of death, or to the law. It's in the scientific arena that nolipsism is most useful, because it explains what subjectivity actually is, and clears the nonsense and confusion out of the way. We know, at least in theory, that subjectivity can be built into an artificial mind, and we can finally dump the concept of the 'hard problem' in the bin. The concept of a "de se" designator explains why we don't have souls, not why we shouldn't have property rights.
Ben Zaiboc From nebathenemi at yahoo.co.uk Sun Feb 7 11:21:51 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sun, 7 Feb 2010 11:21:51 +0000 (GMT) Subject: [ExI] Glacier Geoengineering In-Reply-To: Message-ID: <671565.18196.qm@web27001.mail.ukl.yahoo.com> Keith's proposal relies on using a lot of organic liquids with a low boiling point. Keith, how are you proposing to trap the vapor, condense it and re-use it? If this process isn't highly efficient, you get two big problems: 1) you need to add a lot more liquid, which costs energy to make, adding to the expense 2) you have to worry about the environmental problems when the vapor condenses somewhere else. In fact, even with a tiny amount of leakage this can become a problem. I'm not sure how much of these it would take to become toxic rather than mild irritants, but in the volumes needed to freeze glaciers there is a risk of a major spill. Seeing as places with the huge glaciers like Antarctica and Greenland have coastlines with fragile polar ecosystems, I can see this being a problem. In Kim Stanley Robinson's recent trilogy of ecothrillers (Forty Signs of Rain / Fifty Degrees Below / Sixty Days and Counting) one of the protagonists investigates geoengineering for a presidential candidate and advises him in the last book. The scheme they use for direct lowering of sea-levels is pumping sea water on to the West Antarctic where the glaciers are highly stable, and increasing glacier coverage that way. Tom From aware at awareresearch.com Sun Feb 7 13:21:55 2010 From: aware at awareresearch.com (Aware) Date: Sun, 7 Feb 2010 05:21:55 -0800 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: On Sat, Feb 6, 2010 at 9:54 PM, Spencer Campbell wrote: > The Avantguardian : >> Philosophically nolipsism bears some resemblance to Buddhism which is fine from a spiritual point of view. E.g. why fear death when there is no "me" to die, etc.? Being an attorney however, I am sure you are aware of the legal can of worms nolipsism opens up. Human rights are tied to identity. If "I" don't exist, then stealing my stuff or even murdering me is a victimless crime. Doesn't make for a happy outcome in my opinion, especially for libertarians. Probably why the authors back-pedalled from their claims in the conclusion. > > Hi, my name is Spencer Campbell, I will be your Stefano Vaj for tonight. > > Are victimless crimes morally, ethically, and legally acceptable? They > ARE crimes, so, no to the last one. The first two are arguable. I feel > confident that a coherent system of law could be made without the > assumption that any selves exist for it to protect. It would > essentially treat people as highly valuable property, no different > from houses or cars, owned by entities just as imaginary as > corporations. > > Really it could only streamline everything. We should do this. We > should do this right now. Spencer, a more coherent system of justice is possible, and "we" continue to move, in fits and starts, in this direction already for many millennia. But it's not based on abolishment of the self. As Pollock's paper shows, a sense of self is NECESSARY for situated effectiveness. Rather, it's a matter of identification of self over a GREATER sphere of agency.
Increasing agreement on the rightness or "morality" of actions corresponds to the extent such actions are assessed as promoting an increasing context of increasingly coherent, hierarchical, fine-grained, evolving but present (subjective) values, via methods increasingly effective, in principle, over increasing (objective) scope of interaction. Lather, rinse, repeat. Yes, it's a mouthful, and I estimate it takes several hundred pages to unpack in order to accommodate the priors of most here. - Jef From jonkc at bellsouth.net Sun Feb 7 16:54:32 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 7 Feb 2010 11:54:32 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <684806.48339.qm@web36503.mail.mud.yahoo.com> References: <684806.48339.qm@web36503.mail.mud.yahoo.com> Message-ID: <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net> Since my last post Gordon Swobe has posted 5 times. > I consider myself a materialist Gordon considers himself a rationalist too, but we all know that is not the case. > but in the reaction against mind/matter dualism some of my fellow materialists (e.g., Dennett) go overboard and irrationally deny the plain facts of subjective experience BULLSHIT. Dennett is not a fool and only a fool would deny subjective experience > As I use the word "consciousness", I believe the amoeba has none whatsoever. This unconscious creature exhibits intelligent behavior but [...] But what? If Gordon believes an amoeba acts intelligently (something I will not defend) then he has absolutely no reason to believe it is not conscious, except perhaps for a mysterious little voice in his head whispering otherwise. And let me repeat for the 422nd time that Gordon's ideas and Darwin's are 100% incompatible. If consciousness and intelligence are not linked then science has no explanation how consciousness came to be on planet Earth, and yet we know with absolute certainty that it did at least once and probably many billions of times. People, this is not a minor point, this is a show stopper as far as Gordon's ideas are concerned. Charles Darwin had the single best idea that any human being ever had and it is at the center of all the biological sciences. Either Gordon Swobe is the greatest genius the human race has ever produced or the man is dead wrong. I must confess to being a little disappointed in Extropians because I seem to be the only one who sees how important Evolution is; instead they dispute Gordon on some obscure point in his latest ridiculous thought experiment. Real experiments take precedence over thought experiments and planet Earth has been conducting one for the last 3 billion years. The results of that experiment are unambiguous, consciousness and intelligence MUST be linked and if you or me or Gordon doesn't understand exactly how that could be it doesn't change the fact that it is. John K Clark From max at maxmore.com Sun Feb 7 17:23:48 2010 From: max at maxmore.com (Max More) Date: Sun, 07 Feb 2010 11:23:48 -0600 Subject: [ExI] The Audacious beauty of our future - Natasha Vita-More, an interview Message-ID: <201002071750.o17Hoark026457@andromeda.ziaspace.com> It's good to see an extensive and informative interview that's also presented in such an appealing format. Congrats! I haven't come across this site before, but it's nicely designed and appears to have some other intriguing content.
http://spacecollective.org/Wildcat/5527/The-Audacious-beauty-of-our-future-Natasha-VitaMore-an-interview Max From gts_2000 at yahoo.com Sun Feb 7 18:58:26 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 7 Feb 2010 10:58:26 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <455971.33560.qm@web113605.mail.gq1.yahoo.com> Message-ID: <82654.40894.qm@web36501.mail.mud.yahoo.com> How about you, Ben? Have you ever had a toothache? Was it a real toothache? Or was it just an illusion? -gts From bbenzai at yahoo.com Sun Feb 7 19:02:21 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 7 Feb 2010 11:02:21 -0800 (PST) Subject: [ExI] Blue Brain Project In-Reply-To: Message-ID: <558651.23421.qm@web113616.mail.gq1.yahoo.com> Anyone not familiar with this can read about it here: http://seedmagazine.com/content/article/out_of_the_blue/P1/ The next ten years should be interesting! Ben Zaiboc From gts_2000 at yahoo.com Sun Feb 7 19:30:19 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 7 Feb 2010 11:30:19 -0800 (PST) Subject: [ExI] The simplest possible conscious system In-Reply-To: Message-ID: <487344.53568.qm@web36503.mail.mud.yahoo.com> --- On Sun, 2/7/10, Spencer Campbell wrote: > This seems to me a good argument for the idea that all > consciousness is that of a subject being aware of an object. Yes. > I'd have said that before, but my source regarding that particular state > (from Ken Wilber, I think) wasn't very specific on the matter. Mystics including Wilber like to talk about such things as consciousness-without-an-object. I believe they're misguided. > The vast-consciousness-without-content state does seem to > contradict this theory, though, from what little I know about it. I've > heard it described (probably by Ken Wilber again) as a void which is > only aware of itself. Speaking only from my own experience, I can tell you that there does exist for me a state that *seems* like consciousness-without-content. I can see how those who have mystical philosophical biases might interpret that state as consciousness of the "void" or some such. I have no such mystical bias (although I once did) and I believe that, in reality, the experience to which I refer above represents only a very clear and silent state of mind. It appears typically in the moment following that during which subject-object actually does disappear -- after the transcendent moment itself -- and is easily mistaken for it. -gts From spike66 at att.net Sun Feb 7 19:35:34 2010 From: spike66 at att.net (spike) Date: Sun, 7 Feb 2010 11:35:34 -0800 Subject: [ExI] How not to make a thought experiment In-Reply-To: <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net> References: <684806.48339.qm@web36503.mail.mud.yahoo.com> <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net> Message-ID: <8AF4B8E496854465AAD96F0C0D14B279@spike> ...On Behalf Of John Clark ...I must confess to being a little disappointed in Extropians because I seem to be the only one who sees how important Evolution is... John K Clark John, you speak great blammisphies! I practically worship evolution. Or so I am told by the religionistas. This is my contribution to your meme: one knows not misery and cognitive dissonance until one has been an avid observer of wildlife *without* Darwin's dangerous ideas. In my own misspent youth, not really knowing anything about evolution, I struggled to understand how these observations could even be possible.
They didn't talk about Darwin in the public schools in those days, not even in the biology classes. I listened carefully in biology, always enjoying it, actually reading the textbook even, but I sure do not recall any discussion of evolution. My first real exposure to the notion was from Carl Sagan's Cosmos, in my third year of college. (!) One just cannot get nature without the concept of evolution. Those who have always assumed evolution may not realize what it is like to be an observer of the wilderness without the knowledge of evolution. Learning of Darwin was like being intellectually born again, and doing it right the second time. spike From gts_2000 at yahoo.com Sun Feb 7 20:07:12 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 7 Feb 2010 12:07:12 -0800 (PST) Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: Message-ID: <445777.14858.qm@web36505.mail.mud.yahoo.com> --- On Sun, 2/7/10, Stathis Papaioannou wrote: >> As I use the word "consciousness", I believe the > amoeba has none whatsoever. This unconscious creature > exhibits intelligent behavior but because it has no nervous > system I doubt very seriously that it has any conscious > experience of living. It looks for food intelligently in the > same sense that your watch tells the time intelligently and > in the same sense in which weak AI systems may one day have > the intelligence needed to pass the Turing test; that is, it > has intelligence but no consciousness. > > The amoeba is not only less conscious than a human, it is > also less intelligent. The amoeba has no neurons or nervous system, Stathis, so "less conscious" is an understatement. It has no consciousness at all. -gts From lacertilian at gmail.com Sun Feb 7 20:47:20 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 12:47:20 -0800 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: Aware : > Spencer, a more coherent system of justice is possible, and "we" > continue to move, in fits and starts, in this direction already for > many millennia. > > But it's not based on abolishment of the self. As Pollock's paper > shows, a sense of self is NECESSARY for situated effectiveness. > > Rather, it's a matter of identification of self over a GREATER sphere of agency. Right: corporations. Total abolishment of the self would ultimately require doing away with recognition of discrete objects entirely, which doesn't make a lot of sense. I would want a system that treats "selves" as legal fabrications, not one which denies their conceptual validity. Phew, almost said "existence" there. Shaky ground. Aware : > Increasing agreement on the rightness or "morality" of actions > corresponds to the extent such actions are assessed as promoting an > increasing context of increasingly coherent, hierarchical, > fine-grained, evolving but present (subjective) values, via methods > increasingly effective, in principle, over increasing > (objective) scope of interaction. Lather, rinse, repeat. > > Yes, it's a mouthful, and I estimate it takes several hundred pages to > unpack in order to accommodate the priors of most here. You're correct: for me, it's more than a mouthful. You seem to be giving a definition for "increasing agreement on the rightness or 'morality' of actions", but I can't figure out exactly how that bears on the discussion.
It wasn't a point of contention. We seem to be making incremental progress toward Kantland. It's only a matter of time before we come across a roaming herd of categorical imperatives. From kanzure at gmail.com Sun Feb 7 20:49:15 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Sun, 7 Feb 2010 14:49:15 -0600 Subject: [ExI] Blue Brain Project In-Reply-To: <558651.23421.qm@web113616.mail.gq1.yahoo.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> Message-ID: <55ad6af71002071249q64f8f7d2yc4017d23457856a9@mail.gmail.com> On Sun, Feb 7, 2010 at 1:02 PM, Ben Zaiboc wrote: > Anyone not familiar with this can read about it here: > http://seedmagazine.com/content/article/out_of_the_blue/P1/ > > The next ten years should be interesting! I know I mentioned these links a few days ago, but it's worth repeating. Noah Hutton is making a documentary: http://thebeautifulbrain.com/2010/02/bluebrain-film-preview/ "We are very proud to present the world premiere of BLUEBRAIN: Year One, a documentary short which previews director Noah Hutton's 10-year film-in-the-making that will chronicle the progress of The Blue Brain Project, Henry Markram's attempt to reverse-engineer a human brain. Enjoy the piece and let us know what you think." There's a longer video that explains what he's up to. The Emergence of Intelligence in the Neocortical Microcircuit http://video.google.com/videoplay?docid=-2874207418572601262&ei=lghrS6GmG4jCqQLA1Yz7DA - Bryan http://heybryan.org/ 1 512 203 0507 From stefano.vaj at gmail.com Sun Feb 7 20:50:56 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 7 Feb 2010 21:50:56 +0100 Subject: [ExI] The simplest possible conscious system In-Reply-To: References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com> Message-ID: <580930c21002071250u1acda46dnfab88c16064f1445@mail.gmail.com> On 7 February 2010 01:11, Spencer Campbell wrote: > Stefano Vaj : >> If "conscious" is taken to mean "exhibiting the same information >> processing features of an average adult, healthy, alert, educated >> human being" the theoretical answer for me corresponds to the question >> "what is the simplest possible universal computer". > > Of course "conscious" is not taken to mean that here, nor anything > like that, if only for the reasons pointed out by James Choate two or > three days ago. Humans are ludicrously complex examples of > consciousness (assuming they are conscious at all), and almost > completely useless for finding an answer to the question. No, this was not my point. What I meant is "if conscious does not mean to be entrusted with a soul/operated by an organic brain/being bipedal". I have no doubt that consciousness, which is the (one of the?) vague concept(s) which we adopt to indicate human alertness, can be exhibited by *any* universal computer. That is, if reasonable performance is not a requirement. -- Stefano Vaj From lacertilian at gmail.com Sun Feb 7 21:05:40 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 13:05:40 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <156490.82842.qm@web36504.mail.mud.yahoo.com> Message-ID: Stathis Papaioannou : > Is there anyone out there still following this thread who is confused > by my description of the thought experiment or doesn't understand its > rationale? Please email me off list if you prefer. Seems pretty clear to me, as a neuron-by-neuron replacement is precisely what I've wanted for the past two to five years.
I would advise phrasing it again, simply and concisely, because (a) what you have in mind may have changed since you last did so, (b) I might have overwritten your description with my own, and (c) the point on which Gordon disagrees remains a total mystery. Incidentally, I had a toothache last night. Not that anyone asked, but it was an illusion and I was very frustrated that I couldn't dispel it. Gordon, time for the true or false game: there is a difference between real toothaches and illusionary toothaches. My answer is "false", and I get the impression that yours will be "true". If so, why? How? Is there any way to measure realness in a toothache, scientifically? If not, is it possible to distinguish between the two through purely subjective experience? To both of these questions I give, of course, a decisive "no". But maybe you mean something by the word "illusion" that I haven't yet grasped. Currently, I am using a definition borrowing partly from Buddhism and partly from Dungeons & Dragons. If anyone knows of another dimension I've missed, do tell. From lacertilian at gmail.com Sun Feb 7 21:13:44 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 13:13:44 -0800 Subject: [ExI] Nolopsism In-Reply-To: <915E9809-F351-43BE-A6A5-10E53227BD42@GMAIL.COM> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> <915E9809-F351-43BE-A6A5-10E53227BD42@GMAIL.COM> Message-ID: JOSHUA JOB : > Problem: Who do you punish? This imaginary entity that damaged the property of another imaginary entity? If you do it like that, then I don't see any difference between that and a legal system based on actual "selves." And without a victim, there is no crime. I can't see the purpose of law without individual rights as its basis (rights based on principles derived from the nature of human beings), and if you eliminate the individual, you'll have a hard time justifying anything, ultimately. Solution: No one needs to be punished. In theory, the only justification for legal punishment right now is to modify future behavior on a societal scale. There are far more effective and less draconian methods of doing this. See: Norwegian open prisons. http://www.globalpost.com/dispatch/europe/091017/norway-open-prison There are other justifications which make punishment a much more attractive option. Prisoners of war in ancient Rome were made to fight as gladiators. Justification: entertainment. An eye-for-an-eye system is self-justifying: justice for the sake of justice. JOSHUA JOB : > Corporations are entities made up of people ultimately, and they are created and owned and controlled by people. Hence a crime against a corporation is a crime against a group of people (the owners or employees). Without individuals, you can't, say, make laws based on happiness, or prosperity, or anything else, because all of those reference individuals and minds. And obviously "rights" go right out the window. A measurement of prosperity need make no reference to individuals or minds unless corporations, countries, and planets count as individuals. You could define the Earth's prosperity as equivalent to its biodiversity, for example, and just start tracking all the DNA. I don't know why you would, but you could. My argument against happiness is the same as my argument against punishment: it is valuable only as a tool for behavioral modification, heartless as that may sound. Look at how happiness evolved.
It's just an arbitrary reward for survival. This is the attitude with which I regard my own happiness, and it doesn't seem to impair me in any way practical or philosophical. Finally: obviously "rights" don't go out the window at all! In fact, we would only have more of them. A brand-new car would have the right not to be crushed into a tiny cube, because such would be blatantly wasteful and wrong. Similarly, a brand-new human would have the same right, but a totaled junker or a corpse would not. From aware at awareresearch.com Sun Feb 7 21:03:42 2010 From: aware at awareresearch.com (Aware) Date: Sun, 7 Feb 2010 13:03:42 -0800 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: On Sun, Feb 7, 2010 at 12:47 PM, Spencer Campbell wrote: > Aware : >> Spencer, a more coherent system of justice is possible, and "we" >> continue to move, in fits and starts, in this direction already for >> many millennia. >> >> But it's not based on abolishment of the self. As Pollock's paper >> shows, a sense of self is NECESSARY for situated effectiveness. >> >> Rather, it's a matter of identification of self over a GREATER sphere of agency. > > Right: corporations. Total abolishment of the self would ultimately > require doing away with recognition of discrete objects entirely, > which doesn't make a lot of sense. I would want a system that treats > "selves" as legal fabrications, not one which denies their conceptual > validity. > > Phew, almost said "existence" there. Shaky ground. > > Aware : >> Increasing agreement on the rightness or "morality" of actions >> corresponds to the extent such actions are assessed as promoting >> an increasing context of increasingly coherent, hierarchical, >> fine-grained, present but evolving (subjective) values, via >> methods increasingly effective, in principle, over increasing >> (objective) scope of interaction. Lather, rinse, repeat. >> >> Yes, it's a mouthful, and I estimate it takes several hundred pages to >> unpack in order to accommodate the priors of most here. > > You're correct: for me, it's more than a mouthful. You seem to be > giving a definition for "increasing agreement on the rightness or > 'morality' of actions", but I can't figure out exactly how that bears > on the discussion. It wasn't a point of contention. I saw you wrote "I feel confident that a coherent system of law could be made without the assumption that any selves exist for it to protect." I responded to that without recognizing the sarcasm that became obvious further down. Sorry. > We seem to be making incremental progress toward Kantland. It's only a > matter of time before we come across a roaming herd of categorical > imperatives. I'm talking about an upgrade to Kant's Categorical Imperative. Its most significant weakness is its lack of evolutionary perspective. - Jef From lacertilian at gmail.com Sun Feb 7 21:33:20 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 13:33:20 -0800 Subject: [ExI] The simplest possible conscious system In-Reply-To: <580930c21002071250u1acda46dnfab88c16064f1445@mail.gmail.com> References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com> <580930c21002071250u1acda46dnfab88c16064f1445@mail.gmail.com> Message-ID: Stefano Vaj : > No, this was not my point.
What I meant is "if conscious does not > mean to be entrusted with a soul/operated by an organic brain/being > bipedal". I was thinking you meant "having thoughts, feelings, opinions, sensory experiences, and every other feature common to the information processing of all adult, healthy, alert, educated human beings". That seemed extremely excessive to me, as a description for the minimum prerequisites of consciousness, and thus the strong reaction. This second definition of yours is deductive, not inductive, so it doesn't tell me much: only what you think consciousness isn't, not what it is. From lacertilian at gmail.com Sun Feb 7 22:24:57 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 14:24:57 -0800 Subject: [ExI] How not to make a thought experiment In-Reply-To: <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net> References: <684806.48339.qm@web36503.mail.mud.yahoo.com> <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net> Message-ID: I am now officially arguing with John Clark. Cover your eyes, everybody, 'cause this is gonna get real ugly, real fast. John Clark : > Since my last post Gordon Swobe has posted 5 times. > > I consider myself a materialist > > Gordon considers himself a rationalist too, but we all know that is not the > case. False. I consider him a rationalist. Therefore, we do not all know that is not the case. Rationalism dictates that reason alone is sufficient to produce knowledge. Gordon trusts absolutely in his powers of reason, even when the material evidence contradicts him: the very model of a rationalist! > But what? If Gordon believes an amoeba acts intelligently (something I will > not defend) then he has absolutely no reason to believe it is not conscious, > except perhaps for a mysterious little voice in his head whispering > otherwise. Point one: leukocytes, such as the one in the (fantastic) video that's been floating around here, obviously behave intelligently when it comes to chasing down and devouring foreign bodies. It does not make sense to say they don't. And no, I will not repeat my definition of intelligence again! Point two: all beliefs are mysterious little voices whispering in someone's head. If you believe otherwise, clearly you are delusional and/or have never read the relevant Lewis Carroll story. http://www.ditext.com/carroll/tortoise.html Point three: if you really must insist on a logical-sounding reason to claim that a leukocyte might be intelligent but not conscious, I suggest you ask Gordon for one directly rather than trying to goad him into defending his honor. Of course this assumes you're actually trying to further the discussion, rather than simply shouting libel into the aether for its own sake (as I am now). > And let me repeat for the 422nd time that Gordon's ideas and Darwin's are 100% > incompatible. I take it you've read On the Origin of Species. I have not. Quote a passage that contradicts Gordon's ideas, please? I have my doubts that he says much about consciousness directly, but if you can even come up with something IMPLICITLY incompatible I would be impressed. > If consciousness and intelligence are not linked then science > has no explanation how consciousness came to be on planet Earth, and yet we > know with absolute certainty that it did at least once and probably many > billions of times. Do we, John? Do we know that? With absolute certainty, no less. > People, this is not a minor point, this is a show stopper > as far as Gordon's ideas are concerned.
Charles Darwin had the single best > idea that any human being ever had and it is at the center of all the > biological sciences. Either Gordon Swobe is the greatest genius the human > race has ever produced or the man is dead wrong. If you say so, if you say so, and if you say so, respectively. No one in the history of the universe has ever come anywhere close to John Clark's astounding mastery of hyperbole. If the loudest philosopher wins, we're done. > I must confess to being a little disappointed in Extropians because I seem > to be the only one who sees how important Evolution is; instead they dispute > Gordon on some obscure point in his latest ridiculous thought experiment. It disturbs me on a visceral level that you capitalize the E in "evolution" like that, but I'll disregard it for the purposes of argument. Of course evolution is important. In general. However, it isn't the least bit important for someone concerned with comprehending the logic of Gordon Swobe well enough to start agreeing with him or to show him where he went wrong. It does not matter whether or not a given thought experiment is ridiculous if the person who came up with it believes it isn't, and the only way to even THEORETICALLY convince them otherwise is to pick at the most obscure points in it. You can expect they've already noticed the least obscure. > Real experiments take precedence over thought experiments and planet Earth > has been conducting one for the last 3 billion years. The results of that > experiment are unambiguous, consciousness and intelligence MUST be linked > and if you or me or Gordon doesn't understand exactly how that could be it > doesn't change the fact that it is. One: life is not an experiment. Two: the earth can not conduct experiments. Three: I'm sorry but I can't stop myself from using a stereotypical southern creationist voice when I read that paragraph. Reel experamints teyk press-a-dents. Not to a rationalist, they don't. From stathisp at gmail.com Sun Feb 7 22:54:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 8 Feb 2010 09:54:47 +1100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: <445777.14858.qm@web36505.mail.mud.yahoo.com> References: <445777.14858.qm@web36505.mail.mud.yahoo.com> Message-ID: On 8 February 2010 07:07, Gordon Swobe wrote: > The amoeba has no neurons or nervous system, Stathis, so "less conscious" is an understatement. It has no consciousness at all. As far as you're concerned the function of the nervous system - intelligence - which is due to the interactions between neurons, bears no essential relationship to consciousness. You believe that consciousness is a property of certain specialised cells. That leaves open the possibility that amoebae have these properties also, despite their apparent lack of intelligence. -- Stathis Papaioannou From lacertilian at gmail.com Mon Feb 8 00:08:24 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 16:08:24 -0800 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: Aware : >Spencer Campbell : >> You're correct: for me, it's more than a mouthful. You seem to be >> giving a definition for "increasing agreement on the rightness or >> 'morality' of actions", but I can't figure out exactly how that bears >> on the discussion. It wasn't a point of contention.
> > I saw you wrote "I feel confident that a coherent system of law could > be made without the assumption that any selves exist for it to protect." > > I responded to that without recognizing the sarcasm that became > obvious further down. Sorry. Apology unnecessary, but accepted. To be fair, it was half-sarcasm. On the Internet, I try to avoid saying things that I don't at least partially believe are true. Aware : >Spencer Campbell : >> We seem to be making incremental progress toward Kantland. It's only a >> matter of time before we come across a roaming herd of categorical >> imperatives. > > I'm talking about an upgrade to Kant's Categorical Imperative. Its > most significant weakness is its lack of evolutionary perspective. Request that you start a new thread to elaborate on that. If I understand you correctly, which I suspect I do not, you're implying that evolution is purpose-generating. To my mind this is a massive teleological mistake; the evolution of life on Earth is just one elaborate ongoing accident, not a directed effort toward better things. You know what humanity has evolved lately? The ability to burn fat faster. http://www.cracked.com/blog/lose-weight-the-natural-way-by-slowly-evolving I'm not sure how I feel about Cracked.com being the first hit for "brown fat evolution" on Google, but there you go. We have evolved to become less fuel-efficient. Not in exchange for higher peak power output or anything. Just because. From max at maxmore.com Mon Feb 8 00:23:17 2010 From: max at maxmore.com (Max More) Date: Sun, 07 Feb 2010 18:23:17 -0600 Subject: [ExI] Theories, medicine, and government Message-ID: <201002080023.o180NQdN028663@andromeda.ziaspace.com> I liked this quote from Taleb's excellent book: "A theory is like medicine (or government): often useless, sometimes necessary, always self-serving, and on occasion lethal. So it needs to be used with care, moderation, and close adult supervision." -- Taleb, The Black Swan, p. 285. Max From lacertilian at gmail.com Mon Feb 8 00:56:24 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 16:56:24 -0800 Subject: [ExI] Theories, medicine, and government In-Reply-To: <201002080023.o180NQdN028663@andromeda.ziaspace.com> References: <201002080023.o180NQdN028663@andromeda.ziaspace.com> Message-ID: Max More : > I liked this quote from Taleb's excellent book: > > "A theory is like medicine (or government): often useless, sometimes > necessary, always self-serving, and on occasion lethal. So it needs to be > used with care, moderation, and close adult supervision." > -- Taleb, The Black Swan, p. 285. > > Max Magnificent. This does imply that I am roughly equivalent to a morphine addict (or senator), of course. From max at maxmore.com Mon Feb 8 03:19:14 2010 From: max at maxmore.com (Max More) Date: Sun, 07 Feb 2010 21:19:14 -0600 Subject: [ExI] The Surrogates, graphic novel Message-ID: <201002080325.o183PRUR020245@andromeda.ziaspace.com> I have added this comment on The Surrogates to my list of "Comics of Transhumanist Interest" here: The Surrogates, by Robert Venditti & Brett Weldele I found the movie mildly entertaining, but not terribly engaging or intellectually stimulating. The original graphic novel is a little more interesting. I didn't much like the illustration by Weldele, though it might be more to your taste (too lacking in detail for my liking).
The strongest parts, for me, were the fictional ads for surrogate bodies (which seemed to have very much in common with Natasha Vita-More's earlier "Primo Posthuman") and related text on the ad campaign. We're all extremely familiar with the idea of virtual bodies in virtual space. Surrogates differs from the standard by envisioning a world of physical surrogate bodies that often look like de-aged and enhanced versions of people's "real" physical bodies. The tone is lightly anti-transhumanist, alas. In reality, transhumanists might like to have such surrogate bodies, but surely they would also prefer to enhance their primary bodies, rather than to leave their sluggish, slobbish physical primaries stacked ungainly in the closet. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From avantguardian2020 at yahoo.com Mon Feb 8 03:11:54 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 7 Feb 2010 19:11:54 -0800 (PST) Subject: [ExI] Nolopsism Message-ID: <292696.68811.qm@web65611.mail.ac4.yahoo.com> ----- Original Message ---- > From: Ben Zaiboc > To: extropy-chat at lists.extropy.org > Sent: Sun, February 7, 2010 3:04:36 AM > Subject: Re: [ExI] Nolopsism > Philosophically, it may be as you say. Practically, though, it's not really > that useful because it makes no actual difference to the way we regard things > like fear of death, or to the law. > > It's in the scientific arena that nolipsism is most useful, because it explains > what subjectivity actually is, and clears the nonsense and confusion out of the > way. How so? Siddhartha Gautama said that the self was an illusion some 3000 years ago, only he called it the skandha of consciousness instead of a "de se designator". What makes nolopsism more scientifically useful than Buddhism? Are you suggesting that by sweeping consciousness under a rug, it can be scientifically ignored? I can imagine the dialog: Chalmers: What is the neurological basis of phenomenal consciousness? Pollock: Phenomenal consciousness doesn't actually exist. It is simply a necessary illusion of subjectivity. It allows you to think about yourself without knowing anything about yourself. Which would be useful if you went on a bender and passed out in the Stanford Library. That is, of course, if there were a you to do the thinking and a you to think about. Which there isn't. But human minds weren't meant to go there, so you can pretend to exist if you want. Julie Andrews [waltzing by]: Me... the name... I call myself... Chalmers: Oookaaay... So what is the neurological basis of the *illusion* of phenomenal consciousness? Pollock: [glances at watch] Well would you look at the time. It's been nice chatting but I must be going now. > We know, at least in theory, that subjectivity can be built into an > artificial mind, and we can finally dump the concept of the 'hard problem' in > the bin. So you think that programming a computer to falsely believe itself to be conscious is easier than to program one to actually be so. Or do you think that programming a computer to use "de se" designators necessarily makes it think itself conscious? A person could get by without the use of "de se" designators yet still retain a sense of self.
It might sound funny, but a person could consistently refer to themselves in the third person by name even in their thoughts. Stuart doesn't think that "de se" designators are particularly profound. Stuart doesn't need them. Do you see what Stuart means? > The concept of a "de se" designator explains why we don't have souls, not why we > shouldn't have property rights. Property rights are no less abstract than souls. Neither seems to have a physical basis beyond metaphysical/philosophical fiat. Communists tend not to believe in either. Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From Frankmac at ripco.com Mon Feb 8 03:43:08 2010 From: Frankmac at ripco.com (Frank McElligott) Date: Sun, 7 Feb 2010 22:43:08 -0500 Subject: [ExI] belief in Karma, Message-ID: <001e01caa870$d8f5ee80$ad753644@sx28047db9d36c> In 2005 the city of New Orleans was under water. I was there during that time helping out, and yes it was REALLY under water and dying a slow death. Now within 5 years from that event that city can rejoice in their football team as a sign from the CONTROLLER, that's a new name for you know who, that the worst is now over, and from now on everything will be on the upside instead of on the slope to nowhere. Now if Haiti had a football team... By the by, the sun is acting up again: large solar flares, in the "X" range, are expected to peak this week. There are some people who believe it drives people to do strange things, and to become very, very fearful here on earth. Watch the stock market go crazy this week, not because of Greece, it's the sun doing the damage. The flares affect the magnetic waves and telephone communications I am told, but I am retired without a job so what do I know? Frank From max at maxmore.com Mon Feb 8 04:01:27 2010 From: max at maxmore.com (Max More) Date: Sun, 07 Feb 2010 22:01:27 -0600 Subject: [ExI] Joy in the transcendence of the species Message-ID: <201002080401.o1841ase025933@andromeda.ziaspace.com> I know many of you are fascinated by the intricacies and peculiarities of language. Perhaps you might be motivated to help me figure this out: I have long enjoyed the word "Schadenfreude", meaning pleasure derived from the misfortunes of others. [Note: I've enjoyed the *term* "schadenfreude", not the thing it refers to.] I got to thinking what word (if one could be coined) would mean "pleasure in the transcendence of the species" (i.e., transcendence of the human condition). It may be asking a lot of a word (even a German word) to do all this work, but I'd like to give it a try. According to Wikipedia: The Buddhist concept of mudita, "sympathetic joy" or "happiness in another's good fortune," is cited as an example of the opposite of schadenfreude. However, that doesn't do the job that I'm looking for. On a first stab, exclusively thinking about German, I came up with the rather unsatisfactory "erhabenheitschaude" which would mean (I think) joy in transcendence. That's part of what I'm looking for, but doesn't fit the bill. Any thoughts? (Please, *anything* to dilute the Searle/semantics/syntax discussion...) Max ------------------------------------- Max More, Ph.D.
Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From hkeithhenson at gmail.com Mon Feb 8 04:23:42 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 7 Feb 2010 21:23:42 -0700 Subject: [ExI] extropy-chat Digest, Vol 77, Issue 12 In-Reply-To: References: Message-ID: On Sun, Feb 7, 2010 at 5:00 AM, Tom Nowell wrote: > Keith's proposal relies on using a lot of organic liquids with a low boiling point. > > Keith, how are you proposing to trap the vapor, condense it and re-use it? If this process isn't highly efficient, you get two big problems: Heat pipes are sealed, passive devices. It's hard to put a number on their efficiency other than 100%. http://en.wikipedia.org/wiki/Heat_pipe > 1) you need to add a lot more liquid, which costs energy to make, adding to the expense > 2) you have to worry about the environmental problems when the vapor condenses somewhere else. In fact, even with a tiny amount of leakage this can become a problem. I'm not sure how much of these it would take to become toxic rather than mild irritants, but in the volumes needed to freeze glaciers there is a risk of a major spill. Seeing as places with the huge glaciers like Antarctica and Greenland have coastlines with fragile polar ecosystems, I can see this being a problem. Propane and ammonia (two good choices) don't cause environmental problems in the small amounts used. > In Kim Stanley Robinson's recent trilogy of ecothrillers (Forty Signs of Rain / Fifty Degrees Below / Sixty Days and Counting) one of the protagonists investigates geoengineering for a presidential candidate and advises him in the last book. The scheme they use for direct lowering of sea-levels is pumping sea water on to the West Antarctic where the glaciers are highly stable, and increasing glacier coverage that way. I think you mean East Antarctic but I am not sure this would be a good idea. The salt in seawater might cause the glaciers to soften and slide off into the ocean. Of course, if you are going to pump water upwards more than a few hundred meters, it only costs 200-300 meters of head to take the salt out with osmosis. Interesting concept though. To put numbers on it, the area of the earth is ~5.1 x 10E14 square meters. 3/4 of that is water, so ~3.8 x 10E14 square meters. To lower the oceans by a meter in a year would require pumping at 1.21 x 10E7 cubic meters per second. 12,100,000 cubic meters per second. Hmm. The flow of the Amazon is 219,000 cubic meters per second, so it would take 55 times the flow of the Amazon. Pumping it up some 3000 meters to the ice sheet would take considerable energy: P = rho*Q*g*h*1/pump efficiency (0.9), where rho is the density of water, 1000 kg per cubic meter. 1000*1.21*10E7*9.8*3000/0.9 = ~4 x 10E14 W, about 400 TW. Some 400,000 one-GW reactors would do the job. (Please check this number.) Keith > Tom
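A quick sanity check of the arithmetic above, as a minimal sketch in Python (it assumes a water density of 1000 kg/m^3 and the flow and head figures quoted in the message; the variable names are illustrative only):

# Flow needed to lower the oceans by 1 m in a year
ocean_area = 0.75 * 5.1e14                 # m^2: 3/4 of Earth's ~5.1e14 m^2 surface
seconds_per_year = 3.156e7
Q = ocean_area * 1.0 / seconds_per_year    # volumetric flow, m^3/s
print(Q)                                   # ~1.21e7 m^3/s, matching the figure above

# Hydraulic power to lift that flow 3000 m at 90% pump efficiency
rho, g, h, eta = 1000.0, 9.8, 3000.0, 0.9  # density, gravity, head, efficiency
P = rho * Q * g * h / eta                  # power in watts
print(P)                                   # ~4e14 W, i.e. roughly 400 TW

The density factor rho is what keeps the result in terawatts; dropping it from the formula is what would give ~400 GW instead.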
From spike66 at att.net Mon Feb 8 04:34:26 2010 From: spike66 at att.net (spike) Date: Sun, 7 Feb 2010 20:34:26 -0800 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: <201002080401.o1841ase025933@andromeda.ziaspace.com> References: <201002080401.o1841ase025933@andromeda.ziaspace.com> Message-ID: <12C91DE16C6A47ADB3E4696EC08C4B18@spike> > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Max More > ...I came up with > the rather unsatisfactory "erhabenheitschaude" which would > mean (I think) joy in transcendence. That's part of what I'm > looking for, but doesn't fit the bill. Any thoughts? Erhebungaufregung? Approximately ~ horny for uplifting. > > (Please, *anything* to dilute the Searle/semantics/syntax > discussion...) > > Max Hey I tried to give them Valentine's Day advice. Did they laugh at my jokes? No! They all seem to have slammed the whole bottle of serious pills lately. I do commend the Searlers for at least maintaining a most civil tone throughout the marathon discussion however. You guys were aware that occasional lighthearted silliness is an extropian tradition, ja? spike From aware at awareresearch.com Mon Feb 8 04:50:41 2010 From: aware at awareresearch.com (Aware) Date: Sun, 7 Feb 2010 20:50:41 -0800 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: <12C91DE16C6A47ADB3E4696EC08C4B18@spike> References: <201002080401.o1841ase025933@andromeda.ziaspace.com> <12C91DE16C6A47ADB3E4696EC08C4B18@spike> Message-ID: On Sun, Feb 7, 2010 at 8:34 PM, spike wrote: > You guys were aware that occasional lighthearted silliness is an extropian > tradition, ja? I was certainly Aware... I suppose I could confuse those who get my position, and reinforce the thinking of those who don't by appropriating the old Behaviorist gag on the inaccessibility of subjective experience: Jef: It was good for you; was it good for me? Gordon: Don't bother me. I have a headache... Or maybe we could spend the next couple of months debating where one's lap goes when one stands up. - Jef From nanite1018 at gmail.com Mon Feb 8 05:10:07 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Mon, 8 Feb 2010 00:10:07 -0500 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> <915E9809-F351-43BE-A6A5-10E53227BD42@GMAIL.COM> Message-ID: On Feb 7, 2010, at 4:13 PM, Spencer Campbell wrote: > Solution: No one needs to be punished. In theory, the only > justification for legal punishment right now is to modify future > behavior on a societal scale. There are far more effective and less > draconian methods of doing this... I agree that the point of a justice system is to prevent the violation of people's rights, and whatever means are best for doing this are obviously to be preferred (such as, I suppose, open prisons, though I am wary of such a thing). > A measurement of prosperity need make no reference to individuals or > minds unless corporations, countries, and planets count as > individuals... My argument against happiness is the same as my argument against > punishment: it is valuable only as a tool for behavioral modification, > heartless as that may sound. Look at how happiness evolved. It's just > an arbitrary reward for survival. This is the attitude with which I > regard my own happiness, and it doesn't seem to impair me in any way > practical or philosophical. Seeing as how the goal of all living organisms is to live (as any other goal leads to death and the end of all possible goals), and we are conscious entities (whatever that means in nolipsism, even if it is an imaginary thing in the imagination of an imaginary thing, haha) that need motivation to survive, emotions, including happiness, are an important part of that. Sure, it is behavior modification, of a kind, but it is still important to human beings, at least for now (and I hope indefinitely). > Finally: obviously "rights" don't go out the window at all! In fact, > we would only have more of them.
A brand-new car would have the right > not to be crushed into a tiny cube, because such would be blatantly > wasteful and wrong. Similarly, a brand-new human would have the same > right, but a totaled junker or a corpse would not. A car can't have rights because it isn't self-aware. It isn't even alive. It doesn't have choices, and nothing matters to it. It doesn't have the quality of being this weird self-referencing thing with a de se operator; it lacks the capacity of reason. And thus, it can't have any rights. A corpse isn't alive or rational or aware either. A brand-new human is, or at least will be in short order. Proliferating rights in the manner you suggest devalues the word and destroys its meaning, allowing evil people to appropriate it for their own ends, as they did in socialist countries all across the globe. The result? The deaths of tens of millions from starvation, disease, and brutal suppression of dissent. Rights such as you suggest would likely lead to chaos, and that would support the rise of oppressive regimes, just as the proliferation of "rights" to jobs, health care, income, education, etc. has caused problems by creating a "need" for ever more oppressive regulations. The result? More chaos, and more regulation. Networks of rational agents generate spontaneous order through rational self-interest. I don't see how you can have any such thing as rational agents, or self-interest, without some "thing" which is an agent and has an interest in its own existence. Even if there is no such thing as a "self", there is a thing which employs a de se operator to describe "itself", whatever "it" is, and I'm not clear on what the difference is between such an entity and a "self". It obviously has memory, reasons, and is self-aware (i.e. aware of the thing that is speaking, thinking, etc., whatever it is). Doesn't some "thing" have to exist to employ such an operator? Joshua Job nanite1018 at gmail.com From thespike at satx.rr.com Mon Feb 8 05:10:59 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 07 Feb 2010 23:10:59 -0600 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: References: <201002080401.o1841ase025933@andromeda.ziaspace.com> <12C91DE16C6A47ADB3E4696EC08C4B18@spike> Message-ID: <4B6F9CE3.1020701@satx.rr.com> On 2/7/2010 10:50 PM, Aware wrote: > Or maybe we could spend the next couple of months debating where one's > lap goes when one stands up. It goes into abeyance. Or, in the case of the herniated, into hiatus. Damien Broderick From max at maxmore.com Mon Feb 8 06:33:49 2010 From: max at maxmore.com (Max More) Date: Mon, 08 Feb 2010 00:33:49 -0600 Subject: [ExI] Joy in the transcendence of the species Message-ID: <201002080634.o186Xxrd020842@andromeda.ziaspace.com> spike wrote: >Erhebungaufregung? Approximately ~ horny for uplifting. Young man, wash your mouth out with soap. There will be no uplifting of your horn in my presence! Why, the very idea. Now, if you all can keep your bungaufregung in your pants, I'm still looking for a word that would combine joy + posthuman transcension + everyone. M From max at maxmore.com Mon Feb 8 06:39:47 2010 From: max at maxmore.com (Max More) Date: Mon, 08 Feb 2010 00:39:47 -0600 Subject: [ExI] Joy in the transcendence of the species Message-ID: <201002080639.o186duCL019011@andromeda.ziaspace.com> I just remembered a term that is close to (but not quite) what I'm looking for: "extrosattva", which is a play on "bodhisattva".
See: http://www.gregburch.net/writing/Extrosattva.htm Max ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From spike66 at att.net Mon Feb 8 07:09:06 2010 From: spike66 at att.net (spike) Date: Sun, 7 Feb 2010 23:09:06 -0800 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: <201002080634.o186Xxrd020842@andromeda.ziaspace.com> References: <201002080634.o186Xxrd020842@andromeda.ziaspace.com> Message-ID: <8DEAFD88CD6D4BDEAC0B05D6DEA9AD2F@spike> > ...On Behalf Of Max More > Subject: Re: [ExI] Joy in the transcendence of the species > > spike wrote: > > >Erhebungaufregung? Approximately ~ horny for uplifting. > > Young man, wash your mouth out with soap... Young? Vas ist dieses young? We have "dirty old man" and the "young horndog" but we need a term for those of us who fall somewhere in the middle. Regarding another topic on which I posted earlier today: evolution, and the fact that it isn't discussed much in US public schools. Is it? Some time ago Max complained that his US students knew so little about Darwin that he had to waste valuable time in his philosophy lectures explaining the basics of evolution before he could even start on the material. Max, am I remembering it correctly? The insight I had on this topic was given to me by one of our regulars, who was visiting me at my house last week. He commented that his friend had been in NASA in the early years, but had grown discouraged, for after the Apollo program was over, the organization was taken over by jesus freaks. In retrospect, my own early years in a space town reflect exactly that notion. I never really realized that everywhere wasn't like my own home town. Now I know it isn't like that everywhere. I already know you British guys get the right story, since Darwin was one of your own. So, question please, USians: did your public school teach you about Darwin? Did they teach it right? spike From spike66 at att.net Mon Feb 8 06:55:36 2010 From: spike66 at att.net (spike) Date: Sun, 7 Feb 2010 22:55:36 -0800 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: <201002080639.o186duCL019011@andromeda.ziaspace.com> References: <201002080639.o186duCL019011@andromeda.ziaspace.com> Message-ID: <10FC50E8D8EF496F87F9E1DB26856F34@spike> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > Subject: Re: [ExI] Joy in the transcendence of the species > > I just remembered a term that is close to (but not quite) what > I'm looking for: "extrosattva", which is a play on "bodhisattva". See: > http://www.gregburch.net/writing/Extrosattva.htm > > Max So where the heck has Greg Burch been hiding out? spike From stefano.vaj at gmail.com Mon Feb 8 10:48:16 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 8 Feb 2010 11:48:16 +0100 Subject: [ExI] Nolopsism In-Reply-To: <641575.48388.qm@web65616.mail.ac4.yahoo.com> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: <580930c21002080248t60f6e9d7ubd5d4f73275f1fcd@mail.gmail.com> On 6 February 2010 23:41, The Avantguardian wrote: > Being an attorney, however, I am sure you are aware of the legal can of worms nolipsism opens up. Human rights are tied to identity. If "I" don't exist, then stealing my stuff or even murdering me is a victimless crime.
Doesn't make for a happy outcome in my opinion, especially for libertarians. Probably why the authors back-pedalled from their claims in the conclusion. Besides the fact that victimless crimes do exist in positive law, you cannot steal from a legal entity, and yet we are well aware of the conventional nature of concepts such as its identity, will, good faith, responsibility, etc. Moreover, we do *not* consider crimes affecting unconscious human beings to be victimless. Why should it be necessary to take a stance as to some metaphysical quality of the consciousness of human beings to regulate social life so that harming them outside the circumstances provided for in law is a crime? But you are right on a point: the POV according to which only an "essentialist humanist polipsism" can be the ground for granting us a personhood status would be an argument to keep out other, different entities, albeit ones persuasively exhibiting similar behaviours: say, the great apes, or computers passing the Turing test, or uploaded humans. -- Stefano Vaj From stathisp at gmail.com Mon Feb 8 10:59:02 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 8 Feb 2010 21:59:02 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <156490.82842.qm@web36504.mail.mud.yahoo.com> Message-ID: On 8 February 2010 08:05, Spencer Campbell wrote: > Seems pretty clear to me, as a neuron-by-neuron replacement is > precisely what I've wanted for the past two to five years. I would > advise phrasing it again, simply and concisely, because (a) what you > have in mind may have changed since you last did so, (b) I might have > overwritten your description with my own, and (c) the point on which > Gordon disagrees remains a total mystery. The premise is that it is possible to make an artificial neuron which behaves exactly the same as a biological neuron, but lacks consciousness. We've been discussing computer consciousness but it doesn't have to be a computerised neuron, we could say that it is animated by a little demon and the conclusion from the thought experiment remains unchanged. These zombie neurons are then put into your head replacing normal neurons that play some important role in a conscious process, such as visual perception or understanding of language. Before going further, is it perfectly clear that the behaviour of the remaining biological parts of your brain and your overall behaviour will remain unchanged? If not, then the artificial neuron does not work as claimed. OK, so your behaviour is unchanged and your thoughts are unchanged as a result of the substitution; for if your thoughts changed, you would be able to say "my thoughts have changed", and therefore your behaviour would have changed. What of your consciousness? If your consciousness changes as a result of the substitution, you would be unable to notice a change, since again if you noticed the change you would be able to say "I've noticed a change", and that would be a change in your behaviour, which is impossible. So: if your consciousness changes as a result of the substitution, you would be unable to notice any change. You would lose all visual perception and not only behave as if you had normal vision, but also honestly believe that you had normal vision. Or you would lose the ability to understand words starting with "r" but you would be able to use these words appropriately and honestly believe that you understood what these words meant. You would be partially zombified but you wouldn't know it.
In which case, how do you know you don't now have zombie vision, zombie understanding or a zombie toothache? If zombie consciousness is indistinguishable objectively *or* subjectively (i.e. by the unzombified part of a partly zombified mind) from real consciousness, then the claim that there is nevertheless a distinction is meaningless. The conclusion, therefore, is that the original premise is false: it is not possible to make an artificial neuron which behaves exactly the same as a biological neuron, but lacks consciousness. Either such a neuron would not really behave like a biological neuron, or it would behave like a biological neuron and also have the consciousness inherent in a biological neuron. This is a statement of the functionalist position, of which computationalism is a subset. It is possible that computationalism is false but functionalism is still true. Note that the above argument assumes no theory of consciousness. Its conclusion is just that if consciousness exists at all, whatever it is, it is ineluctably linked to brain function. -- Stathis Papaioannou From stefano.vaj at gmail.com Mon Feb 8 11:18:23 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 8 Feb 2010 12:18:23 +0100 Subject: [ExI] Semiotics and Computability In-Reply-To: <264701.53414.qm@web36506.mail.mud.yahoo.com> References: <264701.53414.qm@web36506.mail.mud.yahoo.com> Message-ID: <580930c21002080318q5c3cfda6h920b50b14ebbfe9b@mail.gmail.com> On 6 February 2010 23:01, Gordon Swobe wrote: > I'll ask again: have you ever had a tooth-ache? I for one never have, but I have had my fair share of experience of physical pain. Now, absolutely *nothing* in such experience ever told me that "pain" is anything else than a word describing a computational feature, programmed by natural selection, on the tunes of "if <threatened by fire>, then <withdraw>". But perhaps it does tell that to everybody else, and I am the only philosophical zombie in the world... :-D -- Stefano Vaj From stefano.vaj at gmail.com Mon Feb 8 11:29:34 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 8 Feb 2010 12:29:34 +0100 Subject: [ExI] Glacier Geoengineering In-Reply-To: <847034.14281.qm@web113617.mail.gq1.yahoo.com> References: <847034.14281.qm@web113617.mail.gq1.yahoo.com> Message-ID: <580930c21002080329q5f76a769u138e142b1965e22a@mail.gmail.com> On 2 February 2010 16:08, Ben Zaiboc wrote: > I need to ask a question here, please indulge me if the answer should be > obvious: > > What's the point of sticking glaciers to their bedrock? > Because we can? :-) Or, on the tunes of Sir Edmund Hillary, "because they are there"? :-) But admittedly I have not the foggiest idea on whether we should make it a habit... -- Stefano Vaj From stathisp at gmail.com Mon Feb 8 11:29:32 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 8 Feb 2010 22:29:32 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <580930c21002080318q5c3cfda6h920b50b14ebbfe9b@mail.gmail.com> References: <264701.53414.qm@web36506.mail.mud.yahoo.com> <580930c21002080318q5c3cfda6h920b50b14ebbfe9b@mail.gmail.com> Message-ID: On 8 February 2010 22:18, Stefano Vaj wrote: > On 6 February 2010 23:01, Gordon Swobe wrote: >> I'll ask again: have you ever had a tooth-ache? > > I for one never have, but I have had my fair share of experience of > physical pain.
> > Now, absolutely *nothing* in such experience ever told me that "pain" > is anything else than a word describing a computational feature, > programmed by natural selection, on the tunes of "if <threatened by fire>, then <withdraw>". You do know that first you withdraw your hand, through reflex, and then experience the pain? No doubt the purpose of the pain is so that you will remember not to do it again. But why pain; why not disgust, or horror, or just reluctance? -- Stathis Papaioannou From bbenzai at yahoo.com Mon Feb 8 11:06:36 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 8 Feb 2010 03:06:36 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <711259.1066.qm@web113612.mail.gq1.yahoo.com> Gordon Swobe wrote: > Ben Zaiboc wrote: >>Gordon Swobe wrote: >>> I'll ask again: have you ever had a tooth-ache? >> Gordon, your repeated question shows that you're either ignoring >> or not getting the point of Jef's reply. >> If it makes no sense to you, please say so, don't just >> regurgitate the same question that he is replying to. >> On the other hand, if you're simply ignoring things that you >> don't like reading, what's the point of continuing the >> conversation? > How about you, Ben? Have you ever had a toothache? Was it a > real toothache? Or was it just an illusion? Ah, Ignoring, then. Fine. Also ironic, as you're the one who's said, more than once, "I wonder now if I can count on you for an honest discussion?" Ben Zaiboc From stefano.vaj at gmail.com Mon Feb 8 11:39:45 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 8 Feb 2010 12:39:45 +0100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: <445777.14858.qm@web36505.mail.mud.yahoo.com> References: <445777.14858.qm@web36505.mail.mud.yahoo.com> Message-ID: <580930c21002080339s52487cc2hcc656db8a99da769@mail.gmail.com> On 7 February 2010 21:07, Gordon Swobe wrote: > The amoeba has no neurons or nervous system, Stathis, so "less conscious" > is an understatement. It has no consciousness at all. > This is becoming increasingly circular. Why should a nervous system produce "consciousness", whatever it may be? And how would it be different not to have one? Have you ever tried? -- Stefano Vaj From stefano.vaj at gmail.com Mon Feb 8 11:51:51 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 8 Feb 2010 12:51:51 +0100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <264701.53414.qm@web36506.mail.mud.yahoo.com> <580930c21002080318q5c3cfda6h920b50b14ebbfe9b@mail.gmail.com> Message-ID: <580930c21002080351l52934842xc6c3a3f5890c31c3@mail.gmail.com> On 8 February 2010 12:29, Stathis Papaioannou wrote: > You do know that first you withdraw your hand, through reflex, and > then experience the pain? No doubt the purpose of the pain is so that > you will remember not to do it again. Yes. Or no. It might even be a byproduct; what is Gould's word for that? But I do not see how it would change my original remark. > But why pain; why not disgust, > or horror, or just reluctance? By definition. Because "pain" is simply the name we give to our reactions to the fact of being burnt, which may well be different (in its causes, perhaps also in its consequences or intensity) from that generated by "horrorful" rather than "painful" experiences.
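To make that functional reading concrete, here is a toy sketch in Python (purely illustrative: the class, names and thresholds are all invented, and nothing in it pretends to model real nociception; it only shows a reflex firing first, with "pain" entering as the label for the whole reaction):

    class ToyAgent:
        def __init__(self):
            self.avoid = set()  # stimuli this agent has learned to shun

        def encounter(self, stimulus, damage_signal):
            if damage_signal > 0.5:
                self.withdraw()           # the reflex fires first...
                self.avoid.add(stimulus)  # ...then a negatively valenced
                                          # state is stored for learning.
                # "Pain" here is just the name for this whole reaction.

        def withdraw(self):
            print("reflex: hand withdrawn")

    agent = ToyAgent()
    agent.encounter("fire", damage_signal=0.9)
    assert "fire" in agent.avoid  # next time: reluctance, without the burn

On this reading, asking whether the agent "really feels" something over and above reacting and re-weighting is exactly the question at issue in the next paragraph.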
The whole paradox of qualia is of course that in such a perspective you may well "feel" horror, or for that matter pleasure, when I feel pain. You would obviously call it "pain" anyway, as long as you speak English, and as long as your reaction thereto was identical, there would be no way ever to know it. As a consequence, it would seem obvious that since "feelings abstracted from reactions" are not part of the phenomenal reality, their concept is only a philosophical Fata Morgana dictated by a few centuries of dualism. -- Stefano Vaj From bbenzai at yahoo.com Mon Feb 8 11:31:58 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 8 Feb 2010 03:31:58 -0800 (PST) Subject: [ExI] The Surrogates, graphic novel In-Reply-To: Message-ID: <938582.59281.qm@web113609.mail.gq1.yahoo.com> Seems to me that the story sacrifices common sense for the sake of having a story. That kind of technology would make remote surrogate bodies unnecessary. People would just, as Max says, upgrade their own bodies, and be cyborgs. Ben Zaiboc From gts_2000 at yahoo.com Mon Feb 8 12:43:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 8 Feb 2010 04:43:45 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <468928.31819.qm@web36508.mail.mud.yahoo.com> --- On Sun, 2/7/10, Spencer Campbell wrote: > Gordon, time for the true or false game: there is a > difference between real toothaches and illusionary toothaches. > > My answer is "false", and I get the impression that yours > will be "true". If so, why? How? No, I answer false also. I asked that question about toothaches to point out that subjective facts exist and that consciousness exists. It makes no difference whether your toothache exists as a result of a cavity or as an effect caused by a stage-hypnotist; if you feel the pain then it exists with as much reality as does a heart attack. It should seem obvious that the world contains both subjective facts like toothaches and objective facts like mountains. It should seem equally obvious that consciousness exists, and that consciousness has certain qualities. The majority of people do in fact consider these things perfectly obvious. And contrary to the bafflegab promulgated by some quasi-intellectual pseudo-philosophers, on these subjects the majority of people have it exactly right. -gts From bbenzai at yahoo.com Mon Feb 8 13:11:03 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 8 Feb 2010 05:11:03 -0800 (PST) Subject: [ExI] Joy in the transcendence of the species In-Reply-To: Message-ID: <584597.9024.qm@web113602.mail.gq1.yahoo.com> Transzendenzjedenfreude Ben Zaiboc From msd001 at gmail.com Mon Feb 8 14:42:24 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 8 Feb 2010 09:42:24 -0500 Subject: [ExI] The Surrogates, graphic novel In-Reply-To: <938582.59281.qm@web113609.mail.gq1.yahoo.com> References: <938582.59281.qm@web113609.mail.gq1.yahoo.com> Message-ID: <62c14241002080642j56bb72c2r969b4ff798608e69@mail.gmail.com> On Mon, Feb 8, 2010 at 6:31 AM, Ben Zaiboc wrote: > Seems to me that the story sacrifices common sense for the sake of having a story. > > That kind of technology would make remote surrogate bodies unnecessary. People would just, as Max says, upgrade their own bodies, and be cyborgs. Sure, but the general population has a hard time understanding even the dumbed-down parts. I was a bit annoyed by the suddenly super-human acts of robot jumping - as if being a robot allows a surri to violate gravity at will.
So yes, it was Hollywood for the sake of telling a story. In those scenes where surri were mangled, my wife had a visceral emotional response. I asked her if that was because she was thinking about the surrogates as people. She admitted that she was. I found that interesting because people rarely have such a reaction to a car accident. It's the same loss of personal property, but if the human operator walks away the response is usually "Well, at least nobody was harmed." Why does a machine that looks like a person warrant the extra attention? I commented that the "dread camp" were effectively neo-Amish. They really made no sense to the culture, but provided a plot device that was easy to understand. I think it might have made a more interesting story for us to imagine the clash between the embodied real world presences and the disembodied/uploaded virtual world presences. (the usual competition for resource utilization/etc.) I think Greg Egan's "Diaspora" has a nice treatment of fleshers, gleisner robots and citizens. [http://en.wikipedia.org/wiki/Diaspora_(novel)] From aware at awareresearch.com Mon Feb 8 16:10:11 2010 From: aware at awareresearch.com (Aware) Date: Mon, 8 Feb 2010 08:10:11 -0800 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: <201002080639.o186duCL019011@andromeda.ziaspace.com> References: <201002080639.o186duCL019011@andromeda.ziaspace.com> Message-ID: On Sun, Feb 7, 2010 at 10:39 PM, Max More wrote: > I just remembered a term that is close to (but not quite) what I'm looking for: > "extrosattva", which is a play on "bodhisattva". See: > http://www.gregburch.net/writing/Extrosattva.htm It's close, but crosses the line into the mythical. [Re-reading Greg's essay triggered strong nostalgia for those heady days of Extropian discussion past.] I'll contribute for brainstorming that (Hofstadter's) "super-rationality" encompasses much of the concept: acting on the basis of identification with a future context of meaning and value greater than one's present context. ==> superrationalist - Jef From natasha at natasha.cc Mon Feb 8 16:21:02 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 8 Feb 2010 10:21:02 -0600 Subject: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" Message-ID: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> It's an informative article. My issue with it, however, is that it seems to be black or white and misrepresents Max. http://ieet.org/index.php/IEET/more/3670/ Natasha Vita-More From bbenzai at yahoo.com Mon Feb 8 16:08:51 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 8 Feb 2010 08:08:51 -0800 (PST) Subject: [ExI] Nolipsism [was Re: Nolopsism] In-Reply-To: Message-ID: <57273.49114.qm@web113615.mail.gq1.yahoo.com> The Avantguardian wrote: > From: Ben Zaiboc >> Philosophically, it may be as you say. Practically, though, it's not really > that useful because it makes no actual difference to the way we regard things > like fear of death, or to the law. > >> It's in the scientific arena that nolipsism is most useful, because it explains > what subjectivity actually is, and clears the nonsense and confusion out of the > way. > How so?
Siddhartha Gautama said that the self was an illusion some 3000 years ago, only he called it the skandha of consciousness instead of a "de se designator". What makes nolipsism more scientifically useful than Buddhism? Are you suggesting that by sweeping consciousness under a rug, it can be scientifically ignored? Buddhism says nothing about neuroscience. Nevertheless, maybe the skandha of consciousness is just as scientifically useful as the "de se" designator. Maybe they are the same thing. This is not sweeping consciousness under a rug; it's subjecting it to the light of day. Far from ignoring it, this is an attempt to explain it. An attempt that to me, at least, makes pretty good sense. >> We know, at least in theory, that subjectivity can be built into an > artificial mind, and we can finally dump the concept of the 'hard problem' in > the bin. > So you think that programming a computer to falsely believe itself to be conscious is easier than programming one to actually be so. Or do you think that programming a computer to use "de se" designators necessarily makes it think itself conscious? A person could get by without the use of "de se" designators yet still retain a sense of self. It might sound funny, but a person could consistently refer to themselves in the third-person by name even in their thoughts. Stuart doesn't think that "de se" designators are particularly profound. Stuart doesn't need them. Do you see what Stuart means? We seem to be getting different things from this paper. I'm not at all suggesting that a computer (or robot) be programmed to 'falsely believe itself to be conscious', and neither are the authors (how would it even be possible? it would have to already be conscious in order to believe anything, so the belief wouldn't be false). The suggestion is that a non-descriptive reflexive designator is necessary for general-purpose cognition, and that this is what "I" is. Inasmuch as we regard "I" as the thing that is conscious, the "de se" designator is at the heart of consciousness. This is not a 'false' consciousness, it's what consciousness is, whether it be in a robot or a human. It's the ungrounded symbol that gives personal meaning to everything else. By definition, a person could *not* get by without a "de se" designator yet still retain a sense of self, because it is the very essence of the sense of self. Where is this third-person Stuart? What location does he occupy? Not right now as in some temporary location defined by an external coordinate system, but at any time? There's only one answer: "Here" (Stuart points to self). Third-person Stuart has nowhere to point. He has no self-centred coordinate system. Only first-person Stuart can have such a thing. If there is no self for Stuart to point to, he cannot answer the question; it means nothing to him. >> The concept of a "de se" designator explains why we don't have souls, not why we > shouldn't have property rights. > Property rights are no less abstract than souls. Neither seems to have a physical basis beyond metaphysical/philosophical fiat. Communists tend not to believe in either. The difference is that the term "property rights" signifies things that exist, as sets of rules, and are useful. The term "souls" signifies only a fantasy, and is not useful. Property rights have an effect on the world. Even though they aren't material things they still exist. Souls don't. I think it's very very easy to misunderstand what is meant by "the self is an illusion". It needs a fair bit of pondering.
For me, at least, the concept of "de se" designators makes it much clearer. Ben Zaiboc From thespike at satx.rr.com Mon Feb 8 16:49:30 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 10:49:30 -0600 Subject: Re: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> Message-ID: <4B70409A.9070807@satx.rr.com> On 2/8/2010 10:21 AM, Natasha Vita-More wrote: > My issue with it, however, is that it seems to be black or white and > misrepresents Max. That would be this, I assume: In what way does this misrepresent Max? By snipping the larger context of his comment, presumably, but did he say and mean that? Damien Broderick From jonkc at bellsouth.net Mon Feb 8 16:32:25 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Feb 2010 11:32:25 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <468928.31819.qm@web36508.mail.mud.yahoo.com> References: <468928.31819.qm@web36508.mail.mud.yahoo.com> Message-ID: Since my last post Gordon Swobe has posted 4 times. > > It should seem obvious that the world contains both subjective facts like toothaches and objective facts like mountains. It should seem equally obvious that consciousness exists, and that consciousness has certain qualities. The majority of people do in fact consider these things perfectly obvious. And contrary to the bafflegab promulgated by some quasi-intellectual pseudo-philosophers, on these subjects the majority of people have it exactly right. Swobe has used this straw man argument many times, and I believe he is being disingenuous. Nobody on this list, or any rational person for that matter, seriously thinks he doesn't think. True, I have heard some say that consciousness is an illusion, and yes that is a bit dumb as it's not at all clear why it's more "illusionary" than any other perfectly respectable mental phenomenon, but that's not as bad as saying it doesn't exist. No sane person thinks consciousness doesn't exist, although some very silly people may say so when they try (unsuccessfully) to sound sophisticated and provocative. John K Clark From natasha at natasha.cc Mon Feb 8 17:00:57 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 8 Feb 2010 11:00:57 -0600 Subject: RE: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <4B70409A.9070807@satx.rr.com> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com> Message-ID: <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> "Libertarian transhumanists like Thiel and More..." Please read: RU's interview with Max: http://www.acceleratingfuture.com/people/Max-More/?interview=32 Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick Sent: Monday, February 08, 2010 10:50 AM To: ExI chat list Subject: Re: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" On 2/8/2010 10:21 AM, Natasha Vita-More wrote: > My issue with it, however, is that it seems to be black or white and > misrepresents Max. That would be this, I assume: In what way does this misrepresent Max? By snipping the larger context of his comment, presumably, but did he say and mean that?
Damien Broderick From ablainey at aol.com Mon Feb 8 17:14:37 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Mon, 08 Feb 2010 12:14:37 -0500 Subject: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> Message-ID: <8CC76F9532BE5FA-4614-1CE8@webmail-d042.sysops.aol.com> I lost interest at 'Liberal democracy' in the first paragraph. The clue to it being bipolar to the nth degree is in the title. Divide and conquer. NLP. Exclude all other possibility. Read it all before and, living in the UK, see it, hear it, read it and get shoveled it every day. I'll try and wade through it later. Maybe I'm wrong? A -----Original Message----- From: Natasha Vita-More To: 'ExI chat list' ; extrobritannia at yahoogroups.com Sent: Mon, 8 Feb 2010 16:21 Subject: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" It's an informative article. My issue with it, however, is that it seems to be black or white and misrepresents Max. http://ieet.org/index.php/IEET/more/3670/ From thespike at satx.rr.com Mon Feb 8 17:21:11 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 11:21:11 -0600 Subject: Re: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com> <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> Message-ID: <4B704807.5090907@satx.rr.com> On 2/8/2010 11:00 AM, Natasha Vita-More wrote: > "Libertarian transhumanists like Thiel and More..."
> > Please read: RU's interview with Max: > http://www.acceleratingfuture.com/people/Max-More/?interview= That would be the following 2004 discussion: As far as the actual quote in James Hughes' essay goes, does it misrepresent Max's thinking on the topic at issue? This next quote would suggest that Hughes' implication of antidemocratic or top-down bias (if that's what he means) is wrong: Still, Max is quoted as saying Damien Broderick From natasha at natasha.cc Mon Feb 8 17:26:05 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 8 Feb 2010 11:26:05 -0600 Subject: RE: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <4B704807.5090907@satx.rr.com> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com><4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> Message-ID: <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1> So what? Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick Sent: Monday, February 08, 2010 11:21 AM To: ExI chat list Subject: Re: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" On 2/8/2010 11:00 AM, Natasha Vita-More wrote: > "Libertarian transhumanists like Thiel and More..." > > Please read: RU's interview with Max: > http://www.acceleratingfuture.com/people/Max-More/?interview= That would be the following 2004 discussion: As far as the actual quote in James Hughes' essay goes, does it misrepresent Max's thinking on the topic at issue? This next quote would suggest that Hughes' implication of antidemocratic or top-down bias (if that's what he means) is wrong: Still, Max is quoted as saying Damien Broderick From thespike at satx.rr.com Mon Feb 8 17:35:49 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 11:35:49 -0600 Subject: Re: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com><4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1> Message-ID: <4B704B75.5010709@satx.rr.com> On 2/8/2010 11:26 AM, Natasha Vita-More wrote: > So what? Eh? From hkeithhenson at gmail.com Mon Feb 8 17:51:34 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 8 Feb 2010 10:51:34 -0700 Subject: [ExI] Refreezing the Arctic ocean Message-ID: On another list someone wrote > The bigger problem right now is solar insolation during polar summers. This > is the Arctic problem in the immediate term and an intermediate problem for > the West Antarctic Ice Sheet. A blue water Arctic is starting, I don't want > to use the words "chain reaction" (it's not a nuclear process), but at least > non-linear warming (oceanic). Scaled down by a factor of 25, say a 2.5-3 inch pipe 50 feet long, the heat pipe I analyzed would suck out 4 kW with the same delta T for the heat exchanger. The Arctic isn't as cold as the Antarctic, but at least 5 months of the year it is cold enough to make this work. A kWh is 3600 kJ, so at 4 kW such a heat pipe removes 4 * 3600 = 14,400 kJ per hour; at 333 kJ/kg heat of fusion, that freezes ~43 kg of water per hour. After 5 months, that would be ~160000 kg of ice, or 160 tonnes, or 160 cubic meters. This would be a cylinder of ice ~16 m high by ~4 meters across. Such a slender ratio would have to be examined for stability (sinking weight on a cable perhaps). The interior would be at -15 C, so it seems to me that it would likely last without much loss till the next winter, but this would need to be calculated. In 5 years the block would be 8 meters across. I don't know what would be the optimum heat pipe size or spacing, or how spacing could be maintained, but this is the general idea of how to refreeze the Arctic ocean. What it would be doing is raising the effective temperature in the Arctic during the coldest part of the year, making the radiation into space higher. I also don't know if this is economically feasible, or even desirable (shipping). But this is the kind of thought that would go into an engineering solution if anyone cared to look in this direction. The West Antarctic Ice Sheet is made for the trick of freezing it to bedrock >>this motion would stop if the glacier was frozen to the bedrock.
> > Sure, Keith compares them to farm land. Only for the point that the area of glaciers isn't orders of magnitude larger than the area humans have massively affected. This is an economic argument: if humans really were desperate, we could afford to pin glaciers and slow them down. It's not just melting in sunlight. Glaciers flow down to lower and warmer altitudes and into the sea where they melt. Getting dark particulates out of the air by switching away from coal and dirty burning engines would also help slow down melting. >> All we can do is band-aid what we can and live with the pain, I have been aware since the early 70s. Dr Peter Vajk and I were nearly thrown out of a Limits to Growth conference in 1975 for having the gall to suggest there might be a way out of the problem. Now, 35 years later, and perhaps too late, space based solar power is finally getting serious attention. They still are not taking a systems approach, which tells you that chemical exhaust velocity is not enough for low cost energy. But the methods, for example, air breathing part way up and laser heated hydrogen above that, are obvious if you go back to the basic physics. Of course, economically it only works for a large traffic model. Keith From thespike at satx.rr.com Mon Feb 8 17:58:44 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 11:58:44 -0600 Subject: [ExI] IEET piece re "Problems of Transhumanism" In-Reply-To: <4B704B75.5010709@satx.rr.com> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com><4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1> <4B704B75.5010709@satx.rr.com> Message-ID: <4B7050D4.8050802@satx.rr.com> On 2/8/2010 11:35 AM, Damien Broderick wrote: >Natasha Vita-More : >> So what? > Eh? To expand on that: Isn't what I quoted to the point of Hughes' essay, which starts: BUT, he goes on below, and follows with the quote from Max I've now cited twice, which you don't think is relevant (I suppose). Is it really irrelevant? Perhaps so. Here it is again: I certainly don't think that Max's comment on "monkey politics" is a disparagement of *democracy* but rather of tribal power plays, hierarchies of force and authority, etc. And his views are certainly inconsistent with Hughes' glib opening comment about a supposed >H "tendency to disparage liberal democracy in favor of the rule by dei ex machina and technocratic elites." But it doesn't sound like a ringing endorsement of the position that "liberal democracy is the best path to betterment," which is where Hughes started. Damien Broderick From max at maxmore.com Mon Feb 8 18:18:26 2010 From: max at maxmore.com (Max More) Date: Mon, 08 Feb 2010 12:18:26 -0600 Subject: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism" Message-ID: <201002081818.o18IIX6U008758@andromeda.ziaspace.com> Damien, you seem to be suggesting ("Still, Max is quoted as saying") that Hughes' "implication of antidemocratic or top-down bias" is understandable because of my statement that "Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom." If so, I don't understand how you can say that. Saying that democratic arrangements (as they exist at any particular time) have no intrinsic value is not in the least equivalent to saying that authoritarian control is better.
Should we not strive for something better than the ugly system of democracy that currently exists? Are authoritarian arrangements the only conceivable alternative? Perhaps you use "democracy" far more broadly than I do. If you mean it such that it covers all possible non-authoritarian systems, then your interpretation makes sense. I prefer to restrict it to the systems we have seen historically, which are crude attempts to convert the desires of people in general into governance through various forms of majority voting. Max Damien wrote: >As far as the actual quote in James Hughes' essay goes, does it >misrepresent Max's thinking on the topic at issue? This next quote would >suggest that Hughes' implication of antidemocratic or top-down bias (if >that's what he means) is wrong: > >freedom of action, and experimentation. Opposing authoritarian social >control and favoring the rule of law and decentralization of power. >Preferring bargaining over battling, and exchange over compulsion. >Openness to improvement rather than a static utopia. [...] I find it >both amusing and revolting to observe socialist transhumanists who >characterize themselves as "democratic transhumanists" but use the term >"democracy" as a cover for using governmental power to compel everyone >to fit into their notion of "equality." Democracy, in the more generally >accepted sense, is an important way of implementing the principle of >Open Society.> > >Still, Max is quoted as saying > >only to the extent that they enable us to achieve shared goals while >protecting our freedom. Surely, as we strive to transcend the >biological limitations of human nature, we can also improve upon >monkey politics? > ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From natasha at natasha.cc Mon Feb 8 18:19:04 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 8 Feb 2010 12:19:04 -0600 Subject: [ExI] IEET piece re "Problems of Transhumanism" In-Reply-To: <4B7050D4.8050802@satx.rr.com> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com><4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1><4B704B75.5010709@satx.rr.com> <4B7050D4.8050802@satx.rr.com> Message-ID: <1A5A60391CF9407A9B10772A59F5C938@DFC68LF1> Sorry, I was so busy working on something. (I was just writing an article for The Scavenger and was referencing _The Judas Mandala_ and had to go to Amazon to get an image in case ...) Now to answer your post, I was simply referring to the "libertarian transhumanist" phrase, which takes away from the value of James' article because this is old and worn-out. Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick Sent: Monday, February 08, 2010 11:59 AM To: ExI chat list Subject: Re: [ExI] IEET piece re "Problems of Transhumanism" On 2/8/2010 11:35 AM, Damien Broderick wrote: >Natasha Vita-More : >> So what? > Eh? To expand on that: Isn't what I quoted to the point of Hughes' essay, which starts: BUT, he goes on below, and follows with the quote from Max I've now cited twice, which you don't think is relevant (I suppose). Is it really irrelevant? Perhaps so.
Here it is again: I certainly don't think that Max's comment on "monkey politics" is a disparagement of *democracy* but rather of tribal power plays, hierarchies of force and authority, etc. And his views are certainly inconsistent with Hughes' glib opening comment about a supposed >H "tendency to disparage liberal democracy in favor of the rule by dei ex machina and technocratic elites." But it doesn't sound like a ringing endorsement of the position that "liberal democracy is the best path to betterment," which is where Hughes started. Damien Broderick From ablainey at aol.com Mon Feb 8 18:30:58 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Mon, 08 Feb 2010 13:30:58 -0500 Subject: [ExI] IEET piece re "Problems of Transhumanism" In-Reply-To: <4B7050D4.8050802@satx.rr.com> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com><4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1><4B704B75.5010709@satx.rr.com> <4B7050D4.8050802@satx.rr.com> Message-ID: <8CC7703FD00403A-1250-225@webmail-d042.sysops.aol.com> I agree, Damien: Max's quote has little or nothing to do with democracy, rather the way it is played. And now I have read the whole thing I can see I was right in my first impression. Polarise the argument even though the argument itself is invalid. Since when is Liberal democracy the opposite of totalitarianism? And how can Libertarianism be the opposite of liberal democracy, while arguing that Libertarian transhumanists support totalitarianism? Neither is correct. Contrived. Like I said, I read it before. A -----Original Message----- From: Damien Broderick To: ExI chat list Sent: Mon, 8 Feb 2010 17:58 Subject: Re: [ExI] IEET piece re "Problems of Transhumanism" On 2/8/2010 11:35 AM, Damien Broderick wrote: >Natasha Vita-More : >> So what? > Eh? To expand on that: Isn't what I quoted to the point of Hughes' essay, which starts: BUT, he goes on below, and follows with the quote from Max I've now cited twice, which you don't think is relevant (I suppose). Is it really irrelevant? Perhaps so. Here it is again: I certainly don't think that Max's comment on "monkey politics" is a disparagement of *democracy* but rather of tribal power plays, hierarchies of force and authority, etc. And his views are certainly inconsistent with Hughes' glib opening comment about a supposed >H "tendency to disparage liberal democracy in favor of the rule by dei ex machina and technocratic elites." But it doesn't sound like a ringing endorsement of the position that "liberal democracy is the best path to betterment," which is where Hughes started. Damien Broderick
From thespike at satx.rr.com Mon Feb 8 18:37:24 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 12:37:24 -0600 Subject: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <201002081818.o18IIX6U008758@andromeda.ziaspace.com> References: <201002081818.o18IIX6U008758@andromeda.ziaspace.com> Message-ID: <4B7059E4.1010402@satx.rr.com> On 2/8/2010 12:18 PM, Max More wrote: > Saying that democratic arrangements (as they exist at any particular > time) have no intrinsic value is not in the least equivalent to saying > that authoritarian control is better. Should we not strive for something > better than the ugly system of democracy that currently exists? Certainly; see Krugman's essay in the NYT today for a truly egregious example of "democracy" in inaction in the US. > Are authoritarian arrangements the only conceivable alternative? No, and I think I'd favor a decision structure closer to a rhizome than a ladder or pyramid, which is why I'm a sort of communitarian anarchist. (A utopian prospect, admittedly, because people have largely been conned into becoming--as the gibe has it--sheeple.) But as I said a moment ago in another post, your statement doesn't sound like a ringing endorsement of the position that "liberal democracy is the best path to betterment," which is the point Hughes is making about >Humanism. Where he goes from there is questionable if not absurd *as a generalization*--but there's certainly a technocratic, elitist tendency in a lot of >H discourse I've read here over the last 15 years or so. Damien Broderick From thespike at satx.rr.com Mon Feb 8 18:39:08 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 12:39:08 -0600 Subject: [ExI] IEET piece re "Problems of Transhumanism" In-Reply-To: <1A5A60391CF9407A9B10772A59F5C938@DFC68LF1> References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com><4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1><4B704B75.5010709@satx.rr.com> <4B7050D4.8050802@satx.rr.com> <1A5A60391CF9407A9B10772A59F5C938@DFC68LF1> Message-ID: <4B705A4C.1000907@satx.rr.com> On 2/8/2010 12:19 PM, Natasha Vita-More wrote: > I was simply referring to the "libertarian > transhumanist" phrase, which takes away from the value of James' article > because this is old and worn-out. Fair enough, and Max certainly deals with that in his interview with RU. From rpwl at lightlink.com Mon Feb 8 18:37:59 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 08 Feb 2010 13:37:59 -0500 Subject: [ExI] Blue Brain Project In-Reply-To: <558651.23421.qm@web113616.mail.gq1.yahoo.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> Message-ID: <4B705A07.1040209@lightlink.com> Ben Zaiboc wrote: > Anyone not familiar with this can read about it here: > http://seedmagazine.com/content/article/out_of_the_blue/P1/ > > The next ten years should be interesting! Or not. Markram is NOT, as many people seem to assume, developing a biologically accurate model of a cortical column circuit. He is instead developing a model that contains neurons that are biologically accurate, down to a certain level of detail, but with random connections between those neurons. The statistical distribution of wires is supposed to be the same as that in a cortical column, but the actual wires... not so much.
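To see why matching wiring statistics is such a weak constraint, here is a toy sketch in Python (purely illustrative, with invented numbers; it is not anything from the Blue Brain project itself). It builds one specific circuit, a directed loop through ten neurons, then rewires it while preserving every neuron's in-degree and out-degree. The statistics survive; the circuit does not:

    import random

    random.seed(1)

    # One specific circuit: a directed loop through neurons 0..9.
    loop_edges = {(i, (i + 1) % 10) for i in range(10)}
    edges = set(loop_edges)

    # Degree-preserving rewiring: repeatedly swap the targets of two
    # edges, avoiding self-loops and duplicate edges.
    for _ in range(1000):
        (a, b), (c, d) = random.sample(list(edges), 2)
        if a != d and c != b and (a, d) not in edges and (c, b) not in edges:
            edges -= {(a, b), (c, d)}
            edges |= {(a, d), (c, b)}

    # Every neuron still has in-degree 1 and out-degree 1 -- the same
    # "statistical distribution of wires" -- but the specific loop has
    # been shuffled away almost entirely.
    print(len(loop_edges & edges), "of 10 original loop edges survive")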
So, if you are someone who thinks that a random model of an i86 computer chip, in which all the wiring was replaced by random connections, would be a fantastically interesting thing, worth spending a billion dollars to construct, then the Blue Brain project must make you delirious with joy. Markram's entire project, then, rests on his hope that if he builds a randomly wired column model, the model will "self-assemble" and do something interesting. He produces no arguments for what those self-assembly mechanisms actually look like, nor does he demonstrate that his model includes those mechanisms. Further, he ignores the possibility that the self-assembly mechanisms are dependent on such factors as (a) specific wiring circuits in the column, or (b) specific wiring in outside structures (subcortical mechanisms, for example) which act as drivers of the self-assembly process. (To couch this in terms of an example, suppose the biology causes loops of ten neurons to be set up all over the column, with the strength of synapses around each loop being extremely specific (say, high, high, low, high, high, low, high, high, low, low). Now suppose that the self-organizing capability of the system is crucially dependent on the presence of these loops. Since Markram is blind to exact wiring he will never see the loops. He certainly would not see the pattern of synaptic strengths, and he probably would never notice the physical pattern of the ten-neuron loops, either.) As far as I can tell, Markram's only reason to believe that his model columns will self-assemble is ... well, just a hunch. If his hunch is wrong, he will have built the world's most expensive white-noise generator. Notice that so far, in the test runs he has done, his evidence that the model circuit actually works has all been based on a low-level statistical correspondence between the patterns of firing in the model and in the original. Given that he went to great trouble to ensure the same distributions in his model, this result gives practically no information at all. Markram does not hesitate to publicize these achievements with words that imply that his model column does actually "function" like a biological column. (Going back to the i86 chip analogy: would a statistically similar signal pattern in a random model of such a chip indicate that the random model was "functioning" like a normal chip?). There are plenty of other criticisms that could be leveled at the Blue Brain project, but this should be enough. Can you say "Neuroscience Star Wars Project", children? Richard Loosemore From lacertilian at gmail.com Mon Feb 8 19:18:57 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 11:18:57 -0800 Subject: [ExI] Rights without selves (was: Nolopsism) Message-ID: JOSHUA JOB : > A car can't have rights because it isn't self-aware. It isn't even alive. It doesn't have choices, and nothing matters to it. It doesn't have the quality of being this weird self-referencing thing with a de se operator; it lacks the capacity of reason. And thus, it can't have any rights. A corpse isn't alive or rational or aware either. A brand-new human is, or at least will be in short order. Proliferating rights in the manner you suggest devalues the word and destroys its meaning, allowing evil people to appropriate it for their own ends, as they did in socialist countries all across the globe. The result? The deaths of tens of millions from starvation, disease, and brutal suppression of dissent.
There are people who believe that every single thing in the universe is technically alive. Since cars actually have the ability to burn fuel and propel themselves, traits generally associated with animals, they have a better claim to the title than most inanimate objects. You could argue that a car isn't alive because it can't move intentionally, and the obvious counterargument would be to point demonstratively at a plant; but many plants lean towards the sun. I would prefer coral polyps as an example, or, even better, phytoplankton. On the subject of awareness, use of "de se" designators, et cetera, Stefano Vaj points out that unconscious human beings retain the rights of their conscious selves. I would equate this with a house retaining the right not to be demolished even when its residents are away. You're correct that my position devalues and/or destroys the current meaning of "rights", but I don't see how a fungal bloom of evil would follow logically from that. Elaborate please? JOSHUA JOB : > Having the rights you suggest would likely lead to chaos and support the rise of oppressive regimes, just as the proliferation of "rights" to jobs, health care, income, education, etc. has caused problems by creating a "need" for ever more oppressive regulations. The result? More chaos, and more regulation. Networks of rational agents generate spontaneous order through rational self-interest. I don't see how you can have any such thing as rational agents, or self-interest, without some "thing" which is an agent and has an interest in its own existence. I'm broadening the definition with the intention to make law in general more sensible: rights prevent wrongs. It is wrong to dump garbage in the ocean. Therefore, the ocean should have rights. At the moment the ocean is effectively piggybacking on tortuously ill-defined rights of humans, such as "the right to live on a planet that isn't broken", which are not explicitly enumerated in any written document that I'm aware of. I'm not saying it would be more morally correct to give the ocean rights, as though it were a living, feeling entity. I'm not even saying it would be more intuitive. I'm saying it would be simpler and more efficient. JOSHUA JOB : > Even if there is no such thing as a "self", there is a thing which employs a de se operator to describe "itself", whatever "it" is, and I'm not clear on what the difference is between such an entity and a "self". It obviously has memory, reasons, and is self-aware (i.e. aware of the thing that is speaking, thinking, etc., whatever it is). Doesn't some "thing" have to exist to employ such an operator? I don't actually think the optimal system of law would exclude the concept of a rational agent from playing a part. For the sake of argument, I'm simply saying it's possible, and that such a system could be made to work just as well, in effect, as any other. It would likely employ much less concise language to do so, so it wouldn't be as efficient as a "de se"-enabled system with all of the same laws. The difference between "selves" and "de se systems", if that term is accurate, is largely a matter of abstraction. A self is highly complex, and theoretically atomic: it persists from one moment to another, and one can correctly attribute memory, thought, self-awareness, and so on to it. Memory is inextricable from self. All of the parts add up to an indivisible whole. Conversely, a "de se" designator is just a symbol.
It points to a greater system, which we normally call a self, but that system is composed of a great many independent parts that interact in complicated ways. Memories are only a part of that system, and purely optional. The symbol doesn't care: it just refers to whatever's present at the time. I talk about "my arm" and "my foot" in exactly the same way that I talk about "my house" and "my computer", as though all of these things were part of me and I would be incomplete without them. This is obviously false for the latter two, and not-so-obviously false for the former two. My arm is part of my self, just as surely as is my computer, but neither are parts of me. I don't have any parts; I don't technically exist outside of the moment in which I write this. There's a more tenuous relationship between quoted passages and responses in this post than I normally display. I found myself copying and pasting paragraphs from beneath one quote to beneath another, because they worked equally well in both places and I wanted to spread things out a little. So it might not sound as if I was "listening" very carefully. Sorry about that; I actually was, but I got a bit sidetracked. You argue against cars having rights, and claim, indirectly, that doing so would cause a great many terrible unintended consequences. Request that you expound on a few of those consequences. You can choose something other than a car, if easier. From jameschoate at austin.rr.com Mon Feb 8 19:42:48 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Mon, 8 Feb 2010 19:42:48 +0000 Subject: [ExI] Blue Brain Project Message-ID: <20100208194249.YWLBW.497187.root@hrndva-web17-z02> There is very little actual 'suppose' in your commentary, the homeobox and related genetic affects are critical to this sort of organization. It is far from stochastic. Besides that, this model completely ignores the chemotaxis effects of development as well. ---- Richard Loosemore wrote: > Markram is NOT, as many people seem to assume, developing a biologically > accurate model of a cortical column circuit. He is instead developing a > model that contains neurons that are biologically accurate, down to a > certain level of detail, but with random connections between those > neurons. The statistical distribution of wires is supposed to be the > same as that in a cortical column, but the actual wires... not so much. > Further, he ignores the possibility that the self-assembly mechanisms > are dependent on such factors as (a) specific wiring circuits in the > column, or (b) specific wiring in outside structures (subcortical > mechanisms, for example) which act as drivers of the self-assembly process. > > (To couch this in terms of an example, suppose the biology causes loops > of ten neurons to be set up all over the column, with the strength of > synapses around each loop being extremely specific (say, high, high, > low, high, high, low, high, high, low, low). Now suppose that the > self-organizing capability of the system is crucially dependent on the > presence of these loops. Since Markram is blind to exact wiring he will > never see the loops. He certainly would not see the pattern of synaptic > strengths, and he probably would never notice the physical pattern of > the ten-neurons loops, either.) 
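To put that last distinction in concrete terms, here is a toy sketch in Python (purely illustrative; the names are invented and nothing here is offered as a theory of mind). The designator carries no description of its bearer; it simply re-binds to whatever parts happen to be present:

    class BundleOfParts:
        # A system of independent, replaceable parts -- no atomic self.
        def __init__(self, parts):
            self.parts = parts

        def de_se(self):
            # A bare, non-descriptive self-reference: it points at
            # whatever system evaluates it, as that system is right now.
            return self

    body = BundleOfParts(["arm", "foot", "memories"])
    i_then = body.de_se()
    body.parts = ["prosthetic arm", "foot", "overwritten memories"]
    i_now = body.de_se()
    print(i_then is i_now)  # True: same bare pointer, different contents

Swap out every part and the designator still "works", which is all the symbol was ever doing.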
-- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From lacertilian at gmail.com Mon Feb 8 19:42:36 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 11:42:36 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <468928.31819.qm@web36508.mail.mud.yahoo.com> References: <468928.31819.qm@web36508.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > No, I answer false also. I asked that question about toothaches to point out that subjective facts exist and that consciousness exists. It makes no difference whether your toothache exists as a result of a cavity or as an effect caused by a stage-hypnotist; if you feel the pain then it exists with as much reality as does a heart attack. I reject the notion that toothaches and heart attacks are equally real, on the basis that inherent "reality" does indeed lie on a spectrum rather than being binary. Gordon Swobe : > It should seem obvious that the world contains both subjective facts like toothaches and objective facts like mountains. It should seem equally obvious that consciousness exists, and that consciousness has certain qualities. The majority of people do in fact consider these things perfectly obvious. And contrary to the bafflegab promulgated by some quasi-intellectual pseudo-philosophers, on these subjects the majority of people have it exactly right. I feel like rejecting the notion of quasi-intellectual pseudo-philosophers, as well, but that's really only because of how accurately the term describes me. I do, however, reject the notion of a mountain being an objective fact. They're all just hills which we've decided, by a subjective value judgement, are simply too tall to be called hills! And hills are merely ground that's too bumpy for us to responsibly name as ground. In fact, I'm having difficulty convincing myself that there could be any such thing as an objective fact at all; even electrons are a mathematical abstraction, based on our subjective interpretation of a mountain of empirical evidence. Note that "empirical" means "based on experience", not "inviolably true". The difference between empirical and anecdotal is one of degree, not kind; there is empirical evidence for the existence of alien life. We usually call very consistent findings empirical, and very inconsistent findings, or those with an insufficient sample size to say either way, anecdotal. Truth has never been found outside the laboratory, nor survived without artificial support. It's a little like antimatter. As soon as you turn off the containment field, things go back to being just as they are. Now that I think about it, the distinction between the objective and the subjective is fallacious and misleading. Dualistic thinking at its worst. I propose to do away with it. From thespike at satx.rr.com Mon Feb 8 19:47:49 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 13:47:49 -0600 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: Message-ID: <4B706A65.7010403@satx.rr.com> On 2/8/2010 1:18 PM, Spencer Campbell wrote: > There are people who believe that every single thing in the universe > is technically alive. Yes, there are an awful lot of clueless idiots. 
I've taken to listening to Coast to Coast while doing evening exercises, and my mind genuinely boggles at the sheer stupidity of its gullible, desperate-to-believe listeners. Damien Broderick From lacertilian at gmail.com Mon Feb 8 19:47:37 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 11:47:37 -0800 Subject: [ExI] The Surrogates, graphic novel In-Reply-To: <62c14241002080642j56bb72c2r969b4ff798608e69@mail.gmail.com> References: <938582.59281.qm@web113609.mail.gq1.yahoo.com> <62c14241002080642j56bb72c2r969b4ff798608e69@mail.gmail.com> Message-ID: Mike Dougherty : > In those scenes where surri were mangled, my wife had a visceral > emotional response. I asked her if that was because she was thinking > about the surrogates as people. She admitted that she was. I found > that interesting because people rarely have such a reaction to a car > accident. It's the same loss of personal property, but if the human > operator walks away the response is usually "Well at least nobody was > harmed" - Why does a machine that looks like a person warrant the > extra attention? Why indeed. Join in on the "Rights without selves" thread! It's alarmingly relevant, car analogy and all. From lacertilian at gmail.com Mon Feb 8 20:01:23 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 12:01:23 -0800 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: <201002080401.o1841ase025933@andromeda.ziaspace.com> References: <201002080401.o1841ase025933@andromeda.ziaspace.com> Message-ID: Max More : > I know many of you are fascinated by the intricacies and peculiarities of > language. Perhaps you might be motivated to help me figure this out: Yes! Me! Me! Well, except that the one area of linguistics I actively dislike is the art of the portmanteau. I think the word you're thinking of would refer also to "joy in the existence of humanity", "joy in the existence of animals", "joy in the existence of macroscopic life", and on down the evolutionary ladder. It would also explicitly celebrate all progress in the arts and sciences. Joy in evolution and revolution, joy in the complexity of life and technology past, present and future, without any distinction made between the two. It'd be a good word. If you wouldn't mind, I'd be keen on arrogating it for my imaginary language. You know, the one with a complete set of grammatical rules but no words to speak of. The one no one, anywhere, is the least bit interested in but me. Not bitter! Now if you'll excuse me, I have to go pop a few more seriousness pills. From bbenzai at yahoo.com Mon Feb 8 19:51:47 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 8 Feb 2010 11:51:47 -0800 (PST) Subject: [ExI] Blue Brain Project In-Reply-To: Message-ID: <480152.31985.qm@web113604.mail.gq1.yahoo.com> Richard Loosemore wrote: > Ben Zaiboc wrote: > > Anyone not familiar with this can read about it here: > > http://seedmagazine.com/content/article/out_of_the_blue/P1/ > > > > The next ten years should be interesting! > > Or not. > > Markram is NOT, as many people seem to assume, developing a > biologically > accurate model of a cortical column circuit. He is > instead developing a > model that contains neurons that are biologically accurate, > down to a > certain level of detail, but with random connections > between those > neurons. The statistical distribution of wires is > supposed to be the > same as that in a cortical column, but the actual wires... > not so much. ...
> Markram's entire project, then, rests on his hope that if > he builds a > randomly wired column model, the model will "self-assemble" > and do > something interesting. He produces no arguments for > what those > self-assembly mechanisms actually look like, nor does he > demonstrate > that his model includes those mechanisms. ... > As far as I can tell, Markram's only reason to believe that > his model > columns will self-assemble is ... well, just a hunch. > This is pretty much what a human brain starts out as. The brain of a baby is *massively* overconnected, and 90% of the connections get pruned away as the baby starts to learn. Markram's reason to believe that his model columns will self-assemble is that that's what they do in biological systems. Even so, suppose this is not the case: suppose Markram's random interconnections fail to capture something intrinsic to the biological situation. It's still not a pointless exercise. We are constantly mapping the connections in the brain (the connectome project), and while hand-wiring them is out of the question, we will surely extract significant statistical patterns that should help with setting up the Blue Brain with more successful starting patterns. The Blue Brain project might not produce the results Markram hopes it will, but it can't fail to produce useful information, even if it's just "how not to build a brain". Ben Zaiboc From jonkc at bellsouth.net Mon Feb 8 20:25:56 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Feb 2010 15:25:56 -0500 Subject: [ExI] Blue Brain Project. In-Reply-To: <558651.23421.qm@web113616.mail.gq1.yahoo.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> Message-ID: On Feb 7, 2010, Ben Zaiboc wrote: > Anyone not familiar with this can read about it here: > http://seedmagazine.com/content/article/out_of_the_blue/P1/ > > The next ten years should be interesting! Great article, thanks Ben! It sort of makes Gordon Swobe's remark "I look inside your head I see nothing even remotely resembling a digital computer" seem rather medieval. These are some of my favorite quotes: "In ten years, this computer will be talking to us." "Once the team is able to model a complete rat brain -- that should happen in the next two years -- Markram will download the simulation into a robotic rat, so that the brain has a body. He's already talking to a Japanese company about constructing the mechanical animal. 'The only way to really know what the model is capable of is to give it legs,' he says. 'If the robotic rat just bumps into walls, then we've got a problem.'" "Now we just have to scale it up." Blue Brain scientists are confident that, at some point in the next few years, they will be able to start simulating an entire brain. "If we build this brain right, it will do everything," Markram says. I ask him if that includes self-consciousness: Is it really possible to put a ghost into a machine? "When I say everything, I mean everything," he says, and a mischievous smile spreads across his face." John K Clark From thespike at satx.rr.com Mon Feb 8 20:53:40 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 14:53:40 -0600 Subject: [ExI] Blue Brain Project.
In-Reply-To: References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> Message-ID: <4B7079D4.1090103@satx.rr.com> On 2/8/2010 2:25 PM, John Clark quoted: > "Once the team is able to model a complete rat brain -- that should happen > in the next two years -- Markram will download the simulation into a > robotic rat, so that the brain has a body. He's already talking to a > Japanese company about constructing the mechanical animal. A decade or more ago, Hugo de Garis was promising a robot CAM-brain puddytat. He lost several sponsors along the way. Anyone know if he's doing anything along those lines today? (No, I've never heard of Google--how does that work? His own site informs us excitedly of things due to happen in 2006 and 2007...) Damien Broderick From rpwl at lightlink.com Mon Feb 8 20:59:17 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 08 Feb 2010 15:59:17 -0500 Subject: [ExI] Blue Brain Project In-Reply-To: <480152.31985.qm@web113604.mail.gq1.yahoo.com> References: <480152.31985.qm@web113604.mail.gq1.yahoo.com> Message-ID: <4B707B25.1040006@lightlink.com> Ben Zaiboc wrote: > Richard Loosemore wrote: > >> Ben Zaiboc wrote: >>> Anyone not familiar with this can read about it here: >>> http://seedmagazine.com/content/article/out_of_the_blue/P1/ >>> >>> The next ten years should be interesting! >> Or not. >> >> Markram is NOT, as many people seem to assume, developing a >> biologically accurate model of a cortical column circuit. He is >> instead developing a model that contains neurons that are >> biologically accurate, down to a certain level of detail, but with >> random connections between those neurons. The statistical >> distribution of wires is supposed to be the same as that in a >> cortical column, but the actual wires... not so much. > ... > >> Markram's entire project, then, rests on his hope that if he builds >> a randomly wired column model, the model will "self-assemble" and >> do something interesting. He produces no arguments for what those >> self-assembly mechanisms actually look like, nor does he >> demonstrate that his model includes those mechanisms. > ... > >> As far as I can tell, Markram's only reason to believe that his >> model columns will self-assemble is ... well, just a hunch. >> > > This is pretty much what a human brain starts out as. The brain of a > baby is *massively* overconnected, and 90% of the connections get > pruned away as the baby starts to learn. Markram's reason to believe > that his model columns will self-assemble is that that's what they do > in biological systems. This is a non sequitur, surely? Just because the connections are pruned, it does not follow that they were random to begin with. > Even so, suppose this is not the case: suppose Markram's random > interconnections fail to capture something intrinsic to the > biological situation. It's still not a pointless exercise. We are > constantly mapping the connections in the brain (the connectome > project), and while hand-wiring them is out of the question, we will > surely extract significant statistical patterns that should help with > setting up the Blue Brain with more successful starting patterns. > > The Blue Brain project might not produce the results Markram hopes it > will, but it can't fail to produce useful information, even if it's > just "how not to build a brain".
I wouldn't think much of a billion-dollar project, using some of the world's largest supercomputers, whose goal was to understand computer chips by modeling randomly wired versions of them, with some vague promises that in the future some other project will be supplying more accurate wiring diagrams, and with the fallback position that it would be "bound to produce some valuable information, even if it's just 'how not to build a computer'." Collecting statistical patterns is just an excuse to burn research money without having to think. Richard Loosemore From gts_2000 at yahoo.com Mon Feb 8 21:52:59 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 8 Feb 2010 13:52:59 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <785979.10766.qm@web36505.mail.mud.yahoo.com> --- On Mon, 2/8/10, Spencer Campbell wrote: > I reject the notion that toothaches and heart attacks are > equally real, on the basis that inherent "reality" does indeed lie > on a spectrum rather than being binary. I think it fair to say that either x is real or else x is not real. I also understand that some people here do not distinguish or care about the difference between reality and ~reality. They have medications for that. :) > I do, however, reject the notion of a mountain being an > objective fact. They're all just hills which we've decided, by a > subjective value judgement, are simply too tall to be called hills! But mountains do exist, no matter how you describe them. Yes? > In fact, I'm having difficulty convincing myself that there > could be any such thing as an objective fact at all You've fallen into the hallucination. :) > Note that "empirical" means "based on experience", not > "inviolably true". The word "empirical" needs some disambiguation. On the one hand people mean by the word "empirical" something like "objectively existent facts in the world, which any observer can verify". But on the other hand people mean by it something like "facts which exist in the world, including for example the facts of an entity's subjective experience". Clearly some things exist in the second sense that do not exist in the first sense. We can consider it an empirical fact that your dentist, for example, considers it true that you have a real toothache. Right? -gts From jonkc at bellsouth.net Mon Feb 8 21:55:14 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Feb 2010 16:55:14 -0500 Subject: [ExI] Blue Brain Project In-Reply-To: <4B705A07.1040209@lightlink.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> <4B705A07.1040209@lightlink.com> Message-ID: <4FBF0F0F-4486-4EDA-B705-CE3906028F36@bellsouth.net> On Feb 8, 2010, Richard Loosemore wrote: > Markram is NOT, as many people seem to assume, developing a biologically accurate model of a cortical column circuit. He is instead developing a model that contains neurons that are biologically accurate, down to a certain level of detail That is a legitimate concern, and it's true we don't know everything there is to know about neurons, but we know a lot, and this is by far the best simulation of a large number of neurons ever made. And remember, most of the things that neurons do have nothing to do with thought; they are just the routine (though fabulously complex) housekeeping functions that any cell needs to do to stay alive.
> So, to anyone who thinks that a random model of an i86 computer chip in which all the wiring was replaced by random connections would be a fantastically interesting thing, worth spending a billion dollars to construct, the Blue Brain project must make you delirious with joy. But the boosters of biology are always pointing out that 3/4 of an i86 chip is a completely worthless object, while a dog with only 3 legs is not immobile; he can still limp around. Markram wants to incorporate the same robustness into a computer program, and I think he has a fighting chance of pulling it off. > Markram's entire project, then, rests on his hope that if he builds a randomly wired column model, the model will "self-assemble" and do something interesting. Markram says that already his simulation is acting in ways that remind him of the ways real neurons act. OK, maybe he's talking bullshit, but I'm very impressed that in his very next utterance he shows us a way to prove him wrong. He says that in the next 2 to 3 years he will be able to synthesize an entire rat brain. He also says he can link that computer model to a mechanical rat. If that robot rat moves at random then he has failed. If the mechanism moves in more interesting ways then the man is onto something. > Further, he ignores the possibility that the self-assembly mechanisms are dependent on such factors as (a) specific wiring circuits in the column, or (b) specific wiring in outside structures (subcortical mechanisms, for example) which act as drivers of the self-assembly process. To couch this in terms of an example, suppose the biology causes loops of ten neurons to be set up all over the column, with the strength of synapses around each loop being extremely specific (say, high, high, low, high, high, low, high, high, low, low). Now suppose that the self-organizing capability of the system is crucially dependent on the presence of these loops. Since Markram is blind to exact wiring he will never see the loops. He certainly would not see the pattern of synaptic strengths, and he probably would never notice the physical pattern of the ten-neuron loops, either. My neurons are not making the proper connections. If you put a gun to my head I couldn't tell you what the hell you're talking about. > As far as I can tell, Markram's only reason to believe that his model columns will self-assemble is ... well, just a hunch. If his hunch is wrong, he will have built the world's most expensive white-noise generator. If he fails it will be a heroic failure; if he succeeds it will be the most important work ever done, not just scientific work, work in general. > On Sun Ben Zaiboc wrote: > > I know I mentioned these links a few days ago, but it's worth repeating. > Noah Sutton is making a documentary: > http://thebeautifulbrain.com/2010/02/bluebrain-film-preview/ > There's a longer video that explains what he's up to. > The Emergence of Intelligence in the Neocortical Microcircuit > http://video.google.com/videoplay?docid=-2874207418572601262&ei=lghrS6GmG4jCqQLA1Yz7DA Thanks again Ben, yet more great stuff! John K Clark
From nanite1018 at gmail.com Mon Feb 8 22:39:10 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Mon, 8 Feb 2010 17:39:10 -0500 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: Message-ID: On Feb 8, 2010, at 2:18 PM, Spencer Campbell wrote: > There are people who believe that every single thing in the universe > is technically alive. Since cars actually have the ability to burn > fuel and propel themselves, traits generally associated with animals, > they actually have a better claim to the title than most inanimate > objects. You could argue that a car isn't alive because it can't move > intentionally, and the obvious counterargument would be to point > demonstratively at a plant; but many plants lean towards the sun. I > would prefer coral polyps as an example, or, even better, > phytoplankton. Anyone who thinks everything is alive is, as Damien said, an idiot, or, I will add, severely epistemologically confused. Life also maintains homeostasis, grows, etc., things which a car does not do. Also, while rights, in my view, only apply to life, that is not a sufficient condition. The sufficient condition is that they are self-aware and rational, something cars, plants, etc. are not. > On the subject of awareness, use of "de se" designators, et cetera, > Stefano Vaj points out that unconscious human beings retain the rights > of their conscious selves. I would equate this with a house retaining > the right not to be demolished even when its residents are away. Perhaps, though I generally view that as simply the fact that from experience we know the person still exists, and must merely be "woken up" in order to resume reasoning, etc. And damaging the physical object that that entity "resides" in will cause damage to the entity's capacity to continue to exist, and is thus a violation of its rights (more on that in a moment). > I'm broadening the definition with the intention of making law in > general more sensible: rights prevent wrongs. It is wrong to dump > garbage in the ocean. Therefore, the ocean should have rights. For similar reasons, I argue that things which are not rational cannot have anything hold personal meaning for them. The ocean does not employ de se operators, it is not self-aware, and in fact isn't even alive, so nothing can "wrong" it. I'll agree, basically, that rights prevent things from wronging other things, in a very specific sense, but since the ocean does not have any way it can be "wronged", it cannot possibly have rights. > The difference between "selves" and "de se systems", if that term is > accurate, is largely a matter of abstraction. A self is highly > complex, and theoretically atomic: it persists from one moment to > another, and one can correctly attribute memory, thought, > self-awareness, and so on to it. Memory is inextricable from self. All > of the parts add up to an indivisible whole. > > Conversely, a "de se" designator is just a symbol. It points to a > greater system, which we normally call a self, but that system is > composed of a great many independent parts that interact in > complicated ways. Memories are only a part of that system, and purely > optional. The symbol doesn't care: it just refers to whatever's > present at the time. I did not find their argument convincing, because the de se operator refers to the system of which it is a part, i.e. the computational structure (the program, essentially) of their hypothetical robot, or of us human beings' brains/minds.
That isn't exactly physical, as it is merely a pattern. It's an abstraction of sorts, and "I" is strange in that it cannot be particularly descriptive (it leads to infinite regress, since "I" includes the meaning of "I", which.... and so on). But I have thought, for a long time now, that that is exactly what the "self" is. I understand that this is similar, at least in part, to the thesis of Hofstadter's book "I Am a Strange Loop", and while I own it, I have not read it yet. I just thought I should address that point, before diving into rights. > You argue against cars having rights, and claim, indirectly, that > doing so would cause a great many terrible unintended consequences. > I request that you expound on a few of those consequences. You can > choose something other than a car, if easier. Well, first, I say that only entities with de se operators can have things of true "value", i.e. mental representations of things of personal importance, arranged in a hierarchy which is understood conceptually. Such an entity must work in order to continue to exist (at least any entity I have ever been able to conceive of). By its nature, it has to choose whether to "live" or "die" (i.e. continue to exist or cease to exist), and does so continuously in all its actions (by pursuing things that help its life or harm it). Now, by its nature as such an entity, its standard of morality is its own life (i.e. it should do things which help it live, and not do things which harm it), since if it isn't working for its life, it will die, and cease to be able to do anything at all. Now, critically important is that entities such as this (that is, entities that are "self-aware", rational, and operate using concepts) have to decide what is in their own interest, because obviously each one's value structure is particular to it and determined by it (that is, it employs de se operators, to use language from the paper). So it is impossible, by definition, for one entity to interfere with another (i.e. forcibly prevent it from taking an action, by threatening its existence) without getting in that entity's way in trying to survive. So you cannot initiate force without interfering with a fundamental requirement for the survival of all entities like you, thereby rejecting a principle upon which your own survival is based. That is my basic view of rights. I don't see how an ocean or a car can have rights, because rights need wrongs, and wrongs need values, and values need rational conceptual entities that employ de se operators (and it needs those entities to exist in some manner). What sort of horrible consequences would come if you gave a car rights (like the right to exist, for example)? Well, besides dumping that whole structure of rights (one which is, in my mind, based on logic), you lose the power of its base of logic, and rejecting logic leaves open any number of possible groundings for "rights", like faith or racism or random whim, etc. And that is bad. But let's be concrete about it. If a car has a right to exist, then that means I can't scrap it if I don't want it (and own it). But if that is the case, that means I am forced to give it to someone else, even if I don't want to (breaching my right to control my life, because I purchased the car with my money, which I used some of my life to acquire).
Moreover, let us say that a deer with big antlers jumps in front of my vehicle, and I can hit the deer (and quite possibly be killed or at least gored by its antlers; it's happened a good bit to other people around where I live), or I can veer off the side of the road and hit a lamp-post, which I know I will likely survive (as I have a wonderfully safe car), but my car will be completely totalled. What should I do? My car has a right to exist, but if I veer off to the side, it will be destroyed, and I will have destroyed it. I have a right to live if I can, but if I want to live, I must destroy my car. So, do you suggest, as you do in an earlier post, that a car has a right to exist? If it does, then I would have to go gently into that good night in my hypothetical situation above (or, more likely, scream and then gurgle as I choke on my own blood). Or, if I have a right to live, and do not have to do this, then the car cannot have a right to exist. Rights are universals; they cannot be contextual, or else they aren't "rights." Everyone can have the right not to initiate force against others, as it leads to no contradictions. My car cannot have a right to exist, because it leads to a contradiction with a logically derived principle that I have a right to my life. Similarly with the ocean, trees, dirt, space shuttles, and asteroids. No inanimate object can possibly have rights, and most living organisms cannot have them either (like bacteria, sea sponges, fish, insects, etc., as they are not even conceivably entities with a conceptual faculty and de se operators). Btw, if my argument sounds similar-ish to the Objectivist argument, it is because I am heavily influenced by Objectivism (and may, though I am not certain, end up subscribing to that view fully). Joshua Job nanite1018 at gmail.com From possiblepaths2050 at gmail.com Tue Feb 9 00:05:36 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 8 Feb 2010 17:05:36 -0700 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: References: <201002080401.o1841ase025933@andromeda.ziaspace.com> Message-ID: <2d6187671002081605p10af2129ha5b58cd7672b0e41@mail.gmail.com> Max More wrote: I have long enjoyed the word "Schadenfreude", meaning pleasure derived from the misfortunes of others. [Note: I've enjoyed the *term* "schadenfreude", not the thing it refers to.] I got to thinking what word (if one could be coined) would mean "pleasure in the transcendence of the species" (i.e., transcendence of the human condition). It may be asking a lot of a word (even a German word) to do all this work, but I'd like to give it a try. >> I think of the German language as being very guttural and ugly to the native English speaker's ear. But perhaps that is because I have seen so many movies that had yelling Nazis in them! I recently watched the film "Downfall" and this opinion was only reinforced (but it is a great work of cinema). I think the languages we should be looking at in this "coin a new word quest" are English, Greek and Latin, among others. How about this... *Gaudiumhumanitastranscendia* In other words, "joy at humanity transcending!" "Gaudium-humanitas-transcendia!" I like it! Or another possibility would be "gaudium-humanitas-transcendere," but I think "transcendia" rolls off the tongue better. Taken from the Merriam-Webster online dictionary: Etymology: Middle English, from Latin *transcendere* to climb across, transcend, from *trans-* + *scandere* to climb. Date: 14th century >> Max, what do you think? Everyone else?
John Grigg : ) From rafal.smigrodzki at gmail.com Tue Feb 9 00:26:22 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Mon, 8 Feb 2010 19:26:22 -0500 Subject: [ExI] Personal conclusions In-Reply-To: References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: <7641ddc61002081626ocf720du28a01091680bb3d9@mail.gmail.com> On Sat, Feb 6, 2010 at 7:34 PM, Spencer Campbell wrote: > > Considering the thread's subject, it seems safe to burn some bytes on > personal information. So: I subscribe to panexperientialism myself. > Either everything has subjective experience, or nothing does. ### Eh, why? Does everything have the quality of "threeness"? Maybe there is a place in this world for an infinity of qualities, such as "being the number 3", "being a quark", or "feeling blue". Experience is just one of an infinity of flavors that parts of reality can have, and there is no reason to insist that all reality has it. There is nothing really special about the human qualia, except us being interested in them. And why wouldn't a syntax have qualia? After all, a human brain is a formal syntax, a concatenation of symbols (synaptic spikes, mostly) produced by chemical and electrical processes, so why not? Rafal From lacertilian at gmail.com Tue Feb 9 01:33:33 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 17:33:33 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <785979.10766.qm@web36505.mail.mud.yahoo.com> References: <785979.10766.qm@web36505.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > I think it fair to say that either x is real or else x is not real. I also understand that some people here do not distinguish or care about the difference between reality and ~reality. They have medications for that. :) I very nearly fall into that category. On closer inspection, however, I go further than distinguishing between real and ~real. I say ~real is synonymous with imaginary, but not with non-existent; there are real things which do not exist, and imaginary things which do. The only trouble is, I don't categorize things consistently. Is consciousness real or imaginary? Does it exist or not? I give different answers at different times. Perhaps I should define my terms better. Gordon Swobe : > But mountains do exist, no matter how you describe them. Yes? In the sense of large mounds of dirt and rocks, yes. I assume so. I've never stepped on an official mountain myself. So: mountains are real and existing. A toothache caused by a cavity is real, whereas one caused by hypnosis is imaginary. The second toothache certainly exists, for as long as the hypnosis lasts, but the existence of the first toothache is highly variable: if you aren't noticing it, it doesn't exist in that moment. Mine is caused by clenching my teeth while I sleep. I'm not sure what that means. Perhaps it's a complex toothache, in the algebraic sense. Gordon Swobe : > You've fallen into the hallucination. :) The blue pill again! Curse my deuteranomaly! Gordon Swobe : > The word "empirical" needs some disambiguation. > > On the one hand people mean by the word "empirical" something like "objectively existent facts in the world, which any observer can verify". But on the other hand people mean by it something like "facts which exist in the world, including for example the facts of an entity's subjective experience".
> > Clearly some things exist in the second sense that do not exist in the first sense. We can consider it an empirical fact that your dentist, for example, considers it true that you have a real toothache. Right? Well, no, wrong. I haven't told my dentist. And I don't think she's especially psychic. For the sake of argument I'm going to pretend I'm in the worldline where I did, though. Right. That would invoke the second sense: my dentist's belief is inaccessible to an outside observer, but perfectly obvious to the dentist herself. She has empirical evidence for her own belief; that is, she can experience it whenever she likes. This is all rather far away from the thread subject. It seems the original topic died off quite a while ago. From lacertilian at gmail.com Tue Feb 9 02:37:13 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 18:37:13 -0800 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: Message-ID: JOSHUA JOB : > Anyone who thinks everything is alive is, as Damien said, an idiot, or, I will add, severely epistemologically confused. Life also maintains homeostasis, grows, etc., things which a car does not do. Agreed, agreed. > Also, while rights, in my view, only apply to life, that is not a sufficient condition. The sufficient condition is that they are self-aware and rational, something cars, plants, etc. are not. As it stands now, agreed. I've been saying that a new definition, including unaware and irrational creatures or what have you, could be more useful. Not certainly, but possibly. Either way I'm confident that it would be self-consistent at the very least, and probably more so than our current conception of ethics. >> On the subject of awareness, use of "de se" designators, et cetera, >> Stefano Vaj points out that unconscious human beings retain the rights >> of their conscious selves. I would equate this with a house retaining >> the right not to be demolished even when its residents are away. > Perhaps, though I generally view that as simply the fact that from experience we know the person still exists, and must merely be "woken up" in order to resume reasoning, etc. And damaging the physical object that that entity "resides" in will cause damage to the entity's capacity to continue to exist, and is thus a violation of its rights (more on that in a moment). Thus we come to the problem of comatose patients and undesired fetuses. Invoking the possibility of consciousness or the expectation of future consciousness as a basis for inviolable rights leads very quickly to some major complications in a world where we can't predict the future with much accuracy.
"It is wrong to dump garbage in the ocean", but the ocean is not wronged if you dump garbage in it. >> You argue against cars having rights, and claim, indirectly, that >> doing so would cause a great many terrible unintended consequences. >> Request that you expound on a few of those consequences. You can >> choose something other than a car, if easier. (snip) > That is my basic view of rights. I don't see how an ocean or a car can have rights, because rights need wrongs, and wrongs need values, and values need rational conceptual entities that employ de se operators (and it needs those entities to exist in some manner). Normally I agree with you. But for now, I disagree vehemently! Rights need wrongs: granted. Wrongs need values: granted. Values need rational conceptual entities that employ de se operators: contested. Laws are loaded with values, and remain loaded with the same values long after whoever wrote them ceases to exist in any manner. It doesn't matter where the values come from. I could write a quick program that randomly generates grammatically correct value judgements ("it is wrong for jellyfish to vomit"), and it would naturally instantiate a whole litany of injustices in the world. The exigencies of survival are an equally valid source of values, and obviously far more natural. Of course I shouldn't have to point out that "more natural" does not automatically equal "better". It depends entirely on how you go about determining the ultimate good. On this list I generally go under the assumption that everyone agrees increasing extropy is the ultimate good, and nature plays only an incidental part there. > What sort of horrible consequences would come if you gave a car rights (like the right to exist, for example)? Well, besides dumping that whole structure of rights, one, in my mind, based on logic, you lose the power of its base of logic, and rejecting logic leaves open any number of possible groundings for "rights", like faith or racism or random whim, etc. And that is bad. OOH IT'S A SLIPPERY SLOPE, SOMEBODY GRAB THE SNOW TIRES. Couldn't resist. Logic can never serve as the basis of anything. It can only be used to elaborate a perfectly arbitrary set of assumptions to its logical conclusion. Faith and racism are arbitrary, but so is survival. > But lets be concrete about it. Let's! > If a car has a right to exist, then that means I can't scrap it if I don't want it (and own it). But if that is the case, that means I am forced to give it to someone else, even if I don't want to (breaching my right to control my life, because I purchased the car with my money, which I used some of my life to acquire). This seems like a perfectly sensible law, which I would not object to instituting as-is. It's just mandatory recycling really. > Moreover, let us say that a deer with big antlers jumps in front of my vehicle and I can hit the deer (and quite possibly be killed or at least gored by its antlers, its happened a good bit around where I live to other people), or I can veer off the side of the road, and hit a lamp-post which I know I will likely survive (as I have a wonderfully safe car), but my car will be completely totalled. What should I do? My car has a right to exist, but if I veer off to the side, it will be destroyed, and I will have destroyed it. I have a right to live if I can, but if I want to live, I must destroy my car. You and your friend are placed in adjacent cages, with no hope of escape except for two switches reachable only by you. 
One of them releases you, but kills your friend. The other kills you, but releases your friend. What do you do? Logic dictates weighing the absolute value of you against the absolute value of your friend, which is extraordinarily difficult if both of you are typical healthy human adults. If one of you is only a car, however, the choice should be obvious. > So, do you suggest, as you do in an earlier post, that a car has a right to exist? If it does, then I would have to go gently into that good night in my hypothetical situation above (or, more likely, scream and then gurgle as I choke on my own blood). Or, if I have a right to live, and do not have to do this, then the car cannot have a right to exist. Actually, I was careful to stress the difference between a brand-new car and a complete wreck. But go on. There's something for me to refute here, but now is not the time. > Rights are universals; they cannot be contextual, or else they aren't "rights." Everyone can have the right not to initiate force against others, as it leads to no contradictions. My car cannot have a right to exist, because it leads to a contradiction with a logically derived principle that I have a right to my life. Incorrect, as illustrated by the earlier case of the caged friends! In practice, rights which are guaranteed to be universal and inviolable are pretty much always either impossible or worthless. The right to life is a perfect example. Someday the universe will end, and your right will be null and void. Realistically, though, you can expect it to be violated a good deal before it comes to that. > Btw, if my argument sounds similar-ish to the Objectivist argument, it is because I am heavily influenced by Objectivism (and may, though I am not certain, end up subscribing to that view fully). Ayn Rand? That explains it! Objectivism, according to the Wikipedia at least, explicitly endorses happiness (and rational self-interest) as a valid gauge of morality. My position is known, from an earlier post, to be in stark contrast with this. But that's a mere quibble; I qualify unequivocally as an ethical subjectivist, and even border on metaphysical subjectivism at times. I'll have to post my Napoleon argument to the list soon. This is an argument that sparked the following statement an hour or two after I finished it: "Yesterday I would have said yes, but this morning Spencer shattered my objectivism". (This is paraphrased. He actually didn't say "Spencer", he said "Gomodo", for what I'm sure is a perfectly rational reason.) From spike66 at att.net Tue Feb 9 04:09:16 2010 From: spike66 at att.net (spike) Date: Mon, 8 Feb 2010 20:09:16 -0800 Subject: [ExI] funny headlines again In-Reply-To: <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com><602376332A22429C851265448732CF56@spike><8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com><75AD3D26B9E24931B21F63C77D5043FE@spike><4100958D1881411A8A5932681CE1980A@spike> <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> Message-ID: Rep. Murtha is being rembered: Why the heck wouldn't spell check have caught this? spike -------------- next part -------------- A non-text attachment was scrubbed...
Name: Outlook.jpg Type: image/jpeg Size: 38626 bytes Desc: not available From jonkc at bellsouth.net Tue Feb 9 06:18:54 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 9 Feb 2010 01:18:54 -0500 Subject: [ExI] Blue Brain Project. In-Reply-To: <4B707B25.1040006@lightlink.com> References: <480152.31985.qm@web113604.mail.gq1.yahoo.com> <4B707B25.1040006@lightlink.com> Message-ID: <89E309D6-1567-4498-9094-7D2A34050E6F@bellsouth.net> If I were God (and I just don't understand why I was never offered the job) I would offer Markram a blank check to pursue his research, but first I would ask him who he thinks is trying to do the same thing but is going about it in entirely the wrong way. If he said professor X, I would then go to professor X and give him a blank check too. But first I would ask him who, other than Markram, he thinks is trying to do the same thing but is going about it in entirely the wrong way. John K Clark From alito at organicrobot.com Tue Feb 9 09:49:49 2010 From: alito at organicrobot.com (Alejandro Dubrovsky) Date: Tue, 09 Feb 2010 20:49:49 +1100 Subject: [ExI] Blue Brain Project. In-Reply-To: <4B7079D4.1090103@satx.rr.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> <4B7079D4.1090103@satx.rr.com> Message-ID: <1265708989.6916.77.camel@localhost> On Mon, 2010-02-08 at 14:53 -0600, Damien Broderick: > A decade or more ago, Hugo de Garis was promising a robot CAM-brain > puddytat. He lost several sponsors along the way. Anyone know if he's > doing anything along those lines today? (No, I've never heard of > Google--how does that work? His own site informs us excitedly of things > due to happen in 2006 and 2007...) > By a decade ago, Robokoneko was practically dead IIRC. Goertzel meets with him when he goes to China, and I think they are working together on something. From what I gather, he's still doing evolvable neural networks and robotics. From pharos at gmail.com Tue Feb 9 10:34:21 2010 From: pharos at gmail.com (BillK) Date: Tue, 9 Feb 2010 10:34:21 +0000 Subject: [ExI] Blue Brain Project. In-Reply-To: <1265708989.6916.77.camel@localhost> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> <4B7079D4.1090103@satx.rr.com> <1265708989.6916.77.camel@localhost> Message-ID: On 2/9/10, Alejandro Dubrovsky wrote: > Goertzel meets with him when he goes to China, and I think they are > working together on something. From what I gather, he's still doing > evolvable neural networks and robotics. > > Aha! That's enough of a clue. Searching on -- Goertzel Garis China -- produces interesting results. There is a presentation by Garis at the AGI-09 conference here, if you have PowerPoint - or here in HTML version - BillK From mbb386 at main.nc.us Tue Feb 9 14:19:20 2010 From: mbb386 at main.nc.us (MB) Date: Tue, 9 Feb 2010 09:19:20 -0500 (EST) Subject: [ExI] funny headlines again In-Reply-To: References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com><602376332A22429C851265448732CF56@spike><8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com><75AD3D26B9E24931B21F63C77D5043FE@spike><4100958D1881411A8A5932681CE1980A@spike> <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> Message-ID: <36057.12.77.169.53.1265725160.squirrel@www.main.nc.us> > > Rep. Murtha is being rembered: > > > > Why the heck wouldn't spell check have caught this? spike > Perhaps someone entered "rembered" into the spellcheck dictionary additional "correct" list...
whatever that's called in their word processor. I found this happening sometimes at my job. Very annoying. Regards, MB From pharos at gmail.com Tue Feb 9 14:28:01 2010 From: pharos at gmail.com (BillK) Date: Tue, 9 Feb 2010 14:28:01 +0000 Subject: [ExI] funny headlines again In-Reply-To: <36057.12.77.169.53.1265725160.squirrel@www.main.nc.us> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com> <602376332A22429C851265448732CF56@spike> <8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com> <75AD3D26B9E24931B21F63C77D5043FE@spike> <4100958D1881411A8A5932681CE1980A@spike> <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> <36057.12.77.169.53.1265725160.squirrel@www.main.nc.us> Message-ID: On 2/9/10, MB wrote: > Perhaps someone entered "rembered" into the spellcheck dictionary additional > "correct" list... whatever that's called in their word processor. > > I found this happening sometimes at my job. Very annoying. > > Yea, well obviously it is the opposite of 'dismembered'. BillK From gts_2000 at yahoo.com Tue Feb 9 14:59:44 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 9 Feb 2010 06:59:44 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <523017.75621.qm@web36506.mail.mud.yahoo.com> --- On Sun, 2/7/10, Stathis Papaioannou wrote: >> The amoeba has no neurons or nervous system, Stathis, >> so "less conscious" is an understatement. It has no >> consciousness at all. > As far as you're concerned the function of the nervous > system - intelligence - which is due to the interactions between > neurons bears no essential relationship to consciousness. As I define intelligence, some of it is encoded into DNA. Every organism has some intelligence including the lowly amoeba. This hapless creature has enough intelligence to find food and replicate, but it has no idea of its own existence. > You believe that consciousness is a property of certain specialised > cells. Yes. And I think you most likely believe so also, except when you have reason to argue for the imaginary mental states of amoebas to support your theory about the imaginary mental states of computers. :) We can suppose theories of alien forms of consciousness that might exist in computers and in amoebas and in other entities that lack nervous systems, as you seem wont to do, but it seems to me that there we cross over the line from science to science-fiction. -gts From estropico at gmail.com Tue Feb 9 16:27:50 2010 From: estropico at gmail.com (estropico) Date: Tue, 9 Feb 2010 16:27:50 +0000 Subject: [ExI] ExtroBritannia: The future of politics. Can politicians prepare society for the major technology challenges ahead? With Darren Reynolds Message-ID: <4eaaa0d91002090827m7a6a3f49y18caf3f378a4cda8@mail.gmail.com> The future of politics. Can politicians prepare society for the major technology challenges ahead? With Darren Reynolds. Venue: Room 416, Birkbeck College, Torrington Square, London WC1E 7HX Date: Saturday 20th February 2010 Time: 2pm-4pm About the talk: With the rapidly accelerating changes in many fields of technology and society, there's a risk we'll wake up one morning in (say) 2015 and realise that our politicians have failed to play their role in anticipating and preparing for major risks and opportunities - politicians were focusing on issues of the past, and neglecting issues of the future. What should we be doing, now, to change the topic of political debate, to bring more focus on the transformative potential of emerging technologies?
How should the role of politicians evolve, over the near future, to improve the technological leadership of this country (and beyond)? And what role can non-politicians play, to improve the way society makes collective choices about the allocation of funding and resources? About the main speaker: Darren Reynolds is a pro-technology campaigner and local government councillor for the UK's Liberal Democrats. In his professional career he has helped many public and private sector organisations to introduce new technology and improve the way they work. Darren believes in putting choices in the hands of ordinary people and ensuring that tomorrow's technological developments flourish in a balanced regulatory framework. Darren is Chair of the Burnley Liberal Democrats. In 1998, Darren was part of the international team of philosophers and activists who produced the original "Transhumanist Declaration". Opportunity for additional speakers: The meeting will also feature a number of 5-minute pitches from audience members (agreed in advance) stating cases for specific changes in the allocation of the national budget - for example, which areas of research deserve a larger share of funding (and which areas deserve less). Anyone wishing to take part in this section of the meeting should get in touch asap. There's no charge to attend this meeting, and everyone is welcome. There will be plenty of opportunity to ask questions and to make comments. Discussion will continue after the event, in a nearby pub, for those who are able to stay. ** Why not join some of the Extrobritannia regulars for a drink and/or light lunch beforehand, any time after 12.30pm, in The Marlborough Arms, 36 Torrington Place, London WC1E 7HJ. To find us, look out for a table where there's a copy of the book "Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future" displayed. ** About the venue: Room 416 is on the fourth floor (via the lift near reception) in the main Birkbeck College building, in Torrington Square (which is a pedestrian-only square). Torrington Square is about 10 minutes' walk from either Russell Square or Goodge St tube stations. www.extrobritannia.blogspot.com www.uktranshumanistassociation.org From lacertilian at gmail.com Tue Feb 9 16:38:41 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 9 Feb 2010 08:38:41 -0800 Subject: [ExI] Personal conclusions In-Reply-To: <7641ddc61002081626ocf720du28a01091680bb3d9@mail.gmail.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> <7641ddc61002081626ocf720du28a01091680bb3d9@mail.gmail.com> Message-ID: Rafal Smigrodzki : > ### Eh, why? Does everything have the quality of "threeness"? Maybe > there is a place in this world for an infinity of qualities, such as > "being the number 3", "being a quark", or "feeling blue". Experience > is just one of an infinity of flavors that parts of reality can have, > and there is no reason to insist that all reality has it. The trick is that every other quality is pretty much meaningless without the corresponding experience of that quality. The universe doesn't recognize three, and three doesn't recognize itself. Something else has to experience the threeness of three. Otherwise it isn't even worth considering, regardless of any potential truth in the statement! My reason to insist that all reality has subjective experience is simply this: I think of myself as a machine, but I don't feel like a machine.
I feel like a conscious entity, which seems like a fundamentally different sort of thing. So I have to resolve the discrepancy between these two views if I want to stay sane. There are only two options that I can think of: one is that consciousness is an epiphenomenon, purely an illusion generated by the underlying workings of reality. The other is that consciousness is a real substance built into the fabric of spacetime, as atoms were purported to be, and nucleons and quarks long after them. I'm inclined to believe both of these things, and that they only appear mutually exclusive to my limited mortal logic. It seems to me that a perfectly coherent interpretation of reality could be made from either stance, just as easily as light can be described as waves or particles. Two ways of phrasing the same incomprehensible thing. Supporting that idea to an extent, in a very inconvenient way, both conceptions run into the very same problem. If consciousness is an epiphenomenon: of what? If consciousness is a substance: what attracts it? The only things I see in a brain that I don't see in a stone are intelligence and understanding, and only the latter is unique to brains in general. Both of these are obvious epiphenomena to my mind, which means there must be some simpler phenomenon underlying them. But what? The ability to process information? That's only the ability to exhibit a non-random response to stimuli, and everything in the universe does that constantly. Thus: panexperientialism. My hand is forced. (Everything here is rather compressed, taking all of my personal axioms as universal axioms. So it would be easy to disagree with. I'm not looking to convince anyone here, just giving a window into my worldview. The professorial cadence is an epiphenomenon of my personality.) From gts_2000 at yahoo.com Tue Feb 9 17:04:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 9 Feb 2010 09:04:21 -0800 (PST) Subject: [ExI] Semiotics and Computability Message-ID: <782068.69808.qm@web36507.mail.mud.yahoo.com> --- On Mon, 2/8/10, Spencer Campbell wrote: > Right. That would invoke the second sense: my dentist's > belief is inaccessible to an outside observer, but perfectly obvious > to the dentist herself. She has empirical evidence for her own > belief; that is, she can experience it whenever she likes. I want to get at the idea here that to both you and to your dentist, your toothache exists as an empirical fact of reality. Your dentist may have no interest in philosophy but she will operate on that philosophical assumption: she might ask you, "Where does it hurt, Spencer? Does it hurt more when I press here?" and so on. Your dentist, presumably an educated woman of science, approaches the subject of your toothache as she would any other empirical fact of reality. She does this even though neither she nor anyone else can feel the pain of your toothache. We should, I think, emulate that approach to the subjective mental states of others in general. Mental states really do exist as empirical facts. They differ from other empirical facts such as Super Bowl games and gumball machines only insomuch as they have subjective ontologies; instead of existing in the public domain, as it were, someone in particular must "have" them. > This is all rather far away from the thread subject. I think it relates to the subject in that some people seem philosophically inclined to reduce the first-person mental to the third-person physical.
Fearing any association with mind/matter dualism, they reject the notion of consciousness and try to explain subjective first-person facts in objective third-person terms. They imagine that if only they could derive a complete objective scientific description of a toothache, they would then know everything one can know about the subject of toothaches. But nothing in their descriptions will capture what it feels like to have one. -gts From nanite1018 at gmail.com Tue Feb 9 19:25:42 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Tue, 9 Feb 2010 14:25:42 -0500 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: Message-ID: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> > On Feb 8, 2010, at 9:37 PM, Spencer Campbell wrote: > Thus we come to the problem of comatose patients and undesired > fetuses. Invoking the possibility of consciousness or the expectation > of future consciousness as a basis for inviolable rights leads very > quickly to some major complications in a world where we can't predict > the future with much accuracy. A fetus is not yet a person, nor has ever been a person (as it is not nor ever has been a rational conceptual entity). So it cannot have any rights. At least until 34 weeks, they cannot possibly be regarded as people, at least from what I've read of neural development in the brain (the neocortex doesn't connect up until around then). After that it gets a little more fuzzy. As for comatose patients, if they have the capacity for brain function (i.e., they are not totally brain damaged), then they count as people, though others have to make decisions for them as that is a state which it is very difficult to recover from. If they are brain dead, then that entity no longer exists nor can it exist again (or at least, if it could, the body itself is likely irrelevant), and so no longer has rights. > "It is wrong to dump garbage in the ocean", but the ocean is not > wronged if you dump garbage in it. I am saying that it cannot be wrong if it does not violate the nature of other conscious entities. The ocean cannot be wronged, only rational conceptual "self"-aware entities can be, because they are the things that can conceivably understand right and wrong. So it can't be wrong (as in, a violation of rights) to do something unless it infringes on the rights of other such entities. I argue that the only way to do this is to infringe on their ability to make decisions for themselves based on the facts of reality (so no force or fraud). Ocean dumping does not do this (unless someone owns the ocean or part of it, etc.), so it cannot be wrong, from the perspective of a discussion of rights. > Laws are loaded with values, and remain loaded with the same values > long after whoever wrote them ceases to exist in any manner. It > doesn't matter where the values come from. I could write a quick > program that randomly generates grammatically correct value judgements > ("it is wrong for jellyfish to vomit"), and it would naturally > instantiate a whole litany of injustices in the world. > The exigencies of survival are an equally valid source of values, and obviously far more natural. They aren't equally valid; they have a basis in reality. Statements have to be connected to reality in some way, or they mean literally nothing. "It is wrong for jellyfish to vomit" is totally disconnected from reality, as "wrong" can't be applied objectively to jellyfish vomit, but only in relation to entities for which the words "right" and "wrong" can have any meaning.
Laws cease to exist (as laws) when there are no people who obey or enforce them. They apply to entities which can understand the meaning of "law." And yes, they are loaded with values, but they do not retain those values when they cease to be laws. They are, at best, a description of values from the past. But they are no longer laws, and have nothing to do with rights today (as those laws are no longer in effect). > Of course I shouldn't have to point out > that "more natural" does not automatically equal "better". It depends > entirely on how you go about determining the ultimate good. On this > list I generally go under the assumption that everyone agrees > increasing extropy is the ultimate good, and nature plays only an > incidental part there. The idea of rights without selves (including rights of cars, the ocean to not be dumped in or vomited in by jellyfish) cannot possibly increase extropy. To demonstrate, I'll quote the definition of extropy from the ExI website: "Extropy- The extent of a living or organizational system's intelligence, functional order, vitality, and capacity and drive for improvement." By making the life of rational conceptual "self"-aware entities the basis for your system of rights, you establish vitality and capacity/drive for improvement at the center of your system of rights. Along with that, you generate order (as in, spontaneous order, haha) by barring the initiation of force, and you set in place a framework that creates a strong drive for self-improvement, including intellectual improvement, in everyone in the society. Granting cars and the ocean rights has nothing to do with increasing intelligence, improving life, or increasing order or vitality of a living/organizational system. So I don't see how, in the context of extropy, one could argue that a system of rights not based on selves (and more properly, rational self-interest) could be extropic. > OOH IT'S A SLIPPERY SLOPE, SOMEBODY GRAB THE SNOW TIRES. > > Couldn't resist. That's fine, I busted a gut at 1:30am when I read this. Tickled my funny bone, haha. > Logic can never serve as the basis of anything. It can only be used to > elaborate a perfectly arbitrary set of assumptions to its logical > conclusion. Faith and racism are arbitrary, but so is survival. True, I'm sorry. Reason is the basis, i.e. a combination of information about reality coupled with the application of logic/reason on that information. In my case, it is the nature of life, rational conceptual self-aware entities in particular, that leads to my conclusion that the only right you have is to not have force initiated against you. Everything else is in the province of personal morality (what should or shouldn't I do, what should my goals be, etc.), not essential right. That needs a self in order to work. Or at least, something equivalent, if you don't want to use "self". > You and your friend are placed in adjacent cages, with no hope of > escape except for two switches reachable only by you. One of them > releases you, but kills your friend. The other kills you, but releases > your friend. What do you do? > ... > Incorrect, as illustrated by the earlier case of the caged friends! In > practice, rights which are guaranteed to be universal and inviolable > are pretty much always either impossible or worthless. > > The right to life is a perfect example. Someday the universe will end, > and your right will be null and void. Realistically, though, you can > expect it to be violated a good deal before it comes to that.
Okay, I gave a bad example, because rights, in my view, are based on life, and if life is impossible (like two caged friends where one must die), rights don't really apply anymore (all options lead to death). And perhaps I should not have gotten concrete in the way I proposed. Somewhat more abstract would be: I buy a car, but I don't want it after a month. In fact, I really hate it, because, say, my hypothetical girlfriend had a heart attack in it because, I don't know, she slipped over the middle line and had a good scare. Whatever. But I want it destroyed. The law prevents me from doing so. I do it anyway. Then I go to jail, for a good while, because I violated the rights of a car. To me, that makes no sense at all. The car isn't alive. I committed no wrong against it. I merely crushed it and sold it for the metal it had in it. The only way you can justify this (or try to, anyway) is if you relate the destruction of the car to me harming other people (like, they didn't get to have it, though in my opinion, that isn't harm, but that's beside the point). Any "right" would have to be connected to something that can be "wronged". Without "wrongs", i.e., actions where something is wronged, you have ludicrous situations such as the above. Moreover, you don't have a right to life. You have a right to your life. Big difference. In the one case, I have a right to never die. In the other, no one else can take my life from me. The universe isn't a person, so it can't "take" my life. I must die eventually (or do I? bum bum bum...) but I can be guaranteed the right not to be killed (unless I'm in a special life-boat type situation). > But that's a mere quibble; I qualify unequivocally as an ethical > subjectivist, and even border on metaphysical subjectivism at times. > I'll have to post my Napoleon argument to the list soon. This is an > argument that sparked the following statement an hour or two after I > finished it: "Yesterday I would have said yes, but this morning > Spencer shattered my objectivism". I'd like to hear it. Joshua Job nanite1018 at gmail.com From cluebcke at yahoo.com Tue Feb 9 18:18:06 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 9 Feb 2010 10:18:06 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <782068.69808.qm@web36507.mail.mud.yahoo.com> References: <782068.69808.qm@web36507.mail.mud.yahoo.com> Message-ID: <275932.16385.qm@web111206.mail.gq1.yahoo.com> Hi, noob lurker (me) is unlurking. > ...they reject the notion of consciousness and try to explain subjective first-person facts in objective third-person terms. You'll hopefully forgive my newness to some of the topics covered in this amazing group, but is it the case that an attempt to objectively describe first-person facts requires or equates to a rejection of the notion of consciousness? > But nothing in their descriptions will capture what it feels like to have one. A description is intended to describe, not to emulate. A scientific description of a toothache--including the brain states involved--no more fails because it cannot transmit pain, than a blueprint of a boat fails because it does not sail across the ocean, or impose upon the reader the smell of brine and a difficulty in keeping one's balance. How would one capture a feeling?
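Christopher's point about description versus emulation can be made concrete with a toy sketch. This is nobody's proposed model of pain; every field name below is invented for illustration. It only shows that a program can hold an arbitrarily detailed description of a toothache while nothing about storing or querying that description is itself pain-like.

from dataclasses import dataclass

@dataclass
class ToothacheDescription:
    """A description of a toothache: data about pain, not pain itself."""
    tooth: str                    # e.g. "lower left molar"
    intensity_0_to_10: float      # the patient's self-reported pain scale
    nociceptor_firing_hz: float   # invented stand-in for the physiology
    worse_under_pressure: bool    # the dentist's "does it hurt when I press?"

# The description can be made as detailed as we like,
ache = ToothacheDescription("lower left molar", 7.5, 90.0, True)

# and we can answer third-person questions about it,
print(f"Hurts under pressure: {ache.worse_under_pressure}")

# but running these lines transmits no pain to anyone:
# the blueprint of the boat does not sail.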
From scerir at libero.it Tue Feb 9 21:09:47 2010 From: scerir at libero.it (scerir) Date: Tue, 9 Feb 2010 22:09:47 +0100 (CET) Subject: [ExI] quantum brains Message-ID: <15438936.1102881265749787403.JavaMail.defaultUser@defaultHost> > But I've read arxiv papers recently arguing that photosynthesis > functions via entanglement, so something that basic might be operating > in other bio systems. > Damien Broderick http://arxiv.org/abs/1001.5108 http://www.nature.com/nature/journal/v463/n7281/full/nature08811.html http://blogs.discovermagazine.com/cosmicvariance/2010/02/05/quantum-photosynthesis/ From gts_2000 at yahoo.com Tue Feb 9 21:27:17 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 9 Feb 2010 13:27:17 -0800 (PST) Subject: [ExI] Semiotics and Computability Message-ID: <740405.96745.qm@web36503.mail.mud.yahoo.com> --- On Tue, 2/9/10, Christopher Luebcke wrote: Welcome Christopher. > You'll hopefully forgive my newness to some of the topics > covered in this amazing group, but is it the case that an > attempt to objectively describe first-person facts requires > or equates to a rejection of the notion of consciousness? I had asserted that the world contains both subjective and objective facts or states-of-affairs. The question came up about how or why subjective facts like toothaches can qualify as empirical facts.
Often when we speak of "empirical facts" we mean something like "objectively existent facts, verifiable by any observer". I have no objection to that use of the word, but if we understand empirical only in that limited sense then we may find ourselves dismissing real first-person subjective facts as non-empirical and thus somehow unreal or less real than other facts. I contend that the word empirical often does and should also apply to subjective first-person facts. There exists for example an actual fact of the matter whether or not you feel hungry at this moment. I cannot know that fact without an honest report from you, but this in no way disqualifies it from having status as a real empirical fact. The fact of your feeling hungry or not has as much reality as does anything objectively verifiable. -gts From spike66 at att.net Tue Feb 9 21:44:14 2010 From: spike66 at att.net (spike) Date: Tue, 9 Feb 2010 13:44:14 -0800 Subject: Re: [ExI] Semiotics and Computability In-Reply-To: <275932.16385.qm@web111206.mail.gq1.yahoo.com> References: <782068.69808.qm@web36507.mail.mud.yahoo.com> <275932.16385.qm@web111206.mail.gq1.yahoo.com> Message-ID: > ...On Behalf Of Christopher Luebcke > Subject: Re: [ExI] Semiotics and Computability > > Hi, noob lurker (me) is unlurking... Hi Chris, welcome! Do tell us something about you, if you wish. Otherwise, welcome anyway. {8-] spike From cluebcke at yahoo.com Tue Feb 9 23:27:11 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 9 Feb 2010 15:27:11 -0800 (PST) Subject: Re: [ExI] Semiotics and Computability In-Reply-To: <740405.96745.qm@web36503.mail.mud.yahoo.com> References: <740405.96745.qm@web36503.mail.mud.yahoo.com> Message-ID: <519473.46888.qm@web111205.mail.gq1.yahoo.com> Thank you for the background and the welcome. > There exists for example an actual fact of the matter whether or not you feel hungry at this moment. I cannot know that fact without an honest report from you, but this in no way disqualifies it from having status as a real empirical fact. The fact of your feeling hungry or not has as much reality as does anything objectively verifiable. I'm not convinced that a subject's sense of "feeling hungry" cannot be objectively verified. I can verify with perfect accuracy whether a light is red without having to experience "purple", by using instruments; in the same way I expect that it will shortly (in historical terms) be possible to verify, via real-time monitoring of the brain, whether a subject is experiencing "feeling hungry", without the observer needing to experience an identical state of hunger. I could quite easily be wrong, of course.
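Christopher's imagined instrument-based check can be sketched in code, with the loud caveat that everything below is invented for illustration: the two inputs, the cutoffs, and the pretence that hunger reduces to two numbers. The inputs are at least physiologically motivated (the hypothalamus is involved in regulating hunger, and the hormone ghrelin rises before meals), but a real decoder would be a trained statistical model, not a two-line rule.

def looks_hungry(hypothalamic_activation: float, ghrelin_pg_per_ml: float) -> bool:
    """Toy third-person 'hunger detector' working from instrument readings alone.

    The observer never shares the subject's experience; the function only
    encodes a correlation between readings and honest self-reports.
    """
    # Cutoffs are arbitrary placeholders, not empirical values.
    return hypothalamic_activation > 0.7 and ghrelin_pg_per_ml > 600.0

print(looks_hungry(0.9, 812.0))  # True: verified without any shared experience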
From cluebcke at yahoo.com Tue Feb 9 23:36:23 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 9 Feb 2010 15:36:23 -0800 (PST) Subject: Re: [ExI] Semiotics and Computability In-Reply-To: References: <782068.69808.qm@web36507.mail.mud.yahoo.com> <275932.16385.qm@web111206.mail.gq1.yahoo.com> Message-ID: <618013.65100.qm@web111215.mail.gq1.yahoo.com> Thanks Spike, Nothing terribly interesting to tell. I'm a software developer, science and space enthusiast, and armchair philosopher (I opted not to go pro after suffering through a semester-long graduate-level course on utilitarianism). I don't often identify myself as a member of any particular group that isn't defined by biology, but broadly I have strong transhumanist leanings. I'm kinda non-committal there as I'm really just now getting deep into the subject, and I don't think I have a good enough definition of "transhumanist" to know if I fit the bill. I'm also *very* interested in finding groups of very smart people who are working on the kinds of problems and projects that really excite me, and hanging on like a stubborn barnacle, leeching one small unit of knowledge at a time while hoping not to excessively irritate my hosts. Lastly, I'm deeply interested in the future and would quite like to continue to be a part of it :) - Chris From jrd1415 at gmail.com Wed Feb 10 05:28:22 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Tue, 9 Feb 2010 22:28:22 -0700 Subject: Re: [ExI] quantum brains In-Reply-To: <15438936.1102881265749787403.JavaMail.defaultUser@defaultHost> References: <15438936.1102881265749787403.JavaMail.defaultUser@defaultHost> Message-ID: http://arxivblog.com/?p=370 Looks like quantum effects may be ubiquitous. On Tue, Feb 9, 2010 at 2:09 PM, scerir wrote: >> But I've read arxiv papers recently arguing that photosynthesis >> functions via entanglement, so something that basic might be operating >> in other bio systems.
>> Damien Broderick > > http://arxiv.org/abs/1001.5108 > http://www.nature.com/nature/journal/v463/n7281/full/nature08811.html > http://blogs.discovermagazine.com/cosmicvariance/2010/02/05/quantum-photosynthesis/ From stathisp at gmail.com Wed Feb 10 07:18:00 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 10 Feb 2010 18:18:00 +1100 Subject: Re: [ExI] Semiotics and Computability In-Reply-To: <523017.75621.qm@web36506.mail.mud.yahoo.com> References: <523017.75621.qm@web36506.mail.mud.yahoo.com> Message-ID: On 10 February 2010 01:59, Gordon Swobe wrote: >> As far as you're concerned the function of the nervous >> system - intelligence - which is due to the interactions between >> neurons bears no essential relationship to consciousness. > As I define intelligence, some of it is encoded into DNA. Every organism has some intelligence including the lowly amoeba. This hapless creature has enough intelligence to find food and replicate but it has no idea of its own existence. I'm not arguing for amoeba consciousness, although I think that consciousness and intelligence are roughly proportional to each other and if anything has any intelligence or consciousness there must be a gradation between the amoeba, the human and the Jupiter brain. >> You believe that consciousness is a property of certain specialised >> cells. > Yes. And I think you most likely believe so also, except when you have reason to argue for the imaginary mental states of amoebas to support your theory about the imaginary mental states of computers. :) > > We can suppose theories of alien forms of consciousness that might exist in computers and in amoebas and in other entities that lack nervous systems, as you seem wont to do, but it seems to me that there we cross over the line from science to science-fiction. The most important property of the nervous system is its ability to process information. Brainstem functions, subcellular functions and low level cortical functions do not manifest as intelligence nor as consciousness. However, you believe that consciousness is only contingently related to intelligence, and you have also implied that the NCC is something other than the complex pattern of neural firings, since that can be reproduced by a computer. Thus there is no logical reason for you to insist that consciousness should be attached to nervous systems. It could be something that is secreted by neurons not associated in systems, or in non-neural cells. -- Stathis Papaioannou From stathisp at gmail.com Wed Feb 10 07:22:18 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 10 Feb 2010 18:22:18 +1100 Subject: Re: [ExI] Semiotics and Computability In-Reply-To: <519473.46888.qm@web111205.mail.gq1.yahoo.com> References: <740405.96745.qm@web36503.mail.mud.yahoo.com> <519473.46888.qm@web111205.mail.gq1.yahoo.com> Message-ID: On 10 February 2010 10:27, Christopher Luebcke wrote: > Thank you for the background and the welcome. > >> There exists for example an actual fact of the matter whether or not you feel hungry at this moment. I cannot know that fact without an honest report from you, but this in no way disqualifies it from having status as a real empirical fact. The fact of your feeling hungry or not has as much reality as does anything objectively verifiable.
> > I'm not convinced that a subject's sense of "feeling hungry" cannot be objectively verified. I can verify with perfect accuracy whether a light is red without having to experience "purple", by using instruments; in the same way I expect that it will shortly (in historical terms) be possible to verify, via real-time monitoring of the brain, whether a subject is experiencing "feeling hungry", without the observer needing to experience an identical state of hunger. We can objectively verify it if we make a correlation between a self-described mental state and a brain state, but only if we have that sort of brain state ourselves can we guess as to what it is like. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Feb 10 13:41:15 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 10 Feb 2010 05:41:15 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <557161.41950.qm@web36505.mail.mud.yahoo.com> --- On Wed, 2/10/10, Stathis Papaioannou wrote: > I'm not arguing for amoeba consciousness But you did make such an argument: when I asked you if a digital computer had conscious understanding of a symbol by virtue of associating that symbol with an image file, you compared the computer's conscious understanding to that of an amoeba, suggesting that both amoebas and computers have a wee bit of consciousness. Now you seem to agree that amoebas have no consciousness. So I'll ask you again: do you agree that digital computers cannot obtain conscious understanding of symbols by virtue of associating those symbols with image data? > The most important property of the nervous system is its > ability to process information. So they say. > However, you believe that consciousness is only > contingently related to intelligence, I believe consciousness evolved because it enhances intelligence. > and you have also implied that the NCC is something other than the > complex pattern of neural firings, since that can be reproduced by a > computer. Computers can reproduce just about any pattern. But a computerized pattern of a thing does not equal the thing patterned. I can for example reproduce the pattern of a tree leaf on my computer. That digital leaf will not have the properties of a real leaf. No matter what natural things we simulate on a computer, the simulations will always lack the real properties of the things simulated. Digital simulations of things can do no more than *simulate* those things. It mystifies me that people here believe simulations of organic brains should somehow qualify for an exception to this rule. Neuroscientists should someday have at their disposal perfect digital simulations of brains to use as tools for doing computer-simulated brain surgeries. But according to you and some others, those digitally simulated brains will have consciousness and so might qualify as real people. This would mean medical students will have access to computer simulations of hearts to do simulated heart surgeries, but they won't have access to the same kinds of computerized tools for doing simulated brain surgeries. Those darned computer simulated brains won't sign the consent forms. People like me will want to do the simulated surgeries anyway. The Society for the Prevention of Simulated Cruelty to Simulated Brains will oppose me. 
-gts From jonkc at bellsouth.net Wed Feb 10 16:17:19 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 10 Feb 2010 11:17:19 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <557161.41950.qm@web36505.mail.mud.yahoo.com> References: <557161.41950.qm@web36505.mail.mud.yahoo.com> Message-ID: Since my last post Gordon Swobe has posted 4 times: > when I asked you [Stathis Papaioannou] if a digital computer had conscious understanding of a symbol by virtue of associating that symbol with an image file, you compared the computer's conscious understanding to that of an amoeba, suggesting that both amoebas and computers have a wee bit of consciousness. I can make guesses but there is no way I can know or will ever know if an amoeba is conscious; hell there is no way I can know if a rock is conscious, or even if Gordon Swobe is. Of course some things don't act as if they were conscious, as in people when they are sleeping; the universal rule of thumb for detecting consciousness is intelligent action, even Gordon Swobe uses this rule every day of his life and every hour of his day except when he's sleeping or debating the matter on the Extropian list. I personally wouldn't endow the way an amoeba behaves with the grand title "intelligence", but reasonable people can differ on this matter; Swobe takes a somewhat more liberal view and thinks amoebas are intelligent. What is NOT reasonable is claiming that it is intelligent but not conscious. That is complete gibberish from an evolutionary viewpoint. > do you agree that digital computers cannot obtain conscious understanding of symbols by virtue of associating those symbols with image data? No I don't agree, and I must confess to having a certain feeling of contempt for the idea. I think contempt is the proper word for something that is not only inconsistent with the discoveries made by science but is also inconsistent with the way we live our daily lives when we are not debating on the Extropian List. > I believe consciousness evolved because it enhances intelligence. And in a magnificent demonstration of doublethink Swobe also believes that a behavioral demonstration like the Turing test cannot detect consciousness. > Digital simulations of things can do no more than *simulate* those things. Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did. John K Clark From pharos at gmail.com Wed Feb 10 16:20:37 2010 From: pharos at gmail.com (BillK) Date: Wed, 10 Feb 2010 16:20:37 +0000 Subject: [ExI] Google Buzz has arrived for Gmail users Message-ID: Gmail users will soon notice a Buzz folder appearing next to their Inbox. (You need to be using the newer version of Gmail, not the simpler older version). So far it seems to be like an instant messaging system to which you can add pictures or videos. It has 'Friends' and 'Followers' to organise the distribution of your Buzz messages. Messages appear immediately, so a conversation can be maintained, or more like a conference call with selected 'Friends'. Google claim that their anti-spam software and their 'Don't like' button will cut down on the amount of garbage that these social systems usually generate. As with all these sharing systems, you should immediately edit your Buzz profile to make sure you are only sharing what you want to share.
;) Commentators are hinting at Google trying to replace Facebook, Twitter, etc. but we will have to wait and see how many people use it. There are more features expected to be rolled out in the coming weeks. More info here: BillK From steinberg.will at gmail.com Wed Feb 10 18:07:04 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 10 Feb 2010 13:07:04 -0500 Subject: [ExI] Mary passes the test Message-ID: <4e3a29501002101007i46d887eeqe3ce2f150dd60b86@mail.gmail.com> Upon emerging from her room, The Scientists present Mary with two large colored squares. One is red, and one is blue. I wholeheartedly believe Mary will be able to tell which one is red. When asked why, Mary says "I had a feeling that's what it would look like." From steinberg.will at gmail.com Wed Feb 10 18:48:57 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 10 Feb 2010 13:48:57 -0500 Subject: [ExI] Mary passes the test. Message-ID: <4e3a29501002101048t3fa58f24t9c2b5e834559afdc@mail.gmail.com> "Mary the Color Scientist" is often used as an iron defense of the magicalness or platonic reality of qualia. To some, it is enough to say that Mary learns something COMPLETELY new about the color red when she sees it, giving the sense that, even for all fact and physicality, something is missing until Mary sees that color. Why is this thesis so readily accepted? I think that qualia, however weird or outside of normal fact, is, at heart, inextricable from those facts. For example: Upon emerging from her room, The Scientists present Mary with two large colored squares. One is red, and one is blue. I wholeheartedly believe Mary will be able to tell which one is red. When asked why, Mary says "I had a feeling that's what it would look like." I would say that Mary, after learning about red and its neural pathways and physical properties, is able to form some conception of the color in her mind's eye, regardless of whether it has been presented to her, because red is "in there" somewhere. Here is another example: The Scientists invent a fake color called "bread." They teach Mary all there is to know about red and about bread. When asked which one is the real color, Mary tells the scientists that it is obviously red, because her mind is able to, inexplicably but surely, delve deeper into the understanding of this. My point is that there is, at least to me, something in the facts that will, however minutely, betray the idea of redness to her. There was recently an article in Discover about the loss of a man's mind's eye wherein he exhibited some blindsight-like phenomena, knowing without seeing. I believe that, in this sense, Mary can KNOW all there is about red, INCLUDING its qualic perception. She still may gain something when she sees the red, but it is, in this case, akin to brushing the dirt off a treasure chest she has found in a hole to reveal it in its totality. The hole is there and some of the dirt is scraped away, and she has a very good idea of what red is. Seeing it merely puts her mind at ease, knowing she was right all along.
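Will's claim, that stored facts alone could let Mary pick out the red square, has a trivial mechanical analogue. The sketch below is only an analogy, not a model of Mary: a program that has never 'experienced' anything identifies red from recorded physics. The wavelength bands are approximate, and the lookup structure is invented for the example.

# Stored physical facts about color, the kind Mary could learn in her room.
# Bands are approximate, in nanometres.
COLOR_BANDS = {
    "blue": (450, 495),
    "green": (495, 570),
    "red": (620, 750),
}

def name_from_physics(peak_wavelength_nm: float) -> str:
    """Pick a color name from a measured wavelength; no quale required."""
    for name, (low, high) in COLOR_BANDS.items():
        if low <= peak_wavelength_nm < high:
            return name
    return "unknown"

# The two squares, described physically rather than experienced:
print(name_from_physics(680.0))  # red
print(name_from_physics(470.0))  # blue

Whether this counts as knowing what red looks like is of course the whole dispute; the sketch shows only that identification does not obviously require it.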
From max at maxmore.com Wed Feb 10 18:48:48 2010 From: max at maxmore.com (Max More) Date: Wed, 10 Feb 2010 12:48:48 -0600 Subject: [ExI] The Future of Markets Message-ID: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> An excellent piece by the former head of Oxford University's business school: The Future of Markets John Kay 20 October 2009, Wincott Foundation http://www.johnkay.com/2009/10/20/the-future-of-markets/ ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From lacertilian at gmail.com Wed Feb 10 21:07:39 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 13:07:39 -0800 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> Message-ID: JOSHUA JOB : > A fetus is not yet a person, nor has ever been a person (as it is not nor > ever has been a rational conceptual entity). So it cannot have any rights. > At least until 34 weeks, they cannot possibly be regarded as people, at > least from what I've read of neural development in the brain (the neocortex > doesn't connect up until around then). After that it gets a little more > fuzzy. As for comatose patients, if they have the capacity for brain > function (i.e., they are not totally brain damaged), then they count as > people, though others have to make decisions for them as that is a state > which it is very difficult to recover from. If they are brain dead, then > that entity no longer exists nor can it exist again (or at least, if it > could, the body itself is likely irrelevant), and so no longer has rights. You speak as though these issues have long since been resolved! In reality, all you're giving me are opinions: facts which are currently in contention. Not to say I don't agree with you, but we can't brush the problem of potential consciousness under the table just because neither of us happens to consider it a major problem. It simply wouldn't be prudent. > I am saying that it cannot be wrong if it does not violate the nature of > other conscious entities. The ocean cannot be wronged, only rational > conceptual "self"-aware entities can be, because they are the things that > can conceivably understand right and wrong. So it can't be wrong (as in, a > violation of rights) to do something unless it infringes on the rights of > other such entities. Well, I think we've taken this argument as far as it can go then. Clearly you refuse to even entertain the idea that rights could be anything other than personal rights. I suppose it's an issue of semantics. What if I say cars could have an explicitly-defined correct way to be treated, that is, could be covered by a system of "corrects"? Surely you would agree that, looking at a galaxy-spanning machine devoid of anything remotely resembling rational self-aware thought, we could arbitrarily suppose that it is "meant" to further some purpose and from that assumption determine how well it is operating -- that is, how correctly, or how rightly. I'll start a new thread for the Napoleon argument later on, if I have the time.
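Spencer's "system of corrects" admits a minimal mechanical reading. In the sketch below everything is stipulated for illustration -- the sorter, the purpose, the scoring rule -- and the machine being scored is aware of none of it, which is exactly the point at issue. Coin diameters are approximate US Mint figures.

# A stipulated purpose: sort coins into bins by diameter. The machine has
# no idea this is its "purpose"; the purpose is imposed entirely from outside.
INTENDED_BIN = {"dime": "small", "penny": "medium", "quarter": "large"}

def mechanical_sort(diameter_mm: float) -> str:
    """Categorize with no awareness whatsoever: a coin sorter in three lines."""
    if diameter_mm < 18.5:
        return "small"
    return "medium" if diameter_mm < 20.0 else "large"

def correctness(observed):
    """How 'rightly' the machine operates, relative to the stipulated purpose."""
    hits = sum(observed[coin] == INTENDED_BIN[coin] for coin in INTENDED_BIN)
    return hits / len(INTENDED_BIN)

observed = {
    "dime": mechanical_sort(17.91),     # US dime, ~17.91 mm
    "penny": mechanical_sort(19.05),    # US penny, ~19.05 mm
    "quarter": mechanical_sort(24.26),  # US quarter, ~24.26 mm
}
print(correctness(observed))  # 1.0: operating fully "correctly", zero selfhood

Whether a score like this deserves the word "right" without an evaluator in the loop is what the exchange below goes on to dispute.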
From lacertilian at gmail.com Wed Feb 10 21:44:15 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 13:44:15 -0800 Subject: Re: [ExI] How not to make a thought experiment In-Reply-To: References: <557161.41950.qm@web36505.mail.mud.yahoo.com> Message-ID: John Clark : > Gordon Swobe : >> I believe consciousness evolved because it enhances intelligence. > > And in a magnificent demonstration of doublethink Swobe also believes that a > behavioral demonstration like the Turing test cannot detect consciousness. This is actually a very good point. I don't have clearly articulated beliefs regarding the relationship between consciousness and evolution, myself, but clearly Gordon does. Two propositions and two conclusions, based on the inferred worldview of John Clark, to be accepted, rejected, or amended by the reader as seems appropriate:

P(a): All behavior is measurable.
P(b): Evolution influences behavior and only behavior.

C(c): If and only if consciousness is measurable, it is subject to evolution.
C(d): If consciousness is subject to evolution, it is necessarily measurable.

From spike66 at att.net Wed Feb 10 21:57:00 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 13:57:00 -0800 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> Message-ID: <617F6A986BE2417B92FC9A187E1A4D71@spike> > ...On Behalf Of Spencer Campbell > Subject: Re: [ExI] Rights without selves (was: Nolopsism) > > JOSHUA JOB : > > A fetus is not yet a person, nor has ever been a person (as > it is not > > nor ever has been a rational conceptual entity). So it > cannot have any rights... > > You speak as though these issues have long since been > resolved! In reality, all you're giving me are opinions... I used to find the most hardcore right-to-lifers and argue that my sperm have rights. Their ova did as well, since both types of gametes fit every definition of "alive." Somehow that argument never went anywhere. {8^D spike From lacertilian at gmail.com Wed Feb 10 21:54:14 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 13:54:14 -0800 Subject: Re: [ExI] Mary passes the test. In-Reply-To: <4e3a29501002101048t3fa58f24t9c2b5e834559afdc@mail.gmail.com> References: <4e3a29501002101048t3fa58f24t9c2b5e834559afdc@mail.gmail.com> Message-ID: Will Steinberg : > The Scientists invent a fake color called "bread." They teach Mary all > there is to know about red and about bread. When asked which one is the > real color, Mary tells the scientists that it is obviously red, because her > mind is able to, inexplicably but surely, delve deeper into the > understanding of this. My point is that there is, at least to me, something > in the facts that will, however minutely, betray the idea of redness to her. This is actually within the realm of possibility, surprisingly enough, so your theory is subject to a certain degree of testing. http://en.wikipedia.org/wiki/Impossible_colors Can you imagine bluish-yellow or greenish-red? The latter is unusually easy for me, because I'm deuteranomalous. Without giving more than a few minutes of effort to it, I'm willing to say that I find the former completely inconceivable.
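Spencer's bluish-yellow question has a standard, though contested, explanation in opponent-process colour theory: blue and yellow are opposite poles of a single visual channel (as red and green are of another), so no stimulus can drive one channel both ways at once. A minimal sketch of that bookkeeping, with the encoding and sign conventions chosen arbitrarily for illustration:

# Opponent-process bookkeeping: a hue as demands on two signed channels.
# Convention (arbitrary): red-green > 0 means reddish, < 0 greenish;
# blue-yellow > 0 means bluish, < 0 yellowish.

def expressible(reddish=False, greenish=False, bluish=False, yellowish=False):
    """A hue is expressible iff it never demands both poles of one channel."""
    return not (reddish and greenish) and not (bluish and yellowish)

print(expressible(reddish=True, bluish=True))    # True: a magenta-ish hue
print(expressible(bluish=True, yellowish=True))  # False: "bluish-yellow"
print(expressible(reddish=True, greenish=True))  # False: "greenish-red"

On this model Spencer's report fits neatly: deuteranomaly weakens the red-green channel, the very one that "greenish-red" would have to violate.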
From spike66 at att.net Thu Feb 11 00:08:25 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 16:08:25 -0800 Subject: [ExI] too funny to not pass along In-Reply-To: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> References: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> Message-ID: In Washington DC today, it was so cold that the flashers are describing themselves. From nanite1018 at gmail.com Thu Feb 11 00:10:24 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 10 Feb 2010 19:10:24 -0500 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> Message-ID: <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> On Feb 10, 2010, at 4:07 PM, Spencer Campbell wrote: > You speak as though these issues have long since been resolved! In > reality, all you're giving me are opinions: facts which are currently > in contention. Not to say I don't agree with you, but we can't brush > the problem of potential consciousness under the table just because > neither of us happens to consider it a major problem. It simply > wouldn't be prudent. A fetus (at least in the first 6 or 7 months) cannot be conscious (in the human meaning, not like ants), nor can a brain-dead person; this has pretty much been proven. I don't see how anyone can contend that it is not the case. That's why I spoke as if it was fact. > Well, I think we've taken this argument as far as it can go then. > Clearly you refuse to even entertain the idea that rights could be > anything other than personal rights. I suppose it's an issue of > semantics. What if I say cars could have an explicitly-defined correct > way to be treated, that is, could be covered by a system of > "corrects"? > > Surely you would agree that, looking at a galaxy-spanning machine > devoid of anything remotely resembling rational self-aware thought, we > could arbitrarily suppose that it is "meant" to further some purpose > and from that assumption determine how well it is operating -- that > is, how correctly, or how rightly. The problem with "corrects" or your machine is that you are referencing consciousness and entities even when discussing them. I agree fully that if we were to look at some huge machine without anything resembling rational self-aware thought, we could figure out what it seems to do and determine how well it is doing it. Not a problem at all. The problem is that "we" have to determine how well it is doing it. "We" as in us rational, self-aware entities. Without that, anywhere along the line, I can't see how you can create the idea of a "correct" or a "right" or a "purpose" or a "meaning." You've got to have someone who can actually evaluate things, or else there is no meaning, value, or purpose to anything at all. In other words, you need rational self-aware entities, or there can be no idea of meaning or purpose, and thus no rights. Without assuming selves exist (as we perceive them), I don't see how you can get anywhere. I hope I've made my point more clear (I don't think I expressed my position in this paragraph very clearly before).
Joshua Job nanite1018 at gmail.com From pharos at gmail.com Thu Feb 11 00:28:37 2010 From: pharos at gmail.com (BillK) Date: Thu, 11 Feb 2010 00:28:37 +0000 Subject: Re: [ExI] too funny to not pass along In-Reply-To: References: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> Message-ID: On Thu, Feb 11, 2010 at 12:08 AM, wrote: > In Washington DC today, it was so cold that the flashers are describing > themselves. > > You'll probably like this as well, then. BillK From emlynoregan at gmail.com Thu Feb 11 01:09:54 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 11 Feb 2010 11:39:54 +1030 Subject: [ExI] google buzz Message-ID: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Hi all, I'm trolling for contacts on Google Buzz. Any gmailers want to put their hands up as buzzers / potential buzzers? -- Emlyn http://www.productx.net - free rss to email gateway, zero signup http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From pharos at gmail.com Thu Feb 11 01:23:15 2010 From: pharos at gmail.com (BillK) Date: Thu, 11 Feb 2010 01:23:15 +0000 Subject: Re: [ExI] google buzz In-Reply-To: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Message-ID: On 2/11/10, Emlyn wrote: > I'm trolling for contacts on Google Buzz. Any gmailers want to put > their hands up as buzzers / potential buzzers? > > The best way is to look in your contacts list, or on exi-chat, for gmail addresses, then set your Buzz to 'Follow' those you would like to hear from. Some of them, in turn, might start 'Following' you. This then gets into the Facebook type of etiquette problems. If nobody wants to 'Follow' me should I feel insulted or relieved? If you 'Un-follow' someone, does that make an enemy for life? :) BillK From emlynoregan at gmail.com Thu Feb 11 01:27:57 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 11 Feb 2010 11:57:57 +1030 Subject: Re: [ExI] google buzz In-Reply-To: References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Message-ID: <710b78fc1002101727q7fd1cf7eu3f1da0cc16c18848@mail.gmail.com> On 11 February 2010 11:53, BillK wrote: > On 2/11/10, Emlyn wrote: >> I'm trolling for contacts on Google Buzz. Any gmailers want to put >> their hands up as buzzers / potential buzzers? >> >> > > The best way is to look in your contacts list, or on exi-chat, for > gmail addresses, then set your Buzz to 'Follow' those you would like > to hear from. Yeah, gmail seems to have chosen a bunch of people from my contacts already, I'm not really sure how/why. > > Some of them, in turn, might start 'Following' you. It's more like twitter than facebook in that way; it's an asymmetrical relationship. > > This then gets into the Facebook type of etiquette problems. > If nobody wants to 'Follow' me should I feel insulted or relieved? > If you 'Un-follow' someone, does that make an enemy for life? :) > > BillK I don't have all that many enemies at the moment (that I know of), so I'll risk it.
-- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From spike66 at att.net Thu Feb 11 01:32:23 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 17:32:23 -0800 Subject: [ExI] too funny to not pass along In-Reply-To: References: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> Message-ID: <639A19F962994900B075B5E001256BBA@spike> > ...On Behalf Of BillK > Subject: Re: [ExI] too funny to not pass along > > On Thu, Feb 11, 2010 at 12:08 AM, wrote: > > In Washington DC today, it was so cold that the flashers are > > describing themselves. > > > > > > You'll probably like this as well, then. > > snowstorms-freak-out-dc/> > > > BillK Thanks BillK! The whole thing is still really bugging me: the perfect storm in DC, the collapse of the Copenhagen conflab, the new embarrassments coming nearly every day for the world climate experts. I had a great niche business ready to go, and all this is spoiling it. I was going to be a carbon credit provider for global warming skeptics. Oh it would have been fine, and I would have made a cubic buttload of money. I could have a special line of products, such as a square meter of land, growing something, anything. Then if a city slicker wanted to drive a pickup truck and someone gave her a bunch of trash about it, she could pull out the certificate and say "Hey, I own a farm and this is a farm truck. Do yooou own a farm?" Just think of all the stuff we could sell. Looks to me like I could earn subsidies for growing anything carbon-intensive instead of cash crops for instance. It looks to me like the wheels are coming off of the inconvenient truth at a remarkable pace. I fear there will be no more fortunes to be made here. spike From lacertilian at gmail.com Thu Feb 11 01:31:52 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 17:31:52 -0800 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> Message-ID: JOSHUA JOB : > A fetus (at least in the first 6 or 7 months) cannot be conscious (in the human meaning, not like ants), nor can a brain dead person, this has pretty much been proven. I don't see how anyone can contend that it is not the case. That's why I spoke as if it was fact. Nevertheless, many people make precisely that contention. It isn't an entirely sane position, but it does imply one good point: it's pretty much impossible to draw a precise line between conscious and not-conscious, as indicated by your 6 or 7 months figure. Two weeks is a huge margin of error for just about everything short of geology and astronomy. Sometimes history and biology, depending on how far back you go. > The problem is that "we" have to determine how well it is doing it. "We" as in us rational, self-aware entities. Without that, anywhere along the line, I can't see how you can create the idea of a "correct" or a "right" or a "purpose" or a "meaning." You've got to have someone who can actually evaluate things, or else there is no meaning, value, or purpose to anything at all. In other words, you need rational self-aware entities, or there can be no idea of meaning or purpose, and thus no rights. Without assuming selves exist (as we perceive them), I don't see how you can get anywhere. I hope I've made my point more clear (I don't think I expressed my position in this paragraph very clearly before). 
> If I refuse to entertain the thought, it is because the thought cuts itself off at the knees, as far as I can tell, haha. Yes: this is the best description of your point yet, I think. Nevertheless I still disagree. To say that judgements made by rational, self-aware entities are more valid than those made by mechanistic automatons is entirely arbitrary. I suppose it derives from the claim that an internal experience of deeming something right or wrong is somehow important, but that experience is fleeting and currently inaccessible to all but the mind of origin. Morality and ethics are methods of categorizing actions in the world. Nothing more, nothing less. There isn't anything especially conscious about categorization. http://www.youtube.com/watch?v=ioylPVTwvV4 That's right: coin sorting machine. No philosophical argument can stand up in the face of a coin sorting machine. I'll write up that Napoleon thing, now. Actually I think it will be rather short. From lacertilian at gmail.com Thu Feb 11 01:52:41 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 17:52:41 -0800 Subject: [ExI] The Napoleon problem Message-ID: Suppose you are a doctor in an insane asylum. It's your typical insane asylum, all things considered, hailing from an era before anyone thought to call such things "mental health facilities". Electroshock treatment is the gold standard. You've helped a good number of patients that way. One, however, has proved challenging. This is a man who, for at least as long as he has been capable of speech, has firmly believed that he is Napoleon Bonaparte. He claims to have traveled to the future during the battle of Waterloo, leaving a time-double in his place. Naturally the time-double is a philosophical zombie, he explains, but that is neither here nor there. There has never been a time when the man did not believe he was Napoleon, and today he died from complications during a drastic (and unsuccessful) treatment. He looked nothing like Napoleon, and hardly spoke a word of French. When informed of these facts, he consistently shrugged them off as absurd. He had memorized every known detail of Napoleon's life, to the point where he might have been a better authority on the subject than Napoleon himself ever was. His delusional belief has never wavered to the slightest degree, and now we know for certain that it never will. Having declared time of death, you're left with one niggling question in the back of your mind: Was this man Napoleon? From nanite1018 at gmail.com Thu Feb 11 01:54:13 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 10 Feb 2010 20:54:13 -0500 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> Message-ID: <80F2A3B3-E11C-4DAA-89D1-A0C60092F02F@GMAIL.COM> On Feb 10, 2010, at 8:31 PM, Spencer Campbell wrote: > Yes: this is the best description of your point yet, I think. > Nevertheless I still disagree. To say that judgements made by > rational, self-aware entities are more valid than those made by > mechanistic automatons is entirely arbitrary. I suppose it derives > from the claim that an internal experience of deeming something right > or wrong is somehow important, but that experience is fleeting and > currently inaccessible to all but the mind of origin. > > Morality and ethics are methods of categorizing actions in the world. > Nothing more, nothing less.
There isn't anything especially conscious > about categorization. Well, while morality and ethics are categorizations of actions, I think they are a special type. They are "this is good" or "this is bad." Now you need "good or bad for or to who or what?" I don't see any answer to that question besides "to 'people.'" Or, perhaps, living things. I don't see how something can be good or bad to something that doesn't have an awareness of good or bad, or even an alternative that changes anything. A car, without a person, is useless, it has no meaning, no value. It certainly doesn't value itself, since "it" is incapable of generating evaluations, and doesn't employ de se operators (which I think follows from the paper that started this all off). So it doesn't matter to the car, or anything else (because nothing can "matter" without an evaluator), whether it gets blown up, or rusts, or runs out of gas. Whereas, to life, it is really important. It determines whether it continues to exist or not. A car without evaluating entities is just a piece of matter, and matter just changes forms, it doesn't wink out of existence (I'm including energy as a form of matter). Without something which can say "this is good or bad for/to 'me,'" I'm not sure how you would build something that can say something is good or bad in a non-arbitrary fashion. I say in a non-arbitrary fashion, because while you can build something which says "it is wrong for jellyfish to vomit", it really makes no difference to anything that a jellyfish vomits. But if you say "going and starting to kill people is bad because it hurts everyone's ability to live, including me," then you have an objective basis for that statement: your existence is threatened by people murdering each other. It actually makes a difference whether you live or die, because you can live or die, you can wink out of existence, even if the matter you are composed of is never destroyed. I can't wait to read the Napoleon argument. I'm sure it will, at least, be interesting. Joshua Job nanite1018 at gmail.com From nanite1018 at gmail.com Thu Feb 11 02:01:36 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 10 Feb 2010 21:01:36 -0500 Subject: Re: [ExI] The Napoleon problem In-Reply-To: References: Message-ID: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> On Feb 10, 2010, at 8:52 PM, Spencer Campbell wrote: > Was this man Napoleon? That was underwhelming, haha. Of course not. You have huge amounts of evidence to show he was born, him growing up, etc., and so obviously he is not actually Napoleon. He is also explicitly said not to speak French, nor look like him. Even if he had read everything Napoleon had ever written, every word written about him, there would still be information he did not know (since obviously, every single experience he had ever had was not written down, and even if it was, there would necessarily be detail missing). So he doesn't even have all the knowledge Napoleon had. So while he might have believed whatever he liked, he certainly was not the man Napoleon Bonaparte, Emperor of France. He was not born at the same time, nor had all the memories and experiences of the man. An expert on him, certainly. But not the man himself. I don't mean to sound, well, mean, but this strikes me as not a problem at all. Am I missing something?
Joshua Job nanite1018 at gmail.com From lacertilian at gmail.com Thu Feb 11 02:26:58 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 18:26:58 -0800 Subject: Re: [ExI] The Napoleon problem In-Reply-To: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> References: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> Message-ID: JOSHUA JOB : > I don't mean to sound, well, mean, but this strikes me as not a problem at all. Am I missing something? Probably not. I was fifteen or something the last time I used it, and now it seems a little trite even to me. There's always a chance, though, so I can elaborate. It's basically an epistemological question: how do we KNOW that he isn't Napoleon? > You have huge amounts of evidence of him being born, growing up, etc., and so obviously he is not actually Napoleon. A hole in the experiment, easily patched: of course he believes he regressed to an embryo during the jump through time, shoving aside the soul that otherwise would have occupied it. Not the heart of the problem, though, either way. There is an assumption, built in, that he and Napoleon really were different people. No weird magic took place. Even having established that, there is a (non-intuitive) way to view the question "is he Napoleon" so that it has a very interestingly uncertain answer. > He is also explicitly said not to speak French, nor look like him. Even if he had read everything Napoleon had ever written, every word written about him, there would still be information he did not know (since obviously not every single experience he ever had was written down, and even if it were, there would necessarily be detail missing). So he doesn't even have all the knowledge Napoleon had. And how, precisely, do we know he doesn't? He knows more than we know. If anyone on Earth knew every detail of Napoleon's life, it would be him. > So while he might have believed whatever he liked, he certainly was not the man Napoleon Bonaparte, Emperor of France. He was not born at the same time, nor did he have all the memories and experiences of the man. An expert on him, certainly. But not the man himself. It's much more likely that your knowledge of Napoleon is false than it is that his knowledge of Napoleon is false. He is not only an expert on Napoleon, but a super-expert; no one can answer Napoleon-related questions with anywhere near the precision or accuracy that he can, even if he blithely confabulates unverifiable details to do so. So, when it comes to the question, "who is Napoleon", who are you going to believe? You or him? The obvious answer is you, because the man is insane. But one has to wonder what makes this Napoleon question so fundamentally different from every other Napoleon question, so that no degree of knowledge about Napoleon is the least bit helpful in answering it. From lacertilian at gmail.com Thu Feb 11 02:53:16 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 18:53:16 -0800 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <80F2A3B3-E11C-4DAA-89D1-A0C60092F02F@GMAIL.COM> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> <80F2A3B3-E11C-4DAA-89D1-A0C60092F02F@GMAIL.COM> Message-ID: JOSHUA JOB : > Without something which can say "this is good or bad for/to 'me,'" I'm not sure how you would build something that can say something is good or bad in a non-arbitrary fashion.
I say in a non-arbitrary fashion, because while you can build something which says "it is wrong for jellyfish to vomit", it really makes no difference to anything that a jellyfish vomits. We agree. > But if you say "going and starting to kill people is bad because it hurts everyone's ability to live, including me," then you have an objective basis for that statement: your existence is threatened by people murdering each other. It actually makes a difference whether you live or die, because you can live or die, you can wink out of existence, even if the matter you are composed of is never destroyed. We disagree. If it does not make a difference when a jellyfish is made to vomit, then it does not make a difference when you are made to die. I'm going to take the materialist route here: both of these things are just thermodynamical processes, at root, and thus equally important in metaphysical terms. It is arbitrary to say that death (or anything else) is bad, no matter how rational your basis for saying so is. To a subjectivist, an objective basis is just a more-easily-rationalized subjective basis. Both are imposed on reality (by us self-aware folk, if you insist); they are not in any way inherent to reality itself, no matter how neatly they appear to fit. All of this becomes overwhelmingly clear when you accept the premise that (a) you can't wink out of existence because (b) "you" don't exist to begin with. I hinted at an embryonic theory in an earlier post to this list. I'm now calling it the ontological plane. There's a real-imaginary axis, which you could also call a physical-virtual axis, and there's an existent-nonexistent axis. My computer is real and exists; if I made my computer emulate itself or another computer then the virtual computer would be imaginary, but it would still exist. Among the whole field of things possible or impossible, the vast majority are both imaginary and nonexistent. According to this theory, I, in the sense of my quintessential self and not my physical manifestation, am one of the very few things that falls in the "real but nonexistent" quadrant. There is an imaginary symbol, "I", and that exists whenever it's invoked in my brain; but it points to a thing, me, which doesn't. I am regaining hope in the potential for this argument to become productive. What I've written here is far more clear and compelling to me, at least, than the Napoleon problem ever was. One of us may just experience a change of mind, one of these days! Hint hint! (I am implying that it will be you. That is the hint.) From spike66 at att.net Thu Feb 11 03:01:08 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 19:01:08 -0800 Subject: [ExI] google buzz In-Reply-To: References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Message-ID: > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of BillK > ... > > Some of them, in turn, might start 'Following' you. > > This then gets into the Facebook type of etiquette problems. > If nobody wants to 'Follow' me should I feel insulted or relieved? > If you 'Un-follow' someone, does that make an enemy for life? :) > > BillK Facebook has a lot of creepy stuff like that, which is why I flatly refused to play. This is ME talking, Mr. Openness, the one with a G rated life and few if any former girlfriends. What bothered me when my wife started using it is that you get all these requests from old acquaintances wanting to be friends. 
Well, in the meat world I never turn away anyone wanting to be friends, never. But in the e-world, you almost need to turn away most of these requests, for the lack of time to write. You might get a bunch of people who read your online comments who you never met and are not sure you want to. Then if you say no or ignore their request you feel like a heel. These please-be-my-friend people remind me of the Strangers of America: http://www.theonion.com/content/news/nations_strangers_decry_negative spike From emlynoregan at gmail.com Thu Feb 11 03:41:53 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 11 Feb 2010 14:11:53 +1030 Subject: Re: [ExI] google buzz In-Reply-To: References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Message-ID: <710b78fc1002101941r444649c3j6f2c7262939e1e6f@mail.gmail.com> On 11 February 2010 13:31, spike wrote: > > >> -----Original Message----- >> From: extropy-chat-bounces at lists.extropy.org >> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of BillK >> ... >> >> Some of them, in turn, might start 'Following' you. >> >> This then gets into the Facebook type of etiquette problems. >> If nobody wants to 'Follow' me should I feel insulted or relieved? >> If you 'Un-follow' someone, does that make an enemy for life? :) >> >> BillK > > Facebook has a lot of creepy stuff like that, which is why I flatly refused > to play. This is ME talking, Mr. Openness, the one with a G rated life and > few if any former girlfriends. What bothered me when my wife started using > it is that you get all these requests from old acquaintances wanting to be > friends. Well, in the meat world I never turn away anyone wanting to be > friends, never. But in the e-world, you almost need to turn away most of > these requests, for the lack of time to write. You might get a bunch of > people who read your online comments who you never met and are not sure you > want to. Then if you say no or ignore their request you feel like a heel. > > These please-be-my-friend people remind me of the Strangers of America: > > http://www.theonion.com/content/news/nations_strangers_decry_negative > > spike The friending thing is odd, but it's a misnomer. "Will you be my friend?" should be "Will you agree to form an edge between our nodes?" As to weird unknown freaks reading your comments... how many years have you been posting on this list???? -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From msd001 at gmail.com Thu Feb 11 03:45:36 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 10 Feb 2010 22:45:36 -0500 Subject: Re: [ExI] google buzz In-Reply-To: <710b78fc1002101941r444649c3j6f2c7262939e1e6f@mail.gmail.com> References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> <710b78fc1002101941r444649c3j6f2c7262939e1e6f@mail.gmail.com> Message-ID: <62c14241002101945r2ca74ca2pf869cb266dafd0c2@mail.gmail.com> On Wed, Feb 10, 2010 at 10:41 PM, Emlyn wrote: > As to weird unknown freaks reading your comments... how many years > have you been posting on this list???? what of the well known freaks reading your comments?
(you know who you are :) From steinberg.will at gmail.com Thu Feb 11 03:55:42 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 10 Feb 2010 22:55:42 -0500 Subject: [ExI] The Napoleon problem In-Reply-To: References: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> Message-ID: <4e3a29501002101955u456edc1cuaa2154e49b49df8a@mail.gmail.com> I think Spencer will soon tell us why our believing that the man is not Napoleon is irreconcilable with some other belief many of us hold. From steinberg.will at gmail.com Thu Feb 11 03:56:49 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 10 Feb 2010 22:56:49 -0500 Subject: [ExI] The Napoleon problem In-Reply-To: <4e3a29501002101955u456edc1cuaa2154e49b49df8a@mail.gmail.com> References: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> <4e3a29501002101955u456edc1cuaa2154e49b49df8a@mail.gmail.com> Message-ID: <4e3a29501002101956s4ad74326j1a5adf642864d6ba@mail.gmail.com> Oh, I suppose that sort of happened. I am ignorant. From spike66 at att.net Thu Feb 11 05:05:17 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 21:05:17 -0800 Subject: Re: [ExI] google buzz In-Reply-To: <62c14241002101945r2ca74ca2pf869cb266dafd0c2@mail.gmail.com> References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com><710b78fc1002101941r444649c3j6f2c7262939e1e6f@mail.gmail.com> <62c14241002101945r2ca74ca2pf869cb266dafd0c2@mail.gmail.com> Message-ID: <4C147272924742F0BA9060FCF41EDA0F@spike> > ...On Behalf Of Mike Dougherty > Subject: Re: [ExI] google buzz > > On Wed, Feb 10, 2010 at 10:41 PM, Emlyn wrote: > > As to weird unknown freaks reading your comments... how many years > > have you been posting on this list???? > > what of the well known freaks reading your comments? > (you know who you are :) Ja, well a friend is a person who knows how you really are, and likes you anyway. Regarding the freaks, known and unknown, who hang out on ExI-chat, I have to admit, anyone who wants to hang out with me, I am not sure I want to hang out with them. It is so hard to stay away from the riff-raff when one is riff-raff oneself. My sister-in-law paid me the best compliment ever. She sent me an article from one of her many women's magazines called 100 Things That Are Getting Better. She said it reminded her of my outlook on life, since pretty much everything I really care about is good and getting better all the time. So this magazine had some stuff in there I would never have thought of, as well as things I never heard of, such as shapewear. What the hell is shapewear? That was number 2 on their list of things getting better, and I haven't a clue what it is. Their number 1 was floral arrangements. Huh? Some of the more notable comments:
3) your chances of visiting the moon. Cool.
4) apps to help you lose weight. OK, but my BMI would make me too thin to qualify to be a fashion model, so let us move on.
5) Polyester. Something to do with shapewear perhaps?
6) TV dinners.
8) Our lungs. They talk about the fact that smoking is way down, which means second hand smoking is way down.
13) catching bad guys. Once needed a tissue sample the size of a quarter dollar coin. Now requires only a few cells.
14) e-cards. Absurd of course, no mention of e-MAIL which is how everyone communicates now.
18) dads. They help with the kids now.
Cool thanks for noticing. Four out of five children surveyed still prefer the mom, but the father figure at least exists in their lives.
19) robots. Agreed, cool.
20) Hillary Clinton. Hmmm, no comment.
So OK I google, I find shapewear is girdles and such. I haven't a clue why this magazine thinks that is improving, or why it matters, but perhaps I just don't understand, and I am at a total loss why this is their big number 1 on the list of things getting better. So now, I ask you, those not involved in the Searle discussion, what things are getting better and why? I offer these few:
1) Computers, in every way.
2) software generally, in terms of availability, capability, stability, even price.
3) Cars.
4) Motorcycles handle waaay better than they used to, better and faster, even if not cheaper.
5) Phones.
6) Availability of useful information in general, the ease of finding things out. In retrospect, I might move this to number one.
7) Health care: neighbor had a heart attack, parameds here within minutes, connected her to electronic devices to communicate with two cardiologists at Stanford Hospital. She was better off in the back of that ambulance than she would have been in the most advanced hospital even just 30 yrs ago. They gave her all the right meds before they even started towards the ER. Little or no permanent heart damage.
8) The environment. It is much cleaner than it used to be.
What are your thoughts? spike From stathisp at gmail.com Thu Feb 11 11:28:56 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 11 Feb 2010 22:28:56 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <557161.41950.qm@web36505.mail.mud.yahoo.com> References: <557161.41950.qm@web36505.mail.mud.yahoo.com> Message-ID: On 11 February 2010 00:41, Gordon Swobe wrote: > Computers can reproduce just about any pattern. But a computerized pattern of a thing does not equal the thing patterned. > > I can for example reproduce the pattern of a tree leaf on my computer. That digital leaf will not have the properties of a real leaf. No matter what natural things we simulate on a computer, the simulations will always lack the real properties of the things simulated. > > Digital simulations of things can do no more than *simulate* those things. It mystifies me that people here believe simulations of organic brains should somehow qualify for an exception to this rule. I'll only respond to this as John Clark has responded to your other points. Would you say that a robot that seems to walk isn't really walking because it is not identical to, and therefore lacks all of the properties of, a human walking? The argument is not that a digital computer is *identical* with a biological brain, otherwise it would be a biological brain and not a digital computer. The argument is that the computer can reproduce the consciousness of the brain if it is able to reproduce the brain's behaviour. If it can't, you can't explain what would happen instead, and your solution is to advise that I change the question to one more to your liking. > Neuroscientists should someday have at their disposal perfect digital simulations of brains to use as tools for doing computer-simulated brain surgeries. But according to you and some others, those digitally simulated brains will have consciousness and so might qualify as real people.
This would mean medical students will have access to computer simulations of hearts to do simulated heart surgeries, but they won't have access to the same kinds of computerized tools for doing simulated brain surgeries. Those darned computer simulated brains won't sign the consent forms. > > People like me will want to do the simulated surgeries anyway. The Society for the Prevention of Simulated Cruelty to Simulated Brains will oppose me. What if a race of robots landed on Earth and decided to do cruel experiments on humans, on the assumption that mere organic matter couldn't have a mind or feelings, despite behaving as if it did? -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Feb 11 14:06:43 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 06:06:43 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <896407.10281.qm@web36503.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: > Would you say that a robot that seems to walk isn't really > walking because it is not identical to, and therefore lacks all of > the properties of, a human walking? My point here concerns digital simulations of brains, not robots. > The argument is that the computer can reproduce the consciousness of the > brain if it is able to reproduce the brain's behaviour. Characters in video games behave as if they have consciousness. Seems to me that digitally simulated brains will also behave as if they have consciousness, but that they will have no more consciousness than do those characters in video games. I don't play video games myself but I've known children who did. They often spoke of the characters in their video games as if those characters really existed as conscious entities. Then they matured. -gts From gts_2000 at yahoo.com Thu Feb 11 14:37:35 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 06:37:35 -0800 (PST) Subject: [ExI] evolution of consciousness In-Reply-To: Message-ID: <967863.26874.qm@web36503.mail.mud.yahoo.com> --- On Wed, 2/10/10, Spencer Campbell wrote: > This is actually a very good point. I don't have clearly > articulate beliefs regarding the relationship between consciousness > and evolution, myself, but clearly Gordon does. I think consciousness aids and enhances intelligence, something like the way a flashlight helps one move about in the dark. Unconscious animals like amoebas exhibit a low level of intelligent behavior. Those instinctive behaviors are encoded by DNA. In higher organisms we see nervous systems, and we see how the resulting consciousness increases the intelligence and flexibility of the organisms. It seems probable to me that conscious intelligence involves less biological overhead than instinctive unconscious intelligence, especially when considering complex behaviors such as social behaviors. Perhaps nature selected it for that reason only. -gts From gts_2000 at yahoo.com Thu Feb 11 15:10:29 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 07:10:29 -0800 (PST) Subject: [ExI] Mary passes the test. Message-ID: <155252.92470.qm@web36504.mail.mud.yahoo.com> --- On Wed, 2/10/10, Will Steinberg wrote: > "Mary the Color Scientist" is > often used as an iron defense of the magicalness or platonic > reality of qualia. To some, it is enough to say that Mary > learns something COMPLETELY new about the color red when she > sees it, giving the sense that, even for all fact and > physicality, something is missing until Mary sees that > color.
Why is this thesis so readily expected? I think > that qualia, however weird or outside of normal fact, are, at > heart, inextricable from those facts. For example: > > Upon emerging from her room, The Scientists present Mary > with two large colored squares. One is red, and one is blue. I > wholeheartedly believe Mary will be able to tell which one is red. > When asked why, Mary says "I had a feeling that's what it would > look like." > > I would say that Mary, after learning about red and its > neural pathways and physical properties, is able to form > some conception of the color in her mind's eye, > regardless of whether it has been presented to her, because > red is "in there" somewhere. This is very interesting, Will. Most curious to me is that you seem to want to refute a supposed argument for what you call the "platonic reality of qualia", but you do so with an argument that Plato himself would likely agree with. Plato taught that we never learn anything important that we don't already in some sense know, just as Mary in your story in some sense knows the color of red before seeing it. The Greek word for "truth" is "aletheia" which one can translate literally as "un-forgetting" or "remembering". -gts From bbenzai at yahoo.com Thu Feb 11 16:54:05 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 11 Feb 2010 08:54:05 -0800 (PST) Subject: [ExI] Film Script! In-Reply-To: Message-ID: <547545.73654.qm@web113605.mail.gq1.yahoo.com> Stathis Papaioannou wrote in a thread that shall remain nameless: > What if a race of robots landed on Earth and decided to do > cruel > experiments on humans, on the assumption that mere organic > matter > couldn't have a mind or feelings, despite behaving as if it > did? Yay, Film script! It has everything to be a successful 'thinking person's' sf film that is nevertheless popular: The alien robots are advanced AIs (not too advanced of course, or they wouldn't bother invading), plenty of peril, gore and anguish, maybe nanotech dissection devices that they use to slice and dice and otherwise torture the hapless philosophical zombie humans, in an attempt to find out how we can function, when we clearly possess just an organic simulation of 'real' consciousness. The hero could be a dashing scientist who uses the awesome power of his human brain to save the pretty young lab assistant who is secretly in love with him, when he notices the power lead coming from the flying saucer... Yes, it's just The War of the Worlds/Independence Day all over again, but with a philosophical twist. It could end by leaving you wondering: Are we actually the zombies, and have we just unplugged the only *really* conscious beings in the universe??? Oh, yes, we could widen the audience demographic by having the pretty young lab assistant be a transsexual. And throw in a couple of lesbian bishops, only one of whom survives the alien robots' grisly experiments, which incidentally turn her into an atheist. This could be bigger than "Plan 9 from Outer Space"! Ben Zaiboc From jonkc at bellsouth.net Thu Feb 11 17:00:32 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 11 Feb 2010 12:00:32 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <967863.26874.qm@web36503.mail.mud.yahoo.com> References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: <3B7B32F3-20F4-4ABA-A20E-7837B734CCFF@bellsouth.net> Since my last post Gordon Swobe has posted 3 times.
> I think consciousness aids and enhances intelligence, something like the way a flashlight helps one move about in the dark. I've said this many many times before, but that doesn't prevent it from being true: despite believing the above, in a magnificent demonstration of doublethink, Swobe also believes that a behavioral demonstration like the Turing test cannot detect consciousness. > It seems probable to me that conscious intelligence involves less biological overhead than instinctive unconscious intelligence Then the logical implication is crystal clear: it's harder to make an unconscious intelligence than a conscious intelligence. So if you encounter an intelligent machine your default assumption should be that it is conscious. > especially when considering complex behaviors such as social behaviors. Perhaps nature selected it for that reason only. So if Swobe met a robot with greater social intelligence than he has would he consider it conscious? No of course he would not because, because,.... well just because. Actually that is what Swobe would say today but I don't think that's what would really happen. If someone ever met up with such a machine I think it would understand us so well, better than we understand ourselves, that it could convince anyone to believe in anything and could quite literally charm the pants off us. As Swobe points out, even today characters in video games seem to be conscious to some; a robot with a Jupiter Brain would convince even the most sophisticated among us. We would believe the robot was conscious even if we couldn't prove it. I have the same belief regarding Gordon Swobe and the same lack of a proof. John K Clark From natasha at natasha.cc Thu Feb 11 17:29:54 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 11 Feb 2010 11:29:54 -0600 Subject: [ExI] Blue Brain Project. In-Reply-To: <4B7079D4.1090103@satx.rr.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> <4B7079D4.1090103@satx.rr.com> Message-ID: <9B95A2D531B8453685C11772840F74F6@DFC68LF1> Yes. This is from a month or so ago: http://www.thoughtware.tv/videos/watch/4651-China-Radio-International-Interview-cri-Vita-more-De-Garis I think he explains what he is up to in this show. Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick Sent: Monday, February 08, 2010 2:54 PM To: ExI chat list Subject: Re: [ExI] Blue Brain Project. On 2/8/2010 2:25 PM, John Clark quoted: > "Once the team is able to model a complete rat brain--that should > happen in the next two years--Markram will download the simulation into > a robotic rat, so that the brain has a body. He's already talking to a > Japanese company about constructing the mechanical animal. A decade or more ago, Hugo de Garis was promising a robot CAM-brain puddytat. He lost several sponsors along the way. Anyone know if he's doing anything along those lines today? (No, I've never heard of Google--how does that work? His own site informs us excitedly of things due to happen in 2006 and 2007...)
Damien Broderick From thespike at satx.rr.com Thu Feb 11 18:01:03 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 11 Feb 2010 12:01:03 -0600 Subject: [ExI] Film Script! Message-ID: <4B7445DF.3040800@satx.rr.com> Stathis wrote: > What if a race of robots landed on Earth and decided to do > cruel experiments on humans, on the assumption that mere organic > matter couldn't have a mind or feelings, despite behaving as if it > did? This is one of the drivers in my two linked novels GODPLAYERS and K-MACHINES. The AIs despise humans as nothing more than organic number-crunchers, insufficiently passionate. Damien Broderick From lacertilian at gmail.com Thu Feb 11 18:38:13 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 10:38:13 -0800 Subject: [ExI] The Napoleon problem In-Reply-To: <4e3a29501002101956s4ad74326j1a5adf642864d6ba@mail.gmail.com> References: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> <4e3a29501002101955u456edc1cuaa2154e49b49df8a@mail.gmail.com> <4e3a29501002101956s4ad74326j1a5adf642864d6ba@mail.gmail.com> Message-ID: Will Steinberg : >I think Spencer will soon tell us why our believing that the man is not Napoleon is irreconcilable with some other belief many of us hold. (later) > Oh, I suppose that sort of happened. I am ignorant. Aren't we all, Will? Aren't we all. (I'm not sure I find my own argument strong enough to warrant the word "irreconcilable" coming up, but I am proud nonetheless that it did.) From steinberg.will at gmail.com Thu Feb 11 19:05:43 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 11 Feb 2010 14:05:43 -0500 Subject: [ExI] Very long lifespans and accompanying mental milieus Message-ID: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> When human lifespans lengthen, new mental paradigms are born. This tends to occur when a sufficiently large portion of the population reaches a certain age for that age to be common. For example, humans of the past would have a hard time understanding both the jaded, crotchety old man and the freaking-out fifty year old, simply because anyone who lived to be this old was usually revered for their luck. Very old people, when there were not a lot of very old people, were Methuselahs. But as medicine advanced, being these ages has become increasingly more common, and as the uniqueness associated with these ages has disappeared, a slew of mental crises have developed. Now, some, as in the case of the elderly, are physiological--an old man is cranky because he is arthritic or forgetful. Yet there are aspects of aging that cannot be associated in this manner, and instead lie completely in the mental realm. The mid-life crisis is an example of a phenomenon that is distinctly new. In the next fifty or so years (and I hope this is overestimating,) there is a good chance that human lifespans will lengthen significantly, perhaps going so far as to double. Now, many of us simply think to ourselves: "More time to think! More time to work!" But who knows what happens when the metaprograms of the brain reshuffle connections for far longer than nature "intended"?
Though it is fine, for now, to treat ourselves as having overcome nature and evolution, we must remember that consciousness and intelligence were successful for producing offspring, which are produced relatively early in life, and have far less of a connection to the later years. Is this problem one of value? What if, at one hundred and fifty years of age, man is suddenly compelled to end his life? What if longer life will dictate to us the most obvious example of human pathos--that, for all we love about ourselves, the buck stops for the brain sooner than we might have hoped? It seems in this case that the recent discussions on mental being that have overwhelmed the list are indeed incredibly important, if only for the fact that the mental processes of humans must be understood in order to design even better processes that don't hit a wall after extended periods of time. This is Transhumanism, not in the often-held idea of letting "humanness" transcend our current physical limitations, but in scrapping many aspects of that humanness entirely in favor of something unfathomable and better. There is a good chance that we will, at some point, be faced with the problem that the confusing tangle of yarn in our heads produced by evolution is simply not good enough to deal with whatever comes next. And then what? From max at maxmore.com Thu Feb 11 19:13:29 2010 From: max at maxmore.com (Max More) Date: Thu, 11 Feb 2010 13:13:29 -0600 Subject: [ExI] Very long lifespans and accompanying mental milieus Message-ID: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> Will Steinberg wrote: >The mid-life crisis is an example of a phenomenon that is distinctly new. But how real is it? Just because it's become such a familiar phrase doesn't mean it's particularly correct. I recently came across some doubts, for instance: Mid-Life Crisis: An Outdated Myth? http://www.foxnews.com/story/0,2933,584133,00.html Max From steinberg.will at gmail.com Thu Feb 11 20:16:48 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 11 Feb 2010 15:16:48 -0500 Subject: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> References: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> Message-ID: <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> Well then you may recast my argument, filling in "mid-life-sense-of-pride-and-fulfillment" for "mid-life crisis;" it's still something that people a few centuries ago would not understand very well. I still think we will surely face unprecedented places of the mind as lifespans grow longer. From lacertilian at gmail.com Thu Feb 11 21:25:02 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 13:25:02 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <896407.10281.qm@web36503.mail.mud.yahoo.com> References: <896407.10281.qm@web36503.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Characters in video games behave as if they have consciousness. No, they don't. Not even close. Not even REMOTELY close. To someone who actually grew up playing video games, this is a completely outrageous statement that can hardly even be addressed for how absurd it is.
All I have to do to refute you is point at the most sophisticated simulation of humanity ever to appear in a computer game, and say, look. Look at it. It can only converse by following a script, and beyond that it has no apparent reaction to anything short of being shot. Half-Life 2 famously had a mind-bogglingly realistic cast and immersive world. Basically, this means that the "people" in it would turn their heads to look at you if you stood within a defined radius of them. They also had excellent pre-programmed facial expressions to go with their dialogue, recorded from actual human beings. (A toy sketch of how little code such a trigger script amounts to appears below, after the next message.) Gordon Swobe : > I don't play video games myself but I've known children who did. They often spoke of the characters in their video games as if those characters really existed as conscious entities. Then they matured. Exactly the same argument applies to characters in movies. Right now, the game industry is passing through a movie phase. The latest and greatest games, at the height of technology, are just interactive stories. Allowing for a branching storyline, including more than one possible ending, is to this day considered the absolute cutting edge of the medium. It's been that way for decades. This is pretty much the reason I no longer keep up with the high-end games. If you are making money, you are not making games; you are making movies, and I don't much care for movies. Gordon, if you are unable to imagine any simulation of life more sophisticated than that in a computer game (or, much worse, a console game), then you are not qualified to participate in this discussion. Please tell me this is not the case. From lacertilian at gmail.com Thu Feb 11 21:40:58 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 13:40:58 -0800 Subject: [ExI] evolution of consciousness In-Reply-To: <967863.26874.qm@web36503.mail.mud.yahoo.com> References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Unconscious animals like amoebas exhibit a low level of intelligent behavior. Those instinctive behaviors are encoded by DNA. In higher organisms we see nervous systems, and we see how the resulting consciousness increases the intelligence and flexibility of the organisms. Much like how a laptop is more flexible than a PDA, which in turn is more flexible than a pocket calculator. We are, to a substantially greater extent than any other animal, general-purpose information processors. This does not relate to consciousness in any clear way. The only roughly coherent theory behind the evolution of consciousness, in my mind, is embedded within Pollock and Ismael's paper on nolipsism. It goes: "we are conscious because we use de se designators, and we use de se designators so that we can function intelligently in every possible situation". I don't like it, and I don't know if I agree with it, when it comes to the question of subjective experience. I don't see why de se designators should be special, among other symbols, in that particular way. Even so, it's as close as I can come to explaining the evolutionary origins of consciousness. Great for explaining the illusory nature of the self, not so great for explaining the illusory nature of consciousness -- since consciousness is not an illusion. It might be made of illusions, sure, but it isn't one itself. We can measure it. I'm not sure how we can measure it, but if it's favored by evolution then we must necessarily be able to.
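To make the Half-Life 2 point above concrete: the proximity-triggered "look at the player" behavior described there amounts to roughly the following. This is a minimal sketch in Python; every name in it is invented for illustration, and none of it is taken from any actual game engine.

    import math

    LOOK_RADIUS = 5.0  # metres; the "defined radius" (an assumed value)

    class NPC:
        """A toy non-player character: purely scripted, with no model of the world."""

        def __init__(self, x, y):
            self.x, self.y = x, y
            self.facing = 0.0  # heading, in radians

        def update(self, player_x, player_y):
            # The character's entire "social awareness": if the player stands
            # inside the trigger radius, rotate to face the player.
            dx, dy = player_x - self.x, player_y - self.y
            if math.hypot(dx, dy) <= LOOK_RADIUS:
                self.facing = math.atan2(dy, dx)

    npc = NPC(0.0, 0.0)
    npc.update(3.0, 4.0)  # player at distance 5.0, just inside the radius
    print(f"facing {math.degrees(npc.facing):.0f} degrees")  # prints: facing 53 degrees

A dozen lines of trigger-and-response, plus canned dialogue and recorded expressions, is the whole trick; nothing in it models or represents the player at all.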
From stathisp at gmail.com Thu Feb 11 21:43:31 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 08:43:31 +1100 Subject: [ExI] evolution of consciousness In-Reply-To: <967863.26874.qm@web36503.mail.mud.yahoo.com> References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: On 12 February 2010 01:37, Gordon Swobe wrote: > --- On Wed, 2/10/10, Spencer Campbell wrote: > >> This is actually a very good point. I don't have clearly >> articulate beliefs regarding the relationship between consciousness >> and evolution, myself, but clearly Gordon does. > > I think consciousness aids and enhances intelligence, something like the way a flashlight helps one move about in the dark. > > Unconscious animals like amoebas exhibit a low level of intelligent behavior. Those instinctive behaviors are encoded by DNA. In higher organisms we see nervous systems, and we see how the resulting consciousness increases the intelligence and flexibility of the organisms. > > It seems probable to me that conscious intelligence involves less biological overhead than instinctive unconscious intelligence, especially when considering complex behaviors such as social behaviors. Perhaps nature selected it for that reason only. But you have clearly stated that consciousness plays no role in behaviour, since you agree that the brain's behaviour can be emulated by a computer and the computer will be unconscious. The computer doesn't have to be tricked up to behave as if it's conscious: all you need do is follow the structural and functional relationships of the brain, and the intelligence will emerge without (you claim) any of the consciousness. -- Stathis Papaioannou From stathisp at gmail.com Thu Feb 11 21:48:01 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 08:48:01 +1100 Subject: [ExI] evolution of consciousness In-Reply-To: References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: On 12 February 2010 08:40, Spencer Campbell wrote: > Great for explaining the illusory nature of the self, not so great for > explaining the illusory nature of consciousness -- since consciousness > is not an illusion. It might be made of illusions, sure, but it isn't > one itself. We can measure it. I'm not sure how we can measure it, but > if it's favored by evolution then we must necessarily be able to. The idea is that it *isn't* favoured by evolution: it is a necessary side-effect of intelligence, as walking is a necessary side-effect of putting one foot in front of the other in a coordinated fashion. -- Stathis Papaioannou From lacertilian at gmail.com Thu Feb 11 21:53:22 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 13:53:22 -0800 Subject: [ExI] Film Script! In-Reply-To: <547545.73654.qm@web113605.mail.gq1.yahoo.com> References: <547545.73654.qm@web113605.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : > (a whole bunch of brilliant nonsense) Yes. Yes! Maybe run two perspectives through the thing, so that we're following both an organic human hero and a robotic alien antihero. There can be a scene where the two of them enter into an interminable Socratic dialogue! "Why are you trying to kill us?!" "What are you talking about? We can't kill you. You aren't even real." "Of course we're real! We have qualia and everything!" "No, you are only predisposed to claim that you do. You don't actually know what qualia are. 
If you did, you would be able to tell me what it is like to be swept by a ten millisecond pulse of electromagnetic radiation in the five to six gigahertz spectrum." "Five to... wait, gigahertz? I can't see anything below four hundred terahertz!" "Exactly." Then, a gun fight. (The guns should shoot explosions instead of bullets.) From thespike at satx.rr.com Thu Feb 11 22:26:49 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 11 Feb 2010 16:26:49 -0600 Subject: [ExI] Film Script! In-Reply-To: References: <547545.73654.qm@web113605.mail.gq1.yahoo.com> Message-ID: <4B748429.5050404@satx.rr.com> On 2/11/2010 3:53 PM, Spencer Campbell wrote: > Ben Zaiboc >: > > (a whole bunch of brilliant nonsense) > > Yes. Yes! Maybe run two perspectives through the thing, so that we're > following both an organic human hero and a robotic alien antihero. There > can be a scene where the two of them enter into an interminable Socratic > dialogue! Oh, sort of like this? Lune steps carefully, keeping her balance with outstretched arms. On every side, matted kelp and seaweed coat the sluggish surface of the ocean between the trapped hulks that have drifted here across hundreds, thousands of kilometers, fetching up at the still center of an indefinitely slow vortex on a cognate world of oceans seized by locking land masses. Other spoiled vessels hang trapped in the feral vegetation's embrace, moving slightly, rocking against each other's hulls with low, grinding vibrations and deep clangs, scarcely audible, like the booming of whales. This ruined ship, the Argyle, must have dangled here in the jaws of the sea for at least a century and a half. "I can't do this any longer," she says. "I won't." "How touching." The K-machine wears a heavy yellow canvas mariner's coat and black rubber boots, leaning against the stump of Argyle's fractured main mast. Broken spars and fragments of a fallen sail and tangled rigging cling about it. "You love him." The timber planks beneath her foot are pulpy, sagging with every careful step she takes. The ocean has invaded the vessel from within, seeping upward through the wood, rusting and corroding the iron work, without yet swallowing it down into the depths. Perhaps that fate is certain, but it has been delayed for many decades by the matted pelagic vegetation flattening the surface of the water, locking all these marooned vessels into a graveyard without burial. Lune grimaces. The setting is the perfect preference of the thing that regards her approach with deep, gratified irony. "Yes. I do love him. I did not expect--" "Because he brought you back from death. This is not love, it's supine and self-interested gratitude. Get over it." "You would have been quite happy to sacrifice my life to extinguish his." "Your life, like everyone's life, is illusory. Is this not what you believe and argue, philosopher? The Schmidhuber ontology, the blasphemy that computation is the basis of reality?" She stares at the thing with disgust and a certain enduring fright that she has known since childhood, when it first made itself known to her. The K-machine possesses power over her, in a measure she does not truly understand. Perhaps it and its kin had slaughtered her parents. Or perhaps, as it argues, that destruction had been, instead, and wickedly, the work of cold humans themselves, intent upon creating their own hell world.
The Ensemble instructors had evaded that issue whenever she'd attempted, as an acolyte, to raise it. "If life is illusory," she says, "I lose nothing by living it in the way that I choose. I'm done with you." But still she stands there. The thing reaches forth an arm and hand made all of black metal, tenderly strokes her cheek. A kind of joyous revulsion rises within her. A memory from just beyond infancy: a figure all in black, shielded against the foul fumes rising to the shattered surface from the piled dead in the concrete caverns below. Dark-clad arms plucking her up, carrying her to safety, her pulse roaring, her terrified childish voice locked in her throat. Septimus, or one of his assistants, she sometimes thinks. Or perhaps, as it claims, this thing slouching at ease before her, or one of its kindred. It is a salvation she can hardly regret, either way, and yet she detests its memory. She waits stock-still as the thing draws a line down her face, withdraws. From thespike at satx.rr.com Thu Feb 11 23:16:17 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 11 Feb 2010 17:16:17 -0600 Subject: [ExI] skyhook elevator Message-ID: <4B748FC1.8090001@satx.rr.com> It occurs to me (can't recall ever seeing this discussed, although it must be an ancient sub-topic of skyhook dynamics) that as your elevator climbs or is shoved up the thread, you'd not only be pressed against the floor but also against the west wall. Maybe you wouldn't notice if the trip took a couple of days, but you're going from rest to 11,000 km/hr. Is that especially noticeable? What say the space gurus? Damien Broderick From ablainey at aol.com Thu Feb 11 23:33:53 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Thu, 11 Feb 2010 18:33:53 -0500 Subject: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> References: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> Message-ID: <8CC7989CE4091C7-607C-1543@webmail-d011.sysops.aol.com> Odd I was mulling this very issue over this afternoon while thinking about going to the dentist. I was wondering how I would cope with being immortal if the rigours of age still catch up with you. Not the grey hair, wrinkled skin and other signs of a long life. More the niggles like toothache from a dodgy crown. Backache from that slipped disc. Arthritis, migraines, bad eyesight and all the other things that detract from life's quality. Alzheimer's and other degenerative diseases would have been unheard of a couple of hundred years ago. No one ever lived long enough to develop them. It makes me wonder what new ailments we will discover. Perhaps the equivalent of a mid-life crisis every 50 years due to bicentennial cell regeneration? Who knows. -----Original Message----- From: Will Steinberg To: ExI chat list Sent: Thu, 11 Feb 2010 19:05 Subject: [ExI] Very long lifespans and accompanying mental milieus When human lifespans lengthen, new mental paradigms are born. This tends to occur when a sufficiently large portion of the population reaches a certain age for that age to be common. For example, humans of the past would have a hard time understanding both the jaded, crotchety old man and the freaking-out fifty year old, simply because anyone who lived to be this old was usually revered for their luck. Very old people, when there were not a lot of very old people, were Methuselahs.
But as medicine advanced, being these ages has become increasingly more common, and as the uniqueness associated with these ages has disappeared, a slew of mental crises have developed. Now, some, as in the case of the elderly, are physiological--an old man is cranky because he is arthritic or forgetful. Yet there are aspects of aging that cannot be associated in this manner, and instead lie completely in the mental realm. The mid-life crisis is an example of a phenomenon that is distinctly new. In the next fifty or so years (and I hope this is overestimating,) there is a good chance that human lifespans will lengthen significantly, perhaps going so far as to double. Now, many of us simply think to ourselves: "More time to think! More time to work!" But who knows what happens when the metaprograms of the brain reshuffle connections for far longer than nature "intended"? Though it is fine, for now, to treat ourselves as having overcome nature and evolution, we must remember that consciousness and intelligence were successful for producing offspring, which are produced relatively early in life, and have far less of a connection to the later years. Is this problem one of value? What if, at one hundred and fifty years of age, man is suddenly compelled to end his life? What if longer life will dictate to us the most obvious example of human pathos--that, for all we love about ourselves, the buck stops for the brain sooner than we might have hoped? It seems in this case that the recent discussions on mental being that have overwhelmed the list are indeed incredibly important, if only for the fact that the mental processes of humans must be understood in order to design even better processes that don't hit a wall after extended periods of time. This is Transhumanism, not in the often-held idea of letting "humanness" transcend our current physical limitations, but in scrapping many aspects of that humanness entirely in favor of something unfathomable and better. There is a good chance that we will, at some point, be faced with the problem that the confusing tangle of yarn in our heads produced by evolution is simply not good enough to deal with whatever comes next. And then what? From lacertilian at gmail.com Thu Feb 11 23:33:56 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 15:33:56 -0800 Subject: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> References: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> Message-ID: Will Steinberg : > There is a good chance that we will, at some point, be faced with the > problem that the confusing tangle of yarn in our heads produced by evolution > is simply not good enough to deal with whatever comes next. And then what? I would argue that we hit that point somewhere between five and ten thousand years ago. We've been flying by the seat of our pants ever since, racing to evolve the next big species before we utterly destroy ourselves in the process (along with everything else on Earth that wasn't smart enough to be so stupid). So, yes: what's the solution? I don't think we're close enough to cerebral engineering for that to enter into the discussion as a serious possibility.
Before we can really address the hardware problems, I think we have to work out the software problems. The history of religion is basically equivalent to the history of operating systems. I use Linux myself, and I don't identify with any named belief system, including atheism, for precisely the same reason. Actually I want to stop using Linux as soon as possible. Too many stratified layers of ad-hoc compatibility. Have to write my own computer architecture from scratch, at some point. I simply can't be genuinely comfortable until then! Let's say someone invents a religion that is scientifically proven to improve the health, intelligence, and emotional stability of its believers, but it assumes the existence of demonstrably false supernatural phenomena. Let's also say that it comes with an effective method of hypnosis to induce the required delusions. Would you convert? Another take on it: someone invents a novel surgical procedure, crossing the wires in your skull in just such a way that you become a more effective transhuman being. Would you sign up for it? How long would you wait, before having it done, to see if there are any unexpected side effects in the early adopters? Both of these are within the realm of possibility right now, though I have my doubts about how effective that hypnosis would be. Either one could come up within the next five years. In the former case, I might even be the one to design the calculatedly-misleading metaphysics! From pharos at gmail.com Thu Feb 11 23:55:59 2010 From: pharos at gmail.com (BillK) Date: Thu, 11 Feb 2010 23:55:59 +0000 Subject: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> References: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> Message-ID: 2010/2/11 Will Steinberg : > Well then you may recast my argument, filling in > "mid-life-sense-of-pride-and-fulfillment" for "mid-life crisis;" it's still > something that people a few centuries ago would not understand very well. I > still think we will surely face unprecedented places of the mind as > lifespans grow longer. > > I have the feeling that extended lifespans will make people more risk-averse. If you know that the only thing that is likely to kill you in the next 200 years is an accident then it seems likely that avoiding dangerous situations will become prevalent. Wars were popular when nobody was likely to live beyond 30 or 40 years anyway. But a risk-averse society ..... Everyone driving golf carts surrounded by air bags, no dangerous sports, health and safety regulations everywhere, no motorcycles, etc., I'm sure you can think of more possibilities. BillK From msd001 at gmail.com Fri Feb 12 00:32:35 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 11 Feb 2010 19:32:35 -0500 Subject: [ExI] Semiotics and Computability In-Reply-To: <896407.10281.qm@web36503.mail.mud.yahoo.com> References: <896407.10281.qm@web36503.mail.mud.yahoo.com> Message-ID: <62c14241002111632m53d7fdafvd57cdf9b8f325006@mail.gmail.com> On Thu, Feb 11, 2010 at 9:06 AM, Gordon Swobe wrote: > Characters in video games behave as if they have consciousness. Seems to me that digitally simulated brains will also behave as if they have consciousness, but that they will have no more consciousness than do those characters in video games. > > I don't play video games myself but I've known children who did.
They often spoke of the characters in their video games as if those characters really existed as conscious entities. Then they matured. Really? It still amazes me that anyone continues to engage these discussions with you. Your comment about children and maturity speaks more to me about the loss of creative imagination and the subjugation of young minds to the expectation of "growing up" and abandoning 'childish things'. Has anyone you've discussed this with finally admitted their ideas were wrong and adopted whole cloth your understanding of this issue? There may be a reason for that. From steinberg.will at gmail.com Fri Feb 12 00:38:54 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 11 Feb 2010 19:38:54 -0500 Subject: [ExI] Semiotics and Computability In-Reply-To: <62c14241002111632m53d7fdafvd57cdf9b8f325006@mail.gmail.com> References: <896407.10281.qm@web36503.mail.mud.yahoo.com> <62c14241002111632m53d7fdafvd57cdf9b8f325006@mail.gmail.com> Message-ID: <4e3a29501002111638s71abe067va2e2b213b5ef708f@mail.gmail.com> In stride with what Mike just said, could we perhaps (since most of us seem to agree) discuss the actually important notions of semiotics and computability, instead of more pointless antiswobian banter? Nobody is going to budge, I promise. Unstoppable force, immovable post sort of thing. From msd001 at gmail.com Fri Feb 12 00:39:59 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 11 Feb 2010 19:39:59 -0500 Subject: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: References: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> Message-ID: <62c14241002111639w6d7bb9fg75780778fcb41453@mail.gmail.com> On Thu, Feb 11, 2010 at 6:55 PM, BillK wrote: > But a risk-averse society ..... Everyone driving golf carts > surrounded by air bags, no dangerous sports, health and safety > regulations everywhere, no motorcycles, etc., I'm sure you can think of > more possibilities. Driving? What kind of reckless madman are you? There are so many mechanical failures that could give rise to your untimely death. Better to move into a comfortably controlled cell at the Life Facility to ensure your biological computing substrate is ideally maintained while you flit safely about the many virtual worlds of experience coming soon(tm). From lacertilian at gmail.com Fri Feb 12 00:53:17 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 16:53:17 -0800 Subject: [ExI] evolution of consciousness In-Reply-To: References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: Stathis Papaioannou : >Spencer Campbell : >>... if [consciousness is] favored by evolution then we must necessarily be able to [measure it]. > > The idea is that it *isn't* favoured by evolution: it is a necessary > side-effect of intelligence, as walking is a necessary side-effect of > putting one foot in front of the other in a coordinated fashion. If that's the case, I once again run straight into the realm of panpsychism due to the fact it is impossible to create an absolutely unintelligent system. Even rocks are good at staying on the ground. So I'm inclined to believe you. Yeah. There's not a lot I can contribute to this thread. I have no evidence before me indicating that human-level consciousness is an evolutionary inevitability, by any measure, so I don't have a lot to go on.
From stathisp at gmail.com Fri Feb 12 00:53:44 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 11:53:44 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <896407.10281.qm@web36503.mail.mud.yahoo.com> References: <896407.10281.qm@web36503.mail.mud.yahoo.com> Message-ID: On 12 February 2010 01:06, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >> Would you say that a robot that seems to walk isn't really >> walking because it is not identical to, and therefore lacks all of >> the properties of, a human walking? > > My point here concerns digital simulations of brains, not robots. > >> The argument is that the computer can reproduce the consciousness of the >> brain if it is able to reproduce the brain's behaviour. > > Characters in video games behave as if they have consciousness. Seems to me that digitally simulated brains will also behave as if they have consciousness, but that they will have no more consciousness than do those characters in video games. > > I don't play video games myself but I've known children who did. They often spoke of the characters in their video games as if those characters really existed as conscious entities. Then they matured. Characters in video games do certain things such as shoot other characters. If they were connected to a robot arm and camera, they might be able to shoot real people, really dead. So it is not a valid objection to say that because the simulation is not identical with the original, it cannot do anything that the original can do. You have to show that consciousness is beyond the power of a simulation to reproduce, not that a computer differs from a brain, which no-one is disputing. -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Feb 12 00:59:52 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 16:59:52 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <769806.60768.qm@web36505.mail.mud.yahoo.com> --- On Thu, 2/11/10, Spencer Campbell wrote: >> I don't play video games myself but I've known >> children who did. They often spoke of the characters in >> their video games as if those characters really existed as >> conscious entities. Then they matured. > > Exactly the same argument applies to characters in movies. Yes indeed. The characters you see on the silver screen, or on your TV screen or on your computer monitor, have no more consciousness than do the characters you see in video games and comic books. And they have no more consciousness than will the characters we might one day create with perfect digital simulations of humans and their brains. Such digital simulations of humans will exist only as mere hi-tech movies of real or imaginary people, as mere models of real or imaginary people, as mere animations of real or imaginary people, as mere caricatures of real or imaginary people, as mere descriptions of real or imaginary people. No matter whether we create those simulations with real or imaginary persons in mind, the simulations themselves will have no more reality than does Fred Flintstone. Yabadabadoo.
-gts From spike66 at att.net Fri Feb 12 00:48:16 2010 From: spike66 at att.net (spike) Date: Thu, 11 Feb 2010 16:48:16 -0800 Subject: [ExI] skyhook elevator In-Reply-To: <4B748FC1.8090001@satx.rr.com> References: <4B748FC1.8090001@satx.rr.com> Message-ID: > ...On Behalf Of Damien Broderick > Subject: [ExI] skyhook elevator > > It occurs to me (can't recall ever seeing this discussed, > although it must be an ancient sub-topic of skyhook dynamics) > that as your elevator climbs or is shoved up the thread, > you'd not only be pressed against the floor but also against > the west wall. Maybe you wouldn't notice if the trip took a > couple of days, but you're going from rest to 11,000 km/hr. > Is that especially noticeable? What say the space gurus? > > Damien Broderick Coriolis effect sounds like what you are describing, and it would be durn near negligible. I will calculate it if you wish. To give you an idea by using only numbers in my head and single digit BOTECs, geo is about 36000 km from the surface as I recall so add 6300 km earth radius and that's close enough to about 40,000 km so the circumference of the orbit is about 6 and some change times that, so 250000 km in 24 hrs, so you accelerate to 10000 km per hour or about 3 km per second or so. How long do you guess it would take to haul you up to GEO? A few hours? Let's say 10. To accelerate 3 km per second eastward in 10 hrs would take about 0.1 meters per second squared, or about a 100th of a G. The elevator passengers would scarcely notice. It is proportional of course. If you theorize they get there in 1 hour, then the Coriolis component is about a tenth of a G, but if they get all the way to GEO in an hour, there is some serious upward velocity involved. spike From possiblepaths2050 at gmail.com Fri Feb 12 01:19:08 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 11 Feb 2010 18:19:08 -0700 Subject: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <62c14241002111639w6d7bb9fg75780778fcb41453@mail.gmail.com> References: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> <62c14241002111639w6d7bb9fg75780778fcb41453@mail.gmail.com> Message-ID: <2d6187671002111719v4cd08365v5ab279243d0306f2@mail.gmail.com> BillK wrote: If you know that the only thing that is likely to kill you in the next 200 years is an accident then it seems likely that avoiding dangerous situations will become prevalent. >>> I imagine a society somewhat along the lines of Larry Niven's alien species known as the puppeteers, who go to extremes to protect their lives. But because of this humans prize their advanced and nearly fool-proof technology! He continues: Wars were popular when nobody was likely to live beyond 30 or 40 years anyway. >>> But war will be fought largely with machines and a strong military will be even more a product of lots of bright and highly educated humans (and of course also supersmart AI's!). I do always envision humanity taking part in battlefield "up-close" operations. But the "human" infantryman/special forces of the future will be hugely enhanced to survive and succeed! lol Joining the military will be strictly voluntary (to avoid social disruption that would dwarf the Vietnam protests), but be seen as vastly more "daring and macho" than it is now. Of course I still see political tyrannies (and even some democracies) playing mind/meme games with their young people to try to get them to throw their indefinite lives away in largely pointless wars.
But it won't be nearly as easy to convince them as it is now. He continues: But a risk-averse society ..... Everyone driving golf carts surrounded by air bags, no dangerous sports, health and safety regulations everywhere, no motorcycles, etc, I'm sure you can think of more possibilities. >>> I think we will see a society of extremes at both ends of the personal safety spectrum. "Mature" nanotech clothes & bodies that can save us from most tragedies that now take lives by the millions will be utterly commonplace, but at the same time there will be the nearly suicidally daring and bored people, who will risk their lives by engaging in extremely dangerous sports and "leisure pursuits." John Grigg From gts_2000 at yahoo.com Fri Feb 12 01:49:42 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 17:49:42 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <840800.79193.qm@web36503.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: >> I don't play video games myself but I've known >> children who did. They often spoke of the characters in >> their video games as if those characters really existed as >> conscious entities. Then they matured. > > Characters in video games do certain things such as shoot > other characters. If they were connected to a robot arm and > camera, they might be able to shoot real people, really dead. No sir. The supposed lawyer for such a character in such a video game has a good defense: his client could not have shot anyone because his client does not exist except in someone's overly vivid imagination. The sensible people on the jury will find that a mixed-up child or perhaps a philosophically-challenged adult used a hi-tech weapon disguised as a computer game to shoot a real person. -gts From gts_2000 at yahoo.com Fri Feb 12 01:26:44 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 17:26:44 -0800 (PST) Subject: [ExI] evolution of consciousness In-Reply-To: Message-ID: <438778.21885.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: > But you have clearly stated that consciousness plays no > role in behaviour... I can hardly believe you wrote that. I spent hours explaining to you why and how I reject epiphenomenalism. >... since you agree that the brain's behaviour can be emulated > by a computer and the computer will be unconscious. I argued that we can program an artificial brain to act as if it has consciousness and that said artificial brain will still lack consciousness. This is not the same as arguing that consciousness plays no role in human behavior! By the way what exactly do you mean by "behavior of brains", anyway? When I refer to the brain's behavior I usually mean observable behavior of the organism it controls, behavior such as acts of speech. -gts From steinberg.will at gmail.com Fri Feb 12 02:14:49 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 11 Feb 2010 21:14:49 -0500 Subject: [ExI] evolution of consciousness In-Reply-To: <438778.21885.qm@web36508.mail.mud.yahoo.com> References: <438778.21885.qm@web36508.mail.mud.yahoo.com> Message-ID: <4e3a29501002111814u37d41ae4w7790eae6edbfec9d@mail.gmail.com> > > I don't see why de se > designators should be special, among other symbols, in that particular > way. Not necessarily special, just useful up to a point.
One can see conscious systems providing at least SOME benefit to organisms, perhaps in order to make decisions affecting the self in the future, understanding how "I" fits into its surroundings. If conscious systems were developing at the same time as or as a cause/effect of social systems, it is again simple to see how using awareness of the self to make decisions in the future in SOCIAL situations would, in turn, INCREASE SURVIVAL CHANCES. Any social animal which is able to modify the self and plan for the self in order to set up ideal conditions for mating will have an advantage in mating. Intelligence would also be important, a separate property that enhanced the organization and quality of actions. Having high levels of both would lead to decisions that both pertained more intimately to the self than in other animals, and also were of a higher quality and thus more likely to succeed. Since both systems as simple notions are easy enough to see emerging in very small ways, the existence of both would lead to a greater number of animals with slightly higher amounts of both, etc, etc. This would explain why intelligence seems highly correlated with consciousness, with most animals who show signs of self-awareness (many birds, many primates, dolphins, elephants) also performing well in social situations, problem-solving and memory situations, and even having creative functions through crude art. Sociability, insight and creativity are essential components of a blanket intelligence (street smarts, school smarts, art smarts) and seem to correlate with consciousness. This could be both a product of an epi-evolutionary dual helpfulness and the fact that awareness of the self will actually improve the magnitudes of all three. From stathisp at gmail.com Fri Feb 12 02:22:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 13:22:47 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <840800.79193.qm@web36503.mail.mud.yahoo.com> References: <840800.79193.qm@web36503.mail.mud.yahoo.com> Message-ID: On 12 February 2010 12:49, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >>> I don't play video games myself but I've known >>> children who did. They often spoke of the characters in >>> their video games as if those characters really existed as >>> conscious entities. Then they matured. >> >> Characters in video games do certain things such as shoot >> other characters. If they were connected to a robot arm and >> camera, they might be able to shoot real people, really dead. > > No sir. The supposed lawyer for such a character in such a video game has a good defense: his client could not have shot anyone because his client does not exist except in someone's overly vivid imagination. > > The sensible people on the jury will find that a mixed-up child or perhaps a philosophically-challenged adult used a hi-tech weapon disguised as a computer game to shoot a real person. I was simply pointing out that the shooting would be a REAL shooting resulting in a REAL death, even though the character is simulated. Whether the character understood what it was doing is a different question, but in general you cannot use the argument that it was a simulation to preclude this possibility, because the claim that, a priori, a simulation cannot have ANY property of the thing it is simulating is obviously ridiculous.
-- Stathis Papaioannou From stathisp at gmail.com Fri Feb 12 02:52:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 13:52:47 +1100 Subject: [ExI] evolution of consciousness In-Reply-To: <438778.21885.qm@web36508.mail.mud.yahoo.com> References: <438778.21885.qm@web36508.mail.mud.yahoo.com> Message-ID: On 12 February 2010 12:26, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >> But you have clearly stated that consciousness plays no >> role in behaviour... > > I can hardly believe you wrote that. I spent hours explaining to you why and how I reject epiphenomenalism. > >>... since you agree that the brain's behaviour can be emulated >> by a computer and the computer will be unconscious. > > I argued that we can program an artificial brain to act as if it has consciousness and that said artificial brain will still lack consciousness. This is not the same as arguing that consciousness plays no role in human behavior! > > By the way what exactly do you mean by "behavior of brains", anyway? > > When I refer to the brain's behavior I usually mean observable behavior of the organism it controls, behavior such as acts of speech. A computer model of the brain is made that controls a body, i.e. a robot. The robot will behave exactly like a human. Moreover, it will behave exactly like a human due to an isomorphism between the structure and function of the brain and the structure and function of the computer model, since that is what a model is. Now, you claim that this robot would lack consciousness. This means that there is nothing about the intelligent behaviour of the human that is affected by consciousness. For if consciousness were a separate thing that affected behaviour, there would be some deficit in behaviour if you reproduced the functional relationship between brain components while leaving out the consciousness. Therefore, consciousness must be epiphenomenal. You might have said that you rejected epiphenomenalism, but you cannot do so consistently. The only way you can consistently maintain your position that computers can't reproduce consciousness is to say that they can't reproduce intelligence either. If you don't agree with this you must explain why I am wrong when I point out the self-contradictions that zombies would lead to, and you simply avoid doing this, which is no way to comport yourself in a philosophical debate. I asked if the thought experiments I proposed were clear to everyone else and no-one contacted me to say that they were not. -- Stathis Papaioannou From thespike at satx.rr.com Fri Feb 12 06:07:47 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 12 Feb 2010 00:07:47 -0600 Subject: [ExI] better self-transcendence through selective brain damage Message-ID: <4B74F033.702@satx.rr.com> Links to Spirituality Found in the Brain By LiveScience.com Staff Scientists have identified areas of the brain that, when damaged, lead to greater spirituality. The findings hint at the roots of spiritual and religious attitudes, the researchers say. The study, published in the Feb. 11 issue of the journal Neuron, involves a personality trait called self-transcendence, which is a somewhat vague measure of spiritual feeling, thinking, and behaviors. Self-transcendence "reflects a decreased sense of self and an ability to identify one's self as an integral part of the universe as a whole," the researchers explain. Before and after surgery, the scientists surveyed patients who had brain tumors removed. 
The surveys generate self-transcendence scores. Selective damage to the left and right posterior parietal regions of the brain induced a specific increase in self-transcendence, or ST, the surveys showed. "Our symptom-lesion mapping study is the first demonstration of a causative link between brain functioning and ST," said Dr. Cosimo Urgesi from the University of Udine in Italy. "Damage to posterior parietal areas induced unusually fast changes of a stable personality dimension related to transcendental self-referential awareness. Thus, dysfunctional parietal neural activity may underpin altered spiritual and religious attitudes and behaviors." Previous neuroimaging studies had linked activity within a large network in the brain that connects the frontal, parietal, and temporal cortexes with spiritual experiences, "but information on the causative link between such a network and spirituality is lacking," said lead study author Urgesi. One study, reported in 2008, suggested that the brain's right parietal lobe defines "Me," and people with less active Me-Definers are more likely to lead spiritual lives. The finding could lead to new strategies for treating some forms of mental illness. "If a stable personality trait like ST can undergo fast changes as a consequence of brain lesions, it would indicate that at least some personality dimensions may be modified by influencing neural activity in specific areas," said Dr. Salvatore M. Aglioti from Sapienza University of Rome. "Perhaps novel approaches aimed at modulating neural activity might ultimately pave the way to new treatments of personality disorders." From stefano.vaj at gmail.com Fri Feb 12 13:12:10 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 12 Feb 2010 14:12:10 +0100 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> Message-ID: <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> 2010/2/9 JOSHUA JOB : > A fetus is not yet a person, nor has ever been a person (as it is not nor > ever has been a rational conceptual entity). So it cannot have any rights. Legally wrong. A fetus can inherit, for instance, in a number of circumstances, at least in continental jurisdictions. Even though its capacity is restricted, the same applies to legal entities, for instance. Or even to adult humans (say, a life prisoner). > I am saying that it cannot be wrong if it does not violate the nature of > other conscious entities. The ocean cannot be wronged, only rational > conceptual "self"-aware entities can be, because they are the things that > can conceivably understand right and wrong. Let's say somebody cannot conceivably understand right and wrong (say, out of certified "moral folly", or whatever the psychiatric terms may be in English). Does it stop being a natural person under existing laws? No. Do great apes with a greater ability to distinguish right and wrong than, say, a human infant or a severely mentally handicapped human being, have rights? Again no, at least for the time being. In legal terms, a person is what you say it is. Thus, I would not start from personhood to deduce rights, but rather from rights provided for by a legal system, on the basis of value judgments, to infer the personhood status they involve.
-- Stefano Vaj From gts_2000 at yahoo.com Fri Feb 12 13:41:46 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 12 Feb 2010 05:41:46 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <957422.65114.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: >> The sensible people on the jury will find that a >> mixed-up child or perhaps a philosophically-challenged adult >> used a hi-tech weapon disguised as a computer game to shoot >> a real person. >... the claim that, a priori, a simulation cannot have ANY property of > the thing it is simulating is obviously ridiculous. Is it? In the incident you describe, only a *depiction* of a shooter exists in the game, and depictions of people have no reality. Or to put it another way: they have the same kind of reality, and the same legal standing, as photographs and drawings and other depictions of people. The human game-developer or the human game-player will go to prison or to a psychiatric facility for the criminally insane. The simulated shooter in the game will never know or care; he has no real existence. -gts From bbenzai at yahoo.com Fri Feb 12 14:50:33 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 12 Feb 2010 06:50:33 -0800 (PST) Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: Message-ID: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Damien Broderick informed us: > Links to Spirituality Found in the Brain ... > The finding could lead to new strategies for treating some > forms of > mental illness. > ... > "Perhaps novel approaches aimed at modulating > neural activity > might ultimately pave the way to new treatments of > personality disorders." Wow. Could it be? Might we find a cure for religion? Ben Zaiboc From stathisp at gmail.com Fri Feb 12 15:25:39 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 13 Feb 2010 02:25:39 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <957422.65114.qm@web36508.mail.mud.yahoo.com> References: <957422.65114.qm@web36508.mail.mud.yahoo.com> Message-ID: On 13 February 2010 00:41, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >>> The sensible people on the jury will find that a >>> mixed-up child or perhaps a philosophically-challenged adult >>> used a hi-tech weapon disguised as a computer game to shoot >>> a real person. > >>... the claim that, a priori, a simulation cannot have ANY property of >> the thing it is simulating is obviously ridiculous. > > Is it? In the incident you describe, only a *depiction* of a shooter exists in the game, and depictions of people have no reality. Or to put it another way: they have the same kind of reality, and the same legal standing, as photographs and drawings and other depictions of people. > > The human game-developer or the human game-player will go to prison or to a psychiatric facility for the criminally insane. The simulated shooter in the game will never know or care; he has no real existence. I'll say it again: the claim that, a priori, a simulation cannot have ANY property of the thing it is simulating is obviously ridiculous. This does not entail that a simulation will necessarily have ALL the properties of the thing being simulated. For example, if a simulation of a human is as intelligent and conscious as a real human that does not mean it will weigh the same and smell the same as a real human. 
-- Stathis Papaioannou From stathisp at gmail.com Fri Feb 12 15:41:05 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 13 Feb 2010 02:41:05 +1100 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <226303.69206.qm@web113612.mail.gq1.yahoo.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Message-ID: On 13 February 2010 01:50, Ben Zaiboc wrote: > Wow. Could it be? Might we find a cure for religion? In psychiatry patients with religious delusions are very common, but occasionally there is doubt as to whether they are really psychotic. The only decent diagnostic test we have for this problem is a therapeutic trial of an antipsychotic medication. If the religious ideas go away then almost certainly they were part of a psychosis: the test has a very low false positive rate. It's harder to interpret the test if the religious ideas do not go away, since about 20-30% of patients with clearly psychotic symptoms and 100% of patients who are just religious do not respond to medication. In other words, reasonably effective treatments are available at present for the crazy but not for the merely gullible. -- Stathis Papaioannou From pharos at gmail.com Fri Feb 12 15:57:25 2010 From: pharos at gmail.com (BillK) Date: Fri, 12 Feb 2010 15:57:25 +0000 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <226303.69206.qm@web113612.mail.gq1.yahoo.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Message-ID: On Fri, Feb 12, 2010 at 2:50 PM, Ben Zaiboc wrote: > Wow. Could it be? Might we find a cure for religion? > > Sounds like the opposite to me. Damage certain parts of the brain and you get transcendental experiences. Once the technique is formalized, religious sects have the ability to guarantee profound religious experiences to their members. We just stick a wire in ... here..... "Ohhhhh Gawd" ........ and another follower is reborn. BillK From gts_2000 at yahoo.com Fri Feb 12 16:25:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 12 Feb 2010 08:25:34 -0800 (PST) Subject: [ExI] Semiotics and Computability Message-ID: <964516.40828.qm@web36504.mail.mud.yahoo.com> "Depiction" seems like the perfect word for conveying my meaning. ------ Main Entry: de·pict Pronunciation: \di-ˈpikt, dē-\ Function: transitive verb Etymology: Latin depictus, past participle of depingere, from de- + pingere to paint -- more at paint Date: 15th century 1 : to represent by or as if by a picture 2 : describe ------ If and when we develop the technology to create complete digital simulations of people, we will then have only the capacity to perfectly depict people in digital form. Those digital depictions of people will only *represent* the real or imaginary people they depict. They will have the same reality status as do less sophisticated kinds of depictions, e.g., digital photographs, digital paintings, digital drawings and digital cartoons. It seems to me that no matter how hi-tech and life-like our depictions become, there will always exist an important difference between the depiction of the thing and the thing depicted. Some people will however become so mesmerized by the life-like realism of the digital depictions that they will conflate the depictions with the real or imaginary things they depict. They will forget the difference between the photographs of people and the people in the photographs.
-gts From aware at awareresearch.com Fri Feb 12 16:15:32 2010 From: aware at awareresearch.com (Aware) Date: Fri, 12 Feb 2010 08:15:32 -0800 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <4B74F033.702@satx.rr.com> References: <4B74F033.702@satx.rr.com> Message-ID: On Thu, Feb 11, 2010 at 10:07 PM, Damien Broderick wrote: > Links to Spirituality Found in the Brain > > By LiveScience.com Staff > > Scientists have identified areas of the brain that, when damaged, lead to > greater spirituality. The findings hint at the roots of spiritual and > religious attitudes, the researchers say. > > The study, published in the Feb. 11 issue of the journal Neuron, involves a > personality trait called self-transcendence, which is a somewhat vague > measure of spiritual feeling, thinking, and behaviors. Self-transcendence > "reflects a decreased sense of self and an ability to identify one's self as > an integral part of the universe as a whole," the researchers explain. It's probably worth pointing out, despite a high probability of being misunderstood, that these experiences of "spirituality" and "self-transcendence" and other phenomena such as great joy or bliss orient one's thinking in a manner virtually opposite and certain to exclude that of Zen enlightenment (which never claims to be religious or spiritual). Such misconceived expectations are key impediments to those who aim to attain a *coherent* understanding of the relationship of the observer to the observed (even, and especially, when the observer IS the observed). Zen awakening is accompanied by none of these phenomena, and quite likely only by a laugh or smile at the realization of how simple and how close it was all along. "Before I had studied Zen for thirty years, I saw mountains as mountains, and waters as waters. When I arrived at a more intimate knowledge, I came to the point where I saw that mountains are not mountains, and waters are not waters. But now that I have got its very substance I am at rest. For it's just that I see mountains once again as mountains, and waters once again as waters." -- Ch'uan Teng Lu The Way of Zen, p126 - Jef From thespike at satx.rr.com Fri Feb 12 17:10:08 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 12 Feb 2010 11:10:08 -0600 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> Message-ID: <4B758B70.3040101@satx.rr.com> On 2/12/2010 7:12 AM, Stefano Vaj wrote: > A fetus can inherit, for instance, in a number of > circumstances, at least in continental jurisdictions. Would that apply to a frozen embryo? If a billionaire had an embryo put on ice with the expectation of having it implanted later (perhaps in a host mother) and then died, could an estate be locked up for decades while the small mass of cells hung changelessly inside a Dewar? Damien Broderick From jonkc at bellsouth.net Fri Feb 12 17:24:53 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 12 Feb 2010 12:24:53 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <438778.21885.qm@web36508.mail.mud.yahoo.com> References: <438778.21885.qm@web36508.mail.mud.yahoo.com> Message-ID: <1136ABC5-0122-4736-ACC2-6DF2006A8A24@bellsouth.net> Since my last post Gordon Swobe has posted 5 times.
> >> But you have clearly stated that consciousness plays no >> role in behaviour... > > I can hardly believe you [Stathis Papaioannou] wrote that. I spent hours explaining to you why and how I reject epiphenomenalism. I can hardly believe Swobe is surprised that Stathis doesn't understand his position, I don't believe that Swobe himself understands his position. Consciousness affects behavior enough for evolution to produce it but not enough for the Turing Test to detect it. Nuts. > > I argued that we can program an artificial brain to act as if it has consciousness and that said artificial brain will still lack consciousness. Even his contradictions are contradictory. We are intelligent and we are conscious, I am anyway; if the 2 are separate then consciousness must just be tacked on, a sort of consciousness circuit. But Evolution has absolutely no reason to develop a consciousness circuit. > This is not the same as arguing that consciousness plays no role in human behavior! And yet in his next breath Swobe will tell us that consciousness plays no role in the Turing Test! > No matter whether we create those simulations with real or imaginary persons in mind, the simulations themselves will have no more reality than does Fred Flintstone. Fred Flintstone certainly isn't real as I am real and probably isn't real as Gordon Swobe is real, but one can't help wondering why he used that as an example rather than, say, "Krotchly Q Kumberbun". Presumably it's because one meme has enough reality to allow for communication while the other has so little reality that his readers wouldn't even know what he's referring to or the point he's trying to make. > there will always exist an important difference between the depiction of the thing and the thing depicted. But not if the "thing" in question is not a thing at all and is in fact not even a noun. > Those digital depictions of people will only *represent* the real or imaginary people they depict. Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did. John K Clark From pharos at gmail.com Fri Feb 12 18:59:00 2010 From: pharos at gmail.com (BillK) Date: Fri, 12 Feb 2010 18:59:00 +0000 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <4B758B70.3040101@satx.rr.com> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> <4B758B70.3040101@satx.rr.com> Message-ID: On Fri, Feb 12, 2010 at 5:10 PM, Damien Broderick wrote: > Would that apply to a frozen embryo? If a billionaire had an embryo put on > ice with the expectation of having it implanted later (perhaps in a host > mother) and then died, could an estate be locked up for decades while the > small mass of cells hung changelessly inside a Dewar? > > I think it is safe to say that the law has not yet decided on that possibility. :) Inheritance law differs greatly between countries. For example, in some jurisdictions females cannot inherit. Strictly speaking, a fetus has no inheritance rights. (Controversial - you are getting into the abortion debate here).
After-birth children (or posthumous children) born after the death of the parents can inherit and have rights after they are born live. Some laws say that the child has to be born within the gestation period following the death of the father. (i.e. frozen embryos not considered). If an estate was waiting on a birth to see if a child could inherit, then a trustee would be appointed to look after the estate. Presumably this could also be done for a frozen embryo, but I think it would be unlikely. An embryo might never be implanted and a live baby produced, so it only has a conditional chance of life. And there might be twenty or thirty frozen embryos. Could the first inherit, when more might arrive later? And if a woman freezes her eggs, she could produce many half-siblings who would also have inheritance rights. It all sounds too complicated to me. If I was a lawyer, I'd say forget about frozen embryos and eggs inheriting. BillK From stefano.vaj at gmail.com Fri Feb 12 19:04:33 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 12 Feb 2010 20:04:33 +0100 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <4B758B70.3040101@satx.rr.com> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> <4B758B70.3040101@satx.rr.com> Message-ID: <580930c21002121104u40de5b2uf47fb432bbdd55c2@mail.gmail.com> On 12 February 2010 18:10, Damien Broderick wrote: > On 2/12/2010 7:12 AM, Stefano Vaj wrote: > >> A fetus can inherit, for instance, in a number of >> circumstances, at least in continental jurisdictions. > > Would that apply to a frozen embryo? If a billionaire had an embryo put on > ice with the expectation of having it implanted later (perhaps in a host > mother) and then died, could an estate be locked up for decades while the > small mass of cells hung changelessly inside a Dewar? The common wisdom is that only an in situ embryo can be considered as a subject. You cannot for instance have a special attorney appointed to protect the interest of an egg. You can however make a not-yet-conceived child your conditional heir... -- Stefano Vaj From spike66 at att.net Fri Feb 12 19:34:37 2010 From: spike66 at att.net (spike) Date: Fri, 12 Feb 2010 11:34:37 -0800 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Message-ID: > Subject: Re: [ExI] better self-transcendence through > selective brain damage > > On Fri, Feb 12, 2010 at 2:50 PM, Ben Zaiboc wrote: > > Wow. Could it be? Might we find a cure for religion? There would be very little demand for a cure for religion. Just the opposite, the market would be in stimulating that part of the brain to induce a religious experience. Ohhh my, just thinking of the money to be made here makes my butt hurt. It's a good hurt. Do let me be perfectly clear on that last commentary: I do not wish to be anything like L. Ron, who had more or less the same idea, along with his countless predecessors down thru the ages. These charlatans made money from religion unethically, deceptively. The way we would do this is to market absolutely truthfully: as a service for those who are atheists but want euphoric religious experiences. We would make it clear right up front that this is not about some random deity, and we do not solicit donations. The client must supply her own random deity. We arrange communication with that or them.
This idea is rather more analogous to the office of a chiropractor, except without all the odd notions of aligning the body to focus energy and releasing tension and all that. This would be absolutely truthful: we discovered the part of the brain that is responsible for the epiphany phenomenon, and we think we can cause one in you. A hundred bucks, hop in the chair. spike From hkeithhenson at gmail.com Fri Feb 12 19:36:07 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 12 Feb 2010 12:36:07 -0700 Subject: [ExI] skyhook elevator Message-ID: On Fri, Feb 12, 2010 at 5:00 AM, "spike" wrote: >> ...On Behalf Of Damien Broderick >> Subject: [ExI] skyhook elevator >> >> It occurs to me (can't recall ever seeing this discussed, >> although it must be an ancient sub-topic of skyhook dynamics) >> that as your elevator climbs or is shoved up the thread, >> you'd not only be pressed against the floor but also against >> the west wall. Maybe you wouldn't notice if the trip took a >> couple of days, but you're going from rest to 11,000 km/hr. >> Is that especially noticeable? What say the space gurus? >> >> Damien Broderick > > > Coriolis effect sounds like what you are describing, and it would be durn > near negligible. I will calculate it if you wish. To give you an idea by > using only numbers in my head and single digit BOTECs, geo is about 36000 km > from the surface as I recall so add 6300 km earth radius and that's close > enough to about 40,000 km so the circumference of the orbit is about 6 and > some change times that, so 250000 km in 24 hrs, so you accelerate to 10000 > km per hour or about 3 km per second or so. > > How long do you guess it would take to haul you up to GEO? A few hours? > Let's say 10. To accelerate 3 km per second eastward in 10 hrs would take > about 0.1 meters per second squared, or about a 100th of a G. The elevator passengers would > scarcely notice. > > It is proportional of course. If you theorize they get there in 1 hour, > then the Coriolis component is about a tenth of a G, but if they get all the > way to GEO in an hour, there is some serious upward velocity involved. > > spike Spike got it right. Some years ago I spent some effort on this in the context of a story and wrote a paper for an ESA conference from that work. I analyzed a driven endless loop cable moving about 1000 mph--which will get you to GEO in 22 hours. Lifting 100 tons per hour it took 1.5 GW. While the Coriolis effect isn't noticeable to a passenger, it provides plenty of force to keep the up and down cables well separated. The acceleration to 3 km/sec at GEO (from under 1/2 km/sec on the surface) is "free." The cable leans west from the rotation to the east and extracts energy from the rotation of the earth, slowing the earth down by an extremely small amount. Importing more than you are exporting would have the opposite effect. People opposed to extracting rotational energy from the earth could use the slogan "Conserve Angular Momentum!" Keith PS. Lunar elevators really can be made of dental floss. From cluebcke at yahoo.com Fri Feb 12 18:28:47 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Fri, 12 Feb 2010 10:28:47 -0800 (PST) Subject: [ExI] skyhook elevator In-Reply-To: References: <4B748FC1.8090001@satx.rr.com> Message-ID: <887255.3523.qm@web111208.mail.gq1.yahoo.com> > How long do you guess it would take to haul you up to GEO? A few hours? Let's say 10. From what I've read it'll be closer to 3 days. Even that view could get tiresome after three days on the bus.
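A quick sanity check on the numbers traded in this thread, sketched in Python. The constants are standard textbook values; the 1, 10, 22 and 72 hour trip times are just the assumptions the posters above used, not engineering specs:

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
OMEGA = 7.292e-5     # Earth's sidereal rotation rate, rad/s
R_EARTH = 6.378e6    # equatorial radius, m
R_GEO = 4.216e7      # geostationary orbital radius, m
G0 = 9.81            # standard gravity, m/s^2

def coriolis_g(trip_hours):
    """Sideways (eastward) load on a climber ascending at constant
    radial speed: the Coriolis term a = 2 * omega * v_radial."""
    v_up = (R_GEO - R_EARTH) / (trip_hours * 3600.0)
    return 2.0 * OMEGA * v_up / G0

def hoist_power_gw(tons_per_hour):
    """Ideal (lossless) power to lift mass continuously from the
    equator to GEO: gravitational potential energy gain plus the
    kinetic energy gain from co-rotating at a larger radius."""
    mdot = tons_per_hour * 1000.0 / 3600.0                    # kg/s
    de_grav = MU / R_EARTH - MU / R_GEO                       # J/kg
    de_kin = 0.5 * ((OMEGA * R_GEO) ** 2 - (OMEGA * R_EARTH) ** 2)
    return mdot * (de_grav + de_kin) / 1e9

for hours in (1, 10, 22, 72):
    print(f"{hours:3d} h to GEO: sideways load ~ {coriolis_g(hours):.4f} g")
print(f"Lifting 100 t/h to GEO: ~ {hoist_power_gw(100):.2f} GW ideal")

This prints roughly 0.15 g for a 1-hour trip, 0.015 g for 10 hours, 0.007 g for Keith's 22 hours and 0.002 g for a 3-day trip, plus about 1.6 GW for 100 tons per hour, close to Keith's 1.5 GW figure. Spike's estimate divides the total eastward delta-v by the trip time, which comes out at half the instantaneous Coriolis term 2*omega*v; either way the sideways load is far too small for passengers to notice on any trip of several hours or more.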
From spike66 at att.net Fri Feb 12 19:56:25 2010 From: spike66 at att.net (spike) Date: Fri, 12 Feb 2010 11:56:25 -0800 Subject: Re: [ExI] better self-transcendence through selective brain damage In-Reply-To: References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Message-ID: <7A7A0CB2766A4C879426F3F66CD94F0C@spike> > ...On Behalf Of spike ... > > ...I do not wish to be anything like L. Ron...made money from religion > unethically, deceptively. The way we would do this is to > market absolutely truthfully: as a service for those who are > atheists but want euphoric religious experiences... spike Just after I hit send, I recognized the fatal flaw in this argument. Did you spot it? There are areas of our lives in which we have become so accustomed to spin and outright deception that we do not even know what the truth sounds like. If told the actual truth, we naturally assume some kind of deception. Analogy: consider a case where a cop says "Move along citizens, there is nothing to see here." Citizens: Where? Cop: Here. Citizens: What? Cop: Nothing. Citizens: But there is nothing here! Cop: Exactly, just what I said. Citizens: All right, so what's the catch, officer?... etc. Religion and love are two areas where the actual truth is really never uttered, or if so it fails so spectacularly that it isn't attempted a second time. If we could stimulate the religion center of the brain, and tell the client or patient *exactly* what is being done, and why, and the expected outcome, that would represent a true scientific and ethical breakthrough.
spike From lacertilian at gmail.com Fri Feb 12 21:05:44 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 12 Feb 2010 13:05:44 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <4e3a29501002111638s71abe067va2e2b213b5ef708f@mail.gmail.com> References: <896407.10281.qm@web36503.mail.mud.yahoo.com> <62c14241002111632m53d7fdafvd57cdf9b8f325006@mail.gmail.com> <4e3a29501002111638s71abe067va2e2b213b5ef708f@mail.gmail.com> Message-ID: Will Steinberg : > In stride with what Mike just said, could we perhaps (since most of us seem > to agree) discuss the actually important notions of semiotics and > computability, instead of more pointless antiswobian banter? Apparently not. Being the guy who started the thread to begin with, I would theoretically be all over such a discussion. I already said all I could in my first post, though. If anyone else wants to revive that line of reasoning, I would certainly be with them. (From what I recall, I was mainly talking about the inextricability of any one branch of semiotics, say semantics, from any other, such as syntax.) From spike66 at att.net Fri Feb 12 21:39:02 2010 From: spike66 at att.net (spike) Date: Fri, 12 Feb 2010 13:39:02 -0800 Subject: Re: [ExI] better self-transcendence through selective brain damage In-Reply-To: <7A7A0CB2766A4C879426F3F66CD94F0C@spike> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> Message-ID: <87670FEF1360437981B4B84931C5FF94@spike> > ...On Behalf Of spike > Subject: Re: [ExI] better self-transcendence through selective brain damage > > > ...On Behalf Of spike > ... > There are areas of our lives in > which we have become so accustomed to spin and outright > deception that we do not even know what the truth sounds > like... Religion and love are two areas where the actual truth is > really never uttered... spike For instance, imagine giving your sweetheart a Valentine that says: Sweetheart, the depth of my love for you is equal to that which is induced by several milligrams of endorphin-precursor pheromones which are responsible for this class of emotions in humans. I mean it from the bottom of my brain. This message might be technically "true" however your ass is technically "dead meat." Your sperm will never get the opportunity to express their right to life in this manner. Getting back to religion, I am intrigued by the notion of being able to advertise something like Faith-R-Us: We do not believe your religion, but for a modest fee, we can help you believe in it with ever greater fervor. spike From eric at m056832107.syzygy.com Fri Feb 12 22:10:46 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 12 Feb 2010 22:10:46 -0000 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <87670FEF1360437981B4B84931C5FF94@spike> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> Message-ID: <20100212221046.5.qmail@syzygy.com> Spike writes: >For instance, imagine giving your sweetheart a Valentine that says: > >Sweetheart, the depth of my love for you is equal to that which is induced >by several milligrams of endorphin-precursor pheromones which are >responsible for this class of emotions in humans. I mean it from the bottom >of my brain.
Once again, xkcd has this covered, with a comic about overly scientific valentines: http://xkcd.com/701/ -eric From spike66 at att.net Fri Feb 12 22:28:47 2010 From: spike66 at att.net (spike) Date: Fri, 12 Feb 2010 14:28:47 -0800 Subject: [ExI] better self-transcendence through selective brain damage References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> Message-ID: <0752CA13909A46049BF86B410F43711C@spike> > ... > Getting back to religion, I am intrigued by the notion of > being able to advertise something like > > Faith-R-Us: We do not believe your religion, but for a > modest fee, we can help you believe in it with ever greater fervor. > > spike Last reply to my own comment, then I need to run, and will be away for a few. It is clear that entire religions can be built around some particular psychological phenomenon. Two examples: much of that sector of modern Christianity that has the notion of born-again-ism can be explained by the poorly understood phenomenon sometimes called the epiphany experience. The Apostle Paul may have been describing it in the book of Acts chapter 9: http://www.biblegateway.com/passage/?search=Acts+9&version=kjv Modern science recognizes the phenomenon of deja vu as a signal path abnormality that causes the brain to perceive information coming from the senses as coming from the memory. The Hindu religion explains deja vu by theorizing shadow memories from previous lives. Monty Python take: http://www.youtube.com/watch?v=QWKdokcvM7A It looks to me like if we find how to stimulate the right neural centers in the brain, we should be able to induce any religious experience we want, while being perfectly honest, scientific and ethical about it. And make a cubic buttload of money of course. spike From cluebcke at yahoo.com Fri Feb 12 21:24:31 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Fri, 12 Feb 2010 13:24:31 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence Message-ID: <780009.58882.qm@web111203.mail.gq1.yahoo.com> I was wondering, given the lively back and forth I've seen on this list, whether the participants are agreed on the meanings of the terms "consciousness" and "intelligence". For example, were someone to posit that you cannot have consciousness without intelligence, are the meanings of those two terms widely agreed on? I don't mean to be presumptuous and the question is not intended to be sarcastic or ironic, but I have wondered, reading the ongoing debates about matters such as whether a computer (or a computer program) can have "intelligence" or "consciousness", whether the people debating various aspects of those questions are actually in agreement on the terms. If there are commonly-agreed-upon definitions of these terms, I'd appreciate it if somebody could provide me references. If not...well, that might explain the apparent imperviousness of certain positions to apparently well-formed critiques. Regards, Chris Luebcke From lacertilian at gmail.com Fri Feb 12 22:42:13 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 12 Feb 2010 14:42:13 -0800 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <780009.58882.qm@web111203.mail.gq1.yahoo.com> References: <780009.58882.qm@web111203.mail.gq1.yahoo.com> Message-ID: Christopher Luebcke : > I was wondering, given the lively back and forth I've seen on this list, whether the participants are agreed on the meanings of the terms "consciousness" and "intelligence".
I've been bandying about my own personal definition of intelligence, at least, for something on the order of two weeks. Below is a segment of one of my posts from eight days ago. It is startlingly apropos. Spencer Campbell : > Stefano Vaj : >> ... very poorly defined Aristotelic essences would per se exist >> corresponding to the symbols "mind", "consciousness", "intelligence" ... > > Actually, I gave a fairly rigorous definition for intelligence in an > earlier message. I've refined it since then: > > The intelligence of a given system is inversely proportional to the > average action (time * work) which must be expended before the system > achieves a given purpose, assuming that it began in a state as far > away as possible from that purpose. > > (As I said before, this definition won't work unless you assume an > arbitrary purpose for the system in question. Purposes are roughly > equivalent to attractors here, but the system may itself be part of a > larger system, like us. Humans are tricky: the easiest solution is to > say they swap purposes many times a day, which means their measured > intelligence would change depending on what they're currently doing. > Which is consistent with observed reality.) > > I can't give similarly precise definitions for "mind" or > consciousness, and I wouldn't be able to describe the latter at all. > Tentatively, I think consciousness is devoid of measurable qualities. > This would make it impossible to prove its existence, which to my mind > is a pretty solid argument for its nonexistence. Nevertheless, we talk > about it all the time, throughout history and in every culture. So > even if it doesn't exist, it seems reasonable to assume that it is at > least meaningful to think about. From ablainey at aol.com Sat Feb 13 02:43:27 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Fri, 12 Feb 2010 21:43:27 -0500 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: References: <780009.58882.qm@web111203.mail.gq1.yahoo.com> Message-ID: <8CC7A6D7452E660-E28-6F3C@webmail-d019.sysops.aol.com> Hmm, my own personal 'vox pop' definition of intelligence is simple. Given two known facts, intelligence allows for a third, derived fact to be created/known. The level of intelligence can be measured by how far apart the initial facts are, and also by the leap to, and level of confidence in, this third fact. Consciousness can partly be defined by the self-awareness of the above. For this, 'fact' is flexible and can be thought of in literal terms or as sensory input.
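Spencer's action metric quoted above is concrete enough to run as code. Here is a toy sketch in Python; the one-dimensional world, the distance-moved proxy for "work", and the single fixed purpose are all illustrative assumptions of mine, not anything Spencer specified:

import random

def action_to_goal(policy, start, goal, max_steps=10_000):
    """Run a policy until it reaches the goal; return the elapsed
    'action' as steps taken * total distance moved (crude time * work)."""
    pos, work, steps = start, 0, 0
    while pos != goal and steps < max_steps:
        move = policy(pos, goal)
        work += abs(move)
        pos += move
        steps += 1
    return steps * work if pos == goal else float("inf")

# Two toy 'systems' given the same purpose: reach the goal coordinate.
random_walker = lambda pos, goal: random.choice((-1, 1))
homing_agent = lambda pos, goal: 1 if goal > pos else -1

random.seed(1)
for name, policy in (("random walker", random_walker),
                     ("homing agent", homing_agent)):
    runs = [action_to_goal(policy, start=0, goal=50) for _ in range(20)]
    mean_action = sum(runs) / len(runs)
    print(f"{name}: mean action = {mean_action:.0f}, "
          f"intelligence ~ {1.0 / mean_action:.2e}")

The homing agent always spends an action of 50 * 50 = 2500, so it scores 4e-04; the random walker usually fails to arrive within the step budget, so its mean action diverges and its score collapses toward zero. As Spencer notes, the number is only meaningful relative to the assumed purpose; change the goal and the ranking can change.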
From kanzure at gmail.com Sat Feb 13 03:42:56 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 12 Feb 2010 21:42:56 -0600 Subject: [ExI] Fwd: [Comp-neuro] Blue Brain is hiring a Postdoctoral Researcher in Neuron Modeling In-Reply-To: References: Message-ID: <55ad6af71002121942ka96f000yae310cd06dce8a74@mail.gmail.com> ---------- Forwarded message ---------- From: Sean Hill Date: Fri, Feb 12, 2010 at 6:06 AM Subject: [Comp-neuro] Blue Brain is hiring a Postdoctoral Researcher in Neuron Modeling To: comp-neuro at neuroinf.org Job Description: Postdoctoral Researcher in Neuron Modeling The Blue Brain Project, headquartered in Lausanne, Switzerland, is an international research venture to reverse-engineer the brain and enable next-generation fundamental and medical research through simulation. BBP is now seeking a Postdoctoral Researcher in Neuron Modeling, for immediate hire, to strengthen the project's computational neuroscience team and to prepare it for the next steps of growth. The primary objective for this position is to contribute to the ongoing large-scale detailed neuron modeling and validation efforts and to advance the model generation and validation of the full diversity of neuron electrical properties and dendritic integration. The scientific leadership to co-supervise computational neuroscience students is expected, as is the software expertise to interface with the technical teams. The position will involve a close interaction between the electrophysiology lab and computer simulations. Detailed Requirements: * PhD in the field of computational neuroscience * expert knowledge in NEURON and multi-compartment conductance-based modeling * expert knowledge in whole cell electrophysiology, ion channel experiments and models * expert knowledge in Python and Matlab * profound knowledge of model specification languages such as NeuroML * profound knowledge in other programming languages (C++) and parallel computing is of advantage * 'can-do'
- a "can-do" attitude for pragmatic prototypes that accompany the global model building and validation strategy
- co-supervision of PhD students
- fluent written and spoken English
What we offer:
- An internationally visible and rising project successfully connecting the demanding challenges of research with industry-strength solutions
- Supervision of research projects and publication opportunities
- A young, dynamic, inter-disciplinary, and international working environment
- Competitive salary
Interested applicants please send CV, 3 references and a statement of research interests to: Felix Schuermann (felix.schuermann at epfl.ch) -- Sean Hill, Ph.D. Blue Brain Project Project Manager for Computational Neuroscience Brain Mind Institute EPFL - Station 15 CH-1015 Lausanne Switzerland Tel +41 21 693.96 78 Fax +41 21 693.18 00 sean.hill at epfl.ch -- - Bryan http://heybryan.org/ 1 512 203 0507 From dharris at livelib.com Sat Feb 13 10:37:28 2010 From: dharris at livelib.com (David C. Harris) Date: Sat, 13 Feb 2010 02:37:28 -0800 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <0752CA13909A46049BF86B410F43711C@spike> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <0752CA13909A46049BF86B410F43711C@spike> Message-ID: <4B7680E8.8070702@livelib.com> spike wrote: > > we should be able to induce any religious experience we want, > while being perfectly honest, scientific and ethical about it. > > And make a cubic buttload of money of course. > I trust you meant "a boatload of money"? Much more pleasant way to carry the stuff. - David From cluebcke at yahoo.com Fri Feb 12 22:25:33 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Fri, 12 Feb 2010 14:25:33 -0800 (PST) Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <87670FEF1360437981B4B84931C5FF94@spike> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> Message-ID: <811542.89157.qm@web111209.mail.gq1.yahoo.com> > Faith-R-Us: We do not believe your religion, but for a modest fee, we can help you believe in it with ever greater fervor. Could there be something in the contract where the customer agrees to refrain from voting until the effect wears off? ----- Original Message ---- From: spike To: ExI chat list Sent: Fri, February 12, 2010 1:39:02 PM Subject: Re: [ExI] better self-transcendence through selective brain damage > ...On Behalf Of spike > Subject: Re: [ExI] better self-transcendence through selective brain damage > > > ...On Behalf Of spike > ... > There are areas of our lives in > which we have become so accustomed to spin and outright > deception that we do not even know what the truth sounds > like... Religion and love are two areas where the actual truth is > really never uttered... spike For instance, imagine giving your sweetheart a Valentine that says: Sweetheart, the depth of my love for you is equal to that which is induced by several milligrams of endorphin-precursor pheromones which are responsible for this class of emotions in humans. I mean it from the bottom of my brain.
This message might be technically "true"; however, your ass is technically "dead meat." Your sperm will never get the opportunity to express their right to life in this manner. Getting back to religion, I am intrigued by the notion of being able to advertise something like Faith-R-Us: We do not believe your religion, but for a modest fee, we can help you believe in it with ever greater fervor. spike From gts_2000 at yahoo.com Sat Feb 13 18:42:05 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 10:42:05 -0800 (PST) Subject: [ExI] evolution of consciousness Message-ID: <810447.49620.qm@web36507.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: > A computer model of the brain is made that controls a body, > i.e. a robot. The robot will behave exactly like a human. Such a robot may be made, yes. > Moreover, it will behave exactly like a human due to an isomorphism > between the structure and function of the brain and the structure and > function of the computer model, since that is what a model is. Now, you > claim that this robot would lack consciousness. I never made that claim. In fact I don't recall that you and I ever discussed robots until now. I suppose that you've made this claim above on my behalf. > This means that there is nothing about the intelligent behaviour of the > human that is affected by consciousness. For if consciousness were a > separate thing that affected behaviour, there would be some deficit in > behaviour if you reproduced the functional relationship between brain > components while leaving out the consciousness. Therefore, consciousness > must be epiphenomenal. You might have said that you rejected > epiphenomenalism, but you cannot do so consistently. No, I reject epiphenomenalism consistently. In the *actual* thought experiment that you proposed, in which the human patient presented with a lesion in Wernicke's area causing a semantic deficit and in which the surgeon used p-neurons, the patient DID suffer from a deficit after the surgery. And that deficit was due precisely to the fact that he would lack the experience of his own understanding, which would in turn affect his behavior. I.e., epiphenomenalism is false. This is why I stated multiple times that the surgeon would need to keep working until he finally would get the patient's behavior right. In the end his patient would pass the Turing test yet still have no conscious understanding of words. > The only way you can consistently maintain your position > that computers can't reproduce consciousness is to say that they > can't reproduce intelligence either. Not so. > If you don't agree with this you must explain why I am wrong when I > point out the self-contradictions that zombies would lead to, and you > simply avoid doing this, I just don't see the contradictions that you see, Stathis. Let me ask you this: your general position seems to be that weak AI is false; that if weak AI is possible then strong AI must also be possible, because the distinction between weak and strong is false and anything that passes the Turing test must have strong AI. Is that your position?
-gts From stefano.vaj at gmail.com Sat Feb 13 19:52:58 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Feb 2010 20:52:58 +0100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <780009.58882.qm@web111203.mail.gq1.yahoo.com> References: <780009.58882.qm@web111203.mail.gq1.yahoo.com> Message-ID: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> On 12 February 2010 22:24, Christopher Luebcke wrote: > I was wondering, given the lively back and forth I've seen on this list, whether the participants are agreed on the meanings of the terms "consciousness" and "intelligence". For example, were someone to posit that you cannot have consciousness without intelligence, are the meanings of those two terms widely agreed on? I do not think so. This may be the crux of the problem. Intelligence comes easier (btw, even stupid people are sometimes conscious...). Consciousness would be easy enough as well, but only as long as you do not charge it with some transcendental meaning having to do with self-perception of the self, etc. -- Stefano Vaj From nebathenemi at yahoo.co.uk Sat Feb 13 20:34:13 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sat, 13 Feb 2010 20:34:13 +0000 (GMT) Subject: [ExI] Glacier Geoengineering In-Reply-To: Message-ID: <324067.57495.qm@web27005.mail.ukl.yahoo.com> Keith put numbers on pumping ocean water on to the East Antarctic plateau as follows: Interesting concept though. To put numbers on it, the area of the earth is ~5.1 x 10E14 square meters. 3/4 of that is water, so ~3.8 x 10E14 square meters. To lower the oceans by a meter in a year would require pumping at 1.21 x 10E7 cubic meters per second. 12,100,000 cubic meters per second. Hmm The flow of the Amazon is 219,000 cubic meters per second, so it would take 55 times the flow of the Amazon. Pumping it up some 3000 meters to the ice sheet would take considerable energy, P = Q*g*h/(pump efficiency). With efficiency 0.9: 1.21 x 10E7 * 9.8 * 3000 / 0.9 = 396 GW. 400 one-GW reactors would do the job. (Please check this number.) Keith First, a quick double-check - wikipedia reckons ~3.61 x 10E14 square meters of earth's surface is ocean, so pretty close to Keith's figure. I just plugged 3.61E14 / (60x60x24x365) to get 11,447,235 cubic meters per second, so within about 5% of Keith's figure. So, even with pumps converting 90% of energy going in to potential energy of the water, we're looking at the 350-400 GW range to do one meter/year. But do we really need to pump a whole meter in a year? 1993 to 2003 had an average sea level rise of 3.1mm/year. The IPCC estimates http://www.ipcc.ch/publications_and_data/ar4/syr/en/spms3.html come out at 1.1m to 6.4m higher in 2090-99 compared to 1980-99. So, say we start planning now and are ready to start the pumping in ten years - 2020. Then we need to do 1.1m to 6.4m over 80 years. Let's say 4m as a middling figure - then instead of 400 GW for 4 years, we can do 20 GW over 80 years. Sure, the power plants will need rebuilding more than once. I'm assuming we're going with nuclear to avoid shipping large quantities of fuel to the Antarctic, and to keep it soot-free and low-carbon (don't want to accelerate that Antarctic coastal glacier breakup!).
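Here is the whole calculation as a short Python sketch, for anyone who wants to re-run the numbers. The 3000 m lift, the 0.9 pump efficiency and the 4m-over-80-years scenario are the assumptions already stated above, and it uses the Wikipedia ocean area rather than Keith's:

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    OCEAN_AREA = 3.61e14       # m^2, the Wikipedia figure quoted above
    G = 9.8                    # m/s^2
    LIFT_HEIGHT = 3000.0       # m, up to the East Antarctic plateau
    PUMP_EFFICIENCY = 0.9

    def pump_power_gw(metres_per_year):
        # Power (GW) to pump enough seawater up to the plateau to offset
        # a given rate of sea level rise: P = Q * g * h / efficiency.
        flow = OCEAN_AREA * metres_per_year / SECONDS_PER_YEAR  # m^3/s
        watts = flow * G * LIFT_HEIGHT / PUMP_EFFICIENCY
        return watts / 1e9

    print(pump_power_gw(1.0))       # about 374 GW for a full meter per year
    print(pump_power_gw(4.0 / 80))  # about 19 GW for 4m spread over 80 years

That comes out at about 374 GW for a meter a year (Keith's 396 GW used the slightly larger 3.8 x 10E14 ocean area) and about 19 GW for the long-haul scenario, so the 350-400 GW and 20 GW figures above hold up.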
UK nuclear plants built since the 1970s have expected operating lives of 30-40 years: http://www.world-nuclear.org/info/inf84.html If we're being optimistic and following Sizewell B, you're getting 1.1GW for 40 years, but a couple of reactors on that table are "running at 70% indefinitely", so instead of 18 of these needing replacing once, we'll probably need 25 replacing once or twice. This will cost many tens of billions, but should keep pace with sea level rise and avoid the risk of losing places like London, New York, most of the Netherlands (worth a lot in terms of real estate) and the human misery of trying to resettle tens of millions from Bangladesh and low-lying areas around the Indian Ocean. Tom (enjoying geoengineering chat as a change from philosophical musings) From jrd1415 at gmail.com Sat Feb 13 21:02:48 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sat, 13 Feb 2010 14:02:48 -0700 Subject: [ExI] Semiotics and Computability In-Reply-To: <964516.40828.qm@web36504.mail.mud.yahoo.com> References: <964516.40828.qm@web36504.mail.mud.yahoo.com> Message-ID: I think maybe I've figured out where Gordon is coming from, so I'll attempt an explanation. Also, I find myself in agreement with him, partly. I think a substantial portion of the problem other posters have had with Gordon stems from an INCOMPLETE understanding of and consequently an INCOMPLETE addressing of his thesis. I'm gonna try to not get all flowery and elaborate here, just semi-sort-of bare bones. Gordon says he rejects the concept of mind-body dualism. Rather, he asserts a timeworn philosophical alternative, that of an inseparable unity, a mind-body unity if you will. Then when others propose a simulation of mind, Gordon objects, seeming to me, and logically, to be saying you can't get a faithful recreation of mind if you leave out the body part. The recent comments regarding amoeba consciousness and the nature of pain, combined with a years-long backlog of free-floating yet related bits, finally coalesced for me into an epiphany. And so I have come to agree with Gordon, partly. All the talk of neuron by neuron replacements is fine as far as it goes, but Gordon is reasonable in rejecting this -- though I wish he would have explained himself better -- based on the principle of incompleteness. Half a thing is not the thing. The 'body' part is missing. To faithfully reproduce the mind/persona/organism you have to reproduce the whole body, all the somatic cells, neural and non, with their particular and varied influences on the persona. At this point I will state, without elaboration, that I have come to believe that consciousness arises at the cellular level, and that any variant of consciousness in a highly complex multi-cellular organism -- in particular, its ultimate form in humans, of cognitive ability and an awareness of self and universe -- arises from a combination of somatic and cerebral consciousness. To make things worse -- again without elaboration -- it is difficult for me to avoid the further conclusion that the bulk of the phenomenon of consciousness comes from the contribution of the somatic cells. To soften this seemingly outrageous assertion -- that the God-like nature of man ... "What a piece of work is a man, how noble in reason, how infinite in faculties; in form and moving how express and admirable, in action how like an angel, in apprehension how like a god: the beauty of the world, the paragon of animals!"
...is more about the influence of gut, bone, blood, and sinew than brain -- let me remind you that the mammalian brain with all its glorious capability is a relatively recent add-on to the ancient partnership of sensory apparatus and the less glamorous support soma. That said, returning to the idea of an authentic simulation, I believe if the simulation includes the somatic contribution, thus comprehensively simulating both mind and body, that there is no reason the simulation won't fully and faithfully remanifest the original mind-body persona. What do you think, Gordon? Does this work for you, or no? Now I'll elaborate a little on how I got here. Some previously unconnected bits. Years ago, I chanced to wonder what it must be like to be a liver cell; to live in a world where all the glory was reserved for 'elitist' neural tissue. Cells are cells, and logically should be equal. Some conundrum there. I used to visualize the human body devoid of all but neural tissue -- remember the old plastic educational toy, the Visible Man? I would think of this -- the brain and the neural filigree extending out from it -- as the "real" person, and would demote the remainder to a lower-order, mechanistic, almost lifeless status. Same conundrum as above, but unrecognized at the time. Then I chanced upon Paul Pietsch's "Shufflebrain" http://www.indiana.edu/~pietsch/home.html where the author writes of memory and seemingly-deliberative behavior in single-celled organisms. From this I concluded that information processing need not be the exclusive province of multicellular neural tissue. Then Nova or the National Geographic channel produced a program about the microscopic world. They advertised it with a bit of video showing living paramecium, about three or four seconds' worth. I never saw the program, but I saw that video clip five or six times, and it had a huge impact. A paramecium swims along, impressively vigorous and vital in its movement, then it stops for a moment, deliberates (processes information?) and then heads off in a different direction. Call me a fool, call it anthropomorphic projection, but I swear I saw deliberation and intentionality. The scene I saw was a scene of life, and life is recognizable from just these features. Then Jef Albright posted re Nolipsism http://oscarhome.soc-sci.arizona.edu/ftp/PAPERS/Nolipsism.pdf where, on page three (lucky, since I haven't read the whole thing yet) the author, speaking of mental events, writes: "...having or feeling a pain is identified with a neurological event, but the pain itself is distinct from the having of the pain -- it is not an event." It was at this point that Gordon mentioned amoebas and pain, and all the bits fell into place. That pain is a scream from the distant somatic cells, conveyed by neural tissue, yes, but an example of the tangible distant assertiveness of non-neural tissue under assault. But this new notion of an active somatic consciousness had further implications. Biology is ***EVOLVED***. The evolutionary process takes place in an environment where ***ALL*** extant physical mechanisms are at play. So three and a half billion years ago, when bacterial life first appeared, any quirk of physical mechanism and morphology which might afford a selection advantage would have been evolutionarily selected and genetically preserved. It is on this basis that I conclude that deliberation (information processing) and intentionality emerged very early in biological evolution because of their clear survival advantage.
They emerged in bacteria, probably, and then over the next 3 billion years were subject to further refinement, because evolution never sleeps. Then eukaryotic cells emerged, single cells at first -- amoeba and paramecium -- followed, as we know, by the Cambrian explosion. Which led to macroscopic multi-cellular organisms -- humans among them -- creatures composed of the descendants of those first single-cell creatures, and bringing with them the advantages of cellular consciousness, even further refined by the unstinting influence of evolution. Enough embarrassment for one day. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From stefano.vaj at gmail.com Sat Feb 13 21:15:48 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Feb 2010 22:15:48 +0100 Subject: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <201002081818.o18IIX6U008758@andromeda.ziaspace.com> References: <201002081818.o18IIX6U008758@andromeda.ziaspace.com> Message-ID: <580930c21002131315u676a38f1y9f019fbe33850291@mail.gmail.com> On 8 February 2010 19:18, Max More wrote: > Damien, you seem to be suggesting ("Still, Max is quoted as saying") that > Hughes' "implication of antidemocratic or top-down bias" is understandable > because of my statement that "Democratic arrangements have no intrinsic > value; they have value only to the extent that they enable us to achieve > shared goals while protecting our freedom." If so, I don't understand how > you can say that. Saying that democratic arrangements (as they exist at any > particular time) have no intrinsic value is not in the least equivalent to > saying that authoritarian control is better. Should we not strive for > something better than the ugly system of democracy that currently exists? > Are authoritarian arrangements the only conceivable alternative? I have another remark. How come self-declared technoprogressives and socialists find themselves aligned in a ritual, mechanical defence of what Marx used to call the "board of directors of the bourgeoisie"? :-) Speaking of democracy more generally - a feel-good concept which has been used with so many different meanings - I am personally rather fond of the concept of "popular sovereignty", or rather "sovereignties": firstly in terms of collective self-determination (a principle having much to do with transhumanism, IMHO); secondly as it suggests that we have the freedom to adopt the norms and legal systems of our choice, rather than simply having to recognise or enforce a set of universal and eternal laws; thirdly because it implies political pluralism (meaning a radical wariness of dreams of world governments of any kind which would be entitled to ignore the willingness or not of a community to participate in them). All in all, this also sounds like the best bet for transhumanism, including in terms of "national Darwinism", which would strongly discourage the implementation of neoluddite policies even by governments who might be ideologically tempted by them.
-- Stefano Vaj From stefano.vaj at gmail.com Sat Feb 13 21:23:29 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Feb 2010 22:23:29 +0100 Subject: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <4B7059E4.1010402@satx.rr.com> References: <201002081818.o18IIX6U008758@andromeda.ziaspace.com> <4B7059E4.1010402@satx.rr.com> Message-ID: <580930c21002131323j13c21d41ma0f41cf3e9da4b7c@mail.gmail.com> On 8 February 2010 19:37, Damien Broderick wrote: > But as I said a moment ago in another post, your statement doesn't sound > like a ringing endorsement of the position that "liberal democracy is the > best path to betterment," which is the point Hughes is making about >>Humanism. Where he goes from there is questionable if not absurd *as a > generalization*--but there's certainly a technocratic, elitist tendency in a > lot of >H discourse I've read here over the last 15 years or so. One could contend, exactly from what I have always taken as James Hughes' position, that "liberal democracy" *is* an élitist system, only one where the circulation of the élites is very slow and limited, and where the selection criteria thereof are very debatable... :-) -- Stefano Vaj From stathisp at gmail.com Sat Feb 13 21:49:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 08:49:08 +1100 Subject: [ExI] evolution of consciousness In-Reply-To: <810447.49620.qm@web36507.mail.mud.yahoo.com> References: <810447.49620.qm@web36507.mail.mud.yahoo.com> Message-ID: On 14 February 2010 05:42, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >> A computer model of the brain is made that controls a body, >> i.e. a robot. The robot will behave exactly like a human. > > Such a robot may be made, yes. > >> Moreover, it will behave exactly like a human due to an isomorphism >> between the structure and function of the brain and the structure and >> function of the computer model, since that is what a model is. Now, you >> claim that this robot would lack consciousness. > > I never made that claim. In fact I don't recall that you and I ever discussed robots until now. I suppose that you've made this claim above on my behalf. You posted a link to a site that purported to show that not even a robot which had input from the real world would have understanding. Have you changed your mind about this? >> This means that there is nothing about the intelligent behaviour of the >> human that is affected by consciousness. For if consciousness were a >> separate thing that affected behaviour, there would be some deficit in >> behaviour if you reproduced the functional relationship between brain >> components while leaving out the consciousness. Therefore, consciousness >> must be epiphenomenal. You might have said that you rejected >> epiphenomenalism, but you cannot do so consistently. > > No, I reject epiphenomenalism consistently. In the *actual* thought experiment that you proposed, in which the human patient presented with a lesion in Wernicke's area causing a semantic deficit and in which the surgeon used p-neurons, the patient DID suffer from a deficit after the surgery. And that deficit was due precisely to the fact that he would lack the experience of his own understanding, which would in turn affect his behavior. I.e., epiphenomenalism is false. This is why I stated multiple times that the surgeon would need to keep working until he finally would get the patient's behavior right.
> In the end his patient would pass the Turing test yet still have no conscious understanding of words. The patient COULD NOT suffer from a deficit after the surgery. Otherwise you would be saying that both P and ~P are true, where P = "the p-neurons exactly reproduce the I/O behaviour of the biological neurons". >> The only way you can consistently maintain your position >> that computers can't reproduce consciousness is to say that they >> can't reproduce intelligence either. > > Not so. > >> If you don't agree with this you must explain why I am wrong when I >> point out the self-contradictions that zombies would lead to, and you >> simply avoid doing this, > > I just don't see the contradictions that you see, Stathis. > > Let me ask you this: your general position seems to be that weak AI is false; that if weak AI is possible then strong AI must also be possible, because the distinction between weak and strong is false and anything that passes the Turing test must have strong AI. Is that your position? Yes. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Feb 13 21:23:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 13:23:34 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> Message-ID: <405824.5794.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stefano Vaj wrote: >> I was wondering, given the lively back and forth I've >> seen on this list, whether the participants are agreed on >> the meanings of the terms "consciousness" and >> "intelligence". > I do not think so. This may be the crux of the problem. I think you have a point there. I wonder for example why some people here have trouble with my observation that amoebas (and most organisms on earth) have intelligence but no consciousness. It seems to me obvious that amoebas and other single-celled organisms have some intelligence: they can find food and procreate and so on. But because they lack nervous systems, it looks to me like these simple creatures live out their entire lives unconsciously. -gts From stathisp at gmail.com Sat Feb 13 22:04:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 09:04:12 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <405824.5794.qm@web36506.mail.mud.yahoo.com> References: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> <405824.5794.qm@web36506.mail.mud.yahoo.com> Message-ID: On 14 February 2010 08:23, Gordon Swobe wrote: > It seems to me obvious that amoebas and other single-celled organisms have some intelligence: they can find food and procreate and so on. But because they lack nervous systems, it looks to me like these simple creatures live out their entire lives unconsciously. What about flatworms? -- Stathis Papaioannou From thespike at satx.rr.com Sat Feb 13 22:08:58 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 13 Feb 2010 16:08:58 -0600 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: References: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> <405824.5794.qm@web36506.mail.mud.yahoo.com> Message-ID: <4B7722FA.6050100@satx.rr.com> On 2/13/2010 4:04 PM, Stathis Papaioannou wrote: > What about flatworms? They have a very one-dimensional consciousness.
From gts_2000 at yahoo.com Sat Feb 13 22:15:33 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 14:15:33 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <23758.31471.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/13/10, Jeff Davis wrote: > Gordon says he rejects the concept of mind-body dualism. Rather, he > asserts a timeworn philosophical alternative, that of an inseparable > unity, a mind-body unity if you will. Yes. > Then when others propose a simulation of mind, Gordon objects, seeming > to me, and logically, to be saying you can't get a faithful recreation > of mind if you leave out the body part. Yes. > And so I have come to agree with Gordon, partly. Be careful. My views haven't helped me win any popularity contests. :) > All the talk of neuron by neuron replacements is fine as > far as it goes, but Gordon is reasonable in rejecting this -- though > I wish he would have explained himself better -- based on the > principle of incompleteness. Half a thing is not the thing. > > The 'body' part is missing. To faithfully reproduce > the mind/persona/organism you have to reproduce the whole body, > all the somatic cells, neural and non, with their particular and > varied influences on the persona. Yes. I think we cannot extract the mind from the nervous system. In my view the mind exists as a *high-level physical feature* of the nervous system. Of interest to extropians, the mind does not exist as something separate from the brain, like software running on hardware. I see the human brain/mind as all hardware. Now at this point some people will think: "How could the mind exist as a physical feature of anything? Can't we make a real distinction between the mental and the physical?" Most people think this way, including even many philosophical materialists who claim not to. But it's only the dualistic voice of Descartes speaking to us from beyond the grave. We are his intellectual descendants. I finally shook off that Cartesian illusion and now the world makes a lot more sense. Unfortunately this world-view does not fit well with the extropian vision of uploading and so on. Oh well. > That said, returning to the idea of an authentic > simulation, I believe if the simulation includes the somatic > contribution, thus comprehensively simulating both mind and body, that > there is no reason the simulation won't fully and faithfully remanifest > the original mind-body persona. > > What do you think, Gordon? Does this work for you, or > no? I think we basically agree, though I need to know more about what you mean by "somatic contribution". As I stated at the outset, I have no objection to strong AI per se. I just don't think it can happen on the dualistic software/hardware model. I wonder also if what you've described here even qualifies as anything I would call a simulation. Thanks for the thoughtful post, Jeff. -gts From stefano.vaj at gmail.com Sat Feb 13 22:40:57 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Feb 2010 23:40:57 +0100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <4B7722FA.6050100@satx.rr.com> References: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> <405824.5794.qm@web36506.mail.mud.yahoo.com> <4B7722FA.6050100@satx.rr.com> Message-ID: <580930c21002131440h6e465c19l914d43d89343e96f@mail.gmail.com> On 13 February 2010 23:08, Damien Broderick wrote: > On 2/13/2010 4:04 PM, Stathis Papaioannou wrote: > >> What about flatworms?
> > They have a very one-dimensional consciousness. "Flat"worms... Shouldn't it be bi-dimensional? ;-) -- Stefano Vaj From gts_2000 at yahoo.com Sat Feb 13 22:42:06 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 14:42:06 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: Message-ID: <158383.1762.qm@web36504.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: > What about flatworms? I can't know what it's like to be a flatworm, assuming there is something it is like to be one, but clearly full-scale consciousness begins to appear somewhere on the evolutionary scale from flatworms to humans. I feel comfortable saying 1) humans have consciousness, 2) some other organisms with highly developed nervous systems almost certainly have consciousness (chimps, etc) and 3) simple organisms that completely lack nervous systems do not have it. -gts From stathisp at gmail.com Sat Feb 13 22:49:34 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 09:49:34 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <23758.31471.qm@web36506.mail.mud.yahoo.com> References: <23758.31471.qm@web36506.mail.mud.yahoo.com> Message-ID: On 14 February 2010 09:15, Gordon Swobe wrote: > Yes. I think we cannot extract the mind from the nervous system. In my view the mind exists as a *high-level physical feature* of the nervous system. Of interest to extropians, the mind does not exist as something separate from the brain, like software running on hardware. I see the human brain/mind as all hardware. Software exists separately from hardware in the same way as an architectural drawing exists separately from the building it depicts. > Now at this point some people will think: "How could the mind exist as a physical feature of anything? Can't we make a real distinction between the mental and the physical?" Most people think this way, including even many philosophical materialists who claim not to. But it's only the dualistic voice of Descartes speaking to us from beyond the grave. We are his intellectual descendants. > > I finally shook off that Cartesian illusion and now the world makes a lot more sense. Unfortunately this world-view does not fit well with the extropian vision of uploading and so on. Oh well. There is no real distinction between software and hardware. When you program a computer you make actual physical changes to it, and the "software" is just a scheme that you have in mind to help you make the right physical changes so that the hardware does what you want it to do. The computer is just dumb matter which has no understanding whatsoever of the program, the programmer, its own design, the existence of the world or anything else. Its parts follow the laws of physics but even this they don't understand: they just do it. Exactly the same is true of human brains. But when the hardware is set up just right, in a brain or a computer, it behaves in an intelligent manner, and intelligence from the point of view of the system displaying it is consciousness. 
-- Stathis Papaioannou From thespike at satx.rr.com Sat Feb 13 22:53:20 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 13 Feb 2010 16:53:20 -0600 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <580930c21002131440h6e465c19l914d43d89343e96f@mail.gmail.com> References: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> <405824.5794.qm@web36506.mail.mud.yahoo.com> <4B7722FA.6050100@satx.rr.com> <580930c21002131440h6e465c19l914d43d89343e96f@mail.gmail.com> Message-ID: <4B772D60.2060204@satx.rr.com> On 2/13/2010 4:40 PM, Stefano Vaj wrote: >> They have a very one-dimensional consciousness. > > "Flat"worms... Shouldn't it be bi-dimensional? ;-) Only just. Not enough for semantics, though, merely syntax, poor little syntagmatic things. From stathisp at gmail.com Sat Feb 13 22:57:55 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 09:57:55 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <964516.40828.qm@web36504.mail.mud.yahoo.com> Message-ID: On 14 February 2010 08:02, Jeff Davis wrote: > All the talk of neuron by neuron replacements is fine as far as it > goes, but Gordon is reasonable in rejecting this -- though I wish he > would have explained himself better -- based on the principle of > incompleteness. Half a thing is not the thing. But half a thing may still perform the function of the thing. > The 'body' part is missing. To faithfully reproduce the > mind/persona/organism you have to reproduce the whole body, all the > somatic cells, neural and non, with their particular and varied > influences on the persona. > > At this point I will state, without elaboration, that I have come to > believe that consciousness arises at the cellular level, and that any > variant of consciousness in a highly complex multi-cellular organism > -- in particular, its ultimate form in humans, of cognitive > ability and an awareness of self and universe -- arises from a > combination of somatic and cerebral consciousness. To make things > worse -- again without elaboration -- it is difficult for me to avoid > the further conclusion that the bulk of the phenomenon of > consciousness comes from the contribution of the somatic cells. To > soften this seemingly outrageous assertion -- that the God-like nature > of man ... > > "What a piece of work is a man, how noble in reason, how infinite in > faculties; in form and moving how express and admirable, in action how > like an angel, in apprehension how like a god: the beauty of the > world, the paragon of animals!" > > ...is more about the influence of gut, bone, blood, and sinew than > brain -- let me remind you that the mammalian brain with all its glorious > capability is a relatively recent add-on to the ancient partnership of > sensory apparatus and the less glamorous support soma. Then there would be a problem with the consciousness of people who have lost limbs or various internal organs. -- Stathis Papaioannou From stathisp at gmail.com Sat Feb 13 23:01:21 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 10:01:21 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <158383.1762.qm@web36504.mail.mud.yahoo.com> References: <158383.1762.qm@web36504.mail.mud.yahoo.com> Message-ID: On 14 February 2010 09:42, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> What about flatworms?
> > I can't know what it's like to be a flatworm, assuming there is something it is like to be one, but clearly full-scale consciousness begins to appear somewhere on the evolutionary scale from flatworms to humans. > > I feel comfortable saying 1) humans have consciousness, 2) some other organisms with highly developed nervous systems almost certainly have consciousness (chimps, etc) and 3) simple organisms that completely lack nervous systems do not have it. It's the same with computers. There aren't any yet which match the processing ability of a mouse brain, let alone that of a chimp or human. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Feb 13 23:21:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 15:21:36 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <620471.11317.qm@web36505.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: > There is no real distinction between software and hardware. > When you program a computer you make actual physical changes to it, > and the "software" is just a scheme that you have in mind to help > you make the right physical changes so that the hardware does what you > want it to do. Okay, I'll go along with that. > The computer is just dumb matter which has no understanding > whatsoever of the program, the programmer, its own design, > the existence of the world or anything else. Right, that's what I've been trying to tell you! :) > Its parts follow the laws of physics but even this they don't > understand: they just do it. Right. Your logic looks perfect so far. > Exactly the same is true of human brains. I can't speak for you, but it sure seems like my brain has conscious understanding of things. And according to your logic above, computers do not have this understanding. So then if computers don't have it, but my brain does, then logic forces me to conclude that my brain does not equal a computer. -gts From stefano.vaj at gmail.com Sat Feb 13 23:58:54 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 14 Feb 2010 00:58:54 +0100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <158383.1762.qm@web36504.mail.mud.yahoo.com> References: <158383.1762.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c21002131558m9e6df01w7fb9dbc96ffbe339@mail.gmail.com> On 13 February 2010 23:42, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: >> What about flatworms? > > I can't know what it's like to be a flatworm Nor can you know what it is like to be a woman, a Chinese person, an octogenarian, or for that matter an identical twin of yourself. What you can tell, however, is that they are in principle able to succeed in a Turing test. ;-) Do you really need to make further assumptions as to hypothetical non-phenomenal features shared by them but not by other entities who may be able to pass it just as well? I am not trying to persuade you, as others seem unrelentingly to be doing, that a computer can be "conscious". I am simply insisting that any such immaterial feature that can be "projected" on other living organisms, or on the restricted subset thereof represented by alert, non-infant, educated, Turing-test qualified human beings, can be with identical plausibility projected on anything else exhibiting a good enough analogy of the interactions you can have with them.
-- Stefano Vaj From gts_2000 at yahoo.com Sun Feb 14 00:02:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 16:02:02 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: Message-ID: <789829.39304.qm@web36502.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: > > I feel comfortable saying 1) humans have > consciousness, 2) some other organisms with highly developed > nervous systems almost certainly have consciousness (chimps, > etc) and 3) simple organisms that completely lack nervous > systems do not have it. > > It's the same with computers. There aren't any yet which > match the processing ability of a mouse brain, let alone that of a > chimp or human. If and when someone builds a human-like nervous system in a box with sense organs, I'll call that box conscious but I won't call it a digital computer. Neither will you, because it won't be one. -gts From gts_2000 at yahoo.com Sun Feb 14 00:37:47 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 16:37:47 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <580930c21002131558m9e6df01w7fb9dbc96ffbe339@mail.gmail.com> Message-ID: <502441.97360.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stefano Vaj wrote: >> I can't know what it's like to be a flatworm > > Nor can you know what it is like to be a woman, a Chinese person, > an octogenarian, or for that matter an identical twin of > yourself. Actually I can infer a great deal about what it's like to be them. I see that they have nervous systems very much like mine, eyes and skin and noses and ears very much like mine, and so on, and from these biological facts *in combination* with their intelligent communications I can know with near certainty that they have consciousness very much like mine. I cannot do the same for flatworms or computers or amoebas. > Do you really need to make further assumptions as to > hypothetical non-phenomenal features shared by them but not by other > entities who may be able to pass it just as well? As you probably know by now if you've read my posts, I think the Turing test will give false positives for certain computers that we may develop in the not too distant future. By "false positives" I mean that weak AI computers will pass the test, but that the test does not measure the existence of consciousness or subjective experience or intentionality; it does not and cannot test for strong AI. The Turing test will give false positives for strong AI. > I am simply insisting that any such immaterial feature that can be > "projected" on other living organisms, or on the restricted subset > thereof represented by alert, non-infant, educated, Turing-test > qualified human beings can be with identical plausibility projected on > anything else exhibiting a good enough analogy of the interactions you > can have with them. Again, the Turing test does not measure the strong AI attributes that concern me. It is true that I "project" consciousness onto other humans but I don't do this based solely on their ability to pass the Turing test or on their interactions with me. As above, I do it based also on their physiology.
-gts From jrd1415 at gmail.com Sun Feb 14 00:38:48 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sat, 13 Feb 2010 17:38:48 -0700 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <964516.40828.qm@web36504.mail.mud.yahoo.com> Message-ID: On Sat, Feb 13, 2010 at 3:57 PM, Stathis Papaioannou wrote: > On 14 February 2010 08:02, Jeff Davis wrote: >> Half a thing is not the thing. > But half a thing may still perform the function of the thing. If the nature of the half thing is profoundly different from that of the whole, the nature of its performance may also be radically altered/diminished. >> ...the God-like nature of man is more about the influence >> of gut, bone, blood, and sinew, than brain... > Then there would be a problem with the > consciousness of people who > have lost limbs or various internal organs. Regarding the loss of limbs, kidney, gall bladder, stomach, lengths of intestine, a lung, etc. ... I agree the persona and consciousness appear unaltered. That said, I have heard of people who lose a portion of their visual field, but are unaware of the alteration in their consciousness. However, that may be the result of brain damage, not somatic damage. But mostly I was thinking of basic human impulses and feelings: hunger and the urge to feed, the reproductive impulse, acquisitiveness (greed?), the various behaviors arising from the instinct for survival: fight or flight, fear, anger, hatred, dominance, submission, anxiety, depression, shock. These things are primitive, and certainly pre-date the features of mammalian (i.e. higher) brain function. I view these impulses as the foundation AND BULK of animal and human behavior, and gut-centered, with higher-level mental activity a more recent development. I wonder if feelings in the gut aren't in fact real -- like pain -- and our awareness of them just an additional fact, a mental fact. So what would consciousness be, what would a mind be without this foundational context built up over three and a half billion years? That's why I think the gut (soma) may be critical in defining mind. But, to be honest with you, I feel way out on a limb here. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From gts_2000 at yahoo.com Sun Feb 14 00:49:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 16:49:20 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <431764.52879.qm@web36501.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: > But when the hardware is set up just right, in a brain or a computer, it > behaves in an intelligent manner, and intelligence from the point of view > of the system displaying it is consciousness. My watch tells the time intelligently. Does it therefore have consciousness from its "point of view"? I don't think so. I don't think my watch has a point of view. But then maybe it isn't set up right.
:) -gts From stefano.vaj at gmail.com Sun Feb 14 01:17:13 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 14 Feb 2010 02:17:13 +0100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <502441.97360.qm@web36506.mail.mud.yahoo.com> References: <580930c21002131558m9e6df01w7fb9dbc96ffbe339@mail.gmail.com> <502441.97360.qm@web36506.mail.mud.yahoo.com> Message-ID: <580930c21002131717j6f50fa59k1e39bbe3cb0aee3b@mail.gmail.com> On 14 February 2010 01:37, Gordon Swobe wrote: > It is true that I "project" consciousness onto other humans but I don't do this based solely on their ability to pass the Turing test or on their interactions with me. As above, I do it based also on their physiology. If the ghost of one's wife, or her upload on a digital computer, manages to persuade somebody that she is his wife, one will identify her as his wife, irrespective of the physiology, if any. Sociologically, there is nothing else to say. Sure, somebody may refuse this conclusion for philosophical and ultimately arbitrary reasons, on the same basis that he may contend that she is not the same person anymore after seven years, because on average all her atoms have been replaced over such a period. But this would end up being nothing else than a very idiosyncratic POV. -- Stefano Vaj From stathisp at gmail.com Sun Feb 14 03:30:22 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 14:30:22 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <964516.40828.qm@web36504.mail.mud.yahoo.com> Message-ID: On 14 February 2010 11:38, Jeff Davis wrote: > On Sat, Feb 13, 2010 at 3:57 PM, Stathis Papaioannou wrote: >> On 14 February 2010 08:02, Jeff Davis wrote: > >>> Half a thing is not the thing. > >> But half a thing may still perform the function of the thing. > > If the nature of the half thing is profoundly different from that of > the whole, the nature of its performance may also be radically > altered/diminished. Yes it may, but it depends on what the function is. An artificial joint is not *identical* with a natural joint, but it can function just as well. >>> ...the God-like nature of man is more about the influence >>> of gut, bone, blood, and sinew, than brain... >> Then there would be a problem with the >> consciousness of people who >> have lost limbs or various internal organs. > > Regarding the loss of limbs, kidney, gall bladder, stomach, lengths of > intestine, a lung, etc. ... I agree the persona and consciousness > appear unaltered. That said, I have heard of people who lose a > portion of their visual field, but are unaware of the alteration in > their consciousness. However, that may be the result of brain damage, > not somatic damage. Some patients with damage to the visual cortex are completely blind but insist that they can see normally, even while they stagger around bumping into things. They are not lying or even "in denial", they honestly believe it. That is, they are delusional, and this is an example of a type of delusional disorder called anosognosia (meaning inability to recognise that you have an illness). Interestingly, it usually doesn't happen if the lesion is in the eye or optic nerve. > But mostly I was thinking of basic human impulses and feelings: > hunger and the urge to feed, the reproductive impulse, > acquisitiveness (greed?), the various behaviors arising from the > instinct for survival: fight or flight, fear, anger, hatred, > dominance, submission, anxiety, depression, shock.
These things are > primitive, and certainly pre-date the features of mammalian (i.e. > higher) brain function. I view these impulses as the foundation AND > BULK of animal and human behavior, and gut-centered, with higher-level > mental activity a more recent development. I wonder if feelings in > the gut aren't in fact real -- like pain -- and our awareness of them > just an additional fact, a mental fact. > > So what would consciousness be, what would a mind be without this > foundational context built up over three and a half billion years? > That's why I think the gut (soma) may be critical in defining mind. > > But, to be honest with you, I feel way out on a limb here. There is no doubt that many of our feelings are based in the body, but they are *felt* in the brain. If you could reproduce the inputs the brain receives from the body, you would reproduce the associated feelings. -- Stathis Papaioannou From stathisp at gmail.com Sun Feb 14 03:41:45 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 14:41:45 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <431764.52879.qm@web36501.mail.mud.yahoo.com> References: <431764.52879.qm@web36501.mail.mud.yahoo.com> Message-ID: On 14 February 2010 11:49, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> But when the hardware is set up just right, in a brain or a computer, it >> behaves in an intelligent manner, and intelligence from the point of view >> of the system displaying it is consciousness. > > My watch tells the time intelligently. Does it therefore have consciousness from its "point of view"? I don't think so. I don't think my watch has a point of view. But then maybe it isn't set up right. :) The watch performs the function of telling the time just fine. It simulates a sundial or hourglass in this respect. However, when a human tells the time there are thousands of extra nuances which a watch just doesn't have. So the watch tells the time, but it doesn't understand the concept of time. Comparing a watch with a human is like comparing a nematode with a human, only more so. What would you say to the non-organic alien visitors who make the case that since a nematode is not conscious, neither can a human be conscious, since basically a human is just a more complex nematode? -- Stathis Papaioannou From stathisp at gmail.com Sun Feb 14 05:41:33 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 16:41:33 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <620471.11317.qm@web36505.mail.mud.yahoo.com> References: <620471.11317.qm@web36505.mail.mud.yahoo.com> Message-ID: On 14 February 2010 10:21, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> There is no real distinction between software and hardware. >> When you program a computer you make actual physical changes to it, >> and the "software" is just a scheme that you have in mind to help >> you make the right physical changes so that the hardware does what you >> want it to do. > > Okay, I'll go along with that. > >> The computer is just dumb matter which has no understanding >> whatsoever of the program, the programmer, its own design, >> the existence of the world or anything else. > > Right, that's what I've been trying to tell you! :) > >> Its parts follow the laws of physics but even this they don't >> understand: they just do it. > > Right. Your logic looks perfect so far. > >> Exactly the same is true of human brains.
> > I can't speak for you, but it sure seems like my brain has conscious understanding of things. And according to your logic above, computers do not have this understanding. So then if computers don't have it, but my brain does, then logic forces me to conclude that my brain does not equal a computer. Your brain, when it is working properly, has understanding as an emergent property of the system, even though the matter in your brain, each individual neuron, is completely stupid. The Chinese Room thought experiment should make that clear to you. Since it doesn't, I have proposed a variation in which your neurons *do* have an understanding of their own basic tasks, but still no understanding of the big picture. You haven't responded to this. -- Stathis Papaioannou From stathisp at gmail.com Sun Feb 14 06:17:11 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 17:17:11 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <789829.39304.qm@web36502.mail.mud.yahoo.com> References: <789829.39304.qm@web36502.mail.mud.yahoo.com> Message-ID: On 14 February 2010 11:02, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> > I feel comfortable saying 1) humans have >> consciousness, 2) some other organisms with highly developed >> nervous systems almost certainly have consciousness (chimps, >> etc) and 3) simple organisms that completely lack nervous >> systems do not have it. >> >> It's the same with computers. There aren't any yet which >> match the processing ability of a mouse brain, let alone that of a >> chimp or human. > > If and when someone builds a human-like nervous system in a box with sense organs, I'll call that box conscious but I won't call it a digital computer. Neither will you, because it won't be one. That's what is being attempted by Henry Markram's group. They may fail, but probably only because they are taking shortcuts in the modelling in order to reduce by orders of magnitude the required amount of processing. If they model a complete mouse brain, connect it to a mouse avatar or robot mouse, and it displays mouselike behaviour, that would be an indication at the very least that any separate mouse consciousness can only be epiphenomenal. As John Clark keeps reminding us, it would then be very difficult to explain how consciousness could have evolved. -- Stathis Papaioannou From bbenzai at yahoo.com Sun Feb 14 09:39:30 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 14 Feb 2010 01:39:30 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <783130.46814.qm@web113618.mail.gq1.yahoo.com> Jeff Davis wrote: > But mostly I was thinking of basic human impulses and > feelings: > hunger and the urge to feed, the reproductive impulse, > acquisitiveness (greed?), the various behaviors arising from > the > instinct for survival: fight or flight, fear, anger, > hatred, > dominance, submission, anxiety, depression, shock. > These things are > primitive, and certainly pre-date the features of mammalian > (i.e. > higher) brain function. > I view these impulses as the > foundation AND > BULK of animal and human behavior, and gut-centered, with > higher-level > mental activity a more recent development. > I wonder > if feelings in > the gut aren't in fact real -- like pain -- and our > awareness of them > just an additional fact, a mental fact. > > So what would consciousness be, what would a mind be > without this > foundational context built up over three and a half billion > years?
> That's why I think the gut (soma) may be critical in > defining mind. > > But, to be honest with you, I feel way out on a limb here. > I wouldn't argue with this view in principle, but would point out that the contribution of the actual body parts involved (actual gut tissue, muscles, etc.) is likely to be very very small. What's almost certainly more important is the maps in the brain that represent these body parts, and they could be hooked up to 'fake' body parts that produce the same signals with no loss of, or change in, any mental functions, as long as the fake parts behaved in a manner consistent with the real equivalent (produced hunger signals when blood glucose is low, etc.). I definitely agree that the lower brain functions, to do with somatic sensing and control and emotional states, are probably a vital component in any attempt to build an artificial mind, and this seems to be largely neglected by the AI community. Maybe actual sex just isn't as sexy for them as maze-navigation! Ben Zaiboc From bbenzai at yahoo.com Sun Feb 14 13:41:48 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 14 Feb 2010 05:41:48 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: Message-ID: <348836.89645.qm@web113619.mail.gq1.yahoo.com> Stathis Papaioannou > On 14 February 2010 11:02, Gordon Swobe > > If and when someone builds a human-like nervous system > in a box with sense organs, I'll call that box conscious but > I won't call it a digital computer. Neither will you, > because it won't be one. > > That's what is being attempted by Henry Markram's group. > They may > fail, but probably only because they are taking shortcuts > in the > modelling in order to reduce by orders of magnitude the > required > amount of processing. If they model a complete mouse brain, > connect it > to a mouse avatar or robot mouse, and it displays mouselike > behaviour, > that would be an indication, at the very least, that any > separate > mouse consciousness can only be epiphenomenal. As John > Clark keeps > reminding us, it would then be very difficult to explain > how > consciousness could have evolved. Ah, but don't you see Stathis? Blue Brain is a *digital computer*. Therefore it can't possibly produce consciousness, because, you know, because it's *digital*. And digital computers can't produce consciousness. QED. Ben Zaiboc From hkeithhenson at gmail.com Sun Feb 14 14:45:48 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 14 Feb 2010 07:45:48 -0700 Subject: [ExI] extropy-chat Digest, Vol 77, Issue 26 In-Reply-To: References: Message-ID: > On 14 February 2010 11:38, Jeff Davis wrote: > Regarding the loss of limbs, kidney, gall bladder, stomach, lengths of > intestine, a lung, etc.... I agree the persona and consciousness > appear unaltered. In a brain transplant operation, you want to be the donor. Keith From msd001 at gmail.com Sun Feb 14 18:20:33 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 14 Feb 2010 13:20:33 -0500 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <348836.89645.qm@web113619.mail.gq1.yahoo.com> References: <348836.89645.qm@web113619.mail.gq1.yahoo.com> Message-ID: <62c14241002141020g2278ddb5r5ee36d550b5567b@mail.gmail.com> On Sun, Feb 14, 2010 at 8:41 AM, Ben Zaiboc wrote: > Ah, but don't you see Stathis? > Blue Brain is a *digital computer*. Therefore it can't possibly produce consciousness, because, you know, because it's *digital*. > And digital computers can't produce consciousness.
Digital computers produce discrete consciousness. Quantum computers produce indeterminate consciousness until observed. From jonkc at bellsouth.net Sun Feb 14 18:28:33 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 14 Feb 2010 13:28:33 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <405824.5794.qm@web36506.mail.mud.yahoo.com> References: <405824.5794.qm@web36506.mail.mud.yahoo.com> Message-ID: <5F9A9DFE-77FA-44C4-B8AE-53B5496EB99C@bellsouth.net> Since my last post Gordon Swobe has posted 8 times. > I wonder for example why some people here have trouble with my observation that amoebas (and most organisms on earth) have intelligence but no consciousness. The answer is that unlike Swobe some people on this list have knowledge about how Evolution works and realize that intelligence without consciousness is biological gibberish. They also know it's rather dumb to talk about observing consciousness. > In my view the mind exists as a *high-level physical feature* of the nervous system. Rather like "fast" is a high-level feature of a racing car. > the mind does not exist as something separate from the brain Fast is separate from a racing car. > Can't we make a real distinction between the mental and the physical? Yes. > Most people think this way, including even many philosophical materialists who claim not to. But it's only the dualistic voice of Descartes speaking to us from beyond the grave. We are his intellectual descendants. I finally shook off that Cartesian illusion and now the world makes a lot more sense. I have not observed Swobe shaking off any of his illusions, Cartesian or otherwise; and just like anybody else he continues to use both the words "mind" and "brain", and in his usage he carefully distinguishes between the two. Swobe would never say "he had an operation to remove a mind tumor" or "I have changed my brain" but if he really wasn't a dualist he should be comfortable with both those sentences. > I feel comfortable saying 1) humans have consciousness I am quite certain that Swobe is NOT comfortable in saying that all humans are conscious, just humans that act intelligently. I am quite certain Swobe doesn't think sleeping people are conscious, or people in a deep coma, or dead people. > the Turing test does not measure the strong AI attributes that concern me. Nothing can measure "strong AI", not the Turing Test, not Evolution, and not even Gordon Swobe; and if you can't measure something Science has no use for it, although religion might. > It is true that I "project" consciousness onto other humans but I don't do this based solely on their ability to pass the Turing test or on their interactions with me. As above, I do it based also on their physiology. So before Gordon Swobe learned physiology in college he thought he was the only conscious being in the universe, but that changed when he learned physiology even though he freely and frequently tells us that neither he nor anybody else has any idea how consciousness is produced. Interestingly just a century ago when almost no physiology was known nobody thought other minds existed, and even today most are ignorant on the subject so they must believe they are alone too. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lacertilian at gmail.com Sun Feb 14 20:18:55 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 14 Feb 2010 12:18:55 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <23758.31471.qm@web36506.mail.mud.yahoo.com> Message-ID: Stathis Papaioannou : > Software exists separately from hardware in the same way as an > architectural drawing exists separately from the building it depicts. When I read this, a billion neurons in unison screeched "NOOO!!" inside my skull. It was an interesting experience. Now that it's over, I find myself with a whole litany of much more verbose corrections and objections. I feel I must share. One: software does not exist separately from the underlying hardware. It is epiphenomenal, which in my terminology is synonymous with virtual and imaginary, meaning that it relies on a more real substrate to support its existence. When you pull the plug, the hardware only gets quieter; the software ceases to exist. Two: a blueprint is hardware, not software, unless it is encoded on a hard drive and displayed on a monitor. Even in this case, there is no relationship between the blueprint and the building except in the imagination of a mind considering the two of them. Demolishing the building has no effect whatsoever on the blueprint, except perhaps to increase the number of people giving it wistful looks. Three: in stark contrast to Gordon's views, I believe that software is capable of (and is in practice) far more than the mere depiction of things. A blueprint depicts a building, but a structural simulation instantiates an actual virtual building that obeys its own laws of physics (which, we hope, are similar to our own). There is a surprisingly subtle difference between these two things. A virtual building has no more relationship to the building it emulates than does a blueprint of that building. However, unlike the blueprint, the virtual building really is a building. You could put little virtual people in it if you wanted (and if you could figure out how to make virtual people to begin with). It's a difficult conclusion to convey because I didn't arrive at it through wholly rational means. As shorthand, I might refer to the two things as a depiction and a simulation. A depiction can only be related to what it depicts by a third party, which might be identical with the thing depicted (as in the case of looking at a picture of myself). A simulation has no such inherent limitation; it can relate itself to what it simulates, just as I can relate myself to a person whom I am doing a rather good impression of. This has nothing at all to do with whether or not simulator and simulated are identical. Obviously, they are not. Even a perfect simulation of a mind is not that mind. However, it would be *a* mind; it would be able to do everything that a mind can be expected to do, and be everything a mind could possibly be. Including conscious. I was careful to say "mind" instead of "brain", here. We could make a virtual brain that does everything a real brain does, but it would probably be a waste of processing power: there isn't much sense in granting the ability to be squished, unless you want specifically to perform various questionably-ethical stress tests on your imaginary brain construct. 
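A minimal sketch of the depiction/simulation distinction above, in Python, assuming nothing but a toy falling ball (the names and numbers are illustrative only, not anything from the original post): the depiction is frozen data about the ball, while the simulation owns a state and an update rule, so the virtual ball actually falls.

    # Depiction: a static record of a ball at one instant. It never changes.
    depiction = {"height_m": 100.0, "velocity_mps": 0.0}

    # Simulation: the same ball, plus crude virtual physics of its own.
    def step(state, dt=0.1, g=9.8):
        """Advance the virtual ball by one time step (Euler integration)."""
        v = state["velocity_mps"] + g * dt
        h = max(state["height_m"] - v * dt, 0.0)
        return {"height_m": h, "velocity_mps": v}

    state = dict(depiction)  # start from the depicted instant
    while state["height_m"] > 0.0:
        state = step(state)

    print(depiction)  # unchanged: a depiction only depicts
    print(state)      # the simulated ball has actually hit the ground

The depiction can only be related to a falling ball by a third party looking at it; the simulated ball, crude as it is, obeys its own laws of virtual physics, which is the sense in which the virtual building above "really is a building".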
From jrd1415 at gmail.com Sun Feb 14 22:25:00 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sun, 14 Feb 2010 15:25:00 -0700 Subject: [ExI] Semiotics and Computability In-Reply-To: <783130.46814.qm@web113618.mail.gq1.yahoo.com> References: <783130.46814.qm@web113618.mail.gq1.yahoo.com> Message-ID: On Sun, Feb 14, 2010 at 2:39 AM, Ben Zaiboc wrote: > What's almost certainly more important is the maps in the brain that represent these body parts, and they could be hooked up to 'fake' body parts that produce the same signals with no loss of, or change in, any mental functions, as long as the fake parts behaved in a manner consistent with the real equivalent (produced hunger signals when blood glucose is low, etc.) Yes. This solves the original problem -- which came about, as I see it, due to incompleteness in defining the problem, and a consequent incompleteness in the simulation -- by completing the simulation. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From emlynoregan at gmail.com Mon Feb 15 00:26:31 2010 From: emlynoregan at gmail.com (Emlyn) Date: Mon, 15 Feb 2010 10:56:31 +1030 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <20100212221046.5.qmail@syzygy.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> Message-ID: <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> On 13 February 2010 08:40, Eric Messick wrote: > Spike writes: >>For instance, imagine giving your sweetheart a Valentine that says: >> >>Sweetheart, the depth of my love for you is equal to that which is induced >>by several milligrams of endorphine-precursor pheromones which are >>responsible for this class of emotions in humans. ?I mean it from the bottom >>of my brain. > > Once again, xkcd has this covered, with a comic about overly > scientific valentines: > > http://xkcd.com/701/ > > -eric I was listening to a physicist being interviewed on This American Life recently. He was talking about how he and his girlfriend, in the very early stages of their relationship, were talking about how great it was that they were in love and that they had found each other. IIRC, she asked him if he thought she was the only woman for him, and he considered it and said "I think you're 1 in 100,000". Apparently this started their first fight. Hell of a time for the rational machinery to kick in :-) -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From gts_2000 at yahoo.com Mon Feb 15 00:46:53 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 14 Feb 2010 16:46:53 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <580930c21002131717j6f50fa59k1e39bbe3cb0aee3b@mail.gmail.com> Message-ID: <409052.58624.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stefano Vaj wrote: >> It is true that I "project" consciousness onto other >> humans but I don't do this based solely on their ability to >> pass the Turing test or on their interactions with me. As >> above, I do it based also on their physiology. 
> > If the ghost of one's wife, or her upload on a digital > computer, manages to persuade somebody that she is his wife, one will > identify her as his wife, irrespective of the physiology, if any, It seems to me a digital simulation of a person, i.e., an upload existent on a computer, will have no more reality than does a digital movie of that person on a computer today. Depictions of things, digital or otherwise, do not equal the things they depict no matter how complete and realistic the depiction. This does not preclude the possibility that a complete digital description of a person might serve as a reliable blueprint for reconstructing that person in material form, but I see that as a separate question. > Sure, somebody may refuse his conclusion for philosophical > and ultimately arbitrary reasons Come the singularity, some people will lose their grips on reality and find themselves believing such absurdities as that digital depictions of people have real mental states. A few lonely philosophers of my stripe will try in vain to restore their sanity. :) -gts From thespike at satx.rr.com Mon Feb 15 00:49:09 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 14 Feb 2010 18:49:09 -0600 Subject: [ExI] Valentine's probability factor In-Reply-To: <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> Message-ID: <4B789A05.8030806@satx.rr.com> On 2/14/2010 6:26 PM, Emlyn wrote: > she asked him if he thought she was the only woman for him, and he > considered it and said "I think you're 1 in 100,000". Apparently this > started their first fight. Hell of a time for the rational machinery > to kick in :-) Nah, it's a perfect test of compatibility with others (the 1 in 100K or fewer of them) of like mind. Barbara and I were boggled at the apparent unlikelihood of our having found each other (and we talked about this right from the start)--which would have been wildly unlikely before the internet, ExI, affordable international travel, etc. Suddenly we had the whole English speaking population of the planet to trawl through--rather than the workplace, university, church, club, etc--presorted by handy detectors for IQ, personality type, unusual interests, etc. Damien Broderick From x at extropica.org Mon Feb 15 00:28:06 2010 From: x at extropica.org (x at extropica.org) Date: Sun, 14 Feb 2010 16:28:06 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <783130.46814.qm@web113618.mail.gq1.yahoo.com> Message-ID: On Sun, Feb 14, 2010 at 2:25 PM, Jeff Davis wrote: > On Sun, Feb 14, 2010 at 2:39 AM, Ben Zaiboc wrote: > >> What's almost certainly more important is the maps in the brain that represent these body parts, and they could be hooked up to 'fake' body parts that produce the same signals with no loss of, or change in, any mental functions, as long as the fake parts behaved in a manner consistent with the real equivalent (produced hunger signals when blood glucose is low, etc.) > > Yes. ?This solves the original problem -- which came about, as I see > it, due to incompleteness in defining the problem, and a consequent > incompleteness in the simulation -- by completing the simulation. Seems to me your extension makes no qualitative difference in regard to the issue at hand. 
You already had much of the machinery, and you added some you realized you left out. Still nowhere in any formal description of that machinery, no matter how carefully you look, will you find any actual "meaning." You'll find only patterns of stimulus and response, syntactically complete, semantically vacant. I. You're missing the basic systems-theoretic understanding that the behavior of any system is meaningful only within the context of its environment of interaction. Take the "whole human", e.g. a description of everything within the boundaries of the skin, and execute its syntax and you won't get human-like behavior--unless you also provide (simulate) a suitable environment of interaction. II. Now go ahead and simulate the human, within an appropriate environment. You'll get human-like behavior, indistinguishable in principle from the real thing. Now you're back to the very correct point of Searle's Chinese Room Argument: There is no "meaning" to be found anywhere in the system, no matter how precise your simulation. Now Daniel Dennett or Thomas Metzinger or John Pollock (when feeling bold enough to say it) or Siddhārtha Gautama, or Jef will say "Of course. The "consciousness" you seek is a function of the observer, and you've removed the observer role from the system under observation. There is no *essential* consciousness. Never had it, never will. The very suggestion is incoherent: it can't be defined." The logic of the CRA is correct. But it reasons from a flawed premise: That the human organism has this somehow ontologically special thing called "consciousness." So restart the music, and the merry-go-round. I'm surprised no one's mentioned the Giant Look Up Table yet. - Jef From olga.bourlin at gmail.com Mon Feb 15 01:11:06 2010 From: olga.bourlin at gmail.com (Olga Bourlin) Date: Sun, 14 Feb 2010 17:11:06 -0800 Subject: [ExI] Valentine's probability factor In-Reply-To: <4B789A05.8030806@satx.rr.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> <4B789A05.8030806@satx.rr.com> Message-ID: On Sun, Feb 14, 2010 at 4:49 PM, Damien Broderick wrote: > On 2/14/2010 6:26 PM, Emlyn wrote: > >> she asked him if he thought she was the only woman for him, and he >> considered it and said "I think you're 1 in 100,000". Apparently this >> started their first fight. Hell of a time for the rational machinery >> to kick in :-) > > Nah, it's a perfect test of compatibility with others (the 1 in 100K or > fewer of them) of like mind. Barbara and I were boggled at the apparent > unlikelihood of our having found each other (and we talked about this right > from the start)--which would have been wildly unlikely before the internet, > ExI, affordable international travel, etc. Suddenly we had the whole English > speaking population of the planet to trawl through--rather than the > workplace, university, church, club, etc--presorted by handy detectors for > IQ, personality type, unusual interests, etc. > > Damien Broderick Me, three! (... even though Patrick and I met just a few years before the Internet swept into our homes) I was born in Nanking, China - my husband was born in Roscrea, Ireland. We met in Kansas City (attending a Free Inquiry magazine convention in 1991), and now reside in the emerald city of Seattle.
I am neither young nor impressionable, but find myself amazed anew each day at the improbable event of Patrick and me having found each other. Olga From gts_2000 at yahoo.com Mon Feb 15 01:25:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 14 Feb 2010 17:25:45 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <744080.66241.qm@web36507.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: > The watch performs the function of telling the time just > fine. It simulates a sundial or hour glass in this respect. However, > when a human tells the time there are thousands of extra nuances > which a watch just doesn't have. So the watch tells the time, but > it doesn't understand the concept of time. Comparing a watch with a > human is like comparing a nematode with a human, only more so. I compare my watch to a computer, not to a human. My digital watch has intelligence in the same sense as does my digital computer, and in the same sense as does the most powerful digital computer conceivable. I think you want me to believe that my watch has a small amount of consciousness by virtue of it having a small amount of intelligence. But I don't think that makes even a small amount of sense. It seems to me that my watch has no consciousness whatsoever, and that to say otherwise is to conflate science with science-fiction. > What would you say to the non-organic alien visitors who make the case > that since a nematode is not conscious, neither can a human be > conscious, since basically a human is just a more complex nematode? You beg the question of non-organic consciousness. As far as we know, "non-organic alien visitors" amounts to a completely meaningless concept. As for nematodes, I have no idea whether their primitive nervous systems support what I mean by consciousness. I doubt it but I don't know. I classify them in the gray area between unconscious amoebas and conscious humans. -gts From aware at awareresearch.com Mon Feb 15 01:30:16 2010 From: aware at awareresearch.com (Aware) Date: Sun, 14 Feb 2010 17:30:16 -0800 Subject: [ExI] Valentine's probability factor In-Reply-To: <4B789A05.8030806@satx.rr.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> <4B789A05.8030806@satx.rr.com> Message-ID: On Sun, Feb 14, 2010 at 4:49 PM, Damien Broderick wrote: > On 2/14/2010 6:26 PM, Emlyn wrote: > >> she asked him if he thought she was the only woman for him, and he >> considered it and said "I think you're 1 in 100,000". Apparently this >> started their first fight. Hell of a time for the rational machinery >> to kick in :-) > > Nah, it's a perfect test of compatibility with others (the 1 in 100K or > fewer of them) of like mind. Barbara and I were boggled at the apparent > unlikelihood of our having found each other (and we talked about this right > from the start)--which would have been wildly unlikely before the internet, > ExI, affordable international travel, etc. Suddenly we had the whole English > speaking population of the planet to trawl through--rather than the > workplace, university, church, club, etc--presorted by handy detectors for > IQ, personality type, unusual interests, etc. Happily works for me and Lizbeth. We found each other through an online matching service. I was her #1 match and she was my #2 match.
It may have been in our favor that we both answered the profile questions honestly and in detail. :-) From gts_2000 at yahoo.com Mon Feb 15 01:54:22 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 14 Feb 2010 17:54:22 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <896902.79613.qm@web36507.mail.mud.yahoo.com> --- On Sun, 2/14/10, x at extropica.org wrote: > The logic of the CRA is correct. But it reasons from > a flawed premise: That the human organism has this somehow > ontologically special thing called "consciousness." It does not matter whether you believe in some "special thing called 'consciousness'." Call it what you will, or call it nothing at all. It matters only that you understand that the man cannot grok the symbols by virtue of manipulating them according to the rules of syntax specified in the program. -gts From ablainey at aol.com Mon Feb 15 02:15:04 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Sun, 14 Feb 2010 21:15:04 -0500 Subject: [ExI] Valentine's probability factor In-Reply-To: <4B789A05.8030806@satx.rr.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com><710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> <4B789A05.8030806@satx.rr.com> Message-ID: <8CC7BFBD25C775F-108C-BD05@webmail-d073.sysops.aol.com> I just grabbed my wife's backside in a crowded Pub. The fact that I didn't get a slap proved it was true love and that was 17 years ago. We were very different people back then and we have both changed so much since. We still like very different things but are fundamentally similar and we are still together. Go figure?!? If you get the basic compatibility right it will work regardless of matching up your likes and dislikes. -----Original Message----- From: Damien Broderick Nah, it's a perfect test of compatibility with others (the 1 in 100K or fewer of them) of like mind. Barbara and I were boggled at the apparent unlikelihood of our having found each other (and we talked about this right from the start)--which would have been wildly unlikely before the internet, ExI, affordable international travel, etc. Suddenly we had the whole English speaking population of the planet to trawl through--rather than the workplace, university, church, club, etc--presorted by handy detectors for IQ, personality type, unusual interests, etc. Damien Broderick -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Feb 15 09:28:24 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 15 Feb 2010 20:28:24 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <409052.58624.qm@web36506.mail.mud.yahoo.com> References: <580930c21002131717j6f50fa59k1e39bbe3cb0aee3b@mail.gmail.com> <409052.58624.qm@web36506.mail.mud.yahoo.com> Message-ID: On 15 February 2010 11:46, Gordon Swobe wrote: > It seems to me a digital simulation of a person, i.e., an upload existent on a computer, will have no more reality than does a digital movie of that person on a computer today. Depictions of things, digital or otherwise, do not equal the things they depict no matter how complete and realistic the depiction. > > This does not preclude the possibility that a complete digital description of a person might serve as a reliable blueprint for reconstructing that person in material form, but I see that as a separate question.
> >> Sure, somebody may refuse his conclusion for philosophical >> and ultimately arbitrary reasons > > Come the singularity, some people will lose their grips on reality and find themselves believing such absurdities as that digital depictions of people have real mental states. A few lonely philosophers of my stripe will try in vain to restore their sanity. :) You keep repeating this as a fact but you don't explain why a digital depiction of a person won't have real mental states. A pile of bricks can duplicate the mass of a person; a car can duplicate the speed of a person; an artificial joint can duplicate the function of a human's joint. These devices are all very different from the thing they are copying. Why should the mind resist copying in anything other than the original substrate? -- Stathis Papaioannou From stathisp at gmail.com Mon Feb 15 09:43:57 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 15 Feb 2010 20:43:57 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <744080.66241.qm@web36507.mail.mud.yahoo.com> References: <744080.66241.qm@web36507.mail.mud.yahoo.com> Message-ID: On 15 February 2010 12:25, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> The watch performs the function of telling the time just >> fine. It simulates a sundial or hour glass in this respect. However, >> when a human tells the time there are thousands of extra nuances >> which a watch just doesn't have. So the watch tells the time, but >> it doesn't understand the concept of time. Comparing a watch with a >> human is like comparing a nematode with a human, only more so. > > I compare my watch to a computer, not to a human. My digital watch has intelligence in the same sense as does my digital computer, and in the same sense as does the most powerful digital computer conceivable. > > I think you want me to believe that my watch has a small amount of consciousness by virtue of it having a small amount of intelligence. But I don't think that makes even a small amount of sense. It seems to me that my watch has no consciousness whatsoever, and that to say otherwise is to conflate science with science-fiction. If you don't have a problem with a continuous increase in intelligence and consciousness between a nematode and a human, then why do you have a problem with a continuous increase in intelligence and consciousness between a watch and an AI of the future? >> What would you say to the non-organic alien visitors who make the case >> that since a nematode is not conscious, neither can a human be >> conscious, since basically a human is just a more complex nematode? > > You beg the question of non-organic consciousness. As far as we know, "non-organic alien visitors" amounts to a completely meaningless concept. What?? > As for nematodes, I have no idea whether their primitive nervous systems support what I mean by consciousness. I doubt it but I don't know. I classify them in the gray area between unconscious amoebas and conscious humans. At some point, either gradually or abruptly, consciousness will happen in the transition from nematode to human or watch to AI.
-- Stathis Papaioannou From gts_2000 at yahoo.com Mon Feb 15 13:49:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 05:49:20 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: Message-ID: <930797.89044.qm@web36503.mail.mud.yahoo.com> --- On Mon, 2/15/10, Stathis Papaioannou wrote: > You keep repeating this as a fact but you don't explain why > a digital depiction of a person won't have real mental states. If I make a jpeg of you with my digital camera, that digital depiction of you will have no mental states. If I make a digital movie of you on my webcam, that digital depiction of you will have no mental states. A complete three-dimensional animated digital depiction of you made with futuristic digital simulation technology will amount to just another kind of depiction of you, and so it will likewise have no mental states. It does not matter whether we create our depictions of things on the walls of caves or on computers. Depictions of things do not equal the things they depict. -gts From gts_2000 at yahoo.com Mon Feb 15 15:28:29 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 07:28:29 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <153083.39349.qm@web36503.mail.mud.yahoo.com> --- On Mon, 2/15/10, Stathis Papaioannou wrote: >> I think you want me to believe that my watch has a >> small amount of consciousness by virtue of it having a small >> amount of intelligence. But I don't think that makes even a >> small amount of sense. It seems to me that my watch has no >> consciousness whatsoever, and that to say otherwise is to >> conflate science with science-fiction. > > If you don't have a problem with a continuous increase in > intelligence and consciousness between a nematode and a human, then why > do you have a problem with a continuous increase in intelligence and > consciousness between a watch and an AI of the future? I know a priori that the human nervous system supports consciousness. I cannot say anything like that about my watch or about computers without stepping outside the bounds of science to science-fiction. >> You beg the question of non-organic consciousness. As >> far as we know, "non-organic alien visitors" amounts to a >> completely meaningless concept. > > What?? You ask what I would say to non-organic alien visitors, and I suppose you assume those non-organic alien visitors have consciousness. But non-organic consciousness is what is at issue here. >> As for nematodes, I have no idea whether their >> primitive nervous systems support what I mean by >> consciousness. I doubt it but I don't know. I classify them >> in the gray area between unconscious amoebas and conscious >> humans. > > At some point, either gradually or abruptly, consciousness > will happen in the transition from nematode to human or watch to AI. I consider it a scientific fact that consciousness arises somewhere between the nematode and the human. But only in science-fiction does consciousness happen in digital watches or digital computers.
-gts From natasha at natasha.cc Mon Feb 15 15:12:52 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 15 Feb 2010 09:12:52 -0600 Subject: [ExI] Valentine's probability factor In-Reply-To: <8CC7BFBD25C775F-108C-BD05@webmail-d073.sysops.aol.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com><710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com><4B789A05.8030806@satx.rr.com> <8CC7BFBD25C775F-108C-BD05@webmail-d073.sysops.aol.com> Message-ID: Emlyn, Damien, Olga, Aware, Ablainey -- all such beautiful stories! Natasha Vita-More _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of ablainey at aol.com Sent: Sunday, February 14, 2010 8:15 PM To: extropy-chat at lists.extropy.org Subject: Re: [ExI] Valentine's probability factor I just grabbed my wife's backside in a crowded Pub. The fact that I didn't get a slap proved it was true love and that was 17 years ago. We were very different people back then and we have both changed so much since. We still like very different things but are fundamentally similar and we are still together. Go figure?!? If you get the basic compatibility right it will work regardless of matching up your likes and dislikes. -----Original Message----- From: Damien Broderick Nah, it's a perfect test of compatibility with others (the 1 in 100K or fewer of them) of like mind. Barbara and I were boggled at the apparent unlikelihood of our having found each other (and we talked about this right from the start)--which would have been wildly unlikely before the internet, ExI, affordable international travel, etc. Suddenly we had the whole English speaking population of the planet to trawl through--rather than the workplace, university, church, club, etc--presorted by handy detectors for IQ, personality type, unusual interests, etc. Damien Broderick -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 731 bytes Desc: not available URL: From gts_2000 at yahoo.com Mon Feb 15 16:10:30 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 08:10:30 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <466012.12578.qm@web111205.mail.gq1.yahoo.com> Message-ID: <963975.24812.qm@web36502.mail.mud.yahoo.com> --- On Mon, 2/15/10, Christopher Luebcke wrote: > If my understanding of the CRA is > correct (it may not be), it seems to me that Searle is > arguing that because one component of the system does not > understand the symbols, the system doesn't understand the > symbols. This to me is akin to claiming that because my > fingers do not understand the words they are currently > typing out, neither do I. Searle speaks for himself: My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors.
All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him. Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html -gts From cluebcke at yahoo.com Mon Feb 15 07:54:13 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Sun, 14 Feb 2010 23:54:13 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <409052.58624.qm@web36506.mail.mud.yahoo.com> References: <409052.58624.qm@web36506.mail.mud.yahoo.com> Message-ID: <841756.99168.qm@web111212.mail.gq1.yahoo.com> Thanks for all the responses to my queries. At least among the respondents, it seems that there are not sufficiently-agreed-upon definitions of "consciousness" or "intelligence" (nor, I suspect, of "mind") for truly effective communication to take place. Moreover, it seems that there is wide disagreement over how "intelligence" and "consciousness" are related to one another. I am no cognitive scientist, biologist or psychologist (or any sort of useful -ist) and I wouldn't be nearly so bold as to propose definitions of these terms that I think are "correct". I would, however, humbly suggest that if the application of these terms is important, yet their definitions are not agreed-upon, it may be that the definitions can be decomposed into small enough constituent parts that a fruitful conversation could be had about the genesis, relationship, importance and reproducibility of those constituents. It does seem, from what I've read, that there is no consensus definition of "intelligence" or "consciousness" even among the experts in these fields. Assuming that is the case, then the definitions could vary so widely that the statements Intelligence cannot exist without consciousness and Intelligence can exist without consciousness could in fact both be true, because the persons making these statements could have divergent enough definitions for "intelligence" and "consciousness" that the two statements, inflated to replace the terms in question with their intended meanings, might not be contradictory. I won't be quite so humble as to not give my two cents, though, which are essentially that if it can't be detected or measured, it's not fruitful to have a debate about anything empirical relating to it. In the can-be-measured category I believe we can place "intelligence", as the various definitions, while divergent, do seem to share an empirical bent. In the can't-be-measured category I suspect we'll find "consciousness"--though we may well find that several of the things we're thinking of when we say "consciousness" can in fact be measured or detected. 
Cheers, Chris From cluebcke at yahoo.com Mon Feb 15 08:13:50 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Mon, 15 Feb 2010 00:13:50 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <896902.79613.qm@web36507.mail.mud.yahoo.com> References: <896902.79613.qm@web36507.mail.mud.yahoo.com> Message-ID: <466012.12578.qm@web111205.mail.gq1.yahoo.com> If my understanding of the CRA is correct (it may not be), it seems to me that Searle is arguing that because one component of the system does not understand the symbols, the system doesn't understand the symbols. This to me is akin to claiming that because my fingers do not understand the words they are currently typing out, neither do I. If a system is said to understand symbols, it does not follow that all components of the system understand the symbols. My pinky finger, or a particular neuron, or a transistor, or a man in a room following orders about moving squiggly cards around, need not understand symbols for the system they compose a part of to understand symbols. It is the system as a whole that is said to understand symbols, not necessarily any of the parts. As a demonstration of my point, I ask you to simply modify the CRA a bit. You place a Chinese-speaking person in the room, and don't provide him any of the rules. He executes the test perfectly. Surely you don't then draw the conclusion that the room does indeed understand symbols, do you? (It is also, I think, far from clear that such a system as the CRA could be executed by a finite set of discrete rules.) As a final note, forgive me for repeatedly saying "understanding symbols", but while imperfect, I think it'll be less prone to misunderstandings due to ambiguity than "intelligence" or "consciousness". ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Sun, February 14, 2010 5:54:22 PM Subject: Re: [ExI] Semiotics and Computability --- On Sun, 2/14/10, x at extropica.org wrote: > The logic of the CRA is correct. But it reasons from > a flawed premise: That the human organism has this somehow > ontologically special thing called "consciousness." It does not matter whether you believe in some "special thing called 'consciousness'." Call it what you will, or call it nothing at all. It matters only that you understand that the man cannot grok the symbols by virtue of manipulating them according to the rules of syntax specified in the program. -gts _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From gts_2000 at yahoo.com Mon Feb 15 18:10:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 10:10:21 -0800 (PST) Subject: [ExI] glutamine and life-extension Message-ID: <505300.77015.qm@web36506.mail.mud.yahoo.com> Stathis, All this talk of neurons reminds me of a paper I wrote circa 1999: Glutamine Based Growth Hormone Releasing Products: A Bad Idea? http://www.newtreatments.org/loadlocal.php?hid=974 My article above created a surprising amount of controversy among life-extensionists. I closed down my website, but recently found my paper republished on the site above without my permission. Thought you might find it interesting given your profession and the general theme.
-gts From avantguardian2020 at yahoo.com Mon Feb 15 18:12:14 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Mon, 15 Feb 2010 10:12:14 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence Message-ID: <955672.41841.qm@web65607.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stathis Papaioannou > To: gordon.swobe at yahoo.com; ExI chat list > Sent: Sat, February 13, 2010 2:04:12 PM > Subject: Re: [ExI] Newbie Question: Consciousness and Intelligence > > On 14 February 2010 08:23, Gordon Swobe <gts_2000 at yahoo.com> wrote: > > It seems to me obvious that amoebas and other single-celled organisms have some intelligence: they can find food and procreate and so on. But because they lack nervous systems, it looks to me like these simple creatures live out their entire lives unconsciously. Intelligence and consciousness are not independent phenomena. I tend to view consciousness as awareness. When somebody is anesthetised, they become unconscious because they lack awareness of external or internal phenomena. In so far as an amoeba is more aware of its environment than a deeply anesthetised human, I would say it does have a small measure of consciousness. If one puts a drop of acid on the skin of an anesthetised person, that person would not react. If I put a droplet of acid on a microscope slide adjacent to an amoeba, the amoeba would run as fast and as far as its little pseudopodia could carry it. Amoebae do have sensory apparatus and they do process sensory information. Their senses are not as rich as those of people, being mostly chemical receptors and the like, but the same could be said of a single neuron. Indeed an amoeba is probably more intelligent/conscious than an individual neuron. Why? I would bet an amoeba could survive in someone's brain longer than a neuron could survive in a pond. At what point does intelligence lead to consciousness? That is like asking, "at what temperature does something get hot?" It's completely relative. For a person whose body temperature is 37 degrees Centigrade, boiling water is hot. But a person is hotter relative to the rings of Saturn than boiling water is relative to a person. I am starting to think that a similar argument could be used for intelligence and consciousness. And consequently zombies are bogus. > What about flatworms? They are capable of learning. That has been demonstrated. http://www.mnstate.edu/wisenden/reprint%20pdfs/2001%20planaria%20An%20Beh%2062%20761-766.pdf Incidentally, I don't think consciousness is either an evolutionary "spandrel" or simply an epiphenomenon. Women pointedly do choose mates who *pay attention* to them. As such natural selection does assess consciousness as a fitness function via sexual selection, if not by the more primitive fitness function of figuring out that one is being stalked by a predator. Stuart LaForge "Never express yourself more clearly than you think."
- Niels Bohr From cluebcke at yahoo.com Mon Feb 15 16:47:22 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Mon, 15 Feb 2010 08:47:22 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <963975.24812.qm@web36502.mail.mud.yahoo.com> References: <963975.24812.qm@web36502.mail.mud.yahoo.com> Message-ID: <752079.20710.qm@web111202.mail.gq1.yahoo.com> I've just read the entirety of Searle's response to the systems argument (thank you for the link), and near as I can tell, it is that the system as a whole is not intelligent because, like the man, it doesn't really understand the symbols. Yet determining whether a given system understands symbols ought to be what we're finding out, not what we presume prior to beginning the experiment. Double-quoting Searle: "All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him" Again, how one determines whether a system--a man, a room, a man in a room (or a savant in the woods)--"understands symbols" is crucial to the point, yet I find nothing in what I've read so far where Searle sets out the criteria by which he makes this judgement. It seems to me that Searle is rather making an a priori argument, believing that he (and we) all know ahead of time that the system comprised of the man and the rules is not intelligent, and then scoffs at any suggestion that it might be. My response to the quote above is twofold: 1. I believe that understanding symbols is a process, not a state, and therefore to talk about what things comprise a system isn't nearly so useful as to talk about what it is they're doing 2. I believe that it is fruitless to debate whether a system understands symbols without having a common agreement on a. What it means to "understand symbols" b. How we measure or detect whether a system (a man, a room, a computer) can understand symbols To me the problem is fundamentally empirical. ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Mon, February 15, 2010 8:10:30 AM Subject: Re: [ExI] Semiotics and Computability --- On Mon, 2/15/10, Christopher Luebcke wrote: > If my understanding of the CRA is > correct (it may not be), it seems to me that Searle is > arguing that because one component of the system does not > understand the symbols, the system doesn't understand the > symbols. This to me is akin to claiming that because my > fingers do not understand the words they are currently > typing out, neither do I. Searle speaks for himself: My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him. Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese.
It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html -gts _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From lacertilian at gmail.com Mon Feb 15 18:43:11 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 10:43:11 -0800 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) Message-ID: Gordon Swobe : > I know a priori that the human nervous system supports consciousness. I cannot say anything like that about my watch or about computers without stepping outside the bounds of science to science-fiction. (snip) > I consider it a scientific fact that consciousness arises somewhere between the nematode and the human. But only in science-fiction does consciousness happen in digital watches or digital computers. Well, clearly you are wrong to consider that a scientific fact. Science is based on experiments, not on reason. A priori knowledge has no scientific validity. If you're going to bring the scientific method into this, then the burden is on you to provide an experiment which tests for the existence of consciousness. The Chinese Room does nothing like this. The Turing test at least looks like it does, but you have stated repeatedly that it will give false positives when exposed to a weak AI. The way I see it, Gordon, you have only two choices: stay within the realm of a priori reason and logic, wherein intuition and subjective experience may be invoked as compelling pieces of evidence; or take the argument into a purely scientific frontier, wherein only objective measurements count for anything. Treating the two with a mix-and-match attitude is disingenuous at best, fallacious at worst. A little bit is unavoidable, but it disappoints me to see you doing so in such a flagrantly shameless way. From gts_2000 at yahoo.com Mon Feb 15 18:54:11 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 10:54:11 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <955672.41841.qm@web65607.mail.ac4.yahoo.com> Message-ID: <877779.10984.qm@web36502.mail.mud.yahoo.com> --- On Mon, 2/15/10, The Avantguardian wrote: > I tend to view consciousness as awareness. You might recall that you and I actually discussed this very subject here a number of years ago, and that we agreed there exists a sense in which amoebas and similar organisms have awareness. We made a distinction then, or at least I did, between the awareness of amoebas and the kind of consciousness that I refer to here several years later. Consciousness, as I mean it today, entails the ability to have conscious intentional states. That is, it entails the ability to have something consciously "in mind" as opposed to merely having the ability to respond intelligently to the environment in the way that amoebas do. So far as we know, intentionality so defined requires a nervous system and most likely a well-developed brain, both of which amoebas lack. So then either amoebas have no consciousness, so defined, or else they have mysterious nervous systems that we cannot see or understand.
-gts From lacertilian at gmail.com Mon Feb 15 20:04:18 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 12:04:18 -0800 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <877779.10984.qm@web36502.mail.mud.yahoo.com> References: <955672.41841.qm@web65607.mail.ac4.yahoo.com> <877779.10984.qm@web36502.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Consciousness, as I mean it today, entails the ability to have conscious intentional states. I'll note for the record that I now have the concept of "consciousness" clear enough in my mind to state, confidently, that consciousness, intelligence, and intentionality are all completely different things. Artificial systems could be made to possess any one in absence of the other two. However, I am by now convinced that it is not even theoretically possible to scientifically confirm that a given system has consciousness. This leads to the inevitable conclusion that consciousness does not exist, by my definition: anything which exists can potentially influence everything else that exists. All things in existence can, *by definition*, be measured. So, I create new conscious systems all the time. Pretty much every time I make use of a toilet. I defy any of you to prove otherwise. I've even started a separate thread for it! From gts_2000 at yahoo.com Mon Feb 15 20:12:32 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 12:12:32 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: Message-ID: <275598.51562.qm@web36502.mail.mud.yahoo.com> --- On Mon, 2/15/10, Spencer Campbell wrote: > If you're going to bring the scientific method into this, > then the burden is on you to provide an experiment which tests for > the existence of consciousness. Can you see these words, Spencer? If so then you have what I mean by consciousness. -gts From sparge at gmail.com Mon Feb 15 20:16:08 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 15 Feb 2010 15:16:08 -0500 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: References: <955672.41841.qm@web65607.mail.ac4.yahoo.com> <877779.10984.qm@web36502.mail.mud.yahoo.com> Message-ID: On Mon, Feb 15, 2010 at 3:04 PM, Spencer Campbell wrote: > > However, I am by now convinced that it is not even theoretically > possible to scientifically confirm that a given system has > consciousness. This leads to the inevitable conclusion that > consciousness does not exist, by my definition: anything which exists > can potentially influence everything else that exists. All things in > existence can, *by definition*, ?be measured. Wow, what an excellent way to create interminable threads--just start using your own definitions for common terms. -Dave From scerir at libero.it Mon Feb 15 20:37:37 2010 From: scerir at libero.it (scerir) Date: Mon, 15 Feb 2010 21:37:37 +0100 (CET) Subject: [ExI] QET Message-ID: <8885036.1601541266266257004.JavaMail.defaultUser@defaultHost> Masahiro Hotta (Tohoku University, Japan) has come up with an exotic idea. Why not use the known quantum principles to teleport energy? 
The idea is not so simple and not so sound, but more or less the process of teleportation seems to involve making a measurement on one particle of an entangled pair, which would inject quantum energy into the system, and then making a measurement on the second, distant particle of the pair, which would extract the original energy, all while retaining relativistic causality and conservation of energy. http://www.technologyreview.com/blog/arxiv/24759/ http://arxiv.org/abs/0908.2674 http://arxiv.org/abs/0911.3430 http://arxiv.org/abs/1002.0200 No idea about this specific process of Dr. Hotta. But it seems to me that the strange nature of quantum mechanical phenomena, and in particular quantum non-locality and quantum non-separability, could be easily extended - at least heuristically - to different contexts (e.g. gravitational fields) to get new and relevant results (e.g. non-local gravitational fields). See e.g. Adrian Kent here http://arxiv.org/abs/gr-qc/0507045 From lacertilian at gmail.com Mon Feb 15 20:38:00 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 12:38:00 -0800 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <275598.51562.qm@web36502.mail.mud.yahoo.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Can you see these words, Spencer? If so then you have what I mean by consciousness. Great! Now, how do I know that you do? From gts_2000 at yahoo.com Mon Feb 15 20:49:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 12:49:20 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: Message-ID: <431085.67926.qm@web36504.mail.mud.yahoo.com> --- On Mon, 2/15/10, Spencer Campbell wrote: >> Can you see these words, Spencer? If so then you have >> what I mean by consciousness. > > Great! Now, how do I know that you do? Sorry, I forgot to mention that nothing in the universe aside from you has consciousness. -gts From steinberg.will at gmail.com Mon Feb 15 21:15:21 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 15 Feb 2010 16:15:21 -0500 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <431085.67926.qm@web36504.mail.mud.yahoo.com> References: <431085.67926.qm@web36504.mail.mud.yahoo.com> Message-ID: <4e3a29501002151315o33b97318w676b530f396ad850@mail.gmail.com> I don't really understand what some of you think with regards to consciousness. "Is consciousness real?" is a bunk question. Some people take physicality and extend it into the notion of consciousness not being real, which I don't understand. If consciousness is not real, what would it be like if it were real? If real is being aware of something, I am surely aware of my own awareness. Even in the event of the mind's total epiphenomenalism, our being able to differentiate between consciousness and unconsciousness means that some physical factor is changing, GIVEN PHYSICALISM. Consciousness, whether of discrete or indiscrete locus, whether of a quantum or classical nature, is an observation based on physical or mathematical factors. The axiom of consciousness can be seen as on par with the anthropic principle or the fact that we can know G is true without proving it; it is unprovable because the means of proof lie outside the system. Proof is based on awareness of observation, and consciousness IS awareness. It is our G, as has been supposed by a few forward-minded folk.
When you ask for a proof of consciousness, you validate the existence of it in your question. I suggest rephrasing it in the manner you mean, which seems to boil down to "is any sort of dualism real?", an important question but not the same one. From jonkc at bellsouth.net Mon Feb 15 20:56:01 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 15 Feb 2010 15:56:01 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <409052.58624.qm@web36506.mail.mud.yahoo.com> References: <409052.58624.qm@web36506.mail.mud.yahoo.com> Message-ID: Since my last post Gordon Swobe has posted 10 times. > > Come the singularity, some people will lose their grips on reality and find themselves believing such absurdities as that digital depictions of people have real mental states. A few lonely philosophers of my stripe will try in vain to restore their sanity. As far as the future is concerned it really doesn't matter if Swobe's ideas are right or wrong, either way they're as dead as the Dodo. Even if he's 100% right and I am 100% wrong people with my ideas will have vastly more influence than people like him because we will not be held back by superstitious ideas about "THE ORIGINAL". So it's pedal to the metal upgrading, Jupiter brain ahead. Swobe just won't be able to keep up with the electronic competition. Only a few axons in the brain can send signals as fast as 100 meters per second; non-myelinated axons are only able to go about 1 meter per second. Light moves at 300,000,000 meters per second. Perhaps after the singularity the more conservative and superstitious among us could still survive in some little backwater somewhere, like the Amish do today, but I doubt it. > I think you want me to believe that my watch has a small amount of consciousness by virtue of it having a small amount of intelligence. But I don't think that makes even a small amount of sense. It seems to me that my watch has no consciousness I'm not surprised Swobe can't make sense of it all, nothing in the Biological sciences makes any sense without Evolution, and he has shown a profound ignorance not only of that theory but of the fossil record in general. Evolution found it far harder to come up with intelligence than consciousness: the brain structures that produce the basic emotions we share with many other animals are many hundreds of millions of years old, while the higher brain structures that produce language, mathematics and abstract thought in general, things that make humans unique, are less than a million years old and possibly much less. Swobe does not use his higher brain structures to think with and prefers to think with his gut; but many animals have an intestinal tract and to my knowledge none of them are particularly good philosophers. > Consciousness, as I mean it today, entails the ability to have conscious intentional states. That is, it entails the ability to have something consciously "in mind" So consciousness means the ability to be conscious, that is to say the ability to consciously think about stuff. Thank you so much for those words of wisdom! > If I make a jpeg of you with my digital camera, that digital depiction of you will have no mental states. Swobe may very well be right in this particular instance, but it illustrates the useless nature of the grotesque exercises he gives the grandiose name "thought experiment".
Swobe has no way to directly measure the mental states even of his fellow human beings, much less those of a digital camera; and yet over the last few months he has made grand pronouncements about the mental states of literally hundreds of things. To add insult to injury the mental state of things is exactly what he's trying to prove; he just doesn't understand that saying X has no consciousness is not the same as proving X has no consciousness. > The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. Swobe admits, and in fact seems delighted by the fact, that he has absolutely no idea what causes consciousness; nevertheless he thinks he can always determine a priori what has consciousness and what does not, and it has nothing to do with the way they behave. The conjunction of a person with bits of paper might display intelligence, in fact there is no doubt that it could, but it could never be conscious because, because, well just because; but Swobe thinks 3 pounds of grey goo being conscious is perfectly logical. Can Swobe explain why one thing is ridiculous and the other logical? Nope, it's just that he's accustomed to one and not the other. That's it. > Depictions of things, digital or otherwise, do not equal the things they depict Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did. > the man cannot grok the symbols by virtue of manipulating them according to the rules of syntax > Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did. > Depictions of things do not equal the things they depict. Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did. John K Clark From lacertilian at gmail.com Mon Feb 15 21:45:48 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 13:45:48 -0800 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <4e3a29501002151315o33b97318w676b530f396ad850@mail.gmail.com> References: <431085.67926.qm@web36504.mail.mud.yahoo.com> <4e3a29501002151315o33b97318w676b530f396ad850@mail.gmail.com> Message-ID: 2010/2/15 Will Steinberg : > When you ask for a proof of consciousness, you validate the existence of it > in your question. I suggest rephrasing it in the manner you mean, which > seems to boil down to "is any sort of dualism real?", an important question > but not the same one. But I don't mean it in that manner! I'm not asking for proof that consciousness exists *at all*. I'm asking for proof that consciousness exists *in a given system*. So, thought experiment. I'm going to jump in a box, and then ask you to prove that the box contains consciousness. Can you? Can anyone? (Obviously, if you jump in the box instead, I can not. But perhaps this only means that I make a bad metrologist.)
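The box experiment can be restated as code. Below is a minimal sketch in Python; the two occupant classes and the pinch stimulus are hypothetical stand-ins invented purely for illustration, not anyone's actual model of a mind. It shows only that a probe which sends stimuli in and reads behaviour out sees nothing but an input-output mapping.

```python
# A toy illustration, not anyone's actual theory of mind: both occupant
# classes below are hypothetical stand-ins invented for this sketch.

class ConsciousOccupant:
    """Stipulated to have subjective experience."""
    def respond(self, stimulus):
        # Whatever inner experience this occupant has, the probe only
        # ever sees the string it returns.
        return "Ouch! Stop that."

class ZombieOccupant:
    """Stipulated to have no subjective experience at all."""
    def respond(self, stimulus):
        # Identical input-output mapping, by construction.
        return "Ouch! Stop that."

def probe(box_occupant, stimulus="pinch"):
    """All any behavioural experiment can do: stimulus in, behaviour out."""
    return box_occupant.respond(stimulus)

# The two verdicts are identical, so no experiment built from probes of
# this kind can tell the occupants apart.
assert probe(ConsciousOccupant()) == probe(ZombieOccupant())
print("The probe cannot distinguish the two occupants.")
```

Any such test is a function of the occupant's input-output mapping alone, so two occupants that share that mapping are indistinguishable by construction; that is the force of the box (and of the zombie) objection.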
From stefano.vaj at gmail.com Mon Feb 15 22:02:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Feb 2010 23:02:50 +0100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <275598.51562.qm@web36502.mail.mud.yahoo.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> Message-ID: <580930c21002151402r6b60b32es4188cb333aac8c61@mail.gmail.com> On 15 February 2010 21:12, Gordon Swobe wrote: > Can you see these words, Spencer? If so then you have what I mean by consciousness. Come on. Any trivial detector can differentiate black words on a white background. The burden of proof regards the ineffable difference that one would make regarding one's own perception (and other entities he cares to project on such difference). -- Stefano Vaj From stefano.vaj at gmail.com Mon Feb 15 22:12:15 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Feb 2010 23:12:15 +0100 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: References: Message-ID: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> On 15 February 2010 19:43, Spencer Campbell wrote: > If you're going to bring the scientific method into this, then the > burden is on you to provide an experiment which tests for the > existence of consciousness. This is an unreasonable demand. The scientific method cannot offer evidence of something which in somebody's view is not phenomenal by definition, but is an a priori of his worldview. -- Stefano Vaj From steinberg.will at gmail.com Mon Feb 15 22:28:16 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 15 Feb 2010 17:28:16 -0500 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <580930c21002151402r6b60b32es4188cb333aac8c61@mail.gmail.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> <580930c21002151402r6b60b32es4188cb333aac8c61@mail.gmail.com> Message-ID: <4e3a29501002151428t5d0bd99coa29a9495e2e3bc6e@mail.gmail.com> Sorry, I assumed by Gordon's response that the argument was about its true existence. But it would seem that, since behavioral observation cannot prove zombism, one needs to know the mental structure or emergent properties leading to consciousness, which leaves us in the same place, with the same question of "What is consciousness?", though I'm sure there exist some tests that happen to have scores that correlate very strongly with consciousness. From stefano.vaj at gmail.com Tue Feb 16 00:24:20 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Feb 2010 01:24:20 +0100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <431764.52879.qm@web36501.mail.mud.yahoo.com> Message-ID: <580930c21002151624m41cefb81x57ce57745c46560c@mail.gmail.com> On 14 February 2010 04:41, Stathis Papaioannou wrote: > The watch performs the function of telling the time just fine. It > simulates a sundial or hour glass in this respect. Why, you should know by now that this does not mean that it emulates the qualia of a sundial or hour glass...
:-D -- Stefano Vaj From lacertilian at gmail.com Tue Feb 16 01:09:29 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 17:09:29 -0800 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> References: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> Message-ID: Stefano Vaj : > On 15 February 2010 19:43, Spencer Campbell wrote: > > If you're going to bring the scientific method into this, then the > > burden is on you to provide an experiment which tests for the > > existence of consciousness. > > This is an unreasonable demand. The scientific method cannot offer > evidence of something which in somebody's view is not phenomenal by > definition, but is an a priori of his worldview. For reference, the remark that spurred this reaction from me was: Gordon Swobe : > I consider it a scientific fact that consciousness arises between the nematode and the human. To demand that a person provide a scientific basis for what they themselves consider a scientific fact is the very definition* of reasonableness! If he had said "objective fact" instead, I would only have been very dubious about it. It's very close to synonymous, but just fuzzy enough to avoid the fundamental error made by conflating "scientific" with, say, "inarguable". In case it isn't clear: the fact in question is not objective, scientific, or inarguable. Certainly not inarguable. Not on Extropy-Chat, at least. *No not really. I'm being... metaphorical! Yeah, let's go with that. From gts_2000 at yahoo.com Tue Feb 16 01:36:11 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 17:36:11 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: Message-ID: <742345.22234.qm@web36504.mail.mud.yahoo.com> To deny the existence of consciousness one must deny one's own awareness of one's own experience. Some muddled thinkers try to do this. I find it hard to take them seriously. -gts From lacertilian at gmail.com Tue Feb 16 02:01:36 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 18:01:36 -0800 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <742345.22234.qm@web36504.mail.mud.yahoo.com> References: <742345.22234.qm@web36504.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > To deny the existence of consciousness one must deny one's own awareness of one's own experience. Some muddled thinkers try to do this. I find it hard to take them seriously. To deny consciousness in general, yes. To deny a specific case of consciousness, no. We've been over this! I can't deny that I myself am conscious (according to Pollock). Such would be irresponsible at best. However, I can easily deny that my computer is conscious; maybe not as easily as Searle, but still pretty easily. By exactly the same token, I can deny that you (the reader) are conscious. For this thread only, consider me a solipsist. I hold that only Spencer Campbell brains are capable of sustaining consciousness. I grant that other human brains are similar to Spencer Campbell brains, in precisely the same way that a correctly-programmed digital computer is similar to SC brains, but I reject the notion that they generate even the most rudimentary of subjective experiences. What logic could you possibly bring to bear on a person with this set of beliefs? Call me irrational if you will, but you can't call me illogical. Solipsism is a self-consistent position.
I don't want it to be, so I would appreciate it if someone could show me otherwise, but I am presently convinced that it is. Have I gone over my eight-post limit today? I should start keeping track of that. From avantguardian2020 at yahoo.com Tue Feb 16 03:45:04 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Mon, 15 Feb 2010 19:45:04 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: References: <742345.22234.qm@web36504.mail.mud.yahoo.com> Message-ID: <948496.39262.qm@web65615.mail.ac4.yahoo.com> ----- Original Message ---- > From: Spencer Campbell > To: ExI chat list > Sent: Mon, February 15, 2010 6:01:36 PM > Subject: Re: [ExI] The alleged existence of consciousness > I can't deny that I myself am conscious (according to Pollock). Such would be irresponsible at best. However, I can easily deny that my computer is conscious; maybe not as easily as Searle, but still pretty easily. By exactly the same token, I can deny that you (the reader) are conscious. Since you seem to be buying into Descartes' "evil daemon" argument, allow me to play the devil's advocate. You could deny the consciousness of other beings, but it is extraordinarily difficult to do, because it goes against your survival instincts. You may *claim* that you are the only conscious person in existence but I *dare* you to act on that belief even for a single day. You will find yourself reacting to and anticipating the actions of people around you, no matter how hard you try. If you are brave enough to scientifically test your hypothesis, the easiest experiment to see if someone else is conscious or not is to pinch them. I guarantee you will get some data on which to base your conclusion. > For this thread only, consider me a solipsist. I hold that only Spencer Campbell brains are capable of sustaining consciousness. I grant that other human brains are similar to Spencer Campbell brains, in precisely the same way that a correctly-programmed digital computer is similar to SC brains, but I reject the notion that they generate even the most rudimentary of subjective experiences. From worm to man, *pain* is possibly the most rudimentary of subjective experiences. Therefore I challenge you to pinch every creature you meet for just a single day and you will have your answer. > What logic could you possibly bring to bear on a person with this set of beliefs? The logic of pain. That which reacts to pain *must* have the subjective experience of pain because there can be no such thing as objective pain or objective suffering of any kind for that matter. > Call me irrational if you will, but you can't call me illogical. Solipsism is a self-consistent position. I don't want it to be, so I would appreciate it if someone could show me otherwise, but I am presently convinced that it is. It is a hypocritical position where a person's arguments and a person's actions are in contradiction. > Have I gone over my eight-post limit today? I should start keeping track of that. You need not answer me until you have performed your experiment or chickened out of it. ;-) Stuart LaForge "Never express yourself more clearly than you think."
- Niels Bohr From cluebcke at yahoo.com Mon Feb 15 20:22:42 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Mon, 15 Feb 2010 12:22:42 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: <275598.51562.qm@web36502.mail.mud.yahoo.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> Message-ID: <330027.14333.qm@web111201.mail.gq1.yahoo.com> The question, I believe, is how you determine that somebody (or something) else is "conscious", not how you determine that you yourself are. If there isn't a reproducible, agreed-upon method for determining the truth of the statement "System X is conscious", then can there be any point in having a conversation about it? In other words, don't tell me how to determine if I'm conscious; tell me how to determine that you are. ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Mon, February 15, 2010 12:12:32 PM Subject: Re: [ExI] The alleged existence of consciousness --- On Mon, 2/15/10, Spencer Campbell wrote: > If you're going to bring the scientific method into this, > then the burden is on you to provide an experiment which tests for > the existence of consciousness. Can you see these words, Spencer? If so then you have what I mean by consciousness. -gts From cluebcke at yahoo.com Tue Feb 16 02:58:50 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Mon, 15 Feb 2010 18:58:50 -0800 (PST) Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: References: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> Message-ID: <361849.34294.qm@web111207.mail.gq1.yahoo.com> Indeed, if it's a scientific fact that consciousness arises between the nematode and the human, then consciousness must be detectable. If not, then there are some very nonstandard versions of the terms "scientific" and/or "fact" in play. ----- Original Message ---- From: Spencer Campbell To: ExI chat list Sent: Mon, February 15, 2010 5:09:29 PM Subject: Re: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) Stefano Vaj : > On 15 February 2010 19:43, Spencer Campbell wrote: > > If you're going to bring the scientific method into this, then the > > burden is on you to provide an experiment which tests for the > > existence of consciousness. > > This is an unreasonable demand. The scientific method cannot offer > evidence of something which in somebody's view is not phenomenal by > definition, but is an a priori of his worldview. For reference, the remark that spurred this reaction from me was: Gordon Swobe : > I consider it a scientific fact that consciousness arises between the nematode and the human. To demand that a person provide a scientific basis for what they themselves consider a scientific fact is the very definition* of reasonableness! If he had said "objective fact" instead, I would only have been very dubious about it. It's very close to synonymous, but just fuzzy enough to avoid the fundamental error made by conflating "scientific" with, say, "inarguable". In case it isn't clear: the fact in question is not objective, scientific, or inarguable. Certainly not inarguable. Not on Extropy-Chat, at least. *No not really. I'm being... metaphorical! Yeah, let's go with that.
From stathisp at gmail.com Tue Feb 16 08:35:54 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 16 Feb 2010 19:35:54 +1100 Subject: [ExI] glutamine and life-extension In-Reply-To: <505300.77015.qm@web36506.mail.mud.yahoo.com> References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On 16 February 2010 05:10, Gordon Swobe wrote: > Stathis, > > All this talk of neurons reminds me of a paper I wrote circa 1999: > > Glutamine Based Growth Hormone Releasing Products: A Bad Idea? > http://www.newtreatments.org/loadlocal.php?hid=974 > > My article above created a surprising amount of controversy among life-extensionists. I closed down my website, but recently found my paper republished on the site above without my permission. > > Thought you might find it interesting given your profession and the general theme. Thank-you for this, it is not something I knew much about. It's important to know about the *negative* effects of any treatment, and this is often overlooked by those into non-conventional medicine. Of course, it is sometimes overlooked by those prescribing conventional treatments as well. Nevertheless, my bias is to follow conventional medicine, and only rarely does conventional medicine consider that there is enough evidence to recommend treatments with dietary supplements.
A common response to this is that medical professionals are unduly influenced by drug companies, and there may be some truth to that, but I work in the public health system where there is an explicit emphasis on using the cheapest effective treatment: amino acids in bulk are very cheap compared to most drugs, and they would be used if there were good evidence for their efficacy. The other point to make is that much of what doctors do is longevity treatment, even if it isn't seen as that. Preventing heart disease, diabetes, cancer, dementia etc. is equivalent to preventing the patient from getting physiologically old and decrepit and dying early. -- Stathis Papaioannou From stathisp at gmail.com Tue Feb 16 10:56:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 16 Feb 2010 21:56:12 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <963975.24812.qm@web36502.mail.mud.yahoo.com> References: <466012.12578.qm@web111205.mail.gq1.yahoo.com> <963975.24812.qm@web36502.mail.mud.yahoo.com> Message-ID: On 16 February 2010 03:10, Gordon Swobe wrote: > --- On Mon, 2/15/10, Christopher Luebcke wrote: > >> If my understanding of the CRA is >> correct (it may not be), it seems to me that Searle is >> arguing that because one component of the system does not >> understand the symbols, the system doesn't understand the >> symbols. This to me is akin to claiming that because my >> fingers do not understand the words they are currently >> typing out, neither do I. > > Searle speaks for himself: > > My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him. > > Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. > > http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html I have proposed the example of a brain which has enough intelligence to know what the neurons are doing: "neuron no. 15,576,456,757 in the left parietal lobe fires in response to noradrenaline, then breaks down the noradrenaline by means of MAO and COMT", and so on, for every brain event. That would be the equivalent of the man in the CR: there is understanding of the low level events, but no understanding of the high level intelligent behaviour which these events give rise to. Do you see how there might be *two* intelligences here, a high level and a low level one, with neither necessarily being aware of the other? 
-- Stathis Papaioannou From stathisp at gmail.com Tue Feb 16 11:09:22 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 16 Feb 2010 22:09:22 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <930797.89044.qm@web36503.mail.mud.yahoo.com> References: <930797.89044.qm@web36503.mail.mud.yahoo.com> Message-ID: On 16 February 2010 00:49, Gordon Swobe wrote: > --- On Mon, 2/15/10, Stathis Papaioannou wrote: > >> You keep repeating this as a fact but you don't explain why >> a digital depiction of a person won't have real mental states. > > If I make a jpeg of you with my digital camera, that digital depiction of you will have no mental states. If I make a digital movie of you on my webcam, that digital depiction of you will have no mental states. A complete three-dimensional animated digital depiction of you made with futuristic digital simulation technology will amount to just another kind of depiction of you, and so it will likewise have no mental states. > > It does not matter whether we create our depictions of things on the walls of caves or on computers. Depictions of things do not equal the things they depict. But the picture has some *properties* of the thing it represents. A statue has other properties, and a computer simulation has other properties still. There is no reason why "mind", alone among all the properties that a human has, should not be duplicable in anything other than the original substrate. -- Stathis Papaioannou From stathisp at gmail.com Tue Feb 16 11:30:26 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 16 Feb 2010 22:30:26 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <153083.39349.qm@web36503.mail.mud.yahoo.com> References: <153083.39349.qm@web36503.mail.mud.yahoo.com> Message-ID: On 16 February 2010 02:28, Gordon Swobe wrote: >>> You beg the question of non-organic consciousness. As >>> far as we know, "non-organic alien visitors" amounts to a >>> completely meaningless concept. >> >> What?? > > You ask what I would say to non-organic alien visitors, and I suppose you assume those non-organic alien visitors have consciousness. But non-organic consciousness is what is at issue here. I thought you said before that you did not rule out the possibility of non-organic consciousness. If the aliens had clockwork brains, you might allow that they are conscious, but not if they have digital computers as brains. Of course, their technology might be so weird that you would be unable to tell what was clockwork and what was a digital computer, especially if there was a mixture of the two. Nevertheless, even if they are zombies, you can still have a discussion with them once you have figured out each other's language. The aliens would insist that they were conscious, and question whether you were conscious. What could you say to them to convince them otherwise? >>> As for nematodes, I have no idea whether their >>> primitive nervous systems support what I mean by >>> consciousness. I doubt it but I don't know. I classify them >>> in the gray area between unconscious amoebas and conscious >>> humans. >> >> At some point, either gradually or abruptly, consciousness >> will happen in the transition from nematode to human or watch to AI. > > I consider it a scientific fact that consciousness arises between the nematode and the human. But only in science-fiction does consciousness happen in digital watches or digital computers.
I guess you mean that you know that you're conscious - that is your empirical evidence. But the alien visitors would say the same, and there is no test you could do on them (or they on you) to prove consciousness. If they presented you with an argument purporting to prove that organic matter can't have a mind you would dismiss it out of hand, no matter how clever it was; and similarly they would dismiss any of your arguments purporting to show that they cannot be conscious. -- Stathis Papaioannou From stefano.vaj at gmail.com Tue Feb 16 12:27:04 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Feb 2010 13:27:04 +0100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <742345.22234.qm@web36504.mail.mud.yahoo.com> References: <742345.22234.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c21002160427m18cd353es7e757faf68e8bf77@mail.gmail.com> On 16 February 2010 02:36, Gordon Swobe wrote: > To deny the existence of consciousness one must deny one's own awareness of one's own experience. Some muddled thinkers try to do this. I find it hard to take them seriously. OK, this is your mantra. But the argument that since nobody denies the existence of, say, "entrepreneurial spirit", or the "spirit of the old days", we have to infer that spirits float around us is not very compelling. Consciousness is a perfectly useful concept to describe the set of phenomena leading us to conclude, e.g., that the perpetrator had not actually been knocked unconscious when the crime was committed. Its "entification" is however a Platonic petitio principii, similar to that maintaining that "Evil exists" in the sense of a horned monster living very deeply underground. Entia non sunt multiplicanda sine necessitate. -- Stefano Vaj From stefano.vaj at gmail.com Tue Feb 16 12:34:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Feb 2010 13:34:50 +0100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <330027.14333.qm@web111201.mail.gq1.yahoo.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> <330027.14333.qm@web111201.mail.gq1.yahoo.com> Message-ID: <580930c21002160434n3e8bfa0cm21b709ed15eae54f@mail.gmail.com> On 15 February 2010 21:22, Christopher Luebcke wrote: > The question, I believe, is how you determine that somebody (or something) else is "conscious", not how you determine that you yourself are. I would go a little farther. Consciousness is a social construct even when it refers to oneself (and, btw, social constructs *do* exist, they simply are not homunculi). And a-priori evidence does not really cut it otherwise. Somebody can well be persuaded to be conscious, and even be subvocalising on the subject, e.g., while dreaming, having a near-death experience, or being under some drugs, even though in fact he or she is not. -- Stefano Vaj From gts_2000 at yahoo.com Tue Feb 16 12:45:56 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 16 Feb 2010 04:45:56 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <534964.96008.qm@web36503.mail.mud.yahoo.com> --- On Tue, 2/16/10, Stathis Papaioannou wrote: > I have proposed the example of a brain which has enough > intelligence to know what the neurons are doing: "neuron no. > 15,576,456,757 in the left parietal lobe fires in response to > noradrenaline, then breaks down the noradrenaline by means of MAO and > COMT", and so on, for every brain event.
That would be the equivalent of > the man in the CR: there is understanding of the low level events, but no > understanding of the high level intelligent behaviour which these events > give rise to. Do you see how there might be *two* intelligences here, a > high level and a low level one, with neither necessarily being aware of > the other? Doesn't matter. If you cannot see yourself understanding the symbols as either the man considered as the program (IN the room or AS a neuron) or as the man considered as the system (AS the room or AS a brain) then Searle has proved his point. And it seems he has proved his point to you, but that you want nevertheless to fabricate some imaginary way around the conclusion. These attempts of yours amount to saying "Suppose that even though Searle is right that the man cannot understand the symbols either as the program or as the system, pink unicorns on the moon do nevertheless understand the symbols." :) -gts From gts_2000 at yahoo.com Tue Feb 16 13:20:00 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 16 Feb 2010 05:20:00 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: <580930c21002160434n3e8bfa0cm21b709ed15eae54f@mail.gmail.com> Message-ID: <801240.98026.qm@web36505.mail.mud.yahoo.com> --- On Tue, 2/16/10, Stefano Vaj wrote: > Consciousness is a social construct... A social construct? Is digestion a social construct? Seems to me consciousness amounts to just another biological process like digestion. The brain does this thing called consciousness when in the awake state or in the state of sleep+dreaming. It stops doing it temporarily after physical shocks such as blows to the head, or when in the presence of too much alcohol, or when it sleeps but does not dream. It stops for the last time at death. -gts From stathisp at gmail.com Tue Feb 16 13:54:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 00:54:27 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <534964.96008.qm@web36503.mail.mud.yahoo.com> References: <534964.96008.qm@web36503.mail.mud.yahoo.com> Message-ID: On 16 February 2010 23:45, Gordon Swobe wrote: > --- On Tue, 2/16/10, Stathis Papaioannou wrote: > >> I have proposed the example of a brain which has enough >> intelligence to know what the neurons are doing: "neuron no. >> 15,576,456,757 in the left parietal lobe fires in response to >> noradrenaline, then breaks down the noradrenaline by means of MAO and >> COMT", and so on, for every brain event. That would be the equivalent of >> the man in the CR: there is understanding of the low level events, but no >> understanding of the high level intelligent behaviour which these events >> give rise to. Do you see how there might be *two* intelligences here, a >> high level and a low level one, with neither necessarily being aware of >> the other? > > Doesn't matter. If you cannot see yourself understanding the symbols as either the man considered as the program (IN the room or AS a neuron) or as the man considered as the system (AS the room or AS a brain) then Searle has proved his point. > > And it seems he has proved his point to you, but that you want nevertheless to fabricate some imaginary way around the conclusion. These attempts of yours amount to saying "Suppose that even though Searle is right that the man cannot understand the symbols either as the program or as the system, pink unicorns on the moon do nevertheless understand the symbols." 
:) I think you have missed the point: even though we agree that the man who internalises the room has no understanding, this does *not* mean that the system has no understanding. The man's intelligence is only a component of the system even if the man internalises the room. As a general comment, it is normal in philosophical debate to set up a sometimes complex argument, thought experiment, etc. in order to prove a point which the parties disagree on. I might think that the whole CRA and the idea it purports to prove is ridiculous, but it's bad form to just dismiss an argument like that. Instead, I have to pick it apart, show where there are hidden assumptions, or think of a variation which leads to the opposite conclusion. This sometimes leads to the pursuit of what you may consider a minor technical point, while you are eager to return to restating what you consider the big picture. But it is important to pursue these apparently minor technical points, since if they fall, the whole argument falls. That does not necessarily mean the initial proposition was wrong, but it does mean that the particular argument chosen to support it is wrong, and cannot be used any more. -- Stathis Papaioannou From stathisp at gmail.com Tue Feb 16 13:57:53 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 00:57:53 +1100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <801240.98026.qm@web36505.mail.mud.yahoo.com> References: <580930c21002160434n3e8bfa0cm21b709ed15eae54f@mail.gmail.com> <801240.98026.qm@web36505.mail.mud.yahoo.com> Message-ID: On 17 February 2010 00:20, Gordon Swobe wrote: > --- On Tue, 2/16/10, Stefano Vaj wrote: > >> Consciousness is a social construct... > > A social construct? Is digestion a social construct? > > Seems to me consciousness amounts to just another biological process like digestion. The brain does this thing called consciousness when in the awake state or in the state of sleep+dreaming. It stops doing it temporarily after physical shocks such as blows to the head, or when in the presence of too much alcohol, or when it sleeps but does not dream. It stops for the last time at death. But digestion is nothing over and above the chemical breakdown of food in the gut. Similarly, consciousness is nothing over and above the enacting of intelligent behaviour. -- Stathis Papaioannou From sparge at gmail.com Tue Feb 16 15:03:09 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Feb 2010 10:03:09 -0500 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On Tue, Feb 16, 2010 at 3:35 AM, Stathis Papaioannou wrote: > ... Nevertheless, my bias is to follow conventional > medicine, and only rarely does conventional medicine consider that > there is enough evidence to recommend treatments with dietary > supplements. A common response to this is that medical professionals > are unduly influenced by drug companies, and there may be some truth > to that, but I work in the public health system where there is an > explicit emphasis on using the cheapest effective treatment: amino > acids in bulk are very cheap compared to most drugs, and they would be > used if there were good evidence for their efficacy. So who is motivated to research the therapeutic effects of cheap supplements? Drug companies are motivated by the profit potential of a new, exclusive drug. There are no big payoffs in researching cheap supplements.
-Dave From cluebcke at yahoo.com Tue Feb 16 16:15:31 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 16 Feb 2010 08:15:31 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <534964.96008.qm@web36503.mail.mud.yahoo.com> References: <534964.96008.qm@web36503.mail.mud.yahoo.com> Message-ID: <782982.71028.qm@web111207.mail.gq1.yahoo.com> Gordon, if I may ask directly: How do you determine whether someone, or something, besides yourself is conscious? ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Tue, February 16, 2010 4:45:56 AM Subject: Re: [ExI] Semiotics and Computability --- On Tue, 2/16/10, Stathis Papaioannou wrote: > I have proposed the example of a brain which has enough > intelligence to know what the neurons are doing: "neuron no. > 15,576,456,757 in the left parietal lobe fires in response to > noradrenaline, then breaks down the noradrenaline by means of MAO and > COMT", and so on, for every brain event. That would be the equivalent of > the man in the CR: there is understanding of the low level events, but no > understanding of the high level intelligent behaviour which these events > give rise to. Do you see how there might be *two* intelligences here, a > high level and a low level one, with neither necessarily being aware of > the other? Doesn't matter. If you cannot see yourself understanding the symbols as either the man considered as the program (IN the room or AS a neuron) or as the man considered as the system (AS the room or AS a brain) then Searle has proved his point. And it seems he has proved his point to you, but that you want nevertheless to fabricate some imaginary way around the conclusion. These attempts of yours amount to saying "Suppose that even though Searle is right that the man cannot understand the symbols either as the program or as the system, pink unicorns on the moon do nevertheless understand the symbols." :) -gts From jonkc at bellsouth.net Tue Feb 16 16:27:11 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 16 Feb 2010 11:27:11 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <801240.98026.qm@web36505.mail.mud.yahoo.com> References: <801240.98026.qm@web36505.mail.mud.yahoo.com> Message-ID: <4CE641C3-70EE-4B6C-AA81-0EEBA87A36DC@bellsouth.net> Since my last post Gordon Swobe has posted 2 times. > > Seems to me consciousness amounts to just another biological process like digestion. No, Swobe is incorrect even about what things seem like to him. Swobe can explain how digestion came to occur on planet Earth but he has no explanation of how consciousness did. I have an explanation for both, but then unlike Swobe I think intelligent behavior implies consciousness. Swobe says he believes in Evolution but the fact that he thinks intelligence and consciousness are separate things and yet Evolution produced them both is clear proof that the man has no comprehension of how that vitally important process works. > If you cannot see yourself understanding the symbols as either the man considered as the program (IN the room or AS a neuron) or as the man considered as the system (AS the room or AS a brain) then Searle has proved his point.
True, I can not see myself understanding those Chinese symbols, but I can't see myself manipulating them to produce intelligent output either; perhaps something could do that but whatever it is it wouldn't be human. > These attempts of yours amount to saying "Suppose that even though Searle is right that the man cannot understand the symbols either as the program or as the system, pink unicorns on the moon do nevertheless understand the symbols." This shows that not only does Swobe misunderstand Evolution, he doesn't understand understanding either. Swobe seems obsessed with determining the exact spatial coordinates of consciousness, which makes about as much sense as demanding to know where fast is, or the number eleven. John K Clark From stefano.vaj at gmail.com Tue Feb 16 17:13:51 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Feb 2010 18:13:51 +0100 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: References: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> Message-ID: <580930c21002160913l51d29a10sab21e88db547a5fe@mail.gmail.com> On 16 February 2010 02:09, Spencer Campbell wrote: > Stefano Vaj : >> This is an unreasonable demand. The scientific method cannot offer >> evidence of something which in somebody's view is not phenomenal by >> definition, but is an a priori of his worldview. > > For reference, the remark that spurred this reaction from me was: > > Gordon Swobe : >> I consider it a scientific fact that consciousness arises between the nematode and the human. > > To demand that a person provide a scientific basis for what they > themselves consider a scientific fact is the very definition* of > reasonableness! I did not write "unfair", I said "unreasonable". If somebody were to tell you "I consider it a fact that Allah exists", would you take him at his own word? :-) -- Stefano Vaj From max at maxmore.com Tue Feb 16 17:50:20 2010 From: max at maxmore.com (Max More) Date: Tue, 16 Feb 2010 11:50:20 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled Message-ID: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> Interesting: Phil Jones' momentous Q&A with BBC reopens the "science is settled" issues - emperor is, if not naked, scantily clad, vindicating key skeptic arguments http://wattsupwiththat.com/2010/02/14/phil-jones-momentous-qa-with-bbc-reopens-the-science-is-settled-issues/ Columnist Indur Goklany summarizes: Specifically, the Q-and-As confirm what many skeptics have long suspected: Neither the rate nor magnitude of recent warming is exceptional. There was no significant warming from 1998-2009. According to the IPCC we should have seen a global temperature increase of at least 0.2°C per decade. The IPCC models may have overestimated the climate sensitivity for greenhouse gases, underestimated natural variability, or both. This also suggests that there is a systematic upward bias in the impacts estimates based on these models just from this factor alone. The logic behind attribution of current warming to well-mixed man-made greenhouse gases is faulty. The science is not settled, however unsettling that might be. There is a tendency in the IPCC reports to leave out inconvenient findings, especially in the part(s) most likely to be read by policy makers.
Compare the above to the "orthodox" view: http://www.realclimate.org/index.php/archives/2010/02/daily-mangle/ ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From lacertilian at gmail.com Tue Feb 16 19:07:46 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 16 Feb 2010 11:07:46 -0800 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <948496.39262.qm@web65615.mail.ac4.yahoo.com> References: <742345.22234.qm@web36504.mail.mud.yahoo.com> <948496.39262.qm@web65615.mail.ac4.yahoo.com> Message-ID: The Avantguardian : >Spencer Campbell : >> What logic could you possibly bring to bear on a person with >> this set of beliefs? > > The logic of pain. That which reacts to pain *must* have the subjective experience of pain because there can be no such thing as objective pain or objective suffering of any kind for that matter. The ouch-test! Okay, I'll play your little game. First, it's obvious that false negatives are possible. Even if a reaction to pinching implies consciousness, it does not follow that no reaction implies no consciousness. If I concentrate on it, I can ignore a pinch of virtually any intensity. Even so, it's still scientifically useful in theory. Let's try it out on a Furby! http://www.youtube.com/watch?v=aPgfP5UO8o8 Furbies do not appear to be capable of pain. Perhaps if we programmed one to scream in agony and gave it a linear actuator so that it could propel itself away whenever touched? These sedentary, polysensuously perverse monstrosities are clearly unconscious; yet with the addition of just a few strong negative reactions, we could turn one into the perfect consciousness generator. Too sarcastic? You see my point, anyway. It's trivially easy to reproduce the outward appearance of pain without the corresponding subjective experience. Consciousness does not imply a reaction to pinching. A reaction to pinching does not imply consciousness. Thank you, try again! The Avantguardian : > It is a hypocritical position where a person's arguments and a person's actions are in contradiction. Granted. Hypocrisy does not imply falsehood, you'll note. The Avantguardian : > You need not answer me until you have performed your experiment or chickened out of it. ;-) That's okay, I have time! I'll note that your test would work if we had rayguns that shoot pure ionized pain, but sadly qualia have not yet been weaponized. C'est la vie. From lacertilian at gmail.com Tue Feb 16 20:28:40 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 16 Feb 2010 12:28:40 -0800 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: <580930c21002160913l51d29a10sab21e88db547a5fe@mail.gmail.com> References: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> <580930c21002160913l51d29a10sab21e88db547a5fe@mail.gmail.com> Message-ID: Stefano Vaj : > Spencer Campbell : >> To demand that a person provide a scientific basis for what they >> themselves consider a scientific fact is the very definition* of >> reasonableness! > > I did not write "unfair", I said "unreasonable". If somebody were to > tell you "I consider it a fact that Allah exists", would you take him > at his own word? :-) I would. I find it completely believable that someone would consider it a fact that Allah exists, and I have no objections to the phrasing.
I don't agree with them, personally, but I have no reason to call their belief logically invalid. The word "unreasonable" means something like "lacking reason", or "irrational". Of course I had some reason to make my ultimatum, otherwise I wouldn't have done so, and if I'm being irrational then I'm not even sane enough to see it. To insist on correct usage of terminology, when that terminology is being twisted just to lend false credibility to a statement, seems perfectly rational to me. I think it was a well-reasoned position, but you could conceivably convince me otherwise; I can be reasoned with. I am nothing if not reasonable. OH WAIT A SMILEY FACE I didn't need to deliver a stern dissertation at all! Your criticism was a mere jibe, some harmless japery, all in good fun. Ha ha. Ha. From cluebcke at yahoo.com Tue Feb 16 20:52:35 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 16 Feb 2010 12:52:35 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> References: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> Message-ID: <354088.25240.qm@web111202.mail.gq1.yahoo.com> I'll continue to rely mainly on what peer-reviewed science has to say on the matter, not the IPCC or the BBC. I do find it sad that suddenly, after months of having his character shat upon, there's a great rush to take something Phil Jones has said as unquestionably accurate. ----- Original Message ---- From: Max More To: Extropy-Chat Sent: Tue, February 16, 2010 9:50:20 AM Subject: [ExI] Phil Jones acknowledging that climate science isn't settled Interesting: Phil Jones' momentous Q&A with BBC reopens the "science is settled" issues - emperor is, if not naked, scantily clad, vindicating key skeptic arguments http://wattsupwiththat.com/2010/02/14/phil-jones-momentous-qa-with-bbc-reopens-the-science-is-settled-issues/ Columnist Indur Goklany summarizes: Specifically, the Q-and-As confirm what many skeptics have long suspected: Neither the rate nor magnitude of recent warming is exceptional. There was no significant warming from 1998-2009. According to the IPCC we should have seen a global temperature increase of at least 0.2°C per decade. The IPCC models may have overestimated the climate sensitivity for greenhouse gases, underestimated natural variability, or both. This also suggests that there is a systematic upward bias in the impacts estimates based on these models just from this factor alone. The logic behind attribution of current warming to well-mixed man-made greenhouse gases is faulty. The science is not settled, however unsettling that might be. There is a tendency in the IPCC reports to leave out inconvenient findings, especially in the part(s) most likely to be read by policy makers. Compare the above to the "orthodox" view: http://www.realclimate.org/index.php/archives/2010/02/daily-mangle/ ------------------------------------- Max More, Ph.D.
Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From bbenzai at yahoo.com Tue Feb 16 20:36:57 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 16 Feb 2010 12:36:57 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <916006.14487.qm@web113610.mail.gq1.yahoo.com> Jef wrote: > The logic of the CRA is correct. But it reasons from a flawed > premise: That the human organism has this somehow ontologically > special thing called "consciousness." > So restart the music, and the merry-go-round. I'm surprised no one's > mentioned the Giant Look Up Table yet. I was thinking about this very thing (even though I said I'd no longer discuss the dread CRA), and I disagree that the logic is correct. It suffers from a fundamental flaw, as I see it: the (usually unquestioned) assumption that it's possible, even in principle, to have a set of rules that can answer any possible question about a set of data (the 'story'), in a consistently sensible fashion, without having any 'understanding'. Searle just casually tosses this assertion out as though it were obvious that it was possible, when it seems to me to be so unlikely that it's a simply outrageous assumption. Before anyone uses it in an argument, they need to demonstrate that it's possible. Without doing this, any argument based on it is totally invalid. If you think about it, we actually use this very principle to test for understanding of a subject. We put people through exams where they're supposed to demonstrate their understanding by answering questions. If the questions are good ones (difficult to anticipate, posing a variety of different problems, etc.), and the answers are good ones (clearly stating how to solve the problems posed), we conclude that the person has demonstrated understanding of the subject. Our education system pretty much depends on this. So why on earth would anyone suggest that exactly the same setup - asking questions about a set of data, and seeing if the answers are correct and consistent - could be used in an argument to claim that understanding is absent? Absurd. Ben From max at maxmore.com Tue Feb 16 21:11:23 2010 From: max at maxmore.com (Max More) Date: Tue, 16 Feb 2010 15:11:23 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled Message-ID: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> >I'll continue to rely mainly on what peer-reviewed science has to >say on the matter, not the IPCC or the BBC. The peer-reviewed science (which, as has been clear for some time, is not always a guarantee of accuracy) does not all agree. So that doesn't settle the issue. >I do find it sad that suddenly, after months of having his character >shat upon, there's a great rush to take something Phil Jones has >said as unquestioningly accurate. Which of the things Jones is saying do you think are inaccurate? What I thought was a little encouraging was some small sign of uncertainty from one of those representing apparent certainty concerning a matter revolving around unreliable models. Perhaps, one day, more economists and risk managers will also become a little more modest and less dogmatic regarding their clearly non-scientific discipline.
Max From stathisp at gmail.com Tue Feb 16 21:38:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 08:38:12 +1100 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On 17 February 2010 02:03, Dave Sill wrote: > On Tue, Feb 16, 2010 at 3:35 AM, Stathis Papaioannou wrote: >> ... Nevertheless, my bias is to follow conventional >> medicine, and only rarely does conventional medicine consider that >> there is enough evidence to recommend treatments with dietary >> supplements. A common response to this is that medical professionals >> are unduly influenced by drug companies, and there may be some truth >> to that, but I work in the public health system where there is an >> explicit emphasis on using the cheapest effective treatment: amino >> acids in bulk are very cheap compared to most drugs, and they would be >> used if there were good evidence for their efficacy. > > So who is motivated to research therapeutic effects of cheap > supplements? Drug companies are motivated by the profit potential of a > new, exclusive drug. There are no big payoffs in researching cheap > supplements. Medical researchers, usually publicly or not-for-profit funded. -- Stathis Papaioannou From cluebcke at yahoo.com Tue Feb 16 22:25:00 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 16 Feb 2010 14:25:00 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> Message-ID: <412543.58639.qm@web111214.mail.gq1.yahoo.com> > The peer-reviewed science (which, as has been clear for some time, is not always a guarantee of accuracy) does not all agree. So that doesn't settle the issue. No, but almost all of it supports the positions that the Earth has been warming over the last century, that the warming has primarily been caused by mankind's introduction of greenhouse gasses into the atmosphere, and that this warming trend will continue, with projected results over the next 100 years ranging, roughly, from pretty bad to catastrophic in terms of human suffering. Naturally there are scientists who take a minority view of this, but the number of qualified climatologists taking the minority view is quite small. This does not mean that they are incorrect, but they're likely to be. I feel the same way about this as I do about the Big Bang theory and its modest, dwindling competitor, plasma cosmology. > Which of the things Jones is saying do you think are inaccurate? I'm actually not deeply interested in what Phil Jones has to say on the matter, outside (once again) of his scientific work. I just find it sadly ironic, and a dark reflection on the petty, fiercely partisan, politicized nature of what should in fact be a sober, scientific discussion, that a man accused (in the court of political opinion) of fraud should suddenly be taken at his word by those same accusers, when that word apparently agrees with their position. I'm not, of course, accusing you (Max) of doing so. But the whole affair just saddens me. > What I thought was a little encouraging was some small sign of uncertainty from one of those representing apparent certainty concerning a matter revolving around unreliable models. Perhaps, one day, more economists and risk managers will also become a little more modest and less dogmatic regarding their clearly non-scientific discipline.
I don't disagree, and from what I've read Phil Jones is somewhat less than the perfect scientist. But the whole swirl of activity around this indicates a personalization of the subject of AGW to Phil Jones, or CRU, when in fact there are thousands of scientists, thousands of peer-reviewed papers, and dozens if not hundreds of organizations working on this problem. The AGW-denying crowd's recent focus on Phil Jones reminds me of nothing so much as creationists' pathological obsession with, and hatred of, Charles Darwin (and I find little surprise that the two groups share a large intersection). ----- Original Message ---- From: Max More To: Extropy-Chat Sent: Tue, February 16, 2010 1:11:23 PM Subject: Re: [ExI] Phil Jones acknowledging that climate science isn't settled > I'll continue to rely mainly on what peer-reviewed science has to say on the matter, not the IPCC or the BBC. The peer-reviewed science (which, as has been clear for some time, is not always a guarantee of accuracy) does not all agree. So that doesn't settle the issue. > I do find it sad that suddenly, after months of having his character shat upon, there's a great rush to take something Phil Jones has said as unquestioningly accurate. Which of the things Jones is saying do you think are inaccurate? What I thought was a little encouraging was some small sign of uncertainty from one of those representing apparent certainty concerning a matter revolving around unreliable models. Perhaps, one day, more economists and risk managers will also become a little more modest and less dogmatic regarding their clearly non-scientific discipline. Max _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From max at maxmore.com Wed Feb 17 01:03:24 2010 From: max at maxmore.com (Max More) Date: Tue, 16 Feb 2010 19:03:24 -0600 Subject: [ExI] Shifting demand suggests a future of endless oil Message-ID: <201002170103.o1H13W90026666@andromeda.ziaspace.com> MSNBC, which is usually home to the usual catastrophist messages, has a story that reflects information that's been around for a while, suggesting a non-disastrous energy shift ahead: Shifting demand suggests a future of endless oil http://www.msnbc.msn.com/id/34770285/ns/business-oil_and_energy/ And another hopeful sign (although I'm no fan of any kind of government subsidy -- I'd rather reduce artificially-imposed costs on nuclear energy): Obama renews commitment to nuclear energy http://www.msnbc.msn.com/id/35421517/ns/business-oil_and_energy/ Max From sparge at gmail.com Wed Feb 17 01:13:51 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Feb 2010 20:13:51 -0500 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On Tue, Feb 16, 2010 at 4:38 PM, Stathis Papaioannou wrote: > On 17 February 2010 02:03, Dave Sill wrote: >> >> So who is motivated to research therapeutic effects of cheap >> supplements? Drug companies are motivated by the profit potential of a >> new, exclusive drug. There are no big payoffs in researching cheap >> supplements. > > Medical researchers, usually publicly or not-for-profit funded. That was sort of a rhetorical question, but since you answered it, I have to point out that there are multiple orders of magnitude of difference in funding between public/non-profit and for-profit. The drug companies are frantically inventing new stuff, spending billions on R&D.
Compare that to what's spent on supplement research. I think it's highly likely that there are effective therapeutic uses of supplements that aren't being investigated due to lack of funding. -Dave From stathisp at gmail.com Wed Feb 17 01:29:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 12:29:47 +1100 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On 17 February 2010 12:13, Dave Sill wrote: > On Tue, Feb 16, 2010 at 4:38 PM, Stathis Papaioannou wrote: >> On 17 February 2010 02:03, Dave Sill wrote: >>> >>> So who is motivated to research therapeutic effects of cheap >>> supplements? Drug companies are motivated by the profit potential of a >>> new, exclusive drug. There are no big payoffs in researching cheap >>> supplements. >> >> Medical researchers, usually publicly or not-for-profit funded. > > That was sort of a rhetorical question, but since you answered it, I > have to point out that there are multiple orders of magnitude of > difference in funding between public/non-profit and for-profit. The > drug companies are frantically inventing new stuff, spending billions > on R&D. Compare that to what's spent on supplement research. > > I think it's highly likely that there are effective therapeutic uses > of supplements that aren't being investigated due to lack of funding. I can't easily find actual figures but I think in the world as a whole most medical research is publicly funded. The purpose of publicly funded research is to discover things that are interesting or useful, which is not always the same as discovering things that can be sold for a lot of money. Drug companies generally won't spend money researching dietary supplements or doing basic research because they have nothing to gain from it, even though society does. If public researchers do not think it is worthwhile investigating something then it is probably because they think it is unlikely it will yield useful results. -- Stathis Papaioannou From sparge at gmail.com Wed Feb 17 02:08:10 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Feb 2010 21:08:10 -0500 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On Tue, Feb 16, 2010 at 8:29 PM, Stathis Papaioannou wrote: > > I can't easily find actual figures but I think in the world as a whole > most medical research is publicly funded. Most *medical* research? Maybe. Most *drug* research? No way. > The purpose of publicly > funded research is to discover things that are interesting or useful, > which is not always the same as discovering things that can be sold > for a lot of money. Drug companies generally won't spend money > researching dietary supplements or doing basic research because they > have nothing to gain from it, even though society does. If public > researchers do not think it is worthwhile investigating something then > it is probably because they think it is unlikely it will yield useful > results. Exactly. That was the point of my first posting in this thread.
-Dave From alfio.puglisi at gmail.com Wed Feb 17 02:36:27 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 03:36:27 +0100 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> References: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> Message-ID: <4902d9991002161836w487577b5iefaac10dc3ba51a4@mail.gmail.com> On Tue, Feb 16, 2010 at 6:50 PM, Max More wrote: > Interesting: > > Phil Jones momentous Q&A with BBC reopens the "science is settled" issues; > emperor is, if not naked, scantily clad, vindicating key skeptic arguments > > http://wattsupwiththat.com/2010/02/14/phil-jones-momentous-qa-with-bbc-reopens-the-science-is-settled-issues/ > > Columnist Indur Goklany summarizes: > > Specifically, the Q-and-As confirm what many skeptics have long suspected: > Neither the rate nor magnitude of recent warming is exceptional. > There was no significant warming from 1998-2009. According to the IPCC we > should have seen a global temperature increase of at least 0.2°C per decade. > The IPCC models may have overestimated the climate sensitivity for > greenhouse gases, underestimated natural variability, or both. > This also suggests that there is a systematic upward bias in the impacts > estimates based on these models just from this factor alone. > The logic behind attribution of current warming to well-mixed man-made > greenhouse gases is faulty. > The science is not settled, however unsettling that might be. > There is a tendency in the IPCC reports to leave out inconvenient findings, > especially in the part(s) most likely to be read by policy makers. > Reading the article, you discover that most of those points are just made up by the columnist. Why do you think that they are interesting? Alfio From femmechakra at yahoo.ca Wed Feb 17 03:58:41 2010 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 16 Feb 2010 19:58:41 -0800 (PST) Subject: [ExI] Consciousness and paracrap Message-ID: <517167.25357.qm@web110408.mail.gq1.yahoo.com> I know I'm simple minded but I don't understand why consciousness is such a philosophical debate. I wonder why science sometimes complicates things. Let's say hypothetically I could change the definitions. (This is against academic rules but if Gordon gets to play master of definitions why shouldn't I:). Let's say the word conscious meant alive versus dead. Anything that is conscious is alive and awake. Could everyone agree on that? The problem lies within determining the levels of consciousness. Does a worm possess consciousness? Will an AI? I'm rather curious to understand why the scientific community is taboo against using the term the "subconscious" as it would be much easier to explain if they did. What if the subconscious is Darwin's Theory of Evolution? The basic instinct for survival. A tree requires many things to keep it alive but it doesn't know it. It depends on the sun, the earth, the rain and it will continue to grow or it will die. We need trees and they are part of the evolutionary process. I would say they are subconsciously alive. In humans it's the instinct to take your hand off a hot stove. The embedded codes that evolution has installed. A person who is under anaesthesia should therefore be conscious and subconsciously alive. The person requires oxygen, food and water yet has no idea.
This would then mean that consciousness and awareness go hand in hand. What if consciousness is to be awake, allowing the memory's capability of recalling, extracting and processing information, while awareness is intelligence, experience and sense? A worm doesn't live in a subconscious state, it recalls, extracts and processes information but does it recall the experience or have a sense as to why it does the things it does? Shouldn't this be an underlying question? If we knew the worm felt something when we poked it with a knife would we declare it "aware"? I know my cat is aware because once he decided to stick his head too far into a bottle, he never did it again. I think to be human is to have awareness as well as consciousness. I believe a strong as well as weak AI will not have any subconscious (well at least until technology merges with biology then at least that will be a great philosophical discussion) but only strong AI will have consciousness and somewhat awareness. What is scary about Strong AI is that it may have the maximum capacity of intelligence yet have no experience or sense. We had better hope that the programmer is fully aware. Stathis Papaioannou stathisp at gmail.com Sun Dec 13 23:08:02 UTC 2009 questioned gts_2000 at yahoo.com >To address the strong AI / weak AI distinction I put to you a question you haven't yet answered: what do you think would happen if part of your brain, say your visual cortex, were replaced with components that behaved normally in their interaction with the remaining biological neurons, but lacked the essential ingredient for consciousness? My observation: Well my contacts work fine. My memory would recall, extract and process the information. If the contacts are too weak and I can't see then yes one of my awareness factors would be limited but it would not stop me from being conscious or have consciousness. Btw...Even if all my crazy posts don't amount to anything I have to say that the Extropy Chat creates a whirl of imagination. I can read something that may lead me to investigate something truly beneficial to my understanding. Thanks. Ok back to music... Anna:) From emlynoregan at gmail.com Wed Feb 17 05:37:24 2010 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 17 Feb 2010 16:07:24 +1030 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <4902d9991002161836w487577b5iefaac10dc3ba51a4@mail.gmail.com> References: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> <4902d9991002161836w487577b5iefaac10dc3ba51a4@mail.gmail.com> Message-ID: <710b78fc1002162137g524be15ai31739f5cc1769949@mail.gmail.com> 2010/2/17 Alfio Puglisi : > > > On Tue, Feb 16, 2010 at 6:50 PM, Max More wrote: >> >> Interesting: >> >> Phil Jones momentous Q&A with BBC reopens the "science is settled" issues; >> emperor is, if not naked, scantily clad, vindicating key skeptic arguments >> >> http://wattsupwiththat.com/2010/02/14/phil-jones-momentous-qa-with-bbc-reopens-the-science-is-settled-issues/ >> >> Columnist Indur Goklany summarizes: >> >> Specifically, the Q-and-As confirm what many skeptics have long suspected: >> Neither the rate nor magnitude of recent warming is exceptional. >> There was no significant warming from 1998-2009.
According to the IPCC we >> should have seen a global temperature increase of at least 0.2°C per decade. >> The IPCC models may have overestimated the climate sensitivity for >> greenhouse gases, underestimated natural variability, or both. >> This also suggests that there is a systematic upward bias in the impacts >> estimates based on these models just from this factor alone. >> The logic behind attribution of current warming to well-mixed man-made >> greenhouse gases is faulty. >> The science is not settled, however unsettling that might be. >> There is a tendency in the IPCC reports to leave out inconvenient >> findings, especially in the part(s) most likely to be read by policy makers. > > Reading the article, you discover that most of those points are just made up > by the columnist. Why do you think that they are interesting? > > Alfio > I concur. This is largely commentary by the columnist, not what actually transpired in the interview. I haven't read the whole thing in detail, but one early piece stuck out like a sore thumb, which is the old saw that there is no significant warming since 1998. That's just flat out wrong, and it's wrong because 1998 itself stuck out like a sore thumb; it was a statistical anomaly, which anyone who wasn't being entirely disingenuous would agree with. Here's a discussion of that issue, along with graphs showing the 1998 sore thumb: http://scienceblogs.com/illconsidered/2006/04/warming-stopped-in-1998.php -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From stathisp at gmail.com Wed Feb 17 10:27:15 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 21:27:15 +1100 Subject: [ExI] Consciousness and paracrap In-Reply-To: <517167.25357.qm@web110408.mail.gq1.yahoo.com> References: <517167.25357.qm@web110408.mail.gq1.yahoo.com> Message-ID: On 17 February 2010 14:58, Anna Taylor wrote: > I know I'm simple minded but I don't understand why consciousness is such > a philosophical debate. I wonder why science sometimes complicates > things. > > Let's say hypothetically I could change the definitions. (This is > against academic rules but if Gordon gets to play master of definitions > why shouldn't I:). Let's say the word conscious meant alive versus > dead. Anything that is conscious is alive and awake. Could everyone > agree on that? > The problem lies within determining the levels of consciousness. > Does a worm possess consciousness? Will an AI? > > I'm rather curious to understand why the scientific community is > taboo against using the term the "subconscious" as it would be much > easier to explain if they did. What if the subconscious is Darwin's > Theory of Evolution? The basic instinct for survival. A tree > requires many things to keep it alive but it doesn't know it. > It depends on the sun, the earth, the rain and it will continue to > grow or it will die. We need trees and they are part of the > evolutionary process. I would say they are subconsciously alive. > In humans it's the instinct to take your hand off a hot stove. > The embedded codes that evolution has installed. > > A person who is under anaesthesia should therefore be conscious and > subconsciously alive. The person requires oxygen, food and water yet > has no idea. This would then mean that consciousness and awareness > go hand in hand.
> > What if consciousness is to be awake, allowing the memory's capability > of recalling, extracting and processing information, while awareness is > intelligence, experience and sense? A worm doesn't live in a > subconscious state, it recalls, extracts and processes information > but does it recall the experience or have a sense as to why it does > the things it does? Shouldn't this be an underlying question? If we > knew the worm felt something when we poked it with a knife would we > declare it "aware"? I know my cat is aware because once he decided to > stick his head too far into a bottle, he never did it again. I think > to be human is to have awareness as well as consciousness. > > I believe a strong as well as weak AI will not > have any subconscious (well at least until technology merges with > biology then at least that will be a great philosophical discussion) > but only strong AI will have consciousness and somewhat awareness. > What is scary about Strong AI is that it may have the maximum capacity > of intelligence yet have no experience or sense. We had better hope > that the programmer is fully aware. > > Stathis Papaioannou stathisp at gmail.com > Sun Dec 13 23:08:02 UTC 2009 questioned gts_2000 at yahoo.com > >>To address the strong AI / weak AI distinction I put to you a > question you haven't yet answered: what do you think would happen > if part of your brain, say your visual cortex, were replaced with components that behaved normally in their interaction with the > remaining biological neurons, but lacked the essential ingredient for > consciousness? > > My observation: > Well my contacts work fine. My memory would recall, extract and process > the information. If the contacts are too weak and I can't see then > yes one of my awareness factors would be limited but it would not stop > me from being conscious or have consciousness. > > Btw...Even if all my crazy posts don't amount to anything I have to > say that the Extropy Chat creates a whirl of imagination. I can read > something that may lead me to investigate something truly beneficial > to my understanding. Thanks. Ok back to music... Anna, here are some definitions that I use: Consciousness - hard to define, but if you have it you know it; Strong AI - an artificial intelligence that is both intelligent and conscious; Weak AI - an artificial intelligence that is intelligent but lacks consciousness; Philosophical zombie - same as weak AI. Several people have commented that we need a definition of consciousness to proceed, but I disagree. I think everyone knows what is meant by the word and so we can have a complete discussion without at any point defining it. For those who say that consciousness does not really exist: consciousness is that thing you are referring to when you say that consciousness does not really exist. With the brain replacement experiment, the idea is that the visual cortex is where visual perceptions (visual experiences/ consciousness/ qualia) occur. If your visual cortex is destroyed then you are blind, even if your eyes and optic nerve are intact. When you see something and describe it, information goes from your eyes to your visual cortex, from your visual cortex to your speech centre, and finally from your speech centre to your vocal cords. The question is, what would happen if your visual cortex were replaced with an artificial part that sent the same signals to the rest of your brain in response to signals from your eyes, but lacked visual perception?
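To make the "same signals" stipulation concrete, here is a toy sketch in Python (the function names and the particular transformation are invented for the example; this is an illustration of functional equivalence, not a model of any real brain):

# Two "visual cortex" modules with, by stipulation, exactly the same
# input/output behaviour. Everything downstream sees only the outputs,
# so no behavioural test can tell the two apart.

def biological_cortex(signal_from_eyes):
    # Stands in for the natural tissue: some fixed transformation.
    return signal_from_eyes * 2

def artificial_cortex(signal_from_eyes):
    # The replacement part: the same transformation, by stipulation.
    return signal_from_eyes * 2

def speech_centre(signal_from_cortex):
    # Downstream processing depends only on the signal it receives,
    # not on which module produced it.
    return "I see " + str(signal_from_cortex)

for signal in range(5):
    assert (speech_centre(biological_cortex(signal)) ==
            speech_centre(artificial_cortex(signal)))

# Every report comes out identical, so if visual perception were somehow
# absent from the replacement, no behaviour could reveal its absence.

If the artificial part satisfies that assertion for every possible input, then the rest of the brain, and hence your behaviour, cannot depend on which part is installed.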
By definition, you would see nothing; but also by definition, you would describe everything put in front of you correctly and you would claim and honestly believe that you could see normally. How could you be completely blind but not notice you were blind and behave as if you had normal vision? And if you think that is a coherent state of affairs, how do you know you are not currently blind? The purpose of the above is to show that it is impossible (logically impossible, not just physically impossible) to make a brain part, and hence a whole brain, that behaves exactly like a biological brain but lacks consciousness. Either it isn't possible to make such an artificial component at all, or else it is possible to make such a component but it will necessarily also have consciousness. The alternative is to say that you're happy with the idea that you may be blind, deaf, unable to understand English etc. but neither you nor anyone else has noticed. Gordon Swobe's response is that this thought experiment is ridiculous and I should come up with another one that doesn't challenge the self-evident fact that digital computers cannot be conscious. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Feb 17 14:05:51 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 17 Feb 2010 06:05:51 -0800 (PST) Subject: [ExI] Semiotics and Computability Message-ID: <579192.36870.qm@web36507.mail.mud.yahoo.com> --- On Tue, 2/16/10, Christopher Luebcke wrote: > Gordon, if I may ask directly: How do you determine whether someone, or > something, besides yourself is conscious? I believe we know enough about the brain and nervous system to infer the existence of subjective experience in other animals that have the same kind of apparatus. We see that other people and other primates and certain other animals have nervous systems very much like ours, eyes and skin and noses and ears very much like ours, and so on, and from these physiological facts in combination with their behaviors and reports of subjective experiences we can infer with near certainty that they do in fact have subjective experiences. -gts From gts_2000 at yahoo.com Wed Feb 17 14:45:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 17 Feb 2010 06:45:31 -0800 (PST) Subject: [ExI] glutamine and life-extension In-Reply-To: Message-ID: <836720.57934.qm@web36507.mail.mud.yahoo.com> --- On Tue, 2/16/10, Stathis Papaioannou wrote: >> All this talk of neurons reminds me of a paper I wrote >> circa 1999: >> >> Glutamine Based Growth Hormone Releasing Products: A >> Bad Idea? >> http://www.newtreatments.org/loadlocal.php?hid=974 >> >> My article above created a surprising amount of >> controversy among life-extensionists. I closed down my >> website, but recently found my paper republished on the site >> above without my permission. >> >> Thought you might find it interesting given your >> profession and the general theme. > > Thank-you for this, it is not something I knew much about. You're welcome. > It's important to know about the *negative* effects of any > treatment, and this is often overlooked by those into non-conventional I agree completely. You might find it surprising how much controversy my paper above created. At the time (1999-2000) some people interested in life-extension considered megadoses of glutamine as a means for delaying the effects of aging. My paper exposed the risks associated with such megadosing. I took my research over to the LEF forum. 
As I describe in the article, they put a temporary moratorium on the subject of glutamine in their online forum (no surprise there: LEF sells the stuff) until they could get their facts straight. To LEF's credit, they took my findings seriously enough to stop promoting megadosing of glutamine on their forum, and invited me back to educate people about the risks. A small victory for me. As you can see I'm no stranger to controversy. :-) -gts From cetico.iconoclasta at gmail.com Wed Feb 17 16:27:34 2010 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado (CI)) Date: Wed, 17 Feb 2010 14:27:34 -0200 Subject: [ExI] QET References: <8885036.1601541266266257004.JavaMail.defaultUser@defaultHost> Message-ID: <013201caafee$1dec8ec0$fd00a8c0@cpdhemm> No idea about this specific process of Dr. Hotta. But it seems to me that the strange nature of quantum mechanical phenomena, and in particular quantum non-locality and quantum non-separability, could be easily extended - at least heuristically - to different contexts (e.g. gravitational fields) to get new and relevant results (e.g. non-local gravitational fields). See e.g. Adrian Kent here http://arxiv.org/abs/gr-qc/0507045 By non-local gravitational fields you mean putting gravity on a spaceship for instance? Now that would be interesting. From natasha at natasha.cc Wed Feb 17 16:55:51 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Wed, 17 Feb 2010 10:55:51 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <412543.58639.qm@web111214.mail.gq1.yahoo.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> Message-ID: <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> Christopher Luebcke writes: "... I just find it sadly ironic, and a dark reflection on the petty, fiercely partisan, politicized nature of what should in fact be a sober, scientific discussion, that a man accused (in the court of political opinion) of fraud should suddenly be taken at his word by those same accusers, when that word apparently agrees with their position." Many of us find it frustrating. Nevertheless the biggest problem and disappointment is that, while everyone seems to be annoyed and saddened, there is a lack of intelligent communication between all parties in viewing the situation from diverse perspectives. Max is working to develop a level ground for discussion. Bravo to him. Best, Natasha From rafal.smigrodzki at gmail.com Wed Feb 17 16:59:10 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 17 Feb 2010 11:59:10 -0500 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <412543.58639.qm@web111214.mail.gq1.yahoo.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> Message-ID: <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> On Tue, Feb 16, 2010 at 5:25 PM, Christopher Luebcke wrote: >> The peer-reviewed science (which, as has been clear for some time, is not always a guarantee of accuracy) does not all agree. So that doesn't settle the issue. > > No, but almost all of it supports the positions that the Earth has been warming over the last century, that the warming has primarily been caused by mankind's introduction of greenhouse gasses into the atmosphere, and that this warming trend will continue, with projected results over the next 100 years ranging, roughly, from pretty bad to catastrophic in terms of human suffering.
### Did you ever read any of this peer-reviewed literature? Most likely not, since if you did (as I did), you wouldn't have written the paragraph. In fact, only a minority of peer-reviewed literature actively endorses the statements you made, and most of this has been produced by environmental activists who infiltrated CRU, GISS, and NOAA. Give me a reference to a peer-reviewed primary research paper showing manmade global warming and I'll give you two disagreeing with it. Rafal From pharos at gmail.com Wed Feb 17 17:05:48 2010 From: pharos at gmail.com (BillK) Date: Wed, 17 Feb 2010 17:05:48 +0000 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> Message-ID: On Wed, Feb 17, 2010 at 4:55 PM, Natasha Vita-More wrote: > Many of us find it frustrating. Nevertheless the biggest problem and > disappointment is that, while everyone seems to be annoyed and saddened, there > is a lack of intelligent communication between all parties in viewing the > situation from diverse perspectives. > > Max is working to develop a level ground for discussion. Bravo to him. > > The discussion is over and science has lost. Once the argument was moved to lobbying and politics and PR campaigns then Big Carbon can do this sooooo much better than scientists that science was swept aside. So, carry on as normal for the big corporations. Once New York floods, then there will be more big profits to be made. BillK From cluebcke at yahoo.com Wed Feb 17 16:44:15 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 08:44:15 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <579192.36870.qm@web36507.mail.mud.yahoo.com> References: <579192.36870.qm@web36507.mail.mud.yahoo.com> Message-ID: <218661.53873.qm@web111208.mail.gq1.yahoo.com> Could one detect or measure consciousness on the basis of "behaviors and reports of subjective experiences" alone, without direct anatomical knowledge of a "brain and nervous system"? Conversely, does a system with a "brain and nervous system" necessarily have consciousness, even in the absence of "behaviors and reports of subjective experience"? ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Wed, February 17, 2010 6:05:51 AM Subject: Re: [ExI] Semiotics and Computability --- On Tue, 2/16/10, Christopher Luebcke wrote: > Gordon, if I may ask directly: How do you determine whether someone, or > something, besides yourself is conscious? I believe we know enough about the brain and nervous system to infer the existence of subjective experience in other animals that have the same kind of apparatus. We see that other people and other primates and certain other animals have nervous systems very much like ours, eyes and skin and noses and ears very much like ours, and so on, and from these physiological facts in combination with their behaviors and reports of subjective experiences we can infer with near certainty that they do in fact have subjective experiences.
-gts _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From natasha at natasha.cc Wed Feb 17 17:19:11 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Wed, 17 Feb 2010 11:19:11 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> Message-ID: <26B286FC724A4FF2A04AAF4F353B882B@DFC68LF1> BillK wrote: > Many of us find it frustrating. Nevertheless the biggest problem and > disappointment is that, while everyone seems to be annoyed and saddened, > there is a lack of intelligent communication between all parties in > viewing the situation from diverse perspectives. > > Max is working to develop a level ground for discussion. Bravo to him. "The discussion is over and science has lost. Once the argument was moved to lobbying and politics and PR campaigns then Big Carbon can do this sooooo much better than scientists that science was swept aside. So, carry on as normal for the big corporations. Once New York floods, then there will be more big profits to be made." Science has not lost because science is not a fait accompli. Just because we lack a skill set for finding solutions to the problem does not mean anyone wins. Natasha From x at extropica.org Wed Feb 17 17:29:34 2010 From: x at extropica.org (x at extropica.org) Date: Wed, 17 Feb 2010 09:29:34 -0800 Subject: [ExI] Being No One: The Self-Model Theory of Subjectivity Message-ID: On Sun Nov 23 17:39:04 UTC 2003, Jef wrote: > I received my copy of this book from Amazon and it exceeded my expectations. > The first chapter describes what I think are the key issues today in > understanding the illusion of self that colors so much of the thinking and > discussion on this list. The text is dense, not for the casual reader. >Highly recommended for those seeking a wider view of consciousness that > encompasses the "paradoxes" of qualia and subjectivity and David Chalmers' > so-called "hard problem of consciousness." > > - Jef [Note that I'm replying to a thread on this list from 2003.] For those *serious* participants in this discussion who for some reason aren't already familiar with Metzinger's work, there is now an easy one hour introduction by video available at http://www.youtube.com/watch?v=mthDxnFXs9k. > > > Human Nature Review wrote: >> Human Nature Review 2003 Volume 3: 450-454 ( 17 November ) >> URL of this document http://human-nature.com/nibbs/03/metzinger.html >> >> Book Review >> >> Being No One: The Self-Model Theory of Subjectivity >> by Thomas Metzinger >> MIT Press, (2003), pp. 699, ISBN: 0-262-13417-9 >> >> Reviewed by Marcello Ghin. >> >> The notion of consciousness has been suspected of being too vague for >> being a topic of scientific investigation. Recently, consciousness >> has become more interesting in the light of new neuroscientific >> imaging studies. Scientists from all over the world are searching for >> neural correlates of consciousness. However, finding the neural basis >> is not enough for a scientific explanation of conscious experience. >> After all, we are still facing the 'hard problem', as David Chalmers >> dubbed it: why are those neural processes accompanied by conscious >> experience at all?
Maybe we can reformulate the question in this way: >> Which constraints does a system have to satisfy in order to generate >> conscious experience? Being No One is an attempt to give an answer to >> the latter question. To be more precise: it is an attempt to give an >> answer to the question of how information processing systems generate >> the conscious experience of being someone. >> >> We all experience ourselves as being someone. For example, at this >> moment you will have the impression that it is you who is actually >> reading this review. And it is you who is forming thoughts about it. >> Could it be otherwise? Could I be wrong about what I myself am >> experiencing? Our daily experiences make us think that we are someone >> who is experiencing the world. We commonly refer to this phenomenon >> by speaking of the 'self'. Metzinger claims that no such things as >> 'selves' exist in the world. All that exists are phenomenal >> self-models, that is continuously updated dynamic >> self-representational processes of biological organisms. Conscious >> beings constantly confuse themselves with the content of their actual >> phenomenal self-model, thinking that they are identical with a self. >> According to Metzinger, this is due to the nature of the >> representational process generating the self-model. The self-model is >> mostly transparent - the information that it is a model is not >> carried on the level of content - we are looking through it, having >> the impression of being in direct contact with our own body and the >> world. If you are now thinking that this idea is at least >> counterintuitive, you should read Being No One and find out why it is >> counterintuitive, and yet that there are good reasons to believe that >> it is correct. >> >> Full text >> http://human-nature.com/nibbs/03/metzinger.html >> >> Being No One: The Self-Model Theory of Subjectivity >> by Thomas Metzinger (Author) >> Hardcover: 584 pages ; Dimensions (in inches): 1.56 x 9.25 x 7.32 >> Publisher: MIT Press; (January 24, 2003) ISBN: 0262134179 >> AMAZON - US >> http://www.amazon.com/exec/obidos/ASIN/0262134179/darwinanddarwini/ >> AMAZON - UK >> http://www.amazon.co.uk/exec/obidos/ASIN/0262134179/humannaturecom/ >> >> Editorial Reviews >> Book Info >> Johannes Gutenberg-Universität, Mainz, Germany. Text introduces two >> theoretical entities that may form the decisive conceptual link >> between first-person and third-person approaches to the conscious >> mind. Explores evolutionary roots of intersubjectivity, artificial >> subjectivity, and future connections between philosophy of mind and >> ethics. >> >> Book Description >> According to Thomas Metzinger, no such things as selves exist in the >> world: nobody ever had or was a self. All that exists are phenomenal >> selves, as they appear in conscious experience. The phenomenal self, >> however, is not a thing but an ongoing process; it is the content of >> a "transparent self-model." In Being No One, Metzinger, a German >> philosopher, draws strongly on neuroscientific research to present a >> representationalist and functional analysis of what a consciously >> experienced first-person perspective actually is. Building a bridge >> between the humanities and the empirical sciences of the mind, he >> develops new conceptual toolkits and metaphors; uses case studies of >> unusual states of mind such as agnosia, neglect, blindsight, and >> hallucinations; and offers new sets of multilevel constraints for the >> concept of consciousness.
Metzinger's central question is: How >> exactly does strong, consciously experienced subjectivity emerge out >> of objective events in the natural world? His epistemic goal is to >> determine whether conscious experience, in particular the experience >> of being someone that results from the emergence of a phenomenal >> self, can be analyzed on subpersonal levels of description. He also >> asks if and how our Cartesian intuitions that subjective experiences >> as such can never be reductively explained are themselves ultimately >> rooted in the deeper representational structure of our conscious >> minds. Metzinger introduces two theoretical entities--the "phenomenal >> self-model" and the "phenomenal model of the intentionality >> relation"--that may form the decisive conceptual link between >> first-person and third-person approaches to the conscious mind and >> between consciousness research in the humanities and in the sciences. >> He also discusses the roots of intersubjectivity, artificial >> subjectivity (the issue of nonbiological phenomenal selves), and >> connections between philosophy of mind and ethics. >> >> Human Nature Review http://human-nature.com >> Evolutionary Psychology http://human-nature.com/ep >> Human Nature Daily Review http://human-nature.com/nibbs From cluebcke at yahoo.com Wed Feb 17 17:33:16 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 09:33:16 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> Message-ID: <154647.10956.qm@web111212.mail.gq1.yahoo.com> Let me just add: I am not a climatologist, and therefore even if I had read a wide variety of peer-reviewed papers on the subject, I would not be qualified to determine whether I had made an accurate sampling, much less judge the papers on their merits. My claim comes in fact from paying attention to those organizations who are responsible for gathering and summarizing professional research and judgement on the subject: Not just IPCC, but NAS, AMS, AGU and AAAS are all organizations, as far as I can tell, that are both qualified to comment on the matter, and who have supported the general position that AGW is real. If you are going to dismiss well-respected scientific bodies that hold positions contrary to your own as necessarily having been "infiltrated by environmental activists", then it is incumbent upon you to provide some evidence that such infiltration by such people has actually taken place. I wonder if it wouldn't be possible to disagree with their positions, though, without also presuming that the people you disagree with are wicked? That's the larger point of what I was trying to get at. ----- Original Message ---- From: Rafal Smigrodzki To: ExI chat list Sent: Wed, February 17, 2010 8:59:10 AM Subject: Re: [ExI] Phil Jones acknowledging that climate science isn't settled On Tue, Feb 16, 2010 at 5:25 PM, Christopher Luebcke wrote: >> The peer-reviewed science (which, as has been clear for some time, is not always a guarantee of accuracy) does not all agree. So that doesn't settle the issue.
> > No, but almost all of it supports the positions that the Earth has been warming over the last century, that the warming has primarily been caused by mankind's introduction of greenhouse gasses into the atmosphere, and that this warming trend will continue, with projected results over the next 100 years ranging, roughly, from pretty bad to catastrophic in terms of human suffering. ### Did you ever read any of this peer-reviewed literature? Most likely not, since if you did (as I did), you wouldn't have written the paragraph. In fact, only a minority of peer-reviewed literature actively endorses the statements you made, and most of this has been produced by environmental activists who infiltrated CRU, GISS, and NOAA. Give me a reference to a peer-reviewed primary research paper showing manmade global warming and I'll give you two disagreeing with it. Rafal _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From cluebcke at yahoo.com Wed Feb 17 17:18:53 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 09:18:53 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> Message-ID: <723938.90546.qm@web111204.mail.gq1.yahoo.com> > most of this has been produced by environmental activists who infiltrated CRU, GISS, and NOAA. Infiltrators? Oh my. Sounds like we need a purge. ----- Original Message ---- From: Rafal Smigrodzki To: ExI chat list Sent: Wed, February 17, 2010 8:59:10 AM Subject: Re: [ExI] Phil Jones acknowledging that climate science isn't settled On Tue, Feb 16, 2010 at 5:25 PM, Christopher Luebcke wrote: >> The peer-reviewed science (which, as has been clear for some time, is not always a guarantee of accuracy) does not all agree. So that doesn't settle the issue. > > No, but almost all of it supports the positions that the Earth has been warming over the last century, that the warming has primarily been caused by mankind's introduction of greenhouse gasses into the atmosphere, and that this warming trend will continue, with projected results over the next 100 years ranging, roughly, from pretty bad to catastrophic in terms of human suffering. ### Did you ever read any of this peer-reviewed literature? Most likely not, since if you did (as I did), you wouldn't have written the paragraph. In fact, only a minority of peer-reviewed literature actively endorses the statements you made, and most of this has been produced by environmental activists who infiltrated CRU, GISS, and NOAA. Give me a reference to a peer-reviewed primary research paper showing manmade global warming and I'll give you two disagreeing with it.
Rafal _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From alfio.puglisi at gmail.com Wed Feb 17 18:10:00 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 19:10:00 +0100 Subject: Re: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> Message-ID: <4902d9991002171010q488c79e1td46df27ddcd96f77@mail.gmail.com> On Wed, Feb 17, 2010 at 5:59 PM, Rafal Smigrodzki <rafal.smigrodzki at gmail.com> wrote: > On Tue, Feb 16, 2010 at 5:25 PM, Christopher Luebcke > wrote: > >> The peer-reviewed science (which, as has been clear for some time, is > not always a guarantee of accuracy) does not all agree. So that doesn't > settle the issue. > > > > No, but almost all of it supports the positions that the Earth has been > warming over the last century, that the warming has primarily been caused by > mankind's introduction of greenhouse gasses into the atmosphere, and that > this warming trend will continue, with projected results over the next 100 > years ranging, roughly, from pretty bad to catastrophic in terms of human > suffering. > > ### Did you ever read any of this peer-reviewed literature? > > Most likely not, since if you did (as I did), you wouldn't have > written the paragraph. In fact, only a minority of peer-reviewed > literature actively endorses the statements you made, > > Give me a reference to a peer-reviewed primary research paper showing > manmade global warming and I'll give you two disagreeing with it. > Mmm.... each chapter of the IPCC report has dozens of references. Can you really find two times that amount? > and most of this > has been produced by environmental activists who infiltrated CRU, > GISS, and NOAA. And they managed to convince the UK Royal Society, many national academies of science, and even the American Association of Petroleum Geologists. "infiltration" doesn't begin to describe it. Alfio > Rafal > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From alfio.puglisi at gmail.com Wed Feb 17 18:15:04 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 19:15:04 +0100 Subject: Re: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> Message-ID: <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> On Wed, Feb 17, 2010 at 5:55 PM, Natasha Vita-More wrote: > Christopher Luebcke writes: > > "... I just find it sadly ironic, and a dark reflection on the petty, > fiercely partisan, politicized nature of what should in fact be a sober, > scientific discussion, that a man accused (in the court of political > opinion) of fraud should suddenly be taken at his word by those same > accusers, when that word apparently agrees with their position." > > Many of us find it frustrating.
Nevertheless the biggest problem and > disappointment is that, while everyone seems to be annoyed and saddened, there > is a lack of intelligent communication between all parties in viewing the > situation from diverse perspectives. > > Max is working to develop a level ground for discussion. Bravo to him. > I don't agree. Posting all those links to garbage blogs like WUWT, and quoting them like they had any value instead of laughing at them (or despairing, depending on the mood), does a disservice to the Extropy list. It shows how a group of intelligent people can be easily manipulated by PR spin. I find it depressing. Alfio > Best, > Natasha > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From natasha at natasha.cc Wed Feb 17 19:49:45 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 17 Feb 2010 14:49:45 -0500 Subject: Re: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> Message-ID: <20100217144945.t2astxtwhwowssg8@webmail.natasha.cc> Your observation is irrational. Natasha Quoting Alfio Puglisi : > On Wed, Feb 17, 2010 at 5:55 PM, Natasha Vita-More wrote: > >> Christopher Luebcke writes: >> >> "... I just find it sadly ironic, and a dark reflection on the petty, >> fiercely partisan, politicized nature of what should in fact be a sober, >> scientific discussion, that a man accused (in the court of political >> opinion) of fraud should suddenly be taken at his word by those same >> accusers, when that word apparently agrees with their position." >> >> Many of us find it frustrating. Nevertheless the biggest problem and >> disappointment is that, while everyone seems to be annoyed and saddened, there >> is a lack of intelligent communication between all parties in viewing the >> situation from diverse perspectives. >> >> Max is working to develop a level ground for discussion. Bravo to him. >> > > I don't agree. Posting all those links to garbage blogs like WUWT, and > quoting them like they had any value instead of laughing at them (or > despairing, depending on the mood), does a disservice to the Extropy list. It > shows how a group of intelligent people can be easily manipulated by PR > spin. I find it depressing.
> > Alfio > > > >> Best, >> Natasha From thespike at satx.rr.com Wed Feb 17 19:51:14 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Feb 2010 13:51:14 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> Message-ID: <4B7C48B2.1030303@satx.rr.com> On 2/17/2010 12:15 PM, Alfio Puglisi wrote: >> Max is working to develop a level ground for discussion. Bravo to him. > I don't agree. Posting all those links to garbage blogs like WUWT, and > quoting them as if they had any value instead of laughing at them (or > despairing, depending on the mood), does a disservice to the Extropy > list. It shows how a group of intelligent people can be easily > manipulated by PR spin. I find it depressing. Have to agree with Alfio--sorry. Quoting should at least have url'd the full BBC interview text, which gives a rather different impression (to me, anyway): Damien Broderick From natasha at natasha.cc Wed Feb 17 20:07:39 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 17 Feb 2010 15:07:39 -0500 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <4B7C48B2.1030303@satx.rr.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <4B7C48B2.1030303@satx.rr.com> Message-ID: <20100217150739.8qmf6mzz4gokc8cc@webmail.natasha.cc> You agree with the irrational claim of Alfio that Max is easily manipulated by PR spins? We'd have to ask why the other url was not included; but I hardly think it is sufficient evidence that one is being manipulated. Natasha Quoting Damien Broderick : > On 2/17/2010 12:15 PM, Alfio Puglisi wrote: > >>> Max is working to develop a level ground for discussion. Bravo to him. > >> I don't agree. Posting all those links to garbage blogs like WUWT, and >> quoting them as if they had any value instead of laughing at them (or >> despairing, depending on the mood), does a disservice to the Extropy >> list. It shows how a group of intelligent people can be easily >> manipulated by PR spin. I find it depressing. > > Have to agree with Alfio--sorry.
> Quoting should at least have url'd the > full BBC interview text, which gives a rather different impression (to > me, anyway): > > > > Damien Broderick From thespike at satx.rr.com Wed Feb 17 20:36:29 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Feb 2010 14:36:29 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <20100217150739.8qmf6mzz4gokc8cc@webmail.natasha.cc> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <4B7C48B2.1030303@satx.rr.com> <20100217150739.8qmf6mzz4gokc8cc@webmail.natasha.cc> Message-ID: <4B7C534D.5040807@satx.rr.com> On 2/17/2010 2:07 PM, natasha at natasha.cc wrote: > You agree with the irrational claim of Alfio that Max is easily manipulated by > PR spins? Alfio was making a general and quite rational point about some posters to the list, I think, and in this case it does look as if Max jumped the gun in citing spin stories rather than the original interview. Calling Alfio "irrational" doesn't get us very far in advancing the discussion. Christopher commented: "I wonder if it wouldn't be possible to disagree with their positions, though, without also presuming that the people you disagree with are wicked?" Ditto "irrational." HOWEVER... that doesn't mean some players in the supposed debate *aren't* wicked--consider, by analogy, the decades-long and perhaps equivalent role of corporate advocates for carcinogenic smoking. If that was not wickedness, what is? It is arguable that climate change deniers are in a similar position. That said, I agree with Barbara Lamar, who comments: Damien Broderick From lacertilian at gmail.com Wed Feb 17 20:38:24 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 17 Feb 2010 12:38:24 -0800 Subject: [ExI] Consciouness and paracrap In-Reply-To: <517167.25357.qm@web110408.mail.gq1.yahoo.com> References: <517167.25357.qm@web110408.mail.gq1.yahoo.com> Message-ID: Anna Taylor : > I know I'm simple-minded but I don't understand why consciousness is such > a philosophical debate. I wonder why science sometimes complicates > things. Simplicity is a virtue. But, have you seen much scientific reasoning going on here? I haven't! Anna Taylor : > Let's say the word conscious meant alive versus > dead. Anything that is conscious is alive and awake. Could everyone > agree on that? I have no problem with that definition, but then you also need to define "alive". Lots of disagreement there. I think "awake" is sufficiently unambiguous. You might run into the question of what androids dream of, though. Does it make sense to speak of sleeping rocks? Being alive may be a prerequisite for being awake or asleep. Anna Taylor : > My observation: > Well my contacts work fine. My memory would recall, extract and process > the information. If the contacts are too weak and I can't see then > yes one of my awareness factors would be limited but it would not stop > me from being conscious or have consciousness. This isn't actually what Stathis was talking about.
He wants the answer to this question: If I were to replace your visual cortex with a device* wired to exactly reproduce its normal inputs and outputs, would you still have the subjective experience of seeing? *There is an implication that the device in question is a digital computer, which I take to mean something with registers and clock cycles. Stathis Papaioannou : > Several people have commented that we need a definition of > consciousness to proceed, but I disagree. I think everyone knows what > is meant by the word and so we can have a complete discussion without > at any point defining it. Dude, I barely know what I mean by the word when I use it in my own head. Are you talking about access consciousness? Phenomenal consciousness? Reflexive consciousness? All of the above? http://www.def-logic.com/articles/silby011.html The reason I haven't supplied a rigorous definition for consciousness, as I have for intelligence, is because I can't articulate the meaning of it for myself. This, to me, does not seem ancillary to the discussion; it seems to be the very root of the discussion, namely the question, "what is consciousness?". Stathis Papaioannou : > For those who say that consciousness does > not really exist: consciousness is that thing you are referring to > when you say that consciousness does not really exist. That's fair. There isn't any question of what I'm talking about when I refer to the Flying Spaghetti Monster. I can describe the FSM to you in great detail, however. I can't do the same with consciousness, except perhaps to say that, if it exists, it occasionally compels normally sane people to begin a sentence with "dude". Stathis Papaioannou : > The purpose of the above is to show that it is impossible (logically > impossible, not just physically impossible) to make a brain part, and > hence a whole brain, that behaves exactly like a biological brain but > lacks consciousness. Either it isn't possible to make such an > artificial component at all, or else it is possible to make such a > component but it will necessarily also have consciousness. The > alternative is to say that you're happy with the idea that you may be > blind, deaf, unable to understand English etc. but neither you nor > anyone else has noticed. > > Gordon Swobe's response is that this thought experiment is ridiculous > and I should come up with another one that doesn't challenge the > self-evident fact that digital computers cannot be conscious. Gordon doesn't disagree with that proposition as-stated, even if he sometimes claims that he does (for some reason). He's consistently said that we should be able to engineer artificial consciousness, but that to do so requires more than a clever piece of software in a digital computer. So, I suggest that you rephrase the experiment so that it explicitly involves replacing neurons, cortices, or whole brains with microprocessor-driven prosthetics. We know that he believes the whole-brain version will be a zombie, but I haven't been able to discern any clear conclusions from him on the other two. He has said before that partial replacement only confuses the matter, implying that it's a useless thought experiment. I do not see why he would think that, though. The only coherent answer of his I remember goes something like this: a man has a damaged language center, and a surgeon replaces neurons with artificial substitutes one by one. 
This works so poorly that the surgeon must replace the entire brain before language function is returned, at which point the man is a philosophical zombie. But we always start with the assumption that computerized neurons do not work poorly, indeed that they "depict" ordinary neurons perfectly (using that depiction as a guide to manipulate their synthetic axons and such), and I've never seen him explain why he considers this assumption to be inherently false. From joe.dalton23 at yahoo.com Wed Feb 17 20:14:07 2010 From: joe.dalton23 at yahoo.com (Joe Dalton) Date: Wed, 17 Feb 2010 12:14:07 -0800 (PST) Subject: [ExI] Test from new subscriber Message-ID: <939179.93622.qm@web113907.mail.gq1.yahoo.com> Testing -- Subscribed to this list a few months ago... only got one message thru. Trying again from a new address. Joe From max at maxmore.com Wed Feb 17 20:49:31 2010 From: max at maxmore.com (Max More) Date: Wed, 17 Feb 2010 14:49:31 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled Message-ID: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> Damien says: >Alfio was making a general and quite rational point about some posters >to the list, I think, and in this case it does look as if Max jumped the >gun in citing spin stories rather than the original interview. Oh give me a break. The piece cited included a link to the original BBC story in the *first sentence*. I should note that I will not engage Alfio in discussion on this issue, since it's clear to me that anything that disagrees with his view is automatically dismissed. The latest evidence for that is his calling WUWT a "garbage blog". It is not. I could reply -- equally unreasonably -- by calling RealClimate a garbage blog. Neither of them is garbage, though both may contain some mistaken material. Max From lacertilian at gmail.com Wed Feb 17 21:03:24 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 17 Feb 2010 13:03:24 -0800 Subject: [ExI] Test from new subscriber In-Reply-To: <939179.93622.qm@web113907.mail.gq1.yahoo.com> References: <939179.93622.qm@web113907.mail.gq1.yahoo.com> Message-ID: Joe Dalton : > > Testing -- Subscribed to this list a few months ago... only got one message thru. Trying again from a new address. > > Joe Seems to be working on our side. From joe.dalton23 at yahoo.com Wed Feb 17 20:37:25 2010 From: joe.dalton23 at yahoo.com (Joe Dalton) Date: Wed, 17 Feb 2010 12:37:25 -0800 (PST) Subject: [ExI] Fw: Test from new subscriber Message-ID: <534353.36367.qm@web113909.mail.gq1.yahoo.com> Arg. Trying again. Is someone trying to keep me out?? ----- Forwarded Message ---- From: Joe Dalton To: Extropy-Chat Sent: Wed, February 17, 2010 2:14:07 PM Subject: Test from new subscriber Testing -- Subscribed to this list a few months ago... only got one message thru. Trying again from a new address. Joe From mbb386 at main.nc.us Wed Feb 17 21:11:03 2010 From: mbb386 at main.nc.us (MB) Date: Wed, 17 Feb 2010 16:11:03 -0500 (EST) Subject: [ExI] Test from new subscriber In-Reply-To: <939179.93622.qm@web113907.mail.gq1.yahoo.com> References: <939179.93622.qm@web113907.mail.gq1.yahoo.com> Message-ID: <38630.12.77.168.255.1266441063.squirrel@www.main.nc.us> Received here. Looks fine. Regards, MB > Testing -- Subscribed to this list a few months ago... only got one message thru. > Trying again from a new address.
> From thespike at satx.rr.com Wed Feb 17 21:11:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Feb 2010 15:11:57 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> References: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> Message-ID: <4B7C5B9D.7010502@satx.rr.com> On 2/17/2010 2:49 PM, Max More wrote: > Oh give me a break. The piece cited included a link to the original BBC > story in the *first sentence*. My goof, sorry. Didn't notice that, because the url was embedded behind a bolded phrase in the "Annotated Version of the Phil & Roger Show". Damien Broderick From natasha at natasha.cc Wed Feb 17 21:15:09 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 17 Feb 2010 16:15:09 -0500 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <4B7C534D.5040807@satx.rr.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <4B7C48B2.1030303@satx.rr.com> <20100217150739.8qmf6mzz4gokc8cc@webmail.natasha.cc> <4B7C534D.5040807@satx.rr.com> Message-ID: <20100217161509.hvzuebfszockcwkg@webmail.natasha.cc> Quoting Damien Broderick : > On 2/17/2010 2:07 PM, natasha at natasha.cc wrote: > >> You agree with the irrational claim of Alfio that Max is easily manipulated by >> PR spins? > > Alfio was making a general and quite rational point about some posters > to the list, I think, and in this case it does look as if Max jumped > the gun in citing spin stories rather than the original interview. This was not my take on it at all: The first sentence of the first paragraph of the article has a link to the original BBC Q&A interview. And in Max's post, he said that the piece was "interesting". Interesting to me means that it gets a person's attention - whether positive or negative - and does not mean that one agrees or disagrees. Further, asking why a person did not find it interesting invites an exchange about thinking processes (maybe not totally Socratic, but it opens dialogue). But, let's step back a moment: this topic is fascinating and, while I was out of the country (in Aussie land) when it broke, it got a heck of a lot of attention in lots of circles, and this BBC-WUWT piece is the first I personally have read about it since that time ... so for me anyway, it is "interesting" and, mind you, I am not saying that I agree with it or disagree with it. I am merely absorbing. > Calling Alfio "irrational" doesn't get us very far in advancing the > discussion. Let's step back a moment: A perspective that lacks understanding (in this case an understanding of a person's intention, which we do not know yet because Max is not in this conversation and I do not speak for him) is not rational, especially when taking a post to court because it did not include a URL and taking "interesting" as more loaded than it was intended to be. If you read my first post in this thread concerning the issue of global warming and the discourse surrounding global warming, I wrote: "Nevertheless the biggest problem and disappointment is that, while everyone seems to be annoyed and saddened, there is a lack of intelligent communication between all parties in viewing the situation from diverse perspectives." This sums up my view.
> Christopher commented: "I wonder if it wouldn't be possible to disagree > with their positions, though, without also presuming that the people > you disagree with are wicked?" Ditto "irrational." This is incorrect. Wicked does not equal irrational. And again here is another supposition. Irrational in this instance means a "LACK OF UNDERSTANDING". However, it is very true that irrational can be taken as pejorative - which is not how I meant it - I simply found the lack of dialogue, stated in an absolutist fashion, to be the result of a missed understanding. Nevertheless, let's move on (or should I say backwards): What does "manipulated" mean? Let's see: I "assume" that Christopher means being "controlled shrewdly" or maybe "deviously". But let's suppose the article does have a devious characteristic - does this result in a person being influenced by deception just because s/he says it is interesting? I think not. > HOWEVER... that > doesn't mean some players in the supposed debate *aren't* > wicked--consider, by analogy, the decades-long and perhaps equivalent > role of corporate advocates for carcinogenic smoking. If that was not > wickedness, what is? It is arguable that climate change deniers are in > a similar position. I'd love to have a darn good discussion about players of debates, advertising, PR, marketing, etc. under a different thread subject line more appropriate to its contents. Best, Natasha From joe.dalton23 at yahoo.com Wed Feb 17 20:52:50 2010 From: joe.dalton23 at yahoo.com (Joe Dalton) Date: Wed, 17 Feb 2010 12:52:50 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled Message-ID: <960175.20814.qm@web113903.mail.gq1.yahoo.com> Puglisi: Mmm.... each chapter of the IPCC report has dozens of references. Can you really find twice that amount? Doubt it. But there's no shortage of skeptical peer-reviewed papers. e.g.: http://wattsupwiththat.com/2009/11/15/reference-450-skeptical-peer-reviewed-papers/#more-12801 JoeD From max at maxmore.com Wed Feb 17 21:24:54 2010 From: max at maxmore.com (Max More) Date: Wed, 17 Feb 2010 15:24:54 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled Message-ID: <201002172125.o1HLP3dO006994@andromeda.ziaspace.com> Thanks, Damien. Since the article I did cite linked so immediately to the BBC piece, I didn't think it necessary to separately include it in my post. Still, in one way, I may have to agree that I might have "jumped the gun" in posting that. I did think it was interesting that a central figure was moderating his position, but perhaps I should have gone to the trouble of analyzing where I agreed, disagreed, or doubted the summary and inferences made by Goklany. I should also have anticipated the renewed firestorm it might set off. Or perhaps this was a sinister plot by me to draw attention away from the endless consciousness/Searle discussion... >On 2/17/2010 2:49 PM, Max More wrote: > > > Oh give me a break. The piece cited included a link to the original > > BBC story in the *first sentence*. > >My goof, sorry. Didn't notice that, because the url was embedded behind >a bolded phrase in the "Annotated Version of the Phil & Roger Show". > >Damien Broderick Max ------------------------------------- Max More, Ph.D.
Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From max at maxmore.com Wed Feb 17 21:35:26 2010 From: max at maxmore.com (Max More) Date: Wed, 17 Feb 2010 15:35:26 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled Message-ID: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Emlyn: >I haven't read the whole thing in detail, but >one early piece stuck out like a sore thumb, >which is the old saw that there is no >significant warming since 1998. That's just flat >out wrong, and it's wrong because 1998 itself >stuck out like a sore thumb; it was a >statistical anomaly, which anyone who wasn't >being entirely disingenuous would agree with. > >Here's a discussion of that issue, along with >graphs showing the 1998 sore thumb: > >http://scienceblogs.com/illconsidered/2006/04/warming-stopped-in-1998.php The source you cite (which is almost four years old now) seems to rely for the recent period exclusively on NASA GISS analysis. (References to CRU data are for other periods. I didn't see any comparison with UAH or RSS.) In contrast, the following piece... http://wattsupwiththat.com/2008/03/08/3-of-4-global-metrics-show-nearly-flat-temperature-anomaly-in-the-last-decade/ ...compares that analysis to three other sources and notes "NASA GISS land-ocean anomaly data showing a ten year trend of 0.151°C, which is about 5 times larger than the largest of the three metrics above, which is UAH at 0.028°C/ten years." Is there a good reason to rely completely on the source that seems out of alignment with the others? (I'm going to look at the two contrasting sources more closely when I have more time.) I'm not clear whether any or all of the four sources count as showing "statistically significant warming" (though RSS obviously does not, since it shows a very slight decline), but they do at least show warming greatly below IPCC trend. To be clear: Whether or not warming since 1995 or 1998 has stopped or considerably slowed down is of little importance. The orthodoxy and the skeptics can agree that one decade is too short to show anything significant about long-term trends. It does, however, raise additional doubts about AGW models. I haven't seen a good explanation of why the models completely fail to account for this -- and previous multi-decade, industrial-age pauses in warming, if CO2 really is the main driver. Not only is the one-decade/12-year record of little importance, I still have not seen adequate reason to maintain my doubts about the claim that century-long warming is definitely and entirely due to human activity rather than to a natural cyclical recovery from a cold period. But, obviously, that must be because I'm either stupid, evil, or probably both. ;-) Max ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From scerir at libero.it Wed Feb 17 21:55:03 2010 From: scerir at libero.it (scerir) Date: Wed, 17 Feb 2010 22:55:03 +0100 (CET) Subject: [ExI] QET Message-ID: <18113742.1858661266443703358.JavaMail.defaultUser@defaultHost> Henrique: > By non-local gravitational fields you mean putting gravity on a spaceship for instance? Now that would be interesting. Paul Simon sings: < "The problem is all inside your head", she said to me / The answer is easy if you take it logically [...]
> (from '50 Ways To Leave Your Lover', 1975) So, let us start from the beginning. In "Relativity and the Problem of Space" (1952), Albert Einstein wrote: "When a smaller box s is situated, relatively at rest, inside the hollow space of a larger box S, then the hollow space of s is a part of the hollow space of S, and the same "space", which contains both of them, belongs to each of the boxes. When s is in motion with respect to S, however, the concept is less simple. One is then inclined to think that s encloses always the same space, but a variable part of the space S. It then becomes necessary to apportion to each box its particular space, not thought of as bounded, and to assume that these two spaces are in motion with respect to each other. Before one has become aware of this complication, space appears as an unbounded medium or container in which material objects swim around. But it must now be remembered that there is an infinite number of spaces, which are in motion with respect to each other. The concept of space as something existing objectively and independent of things belongs to pre-scientific thought, but not so the idea of the existence of an infinite number of spaces in motion relatively to each other." If we follow Einstein, and encode gravity in the geometry of space-time, matter curves space-time, and its metric is no longer fixed. However, space-time is still somehow represented by a *smooth continuum*. To restore the coherence of physics or - to say it better - to get a perfect coherence between GR and QT (not just the present "peaceful coexistence") one has to abandon the idea that space-time is fixed, immune to change. One has to encode gravity into the very geometry of space-time, thereby making this geometry *dynamical*. Thus, while spacetime can be defined by the objects themselves and their dynamics, the nonlocal (rectius: nonseparable) behaviour of entangled particles is well known, and these entangled particles should live not only in Hilbert spaces but also in a well-designed space-time. Now, a simple question would be: if spacetime is defined by objects, and if the nature of these objects may be quantal, can we say that spacetime may be 'nonlocal' (or 'nonlocally causal')? Does it make any sense? For general relativity completely ignores quantum effects, and we have learned that these effects become important both in the physics of the *small* and in the physics of long distance *correlations* (even between *spacelike separated* regions of the universe, at least in principle). It has been said that the primary goal of *quantum gravity* is to uncover the quantal structure of spacetime, and coarse-graining, backreaction, fluctuations and correlations may play an essential role in such a quest. Quantum gravity is not equivalent to a local field theory in the (bulk) spacetime, and there's a lot of powerful evidence that quantum gravity is not strictly local or causal (holography; getting the information out of the black hole; there is no connection operator in LQG and as a result the curvature operator has to be expressed in terms of holonomies and becomes non-local; etc.). Summing up. It is not about 'putting gravity on a space-ship'. It is more about thinking of space-time as something strictly dependent on the dynamics of massive objects and of quantal objects; it is more about the possibility of changing the gravitational field at-a-distance, via quantum entanglement correlations coupled to massive objects, or via more efficient quantum gravity mechanisms.
(Quantal randomness and related a-causality might still preserve the no-signaling postulate.) From alfio.puglisi at gmail.com Wed Feb 17 22:02:17 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 23:02:17 +0100 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> References: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> Message-ID: <4902d9991002171402p44079507p9ebe64e9576a6d9e@mail.gmail.com> On Wed, Feb 17, 2010 at 9:49 PM, Max More wrote: > I should note that I will not engage Alfio in discussion on this issue, > since it's clear to me that anything that disagrees with his view is > automatically dismissed. > Let's rewrite that: anything that disagrees with the current scientific understanding is dismissed. And not automatically. I usually try to show why, and I think that I posted more than enough links in the previous global warming thread. I am very disappointed that you think that I'm just defending a personal opinion. English is not my mother tongue, but I thought that I was writing at a decent enough level to be understood. We just got another link to WUWT from JoeD a few minutes ago. Oh, and another one from you while I'm writing this. I have seen so many links to it on this list, from multiple people, that it seems to have become the primary source of information for many people. Or at least, the source that they are going to cite. Alfio From alfio.puglisi at gmail.com Wed Feb 17 22:07:57 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 23:07:57 +0100 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <20100217144945.t2astxtwhwowssg8@webmail.natasha.cc> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <20100217144945.t2astxtwhwowssg8@webmail.natasha.cc> Message-ID: <4902d9991002171407p5ef97706sd378917a44e17434@mail.gmail.com> On Wed, Feb 17, 2010 at 8:49 PM, wrote: > Your observation is irrational. > > It's not. The global warming debate only exists because of PR, media and political spin. The scientific debate was over many years ago. I highly recommend Spencer Weart's history of global warming science: http://www.aip.org/history/climate/ Alfio > Natasha > > Quoting Alfio Puglisi : > > On Wed, Feb 17, 2010 at 5:55 PM, Natasha Vita-More > >wrote: >> >> Christopher Luebcke writes: >>> >>> "... I just find it sadly ironic, and a dark reflection on the petty, >>> fiercely partisan, politicized nature of what should in fact be a sober, >>> scientific discussion, that a man accused (in the court of political >>> opinion) of fraud should suddenly be taken at his word by those same >>> accusers, when that word apparently agrees with their position." >>> >>> Many of us find it frustrating. Nevertheless the biggest problem and >>> disappointment is that, while everyone seems to be annoyed and saddened, >>> there >>> is a lack of intelligent communication between all parties in viewing the >>> situation from diverse perspectives. >>> >>> Max is working to develop a level ground for discussion. Bravo to him. >>> >>> >> I don't agree.
>> Posting all those links to garbage blogs like WUWT, and >> quoting them as if they had any value instead of laughing at them (or >> despairing, depending on the mood), does a disservice to the Extropy list. >> It >> shows how a group of intelligent people can be easily manipulated by PR >> spin. I find it depressing. >> >> Alfio >> >> >> >> Best, >>> Natasha From lacertilian at gmail.com Wed Feb 17 22:40:56 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 17 Feb 2010 14:40:56 -0800 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Message-ID: Max More : > Not only is the one-decade/12-year record of little importance, I still have not seen adequate reason to maintain my doubts about the claim that century-long warming is definitely and entirely due to human activity rather than to a natural cyclical recovery from a cold period. Haven't seen reason to maintain your doubts, eh? Perhaps you should abandon them, and jump on the mankind-driven bandwagon instead! Couldn't resist. Personally I am more-or-less with Max here in saying that we do not yet have nearly enough evidence to support either view. Truly conclusive proof would require, at the very least, an alternate Earth (real or simulated) in which no species ever evolved to the point where it became a good idea to just start burning everything. It's fairly obvious, though, that if we are the chief cause of global warming then we have no idea what we should do differently to prevent it or even slow it down. http://en.wikipedia.org/wiki/Greenhouse_gas#Greenhouse_effects_in_Earth.27s_atmosphere http://www.aip.org/history/climate/othergas.htm http://www.geocraft.com/WVFossils/greenhouse_data.html Ugh, I actually hadn't read any of these before now (except the Wikipedia one, maybe, but I only skimmed it). Just Googled them up on the spot. We don't know anything! Damien Broderick : > I would far rather see money spent on developing plants that are not > sensitive to heat & cold (interestingly, when plants are bred for cold > tolerance, they often have heat tolerance as well, as "side effect."), on > efficient energy production (so we can create affordable microclimates and > deal with rising sea levels, if we have to), etc. In other words - figure > out how to DEAL with the problem, not STOP it.> Seconded. More heat just means more energy. Let's load up on Stirling engines* and start building cities underwater! No sense in waiting for the sea level to rise and do it for us. Or, you know, ideas that work. Like hardier plants. This is not my job. *Actually, as I understand it, global warming could more accurately be called global climate change. The hots get hotter and the colds get colder; everything, everywhere, grows extreme. So a continent-spanning array of Stirling engines might actually be useful, taking advantage of temperature differentials on global scales. Any of our geoengineers want to run the numbers on that?
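A rough back-of-envelope while we wait, since I can't resist trying to answer my own question (Python; the reservoir temperatures below are pure guesses on my part, and the Carnot bound is the only real physics in it):

# Upper bound on work extracted from a modest temperature differential.
# t_hot and t_cold are invented, illustrative numbers, not measurements.
t_hot = 300.0   # K: warm surface air on a mild day
t_cold = 280.0  # K: a cooler reservoir (deep water, night air)

carnot_limit = 1.0 - t_cold / t_hot  # ceiling for any heat engine
print("Carnot ceiling: %.1f%%" % (100.0 * carnot_limit))  # ~6.7%

# A real Stirling engine might manage a third of that ideal figure,
# so call it ~2% of the harvested heat flow converted into work.

A couple of percent of a diffuse, intermittent heat flow is not much to pay the construction bills with.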
I'll bet you it would only be economical if we could build such a device for almost nothing. From thespike at satx.rr.com Wed Feb 17 23:01:48 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Feb 2010 17:01:48 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Message-ID: <4B7C755C.6060104@satx.rr.com> On 2/17/2010 4:40 PM, Spencer Campbell wrote: > Actually, as I understand it, global warming could more accurately be > called global climate change. The hots get hotter and the colds get > colder; everything, everywhere, grows extreme. Yes, but the increased volatility is caused by an overall increase in trapped heat. So "global climate change" is a handy term to block idiots from waving "IT CAN'T BE GETTING HOTTER--THERE WAS SNOW HERE THIS WEEK!" mid-winter signs. But it's still warming on a global scale. Damien Broderick From lacertilian at gmail.com Wed Feb 17 23:17:01 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 17 Feb 2010 15:17:01 -0800 Subject: [ExI] Dresden Codak has our number Message-ID: I think Aaron Diaz might be reading the ongoing (and going, and going) Extropy-Chat consciousness debate. http://dresdencodak.com/2010/02/16/artificial-flight-and-other-myths-a-reasoned-examination-of-af-by-top-birds/ So uh, just in case: Hi Aaron. Update more. Thanks. Bye. (I am walking on thin ice by linking to something with so few qualifying statements. Of this, I am aware. What's the worst that could happen?) From kanzure at gmail.com Wed Feb 17 23:21:14 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 17 Feb 2010 17:21:14 -0600 Subject: [ExI] Dresden Codak has our number In-Reply-To: References: Message-ID: <55ad6af71002171521k4fed6cf2p7aa423fb2528ca4e@mail.gmail.com> On Wed, Feb 17, 2010 at 5:17 PM, Spencer Campbell wrote: > I think Aaron Diaz might be reading the ongoing (and going, and going) > Extropy-Chat consciousness debate. > > http://dresdencodak.com/2010/02/16/artificial-flight-and-other-myths-a-reasoned-examination-of-af-by-top-birds/ > > So uh, just in case: > > Hi Aaron. Update more. Thanks. Bye. I saw Aaron at the last Singularity Summit. I was talking with him for a while until I realized who he was. I had to interrupt him mid-sentence with "YOU ARE A GOD". Also, he's too humble. - Bryan http://heybryan.org/ 1 512 203 0507 From cluebcke at yahoo.com Wed Feb 17 23:30:20 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 15:30:20 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Message-ID: <758946.78766.qm@web111202.mail.gq1.yahoo.com> > we do not > yet have nearly enough evidence to support either view. I think you may be in the minority view there; a lot of people seem to think that there's quite enough evidence. > Truly> conclusive proof would require, at the very least, an alternate Earth > (real or simulated) in which no species ever evolved to the point > where it became a good idea to just start burning everything. This is in fact what climate modelers attempt to do, with imperfect but increasing accuracy (the accuracy of models being measured by "backcasting"--starting at a point some time in the past, running the model, and seeing if the predicted results match observations). 
See http://en.wikipedia.org/wiki/Global_climate_model#Accuracy_of_models_that_predict_global_warming for some examples of how this problem is addressed. No models are perfect and climate is notoriously difficult to model, but that doesn't mean we know nothing, and it doesn't mean we can't improve our knowledge. I will also second the proposition that the best use of our resources, outside of research dollars to improve our understanding and forecasting abilities, is to start planning adaptation now. Given where I live, this involves adding flippers to my earthquake preparedness kit :P ----- Original Message ---- > From: Spencer Campbell > To: ExI chat list > Sent: Wed, February 17, 2010 2:40:56 PM > Subject: Re: [ExI] Phil Jones acknowledging that climate science isn'tsettled > > Max More : > > Not only is the one-decade/12-year record of little importance, I still have > not seen adequate reason to maintain my doubts about the claim that century-long > warming is definitely and entirely due to human activity rather than to a > natural cyclical recovery from a cold period. > > Haven't seen reason to maintain your doubts, eh? Perhaps you should > abandon them, and jump on the mankind-driven bandwagon instead! > > Couldn't resist. > > Personally I am more-or-less with Max here in saying that we do not > yet have nearly enough evidence to support either view. Truly > conclusive proof would require, at the very least, an alternate Earth > (real or simulated) in which no species ever evolved to the point > where it became a good idea to just start burning everything. > > It's fairly obvious, though, that if we are the chief cause of global > warming then we have no idea what we should do differently to prevent > it or even slow it down. > > http://en.wikipedia.org/wiki/Greenhouse_gas#Greenhouse_effects_in_Earth.27s_atmosphere > > http://www.aip.org/history/climate/othergas.htm > > http://www.geocraft.com/WVFossils/greenhouse_data.html > > Ugh, I actually hadn't read any of these before now (except the > Wikipedia one, maybe, but I only skimmed it). Just Googled them up on > the spot. We don't know anything! > > Damien Broderick : > > I would far rather see money spent on developing plants that are not > > sensitive to heat & cold (interestingly, when plants are bred for cold > > tolerance, they often have heat tolerance as well, as "side effect."), on > > efficient energy production (so we can create affordable microclimates and > > deal with rising sea levels, if we have to), etc. In other words - figure > > out how to DEAL with the problem, not STOP it.> > > Seconded. > > More heat just means more energy. Let's load up on Stirling engines* > and start building cities underwater! No sense in waiting for the sea > level to rise and do it for us. > > Or, you know, ideas that work. Like hardier plants. > > This is not my job. > > > *Actually, as I understand it, global warming could more accurately be > called global climate change. The hots get hotter and the colds get > colder; everything, everywhere, grows extreme. So a continent-spanning > array of Stirling engines might actually be useful, taking advantage > of temperature differentials on global scales. Any of our geoengineers > want to run the numbers on that? I'll bet you it would only be > economical if we could build such a device for almost nothing. 
> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From gts_2000 at yahoo.com Wed Feb 17 23:16:48 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 17 Feb 2010 15:16:48 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <218661.53873.qm@web111208.mail.gq1.yahoo.com> Message-ID: <760637.43302.qm@web36501.mail.mud.yahoo.com> --- On Wed, 2/17/10, Christopher Luebcke wrote: > Could one detect or measure consciousness on the basis of "behaviors and > reports of subjective experiences" alone, without direct anatomical > knowledge of a "brain and nervous system"? No, I don't think so. Under those circumstances we could only speculate. > Conversely, does a system with a "brain and nervous system" > necessarily have consciousness, even in the absence of > "behaviors and reports of subjective experience"? If you have a brain and a nervous system but exhibit no associated behaviors then it seems to me that you have some serious neurologoical issues. What about you, Chris? Do you have a physical brain capable of having conscious thoughts? -gts From cluebcke at yahoo.com Wed Feb 17 23:31:36 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 15:31:36 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <4B7C755C.6060104@satx.rr.com> References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> <4B7C755C.6060104@satx.rr.com> Message-ID: <320317.89684.qm@web111209.mail.gq1.yahoo.com> Yes, every spring propels us towards Waterworld, and every fall sees us skidding towards the next ice age. ----- Original Message ---- > From: Damien Broderick > To: ExI chat list > Sent: Wed, February 17, 2010 3:01:48 PM > Subject: Re: [ExI] Phil Jones acknowledging that climate science isn'tsettled > > On 2/17/2010 4:40 PM, Spencer Campbell wrote: > > > Actually, as I understand it, global warming could more accurately be > > called global climate change. The hots get hotter and the colds get > > colder; everything, everywhere, grows extreme. > > Yes, but the increased volatility is caused by an overall increase in trapped > heat. So "global climate change" is a handy term to block idiots from waving "IT > CAN'T BE GETTING HOTTER--THERE WAS SNOW HERE THIS WEEK!" mid-winter signs. But > it's still warming on a global scale. > > Damien Broderick > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From emlynoregan at gmail.com Thu Feb 18 00:37:00 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 18 Feb 2010 11:07:00 +1030 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Message-ID: <710b78fc1002171637v76c67b76s781d9960af2faa4e@mail.gmail.com> On 18 February 2010 08:05, Max More wrote: > Emlyn: > >> I haven't read the whole thing in detail, but one early piece stuck out >> like a sore thumb, which is the old saw that there is no significant warming >> since 1998. That's just flat out wrong, and it's wrong because 1998 itself >> stuck out like a sore thumb; it was a statistical anomaly, which anyone who >> wasn't being entirely disingenuous would agree with. 
>> >> Here's a discussion of that issue, along with graphs showing the 1998 sore >> thumb: >> >> http://scienceblogs.com/illconsidered/2006/04/warming-stopped-in-1998.php > > The source you cite (which is almost four years old now), seems to rely for > the recent period exclusively on NASA GISS analysis. (References to CRU data > are for other periods. I didn't see any comparison with UAH or RSS.) > > In contrast, the following piece.. > http://wattsupwiththat.com/2008/03/08/3-of-4-global-metrics-show-nearly-flat-temperature-anomaly-in-the-last-decade/ > > ...compares that analysis to three other sources and notes "NASA GISS > land-ocean anomaly data showing a ten year trend of 0.151?C, which is about > 5 times larger than the largest of the three metrics above, which is UAH at > 0.028?C /ten years. " You know, what struck me about all the data presented in that article is that they include the anomolous data from approx '98, and nothing from before it. Have a look at those graphs, and tell me if you wouldn't see a more positive trend if you either excluded the oldest 12 months (the anomaly), or included a few more years before that? *Clearly* that would be the case. It's just exactly the same cherry picking as before. Have a look at the first graph: http://wattsupwiththat.files.wordpress.com/2008/03/elnino-vs-hadcrut.png?w=510 and look at the clear spike in the data around '98. See it rises higher than all the other points? If you include that anomaly and begin there, of course you'll get a flat or shallow gradient on average over a ten year period, even though the temperature (excluding that anomaly) is clearly continuing to trend strongly upward. The irony here is that people are using an anomalously hot year in a particular way to obscure the upward trend. Hilarious. I'll bet you that in a couple of years, these kinds of "skeptics" stop using the 10 year data, and start finding an excuse to use a 12 year window, 13 years, etc, for some reason, *or* inexplicably keep using the current data sets (which stop in 2007). > > Is there a good reason to rely completely on the source that seems out of > alignment with the others? (I'm going to look at the two contrasting sources > more closely when I have more time.) I don't think it's out of alignment. If he'd taken the CRU data over the same period, wouldn't it be similarly crippled by the initial anomaly? > > I'm not clear whether any or all of the four sources count as showing > "statistically significant warming" (though RSS obviously does not, since it > shows a very slight decline), but they do at least show warming greatly > below IPCC trend. really? > > To be clear: Whether or not warming since 1995 or 1998 has stopped or > considerably slowed down is of little importance. The orthodoxy and the > skeptics can agree that one decade is too short to show anything significant > about long-term trends. It does, however, raise additional doubts about AGW > models. I haven't seen a good explanation of why the models completely fail > to account for this -- and previous multi-decade, industrial-age pauses in > warming, if CO2 really is the main driver. > > Not only is the one-decade/12-year record of little importance, I still have > not seen adequate reason to maintain my doubts about the claim that > century-long warming is definitely and entirely due to human activity rather > than to a natural cyclical recovery from a cold period. As to that, it's a different claim, and none of the previous stuff speaks to it, absolutely. 
All I was pointing out was that there is an anomaly at 1998, and it is clearly disingenuous to include it at the start of a period then do a simple regression. Why that is important, is that it speaks to the motive of the person using the argument. Someone with solid ground to stand on, with a rational basis for their position, simply wont also include discredit arguments like the no warming since 1998 one, because it would undermine their otherwise grounded opinion and make them look like a liar. > But, obviously, that must be because I'm either stupid, evil, or probably > both. ;-) > > Max I'd never call you any of those things, that'd be crazy. But, doesn't the difference in styles of argument alone tell you something about the global warming hypothesis? Doesn't the way that the no-warming side keeps using discredited arguments, slipping from position to position ("there is no warming" becomes "well, ok, maybe there is warming, but it's not anthropogenic" becomes "well ok maybe there is some anthropogenic warming, but it's too late to act"), doesn't all that raise any red flags about what is going on here? -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From cluebcke at yahoo.com Thu Feb 18 00:28:21 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 16:28:21 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <760637.43302.qm@web36501.mail.mud.yahoo.com> References: <760637.43302.qm@web36501.mail.mud.yahoo.com> Message-ID: <209813.32982.qm@web111211.mail.gq1.yahoo.com> > > Could one detect or measure consciousness on the basis of "behaviors and > > reports of subjective experiences" alone, without direct anatomical > > knowledge of a "brain and nervous system"? > > No, I don't think so. Under those circumstances we could only speculate. I don't see how adding a knowledge of anatomy moves our position from speculation to certainty. > > Conversely, does a system with a "brain and nervous system" > > necessarily have consciousness, even in the absence of > > "behaviors and reports of subjective experience"? > > If you have a brain and a nervous system but exhibit no associated behaviors > then it seems to me that you have some serious neurologoical issues. Doubtless, but you didn't answer the question. > What about you, Chris? Do you have a physical brain capable of having conscious > thoughts? I would prefer not to partake in rhetorical questions; life's too short. I suspect you have a point primed and ready to fire for the answer that I'm bound to give to such a question, and it would save time if you simply made it. For the record, I am not interested in detecting consciousness in myself, but in other systems. ----- Original Message ---- > From: Gordon Swobe > To: ExI chat list > Sent: Wed, February 17, 2010 3:16:48 PM > Subject: Re: [ExI] Semiotics and Computability > > --- On Wed, 2/17/10, Christopher Luebcke wrote: > > > Could one detect or measure consciousness on the basis of "behaviors and > > reports of subjective experiences" alone, without direct anatomical > > knowledge of a "brain and nervous system"? > > No, I don't think so. Under those circumstances we could only speculate. > > > Conversely, does a system with a "brain and nervous system" > > necessarily have consciousness, even in the absence of > > "behaviors and reports of subjective experience"? 
> > If you have a brain and a nervous system but exhibit no associated behaviors > then it seems to me that you have some serious neurological issues. > > What about you, Chris? Do you have a physical brain capable of having conscious > thoughts? > > -gts From joe.dalton23 at yahoo.com Thu Feb 18 00:33:58 2010 From: joe.dalton23 at yahoo.com (Joe Dalton) Date: Wed, 17 Feb 2010 16:33:58 -0800 (PST) Subject: [ExI] K2, fake pot, a new massive threat to society? Message-ID: <925660.90709.qm@web113904.mail.gq1.yahoo.com> Ahhhh! Noooo! People are getting high and having fun and we don't have no law that lets us throw them in jail! The world's coming to an end! http://blogs.pitch.com/plog/2009/11/product_review_will_k2_synthetic_marijuana_get_you_high.php and Fake pot that acts real stymies law enforcement http://www.msnbc.msn.com/id/35444158/ns/health-addictions/ From msd001 at gmail.com Thu Feb 18 01:13:34 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 17 Feb 2010 20:13:34 -0500 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <466012.12578.qm@web111205.mail.gq1.yahoo.com> <963975.24812.qm@web36502.mail.mud.yahoo.com> Message-ID: <62c14241002171713h7a1ad5c1xf6a6107846dfd396@mail.gmail.com> On Tue, Feb 16, 2010 at 5:56 AM, Stathis Papaioannou wrote: > I have proposed the example of a brain which has enough intelligence > to know what the neurons are doing: "neuron no. 15,576,456,757 in the > left parietal lobe fires in response to noradrenaline, then breaks > down the noradrenaline by means of MAO and COMT", and so on, for every > brain event. That would be the equivalent of the man in the CR: there > is understanding of the low level events, but no understanding of the > high level intelligent behaviour which these events give rise to. Do > you see how there might be *two* intelligences here, a high level and > a low level one, with neither necessarily being aware of the other? I had a thought to add the following twist to the CR: The man in the box has no knowledge of the symbols he's manipulating on his first day on the job. Over time, he notices a correlation between certain values in his lookup table(s) and the food slot opening and a tray being slid in... I understand the man in the room is a metaphor for rules-processing by rote, but what if we take the literal approach that he IS a man - even a supremely gifted intellectual who is informed that eventually these symbols will reveal the means by which he can escape? This scenario segues into the boxing problem of keeping a recursively improving AI constrained by 'friendliness' or some other artificially added bounds. (I understand that FAI is about being inherently friendly and remaining friendly after infinite recursion.) So assuming the man in the box has an infinite supply of pen/paper with which to keep notes on the relationship of input and output (as well as his lookup table for I/O transformations) - does it change the thought experiment considerably if there is motivation for escaping the room by learning how to manipulate the symbols? From steinberg.will at gmail.com Thu Feb 18 01:23:17 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 17 Feb 2010 20:23:17 -0500 Subject: [ExI] K2, fake pot, a new massive threat to society?
In-Reply-To: <925660.90709.qm@web113904.mail.gq1.yahoo.com> References: <925660.90709.qm@web113904.mail.gq1.yahoo.com> Message-ID: <4e3a29501002171723y211a33d6v41baab037a81e949@mail.gmail.com> What's better, if one really wanted they could order a gram or two of pure JWH-018 or some of the other synthetic cannabinoids, soak some cigarettes/smokable herbs in a solution of it and sell the "buds" at a huge premium, which is pretty much what those K2 folk are doing (and the Spice guys too). Of course, methinks money would be perhaps better allocated to one of the wonderful 2C compounds... and it's only a matter of time before those sweeter RCs start looking different enough from illegal chemicals that they'll be legal too, and the age of legal 'cid cometh... A boy can dream, can't he? From emlynoregan at gmail.com Thu Feb 18 01:48:10 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 18 Feb 2010 12:18:10 +1030 Subject: [ExI] Chemists deserve more credit: Atoms, Einstein, and the Matthew Effect Message-ID: <710b78fc1002171748l73316747xa75fa249927c6770@mail.gmail.com> Some more non-controversial controversy for the list seems like a good idea. I thought this article was interesting. ---------- Forwarded message ---------- From: Newsfeed to Email Gateway Date: 18 February 2010 12:03 Subject: Metamodern (1 new item) To: emlynoregan at gmail.com Metamodern (1 new item) Item 1 (02/17/10 23:41:52 UTC): Chemists deserve more credit: Atoms, Einstein, and the Matthew Effect [image: Cork cells, from Hooke's Micrographia] Johann Josef Loschmidt Chemist, atomic scientist Chemists understood the atomic structure of molecules in the 1800s, yet many say that Einstein established the existence of atoms in a paper on Brownian motion, "Die von der Molekularkinetischen Theorie der Wärme Geforderte Bewegung von in ruhenden Flüssigkeiten Suspendierten Teilchen", published in 1905. This is perverse, and has seemed strange to me ever since I began reading the history of organic chemistry. Chemists often don't get the credit they deserve, and this provides an outstanding example. For years, I've read statements like this: [Einstein] offered an experimental test for the theory of heat and proof of the existence of atoms... ["The Hundredth Anniversary of Einstein's Annus Mirabilis"] Perhaps this was so for physicists in thrall (or opposition) to the philosophical ideas of another physicist, Ernst Mach; he had odd convictions about the relationship between primate eyes and physical reality, and denied the reality of invisible atoms. Confusion among physicists, however, gives reason for more (not less!) respect for the chemists who had gotten the facts right long before, and in more detail: that matter consists of atoms of distinct chemical elements, that the atoms of different elements have specific ratios of mass, and that molecules consist not only of groups of atoms, but of atoms linked by bonds ("Verwandtschaftseinheiten") to form specific structures. When I say "more detail", I mean a *lot* more detail than merely inferring that atoms exist. For example, organic chemists had deduced that carbon atoms form four bonds, typically (but not always) directed tetrahedrally, and that the resulting molecules can as a consequence have left- and right-handed forms. The chemists' understanding of bonding had many non-trivial consequences. For example, it made the atomic structure of benzene a problem, and made a six-membered ring of atoms with alternating single and double bonds a solution to that problem. Data regarding chemical derivatives of benzene indicated a further problem, leading to the inference that the six bonds are equivalent. Decades later, quantum mechanics provided the explanation. The evidence for these detailed and interwoven facts about atoms included a range of properties of gases, the compositions of compounds, the symmetric and asymmetric shapes of crystals, the rotation of polarized light, and the specific numbers of chemically distinct forms of molecules with related structures and identical numbers of atoms. And chemists not only understood many facts about atoms, they understood how to make new molecular structures, pioneering the subtle methods of organic synthesis that are today an integral part of the leading edge of atomically precise nanotechnology. All this atom-based knowledge and capability was in place, as I said, before 1900, courtesy of chemical research by scientists including Dalton, van 't Hoff, Kekulé, and Pasteur. But was it really *knowledge?* By "knowledge", I don't mean to imply that universal consensus had been achieved at the time, or that knowledge can ever be philosophically and absolutely certain, but I think the term fits: A substantial community of scientists had a body of theory that explained a wide range of phenomena, including the many facets of the kinetic theory of gases and a host of chemical transformations, and more. That community of scientists grew, and progressively elaborated this body of atom-based theory and technology up to the present day, and it was confirmed, explained, and extended by physics along the way. Should we deny that this constituted knowledge, brush it all aside, and credit 20th century physics with establishing that atoms even exist? As I said: perverse. But what about *quantitative* knowledge? There is a more modest claim for Einstein's 1905 paper: "the bridge between the microscopic and macroscopic world was built by A. Einstein: his fundamental result expresses a macroscopic quantity - the coefficient of diffusion - in terms of microscopic data (elementary jumps of atoms or molecules)." ["One and a Half Centuries of Diffusion: Fick, Einstein, Before and Beyond"] This claim for the primacy of physics also seems dubious. A German chemist, Johann Josef Loschmidt, had already used macroscopic data to deduce the size of molecules in a gas. He built this quantitative bridge in a paper, "Zur Grösse der Luftmoleküle", published in 1865. ------------------------------ I had overlooked Loschmidt's accomplishment before today. I knew of Einstein's though, and of a phenomenon that the sociologists of science call the Matthew Effect. ------------------------------ *See also:* - A Map of Science - How to Learn About Everything ------------------------------ -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From stathisp at gmail.com Thu Feb 18 01:49:01 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 18 Feb 2010 12:49:01 +1100 Subject: [ExI] Consciouness and paracrap In-Reply-To: References: <517167.25357.qm@web110408.mail.gq1.yahoo.com> Message-ID: On 18 February 2010 07:38, Spencer Campbell wrote: > Stathis Papaioannou : >> Several people have commented that we need a definition of >> consciousness to proceed, but I disagree. I think everyone knows what >> is meant by the word and so we can have a complete discussion without >> at any point defining it. > > Dude, I barely know what I mean by the word when I use it in my own > head. Are you talking about access consciousness? Phenomenal > consciousness? Reflexive consciousness? All of the above? > > http://www.def-logic.com/articles/silby011.html > > The reason I haven't supplied a rigorous definition for consciousness, > as I have for intelligence, is because I can't articulate the meaning > of it for myself. This, to me, does not seem ancillary to the > discussion; it seems to be the very root of the discussion, namely the > question, "what is consciousness?". > > Stathis Papaioannou : >> For those who say that consciousness does >> not really exist: consciousness is that thing you are referring to >> when you say that consciousness does not really exist. > > That's fair. There isn't any question of what I'm talking about when I > refer to the Flying Spaghetti Monster. > > I can describe the FSM to you in great detail, however. I can't do the > same with consciousness, except perhaps to say that, if it exists, it > occasionally compels normally sane people to begin a sentence with > "dude". You can't define it, but when I ask you if you are conscious now do you have to stop and think? It is this immediately understood sense I am referring to. This is not to say that further elaboration is useless, but you can go a long way discussing it without explicit definition. > Stathis Papaioannou : >> The purpose of the above is to show that it is impossible (logically >> impossible, not just physically impossible) to make a brain part, and >> hence a whole brain, that behaves exactly like a biological brain but >> lacks consciousness. Either it isn't possible to make such an >> artificial component at all, or else it is possible to make such a >> component but it will necessarily also have consciousness. The >> alternative is to say that you're happy with the idea that you may be >> blind, deaf, unable to understand English etc. but neither you nor >> anyone else has noticed. >> >> Gordon Swobe's response is that this thought experiment is ridiculous >> and I should come up with another one that doesn't challenge the >> self-evident fact that digital computers cannot be conscious. > > Gordon doesn't disagree with that proposition as-stated, even if he > sometimes claims that he does (for some reason). He's consistently > said that we should be able to engineer artificial consciousness, but > that to do so requires more than a clever piece of software in a > digital computer. > > So, I suggest that you rephrase the experiment so that it explicitly > involves replacing neurons, cortices, or whole brains with > microprocessor-driven prosthetics. We know that he believes the > whole-brain version will be a zombie, but I haven't been able to > discern any clear conclusions from him on the other two.
For example, it made the atomic structure of benzene a problem, and made a
six-membered ring of atoms with alternating single and double bonds a
solution to that problem. Data regarding chemical derivatives of benzene
indicated a further problem, leading to the inference that the six bonds
are equivalent. Decades later, quantum mechanics provided the explanation.

The evidence for these detailed and interwoven facts about atoms included
a range of properties of gases, the compositions of compounds, the
symmetric and asymmetric shapes of crystals, the rotation of polarized
light, and the specific numbers of chemically distinct forms of molecules
with related structures and identical numbers of atoms. And chemists not
only understood many facts about atoms, they understood how to make new
molecular structures, pioneering the subtle methods of organic synthesis
that are today an integral part of the leading edge of atomically precise
nanotechnology.

All this atom-based knowledge and capability was in place, as I said,
before 1900, courtesy of chemical research by scientists including Dalton,
van 't Hoff, Kekulé, and Pasteur.

But was it really *knowledge*? By "knowledge", I don't mean to imply that
universal consensus had been achieved at the time, or that knowledge can
ever be philosophically and absolutely certain, but I think the term fits:
A substantial community of scientists had a body of theory that explained
a wide range of phenomena, including the many facets of the kinetic theory
of gases and a host of chemical transformations, and more. That community
of scientists grew, and progressively elaborated this body of atom-based
theory and technology up to the present day, and it was confirmed,
explained, and extended by physics along the way. Should we deny that this
constituted knowledge, brush it all aside, and credit 20th century physics
with establishing that atoms even exist? As I said: perverse.

But what about *quantitative* knowledge? There is a more modest claim for
Einstein's 1905 paper:

"the bridge between the microscopic and macroscopic world was built by
A. Einstein: his fundamental result expresses a macroscopic quantity - the
coefficient of diffusion - in terms of microscopic data (elementary jumps
of atoms or molecules)." ["One and a Half Centuries of Diffusion: Fick,
Einstein, Before and Beyond"]

This claim for the primacy of physics also seems dubious. A German
chemist, Johann Josef Loschmidt, had already used macroscopic data to
deduce the size of molecules in a gas. He built this quantitative bridge
in a paper, "Zur Grösse der Luftmoleküle", published in 1865.

------------------------------

I had overlooked Loschmidt's accomplishment before today. I knew of
Einstein's though, and of a phenomenon that the sociologists of science
call the Matthew Effect.

------------------------------

*See also:*
- A Map of Science
- How to Learn About Everything

--
Emlyn
http://www.songsofmiseryanddespair.com - My show, Fringe 2010
http://point7.wordpress.com - My blog
From stathisp at gmail.com Thu Feb 18 01:49:01 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 18 Feb 2010 12:49:01 +1100
Subject: [ExI] Consciousness and paracrap
In-Reply-To:
References: <517167.25357.qm@web110408.mail.gq1.yahoo.com>
Message-ID:

On 18 February 2010 07:38, Spencer Campbell wrote:
> Stathis Papaioannou :
>> Several people have commented that we need a definition of
>> consciousness to proceed, but I disagree. I think everyone knows what
>> is meant by the word and so we can have a complete discussion without
>> at any point defining it.
>
> Dude, I barely know what I mean by the word when I use it in my own
> head. Are you talking about access consciousness? Phenomenal
> consciousness? Reflexive consciousness? All of the above?
>
> http://www.def-logic.com/articles/silby011.html
>
> The reason I haven't supplied a rigorous definition for consciousness,
> as I have for intelligence, is because I can't articulate the meaning
> of it for myself. This, to me, does not seem ancillary to the
> discussion; it seems to be the very root of the discussion, namely the
> question, "what is consciousness?".
>
> Stathis Papaioannou :
>> For those who say that consciousness does
>> not really exist: consciousness is that thing you are referring to
>> when you say that consciousness does not really exist.
>
> That's fair. There isn't any question of what I'm talking about when I
> refer to the Flying Spaghetti Monster.
>
> I can describe the FSM to you in great detail, however. I can't do the
> same with consciousness, except perhaps to say that, if it exists, it
> occasionally compels normally sane people to begin a sentence with
> "dude".

You can't define it, but when I ask you if you are conscious now do you
have to stop and think? It is this immediately understood sense I am
referring to. This is not to say that further elaboration is useless, but
you can go a long way discussing it without explicit definition.

> Stathis Papaioannou :
>> The purpose of the above is to show that it is impossible (logically
>> impossible, not just physically impossible) to make a brain part, and
>> hence a whole brain, that behaves exactly like a biological brain but
>> lacks consciousness. Either it isn't possible to make such an
>> artificial component at all, or else it is possible to make such a
>> component but it will necessarily also have consciousness. The
>> alternative is to say that you're happy with the idea that you may be
>> blind, deaf, unable to understand English etc. but neither you nor
>> anyone else has noticed.
>>
>> Gordon Swobe's response is that this thought experiment is ridiculous
>> and I should come up with another one that doesn't challenge the
>> self-evident fact that digital computers cannot be conscious.
>
> Gordon doesn't disagree with that proposition as-stated, even if he
> sometimes claims that he does (for some reason). He's consistently
> said that we should be able to engineer artificial consciousness, but
> that to do so requires more than a clever piece of software in a
> digital computer.
>
> So, I suggest that you rephrase the experiment so that it explicitly
> involves replacing neurons, cortices, or whole brains with
> microprocessor-driven prosthetics. We know that he believes the
> whole-brain version will be a zombie, but I haven't been able to
> discern any clear conclusions from him on the other two.
The thought experiment involves replacing brain components with artificial
components that perfectly reproduce the I/O behaviour of the original
components, but not the consciousness. Gordon agrees that this is
possible. However, he then either claims that the artificial components
will not behave the same as the biological components (even though it is
an assumption of the experiment that they will) or else says the
experiment is ridiculous.

> He has said before that partial replacement only confuses the matter,
> implying that it's a useless thought experiment. I do not see why he
> would think that, though.

Perhaps because he can see that it shows that his thesis that it is
possible to separate consciousness from behaviour is false. It's either
that or accept the possibility of partial zombies.

> The only coherent answer of his I remember goes something like this: a
> man has a damaged language center, and a surgeon replaces neurons with
> artificial substitutes one by one. This works so poorly that the
> surgeon must replace the entire brain before language function is
> returned, at which point the man is a philosophical zombie.
>
> But we always start with the assumption that computerized neurons do
> not work poorly, indeed that they "depict" ordinary neurons perfectly
> (using that depiction as a guide to manipulate their synthetic axons
> and such), and I've never seen him explain why he considers this
> assumption to be inherently false.

That's the problem: he could say that they can't work properly on the
grounds that there is something non-computable about neuronal behaviour,
but he does not. Instead, he agrees that they will work properly, then in
the next breath says they will not work properly.
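Put in programming terms, the assumption is only about input/output
equivalence. A minimal sketch (Python, with a deliberately silly threshold
"neuron" of my own invention, nothing like real physiology):

# Two components with the same I/O mapping: no behavioural test can
# distinguish them, whatever they happen to be made of.
class BiologicalNeuron:
    def fire(self, inputs):                # inputs: a list of 0/1 spikes
        return 1 if sum(inputs) >= 2 else 0

class ArtificialNeuron:
    def fire(self, inputs):                # different substrate, same mapping
        return int(sum(inputs) >= 2)

for spikes in ([0, 0], [1, 0, 1], [1, 1, 1]):
    assert BiologicalNeuron().fire(spikes) == ArtificialNeuron().fire(spikes)

# The replacement is behaviourally undetectable by construction; the only
# disputed question is whether the consciousness can nevertheless differ.

If Gordon grants the substitution at this level, the conclusion follows;
if he denies it, he is denying the experiment's premise, not its reasoning.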
--
Stathis Papaioannou

From max at maxmore.com Thu Feb 18 02:00:26 2010
From: max at maxmore.com (Max More)
Date: Wed, 17 Feb 2010 20:00:26 -0600
Subject: [ExI] Phil Jones acknowledging that climate science isn't settled
Message-ID: <201002180200.o1I20aeI019649@andromeda.ziaspace.com>

Emlyn,

See this is what happens... I'm spending way too much time on a point that
really doesn't matter... I just find it frustrating that it's difficult to
come to a conclusion even about a relatively narrow issue like temperature
trends in the most recent years.

>You know, what struck me about all the data
>presented in that article is that they include
>the anomalous data from approx '98, and nothing from before it.

Yes, you're right. That is a problem. Picking different base years would
affect the results. But how much? Look at the charts again -- it seems
clear that the moderated warming/actual cooling picks up in the last
years. So, just start in 1999, 2000, or 2001, and recompute the numbers. I
think the point would be essentially the same, although the analysis would
yield somewhat different numbers.

Although I don't have them at hand right now, I know I have seen other
analyses which definitely and explicitly avoided starting at 1998.
Googling around, I see this:
http://rankexploits.com/musings/2008/ipcc-projections-overpredict-recent-warming/
-- which argues that temperature trends *since 2001* fall well below IPCC
projections. I remember reading other reasonably credible sources that
agree with that, and others that disputed it. One commentator says:

John V (Comment#1009) March 10th, 2008 at 8:53 pm

Hmmm, my numbers don't quite match yours. I hope you don't mind a question
or two to track down the discrepancies:

Using monthly data, I get the following global trends from Jan 2001 to Feb 2008:
GISS: +0.83 C/century
HadC: -0.55 C/century
RSS: +0.41 C/century
UAH: -0.07 C/century
AVERAGE: +0.16 C/century

However, when I compute the trends from Jan 2002 to Feb 2008 I get:
GISS: -0.29 C/century
HadC: -1.67 C/century
RSS: -0.91 C/century
UAH: -1.71 C/century
AVERAGE: -1.14 C/century

I cannot evaluate those numbers, since I don't know Cochrane-Orcutt.
Anyway, as I said before (and I think you agreed), these short-term trends
really don't tell us anything, so I'm going to try not to spend more time
on this particular point.

Actually, it's really quite annoying that the author did use 1998 as a
base year. That just obscures the point that was to be made by opening him
(rightly) to cherry-picking charges. It is clear to me that there are
plenty of analyses suggesting that warming has recently been below trend
that do NOT depend on starting with 1998. For example:

The following post seems quite interesting and helpful. It explains why
Lindzen's claim that there has been no "statistically significant" warming
over the last 14 years (since 1995, not 1998, note) is "not wrong, per se,
but neither are they particularly robust":

A Cherry-Picker's Guide to Temperature Trends (down, flat - even up)
http://masterresource.org/?p=5240

and, related:
http://rogerpielkejr.blogspot.com/2009/10/cherry-pickers-guide-to-global.html

BTW, I notice that even Gavin at RealClimate acknowledges that "(2) It is
highly questionable whether this 'pause' is even real. It does show up to
some extent (no cooling, but reduced 10-year warming trend) in the Hadley
Center data"
http://www.realclimate.org/index.php/archives/2009/10/a-warming-pause/comment-page-7/#comment-138126

Again the GISS data gives a different result. (At a quick look, I don't
see him discuss the other two datasets.) Of course RealClimate attacks the
analysis, then that attack is attacked... There's more commentary on the
disagreement here (but, note, using only Hadley and GISS):
http://rankexploits.com/musings/2009/adding-apples-and-oranges-to-cherry-picking/

Among "lucia's" conclusions: "It does look like both RC [RealClimate] and
Lindzen are doing some cherry picking of different sorts as suggested by
Chip in his article."

I don't think you can rightly dismiss doubts about claims of continued or
accelerated recent warming by looking only for those who start their
charts with 1998. I agree completely, however, that it's right to
criticize those who do so.

Sorry for wasting your time and mine on this rather insignificant (but
annoyingly nagging) issue.
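For what it's worth, the arithmetic behind numbers like John V's is just
an ordinary least-squares slope over whichever window one picks, which is
exactly why the start year matters so much. A quick sketch (Python; the
anomaly series is made up for illustration, not real GISS/HadCRUT/RSS/UAH
data, and I'm ignoring the autocorrelation adjustment that Cochrane-Orcutt
supplies):

import numpy as np

# Fabricated monthly temperature anomalies in deg C -- illustration only.
np.random.seed(0)
months = np.arange(86)                      # Jan 2001 .. Feb 2008 inclusive
anoms = 0.0005 * months + np.random.normal(0.0, 0.1, months.size)

def trend_c_per_century(t, series):
    slope_per_month = np.polyfit(t, series, 1)[0]  # ordinary least-squares slope
    return slope_per_month * 12 * 100              # convert to deg C per century

print(trend_c_per_century(months, anoms))            # window starting Jan 2001
print(trend_c_per_century(months[12:], anoms[12:]))  # same data, Jan 2002 start

Shift the window by a year and the computed trend can even change sign
with no change in the underlying data, which is the whole cherry-picking
problem in miniature.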
Max

From cluebcke at yahoo.com Thu Feb 18 01:43:17 2010
From: cluebcke at yahoo.com (Christopher Luebcke)
Date: Wed, 17 Feb 2010 17:43:17 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To: <62c14241002171713h7a1ad5c1xf6a6107846dfd396@mail.gmail.com>
References: <466012.12578.qm@web111205.mail.gq1.yahoo.com> <963975.24812.qm@web36502.mail.mud.yahoo.com> <62c14241002171713h7a1ad5c1xf6a6107846dfd396@mail.gmail.com>
Message-ID: <465007.28439.qm@web111210.mail.gq1.yahoo.com>

It's also worth considering in what way teaching the man to follow all the
rules for manipulating symbols is a different activity than teaching him
Chinese. It may be different at the start, but I suspect that, if
successful, it amounts to the same thing at the end.

----- Original Message ----
> From: Mike Dougherty
> To: ExI chat list
> Sent: Wed, February 17, 2010 5:13:34 PM
> Subject: Re: [ExI] Semiotics and Computability
>
> On Tue, Feb 16, 2010 at 5:56 AM, Stathis Papaioannou wrote:
> > I have proposed the example of a brain which has enough intelligence
> > to know what the neurons are doing: "neuron no. 15,576,456,757 in the
> > left parietal lobe fires in response to noradrenaline, then breaks
> > down the noradrenaline by means of MAO and COMT", and so on, for every
> > brain event. That would be the equivalent of the man in the CR: there
> > is understanding of the low level events, but no understanding of the
> > high level intelligent behaviour which these events give rise to. Do
> > you see how there might be *two* intelligences here, a high level and
> > a low level one, with neither necessarily being aware of the other?
>
> I had a thought to add the following twist to CR: The man in the box
> has no knowledge of the symbols he's manipulating on his first day on
> the job. Over time, he notices a correlation between certain values in
> his lookup table(s) and the food slot opening and a tray being slid in...
> I understand the man in the room is a metaphor for rules-processing by
> rote, but what if we take the literal approach that he IS a man -
> even a supremely gifted intellectual who is informed that eventually
> these symbols will reveal the means by which he can escape? This
> scenario segues to the boxing problem of keeping a recursively
> improving AI constrained by 'friendliness' or some other artificially
> added bounds. (I understand that FAI is about being inherently
> friendly and remains friendly after infinite recursion)
>
> So assuming the man in the box has an infinite supply of pen/paper
> with which to keep notes on the relationship of input and output (as
> well as his lookup table for I/O transformations) - does it change the
> thought experiment considerably if there is motivation for escaping
> the room by learning how to manipulate the symbols?

From rafal.smigrodzki at gmail.com Thu Feb 18 03:55:34 2010
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Wed, 17 Feb 2010 22:55:34 -0500
Subject: [ExI] Phil Jones acknowledging that climate science isn't settled
In-Reply-To: <154647.10956.qm@web111212.mail.gq1.yahoo.com>
References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> <154647.10956.qm@web111212.mail.gq1.yahoo.com>
Message-ID: <7641ddc61002171955i33b65d34i80dcef79dc21669b@mail.gmail.com>

On Wed, Feb 17, 2010 at 12:33 PM, Christopher Luebcke wrote:
> Let me just add: I am not a climatologist, and therefore even if I had
> read a wide variety of peer-reviewed papers on the subject, I would not
> be qualified to determine whether I had made an accurate sampling, much
> less judge the papers on their merits.
>
> My claim comes in fact from paying attention to those organizations who
> are responsible for gathering and summarizing professional research and
> judgement on the subject: Not just IPCC, but NAS, AMS, AGU and AAAS are
> all organizations, far as I can tell, that are both qualified to comment
> on the matter, and who have supported the general position that AGW is real.
> If you are going to dismiss well-respected scientific bodies that hold
> positions contrary to your own as necessarily having been "infiltrated
> by environmental activists", then it is incumbent upon you to provide
> some evidence that such infiltration by such people has actually taken
> place.

### Actually it's the other way around - once I started having doubts (a
long time ago) about the catastrophic AGW scare, I decided to check some
crucial publications myself, including their reviews by minority
scientists. Luckily, climate science is not rocket science or string
theory, so a mentally agile layperson can get a good idea of the
plausibility of claims presented there, and can follow lines of
argumentation laid out by opposing sides sufficiently well to spot gross
aberrations, and very importantly, separate the actual claims advanced in
primary publications from the distortions introduced by secondary
publications (i.e. reviews in peer-reviewed literature), and complete
garbage spouted by tertiary publications (which are unfortunately the only
source of climate information for 99.9% of participants in the debate).

Now, once I became convinced in this way that the peer-reviewed literature
emphatically does not support AGW, I had to explain why the tertiary
literature and non-peer-reviewed communications of some climate scientists
seem to be telling a dramatically different story. So far my best
explanation is that there is a clique of environmental activists (James
Hansen, Thomas Karl, Michael Mann) who were appointed to a few key
positions in the science establishment, including review groups, and have
since then manufactured the AGW scare.

Of course, scientific organizations, such as NAS or APS, don't have an
opinion about science - they always rely on the input of a small number of
active researchers on any particular issue, and if their input (produced
by Mann et al.) is corrupt, their output in the form of policy statements
will be corrupt as well. GIGO.

But the question of exactly what mechanisms ("infiltration" or others)
caused the science establishment to fail so badly here is just a side
issue - the key question is whether AGW is real, and for that I can only
urge you to delve into the primary literature and form an opinion directly.

---------------------------------

> I wonder if it wouldn't be possible to disagree with their positions,
> though, without also presuming that the people you disagree with are
> wicked? That's the larger point of what I was trying to get at.

### The core group of about 50 climate activists (The Team, as they refer
to themselves) are wicked. They intentionally forged and misrepresented
data to advance a preconceived position. The remaining "thousands of
scientists" who lent their support to the AGW scare just failed to read
and critically analyze the literature, which makes them incompetent but
not wicked.

Rafal

From emlynoregan at gmail.com Thu Feb 18 04:01:03 2010
From: emlynoregan at gmail.com (Emlyn)
Date: Thu, 18 Feb 2010 14:31:03 +1030
Subject: [ExI] Phil Jones acknowledging that climate science isn't settled
In-Reply-To: <201002180200.o1I20aeI019649@andromeda.ziaspace.com>
References: <201002180200.o1I20aeI019649@andromeda.ziaspace.com>
Message-ID: <710b78fc1002172001w2274af4dyaf24e6ee712f09bf@mail.gmail.com>

On 18 February 2010 12:30, Max More wrote:
> Emlyn,
>
> See this is what happens... I'm spending way too much time on a point that
> really doesn't matter...
> I just find it frustrating that it's difficult to
> come to a conclusion even about a relatively narrow issue like temperature
> trends in the most recent years.

Hey, here we come to one of the really fundamental points about this. How
much time can you spend on stuff like this? On one hand, you want to be an
informed person, able to hold only opinions for which you have a rational
basis. On the other hand, the world is really vastly too complex for any
one person to do that and also function effectively.

We're stuck with this efficiency tradeoff - do I try to "focus across the
board", and fail miserably, or do I trust other people regarding the vast
majority of stuff, the stuff about which I know too little to comment? Of
course we do the latter. We temper it as much as possible with meta-level
techniques for judging knowledge not by the too-complex content but by the
approach and structure of those who work with it (eg: religious
"knowledge" can be rejected in part because of the unsupportable approach
and structure), debating techniques, prodding for inconsistencies, etc etc.

With regard to the climate issue, it's clear at least to me that it is far
too complex to understand as a lay person. You either dive in deep and
become at least an amateur climatologist (like Darwin was an amateur
biologist, ie: you live it), or you trust the people who do. As far as I
can tell, the various minor scandals notwithstanding, there are a lot of
specialised people in this field who know their stuff, and they say
there's a serious, anthropogenic warming problem. Most of what they argue
about is the details, the scale.

> I don't think you can rightly dismiss doubts about claims of continued or
> accelerated recent warming by looking only for those who start their
> charts with 1998. I agree completely, however, that it's right to
> criticize those who do so.

There's so much written about all facets of this problem that good broad
filters make sense IMO, and the first one I use is to remove anything
where people are boldly making specious claims, which they must either
know to be specious, or else they are incompetent. I did consider for a
while simply ignoring all content from the US, and I still think that
might provide a clearer picture, but since Lord Monckton turned up it
doesn't quite look as viable.

> Sorry for wasting your time and mine on this rather insignificant (but
> annoyingly nagging) issue.
>
> Max

Even though I cut a lot of what you wrote, I enjoyed your response Max,
there's a lot of meat in it. This whole climate change issue is
depressing, fundamentally because there's not much upside, but it's
important enough to be spending a few cycles on, so it's not a waste of
time. And, Searle's not involved, which is a relief, no?

--
Emlyn
http://www.songsofmiseryanddespair.com - My show, Fringe 2010
http://point7.wordpress.com - My blog

From emlynoregan at gmail.com Thu Feb 18 04:29:05 2010
From: emlynoregan at gmail.com (Emlyn)
Date: Thu, 18 Feb 2010 14:59:05 +1030
Subject: [ExI] IPCC errors: facts and spin
Message-ID: <710b78fc1002172029q7d329f3bp34d66da0c13dc0be@mail.gmail.com>

Apologies if this has turned up before. I haven't read AR4 (because I'm
not a masochistic freak), so I have to take this mostly at face value. It
seems reasonable though.
If I were to level any criticism at the IPCC report based on this, it
would be at the process; this laborious effort to coordinate the input of
thousands of volunteers to make several dense books, which then inevitably
have some problems, seems like a poor approach in the modern world. Surely
an online collaborative approach, with the ability to amend (bug fix),
would be vastly superior?

----

IPCC errors: facts and spin
http://www.realclimate.org/index.php/archives/2010/02/ipcc-errors-facts-and-spin/

Currently, a few errors - and supposed errors - in the last IPCC report
("AR4") are making the media rounds, together with a lot of distortion and
professional spin by parties interested in discrediting climate science.
Time for us to sort the wheat from the chaff: which of these putative
errors are real, and which not? And what does it all mean, for the IPCC in
particular, and for climate science more broadly?

Let's start with a few basic facts about the IPCC. The IPCC is not, as
many people seem to think, a large organization. In fact, it has only 10
full-time staff in its secretariat at the World Meteorological
Organization in Geneva, plus a few staff in four technical support units
that help the chairs of the three IPCC working groups and the national
greenhouse gas inventories group. The actual work of the IPCC is done by
unpaid volunteers - thousands of scientists at universities and research
institutes around the world who contribute as authors or reviewers to the
completion of the IPCC reports. A large fraction of the relevant
scientific community is thus involved in the effort. The three working
groups are:

Working Group 1 (WG1), which deals with the physical climate science
basis, as assessed by the climatologists, including several of the
Realclimate authors.

Working Group 2 (WG2), which deals with impacts of climate change on
society and ecosystems, as assessed by social scientists, ecologists, etc.

Working Group 3 (WG3), which deals with mitigation options for limiting
global warming, as assessed by energy experts, economists, etc.

Assessment reports are published every six or seven years and writing them
takes about three years. Each working group publishes one of the three
volumes of each assessment. The focus of the recent allegations is the
Fourth Assessment Report (AR4), which was published in 2007. Its three
volumes are almost a thousand pages each, in small print. They were
written by over 450 lead authors and 800 contributing authors; most were
not previous IPCC authors. There are three stages of review involving more
than 2,500 expert reviewers who collectively submitted 90,000 review
comments on the drafts. These, together with the authors' responses to
them, are all in the public record.

Errors in the IPCC Fourth Assessment Report (AR4)

As far as we're aware, so far only one - or at most two - legitimate
errors have been found in the AR4:

Himalayan glaciers: In a regional chapter on Asia in Volume 2, written by
authors from the region, it was erroneously stated that 80% of Himalayan
glacier area would very likely be gone by 2035. This is of course not the
proper IPCC projection of future glacier decline, which is found in Volume
1 of the report. There we find a 45-page, perfectly valid chapter on
glaciers, snow and ice (Chapter 4), with the authors including leading
glacier experts (such as our colleague Georg Kaser from Austria, who first
discovered the Himalaya error in the WG2 report).
There are also several pages on future glacier decline in Chapter 10
("Global Climate Projections"), where the proper projections are used e.g.
to estimate future sea level rise. So the problem here is not that the
IPCC's glacier experts made an incorrect prediction. The problem is that a
WG2 chapter, instead of relying on the proper IPCC projections from their
WG1 colleagues, cited an unreliable outside source in one place. Fixing
this error involves deleting two sentences on page 493 of the WG2 report.

Sea level in the Netherlands: The WG2 report states that "The Netherlands
is an example of a country highly susceptible to both sea-level rise and
river flooding because 55% of its territory is below sea level". This
sentence was provided by a Dutch government agency - the Netherlands
Environmental Assessment Agency, which has now published a correction
stating that the sentence should have read "55 per cent of the Netherlands
is at risk of flooding; 26 per cent of the country is below sea level, and
29 per cent is susceptible to river flooding". It surely will go down as
one of the more ironic episodes in its history when the Dutch parliament
last Monday derided the IPCC, in a heated debate, for printing information
provided by ... the Dutch government. In addition, the IPCC notes that
there are several definitions of the area below sea level. The Dutch
Ministry of Transport uses the figure 60% (below high water level during
storms), while others use 30% (below mean sea level). Needless to say, the
actual number mentioned in the report has no bearing on any IPCC
conclusions and has nothing to do with climate science, and it is
questionable whether it should even be counted as an IPCC error.

Some other issues

African crop yields: The IPCC Synthesis Report states: "By 2020, in some
countries, yields from rain-fed agriculture could be reduced by up to
50%." This is properly referenced back to chapter 9.4 of WG2, which says:
"In other countries, additional risks that could be exacerbated by climate
change include greater erosion, deficiencies in yields from rain-fed
agriculture of up to 50% during the 2000-2020 period, and reductions in
crop growth period (Agoumi, 2003)." The Agoumi reference is correct and
reported correctly. The Sunday Times, in an article by Jonathan Leake,
labels this issue "Africagate" - the main criticism being that Agoumi
(2003) is not a peer-reviewed study (see below for our comments on "gray"
literature), but a report from the International Institute for Sustainable
Development and the Climate Change Knowledge Network, funded by the US
Agency for International Development. The report, written by Moroccan
climate expert Professor Ali Agoumi, is a summary of technical studies and
research conducted to inform Initial National Communications from three
countries (Morocco, Algeria and Tunisia) to the United Nations Framework
Convention on Climate Change, and is a perfectly legitimate IPCC reference.

It is noteworthy that chapter 9.4 continues with "However, there is the
possibility that adaptation could reduce these negative effects (Benhin,
2006)." Some examples thereof follow, and then it states: "However, not
all changes in climate and climate variability will be negative, as
agriculture and the growing seasons in certain areas (for example, parts
of the Ethiopian highlands and parts of southern Africa such as
Mozambique), may lengthen under climate change, due to a combination of
increased temperature and rainfall changes (Thornton et al., 2006).
Mild climate scenarios project further benefits across African croplands
for irrigated and, especially, dryland farms." (Incidentally, the Benhin
and Thornton references are also "gray", but nobody has complained about
them. Could there be double standards amongst the IPCC's critics?)

Chapter 9.4 to us sounds like a balanced discussion of potential risks and
benefits, based on the evidence available at the time - hardly the stuff
for shrill "Africagate!" cries. If the IPCC can be criticized here, it is
that in condensing these results for its Synthesis Report, important
nuance and qualification were lost - especially the point that the risk of
drought (defined as a 50% downturn in rainfall) "could be exacerbated by
climate change", as chapter 9.4 wrote, rather than being outright caused
by climate change.

Trends in disaster losses: Jonathan Leake (again) in The Sunday Times
accused the IPCC of wrongly linking global warming to natural disasters.
The IPCC in a statement points out errors in Leake's "misleading and
baseless story", and maintains that the IPCC provided "a balanced
treatment of a complicated and important issue". While we agree with the
IPCC here, WG2 did include a debatable graph provided by Robert Muir-Wood
(although not in the main report but only as Supplementary Material). It
cited a paper by Muir-Wood as its source although that paper doesn't