From eric at m056832107.syzygy.com Mon Feb 1 01:03:29 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 1 Feb 2010 01:03:29 -0000
Subject: [ExI] How to ground a symbol
In-Reply-To: <975270.46265.qm@web36504.mail.mud.yahoo.com>
References: <20100131230539.5.qmail@syzygy.com> <975270.46265.qm@web36504.mail.mud.yahoo.com>
Message-ID: <20100201010329.5.qmail@syzygy.com>

Gordon:
>This kind of processing goes on in every software/hardware system.

Yes, and apparently you didn't understand me. I already addressed this issue later in the same message. It's at a different layer of abstraction.

It's fine to ignore parts of messages that you agree with. It's disingenuous to act as though a point hadn't been raised when you're actually ignoring it.

>> Come back after you've written a neural network
>> simulator and trained it to do something useful.
>
>Philosophers of mind don't care much about how "useful" it may seem.

While I haven't actually written a neural network simulator, I have written quite a few programs of similar complexity. I know from experience that things which seem simple, clear, and well defined when thought about in an abstract way are in fact complex, muddy, and ill-defined when one actually tries to implement them. Until such a system has been shown to do something useful, it's probably incomplete, and any intuition learned from writing it may well be useless. That's why I stipulated usefulness.

>I think artificial neural networks show great promise as decision
>making tools.

Natural ones do too.

>But 100 billion * 0 = 0.

But 100,000,000,000 * 0.000,000,000,01 = 1. Your argument depends on the axiomatic assumption that the level of understanding in a single simulated neuron is *exactly* zero. Even the tiniest amount of understanding in a programmed device (like a thermostat) devastates your argument. So you cling to the belief that understanding must be a binary thing, while the universe around you continues to work by degrees instead of absolutes.

Yes, philosophy deals with absolutes, but where it ignores shades of gray in the real world it gets things horribly wrong.

-eric

From gts_2000 at yahoo.com Mon Feb 1 01:47:46 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 31 Jan 2010 17:47:46 -0800 (PST)
Subject: [ExI] multiple realizability
In-Reply-To: <20100201010329.5.qmail@syzygy.com>
Message-ID: <418148.11027.qm@web36501.mail.mud.yahoo.com>

--- On Sun, 1/31/10, Eric Messick wrote:

>> This kind of processing goes on in every
>> software/hardware system.
>
> Yes, and apparently you didn't understand me. I
> already addressed this issue later in the same message.
> It's at a different layer of abstraction.

The layer of abstraction does not matter to me. What does matter is the extent to which the system's supposed mental operations are composed of computational processes operating over formal elements, i.e., to what extent it operates by formal programs. To that extent, in my view, the system lacks a mind.

One can conceive of an "artificially" constructed neural network that is in every respect identical to a natural brain, in which case that machine has a mind. So let's be clear: my objection is not that strong AI cannot happen. It is that it cannot happen in software/hardware systems, networked or stand-alone. To make my point even more clear: I reject the doctrine of multiple realizability.
I do not believe we can extract the mind from the neurological material that causes the subjective mental phenomena that characterize it, as if one could put a mind on a massive floppy disk and then load that "mental software" onto another substrate. I reject that idea as nothing more than a 21st century version of Cartesian mind/matter dualism.

The irony is that people who don't understand me call me the dualist, and suggest that I, rather than they, posit the existence of some mysterious mental substance that exists distinct from brain matter.

I hope Jeff Davis catches this message.

-gts

From stathisp at gmail.com Mon Feb 1 08:52:56 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 1 Feb 2010 19:52:56 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <491598.82004.qm@web36508.mail.mud.yahoo.com>
References: <491598.82004.qm@web36508.mail.mud.yahoo.com>
Message-ID:

2010/2/1 Gordon Swobe :

>> He is the whole system, but his intelligence is only a
>> small and inessential part of the system, as it could easily
>> be replaced by dumber components.
>
> Show me who or what has conscious understanding of the symbols.

The intelligence created by the system has understanding.

>> It's irrelevant that the man doesn't really
>> understand what he is doing. The ensemble of neurons doesn't
>> understand what it's doing either, and they are the whole system too.
>
> I have no objection to your saying that neither the system nor anything contained in it has conscious understanding, but in that case you need to understand that you don't disagree with me; you don't believe in strong AI any more than I do.

The system has understanding, but no part of the system either separately or taken as an ensemble has understanding. I've tried to explain this by giving several variations on the CRA, none of which you have directly responded to, so here they are again:

Suppose that each neuron has sufficient intelligence for it to know how to do its job. No neuron understands language, but the person does. There are many tiny specialised intelligences and one large general intelligence, and the two don't communicate. This is analogous to the extended CR.

Suppose that the neurons are connected as one entity with sufficient intelligence to know when to make its constituent parts fire. This entity doesn't understand language, but the person does. There are two intelligences, one specialised and one general, and the two don't communicate. This is analogous to the CR.

Suppose there are several men in the extended CR all doing their bit manipulating symbols. The men don't understand language, but the entity created by the system does. There are several small specialised intelligences (their general intelligence is not put to use) and one large general intelligence, and the two don't communicate. This is analogous to a normal brain.
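The division of labour in these three scenarios can be made concrete with a toy network. A minimal sketch in Python, with hand-picked weights (all hypothetical): each unit below only thresholds a weighted sum of its inputs, and no single unit computes XOR, yet the ensemble as a whole does.

    # Each "neuron" knows only its own small job: weigh inputs, compare to a threshold.
    def unit(weights, bias, inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

    def network(x1, x2):
        h1 = unit([1, 1], -0.5, [x1, x2])     # fires if either input is on (OR)
        h2 = unit([1, 1], -1.5, [x1, x2])     # fires only if both inputs are on (AND)
        return unit([1, -2], -0.5, [h1, h2])  # OR but not AND: XOR

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", network(a, b))  # prints the XOR truth table

Whether one wants to call what the ensemble does "understanding" is the question at issue; the sketch only shows that a capability can exist at the system level without existing in any part.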
-- Stathis Papaioannou

From stathisp at gmail.com Mon Feb 1 10:04:03 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 1 Feb 2010 21:04:03 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <732400.27938.qm@web36501.mail.mud.yahoo.com>
References: <20100131182926.5.qmail@syzygy.com> <732400.27938.qm@web36501.mail.mud.yahoo.com>
Message-ID:

On 1 February 2010 06:22, Gordon Swobe wrote:
> --- On Sun, 1/31/10, Eric Messick wrote:
>> This was the start of a series of posts where you said that
>> someone with a brain that had been partially replaced with
>> programmatic neurons would behave as though he was at least partially
>> not conscious. You claimed that the surgeon would have to
>> replace more and more of the brain until he behaved as though he was
>> conscious, but had been zombified by extensive replacement.
>
> Right, and Stathis' subject will eventually pass the TT just as your subject will in your thought experiment. But in both cases the TT will give false positives. The subjects will have no real first-person conscious intentional states.

I think you have tried very hard to avoid discussing this rather simple thought experiment. It has one premise, call it P:

P: It is possible to make artificial neurons which behave like normal neurons in every way, but lack consciousness.

That's it! Now, when I ask if P is true you have to answer "Yes" or "No". Is P true?

OK, assuming P is true, what happens to a person's behaviour and to his experiences if the neurons in a part of his brain with an important role in consciousness are replaced with these artificial neurons?

I'll answer the first question for you: his behaviour must remain unchanged. It must remain unchanged because the artificial neurons behave in a perfectly normal way in their interactions with normal neurons, sensory organs and effector organs, according to P. If they don't, then P is false, and you said that P is true. Can you see a way that I haven't seen whereby it might *not* be a contradiction to claim that the person's neurons will behave normally but the person will behave differently?

OK, the person's behaviour remains unchanged, by definition, if P is true. What about his experiences? The classic example here is visual perception. If P is true, then the person would go blind; but if P is true, he is also forced to behave as if he has normal vision. So internally, either he must not notice that he is blind, or he must notice that he is blind but be unable to communicate it. The latter is impossible for the same reasons as it is impossible that his behaviour changes: the neurons in his brain which do the thinking are also constrained to behave normally. That leaves the first option, that he goes blind but doesn't notice. If this idea is coherent to you, then you have to admit that you might right now be blind and not know it. However, you have clearly stated that you think this is preposterous: a zombie doesn't know it's a zombie, but you know you're not a zombie, and you would certainly know if you suddenly went blind. (As a matter of fact, some people *don't* recognise when they go blind - it's called Anton's syndrome - but these people also behave abnormally, so they aren't zombies or partial zombies.)

Where does that leave you? I think you have to say you were mistaken in saying P is true. It isn't possible to make artificial neurons which behave like normal neurons in every way but lack consciousness. Can you see another way out that I haven't seen?
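The step from P to unchanged behaviour can also be put in program form. A minimal sketch in Python, with neurons reduced to caricature fire/no-fire rules (entirely hypothetical): if a replacement component reproduces the original's input-output behaviour exactly, then no probe of the surrounding system can register the swap, which is all the argument needs.

    class BiologicalNeuron:
        def __init__(self, threshold):
            self.threshold = threshold

        def respond(self, stimulus):
            return stimulus >= self.threshold      # fire or don't fire

    class ArtificialNeuron:
        """Different innards, identical input-output behaviour (premise P)."""
        def __init__(self, threshold):
            self._rule = lambda s: s >= threshold  # a stored rule, not "biology"

        def respond(self, stimulus):
            return self._rule(stimulus)

    def brain_output(neurons, stimuli):
        return [n.respond(s) for n, s in zip(neurons, stimuli)]

    stimuli = [0.2, 0.7, 0.9, 0.1]
    before = brain_output([BiologicalNeuron(0.5)] * 4, stimuli)
    after = brain_output([ArtificialNeuron(0.5)] * 4, stimuli)
    assert before == after   # the swap is undetectable from outside, by construction

The assert is the whole point: it cannot fail without contradicting the stated premise.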
-- Stathis Papaioannou

From stefano.vaj at gmail.com Mon Feb 1 10:16:10 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 1 Feb 2010 11:16:10 +0100
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <20100129192646.5.qmail@syzygy.com>
References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <20100129192646.5.qmail@syzygy.com>
Message-ID: <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com>

On 29 January 2010 20:26, Eric Messick wrote:
> Meaning is attached to word symbols when the word symbols are
> associated with sense symbols, not with other word symbols.

Not all symbols are words - and in fact the word "three" can be associated with the number "3" - but "sense symbols" sounds like a dubious and redundant concept.

-- Stefano Vaj

From gts_2000 at yahoo.com Mon Feb 1 12:53:49 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 1 Feb 2010 04:53:49 -0800 (PST)
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To:
Message-ID: <641209.68117.qm@web36506.mail.mud.yahoo.com>

--- On Mon, 2/1/10, Stathis Papaioannou wrote:

> The system has understanding, but no part of the system
> either separately or taken as an ensemble has understanding.
> I've tried to explain this by giving several variations on the
> CRA, none of which you have directly responded to

Because that answer doesn't make any sense to me, Stathis. Looks like you want to skirt the issue by asserting that the system understands things that the man, *considered as the system*, does not understand. You do this by imagining a fictional third entity that you call the "ensemble of neurons" that exists independently of the system. But the ensemble is the system.

Did you read the actual target article? Notice that the system AND the neurons "taken as an ensemble" understand the stories in English but they do not understand the stories in Chinese. Please explain why the ensemble and the system understand English but not Chinese. Why the difference?

-gts

From stathisp at gmail.com Mon Feb 1 13:14:09 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 2 Feb 2010 00:14:09 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <641209.68117.qm@web36506.mail.mud.yahoo.com>
References: <641209.68117.qm@web36506.mail.mud.yahoo.com>
Message-ID:

On 1 February 2010 23:53, Gordon Swobe wrote:
> --- On Mon, 2/1/10, Stathis Papaioannou wrote:
>
>> The system has understanding, but no part of the system
>> either separately or taken as an ensemble has understanding.
>> I've tried to explain this by giving several variations on the
>> CRA, none of which you have directly responded to
>
> Because that answer doesn't make any sense to me, Stathis. Looks like you want to skirt the issue by asserting that the system understands things that the man, *considered as the system*, does not understand. You do this by imagining a fictional third entity that you call the "ensemble of neurons" that exists independently of the system. But the ensemble is the system.

Could you respond to the specific examples I have used to demonstrate this apparently non-obvious point? The neurons do not understand language, they probably don't "understand" anything, and if they got together on a day off to talk about it they still wouldn't understand anything. And yet acting in concert, they produce this new entity, the person, who does understand language.
Note that it works both ways: the person, who is very much more intelligent than the neurons, doesn't have a clue what is going on in his head when he thinks either. It's his head, so how is this possible?

> Did you read the actual target article? Notice that the system AND the neurons "taken as an ensemble" understand the stories in English but they do not understand the stories in Chinese. Please explain why the ensemble and the system understand English but not Chinese. Why the difference?

You have to acknowledge that there are different levels of abstraction. The man understands English, but that's completely irrelevant to his mechanistic symbol manipulation. It could be that a lone clever neuron in his frontal lobe understands Russian and recites Pushkin while squirting its neurotransmitters, but that has nothing to do with the man understanding Russian, since it does not in any way impact on the operation of his language centre; and conversely, the clever Russian-speaking neuron does not necessarily have any idea what the man is up to, nor any knowledge of English or Chinese.

-- Stathis Papaioannou

From gts_2000 at yahoo.com Mon Feb 1 13:29:15 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 1 Feb 2010 05:29:15 -0800 (PST)
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To:
Message-ID: <452387.36160.qm@web36507.mail.mud.yahoo.com>

--- On Mon, 2/1/10, Stathis Papaioannou wrote:

>> Right, and Stathis' subject will eventually pass the
>> TT just as your subject will in your thought experiment. But
>> in both cases the TT will give false positives. The subjects
>> will have no real first-person conscious intentional
>> states.
>
> I think you have tried very hard to avoid discussing this
> rather simple thought experiment. It has one premise, call it P:

I didn't avoid anything. We went over it a million times. :)

> P: It is possible to make artificial neurons which behave
> like normal neurons in every way, but lack consciousness.

P = true if we define behavior as you've chosen to define it: the exchange of certain neurotransmitters into the synapses at certain times and other similar exchanges between neurons.

I reject as absurd, for example, your theory that a brain the size of Texas constructed of giant neurons made of beer cans and toilet paper will have consciousness merely by virtue of those beer cans squirting neurotransmitters betwixt themselves in the same patterns that natural neurons do. I also reject, in the first place, your implied assumption that the neuron is necessarily the atomic unit of the brain.

> OK, assuming P is true, what happens to a person's
> behaviour and to his experiences if the neurons in a part of his
> brain with an important role in consciousness are replaced with these
> artificial neurons?

As I explained many times, because your artificial neurons will not help the patient have complete subjective experience, and because experience affects behavior in healthy people, the surgeon will need to keep reprogramming the artificial neurons and most likely replacing and reprogramming other neurons until finally, at long last, he creates a patient that passes the Turing test. But that patient will not have any better quality of consciousness than he started with, and may become far worse off subjectively by the time the surgeon finishes, depending on facts about neuroscience that in 2010 nobody knows.

Eric offered a more straightforward experiment in which he simulated the entire brain.
You complicate the matter by doing partial replacements, but the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens.

-gts

From stathisp at gmail.com Mon Feb 1 14:28:04 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 2 Feb 2010 01:28:04 +1100
Subject: [ExI] The digital nature of brains (was: digital simulations)
In-Reply-To: <452387.36160.qm@web36507.mail.mud.yahoo.com>
References: <452387.36160.qm@web36507.mail.mud.yahoo.com>
Message-ID:

2010/2/2 Gordon Swobe :

>> P: It is possible to make artificial neurons which behave
>> like normal neurons in every way, but lack consciousness.
>
> P = true if we define behavior as you've chosen to define it: the exchange of certain neurotransmitters into the synapses at certain times and other similar exchanges between neurons.

Yes, that would be one aspect of the behaviour that needs to be reproduced.

> I reject as absurd, for example, your theory that a brain the size of Texas constructed of giant neurons made of beer cans and toilet paper will have consciousness merely by virtue of those beer cans squirting neurotransmitters betwixt themselves in the same patterns that natural neurons do.

That is a consequence of functionalism, but at this point functionalism is assumed to be wrong. All we need is artificial neurons that fit inside the head (which excludes structures the size of Texas) and can fool their neighbours into thinking they are normal neurons.

> I also reject, in the first place, your implied assumption that the neuron is necessarily the atomic unit of the brain.

OK, P can be made even more general by replacing "neuron" with "component". The component could be subneuronal in size or a collection of multiple neurons. It just has to behave normally in relation to its neighbours.

>> OK, assuming P is true, what happens to a person's
>> behaviour and to his experiences if the neurons in a part of his
>> brain with an important role in consciousness are replaced with these
>> artificial neurons?
>
> As I explained many times, because your artificial neurons will not help the patient have complete subjective experience,

Yes, that's an essential part of P: no subjective experiences.

> and because experience affects behavior in healthy people, the surgeon will need to keep reprogramming the artificial neurons and most likely replacing and reprogramming other neurons until finally, at long last, he creates a patient that passes the Turing test. But that patient will not have any better quality of consciousness than he started with, and may become far worse off subjectively by the time the surgeon finishes, depending on facts about neuroscience that in 2010 nobody knows.

But how? We agreed that the artificial components BEHAVE NORMALLY. That is their essential feature, apart from lacking consciousness. You remove any normal component whatsoever, drop in the replacement, and the behaviour of the whole brain MUST remain unchanged, or else the replacement component is not as assumed. I can't believe that you don't see this, and after being inconsistent, being disingenuous is the worst sin you can commit in philosophical discussions.

> Eric offered a more straightforward experiment in which he simulated the entire brain.
> You complicate the matter by doing partial replacements, but the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens.

You keep repeating it, but that doesn't make it so. I have assumed that what you are saying is true and tried to show you that it leads to an absurdity, but you respond by saying that if A behaves exactly the same as B then A does not behave exactly the same as B, and carry on as if no-one will notice the problem with this!

-- Stathis Papaioannou

From bbenzai at yahoo.com Mon Feb 1 14:15:33 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Mon, 1 Feb 2010 06:15:33 -0800 (PST)
Subject: [ExI] extropy-chat Digest, Vol 77, Issue 1
In-Reply-To:
Message-ID: <681115.27739.qm@web113615.mail.gq1.yahoo.com>

> From: Gordon Swobe
> To: ExI chat list
> Subject: Re: [ExI] How to ground a symbol
> Message-ID: <589903.82027.qm at web36507.mail.mud.yahoo.com>
> Content-Type: text/plain; charset=iso-8859-1
>
> --- On Sun, 1/31/10, Ben Zaiboc wrote:
>
>> In future, whenever the system sees a rose, it will know
>> whether it's a red rose or not, because there'll be a part
>> of its internal state that matches the symbol "Red".
>
> The system you describe won't really "know" it is red. It
> will merely act as if it knows it is red, no different from,
> say, an automated camera that acts as if it knows the light
> level in the room and automatically adjusts for it.

Please explain what "really knowing" is. I'm at a loss to see how something that acts exactly as if it knows something is red can not actually know that. In fact, I'm at a loss to see how that sentence can even make sense.

You're claiming that something which not only quacks and looks like, but smells like, acts like, sounds like, and is completely indistinguishable down to the molecular level from, a duck, can in fact not be a duck. That if you discover that the processes which give rise to the molecules and their interactions are due to digital information processing, then, suddenly, no duck.

Ben Zaiboc

From bbenzai at yahoo.com Mon Feb 1 14:28:43 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Mon, 1 Feb 2010 06:28:43 -0800 (PST)
Subject: [ExI] multiple realizability
In-Reply-To:
Message-ID: <629922.52960.qm@web113610.mail.gq1.yahoo.com>

Gordon Swobe declared:

> The layer of abstraction does not matter to me.

Well, if that's the case, all your philosophising avails you nothing. At all.

Levels of abstraction are vitally important, and if you dismiss them as irrelevant, you're chucking not just the baby, but the whole universe out with the bathwater. If you honestly think levels of abstraction irrelevant, then everything is just a vast sea of gluons and quarks (or something even lower down), and there is no such thing as matter, planets, stars, water, trees, or people.

If levels of abstraction are irrelevant, you don't exist.

Ben Zaiboc

From hkeithhenson at gmail.com Mon Feb 1 17:28:56 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Mon, 1 Feb 2010 10:28:56 -0700
Subject: [ExI] Glacier Geoengineering
Message-ID:

On Mon, Feb 1, 2010 at 5:00 AM, Alfio Puglisi wrote:
> On Sun, Jan 31, 2010 at 10:52 AM, Keith Henson wrote:
>> The object is to freeze a glacier to bedrock.

snip

> Temperatures at the glacier-bedrock interface can be amazingly high.
> This article talks about bedrock *welding* with temperatures higher than 1,000 Celsius:
>
> http://jgs.lyellcollection.org/cgi/content/abstract/163/3/417
>
> I guess the energy comes from the potential energy of the ice sliding down the terrain.

True. The article makes the point that it happened in a very short time in a small volume, though.

>> This is only enough to take out the heat coming out of the earth. Probably
>> need it somewhat larger to pull the huge masses of ice in a few decades
>> down to a temperature where they would flow much slower.
>
> If one also needs to remove the heat generated gravitationally, this could
> be potentially much larger than just the Earth's heat flux.

Good point. Let's put numbers on it.

Take a square km of ice a km deep. Consider the case of it sliding at 10 m/year down a 10 m/km (1%) slope, i.e., dropping 0.1 m per year. So the energy release would be Mgh: 1000 kg/cubic meter x 10^9 cubic m/cubic km x 9.8 m/s^2 x 0.1 m = 9.8 x 10^11 J. That is released over a year, so divide by the seconds in a year, 3.15 x 10^7, giving ~3.1 x 10^4 watts, which is 31 kW. So for this case of a fairly fast moving glacier, gravity released heat would be about a third of the geo heat. Of course the heat from this motion would stop if the glacier was frozen to the bedrock.

Keith

From jonkc at bellsouth.net Mon Feb 1 17:04:58 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 1 Feb 2010 12:04:58 -0500
Subject: [ExI] How not to make a thought experiment (was: How to ground a symbol)
In-Reply-To: <304772.53589.qm@web36501.mail.mud.yahoo.com>
References: <304772.53589.qm@web36501.mail.mud.yahoo.com>
Message-ID:

On Jan 31, 2010, Gordon Swobe wrote:

> Let me know what you think.
> http://www.mind.ilstu.edu/curriculum/searle_chinese_room/searle_robot_reply.php

More of the same. You ask us to imagine a room too large to fit into the observable universe and then say that it acts intelligently but "obviously" it doesn't understand anything. You just refuse to consider two possibilities:

1) That you don't fully understand understanding as well as you think you do.

2) That even if you don't understand how it could understand, the room could still understand.

In fact if Darwin is right (and there is an astronomical amount of evidence that he is) then that room MUST have consciousness despite your or my lack of comprehension of the mechanics of it all. And even if Darwin is not right, every one of your arguments against consciousness existing in a robot could just as easily be used to argue against consciousness existing in your fellow human beings; but for some reason you seem unenthusiastic about pursuing that line of thought.

John K Clark

From stefano.vaj at gmail.com Mon Feb 1 18:13:12 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 1 Feb 2010 19:13:12 +0100
Subject: [ExI] Understanding is useless
In-Reply-To: <165704.91501.qm@web36502.mail.mud.yahoo.com>
References: <65FB1FA7-9C42-47BB-A32A-5B9B2C771FF9@bellsouth.net> <165704.91501.qm@web36502.mail.mud.yahoo.com>
Message-ID: <580930c21002011013m7eb5e8b8r483ea67c719b304f@mail.gmail.com>

On 29 January 2010 20:25, Gordon Swobe wrote:
> Some people here might even call me a chauvinist of sorts for daring to claim that computers don't understand their own words. I suppose typewriters and cell phones should have civil rights too.

Why, do you suggest that unconscious human beings should lose their own?
;-)

-- Stefano Vaj

From eric at m056832107.syzygy.com Mon Feb 1 18:14:30 2010
From: eric at m056832107.syzygy.com (Eric Messick)
Date: 1 Feb 2010 18:14:30 -0000
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com>
References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <20100129192646.5.qmail@syzygy.com> <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com>
Message-ID: <20100201181430.5.qmail@syzygy.com>

Stefano:
>Eric:
>> Meaning is attached to word symbols when the word symbols are
>> associated with sense symbols, not with other word symbols.
>
>Not all symbols are words - and in fact the word "three" can be
>associated with the number "3" - but "sense symbols" sounds like a
>dubious and redundant concept.

I should probably explain what I mean by the phrase "sense symbols".

As a brain thinks, we can consider it as activating and processing sequences of sets of symbols. This is analogous to a CPU having various bit patterns active on internal busses, with the bit patterns representing symbols.

Some of the symbols in the brain map 1 to 1 with words in a spoken language, and we would refer to them as word symbols. Other brain symbols appear within the brain as a direct result of the stimulation of sensory neurons in the body, and this is what I mean by a "sense symbol". It's basically the internal representation of directly sensed external events.

Actually, I was partially mistaken in saying that meaning cannot be attached to a word by association with other words. A definition could associate a new word with a set of old words, and if all of the old words have meanings (by being grounded or by association) the new one can acquire meaning as well.

-eric

From Frankmac at ripco.com Mon Feb 1 18:54:46 2010
From: Frankmac at ripco.com (Frank McElligott)
Date: Mon, 1 Feb 2010 13:54:46 -0500
Subject: [ExI] war is peace
Message-ID: <003f01caa370$0a889bb0$ad753644@sx28047db9d36c>

The largest exporter of oil is Russia, more than the Saudis. Yet there was only one bidder; my my, a dream auction. Is this the real world we live in, or are we back in the days of Sherwood Forest with Robin Hood and the Sheriff?

The Wizard of Russia
By Michael Bohm

Michael Bohm is opinion page editor of The Moscow Times.

A year after former Yukos CEO Mikhail Khodorkovsky was arrested on fraud charges, Baikal Finance Group - a mysterious company with a share capital of only 10,000 rubles ($330) - acquired Yukos' largest subsidiary, Yuganskneftegaz, for $9.3 billion in an "auction" consisting of only one bidder. After Yuganskneftegaz was sold four days later to state-controlled Rosneft, Andrei Illarionov, economic adviser to then-President Vladimir Putin, called the state expropriation of Yukos "the Biggest Scam of the Year" in his annual year-end list of Russia's worst events.

When Illarionov announced his 2009 list in late December, he should have added another award and given it to Putin: "the Best PR Project of the Decade." The Yukos scam was "legal nihilism" par excellence, but most Russians have a completely different version of the event. The Kremlin's 180-degree PR spin on the Yukos nationalization should be a case study for any nation aspiring to create a Ministry of Truth. As Putin explained in his December call-in show, the Yukos affair was not government expropriation at all, but a way to give money that Yukos "stole from the people" back to the people by helping them buy new homes and repair old ones.
Putin, it turns out, is also Russia's Robin Hood. War is peace. Ignorance is strength.

Oh, by the way, Obama's jobs program is going to cost $100 billion - again, another Robin Hood. :)

Frank

From jonkc at bellsouth.net Mon Feb 1 21:59:53 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 1 Feb 2010 16:59:53 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <394481.10295.qm@web36506.mail.mud.yahoo.com>
References: <394481.10295.qm@web36506.mail.mud.yahoo.com>
Message-ID: <1CC02E2F-6A82-4B99-A0B4-39BF26253BEC@bellsouth.net>

On Jan 31, 2010, Gordon Swobe wrote:

> digital models of human brains will have the real properties of natural brains if and only if natural brains already exist as digital objects

You've said that before, and when you did I said brains are not important, minds are; and minds are digital although they are not objects. To save time and avoid needless wear and tear on electrons, the next time you have the urge to repeat that same remark yet again let's adopt the convention of you just saying "41", and my retort to your remark will be "42".

> Philosophers of mind don't care much about how "useful" it may seem.

And that's why philosophers of mind have never produced anything useful and probably never will; computer programmers have, mathematicians have, but philosophers of mind not so much.

> They do care if it has a mind capable of having conscious intentional states:

Unfortunately that is all philosophers of mind care about; if they spent just a little time considering what the mind in question actually does, regardless of what "intentional state" it is in, they would be much more successful. If they spent time taking a high school biology class they would be even better off. But they dislike getting their hands dirty conducting experiments other than the thought kind, and considering actual evidence is even more disagreeable to them. Darwin contributed astronomically more to understanding what the mind is than any philosopher of mind who ever lived. And these two-bit philosophers act as if they've never heard of him; they deserve our contempt.

> Stathis. Looks like you want to skirt the issue by asserting that the system understands things that the man, *considered as the system*, does not understand.

Some might think that it was outrageous enough to propose a thought experiment that contained a room larger than the observable universe, one that operated so slowly that the 13.7 billion year age of the universe is not nearly enough time for it to complete a single action, and then to confidently proclaim exactly what this bizarre amalgamation can and cannot understand; but no, Searle was just getting warmed up. Calling his next step ridiculous doesn't capture its true nature; it's more like ridiculous to the ridiculous power. Piling absurdity on top of absurdity, he now wants us to think about a "man" who has "internalized" this contraption that is far too large and far too slow to fit in our universe. I don't know what sort of entity could do that, and I would be a fool to claim to know what that vastly improbable something could and couldn't do; and so would you, and so would Searle. I do know one thing: whatever it is, you can bet your life that it isn't a man.

> The system you describe won't really "know" it is red. It will merely act as if it knows it is red

Einstein didn't understand physics; he just acted like he understood physics.
Tiger Woods didn't understand how to play golf; he just acted like he understood how to play golf. I've said it before and I'll say it again: understanding is useless!

John K Clark

From thespike at satx.rr.com Mon Feb 1 23:47:10 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 01 Feb 2010 17:47:10 -0600
Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism"
Message-ID: <4B6767FE.8080304@satx.rr.com>

http://www.foxnews.com/story/0,2933,584500,00.html

* Witches, Druids and pagans rejoice! The Air Force Academy in Colorado is about to recognize its first Wiccan prayer circle, a Stonehenge on the Rockies that will serve as an outdoor place of worship for the academy's neo-pagans.*

Wiccan cadets and officers on the Colorado Springs base have been convening for over a decade, but the school will officially dedicate a newly built circle of stones on about March 10, putting the outdoor sanctuary on an equal footing with the Protestant, Catholic, Jewish and Buddhist chapels on the base.

"When I first arrived here, Earth-centered cadets didn't have anywhere to call home," said Sgt. Robert Longcrier, the lay leader of the neo-pagan groups on base. "Now, they meet every Monday night, they get to go on retreats, and they have a stone circle."

Academy officials had no tally of the number of Wiccan cadets at the school of 4,500, but said they had been angling to set up a proper space since the academic year began.

"That's one of the newer groups," said John Van Winkle, a spokesman for the academy. "They've had a worship circle on base for some time and we're looking to get them an official one."

The Air Force recognizes several distinct forms of neo-paganism, including Dianic Wicca, Seax Wicca, Gardnerian Wicca, shamanism and Druidism, according to Pagan groups that track the information.

It isn't nearly as comprehensive when it comes to sects within other religions. The academy still does not recognize, for instance, the massive gulfs between Catholics with guilt problems and those without; or the distinct practices of Jews who keep kosher, those who eat bacon, and those who secretly wish they could.

Since a 2004 survey of cadets on the base revealed dozens of instances of harassment and intolerance, superintendent Michael Gould has made religious tolerance a priority. Yet Van Winkle, the academy spokesman, said he could not confirm whether the school's superintendent or senior staff would attend the dedication ceremony.

"(We) haven't gotten that far yet: First we have to get a date, and then once we get a date for the dedication ceremony we'll see who's going to be available for it," he told FoxNews.com. "Once we get a date that's going to be the real driving force for who's going to attend."

From msd001 at gmail.com Tue Feb 2 03:46:15 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Mon, 1 Feb 2010 22:46:15 -0500
Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism"
In-Reply-To: <4B6767FE.8080304@satx.rr.com>
References: <4B6767FE.8080304@satx.rr.com>
Message-ID: <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com>

On Mon, Feb 1, 2010 at 6:47 PM, Damien Broderick wrote:
> "When I first arrived here, Earth-centered cadets didn't have anywhere to
> call home," said Sgt. Robert Longcrier, the lay leader of the neo-pagan
> groups on base.

Earth-centered cadets... didn't have anywhere... to call home. Is this in comparison to space cadets?
Or is it illustrating a problem with location or availability of communications equipment? Or maybe it's about the alienation of earth-centered cadets feeling isolated... from their earthican center?

> "Now, they meet every Monday night, they get to go on retreats, and they
> have a stone circle."

On Monday night, the earth-centered cadets go on retreats to a stone circle?

> Academy officials had no tally of the number of Wiccan cadets at the school
> of 4,500, but said they had been angling to set up a proper space since the
> academic year began.

I can tell you the "angling" of a stone circle is 360 degrees, no matter how many bi-sects there are. Or maybe while on their Monday night retreats they go fishing? I'm not sure how that is a productive way to get work done.

> "That's one of the newer groups," said John Van Winkle, a spokesman for the
> academy. "They've had a worship circle on base for some time and we're
> looking to get them an official one."

What criteria are used to make a circle official? Is a qualified someone going to measure diameter and circumference to a high degree of precision before making a declaration?

> The Air Force recognizes several distinct forms of neo-paganism, including
> Dianic Wicca, Seax Wicca, Gardnerian Wicca, shamanism and Druidism,
> according to Pagan groups that track the information.

That's pretty impressive considering most of the time members of these groups can hardly recognize each other.

> It isn't nearly as comprehensive when it comes to sects within other
> religions. The academy still does not recognize, for instance, the massive
> gulfs between Catholics with guilt problems and those without; or the
> distinct practices of Jews who keep kosher, those who eat bacon, and those
> who secretly wish they could.

And what would these groups be "officially" recognized as: Whole Guilt Catholics vs Skim Catholics, or Bacon Jews vs Fakin Bacon Jews?

> "(We) haven't gotten that far yet: First we have to get a date, and then
> once we get a date for the dedication ceremony we'll see who's going to be
> available for it," he told FoxNews.com.
>
> "Once we get a date that's going to be the real driving force for who's
> going to attend."

Much like high school students deciding if they'll attend a sophomore prom... I wonder if we could get the Air Force to recognize our Holy HotTub-based religion and declare an officially sanctioned meeting place on base? You know, to be fair and completely "tolerant."

From thespike at satx.rr.com Tue Feb 2 03:54:15 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 01 Feb 2010 21:54:15 -0600
Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism"
In-Reply-To: <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com>
References: <4B6767FE.8080304@satx.rr.com> <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com>
Message-ID: <4B67A1E7.9020905@satx.rr.com>

On 2/1/2010 9:46 PM, Mike Dougherty quoth:

> said John Van Winkle

did make me wonder about a leg-pull... but he pops up on Google more than once.

From moulton at moulton.com Tue Feb 2 05:42:01 2010
From: moulton at moulton.com (moulton at moulton.com)
Date: 2 Feb 2010 05:42:01 -0000
Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism"
Message-ID: <20100202054201.55588.qmail@moulton.com>

Here is some background info:
http://www.nytimes.com/2005/06/04/national/04airforce.html
http://www.militaryreligiousfreedom.org/

It looks like things are getting better than they were a few years ago.
From gts_2000 at yahoo.com Tue Feb 2 14:43:10 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Tue, 2 Feb 2010 06:43:10 -0800 (PST)
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <20100201181430.5.qmail@syzygy.com>
Message-ID: <969214.43756.qm@web36506.mail.mud.yahoo.com>

--- On Mon, 2/1/10, Eric Messick wrote:

> Actually, I was partially mistaken in saying that meaning
> cannot be attached to a word by association with other words.

I think you make an excellent observation here, Eric. The mere association of a symbol to another symbol does not give either symbol meaning. Symbols have derived intentionality, whereas people who use symbols have intrinsic intentionality. I'll try to explain what I mean...

Compare:

1) Jack means that the moon orbits the earth.
2) The word "moon" means a large object that orbits the earth.

In the scene described in 1), Jack means something by the symbol "moon". He has intrinsic intentionality. He has a conscious mental state in which *he means* to communicate something about the moon.

In sentence 2), we (English speakers of the human species) attribute intentionality to the symbol "moon", as if the symbol itself has a conscious mental state similar to the one Jack had in 1). We imagine for the sake of convenience that symbols mean to say things about themselves. We often speak of words and other symbols this way, treating them as if they have conscious mental states, as if they really do mean to tell us what they mean. We anthropomorphize our language.

The above might seem blindingly obvious (I hope so) but it has bearing on the symbol grounding question. Symbols have meaning only in the minds of conscious agents; that is, the apparent intentionality of words is derived from conscious intentional agents who actually do the meaning.

-gts

From bbenzai at yahoo.com Tue Feb 2 15:08:38 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Tue, 2 Feb 2010 07:08:38 -0800 (PST)
Subject: [ExI] Glacier Geoengineering
In-Reply-To:
Message-ID: <847034.14281.qm@web113617.mail.gq1.yahoo.com>

I need to ask a question here; please indulge me if the answer should be obvious: What's the point of sticking glaciers to their bedrock?

Also, if you're going to build up stupendous amounts of potential energy like this, you'd better have a good scheme for dealing with it when it finally breaks loose. Hm, maybe not. The frozen-to-bedrock layer will just become the new bedrock, and you'll be back to square one, surely?

Ben Zaiboc

From bbenzai at yahoo.com Tue Feb 2 15:10:42 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Tue, 2 Feb 2010 07:10:42 -0800 (PST)
Subject: [ExI] The digital nature of brains
In-Reply-To:
Message-ID: <263261.91221.qm@web113605.mail.gq1.yahoo.com>

Gordon wrote:

"the principles that drive my arguments remain the same: formal programs do not have or cause minds. If they did, the computer in front of you this very moment would have a mind and would perhaps be entitled to vote like other citizens"

This is a good example of a "straw man" argument. You are misrepresenting the claim that some formal programs can cause minds as a claim that *all* formal programs *must* cause minds. This is (or should be) obvious nonsense.

As many people now have said, directly and indirectly, many times, it's not the 'formal programness' that's important. That is completely irrelevant. What's important is information processing of a particular kind.
This could be implemented by a biological system, an electronic or electromechanical system, a purely chemical system, a nanomechanical system or indeed by a massive array of beer cans and string. The fact that you find beer cans and string an unlikely substrate for intelligence is beside the point (I find it unlikely too, but for entirely different reasons, to do with practicality, not theoretical possibility).

These 'formal programs' that you keep going on about are just one subset among a large set of possible information processing systems that can give rise to minds, if set up and run in the right way.

Ben Zaiboc

From stefano.vaj at gmail.com Tue Feb 2 18:57:38 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 2 Feb 2010 19:57:38 +0100
Subject: [ExI] 1984 and Brave New World
In-Reply-To: <12411.23612.qm@web27003.mail.ukl.yahoo.com>
References: <12411.23612.qm@web27003.mail.ukl.yahoo.com>
Message-ID: <580930c21002021057x1cb3d14ai15b3cdafd0d9282a@mail.gmail.com>

On 31 January 2010 16:09, Tom Nowell wrote:
> Brave New World reflects the utopian thinking of those who believed a technocratic elite could bestow happiness for all, and its focus on biological engineering of people and society reflects the early 20th century eugenicists. In a time when people were publicly advocating the sterilisation of undesirable types, and where people were using dubious biology to push forward their own political views, Huxley warns us of one way in which this could end up.

Mmhhh. Where is the "warning"? Huxley does seem to see the Brave New World as the unavoidable destination of the societal goals worth pursuing.

And where is "eugenics", at least in a transhumanist sense? The different castes of BNW are kept as stable as possible; no effort to improve, enhance or change their genetic makeup is in place.

-- Stefano Vaj

From gts_2000 at yahoo.com Tue Feb 2 23:44:33 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Tue, 2 Feb 2010 15:44:33 -0800 (PST)
Subject: [ExI] meaning & symbols
In-Reply-To:
Message-ID: <816357.45313.qm@web36501.mail.mud.yahoo.com>

--- On Tue, 2/2/10, Spencer Campbell wrote:

> According to Eric, association is the sole factor giving
> many symbols meaning in human minds. The only prerequisite is that at
> least one symbol in the web has meaning intrinsically; that is to
> say, it is a sense symbol. Meaning can effectively be shared between
> symbols, and is not diluted in the process.

I think I misread Eric's sentence. Thanks for pointing that out.

In any case I do not believe there exists any such thing as a "sense symbol".

Organisms with highly developed nervous systems create and ponder mental abstractions, aka symbols, about sense data and about other abstractions.

Simple organisms on the order of, say, fleas have eyes and other sense organs, so it seems likely that they have awareness of sense data. But because they lack a well developed nervous system it seems very improbable to me that they can do much in the way of forming symbols to represent that data.

I also do not believe any symbol of any kind can have "intrinsic meaning". Meaning always arises in the context of a conscious mind. X means Y only according to some conscious Z. In casual conversation we sometimes speak about words as if they mean something, but they do not actually mean anything. Conscious agents mean things and they use words to convey their meanings.
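For what it's worth, the association model that Spencer attributes to Eric can be stated precisely enough to run. A minimal sketch in Python, with a hypothetical toy vocabulary: only the SENSE entries are treated as grounded, and meaning spreads along association links without dilution. Whether the SENSE entries deserve that privileged status is exactly the point disputed above.

    # Symbol web: each symbol lists the symbols it is associated with.
    associations = {
        "dog": ["SENSE:dog-image"],          # grounded directly in sense data
        "canine": ["dog"],                   # a new word defined by an old word
        "unicorn": ["horse", "horn"],
        "horse": ["SENSE:horse-image"],
        "horn": ["SENSE:horn-image"],
        "qwl": ["zrf"], "zrf": ["qwl"],      # a closed loop with no grounding
    }

    def grounded(symbol, seen=()):
        """A symbol is meaningful if some association chain reaches sense data."""
        if symbol.startswith("SENSE:"):
            return True
        return any(grounded(s, seen + (symbol,))
                   for s in associations.get(symbol, []) if s not in seen)

    for word in ("canine", "unicorn", "qwl"):
        print(word, grounded(word))   # canine True, unicorn True, qwl False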
-gts

From possiblepaths2050 at gmail.com Wed Feb 3 01:01:25 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Tue, 2 Feb 2010 18:01:25 -0700
Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism"
In-Reply-To: <20100202054201.55588.qmail@moulton.com>
References: <20100202054201.55588.qmail@moulton.com>
Message-ID: <2d6187671002021701v18de857bs184fa3e66265410f@mail.gmail.com>

I hope the Air Force Academy got a handle on the serious rape problem they had, not so many years ago.

John

On Mon, Feb 1, 2010 at 10:42 PM, wrote:
> Here is some background info:
> http://www.nytimes.com/2005/06/04/national/04airforce.html
> http://www.militaryreligiousfreedom.org/
>
> It looks like things are getting better than they were a few years ago.

From gts_2000 at yahoo.com Wed Feb 3 02:32:09 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Tue, 2 Feb 2010 18:32:09 -0800 (PST)
Subject: [ExI] meaning & symbols
In-Reply-To:
Message-ID: <230474.84371.qm@web36502.mail.mud.yahoo.com>

--- On Tue, 2/2/10, Spencer Campbell wrote:

> In the second paragraph I almost jumped on you again for
> misusing the concept of abstraction, but then I noticed you said "and
> about other" rather than "and other". You weren't saying that sense data
> are abstractions, if I understand correctly.

Right.

> When we get to the third paragraph, however, it sounds as
> if you believe that mankind discovered symbols rather than
> invented them.

No. Not sure why you would say that. I certainly do not believe we discover symbols. We create them.

> Here's the thing: the very idea of a symbol is, in and of
> itself, an abstraction. I suspect it's possible to form a
> coherent model of the mind (by today's standards) without ever
> mentioning symbols or anything like them. It may not be a particularly
> elegant model, but it would work as well as any other.

No matter what you may choose to call them, people do speak and understand word-symbols. The human mind thus "has semantics" and any coherent model of it must explain that plain fact.

The computationalist model fails to explain that plain fact. On that model minds do no more than run programs, and programs do no more than manipulate symbols according to rules of syntax. Nothing in the model explains how the mind can have conscious understanding of the symbols it manipulates. To make the model coherent, its proponents must introduce a homunculus: an observer/user of the supposed brain/computer who sees and understands the meanings of the symbols. But the homunculus fallacy proves fatal to the theory: how does the homunculus understand the symbols, if not by some means other than computation? And if that's so, then why did we say the mind exists as a computer in the first place?

> You're correct in saying that sense symbols do not exist,
> but only insofar as there aren't any symbols which DO exist.

Hmm, I count 21 word-symbols in that sentence of yours.

> All I meant by "intrinsic" meaning was that some symbols in
> the field of all available within a given Z are meaningful
> irrespective of any other symbols. Eric explains that this is so
> because they are invoked directly by incoming sensory data: I see
> a dog, I think a dog symbol.
Yes, but when I look inside your head I see nothing even remotely resembling a digital computer. Instead I see a marvelous product of biological evolution.

-gts

From pharos at gmail.com Wed Feb 3 09:04:36 2010
From: pharos at gmail.com (BillK)
Date: Wed, 3 Feb 2010 09:04:36 +0000
Subject: [ExI] meaning & symbols
In-Reply-To: <230474.84371.qm@web36502.mail.mud.yahoo.com>
References: <230474.84371.qm@web36502.mail.mud.yahoo.com>
Message-ID:

On 2/3/10, Gordon Swobe wrote:
> Yes, but when I look inside your head I see nothing even remotely
> resembling a digital computer. Instead I see a marvelous product
> of biological evolution.

Yes. You have told us all at great length that you very strongly believe that only human brains (and other things which must be almost identical to human brains) can do the magic human 'consciousness' thing.

That's fine, you are allowed to believe anything you like, but it is only a belief that you cannot 'prove' is correct. And much reasoning has been produced to show that it is probably a mistaken belief.

We shall just have to wait until weak AI computers develop (probably using new designs and different programming techniques) into machines that apparently have strong AI, are more intelligent than humans and have a type of 'consciousness'.

When these machines are out exploring the universe and reporting back to the remaining humans trapped on earth who are being looked after by similar intelligent machines, I fully expect you to say 'But they're not *really* conscious'.

BillK

From stefano.vaj at gmail.com Wed Feb 3 13:04:44 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Wed, 3 Feb 2010 14:04:44 +0100
Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism"
In-Reply-To: <4B6767FE.8080304@satx.rr.com>
References: <4B6767FE.8080304@satx.rr.com>
Message-ID: <580930c21002030504m701fe685k6f0e955f191467b5@mail.gmail.com>

On 2 February 2010 00:47, Damien Broderick wrote:
> http://www.foxnews.com/story/0,2933,584500,00.html
>
> * Witches, Druids and pagans rejoice! The Air Force Academy in Colorado is
> about to recognize its first Wiccan prayer circle, a Stonehenge on the
> Rockies that will serve as an outdoor place of worship for the academy's
> neo-pagans.*

Good news... Even though I am somewhat diffident of the orthodoxy of US neopagans. ;-)

-- Stefano Vaj

From stefano.vaj at gmail.com Wed Feb 3 13:09:05 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Wed, 3 Feb 2010 14:09:05 +0100
Subject: [ExI] Religious idiocy (was: digital nature of brains)
In-Reply-To: <20100201181430.5.qmail@syzygy.com>
References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <20100129192646.5.qmail@syzygy.com> <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com> <20100201181430.5.qmail@syzygy.com>
Message-ID: <580930c21002030509m60aebf03s28104ee60540978b@mail.gmail.com>

On 1 February 2010 19:14, Eric Messick wrote:
> Some of the symbols in the brain map 1 to 1 with words in a spoken
> language, and we would refer to them as word symbols. Other brain
> symbols appear within the brain as a direct result of the stimulation
> of sensory neurons in the body, and this is what I mean by a "sense
> symbol".

So, is the integer "3" a word symbol or a sense symbol? And what about the ASCII decoding of a byte? Or the rasterisation of the ASCII symbol? And what difference, exactly, would it make?
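Stefano's question about the ASCII decoding of a byte has a concrete analogue in any computer. A minimal sketch in Python: the same eight bits read as an integer, as a character, as a numeral, or as a row of pixels, with the "kind" of symbol living entirely in the reader rather than in the bits.

    byte = 0b00110011        # one pattern of eight bits

    print(byte)              # read as an integer: 51
    print(chr(byte))         # read as an ASCII code: the character "3"
    print(int(chr(byte)))    # read as a numeral naming the number 3
    # read as one row of a 1-bit image: a pattern of on/off pixels
    print("".join("#" if byte & (1 << i) else "." for i in range(7, -1, -1)))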
-- Stefano Vaj

From stefano.vaj at gmail.com Wed Feb 3 13:15:28 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Wed, 3 Feb 2010 14:15:28 +0100
Subject: [ExI] war is peace
In-Reply-To: <003f01caa370$0a889bb0$ad753644@sx28047db9d36c>
References: <003f01caa370$0a889bb0$ad753644@sx28047db9d36c>
Message-ID: <580930c21002030515y34e2ec6ag9742e901071242b4@mail.gmail.com>

2010/2/1 Frank McElligott
> The Yukos scam was "legal nihilism" par excellence, but most Russians have
> a completely different version of the event. The Kremlin's 180-degree PR
> spin on the Yukos nationalization should be a case study for any nation
> aspiring to create a Ministry of Truth. As Putin explained in his December
> call-in show, the Yukos affair was not government expropriation at all, but
> a way to give money that Yukos "stole from the people" back to the people by
> helping them buy new homes and repair old ones. Putin, it turns out, is also
> Russia's Robin Hood. War is peace. Ignorance is strength.

I am really confused. Do you maintain that they were wrong to change their views? Or that they should have left Yukos in the hands into which it had fallen? And why?

-- Stefano Vaj

From bbenzai at yahoo.com Wed Feb 3 13:52:04 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Wed, 3 Feb 2010 05:52:04 -0800 (PST)
Subject: [ExI] meaning & symbols
In-Reply-To:
Message-ID: <910459.5460.qm@web113613.mail.gq1.yahoo.com>

Gordon Swobe wrote:

> I do not believe there exists any such thing as
> a "sense symbol".
>
> Organisms with highly developed nervous systems create and
> ponder mental abstractions, aka symbols, about sense data
> and about other abstractions.
>
> Simple organisms on the order of, say, fleas have eyes and
> other sense organs, so it seems likely that they have
> awareness of sense data. But because they lack a well
> developed nervous system it seems very improbable to me that
> they can do much in the way of forming symbols to represent
> that data.

OK, obviously this word 'symbol' needs some clear definition.

I would use the word to mean any distinct pattern of neural activity that has a relationship with other such patterns. In that sense, sensory symbols exist, as do (visual) word symbols, (auditory) word symbols, concept symbols, which are a higher-level abstraction from the above three types, and hundreds of other types of 'symbol', representing all the different patterns of neural activity that can be regarded as coherent units, like emotional states, memories, linguistic units (nouns, verbs, etc.), and their higher-level 'chunks' (birdness, the concept of fluidity, etc.), and so on.

But that's just me. Maybe I'm overstretching the use of the word.

What do other people mean by the word 'symbol', in this context?

Gordon points out that they are all meaningless in themselves, only taking on a meaning in the context of a system that can be called a conscious mind.

I'm not sure if the 'conscious' part is necessary, though. In any event, the 'meaning' arises as a result of the interaction of the symbols, grounded in the system's interaction with its environment.

To say that an organism's 'hunger', which results in it finding and consuming food, is meaningless unless the organism is conscious, is rather a silly statement, and calls into question what we mean by 'meaning'.
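Ben's closing example is easy to mechanise. A minimal sketch in Python, with a one-variable agent (all numbers hypothetical): the internal 'hunger' state reliably mediates between the body's energy level and food-seeking behaviour, and nothing in the loop requires consciousness.

    import random

    random.seed(1)   # deterministic run, for illustration
    energy = 5.0
    for step in range(10):
        energy -= 1.0                    # metabolism drains energy
        hungry = energy < 3.0            # the internal "hunger" state
        if hungry:
            if random.random() < 0.7:    # forage, sometimes successfully
                energy += 4.0            # eat
        print(step, round(energy, 1), "hungry" if hungry else "sated")

Whether 'hunger' here has meaning, or only has meaning to us watching it, is of course the very question in dispute.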
Ben Zaiboc From jonkc at bellsouth.net Wed Feb 3 15:32:24 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Feb 2010 10:32:24 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <969214.43756.qm@web36506.mail.mud.yahoo.com> References: <969214.43756.qm@web36506.mail.mud.yahoo.com> Message-ID: On Feb 2, 2010, Gordon Swobe wrote: > The mere association of a symbol to another symbol does not give either symbol meaning. > Symbols have derived intentionality, whereas people who use symbols have intrinsic intentionality. Broken down to its smallest component parts, a symbol is something that consistently and systematically changes the state of the symbol reader. A Turing Machine does this when it encounters a zero or a one, and a punch card reader does this when it encounters a hole. You demand an explanation of human-style intentionality and say, correctly, that the examples I cite are far less complex and awe-inspiring, but if they were just as mysterious they wouldn't be doing their job. I honestly don't know what you want: you say you want an explanation, but when one is provided and it's split into parts small enough to comprehend you say I understand that so it can't be the explanation. Your retort is always I don't understand that or I do understand that so "obviously" that can't be right. Even in theory I don't see how any explanation would satisfy you. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Feb 3 16:13:18 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Feb 2010 11:13:18 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <230474.84371.qm@web36502.mail.mud.yahoo.com> References: <230474.84371.qm@web36502.mail.mud.yahoo.com> Message-ID: On Feb 2, 2010, at 9:32 PM, Gordon Swobe wrote: > The human mind thus "has semantics" and any coherent model of it must explain that plain fact. But before that can happen you must explain what you mean by explain. > > The computationalist model fails to explain that plain fact. It explains it beautifully according to my understanding of the word. You want a theory that is simultaneously completely understandable and utterly mysterious, so naturally you have been disappointed. > On that model minds do no more than run programs and programs do no more than manipulate symbols according to rules of syntax. Correct me if I'm wrong but I seem to think you may have said something along those lines before, and I think I even remember people bringing up very good counter arguments against that argument that you have steadfastly ignored. > Nothing in the model explains how the mind can have conscious understanding of the symbols it manipulates. True, they are not comprehensible and incomprehensible at the same time. > I look inside your head I see nothing even remotely resembling a digital computer. Then why are people spending hundreds of millions of dollars building digital computers that simulate larger and larger chunks of neurons? > Instead I see a marvelous product of biological evolution. How can you dare use the word "Evolution"!? YOUR VIEWS ARE 100% INCOMPATIBLE WITH EVOLUTION! John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ablainey at aol.com Wed Feb 3 19:54:58 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Wed, 03 Feb 2010 14:54:58 -0500 Subject: [ExI] meaning & symbols In-Reply-To: <910459.5460.qm@web113613.mail.gq1.yahoo.com> Message-ID: <8CC7321E5DB89A5-D54-12CB@webmail-d081.sysops.aol.com> -----Original Message----- Ben Zaiboc wrote >OK obviously this word 'symbol' needs some clear definition. >I would use the word to mean any distinct pattern of neural activity that has a >relationship with other such patterns. In that sense, sensory symbols exist, as >do (visual) word symbols, (auditory) word symbols, concept symbols, which are a >higher-level abstraction from the above three types, and hundreds of other types >of 'symbol', representing all the different patterns of neural activity that can >be regarded as coherent units, like emotional states, memories, linguistic units >(nouns, verbs, etc.), and their higher-level 'chunks' (birdness, the concept of >fluidity, etc.), and so on. > >But that's just me. Maybe I'm overstretching the use of the word. > >What do other people mean by the word 'symbol', in this context? > >Gordon points out that they are all meaningless in themselves, only taking on a >meaning in the context of a system that can be called a conscious mind. > >I'm not sure if the 'conscious' part is necessary, though. In any event, the >'meaning' arises as a result of the interaction of the symbols, grounded in the >system's interaction with its environment. > >To say that an organism's 'hunger', which results in it finding and consuming >food, is meaningless unless the organism is conscious, is rather a silly >statement, and calls into question what we mean by 'meaning'. > >Ben Zaiboc I agree. The problem is that we are using linguistic symbols to which we give our own personal meaning to debate a system that we do not fully understand and of which we cannot effectively articulate our personal view. I would go along with the notion that there are sense symbols and many other kinds. So in that context of "symbols" I don't think consciousness is necessary. Certainly not at a self-awareness level. Does this exclude intelligence? I think our definitions need some tweaking. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Wed Feb 3 22:05:18 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 3 Feb 2010 14:05:18 -0800 (PST) Subject: [ExI] meaning & symbols In-Reply-To: Message-ID: <837227.95744.qm@web36508.mail.mud.yahoo.com> --- On Wed, 2/3/10, BillK wrote: > You have told us all at great length that you very > strongly believe that only human brains (and other things which must > be almost identical to human brains) can do the magic human > 'consciousness' thing. I have no interest in magic. I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents (semantics). Idealistic dreamers here on ExI take offense. Sorry about that. -gts From gts_2000 at yahoo.com Wed Feb 3 23:08:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 3 Feb 2010 15:08:34 -0800 (PST) Subject: [ExI] meaning & symbols In-Reply-To: Message-ID: <642330.6805.qm@web36501.mail.mud.yahoo.com> --- On Wed, 2/3/10, John Clark wrote: > Broken down to its smallest component parts, a symbol is something that > consistently and systematically changes the state of the symbol reader. Many things aside from symbols can consistently and systematically change the state of the symbol reader.
This wikipedia definition seems better: "A symbol is something such as an object, picture, written word, sound, or particular mark that represents something else by association, resemblance, or convention." -gts From hkeithhenson at gmail.com Thu Feb 4 00:26:16 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 3 Feb 2010 17:26:16 -0700 Subject: [ExI] Glacier Geoengineering Message-ID: On Wed, Feb 3, 2010 at 5:00 AM, Ben Zaiboc wrote: > I need to ask a question here, please indulge me if the answer should be obvious: > > What's the point of sticking glaciers to their bedrock? To slow them down. That way they don't run off into the sea or down to lower altitudes where they melt. > Also, if you're going to build up stupendous amounts of potential energy like this, you'd better have a good scheme for dealing with it when it finally breaks loose. > > Hm, maybe not. The frozen-to-bedrock layer will just become the new bedrock, and you'll be back to square one, surely? No, they will still move, but much slower. Ice is like cold tar and the colder you get it the slower it moves. Keith From ddraig at gmail.com Tue Feb 2 06:03:23 2010 From: ddraig at gmail.com (ddraig) Date: Tue, 2 Feb 2010 17:03:23 +1100 Subject: [ExI] US "Air Force recognizes several distinct forms of neo-paganism" In-Reply-To: <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com> References: <4B6767FE.8080304@satx.rr.com> <62c14241002011946p46d7c02awa01259e1695b244c@mail.gmail.com> Message-ID: On 2 February 2010 14:46, Mike Dougherty wrote: > I can tell you the "angling" of a stone circle is 360 degrees, no > matter how many bisects there are. Maybe the stones lean inwards? Or outwards? Maybe the whole thing is designed to be in a slow process of collapse, until you end with something looking like a stone circle made of dominos? >> "That's one of the newer groups," said John Van Winkle, a spokesman for the >> academy. "They've had a worship circle on base for some time and we're >> looking to get them an official one." > What criteria is used to make a circle official? Circles are, officially, circular, I believe. > That's pretty impressive considering most of the time members of these > groups can hardly recognize each other. Sure they can. They are fat, and dress in black. > I wonder if we could get the Air Force to recognize our Holy > HotTub-based religion and declare an officially sanctioned meeting > place on base? You know, to be fair and completely "tolerant." Works for me. Is the HPS cute? Dwayne -- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e... http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From lacertilian at gmail.com Mon Feb 1 00:54:57 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 31 Jan 2010 16:54:57 -0800 Subject: [ExI] How to ground a symbol In-Reply-To: <975270.46265.qm@web36504.mail.mud.yahoo.com> References: <20100131230539.5.qmail@syzygy.com> <975270.46265.qm@web36504.mail.mud.yahoo.com> Message-ID: Gordon Swobe : >Eric Messick : >> The animations and other text at the site all indicate that >> this is the type of processing going on in Chinese rooms. > > This kind of processing goes on in every software/hardware system. No, it doesn't. That's only the result of the processing. I went over this before. The processing itself is so spectacularly more fine-grained that thinking about it as an "if this input, then this output" rule is outright fallacious.
Yes, you put that input in; yes, you get that output out; but between these two points, a universe is created and destroyed. Gordon Swobe : >Eric Messick : >> Come back after you've written a neural network >> simulator and trained it to do something useful. > > Philosophers of mind don't care much about how "useful" it may seem. They do care if it has a mind capable of having conscious intentional states: thoughts, beliefs, desires and so on as I've already explained. The point isn't to have a useful product, it's to demonstrate a minimal comprehension of how neural network simulations work. You left out the crux of what Eric said: "Then we'll see if your intuition still says that computers can't understand anything." Getting a neural network simulation to do anything useful is sufficiently difficult that you will necessarily learn something about them in the process, and this may change your intuitive impression of what a computer is capable of. Besides, we don't care what philosophers of mind think. We care what computers think. Regrettably, we are forced to talk to the former in order to learn about the latter. From lacertilian at gmail.com Mon Feb 1 02:47:55 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 31 Jan 2010 18:47:55 -0800 Subject: [ExI] multiple realizability In-Reply-To: <418148.11027.qm@web36501.mail.mud.yahoo.com> References: <20100201010329.5.qmail@syzygy.com> <418148.11027.qm@web36501.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > The layer of abstraction does not matter to me. Bad move. You've been attacked before on the basis that you have trouble comprehending the importance of abstraction. How far down through the layers one can go before new complexities cease to emerge is a tremendous component of the argument against formal programs being capable of creating consciousness. To prove this is trivial. All I have to do is invoke a couple of black boxes: One box contains my brain, and another box contains a standard digital computer running the best possible simulation of my brain. Both brains begin in exactly the same state. A single question is sent into each box at the same moment, and the response coming out of the other side is identical. This is the highest level of abstraction, turning whole brains into, essentially, pseudo-random number generators. They carry states; an input is combined with the state through a definite function; the state changes; an output is produced. Gordon has said before that in situations like these, it is impossible to determine whether or not either box has consciousness without "exploratory surgery". I assume Gordon is at least as good a surgeon as Almighty God, and has unlimited computing power available to analyze the resulting data instantaneously. The point is that such surgery is precisely the sort of process which reduces the level of abstraction. A crash course may be in order. You are given ten thousand people. You ask, "How many have blue eyes?". The number splits into two, becoming less abstract. You ask, "How many are taller than I am?". Now there are four numbers, and one quarter the abstraction. Eventually any question you ask will be redundant, as you will have split the population into ten thousand groups of one. But there is still some abstraction left: people are not fundamental particles. So you ask enough questions to uniquely identify every proton, neutron, electron, and any other relevant components. 
Yet still your description is abstract, because you've only differentiated the particles: you haven't determined their exact locations in space. And here, in a universe equipped with the Heisenberg uncertainty principle, we find that you can't. The description is still abstract. It can be made less so, as we expend greater and greater sums of energy to pin down ever more precise configurations of human beings, but to eliminate abstraction entirely would require infinite energy. In this thread Gordon explicitly rejects the notion that a mind can be copied, in whole, to another substrate without a catastrophic interruption in subjective experience. I agree with this, but I think it's for a completely different reason. I can't say for sure because his clarification made things less clear. Proposition A: a machine operating by formal programs cannot replicate the human mind. Proposition B: a neural network could conceivably replicate the human mind. Logical Conclusion: an individual human mind cannot be extracted from its neurological material. This does not appear to follow, unless you were counting artificial neural networks as "neurological material". I understood you to mean the specific neurons responsible for instantiating the mind in question originally. By my understanding, that one experiment in which you replace each individual neuron with an artificial duplicate, one by one, would preserve the same conscious mind you started with. Actually I am kind of counting on this last point being true, so I have a vested interest in finding out whether or not it is. If you can convince me of my error before I actually act on it, Gordon, I would appreciate it. For the record, I am a dualist in the sense that I believe minds are distinct entities from brains, as well as that programs are distinct entities from computers. However, I do not believe that minds or programs are composed of a "substance" in any sense. Both are insubstantial. Software (which I say includes minds) is one layer of abstraction higher than its supporting hardware (which I say includes brains), and therefore one order of magnitude less "real". I'm not sure what the radix is for that order of magnitude, but I am absolutely confident that it is exactly one order! From lacertilian at gmail.com Mon Feb 1 19:15:11 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 1 Feb 2010 11:15:11 -0800 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: References: <20100131182926.5.qmail@syzygy.com> <732400.27938.qm@web36501.mail.mud.yahoo.com> Message-ID: Stathis Papaioannou : > P: It is possible to make artificial neurons which behave like normal > neurons in every way, but lack consciousness. > > That's it! Now, when I ask if P is true you have to answer "Yes" or > "No". Is P true? Yes. But not for any reason relevant to the discussion. The proposition doesn't illustrate your point. Ordinary neurons behave normally without producing consciousness all the time! This state can be produced with trivial effort: either fall asleep, faint, or get somebody to knock you upside the head. Presto. An entire unconscious brain, neurons and all. Request that you clarify the constraints of the experiment. Now, for the other thing that bothered me... Gordon Swobe : > P = true if we define behavior as you've chosen to define it: the exchange of certain neurotransmitters into the synapses at certain times and other similar exchanges between neurons. Stathis has not chosen to define behavior this way. 
Stathis Papaioannou : > Yes, that would be one aspect of the behaviour that needs to be reproduced. See, he's talking about behavior in full: walking, talking, thinking, everything. I don't know why he didn't come right out and say that when obviously it's a point of contention. I had to deduce it from this cryptic reply. It seems as if Gordon believes behavior has nothing to do with consciousness, and Stathis believes consciousness is produced as a direct result of behavior. Further, that the quantity of consciousness is proportional to the intelligence of that behavior. I'd be interested to hear from each of you a description of what would constitute the simplest possible conscious system, and whether or not such a system would necessarily also have intelligence or understanding. I haven't been able to figure out exactly what any of these three words mean to either of you. I am pretty sure, however, that you each have radically different definitions. From lacertilian at gmail.com Mon Feb 1 19:49:23 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 1 Feb 2010 11:49:23 -0800 Subject: [ExI] extropy-chat Digest, Vol 77, Issue 1 In-Reply-To: <681115.27739.qm@web113615.mail.gq1.yahoo.com> References: <681115.27739.qm@web113615.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : >Gordon Swobe : >> The system you describe won't really "know" it is red. It >> will merely act as if it knows it is red, no different from, >> say, an automated camera that acts as if it knows the light >> level in the room and automatically adjusts for it. > > Please explain what "really knowing" is. > > I'm at a loss to see how something that acts exactly as if it knows something is red can not actually know that. In fact, I'm at a loss to see how that sentence can even make sense. Like so many other things, it depends on the method of measurement. Gordon did not describe any such thing, but we can assume he had at least a vague notion of one in mind. It actually is possible to get that paradoxical result, and in fact it's easy enough that examples of it are widespread in reality. See: public school systems the world over, and their obsessive tendency to test knowledge. It's alarmingly easy to get the right answer on a test without understanding why it's the right answer, but a certain mental trick is required to notice when this happens. Basically, you have to understand your own understanding without falling into an infinite recursion loop. Human beings are naturally born into that ability, but most people lose it in school because they learn (incorrectly) that understanding doesn't make a difference. Ben Zaiboc : > You're claiming that something which not only quacks and looks like, but smells like, acts like, sounds like, and is completely indistinguishable down to the molecular level from, a duck, can in fact not be a duck. That if you discover that the processes which give rise to the molecules and their interactions are due to digital information processing, then, suddenly, no duck. This is the standard method of measurement in philosophy: omniscience. The only problem is, omniscience tends to break down rather rapidly when confronted with questions about subjective experience. If you do manage to pry a correct answer from your god's-eye view, it will typically be paradoxical, ambiguous, or both. Works great for ducks, though, and brains by extension.
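Here's a toy rendering of the black-box problem in Python (the boxes and probes are invented for illustration): one box computes its answers, the other merely retrieves them, and no probe in the covered range tells them apart.

def computed_box(n):
    # actually computes the square
    return n * n

LOOKUP = {n: n * n for n in range(1000)}

def table_box(n):
    # merely retrieves a precomputed square
    return LOOKUP[n]

for probe in (3, 17, 412):
    assert computed_box(probe) == table_box(probe)
print("indistinguishable on every probe tried")

From the outside the two are the same duck; the difference, if it matters at all, lives entirely below the level the probes can reach.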
If you assume the existence of consciousness in a given brain, and then you perfectly reconstruct that brain elsewhere on an atomic level, the copy must necessarily also have consciousness. But then you have to ask whether or not it's the same consciousness, and, in my case, I'm forced to conclude that the copy is identical, but distinct. In the next moment, the two versions will diverge, ceasing to be identical. So far so good. However, Gordon usually does not begin with a working consciousness: he tries to construct one from scratch, and he finds that when he uses a digital computer to do so, he fails. I'm not sure yet whether this is a fundamental limitation built into how digital computers work, or if Gordon is just a really bad programmer. I tend to believe the latter. Gordon believes the former, so he's extended the notion to situations in which we DO begin with a working consciousness and then try to move it to another medium. Hope that elucidates matters for you. Also, that it's accurate. From lacertilian at gmail.com Tue Feb 2 18:03:01 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 2 Feb 2010 10:03:01 -0800 Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <969214.43756.qm@web36506.mail.mud.yahoo.com> References: <20100201181430.5.qmail@syzygy.com> <969214.43756.qm@web36506.mail.mud.yahoo.com> Message-ID: On Tue, Feb 2, 2010 at 6:43 AM, Gordon Swobe wrote: > --- On Mon, 2/1/10, Eric Messick wrote: > >> Actually, I was partially mistaken in saying that meaning >> cannot be attached to a word by association with other words. > > I think you make an excellent observation here, Eric. The mere association of a symbol to another symbol does not give either symbol meaning. > > Symbols have derived intentionality, whereas people who use symbols have intrinsic intentionality. I'll try to explain what I mean... > > Compare: > > 1) Jack means that the moon orbits the earth. > > 2) The word "moon" means a large object that orbits the earth. > > In the scene described in 1), Jack means something by the symbol "moon". He has intrinsic intentionality. He has a conscious mental state in which *he means* to communicate something about the moon. > > In sentence 2), we (English speakers of the human species) attribute intentionality to the symbol "moon", as if the symbol itself has a conscious mental state similar to the one Jack had in 1). We imagine for the sake of convenience that symbols mean to say things about themselves. We often speak of words and other symbols this way, treating them as if they have conscious mental states, as if they really do mean to tell us what they mean. We anthropomorphize our language. > > The above might seem blindingly obvious (I hope so) but it has bearing on the symbol grounding question. Symbols have meaning only in the minds of conscious agents; that is, the apparent intentionality of words is derived from conscious intentional agents who actually do the meaning. > > -gts Gordon Swobe : >> Actually, I was partially mistaken in saying that meaning >> cannot be attached to a word by association with other words. > > I think you make an excellent observation here, Eric. The mere association of a symbol to another symbol does not give either symbol meaning. This is exactly what Eric did not say.
The whole paragraph, in case you missed it, was: Eric Messick : > Actually, I was partially mistaken in saying that meaning cannot be > attached to a word by association with other words. A definition > could associate a new word with a set of old words, and if all of the > old words have meanings (by being grounded or by association) the new > one can acquire meaning as well. According to Eric, association is the sole factor giving many symbols meaning in human minds. The only prerequisite is that at least one symbol in the web has meaning intrinsically; that is to say, it is a sense symbol. Meaning can effectively be shared between symbols, and is not diluted in the process. From lacertilian at gmail.com Wed Feb 3 00:16:42 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 2 Feb 2010 16:16:42 -0800 Subject: [ExI] meaning & symbols In-Reply-To: <816357.45313.qm@web36501.mail.mud.yahoo.com> References: <816357.45313.qm@web36501.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > In any case I do not believe there exists any such thing as a "sense symbol". > > Organisms with highly developed nervous systems create and ponder mental abstractions, aka symbols, about sense data and about other abstractions. > > Simple organisms on the order of, say, fleas have eyes and other sense organs, so it seems likely that they have awareness of sense data. But because they lack a well developed nervous system it seems very improbable to me that they can do much in the way of forming symbols to represent that data. In the second paragraph I almost jumped on you again for misusing the concept of abstraction, but then I noticed you said "and about other" rather than "and other". You weren't saying that sense data are abstractions, if I understand correctly. Nothing to disagree with there. When we get to the third paragraph, however, it sounds as if you believe that mankind discovered symbols rather than invented them. Here's the thing: the very idea of a symbol is, in and of itself, an abstraction. I suspect it's possible to form a coherent model of the mind (by today's standards) without ever mentioning symbols or anything like them. It may not be a particularly elegant model, but it would work as well as any other. So, it's really just a matter of convenience to talk about symbols instead of synapses. Fleas have synapses, if fewer than we do, so if we wanted to we could easily say that they form and use symbols (blood, not-blood) within their puny flea-minds. We wouldn't be wrong. You're correct in saying that sense symbols do not exist, but only insofar as there aren't any symbols which DO exist. Gordon Swobe : > I also do not believe any symbol of any kind can have "intrinsic meaning". Meaning always arises in the context of a conscious mind. X means Y only according to some conscious Z. You're right, of course. It was a poor choice of words. I was trying to convey Eric's theory, which I mostly agree with, in as lazy a manner as possible. All I meant by "intrinsic" meaning was that some symbols in the field of all available within a given Z are meaningful irrespective of any other symbols. Eric explains that this is so because they are invoked directly by incoming sensory data: I see a dog, I think a dog symbol. I have no control over whether or not this happens, except to avoid looking at dogs. It's impossible to perceive, or even conceive, a discrete object without simultaneously attaching a symbol to it. Or, if you prefer, grounding a symbol on it.
(Assuming that we're considering information processing in terms of symbols.) From lacertilian at gmail.com Wed Feb 3 03:44:54 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 2 Feb 2010 19:44:54 -0800 Subject: [ExI] meaning & symbols In-Reply-To: <230474.84371.qm@web36502.mail.mud.yahoo.com> References: <230474.84371.qm@web36502.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > No matter what you may choose to call them, people do speak and understand word-symbols. The human mind thus "has semantics" and any coherent model of it must explain that plain fact. I still can't see how the computationalist model fails here, but, more significantly, I can't see why you think it does. Maybe if I went back through the archives and read this whole discussion from the start, but even I don't have that much free time. Gordon Swobe : >Spencer Campbell : >> You're correct in saying that sense symbols do not exist, >> but only insofar as there aren't any symbols which DO exist. > > Hmm, I count 21 word-symbols in that sentence of yours. And I count 23, because those apostrophes denote points where I've smashed two discrete words together to save space. I could also get 25 by treating "insofar" as the full three words it's composed of. Then again, it's really just an arbitrary convention that allows me to do this, so if I change the convention I could just as easily count "in saying that" (ins'ingt), "but only" ('tonly), and "insofar as there" (you get the idea) as single word-symbols as well. There's also an uncompressed "do not" in there, and the concept of sense symbols might catch on to such an extent that we start talking about sensesymbols instead! So I might just as well say that there are only 14 word symbols in that sentence. Then again, I only chose those particular words because it struck me that I almost always put them together in just those sequences. I could make any two words into one, if I don't care about efficiency. Therefore, I could squeeze the whole sentence into a single, magnificently specific word that I'll never, ever have a chance to use again. Conclusion: spaces are not (aren't) the be-all end-all demarcation method of choice, and this is why the word counter in my command-line shell comes up with a slightly different answer than the one built into Google Docs. By the first method this message weighs in at 368 words, whereas the second confidently gives me a figure of 371. From lacertilian at gmail.com Wed Feb 3 16:36:02 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 3 Feb 2010 08:36:02 -0800 Subject: [ExI] meaning & symbols In-Reply-To: <910459.5460.qm@web113613.mail.gq1.yahoo.com> References: <910459.5460.qm@web113613.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : > OK obviously this word 'symbol' needs some clear definition. > > I would use the word to mean any distinct pattern of neural activity that has a relationship with other such patterns. > But that's just me. Maybe I'm overstretching the use of the word. > > What do other people mean by the word 'symbol', in this context? About the same. It's a problematic definition in that *distinct* patterns of neural activity are hard to come by, but I can't do any better.
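Incidentally, the word-counting tangent from my last message is trivial to reproduce. A sketch in Python, using two arbitrary segmentation rules (chosen purely for illustration):

import re

sentence = ("You're correct in saying that sense symbols do not exist, "
            "but only insofar as there aren't any symbols which DO exist.")

by_spaces = len(sentence.split())                    # whitespace rule
by_runs = len(re.findall(r"[A-Za-z]+", sentence))    # letter-run rule

# The two conventions disagree (21 vs 23 here), because "You're" and
# "aren't" are one token under the first rule and two under the second.
print(by_spaces, by_runs)

Neither count is the "real" number of word-symbols; the demarcation convention is doing all the work.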
From lacertilian at gmail.com Wed Feb 3 16:44:16 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 3 Feb 2010 08:44:16 -0800 Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <969214.43756.qm@web36506.mail.mud.yahoo.com> Message-ID: John Clark : > Your retort is always I don't understand that or I do understand that so > "obviously" that can't be right. Even in theory I don't see how any > explanation would satisfy you. Right now I'm thinking the only way to do it is by forming an unbreakable line of similarity between Turing machines and human brains. Not an easy task for the same reason you hinted at: one is very simple and easy to understand, whereas the other is very complex and difficult to understand. Basically, it all depends on what Gordon thinks is the simplest conceivable object capable of intentionality. From lacertilian at gmail.com Thu Feb 4 00:30:40 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 3 Feb 2010 16:30:40 -0800 Subject: [ExI] meaning & symbols In-Reply-To: <642330.6805.qm@web36501.mail.mud.yahoo.com> References: <642330.6805.qm@web36501.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Many things aside from symbols can consistently and systematically change the state of the symbol reader. This isn't even remotely true of all symbol readers, real and imaginary. Maybe you're talking about the human mind specifically. But, if not: Turing machines are not real things, but they are symbol readers. If we imagine a Turing machine whose state can be changed, consistently and systematically, by anything aside from symbols, we are not really imagining a Turing machine anymore. Gordon Swobe : > This wikipedia definition seems better: > > "A symbol is something such as an object, picture, written word, sound, or particular mark that represents something else by association, resemblance, or convention." More generally accurate, yes, but in this context not nearly as useful as Ben Zaiboc's definition. Wikipedia's does not come anywhere near explaining how one symbol can dynamically give rise to a chain of other symbols, which, to my thinking, is the very essence of thought. My guess is that no one here believes meaning can exist outside of thought, or at least a thought-like process. The only question is how thought-like the process has to be. From gts_2000 at yahoo.com Thu Feb 4 01:34:32 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 3 Feb 2010 17:34:32 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <428505.81133.qm@web36503.mail.mud.yahoo.com> --- On Mon, 2/1/10, Stathis Papaioannou wrote: >> I reject as absurd for example your theory that a >> brain the size of Texas constructed of giant neurons made of >> beer cans and toilet paper will have consciousness merely by >> virtue of those beer cans squirting neurotransmitters >> betwixt themselves in the same patterns that natural neurons >> do. > > That is a consequence of functionalism but at this point > functionalism is assumed to be wrong. ?? Can a conscious Texas-sized brain constructed out of giant neurons made of beer cans and toilet paper exist as a possible consequence of your brand of functionalism? Or not?
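To be clear about what I mean by a system that operates by formal programs, here is a minimal Turing-style machine sketched in Python (the rule table is a toy, invented for illustration). Note that nothing but the scanned mark and the current state ever enters into its operation:

# Rules: (state, scanned mark) -> (next state, mark to write, head move).
# Pure syntax: the machine never knows what, if anything, the marks mean.
rules = {
    ("flip", "0"): ("flip", "1", 1),
    ("flip", "1"): ("flip", "0", 1),
    ("flip", "_"): ("halt", "_", 0),
}

tape, head, state = list("0110_"), 0, "flip"
while state != "halt":
    state, tape[head], move = rules[(state, tape[head])]
    head += move

print("".join(tape))  # prints 1001_

On my view that is all such a system ever does, however many layers we stack on top of it: syntax in, syntax out.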
-gts From stathisp at gmail.com Thu Feb 4 03:03:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 4 Feb 2010 14:03:08 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <428505.81133.qm@web36503.mail.mud.yahoo.com> References: <428505.81133.qm@web36503.mail.mud.yahoo.com> Message-ID: On 4 February 2010 12:34, Gordon Swobe wrote: > --- On Mon, 2/1/10, Stathis Papaioannou wrote: > >>> I reject as absurd for example your theory that a >>> brain the size of Texas constructed of giant neurons made of >>> beer cans and toilet paper will have consciousness merely by >>> virtue of those beer cans squirting neurotransmitters >>> betwixt themselves in the same patterns that natural neurons >>> do. >> >> That is a consequence of functionalism but at this point >> functionalism is assumed to be wrong. > > ?? > > Can a conscious Texas-sized brain constructed out of giant neurons made of beer cans and toilet paper exist as a possible consequence of your brand of functionalism? Or not? It would have to be much, much larger than Texas if it was to be human equivalent and it probably wouldn't be physically possible due (among other problems) to loss of structural integrity over the vast distances involved. However, theoretically, there is no problem if such a system is Turing-complete and if the behaviour of the brain is computable. As for the "??": I have ASSUMED that functionalism is wrong, i.e. that it is possible to make a structure which behaves like a brain but lacks consciousness, to see where this leads. I have shown (with your help) that it leads to a contradiction, e.g. "the structure both does and does not behave exactly like a normal brain", which implies that the original assumption must be FALSE. It is like assuming that sqrt(2) is rational, and then showing that this leads to a contradiction, which implies that sqrt(2) is not rational. -- Stathis Papaioannou From avantguardian2020 at yahoo.com Thu Feb 4 06:33:39 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 3 Feb 2010 22:33:39 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <969214.43756.qm@web36506.mail.mud.yahoo.com> Message-ID: <317129.32350.qm@web65616.mail.ac4.yahoo.com> ----- Original Message ---- > From: Spencer Campbell > To: ExI chat list > Sent: Wed, February 3, 2010 8:44:16 AM > Subject: Re: [ExI] How not to make a thought experiment > > John Clark : > > Your retort is always I don't understand that or I do understand that so > > "obviously" that can't be right. Even in theory I don't see how any > > explanation would satisfy you. > > Right now I'm thinking the only way to do it is by forming an > unbreakable line of similarity between Turing machines and human > brains. Not an easy task for the same reason you hinted at: one is > very simple and easy to understand, whereas the other is very complex and > difficult to understand. > > Basically, it all depends on what Gordon thinks is the simplest > conceivable object capable of intentionality. If you equate intentionality with consciousness, one is left with the result that individual cells (of all types) are conscious. This is because cells demonstrate intentionality. It is one of the lesser known hallmarks of life. Survival is intentional and anything that left survival strictly to chance would quickly be weeded out by natural selection. One can clearly see that in this video posted earlier by Spike.
http://www.youtube.com/watch?v=JnlULOjUhSQ The white blood cell is clearly *intent* on eating the bacterium. And the bacterium is clearly *intent* on evading the threat to its existence. Therefore a bacterium is the simplest conceivable object that I am confident is capable of intentionality. Although viruses, being far simpler, may possibly also display intentionality if you interpret trying to hijack cells and evade the immune response as hallmarks of "intention". With regard to the ongoing discussion, I think that it may be an important first step to try to program a computer to be unequivocally "alive", even on the level of a bacterium. It would be far simpler than trying to create a "brain" from scratch and would lend a great deal of support to the functional case. Not to mention it would disprove vitalism once and for all, which would be a feather in the cap of functionalism. Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From bbenzai at yahoo.com Thu Feb 4 09:35:52 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 4 Feb 2010 01:35:52 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <496675.14673.qm@web113614.mail.gq1.yahoo.com> Spencer Campbell wrote: > From my perspective, Gordon has been very consistent > when it comes to > what will and will not pass the Turing test. His arguments, > implicitly > or explicitly, state that the Turing test does not measure > consciousness. This is one point on which he and I agree. The Turing test was designed to answer the question "can machines think?". It doesn't measure consciousness directly (we don't know of anything that can), but it does measure something which can only be the product of consciousness: the ability of a system to convince a human that it is itself human. This is equivalent to convincing them that it is conscious. If this wasn't the case, people would have no real reason to believe that other people were conscious. For this reason, I'd say that anything which can convincingly pass the Turing test should be regarded as conscious. Obviously, you'd want to take this seriously, and not be satisfied with a five-minute conversation. It'd have to be over a period of time, involving many different domains of knowledge, before you'd be fully convinced, but if and when you were convinced that you were actually talking to a human, you'd have to admit that either you think you were talking to a conscious being, or that you think other humans aren't conscious. Ben Zaiboc From jameschoate at austin.rr.com Thu Feb 4 10:25:00 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 4 Feb 2010 10:25:00 +0000 Subject: [ExI] The digital nature of brains In-Reply-To: <496675.14673.qm@web113614.mail.gq1.yahoo.com> Message-ID: <20100204102500.86W8S.511470.root@hrndva-web26-z02> No it does not. It is a test which asks if a human being can tell the difference through a remote communications channel between a machine and a human. It says absolutely nothing about intelligence, thinking, or anything like that with regard to machines. These sorts of claims demonstrate that the claimant has an inverted understanding of the issue. The Turing Test has one, and only one, outcome: to measure the limits of human ability. ---- Ben Zaiboc wrote: > The Turing test was designed to answer the question "can machines think?".
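Consider how little machinery it takes to survive short stretches of such a test. A toy keyword responder in Python (the patterns are invented for illustration); machines of roughly this sophistication have fooled human judges for minutes at a time.

import random

# keyword -> canned comebacks; "" is the fallback when nothing matches
CANNED = {
    "you": ["We were discussing you, not me."],
    "think": ["What makes you believe that?", "Do you doubt it?"],
    "": ["Please go on.", "I see. Tell me more."],
}

def reply(line):
    for keyword, answers in CANNED.items():
        if keyword and keyword in line.lower():
            return random.choice(answers)
    return random.choice(CANNED[""])

print(reply("I think machines can think"))
print(reply("Tell me about yourself"))

Whatever a judge's verdict on this sort of thing measures, it is the judge at least as much as the machine.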
-- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From stathisp at gmail.com Thu Feb 4 11:15:11 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 4 Feb 2010 22:15:11 +1100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: References: Message-ID: On 31 January 2010 14:07, Spencer Campbell wrote: > Stathis Papaioannou : >>Gordon Swobe : >>> A3: syntax is neither constitutive of nor sufficient for semantics. >>> >>> It's because of A3 that the man in the room cannot understand the symbols. I started the robot thread to discuss the addition of sense data on the mistaken belief that you had finally recognized the truth of that axiom. Do you recognize it now? >> >> No, I assert the very opposite: that meaning is nothing but the >> association of one input with another input. You posit that there is a >> magical extra step, which is completely useless and undetectable by >> any means. > > Crap! Now I'm doing it too. This whole discussion is just an absurdly > complex feedback loop, neither positive nor negative. It will never > get better and it will never end. Yet the subject matter is > interesting, and I am helpless to resist. > > First, yes, I agree with Stathis's assertion that association of one > input with another input, or with another output, or, generally, of > one datum with another datum, is the very definition of meaning. > Literally, "A means B". This is mathematically equivalent to, "A > equals B". Smoke equals fire, therefore, if smoke is true or fire is > true then both are true. This is very bad reasoning, and very human. > Nevertheless, we can say that there is a semantic association between > smoke and fire. > > Of course the definitions of semantics and syntax seem to have become > deranged somewhere along the lines, so someone with a different > interpretation of their meaning than I have may very well leap at the > chance to rub my face in it here. This is a risk I am willing to take. > > So! > > To see a computer's idea of semantics one might look at file formats. > An image can be represented in BMP or PNG format, but in either case > it is the same image; both files have the same meaning, though the > manner in which that meaning is represented differs radically, just as > 10/12 differs from 5 * 6^-1. > > Another source might be desktop shortcuts. You double-click the icon > for the terrible browser of your choice, and your computer takes this > to mean instead that you are double-clicking an EXE file in a > completely different place. Note that I could very naturally insert > the word "mean" there, implying a semantic association. > > Neither of these are nearly so human a use of semantics, because the > relationship in each case is literal, not causal. However, it is still > semantics: an association between two pieces of information. > > Gordon has no beef with a machine that produces intelligent behavior > through semantic processes, only with one that produces the same > behavior through syntax alone. > > At this point, though, his argument becomes rather hazy to me. How can > anything even resembling human intelligence be produced without > semantic association? 
> A common feature in Searle's thought experiments, and in Gordon's by > extension, is that there is a very poor description of the exact > process by which a conversational computer determines how to respond > to any given statement. This is necessary to some extent, because if > anyone could give a precise description of the program that passes the > Turing test, well, they could just write it. > > In any case, there's just no excuse to describe that program with > rules like: if I hear "What is a pig?" then I will say "A farm > animal". Sure, some people give that response to that question some of > the time. But if you ask it twice in a row to the same person, you > will get dramatically different answers each time. It's a gross > oversimplification, but I'm forced to admit that it is technically > valid if one views it only as what will happen, from a very high-level > perspective, if "What is a pig?" is the very next thing the Chinese > Room is asked. A whole new lineup of rules like that would have to > be generated after each response. Not a very practical solution. > Effective, but not efficient. > > However, it seems to me that even if we had the brute processing power > to implement a system like that while keeping it realistically > quick-witted, it would still be impossible to generate that rule > without the program containing at least one semantic fact, namely, > "pig = farm animal". > > The only part syntactical rules play in this scenario is to insert the > word "a" at the beginning of the sentence. Syntax is concerned only > with grammatical correctness. Using syntax alone, one might imagine > that the answer would be "a noun": the place at which "pig" occurs in > the sentence implies that the word must be a noun, and this is as > close as a syntactical rule can come to showing similarity between two > symbols. If the grammar in question doesn't explicitly provide > categories for symbols, as in English, then not even this can be done, > and a meaningful syntax-based response is completely impossible. > > I started on this message to point out that Stathis had completely > missed the point of A3, but sure enough I ended up picking on Searle > (and Gordon) as well. > > In the end, I would like to make the claim: syntax implies semantics, > and semantics implies syntax. One cannot find either in isolation, > except in the realm of one's imagination. Like so many other divisions > imposed between natural (that is, non-imaginary) phenomena, this one > is valid but false. I'm not completely sure what you're saying in this post, but at some point the string of symbol associations (A means B, B means C, C means D...) is grounded in sensory input. Searle would say that there needs to be an extra step whereby the symbol so grounded gains "meaning", but this extra step is not only completely mysterious, it is also completely superfluous, since every observable fact about the world would be the same without it. It's like claiming that a subset of humans have an extra dimension of meaning, meaning*, which is mysterious and undetectable, but assuredly there making their lives richer. -- Stathis Papaioannou From stefano.vaj at gmail.com Thu Feb 4 11:59:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 4 Feb 2010 12:59:50 +0100 Subject: [ExI] Personal conclusions Message-ID: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> On 3 February 2010 23:05, Gordon Swobe wrote: > I have no interest in magic.
I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents (semantics). Yes, this is clear by now. The bunch of threads of which Gordon Swobe is the star, which I have admittedly followed on and off, also because of their largely repetitive nature, have been interesting, albeit disquieting, for me. Not really to hear him reiterate innumerable times that for whatever reason he thinks that (organic? human?) brains, while obviously sharing universal computation abilities with cellular automata and PCs, would on the other hand somehow escape the Principle of Computational Equivalence. But because so many of the people who have engaged in the discussion of the point above, while they may not believe any more in a religious concept of "soul", seem to accept without a second thought that some very poorly defined Aristotelic essences would per se exist corresponding to the symbols "mind", "consciousness", "intelligence", and that their existence in the sense above would even be an a priori not really open to analysis or discussion. Now, if this is the case, I sincerely have trouble finding a reason why we should not accept, on an equal basis, the article of faith that Gordon Swobe proposes as to the impossibility of a computer exhibiting the same. Otherwise, we should perhaps reconsider a little, not really the AI research programmes in place, but rather, say, the Vienna Circle, Popper or Dennett. -- Stefano Vaj From stathisp at gmail.com Thu Feb 4 12:07:17 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 4 Feb 2010 23:07:17 +1100 Subject: [ExI] Personal conclusions In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: On 4 February 2010 22:59, Stefano Vaj wrote: > But because so many of the people who have engaged in the discussion of > the point above, while they may not believe any more in a religious > concept of "soul", seem to accept without a second thought that some > very poorly defined Aristotelic essences would per se exist > corresponding to the symbols "mind", "consciousness", "intelligence", > and that their existence in the sense above would even be an a priori > not really open to analysis or discussion. Probably you and I believe the same things about "mind", "consciousness" etc., but we use different words. -- Stathis Papaioannou From stefano.vaj at gmail.com Thu Feb 4 12:16:45 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 4 Feb 2010 13:16:45 +0100 Subject: [ExI] The digital nature of brains In-Reply-To: <496675.14673.qm@web113614.mail.gq1.yahoo.com> References: <496675.14673.qm@web113614.mail.gq1.yahoo.com> Message-ID: <580930c21002040416o54ebd748rceedf0dabe607034@mail.gmail.com> On 4 February 2010 10:35, Ben Zaiboc wrote: > For this reason, I'd say that anything which can convincingly pass the Turing test should be regarded as conscious. In fact, I suspect that anything that can convincingly pass the Turing test is simply conscious *by definition*, because it is the test that we routinely apply to check whether the system we are in touch with is conscious or not (say, when trying to decide whether some human being is asleep or dead). The simple question is: should something, in addition to being able to perform as well as the average adult, alert human being in a Turing test, have blue eyes, flesh limbs, a hairy head or a liver to qualify as "conscious"?
If we try to analyse any "intuition" we may have in this sense, any such intuition evaporates quickly enough. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Thu Feb 4 12:24:24 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 4 Feb 2010 13:24:24 +0100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: References: Message-ID: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> On 4 February 2010 12:15, Stathis Papaioannou wrote: > I'm not completely sure what you're saying in this post, but at some > point the string of symbol associations (A means B, B means C, C means > D...) is grounded in sensory input. Defined as? > Searle would say that there needs > to be an extra step whereby the symbol so grounded gains "meaning", > but this extra step is not only completely mysterious, it is also > completely superfluous, since every observable fact about the world > would be the same without it. > Which sounds pretty equivalent to saying that it does not exist, if one accepts that one's "world" is simply the set of all observable phenomena, and that a claim pertaining to the existence of something is meaningless if it cannot be disproved. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Thu Feb 4 13:03:26 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 00:03:26 +1100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> References: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> Message-ID: 2010/2/4 Stefano Vaj : > On 4 February 2010 12:15, Stathis Papaioannou wrote: >> >> I'm not completely sure what you're saying in this post, but at some >> point the string of symbol associations (A means B, B means C, C means >> D...) is grounded in sensory input. > > Defined as? Input from the environment. "Chien" is "hund", "hund" is "dog", and "dog" is the furry creature with four legs and a tail, as learned by English speakers as young children. >> Searle would say that there needs >> to be an extra step whereby the symbol so grounded gains "meaning", >> but this extra step is not only completely mysterious, it is also >> completely superfluous, since every observable fact about the world >> would be the same without it. > > Which sounds pretty equivalent to saying that it does not exist, if one > accepts that one's "world" is simply the set of all observable phenomena, > and that a claim pertaining to the existence of something is meaningless > if it cannot be disproved. Yes, or you could create undetectable entities like this whenever the fancy took you. -- Stathis Papaioannou From bbenzai at yahoo.com Thu Feb 4 13:38:59 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 4 Feb 2010 05:38:59 -0800 (PST) Subject: [ExI] Mind extension In-Reply-To: Message-ID: <767464.48754.qm@web113618.mail.gq1.yahoo.com> Spencer Campbell wrote: > By my understanding, that one experiment in which > you replace each individual neuron with an artificial duplicate, one > by one, would preserve the same conscious mind you started with. > Actually I am kind of counting on this last point being true, so I > have a vested interest in finding out whether or not it is. If you can > convince me of my error before I actually act on it, Gordon, I would > appreciate it.
I've been pondering this issue, and it's possible that there's a way around the problem of confirming that consciousness can run on artificial neurons without actually removing existing natural neurons, and condemning the subject to death if it turns out to be untrue. I'm thinking of a 'mind extension' scenario, where you attach these artificial neurons (or their software equivalent) to an existing brain using neural interfaces, in a configuration that does something useful, like giving an extra sense or an expanded or secondary short-term memory (of course all this assumes good neural interface technology, working artificial neurons and a better understanding of mental architecture than we have just now). Let the user settle in with the new part of their brain for a while, then they should be able to tell if they 'inhabit' it or if it's just like driving a car: it's something 'out there' that they are operating. If they feel that their consciousness now partly resides in the new brain area, it should be possible to duplicate all the vital brain modules and selectively anaesthetise their biological counterparts without any change in subjective experience. If the person says "Hang on, I blanked out there" for the period of time the artificial brain parts were operating on their own, we would know that they don't support conscious experience, and the person could say 'no thanks' to uploading, with their original brain intact. The overall idea is to build extra room for the mind to expand into, and see if it really has or not. If the new, artificial parts actually don't support consciousness, you'd soon notice. If they do, you could augment your brain to the point where the original was just a tiny part, and you wouldn't even miss it when it eventually dies off. Ben Zaiboc From bbenzai at yahoo.com Thu Feb 4 13:13:12 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 4 Feb 2010 05:13:12 -0800 (PST) Subject: [ExI] multiple realizability In-Reply-To: Message-ID: <764802.35206.qm@web113618.mail.gq1.yahoo.com> I suspect that it's ignorance of the importance of levels of abstraction that can lead to ideas like "minds can come from neural networks, but not from digital programs". All you need to see is that a digital program can implement a neural network at a higher level of abstraction to demolish this idea. That's an over-simplification of course, because the digital program/s would more likely implement a set of software objects that interact to implement individual neural nets that interact to implement sets of information processing mechanisms that interact to create a mind. That's 5 levels of abstraction in my probably over-simplistic concept of the process. There may well be several more in a realistic implementation. Ben Zaiboc From gts_2000 at yahoo.com Thu Feb 4 13:47:00 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 05:47:00 -0800 (PST) Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> Message-ID: <595536.39512.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/4/10, Stefano Vaj wrote: Stathis wrote: >> Searle would say that there >> needs to be an extra step whereby the symbol so grounded gains >> "meaning", but this extra step is not only completely mysterious, it >> is also completely superfluous, since every observable fact about >> the world would be the same without it.
No, he would remind you of the obvious truth that there exist facts in the world that have subjective first-person ontologies. We can know those facts only in the first-person but they have no less reality than those objective third-person facts that as you say "would be the same without it". Real subjective first-person facts of the world include one's own conscious understanding of words. Stefano wrote: > Which sounds pretty equivalent to saying that it does not > exist, I think you want to deny the reality of the subjective. I don't know why. -gts From alfio.puglisi at gmail.com Thu Feb 4 13:56:39 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Thu, 4 Feb 2010 14:56:39 +0100 Subject: [ExI] New NASA plans Message-ID: <4902d9991002040556x5a5407c1r7a8e0bfee32f401a@mail.gmail.com> Does anyone know if this article from the Economist about Obama's plans for NASA: http://www.economist.com/sciencetechnology/displayStory.cfm?story_id=15449787&source=features_box_main is anywhere accurate? The overall tone is more positive than I expected... in particular, the elimination of "cost-plus" contracts seems a big step in cleaning things up. And, well, I'm a huge fan of SpaceX :-) Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Thu Feb 4 13:32:16 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 05:32:16 -0800 (PST) Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: Message-ID: <854080.27226.qm@web36504.mail.mud.yahoo.com> --- On Wed, 2/3/10, Stathis Papaioannou wrote: >> Can a conscious Texas-sized brain constructed out of >> giant neurons made of beer cans and toilet paper exist as a >> possible consequence of your brand of functionalism? Or >> not? > > It would have to be much, much larger than Texas if it was > to be human equivalent and it probably wouldn't be physically possible due > (among other problems) to loss of structural integrity over the > vast distances involved. However, theoretically, there is no > problem if such a system is Turing-complete and if the behaviour of > the brain is computable. Okay, I take that as a confirmation of your earlier assertion that brains made of beer cans and toilet paper can have consciousness provided those beer cans squirt the correct neurotransmitters between themselves at the correct times. This suggests to me that your ideology has a firmer grip on your thinking than does your sense of the absurd, and that no reductio ad absurdum argument will find traction with you. I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. :-) -gts From bbenzai at yahoo.com Thu Feb 4 14:26:44 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 4 Feb 2010 06:26:44 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <438813.76903.qm@web113617.mail.gq1.yahoo.com> jameschoate at austin.rr.com claimed: > ---- Ben Zaiboc > wrote: > > > The Turing test was designed to answer the question > "can machines think?". No it does not. It is a test which asks if a human being can > tell the difference through a remote communications channel > between a machine and a human. > > It says absolutely nothing about intelligence, thinking, or > anything like that with regard to machines. These sorts of > claims demonstrate that the claimant has an inverted > understanding of the issue. The Turing Test has one, and > only one outcome...to measure the limits of human ability.
Well, we're talking about different things. I said "it was designed to..", and you replied "no it does not". Both of these can be true. The test was intended to test the abilities of a machine to convince a human, not to test the abilities of the human. Of course that may well be one of its side effects! (apparently a disturbingly high proportion of people - mostly teenagers I think - are convinced by some chatbots) Ben Zaiboc From stefano.vaj at gmail.com Thu Feb 4 14:30:37 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 4 Feb 2010 15:30:37 +0100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: References: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> Message-ID: <580930c21002040630k1d8931dapd63f2ef62ff51491@mail.gmail.com> On 4 February 2010 14:03, Stathis Papaioannou wrote: > 2010/2/4 Stefano Vaj : > > On 4 February 2010 12:15, Stathis Papaioannou > wrote: > >> > >> I'm not completely sure what you're saying in this post, but at some > >> point the string of symbol associations (A means B, B means C, C means > >> D...) is grounded in sensory input. > > > > Defined as? > > Input from the environment. "Chien" is "hund", "hund" is "dog", and > "dog" is the furry creature with four legs and a tail, as learned by > English speakers as young children. > Mmhhh. "Dog" is a sound perceived with one's ears, subvocalised or represented by the appropriate characters in a given typeface, Pluto may be a drawing or icon of such an animal, the bits by which he is rasterised are another symbol thereof, the pixel of the image of an actual dog on somebody's retina is another symbol thereof. Symbols all the way down, all of them "sensorial" after a fashion, for us exactly as for any other system. OTOH, inputs and interfaces are of course crucial to the definition of a given system. Mr. Jones is different from young baby Brown who is different from a bat who is different from a PC with a SCSI scanner which is different from an I-Phone... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Thu Feb 4 14:35:27 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 4 Feb 2010 06:35:27 -0800 (PST) Subject: [ExI] The simplest possible conscious system In-Reply-To: Message-ID: <820642.67186.qm@web113613.mail.gq1.yahoo.com> The simplest possible conscious system Spencer Campbell asked: > I'd be interested to hear from each of you a description of what would > constitute the simplest possible conscious system, and whether or not > such a system would necessarily also have intelligence or > understanding. Hm, interesting challenge. I'd probably define Intelligence as problem-solving ability, and Understanding as the association of new 'concept-symbols' with established ones. I'd take "Conscious" to mean "Self-Conscious" or "Self-Aware", which almost certainly involves a mental model of one's self, as well as an awareness of the environment, and one's place in it. I'd guess that the simplest possible conscious system would have an embodiment ('real' or virtual) within an environment, sensors, actuators, the ability to build internal representations of both its environment and itself, and by implication some kind of state memory. Hm, maybe we already have conscious robots, and don't realise it! This is just a stab in the dark, though, I may be way off.
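To make the stab slightly more concrete, here's the kind of skeleton I have in mind, sketched in Python (every name here is my own invention, purely for illustration -- I'm certainly not claiming a couple of dozen lines of code would be conscious!):

    class MinimalAgent:
        """The bare ingredients: sensors, actuators, a model of the
        environment, a model of itself, and some state memory."""

        def __init__(self, sensors, actuators):
            self.sensors = sensors        # dict of name -> callable returning a reading
            self.actuators = actuators    # dict of name -> callable acting on the world
            self.world_model = {}         # internal representation of the environment
            self.self_model = {}          # internal representation of the agent itself
            self.memory = []              # state memory

        def step(self):
            percept = {name: sense() for name, sense in self.sensors.items()}
            self.world_model.update(percept)
            # crucially, the agent also represents its own state and history
            self.self_model["last_percept"] = percept
            self.self_model["memory_size"] = len(self.memory)
            self.memory.append(percept)
            for act in self.actuators.values():
                act(self.world_model, self.self_model)

    # a trivial instantiation: one light sensor, one do-nothing motor
    agent = MinimalAgent(sensors={"light": lambda: 0.5},
                         actuators={"motor": lambda world, self_image: None})
    agent.step()

The embodiment could be 'real' or virtual, as I said; the sensors and actuators don't care which.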
As for possessing intelligence and understanding, the simplest possible conscious system almost certainly wouldn't have much of either, although by my definitions above, there would have to be *some* of both. Just not very much (it would need Intelligence only if it's going to try to survive). Ben Zaiboc From gts_2000 at yahoo.com Thu Feb 4 15:00:22 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 07:00:22 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: <20100204102500.86W8S.511470.root@hrndva-web26-z02> Message-ID: <325635.76699.qm@web36508.mail.mud.yahoo.com> Spencer: > How would one determine, in practice, whether or not any > given information processor is a digital computer? I would start by looking for the presence of real physical digital electronic hardware and syntactic instructions that run on it. In some cases you will find those instructions in the software. In other cases you will find them coded into the hardware or firmware. Another way to answer your question: If you find yourself wanting to consult a philosopher about whether a given entity might in some sense exist at some level of description as a digital computer then most likely it's not really a digital computer. :) >> Is it accurate to say that two digital computers, >> networked together, may themselves constitute a larger digital computer? Sure. >> Is the Internet a digital computer? Or, equivalently, >> depending on your definition of the Internet: is the Internet a >> piece of software running on a digital computer? I see the internet as a network of computers that run software. You could consider it one large computer if you like. >> Finally, would you say that an artificial neural >> network is a digital computer? Software implementations of artificial neural networks certainly fall under the general category of digital computer, yes. However in my view no software of any kind can cause subjective experience to arise in the software or hardware. I consider it logically impossible that syntactical operations on symbols, whether they be 1's and 0's or Shakespeare's sonnets, can cause the system implementing those operations to have subjective mental contents. The upshot is that 1) strong AI on digital computers is false, and 2) the human brain does something besides run programs, assuming it runs programs at all. -gts From jonkc at bellsouth.net Thu Feb 4 15:47:02 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 4 Feb 2010 10:47:02 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <317129.32350.qm@web65616.mail.ac4.yahoo.com> References: <969214.43756.qm@web36506.mail.mud.yahoo.com> <317129.32350.qm@web65616.mail.ac4.yahoo.com> Message-ID: <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net> On Feb 4, 2010, The Avantguardian wrote: > a bacterium is the simplest conceivable object that I am confident is capable of intentionality. Stripped to its essentials, intentionality means someone or something that can change its internal state, a state that predisposes it to do one thing rather than another; or at least that's what I mean by the word. I like it because it lacks circularity. So I would say that a punch card reader is simpler than a bacterium and it has intentionality. A Turing Machine is even simpler and it has intentionality too.
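To show just how little I'm asking of the word, here is the whole idea in a few lines of toy Python (my own illustration, nothing more):

    # An "intentional" system in the stripped-down sense: reading a
    # symbol changes an internal state, and that state predisposes the
    # machine to do one thing rather than another.

    class PunchCardReader:
        def __init__(self):
            self.state = "idle"

        def read(self, card):
            # the input changes the internal state
            self.state = "accept" if "HOLE7" in card else "reject"

        def act(self):
            # the state predisposes the next action
            return "advance card" if self.state == "accept" else "ring bell"

    reader = PunchCardReader()
    reader.read("HOLE7 HOLE9")
    print(reader.act())    # advance card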
Granted this underlying mechanism may seem a bit mundane and inglorious, but that's in the very nature of explanations; presenting complex and mysterious things in the smallest possible chunks in a way that is easily understood. Gordon would disagree with me because for him intentionality means having consciousness, and having consciousness means having intentionality. A circle has no end so that may be why his thread has been going on for so long with no end in sight. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jameschoate at austin.rr.com Thu Feb 4 16:43:16 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 4 Feb 2010 16:43:16 +0000 Subject: [ExI] The digital nature of brains In-Reply-To: <438813.76903.qm@web113617.mail.gq1.yahoo.com> Message-ID: <20100204164316.1CI9P.454213.root@hrndva-web06-z02> This is a perfect example of my 'understanding inversion' claim... First, we're not talking about different things. The Turing Test was suggested, not 'designed', as it's not an algorithm or mechanism. At best it's a heuristic. If you read Turing's papers and the period documentation the fundamental question is 'can the person tell the difference?'. If the answer is 'no' the -pre-assumptive claim- is that some level of 'intelligence' has been reached in AI technology. Exactly what that level is, is never defined specifically by the original authors. The second and follow-on generations of AI researchers have interpreted it to mean that AI has intelligence in the human sense. I would suggest, strongly, that this is a cultural 'taboo' that differentiates main stream from perceived cranks. The way you flip the meaning of 'can the person tell the difference' to 'machine to convince' is specious and moot. The important point is the human not being able to tell the difference. You say it is not meant to test the ability of humans, but it is the humans who -must be convinced-. I would say you're trying to massage the test to fit a preconceived cultural desire and not a real technical benchmark. It's about validating human emotion and not mechanical performance. ---- Ben Zaiboc wrote: > Well, we're talking about different things. I said "it was designed to..", and you replied "no it does not". Both of these can be true. > > The test was intended to test the abilities of a machine to convince a human, not to test the abilities of the human. Of course that may well be one of its side effects! (apparently a disturbingly high proportion of people - mostly teenagers I think - are convinced by some chatbots) -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From jameschoate at austin.rr.com Thu Feb 4 16:48:30 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 4 Feb 2010 16:48:30 +0000 Subject: [ExI] The simplest possible conscious system In-Reply-To: <820642.67186.qm@web113613.mail.gq1.yahoo.com> Message-ID: <20100204164830.VFDFM.454315.root@hrndva-web06-z02> I would agree, however there are a couple of issues that must be addressed before it becomes meaningful. First, what is 'conscious'? That definition must not use human brains as an axiomatic measure.
Otherwise we're arguing in circles and making an axiomatic assumption that humans are somehow fundamentally gifted with a singular behavior. This destroys our test on several levels. The point being that the theoretic structure must demonstrate that human thought is conscious and not be assumptive on that point. We can't use an a priori assumption that we are conscious; that we think we are does not make it so. ---- Ben Zaiboc wrote: > The simplest possible conscious system > > Spencer Campbell asked: > > > I'd be interested to hear from each of you a description of what would > > constitute the simplest possible conscious system, and whether or not > > such a system would necessarily also have intelligence or > > understanding. > > Hm, interesting challenge. -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From jonkc at bellsouth.net Thu Feb 4 16:51:20 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 4 Feb 2010 11:51:20 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <642330.6805.qm@web36501.mail.mud.yahoo.com> References: <642330.6805.qm@web36501.mail.mud.yahoo.com> Message-ID: On Feb 3, 2010 Gordon Swobe wrote: > Many things aside from symbols can consistently and systematically change the state of the symbol reader. Like what? And if it consistently and systematically changes the state of the symbol reader exactly what additional quality do these "many things" have that disqualifies them as being symbols? > This wikipedia definition seems better: > "A symbol is something such as an object, picture, written word, sound, or particular mark that represents something else by association Rather like a hole in a particular place on a punch card, and its association to a particular column in a punch card reader. > I take that as a confirmation of your earlier assertion that brains made of beer cans and toilet paper can have consciousness provided those beer cans squirt the correct neurotransmitters between themselves at the correct times. ABSOLUTELY! > This suggests to me that your ideology has a firmer grip on your thinking than does your sense of the absurd, and that no reductio ad absurdum argument will find traction with you. Before you use a reductio ad absurdum argument you must be certain it's logically contradictory; just being odd is not good enough. > I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. It seems odd to us for beer cans and toilet paper to be conscious but in a beer can world it would seem equally odd for 3 pounds of grey goo to be conscious. Neither is logically contradictory. > I have no interest in magic. I'm sure you tell yourself that and I'm sure you believe it, but I don't believe it. Grey goo has magic but beer can computers and toilet paper don't, despite all the talk of semantics and syntax that is the heart of your argument. > I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents And how did you learn of this very interesting fact? You certainly didn't prove it mathematically or find it in the fossil record, you must have learned of it magically. A magic stronger than Darwin.
John K Clark > > -gts > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Thu Feb 4 17:50:01 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Thu, 4 Feb 2010 11:50:01 -0600 Subject: [ExI] Blue Brain Project film preview Message-ID: <55ad6af71002040950o43a4825xa44e985b385c002c@mail.gmail.com> Noah Hutton is making a documentary: http://thebeautifulbrain.com/2010/02/bluebrain-film-preview/ "We are very proud to present the world premiere of BLUEBRAIN - Year One, a documentary short which previews director Noah Hutton's 10-year film-in-the-making that will chronicle the progress of The Blue Brain Project, Henry Markram's attempt to reverse-engineer a human brain. Enjoy the piece and let us know what you think." There's a longer video that explains what he's up to. The Emergence of Intelligence in the Neocortical Microcircuit http://video.google.com/videoplay?docid=-2874207418572601262&ei=lghrS6GmG4jCqQLA1Yz7DA - Bryan http://heybryan.org/ 1 512 203 0507 From lacertilian at gmail.com Thu Feb 4 18:12:27 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 4 Feb 2010 10:12:27 -0800 Subject: [ExI] Personal conclusions In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: Stefano Vaj : > Not really to hear him reiterate innumerable times that for whatever > reason he thinks that (organic? human?) brains, while obviously > sharing universal computation abilities with cellular automata and > PCs, would on the other hand somewhat escape the Principle of > Computational Equivalence. Yeah... yeah. He doesn't seem like the type to take Stephen Wolfram seriously. I'm working on it. Fruitlessly, maybe, but I'm working on it. Getting some practice in rhetoric, at least. Stefano Vaj : > ... very poorly defined Aristotelic essences would per se exist > corresponding to the symbols "mind", "consciousness", "intelligence" ... Actually, I gave a fairly rigorous definition for intelligence in an earlier message. I've refined it since then: The intelligence of a given system is inversely proportional to the average action (time * work) which must be expended before the system achieves a given purpose, assuming that it began in a state as far away as possible from that purpose. (As I said before, this definition won't work unless you assume an arbitrary purpose for the system in question. Purposes are roughly equivalent to attractors here, but the system may itself be part of a larger system, like us. Humans are tricky: the easiest solution is to say they swap purposes many times a day, which means their measured intelligence would change depending on what they're currently doing. Which is consistent with observed reality.) I can't give similarly precise definitions for "mind" or consciousness, and I wouldn't be able to describe the latter at all. Tentatively, I think consciousness is devoid of measurable qualities. This would make it impossible to prove its existence, which to my mind is a pretty solid argument for its nonexistence. Nevertheless, we talk about it all the time, throughout history and in every culture. So even if it doesn't exist, it seems reasonable to assume that it is at least meaningful to think about.
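(Coming back to the intelligence definition for a moment: one nice property, for all its roughness, is that it's mechanically computable, at least in toy form. A sketch in Python, where every name and number is invented purely for illustration:

    def intelligence(trials):
        # trials: (time, work) pairs, one per run, where each run starts
        # as far from the purpose as possible and ends when the purpose
        # is achieved. Intelligence is the inverse of the average action.
        actions = [t * w for t, w in trials]
        return len(actions) / sum(actions)

    # two hypothetical systems pursuing the same arbitrary purpose
    plodder = [(100.0, 50.0), (120.0, 60.0)]   # slow, effortful runs
    whiz    = [(10.0, 5.0), (12.0, 6.0)]       # fast, cheap runs
    print(intelligence(whiz) > intelligence(plodder))   # True

Swap in a different purpose and the measured intelligence changes, which is exactly the behaviour I wanted.)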
Stefano Vaj : > Now, if this is the case, I sincerely have trouble finding a > reason why we should not accept, on an equal basis, the article of > faith that Gordon Swobe proposes as to the impossibility for a > computer to exhibit the same. Your argument runs like this: We have assumed at least one truth a priori. Therefore, we should assume all truths a priori. No, sorry. Doesn't work that way. All logic is, at base, fundamentally illogical. You begin by assuming something for no logical reason whatsoever, and attempt to redeem yourself from there. That doesn't mean reasoning is futile. There's a big difference between a logical assumption (which doesn't exist) and a rational assumption (which does). Accepting at face value that we have minds, intelligence, and consciousness, is perfectly rational. Accepting at face value that computers can not, is not. I can't say exactly why you should believe either of these statements, of course. They aren't in the least bit logical. Make of them what you will. I have to go eat breakfast. From jonkc at bellsouth.net Thu Feb 4 18:32:10 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 4 Feb 2010 13:32:10 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <595536.39512.qm@web36508.mail.mud.yahoo.com> References: <595536.39512.qm@web36508.mail.mud.yahoo.com> Message-ID: <4529BD17-B295-4C3A-B869-2C19DDC12F88@bellsouth.net> On Feb 4, 2010, Gordon Swobe wrote: >>> > he [Searle] would remind you of the obvious truth that there exist facts in the world that have subjective first-person ontologies. We can know those facts[...] What's with this "we" business? You can know certain subjective facts about the universe from direct experience, and that outranks everything else, even logic. But you have no reason to think that I or anybody else or a computer could do that too, and yet you do think so, at least for the first two: you think other people are conscious when they are able to act intelligently; you do this because like me you couldn't function if you thought you were the only conscious being in the universe. But every one of the arguments you have used against the existence of consciousness in computers could just as easily be used to argue against the existence of consciousness in your fellow human beings, but you have never done so. You could also use your arguments to try to show that even you are not conscious, but as I say direct experience outranks everything else; but you have no reason to believe that other people who act intelligently are fundamentally different from anything else that acts intelligently. John K Clark > only in the first-person but they have no less reality than those objective third-person facts that as you say "would be the same without it". > > Real subjective first-person facts of the world include one's own conscious understanding of words. > > Stefano wrote: >> Which sounds pretty equivalent to saying that it does not >> exist, > > I think you want to deny the reality of the subjective. I don't know why. > > -gts > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed...
URL: From avantguardian2020 at yahoo.com Thu Feb 4 18:24:10 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Thu, 4 Feb 2010 10:24:10 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net> References: <969214.43756.qm@web36506.mail.mud.yahoo.com> <317129.32350.qm@web65616.mail.ac4.yahoo.com> <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net> Message-ID: <565163.72063.qm@web65609.mail.ac4.yahoo.com> From: John Clark >To: ExI chat list >Sent: Thu, February 4, 2010 7:47:02 AM >Subject: Re: [ExI] How not to make a thought experiment > > >On Feb 4, 2010, The Avantguardian wrote: > >a bacterium is the simplest conceivable object that I am confident is capable of intentionality. > >Stripped to its essentials, intentionality means someone or something that can change its internal state, a state that predisposes it to do one thing rather >than another; or at least that's what I mean by the word. I like it because it lacks circularity. While I understand your dislike of circularity, the definition you give is far too broad. Almost everything has an internal state that can be changed. The discovery of this and the mathematics behind it made Ludwig Boltzmann famous. A rock has a temperature which is an "internal state". If the temperature of the rock is higher than that of its surroundings, its internal state predisposes the rock to cool down. FWIW evolution by natural selection is based on a circular argument as well. Species evolve by the differential survival and reproduction of the fittest members of the species. What is fitness? Those adaptations that allow members of a species to survive and reproduce. >So I would say that a punch card reader is simpler than a bacterium and it has intentionality. A Turing Machine is even simpler and it has intentionality >too. While I will not discount the possibility that in the future a sufficiently complex program running on a computer may exhibit life or consciousness, that program does not currently exist. Currently the "intentionality" of existing software is completely explicit and vicarious. That is to say that all software currently in existence exhibits only the intentionality of the programmer and not any native or implicit intentionality of its own. By the same token, a mouse trap exhibits explicit intentionality as well, but lacks implicit intentionality. That is, we would say the mouse trap is *intended* to catch a mouse but we would not say the mouse trap is *intent* on catching a mouse. Now some people may think that is true of bacteria as well, but we laugh at intelligent design don't we? >Granted this underlying mechanism may seem a bit mundane and inglorious, but that's in the very nature of explanations; presenting complex and >mysterious things in the smallest possible chunks in a way that is easily understood. The way of reductionism is fraught with the peril of oversimplification. You can reduce an automobile to quarks but that doesn't give you any insight as to how an automobile works. >Gordon would disagree with me because for him intentionality means having consciousness, and having consciousness means having intentionality. Then Gordon must accept that a bacterium is conscious. I however would say that implicit intentionality is necessary for consciousness but not sufficient. >A circle has no end so that may be why his thread has been going on for so long with no end in sight.
One can extrapolate insufficient data into any conclusion one likes. Two given points can lie on a straight line or on a drawing of a unicorn. Neither of these is likely the truth. Which is why I prefer empirical science to philosophy. I think experimentation is the only hope of settling this argument. Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From aware at awareresearch.com Thu Feb 4 18:30:39 2010 From: aware at awareresearch.com (Aware) Date: Thu, 4 Feb 2010 10:30:39 -0800 Subject: [ExI] Personal conclusions In-Reply-To: References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: On Thu, Feb 4, 2010 at 10:20 AM, Aware wrote: > It's simply and necessarily how any system refers to references to > itself. Yes, it's recursive, and therefore unfamiliar and unsupported > by a language and culture that evolved to deal with relatively shallow > context and linear relationships of cause and effect. Meaning is not > as perceived by the observer, but in the response of the observer, > determined by its nature within a particular context. I left out a key word, sorry. Should have been: > It's simply and necessarily how any system refers to references to > itself. Yes, it's recursive, and therefore unfamiliar and unsupported > by a language and culture that evolved to deal with relatively shallow > context and linear relationships of cause and effect. Meaning is not > as perceived by the observer, but in the observed response of the observer, > determined by its nature within a particular context. - Jef From rpwl at lightlink.com Thu Feb 4 18:18:17 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 04 Feb 2010 13:18:17 -0500 Subject: [ExI] ANNOUNCE: New "Artificial General Intelligence" discussion list on Google Groups In-Reply-To: <55ad6af71002040950o43a4825xa44e985b385c002c@mail.gmail.com> References: <55ad6af71002040950o43a4825xa44e985b385c002c@mail.gmail.com> Message-ID: <4B6B0F69.8090709@lightlink.com> In response to the imminent closure of the AGI discussion list, I just set this up as an alternative: http://groups.google.com/group/artificial-general-intelligence Full name of the group is "Artificial General Intelligence" but the short name is AGI-group. (Note that there is already a google group called Artificial General Intelligence, but it appears to be spam-dead.) Its purpose is to encourage polite and well-informed discussion, so it will be moderated to that effect. Allow me to explain my rationale. In the past I felt like posting substantial content to the AGI list because it seemed that there were some people who were well-informed enough to engage in discussion. These days, the noise level is so high that I have no interest, because I know that the people who would give serious thought to real issues are just not listening anymore. I understand that Ben Goertzel is trying to solve this by setting up the H+ forum on AGI. I wish him luck in this, of course, and I myself have joined that forum and will participate if there is useful material there. But I also prefer the faster, easier format of a discussion list WHEN THAT LIST IS CONTROLLED. Consider this to be an experiment, then. If it works, it works. If not, then not. Anyone can join. But if there are people who (a) send ad hominem remarks (b) rant on about fringe topics (c) persistently introduce irrelevant material ... they will first be subjected to KILLTHREADs, and then if it does not stop they will be banned.
This process will be escalated slowly, and anything as drastic as a ban will be preceded by soliciting the opinions of the group if it is a borderline case. Wow! That sounds draconian! Who is to say what is "fringe" and what is way out there, but potentially valuable? Well, the best I can offer is this. I have over 25 years' experience of research in AI, physics and psychology, and I have also investigated other "fringe" areas like scientific parapsychology, so I consider my standards to be very tolerant when it comes to new ideas (after all, I have some outlier ideas of my own), but also savvy enough to know when someone is puncturing the envelope, rather than pushing it. So here goes. You are all invited to join at the above address. For the serious people: let's try to establish a standard early on. Richard Loosemore From aware at awareresearch.com Thu Feb 4 18:20:45 2010 From: aware at awareresearch.com (Aware) Date: Thu, 4 Feb 2010 10:20:45 -0800 Subject: [ExI] Personal conclusions In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: On Thu, Feb 4, 2010 at 3:59 AM, Stefano Vaj wrote: > On 3 February 2010 23:05, Gordon Swobe wrote: >> I have no interest in magic. I contend only that software/hardware systems as we conceive of them today cannot have subjective mental contents (semantics). > > Yes, this is clear by now. > > The bunch of threads of which Gordon Swobe is the star, which I have > admittedly followed on and off, also because of their largely > repetitive nature, have been interesting, albeit disquieting, for me. Interesting to me too, as an example of our limited capability at present, even among intelligent, motivated participants, to effectively specify and frame disparate views and together seek a greater, unifying context which either resolves the apparent differences or serves to clarify them. > > Not really to hear him reiterate innumerable times that for whatever > reason he thinks that (organic? human?) brains, while obviously > sharing universal computation abilities with cellular automata and > PCs, would on the other hand somewhat escape the Principle of > Computational Equivalence. Gordon exhibits a strong reductionist bent; he seems to believe that Truth WILL be found if only one can see closely and precisely enough into the heart of the matter. Ironically, to the extent that he parrots Searle, his logic is impeccable, but he went seriously off that track when engaging with Stathis in the neuron-replacement thought experiment. Most who engage in this debate fall into the same trap of defending functionalism, and this is where the Chinese Room Argument gets most of its mileage, but functionalism, materialism and computationalism are really not at issue. Searle quite clearly and coherently shows that syntax DOES NOT entail semantics, no matter how detailed the implementation. So at the sophomoric level representative of most common objections, the debate spins around and around, as if Searle were denying functionalist, materialist, or computationalist accounts of reality. He's not, and neither is Gordon. The point is that there's a paradox. [And paradox is always a matter of insufficient context. In the bigger picture, all the pieces must fit.] John Clark jumps in to hotly defend Truth and his simple but circular view that consciousness is a Fact: it obviously arrived via Evolution, thus Evolution is the key. And how dare you deny Evolution--or Truth!?
Stathis patiently (he has plenty of patients, as well as patience) rehashes the defense of functionalism which needs no defending, and although Gordon occasionally asserts that he doesn't disagree (on this) he doesn't go far enough to acknowledge and embrace the apparent truth of functionalist accounts WHILE highlighting the ostensible paradox presented by Searle. Eric and Spencer jump in (late in the game, if a merry-go-round can be said to have a "later" point in the ride) and contribute the next layer after functionalism: If we accept that we have "consciousness", "and unquestionably we do", and we accept materialist, functionalist, computationalist accounts of reality, then the answer is not to be found in the objects being represented, but in the complex associations between them. They too are correct (within their context) but their explanation only raises the problem another level, no closer to resolution. > But because so many of the people who have engaged in the discussion of > the point above, while they may not believe any more in a religious > concept of "soul", seem to accept without a second thought that some > very poorly defined Aristotelic essences would per se exist > corresponding to the symbols "mind", "consciousness", "intelligence", > and that their existence in the sense above would even be an a priori > not really open to analysis or discussion. Yes, many of our "rationalist" friends decry belief in a soul, but passionately defend belief in an essential self--almost as if their self depended on it. And along the way we get essential [qualia, experience, intentionality, free-will, meaning, personal identity...] and paradox. And despite accumulating evidence of the incoherence of consciousness, with all its gaps, distortions, fabrication and confabulation, we hang on to it, and decide it must be a very Hard Problem. Thus inoculated, and fortified by the biases built into our language and culture, we know that when someone comes along and says that it's actually very simple, cf. Dennett, Metzinger, Pollack, Buddha..., we can be sure, even though we can't make sense of what they're saying, that they must be wrong. A few deeper thinkers, aiming for greater coherence over greater context, have suggested that either all entities "have consciousness" or none do. This is a step in the right direction. Then the question, clarified, might be decided simply in information-theoretic terms. But even then, more often they will side with Panpsychism (even a rock has consciousness, but only a little) than face the possibility of non-existence of an essential experiencer. > Now, if this is the case, I sincerely have trouble finding a > reason why we should not accept, on an equal basis, the article of > faith that Gordon Swobe proposes as to the impossibility for a > computer to exhibit the same. > > Otherwise, we should perhaps reconsider not so much the AI > research programmes in place as, say, the Circle of Vienna, > Popper or Dennett. Searle is right, in his logic. Wrong, in his premises. No formal syntactic system produces semantics. Further, to the extent that the human brain is formally described, no semantics will be found there either. We never had it, and don't need it. "It" can't even be defined in functional terms. The notion is incoherent, despite the strength and seductiveness of the illusion. It's simply and necessarily how any system refers to references to itself.
Yes, it's recursive, and therefore unfamiliar and unsupported by a language and culture that evolved to deal with relatively shallow context and linear relationships of cause and effect. Meaning is not as perceived by the observer, but in the response of the observer, determined by its nature within a particular context. Yes, it may feel like a direct attack on the sanctity of Self, but it's not. It destroys nothing that ever existed, and opens up thinking on agency just as valid, extending beyond the boundaries of the cranium, or the skin, or the organism plus its tools, or ... Oh well. Baby steps... - Jef From lacertilian at gmail.com Thu Feb 4 18:57:01 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 4 Feb 2010 10:57:01 -0800 Subject: [ExI] The digital nature of brains In-Reply-To: <325635.76699.qm@web36508.mail.mud.yahoo.com> References: <20100204102500.86W8S.511470.root@hrndva-web26-z02> <325635.76699.qm@web36508.mail.mud.yahoo.com> Message-ID: Gordon Swobe wrote: > Spencer: >> Finally, would you say that an artificial neural network is a digital computer? > > Software implementations of artificial neural networks certainly fall under the general category of digital computer, yes. I could easily have guessed you would say that, but my question pertains to hard, non-simulated artificial neural networks. This brings up another point of interest: you seem to place computer programs within the category of digital computers. This isn't how I use the term. I would say: Firefox is not a digital computer; it is instantiated by a digital computer. All computers are physical objects in reality; if they are not, they should be explicitly designated as virtual computers. As a side note, are all computers effectively digital computers? It'd save me some time, and the Internet some bandwidth, if so. Personally I could go either way. When I want to be fully inclusive, I usually say "information processor", which denotes brains just as well as laptops. From gts_2000 at yahoo.com Thu Feb 4 19:09:39 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 11:09:39 -0800 (PST) Subject: [ExI] Principle of Computational Equivalence In-Reply-To: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: <578968.53151.qm@web36506.mail.mud.yahoo.com> --- On Thu, 2/4/10, Stefano Vaj wrote: > Not really to hear him reiterate innumerable times that for > whatever reason he thinks that (organic? human?) brains, while > obviously sharing universal computation abilities with cellular > automata and PCs, would on the other hand somewhat escape the Principle > of Computational Equivalence. I see no reason to consider the so-called Principle of Computational Equivalence of philosophical interest with respect to natural objects like brains. Given a natural entity or process x and a computation of it c(x) it does not follow that c(x) = x. It does not matter whether x = an organic apple or an organic brain. c(x) = x iff x = a true digital artifact. It seems to me that we have no reason to suppose, except as a matter of religious faith, that any x in the natural world actually exists as a digital artifact. For example we might in principle create perfect computations of hurricanes. It would not follow that hurricanes do computations.
-gts From hkeithhenson at gmail.com Thu Feb 4 19:09:49 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 4 Feb 2010 12:09:49 -0700 Subject: [ExI] Space based solar power again Message-ID: (reply to a discussion on another list about power satellites) How you get the energy down from GEO is a problem with a couple of known solutions. What has to be solved is getting the parts to GEO (or the parts to LEO and the whole thing to GEO if you build it in LEO). Even at a million tons per year (what's needed for a decent-sized SBSP project) the odds are against the cost being low enough for power satellites to make sense (i.e., undercut coal and nuclear) if you try to transport the parts with chemical rockets. You either have to go to some non-reaction method (magnet launcher, cannon, launch loop or space elevator), or you have to go to an exhaust velocity higher than what the energy of chemical fuels will give you. The non-reaction methods are extremely difficult engineering problems, partly because we live at the bottom of a dense atmosphere, partly because of the extreme energy needed. The rule of thumb from the rocket equation is that mass ratio 3 will get the vehicle up to the exhaust velocity and a mass ratio 2 will get it to a bit under 0.7 of the exhaust velocity. (That's just delta-V = Ve * ln(mass ratio): ln 3 is about 1.1 and ln 2 is about 0.69.) Beyond mass ratio 3 the payload fraction rapidly goes to zero. So to get to LEO on a mass ratio 3 means an average exhaust velocity of around 9.5 km/sec. The Skylon gets about 10.5 km/sec equivalent Ve in air breathing mode. Laser-heated hydrogen will give up to 9.8 km/sec. So much for the physics, on to the engineering! :-) Keith From lacertilian at gmail.com Thu Feb 4 19:11:07 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 4 Feb 2010 11:11:07 -0800 Subject: [ExI] Mind extension In-Reply-To: <767464.48754.qm@web113618.mail.gq1.yahoo.com> References: <767464.48754.qm@web113618.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : > (lots and lots of neat stuff) Ever since Stathis put me on the spot to state my feelings on uploading and "brain prostheses", and in fact for several years prior, I've been thinking about pretty much the same thing. At the time I sent my response I actually thought this is what he was talking about, but later decided he probably meant a full brain transplant. Judging by the craziness Alex is pulling off with that EPOC EEG, I'm thinking it would be trivially easy to add new "wings" to our brains. We might be able to do it right now, with lab-grown neurons, if we can figure out a way to increase skull capacity. I hadn't considered a few of the tests you propose, though. Specifically, temporarily "turning off" the organic hemisphere to see if the synthetic hemisphere keeps working. I can imagine a lot of problems with putting that into practice. Certainly, we couldn't even try it until we start creating neurons via building rather than via growing. Hmm! From Frankmac at ripco.com Fri Feb 5 19:14:07 2010 From: Frankmac at ripco.com (Frank McElligott) Date: Fri, 5 Feb 2010 14:14:07 -0500 Subject: [ExI] war is peace Message-ID: <004001caa697$66d15db0$ad753644@sx28047db9d36c> In Russia, strong leaders are held in high esteem. If you want to take over an oil company, just throw the CEO in jail for tax evasion and the people will say he deserved it and our Putin knows best. Then to create a legal auction to have the state take over the company under this action with one bidder is ok, it was legal, wasn't it?
Russia is a different place: rules are set by strong leaders and the people go along with it. What is legal is what Putin says is legal. As an example from the US, if Obama decided that Goldman Sachs' bonus plan was out of the realm of what was good for the country, he could arrest the CEO, stopping him from doing god's work, and then have the government take over the company, thus screwing the shareholders out of hundreds of millions. Sorry, that's a bad example; I should have used AIG instead. Last year that's what the US Gov't did with AIG, except they have not arrested anyone YET. What I could have used was the EU taking over the books of Greece, because we all know the Greek Gov't is corrupt and for the good of the EU we must stop those Greeks from cheating. If Greece falls, so does Spain, so does Portugal, and must I say Italy as well? Greece is in trouble because they lied, AIG was in trouble on account of their greed, and in Russia it's tax problems. If your Government tells you it is right, you accept it, as we did with Bush and his Weapons of Mass Destruction; in Russia they are no different, or in the EU for that matter. So my friend, War is Peace, here in the US, in Russia, and now even in the Eurozone. Hope that helps Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From lacertilian at gmail.com Thu Feb 4 19:19:47 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 4 Feb 2010 11:19:47 -0800 Subject: [ExI] multiple realizability In-Reply-To: <764802.35206.qm@web113618.mail.gq1.yahoo.com> References: <764802.35206.qm@web113618.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : > I suspect that it's ignorance of the importance of levels of abstraction that can lead to ideas like "minds can come from neural networks, but not from digital programs". All you need to see is that a digital program can implement a neural network at a higher level of abstraction to demolish this idea. Yep. But, I'm sure Gordon has already been there. My guess is he took a plane instead of walking or driving, though, and most likely missed all of the cultural flavor by persistently following a tour guide. I wouldn't be surprised if he just stayed in a hotel the whole time. More as an experiment than anything else, I've been trying to figure out how to take him step-by-step into Abstraction City and show him everything he missed. Right now I'm stuck on black boxes. We have a long way to go. From lacertilian at gmail.com Thu Feb 4 19:43:45 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 4 Feb 2010 11:43:45 -0800 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <854080.27226.qm@web36504.mail.mud.yahoo.com> References: <854080.27226.qm@web36504.mail.mud.yahoo.com> Message-ID: Gordon Swobe : >Stathis Papaioannou : >> ... loss of structural integrity over the >> vast distances involved. However, theoretically, there is no >> problem if such a system is Turing-complete and if the behaviour of >> the brain is computable. > I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. :-) Whenever I've seen Stathis use the term "an absurdity", I've mentally translated it to "a paradox". A beer can brain is absurd, but not paradoxical. An unconscious brain which is exactly identical to a conscious brain is paradoxical, but not absurd.
From bbenzai at yahoo.com Thu Feb 4 20:07:37 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 4 Feb 2010 12:07:37 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <356658.52755.qm@web113601.mail.gq1.yahoo.com> jameschoate at austin.rr.com wrote: ---- Ben Zaiboc wrote: > > Well, we're talking about different things. I > said "it was designed to..", and you replied "no it does > not". Both of these can be true. > This is a perfect example of my 'understanding inversion' > claim... > > First, we're not talking about different things. The Turing > Test was suggested, not 'designed', as it's not an algorithm > or mechanism. At best it's a heuristic. If you read Turing's > papers and the period documentation the fundamental question > is 'can the person tell the difference?'. If the answer is > 'no' the -pre-assumptive claim- is that some level of > 'intelligence' has been reached in AI technology. Exactly > what that level is, is never defined specifically by the > original authors. The second and follow-on generations of AI > researchers have interpreted it to mean that AI has > intelligence in the human sense. I would suggest, strongly, > that this is a cultural 'taboo' that differentiates main > stream from perceived cranks. > > The way you flip the meaning of 'can the person tell the > difference' to 'machine to convince' is specious and moot. > The important point is the human not being able to tell the > difference. You say it is not meant to test the ability of > humans, but it is the humans who -must be convinced-. > > I would say you're trying to massage the test to fit a > preconceived cultural desire and not a real technical > benchmark. It's about validating human emotion and not > mechanical performance. Um, I think 'understanding inversion' is right. I don't actually understand what you're trying to say. Ben Zaiboc From gts_2000 at yahoo.com Thu Feb 4 20:39:00 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 12:39:00 -0800 (PST) Subject: [ExI] Personal conclusions In-Reply-To: Message-ID: <469838.61714.qm@web36504.mail.mud.yahoo.com> --- On Thu, 2/4/10, Aware wrote: > So at the sophomoric level representative of most common > objections, the debate spins around and around, as if Searle were > denying functionalist, materialist, or computationalist accounts > of reality. He's not, and neither is Gordon. On the contrary, I most certainly do deny the functionalist and computationalist (but not so much the materialist) accounts of reality. By the way, to make things as clear as mud: 1) computationalism is a species of functionalism, not a theory that competes with it as suggested, and 2) functionalism is not about making artificial neurons, per se, and 3) nobody here in recent months has articulated a true functionalist or functionalist/computationalist account of mind or reality. -gts From lacertilian at gmail.com Thu Feb 4 21:22:22 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 4 Feb 2010 13:22:22 -0800 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: References: Message-ID: Stathis Papaioannou wrote: > I'm not completely sure what you're saying in this post, but at some > point the string of symbol associations (A means B, B means C, C means > D...) is grounded in sensory input. I'm talking about syntax and semantics, but especially syntax. In the context of this discussion, you're making a statement about semantics.
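When I say "syntax" in Gordon's sense, I mean something no richer than this little dictionary-chaser (my own contrivance, in Python, borrowing the chien/hund/dog chain from earlier in the thread):

    # Pure syntax: every symbol is "defined" only by pointing at another
    # symbol. Nothing in the loop ever touches a referent.
    associations = {"chien": "hund", "hund": "dog", "dog": "chien"}

    def chase(symbol, steps):
        trail = [symbol]
        for _ in range(steps):
            symbol = associations[symbol]
            trail.append(symbol)
        return trail

    print(chase("chien", 4))   # ['chien', 'hund', 'dog', 'chien', 'hund']

It can shuffle symbols forever without any of them ever being grounded in anything.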
One assumption (or conclusion, it's hard to tell) made by the notorious Gordon Swobe is that digital computers are capable of syntax, but not of semantics. I made that post to explore the question of whether or not that's even possible in theory. If I was vague and difficult to understand (I was), that might be due to the fact I have a very fuzzy idea of what Gordon means when he talks about syntax, and his is the definition I tried to use. I wouldn't describe your typical CPU as performing syntactical operations normally, but here I would do so without hesitation. From lacertilian at gmail.com Thu Feb 4 21:41:48 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 4 Feb 2010 13:41:48 -0800 Subject: [ExI] Semiotics and Computability Message-ID: Gordon Swobe : > Stathis wrote: >> Searle would say that there >> needs to be an extra step whereby the symbol so grounded gains >> "meaning", but this extra step is not only completely mysterious, it >> is also completely superfluous, since every observable fact about >> the world would be the same without it. > > No, he would remind you of the obvious truth that there exist facts in the world that have subjective first-person ontologies. We can know those facts only in the first-person but they have no less reality than those objective third-person facts that as you say "would be the same without it". You're both wrong! Only I am right! Me! From my limited research, it appears Searle has never said anything about some unknown extra step necessary to produce meaning. If you think his arguments imply any such thing, that's your extrapolation, not his. The Chinese room argument isn't chiefly about meaning: it's about understanding. They're extremely different things. We take meaning as input and output, or at least feel like we do, but we simply HAVE understanding. And no, it isn't a substance. It's a measurable phenomenon. Not easily measurable, but measurable nonetheless. Secondly, "facts with subjective first-person ontologies" is a nightmarishly convoluted phrase. Does the universe even have facts in it, technically speaking? I suppose what I'm meant to do is pick a component of my subjective experience, say, my headache, and call it a fact. Then I say the fact of my headache has a subjective first-person ontology. But that's redundant: all subjective things are first-person things, and vice-versa. And "ontology" actually means "the study of existence". I don't think the fact of my headache has any kind of study, let alone such an esoteric one. Gordon must have meant "existence", not "ontology". Searle uses that same terminology. It makes things terribly difficult. So to say something has "subjective first-person ontology" really means it "exists only for the subject". There are facts (my headache) which exist only for the subject (me). Ah! Now it makes sense. I even have a word for facts like that: "delusions". It's a low blow, I know. It shouldn't be, but it is. Really, it just means we're too hard on especially delusional people. We need delusions in order to function. They aren't inherently bad. Who was it that wrote the paper describing how a delusion of self is unavoidable when implementing a general-purpose consciousness such as myself? I liked that paper. It appealed to my nihilistic side, which is also the rest of me. Ugh, this is going to drive me crazy. I have to remember some keywords to search for. He used a very specific term to refer to that delusion. "Distributed agent" was used in the paper, I think, but not the message that linked to the paper... From gts_2000 at yahoo.com Thu Feb 4 22:17:15 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 14:17:15 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <938103.15549.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/4/10, Spencer Campbell wrote: > Secondly, "facts with subjective first-person ontologies" > is a nightmarishly convoluted phrase. Sorry for the jargon. > Does the universe even have facts in > it, technically speaking? I suppose what I'm meant to > do is pick a component of my subjective experience, say, my headache, > and call it a fact. Exactly right. > Then I say the fact of my headache has a subjective > first-person ontology. Yes. > But that's redundant: all subjective things are first-person > things, and vice-versa. And "ontology" actually means "the > study of existence". I make the distinction because subjective experiences exist both epistemically and ontologically in the first person; that is, we can do epistemic investigations of their causes (why do you have a headache?) in the same sense that we do epistemic investigations of any third-person objective phenomena, and they also have their *existence* in the first-person and thus a first-person ontology. Some people, especially my materialist friends, seem wont to deny the first-person ontology of consciousness. They deny its existence altogether and attempt to reduce it to something "material", not realizing that in doing so they use the same dualistic vocabulary as do those with whom they want to disagree. Theirs is an over-reaction; we can keep consciousness as a real phenomenon without bringing Descartes back from the dead. -gts From lacertilian at gmail.com Thu Feb 4 22:24:59 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 4 Feb 2010 14:24:59 -0800 Subject: [ExI] The simplest possible conscious system In-Reply-To: <820642.67186.qm@web113613.mail.gq1.yahoo.com> References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : > Hm, interesting challenge. > > I'd probably define Intelligence as problem-solving ability, and > Understanding as the association of new 'concept-symbols' with established ones. > > I'd take "Conscious" to mean "Self-Conscious" or "Self-Aware", which almost certainly involves a mental model of one's self, as well as an awareness of the environment, and one's place in it. Somehow I was expecting people to radically disagree on these definitions, but you actually have very similar conceptions of consciousness, intelligence and understanding to my own. Understanding is notably different in my mind, though: I'd say to have a mental model of a thing is to understand that thing. Symbols don't really enter into it, except that we use them as shorthand to refer to understood models. The more similarly your model behaves to the target system, the better you understand that system! Ben Zaiboc : > I'd guess that the simplest possible conscious system would have an embodiment ('real' or virtual) within an environment, sensors, actuators, the ability to build internal representations of both its environment and itself, and by implication some kind of state memory. Hm, maybe we already have conscious robots, and don't realise it! I can conceive of a disembodied consciousness, interacting with its environment only through verbal communication, which would be simpler. Top that!
: > I would agree, however there are a couple of issues that must be addressed before it becomes meaningful. > > First, what is 'conscious'? That definition must not use human brains as an axiomatic measure. I agree. The only problem is that, if consciousness exists, any English definition of it would at least be inaccurate, if not outright incorrect. We can only approximate the speed of light using feet, but we can describe it exactly with meters. I'm not even sure if consciousness is better considered as a binary state, present or absent, or if we should be talking about degrees of consciousness. Certainly, intelligence and understanding are both scalar quantities. Is the same true of consciousness? My current theory is that consciousness requires recursive understanding: that is, understanding of understanding. Meta-understanding. I don't know if it exhibits any emergent properties over and above that, though, or if there are any other prerequisites. From stathisp at gmail.com Thu Feb 4 22:27:52 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 09:27:52 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <854080.27226.qm@web36504.mail.mud.yahoo.com> References: <854080.27226.qm@web36504.mail.mud.yahoo.com> Message-ID: On 5 February 2010 00:32, Gordon Swobe wrote: > --- On Wed, 2/3/10, Stathis Papaioannou wrote: > >>> Can a conscious Texas-sized brain constructed out of >>> giant neurons made of beer cans and toilet paper exist as a >>> possible consequence of your brand of functionalism? Or >>> not? >> >> It would have to be much, much larger than Texas if it was >> to be human equivalent and it probably wouldn't be physically possible due >> (among other problems) to loss of structural integrity over the >> vast distances involved. However, theoretically, there is no >> problem if such a system is Turing-complete and if the behaviour of >> the brain is computable. > > Okay, I take that as a confirmation of your earlier assertion that brains made of beer cans and toilet paper can have consciousness provided those beer cans squirt the correct neurotransmitters between themselves at the correct times. This suggests to me that your ideology has a firmer grip on your thinking than does your sense of the absurd, and that no reductio ad absurdum argument will find traction with you. > > I find it ironic that you try to use reductio ad absurdum arguments with me given that you have apparently inoculated yourself against them. :-) I take it that you are aware of the concept of "Turing equivalence"? It implies that if a digital computer can have a mind, then any Turing equivalent machine can also have a mind. If a beer can computer is Turing equivalent then you don't gain anything philosophically by pointing to it and saying that it's "absurd"; that's more like a politician's subterfuge than a philosophical argument. The absurdity I was referring to, on the other hand, is logical contradiction. Spencer Campbell suggested that these may not be the same thing but that is what I meant; see http://en.wikipedia.org/wiki/Proof_by_contradiction. The logical contradiction is the claim that, for example, artificial brain components can be made which both do and do not behave exactly the same as normal neurons. Not even God can make it so that both P and ~P are true; however, God could easily make a beer can and toilet paper computer or a Chinese Room. It is a difference in kind, not a difference in degree.
-- Stathis Papaioannou From stathisp at gmail.com Thu Feb 4 22:43:49 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 09:43:49 +1100 Subject: [ExI] Mind extension In-Reply-To: <767464.48754.qm@web113618.mail.gq1.yahoo.com> References: <767464.48754.qm@web113618.mail.gq1.yahoo.com> Message-ID: On 5 February 2010 00:38, Ben Zaiboc wrote: > I've been pondering this issue, and it's possible that there's a way around the problem of confirming that consciousness can run on artificial neurons without actually removing existing natural neurons, and condemning the subject to death if it turns out to be untrue. > > I'm thinking of a 'mind extension' scenario, where you attach these artificial neurons (or their software equivalent) to an existing brain using neural interfaces, in a configuration that does something useful, like giving an extra sense or an expanded or secondary short-term memory (of course all this assumes good neural interface technology, working artificial neurons and a better understanding of mental architecture than we have just now). Let the user settle in with the new part of their brain for a while, then they should be able to tell if they 'inhabit' it or if it's just like driving a car: it's something 'out there' that they are operating. > > If they feel that their consciousness now partly resides in the new brain area, it should be possible to duplicate all the vital brain modules and selectively anaesthetise their biological counterparts without any change in subjective experience. > > If the person says "Hang on, I blanked out there" for the period of time the artificial brain parts were operating on their own, we would know that they don't support conscious experience, and the person could say 'no thanks' to uploading, with their original brain intact. > > The overall idea is to build extra room for the mind to expand into, and see if it really has or not. If the new, artificial parts actually don't support consciousness, you'd soon notice. If they do, you could augment your brain to the point where the original was just a tiny part, and you wouldn't even miss it when it eventually dies off. An important point is that if you noticed a difference not only would that mean the artificial parts don't support normal consciousness, it would also mean the artificial parts do not exactly reproduce the objectively observable behaviour of the natural neurons. -- Stathis Papaioannou From stathisp at gmail.com Thu Feb 4 22:53:23 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 09:53:23 +1100 Subject: [ExI] The digital nature of brains In-Reply-To: <325635.76699.qm@web36508.mail.mud.yahoo.com> References: <20100204102500.86W8S.511470.root@hrndva-web26-z02> <325635.76699.qm@web36508.mail.mud.yahoo.com> Message-ID: On 5 February 2010 02:00, Gordon Swobe wrote: > Software implementations of artificial neural networks certainly fall under the general category of digital computer, yes. However in my view no software of any kind can cause subjective experience to arise in the software or hardware. I consider it logically impossible that syntactical operations on symbols, whether they be 1's and 0's or Shakespeare's sonnets, can cause the system implementing those operations to have subjective mental contents. Let's be clear: it is not LOGICALLY impossible that syntax can give rise to meaning.
There is no LOGICAL contradiction in the claim that when a symbol is paired with a particular type of input, then that symbol is grounded, and grounding of the symbol is sufficient for meaning. You don't like this idea because you have a view that there is a mysterious extra layer to provide meaning, but that is a claim about the way the world is (one that is not empirically verifiable), not a LOGICAL claim. -- Stathis Papaioannou From ablainey at aol.com Thu Feb 4 23:13:42 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Thu, 04 Feb 2010 18:13:42 -0500 Subject: [ExI] Pig Symbol In-Reply-To: References: <20100204102500.86W8S.511470.root@hrndva-web26-z02> <325635.76699.qm@web36508.mail.mud.yahoo.com> Message-ID: <8CC7406D403F237-55FC-520D@webmail-m027.sysops.aol.com> Following on from Symbols, AI and especially the robot seeing a pig. Here is a website that pretends to be a personality test based upon your drawing of a pig. There have been over 2 million pigs drawn so far. It seems to me that you could get some interesting insight into AI image recognition by feeding in 2 million+ drawings of pigs. The site owner is asking for suggestions of what to do with the drawings. Does anyone have an AI that needs some training data? Alex From gts_2000 at yahoo.com Thu Feb 4 23:18:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 4 Feb 2010 15:18:34 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <523017.16837.qm@web36507.mail.mud.yahoo.com> --- On Thu, 2/4/10, Stathis Papaioannou wrote: >> Software implementations of artificial neural networks >> certainly fall under the general category of digital >> computer, yes. However in my view no software of any kind >> can cause subjective experience to arise in the software or >> hardware. I consider it logically impossible that >> syntactical operations on symbols, whether they be 1's and >> 0's or Shakespeare's sonnets, can cause the system >> implementing those operations to have subjective mental >> contents. > > Let's be clear: it is not LOGICALLY impossible that syntax > can give rise to meaning. I think it is logically impossible. > There is no LOGICAL contradiction in the > claim that when a symbol is paired with a particular type of input, > then that symbol is grounded, and grounding of the symbol is > sufficient for meaning. I take it that on your view a picture dictionary understands the nouns for which it has pictures, since it "pairs" its word-symbols with sense-data, grounding the symbols in the same way that a computer + webcam can pair and ground symbols. How about a lunch menu? Does it understand sandwiches? :-) -gts From eric at m056832107.syzygy.com Fri Feb 5 00:10:05 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 5 Feb 2010 00:10:05 -0000 Subject: [ExI] Religious idiocy (was: digital nature of brains) In-Reply-To: <580930c21002030509m60aebf03s28104ee60540978b@mail.gmail.com> References: <584374.10388.qm@web36504.mail.mud.yahoo.com> <20100129192646.5.qmail@syzygy.com> <580930c21002010216r46ed7bb8wade659b70018224b@mail.gmail.com> <20100201181430.5.qmail@syzygy.com> <580930c21002030509m60aebf03s28104ee60540978b@mail.gmail.com> Message-ID: <20100205001005.5.qmail@syzygy.com> Stefano writes: >So, is the integer "3" a word symbol or a sense symbol? The integer 3 is a concept for which a brain probably has a symbol.
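(The distinctions drawn next have a tidy analogue in code. A throwaway Python fragment, offered purely as an illustration and not as a claim about how brains encode anything; the mapping of each value onto Eric's symbol categories is loose:)

    concept = 3            # the integer itself: roughly, a "concept symbol"
    glyph = chr(0x33)      # the ASCII character '3': roughly, a "sense symbol"
    word = "three"         # the English word: roughly, a "word symbol"

    print(concept == int(glyph))  # True: a mapping relates the two...
    print(glyph == word)          # False: ...but the symbols themselves differ
    print(type(concept).__name__, type(glyph).__name__, type(word).__name__)
    # int str str -- three different objects standing for "the same" thing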
That symbol will be distinct from the symbol for the word "three", and both are distinct from the impressions (represented by sense symbols) generated when someone views a hand with three fingers held up. All those symbols are related to each other, and activation of any one is likely to make it easier to activate any of the others. > And what about the ASCII decoding of a byte? I'm not sure exactly what you're asking here. ASCII maps byte values to stereotypical glyphs, so I'm assuming you're referring to the glyph '3' as a decoding of the byte value 0x33. When you look at that glyph, a particular sense symbol will be activated, which will likely lead to activation of the corresponding concept and word symbols mentioned above. > Or the rasterisation of the ASCII symbol? Again, I'm not sure exactly what you're getting at. Is that rasterisation what shows up on your video monitor when the computer displays the '3' glyph? I could think about the concept of that occurring, or I could look at the result (see above). >And what difference would it exactly make? Not much, really. They're just names for things, so we can talk about them. The brain probably uses similar mechanisms to process all those symbols. That processing is likely confined to different areas of the brain for each type of symbol, though. I don't think anyone knows yet how the brain does any of this processing. We don't even know much about how the symbols might be encoded, although theories do exist. I happen to like William Calvin's theory as presented in "The Cerebral Code": http://williamcalvin.com/bk9/ I don't think we're yet to the point where we can put that theory to the test. We do know a good deal about the low level processing, but things get complicated as we climb the abstraction ladder. -eric From eric at m056832107.syzygy.com Fri Feb 5 00:25:12 2010 From: eric at m056832107.syzygy.com (Eric Messick) Date: 5 Feb 2010 00:25:12 -0000 Subject: [ExI] meaning & symbols In-Reply-To: <910459.5460.qm@web113613.mail.gq1.yahoo.com> References: <910459.5460.qm@web113613.mail.gq1.yahoo.com> Message-ID: <20100205002512.5.qmail@syzygy.com> Ben writes: >OK obviously this word 'symbol' needs some clear definition. > >I would use the word to mean any distinct pattern of neural activity > that has a relationship with other such patterns. In that sense, > sensory symbols exist, as do (visual) word symbols, (auditory) word > symbols, concept symbols, which are a higher-level abstraction from > the above three types, and hundreds of other types of 'symbol', > representing all the different patterns of neural activity that can > be regarded as coherent units, like emotional states, memories, > linguistic units (nouns, verbs, etc.), and their higher-level > 'chunks' (birdness, the concept of fluidity, etc.), and so on. This sounds exactly like what I mean when I use the term "symbol" in this context. The question came up about how hard it might be to tease apart *distinct* patterns of neural activity. I agree that this is likely to be tricky. I expect many symbols will be active in a brain at the same time, and differentiating them could be hard. They may change representation with the brain region they are active in. I do expect that a symbol is simpler than a global neural firing pattern, though. If a firing pattern in one part of the brain triggers a similar firing pattern in another part of the brain, is the same symbol active in both areas, or are there two distinct symbols?
I don't think we have a good enough handle on this to answer such questions yet. -eric From possiblepaths2050 at gmail.com Fri Feb 5 00:50:11 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 4 Feb 2010 17:50:11 -0700 Subject: [ExI] "Supreme Court Allows Corporations To Run For Political Office, " Onion parody article Message-ID: <2d6187671002041650i3cb96621u60890b5970ebbef4@mail.gmail.com> This is both funny and creepy... http://www.theonion.com/content/news_briefs/supreme_court_allows John : ) From aware at awareresearch.com Fri Feb 5 02:33:29 2010 From: aware at awareresearch.com (Aware) Date: Thu, 4 Feb 2010 18:33:29 -0800 Subject: [ExI] Personal conclusions In-Reply-To: <469838.61714.qm@web36504.mail.mud.yahoo.com> References: <469838.61714.qm@web36504.mail.mud.yahoo.com> Message-ID: On Thu, Feb 4, 2010 at 12:39 PM, Gordon Swobe wrote: > On the contrary, I most certainly do deny the functionalist and computationalist, (but not so much the materialist), accounts of reality. Well, sometimes in effect you do; sometimes you don't. You seem to enjoy the polemics more than you do the opportunity to encompass a greater context of understanding. > By the way, to make things as clear as mud: 1) computationalism is a species of functionalism, not a theory that competes with it as suggested, Seems to me that { Materialism { Functionalism { Computationalism}}}. Your "clear as mud" is clearly appropriate. > and 2) functionalism is not about making artificial neurons, per se, Stathis would argue, I think, that such was the point of that part of his discussion with you. > and 3) nobody here in recent months has articulated a true functionalist or functionalist/computationalist account of mind or reality. My central point (were you paying attention?) is that there can be no "true functionalist/computationalist account of mind..." notwithstanding the legitimacy of all three of these isms in their appropriate contexts. Finally, Gordon, I'd like to thank you for your characteristically thorough and thoughtful reply to my comments... - Jef From x at extropica.org Fri Feb 5 03:08:29 2010 From: x at extropica.org (x at extropica.org) Date: Thu, 4 Feb 2010 19:08:29 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On Thu, Feb 4, 2010 at 1:41 PM, Spencer Campbell wrote: > Who was it that wrote the paper describing how a delusion of self is > unavoidable when implementing a general-purpose consciousness such as > myself? I liked that paper. It appealed to my nihilistic side, which > is also the rest of me. > > Ugh, this is going to drive me crazy. I have to remember some keywords > to search for. He used a very specific term to refer to that delusion. > "Distributed agent" was used in the paper, I think, but not the > message that linked to the paper... Pollock, JL, Ismael J. 2006. Knowledge and reality; So You Think You Exist? In Defense of Nolipsism :35-62. I've posted it to this list twice now. This is the first indication I've seen that anyone read it.
- Jef From ablainey at aol.com Fri Feb 5 08:24:01 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Fri, 05 Feb 2010 03:24:01 -0500 Subject: [ExI] "Supreme Court Allows Corporations To Run For Political Office, " Onion parody article In-Reply-To: <2d6187671002041650i3cb96621u60890b5970ebbef4@mail.gmail.com> References: <2d6187671002041650i3cb96621u60890b5970ebbef4@mail.gmail.com> Message-ID: <8CC7453B455DDF3-2B10-C068@webmail-d005.sysops.aol.com> Funny and true. I can't see how it is different from reality? This is much more apparent in the UK where each political party is a registered business, with its corporate logo and company rules. Even one of the police forces is now allegedly a subsidiary of IBM. Politics and authority have always been big business. -----Original Message----- From: John Grigg To: ExI chat list ; World Transhumanist Association Discussion List ; transfigurism Sent: Fri, 5 Feb 2010 0:50 Subject: [ExI] "Supreme Court Allows Corporations To Run For Political Office, " Onion parody article This is both funny and creepy... http://www.theonion.com/content/news_briefs/supreme_court_allows John : ) From stathisp at gmail.com Fri Feb 5 09:59:50 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 20:59:50 +1100 Subject: [ExI] Principle of Computational Equivalence In-Reply-To: <578968.53151.qm@web36506.mail.mud.yahoo.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> <578968.53151.qm@web36506.mail.mud.yahoo.com> Message-ID: On 5 February 2010 06:09, Gordon Swobe wrote: > --- On Thu, 2/4/10, Stefano Vaj wrote: > >> Not really to hear him reiterate innumerable times that for >> whatever reason he thinks that (organic? human?) brains, while >> obviously sharing universal computation abilities with cellular >> automata and PCs, would on the other hand somewhat escape the Principle >> of Computational Equivalence. > > I see no reason to consider the so-called Principle of Computational Equivalence of philosophical interest with respect to natural objects like brains. > > Given a natural entity or process x and a computation of it c(x) it does not follow that c(x) = x. It does not matter whether x = an organic apple or an organic brain. > > c(x) = x iff x = a true digital artifact. It seems to me that we have no reason to suppose except as a matter of religious faith that any x in the natural world actually exists as a digital artifact. > > For example we might in principle create perfect computations of hurricanes. It would not follow that hurricanes do computations. Gordon, that is all true, but sometimes even a bad copy of an object does perform the same function as the object. For example, a ball may fly through the air like an apple even though it isn't an apple and lacks many of the other properties of an apple. The claim is not that a computer will be *identical* with the brain but that it will reproduce the intelligence of the brain and, as a corollary, the consciousness of the brain, which it turns out (from a logical argument that you can't or won't follow or even attempt to rebut) is impossible to disentangle from the intelligence.
-- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 10:14:01 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 21:14:01 +1100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: <595536.39512.qm@web36508.mail.mud.yahoo.com> References: <580930c21002040424x5021098fl5c0599629dc3d8ec@mail.gmail.com> <595536.39512.qm@web36508.mail.mud.yahoo.com> Message-ID: On 5 February 2010 00:47, Gordon Swobe wrote: > --- On Thu, 2/4/10, Stefano Vaj wrote: > > Stathis wrote: >>> Searle would say that there >>> needs to be an extra step whereby the symbol so grounded gains >>> "meaning", but this extra step is not only completely mysterious, it >>> is also completely superfluous, since every observable fact about >>> the world would be the same without it. > > No, he would remind you of the obvious truth that there exist facts in the world that have subjective first-person ontologies. We can know those facts only in the first-person but they have no less reality than those objective third-person facts that as you say "would be the same without it". > > Real subjective first-person facts of the world include one's own conscious understanding of words. I don't deny subjective experience but I deny that when I understand something I do anything more than associate it with another symbol, ultimately grounded in something I have seen in the real world. That would seem necessary and sufficient for understanding, and for the subjective experience of understanding, such as it is. Searle is postulating an extra layer over and above this which is completely useless. What's to stop us postulating even more layers: people with red hair have understanding*, which stands in relation to understanding as understanding stands in relation to mere symbol-association. Of course the redheads don't behave any differently and don't even know they are any different, but when they use a word they experience something which non-redheads could never even imagine. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 10:30:40 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 21:30:40 +1100 Subject: [ExI] The digital nature of brains In-Reply-To: <523017.16837.qm@web36507.mail.mud.yahoo.com> References: <523017.16837.qm@web36507.mail.mud.yahoo.com> Message-ID: On 5 February 2010 10:18, Gordon Swobe wrote: > I take it that on your view a picture dictionary understands the nouns for which it has pictures, since it "pairs" its word-symbols with sense-data, grounding the symbols in the same way that a computer + webcam can pair and ground symbols. > > How about a lunch menu? Does it understand sandwiches? :-) No, they're not intelligent. Your argument here is equivalent to me pointing to an inert lump of matter in order to demonstrate that matter is incapable of thinking. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 11:35:26 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 5 Feb 2010 22:35:26 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On 5 February 2010 08:41, Spencer Campbell wrote: > From my limited research, it appears Searle has never said anything > about some unknown extra step necessary to produce meaning. If you > think his arguments imply any such thing, that's your extrapolation, > not his. The Chinese room argument isn't chiefly about meaning: it's > about understanding. They're extremely different things.
We take > meaning as input and output, or at least feel like we do, but we > simply HAVE understanding. > > And no, it isn't a substance. It's a measurable phenomenon. Not easily > measurable, but measurable nonetheless. By definition it isn't measurable, since (according to Searle and Gordon) it would be possible to perfectly reproduce the behaviour of the brain, but leave out understanding. It is only possible to observe behaviour, so if behaviour is separable from understanding, you can't observe it. I'm waiting for Gordon to say, OK, I've changed my mind, it is *not* possible to reproduce the behaviour of the brain and leave out understanding, but he just won't do it. -- Stathis Papaioannou From pharos at gmail.com Fri Feb 5 12:35:10 2010 From: pharos at gmail.com (BillK) Date: Fri, 5 Feb 2010 12:35:10 +0000 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On 2/5/10, Stathis Papaioannou wrote: > By definition it isn't measurable, since (according to Searle and > Gordon) it would be possible to perfectly reproduce the behaviour of > the brain, but leave out understanding. It is only possible to observe > behaviour, so if behaviour is separable from understanding, you can't > observe it. I'm waiting for Gordon to say, OK, I've changed my mind, > it is *not* possible to reproduce the behaviour of the brain and leave > out understanding, but he just won't do it. > > "You cannot reason people out of a position that they did not reason themselves into." -- Ben Goldacre (Bad Science) Gordon is listening to a voice in his head that tells him that 'It *must* be this way. It just *must*!' And you can't argue with that. True-believer syndrome is an expression coined by M. Lamar Keene to describe an apparent cognitive disorder characterized by believing in the reality of paranormal or supernatural events after one has been presented overwhelming evidence that the event was fraudulently staged. BillK From gts_2000 at yahoo.com Fri Feb 5 13:05:35 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 05:05:35 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <426324.97073.qm@web36507.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: > By definition it isn't measurable, since (according to > Searle and Gordon) it would be possible to perfectly reproduce the > behaviour of the brain, but leave out understanding. Does your watch understand what time it is, Stathis? No, of course not. But yet it tells you the correct time anyway, *as if* it had understanding. Amazing! -gts From gts_2000 at yahoo.com Fri Feb 5 13:23:33 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 05:23:33 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <211266.88332.qm@web36506.mail.mud.yahoo.com> --- On Fri, 2/5/10, BillK wrote: > Gordon is listening to a voice in his head that tells him... I barely find time to respond to sincere posts from people like Stathis. I have no time to respond to childish insults from the peanut gallery. Please stop.
-gts From stathisp at gmail.com Fri Feb 5 13:25:24 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 00:25:24 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <426324.97073.qm@web36507.mail.mud.yahoo.com> References: <426324.97073.qm@web36507.mail.mud.yahoo.com> Message-ID: On 6 February 2010 00:05, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >> By definition it isn't measurable, since (according to >> Searle and Gordon) it would be possible to perfectly reproduce the >> behaviour of the brain, but leave out understanding. > > Does your watch understand what time it is, Stathis? No, of course not. But yet it tells you the correct time anyway, *as if* it had understanding. Amazing! The watch obviously does not behave exactly like a human, so there is no reason why it should understand time in the same way as a human does. The point is that it is impossible to make a brain or brain component that behaves *exactly* like the natural equivalent but lacks understanding. Note that this says nothing about programs or computers: it is impossible through *any means whatsoever* to make such a device, even if you could invoke magic. -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Feb 5 13:31:16 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 05:31:16 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <213871.74812.qm@web36502.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> I take it that on your view a picture dictionary >> understands the nouns for which it has pictures, since it >> "pairs" its word-symbols with sense-data, grounding the >> symbols in the same way that a computer + webcam can pair >> and ground symbols. > >> How about a lunch menu? Does it understand sandwiches? > :-) > > No, they're not intelligent. Your argument here is > equivalent to me pointing to an inert lump of matter in order to > demonstrate that matter is incapable of thinking. True or false, Stathis: When a program running on a digital computer associates a sense-datum (say, an image of an object taken with its web-cam) with the appropriate word-symbol, the system running that program has now by virtue of that association grounded the word-symbol and now has understanding of the meaning of that word-symbol. -gts From sparge at gmail.com Fri Feb 5 13:37:31 2010 From: sparge at gmail.com (Dave Sill) Date: Fri, 5 Feb 2010 08:37:31 -0500 Subject: [ExI] The digital nature of brains In-Reply-To: <213871.74812.qm@web36502.mail.mud.yahoo.com> References: <213871.74812.qm@web36502.mail.mud.yahoo.com> Message-ID: On Fri, Feb 5, 2010 at 8:31 AM, Gordon Swobe wrote: > > True or false, Stathis: > > When a program running on a digital computer associates a sense-datum (say, an image of an object taken with its web-cam) with the appropriate word-symbol, the system running that program has now by virtue of that association grounded the word-symbol and now has understanding of the meaning of that word-symbol. That depends entirely upon the nature of the program. -Dave From gts_2000 at yahoo.com Fri Feb 5 13:45:26 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 05:45:26 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <591415.38933.qm@web36503.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> Does your watch understand what time it is, Stathis? >> No, of course not.
But yet it tells you the correct time >> anyway, *as if* it had understanding. Amazing! > > The watch obviously does not behave exactly like a human, > so there is no reason why it should understand time in the same way as > a human does. My point here is that intelligent behavior does not imply understanding. We can construct robots that behave intelligently like humans but which have no subjective understanding of anything whatsoever. We've already started doing so in limited areas. It's only a matter of time before we do it in a general sense (weak AGI). > The point is that it is impossible to make a brain or > brain component that behaves *exactly* like the natural > equivalent but lacks understanding. Not impossible at all! Weak AI that passes the Turing test is entirely possible. It will just take a lot of hard work to get there. -gts From stathisp at gmail.com Fri Feb 5 13:45:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 00:45:12 +1100 Subject: [ExI] The digital nature of brains In-Reply-To: <213871.74812.qm@web36502.mail.mud.yahoo.com> References: <213871.74812.qm@web36502.mail.mud.yahoo.com> Message-ID: On 6 February 2010 00:31, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >>> I take it that on your view a picture dictionary >>> understands the nouns for which it has pictures, since it >>> "pairs" its word-symbols with sense-data, grounding the >>> symbols in the same way that a computer + webcam can pair >>> and ground symbols. >> >>> How about a lunch menu? Does it understand sandwiches? >> :-) >> >> No, they're not intelligent. Your argument here is >> equivalent to me pointing to an inert lump of matter in order to >> demonstrate that matter is incapable of thinking. > > True or false, Stathis: > > When a program running on a digital computer associates a sense-datum (say, an image of an object taken with its web-cam) with the appropriate word-symbol, the system running that program has now by virtue of that association grounded the word-symbol and now has understanding of the meaning of that word-symbol. Does an amoeba have an understanding of "food" when it makes an association between the relevant chemotactic signals and the feeling it gets when it engulfs the morsel? You might say "no, the amoeba and this behaviour is too simple". Yet it's from compounding such simple behaviours that we get human level intelligence. The computer behaviour you described is even simpler than that of the amoeba, so you would have to grant the amoeba understanding before considering the possibility that the computer has understanding. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 13:51:14 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 00:51:14 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <591415.38933.qm@web36503.mail.mud.yahoo.com> References: <591415.38933.qm@web36503.mail.mud.yahoo.com> Message-ID: On 6 February 2010 00:45, Gordon Swobe wrote: >> The point is that it is impossible to make a brain or >> brain component that behaves *exactly* like the natural >> equivalent but lacks understanding. > > Not impossible at all! Weak AI that passes the Turing test is entirely possible. It will just take a lot of hard work to get there. Yes, but then when pressed you say that such a brain or brain component would *not* behave exactly like the natural equivalent!
-- Stathis Papaioannou From gts_2000 at yahoo.com Fri Feb 5 14:01:50 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 06:01:50 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <805514.15115.qm@web36501.mail.mud.yahoo.com> --- On Fri, 2/5/10, Dave Sill wrote: >> True or false, Stathis: >> >> When a program running on a digital computer associates >> a sense-datum (say, an image of an object taken with its >> web-cam) with the appropriate word-symbol, the system >> running that program has now by virtue of that association >> grounded the word-symbol and now has understanding of the >> meaning of that word-symbol. > > That depends entirely upon the nature of the program. I see. So then let us say programmer A writes a program that fails but that programmer B writes one that succeeds. What programming tricks did B use such that his program instantiated an entity capable of having subjective understanding of words? (And where can I find him? I want to hire him.) -gts From gts_2000 at yahoo.com Fri Feb 5 14:18:40 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 06:18:40 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <459214.1690.qm@web36507.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> True or false, Stathis: >> >> When a program running on a digital computer associates >> a sense-datum (say, an image of an object taken with its >> web-cam) with the appropriate word-symbol, the system >> running that program has now by virtue of that association >> grounded the word-symbol and now has [conscious] >> understanding of the meaning of that word-symbol. > > Does an amoeba have an understanding of "food" when it > makes an association between the relevant chemotactic signals No, amoebas have nothing I mean by consciousness. Is my statement above true or false, Stathis? I added the word "conscious" to make my meaning even more clear. I ask you these T/F questions to try to find out exactly what you think. -gts From sparge at gmail.com Fri Feb 5 14:19:21 2010 From: sparge at gmail.com (Dave Sill) Date: Fri, 5 Feb 2010 09:19:21 -0500 Subject: [ExI] The digital nature of brains In-Reply-To: <805514.15115.qm@web36501.mail.mud.yahoo.com> References: <805514.15115.qm@web36501.mail.mud.yahoo.com> Message-ID: On Fri, Feb 5, 2010 at 9:01 AM, Gordon Swobe wrote: > > I see. So then let us say programmer A writes a program that fails but that programmer B writes one that succeeds. > > What programming tricks did B use such that his program instantiated an entity capable of having subjective understanding of words? (And where can I find him? I want to hire him.) I don't think that achieving intelligence will be the result of "programming tricks". I also don't think it'll be a one-man effort, and I'm pretty sure it hasn't been done yet. -Dave From gts_2000 at yahoo.com Fri Feb 5 14:35:44 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 06:35:44 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <860799.26946.qm@web36506.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: > You might say "no, the amoeba and this behaviour is too simple". Yet > it's from compounding such simple behaviours that we get human level > intelligence. I do not care how the computer behaves. Does it have conscious understanding of the meaning of the word by virtue of having associated it with an image file of the object represented by the word?
I think you know the answer. -gts From gts_2000 at yahoo.com Fri Feb 5 14:12:52 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 06:12:52 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <369222.38842.qm@web36505.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> Not impossible at all! Weak AI that passes the Turing >> test is entirely possible. It will just take a lot of hard >> work to get there. > > Yes, but then when pressed you say that such a brain or > brain component would *not* behave exactly like the natural > equivalent! I've said that such an artificial neuron/brain will require a lot of work before it behaves like the natural equivalent. This is why the surgeon in your thought experiment must keep replacing and re-programming your artificial neurons until finally he creates a patient that passes the TT. -gts From stathisp at gmail.com Fri Feb 5 14:42:09 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 01:42:09 +1100 Subject: [ExI] The digital nature of brains In-Reply-To: <459214.1690.qm@web36507.mail.mud.yahoo.com> References: <459214.1690.qm@web36507.mail.mud.yahoo.com> Message-ID: On 6 February 2010 01:18, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >>> True or false, Stathis: >>> >>> When a program running on a digital computer associates >>> a sense-datum (say, an image of an object taken with its >>> web-cam) with the appropriate word-symbol, the system >>> running that program has now by virtue of that association >>> grounded the word-symbol and now has [conscious] >>> understanding of the meaning of that word-symbol. >> >> Does an amoeba have an understanding of "food" when it >> makes an association between the relevant chemotactic signals > > No, amoebas have nothing I mean by consciousness. > > Is my statement above true or false, Stathis? I added the word "conscious" to make my meaning even more clear. > > I ask you these T/F questions to try to find out exactly what you think. False, as it is for the amoeba doing the same thing. To have human level consciousness and understanding it has to have human level intelligence! You seem to dismiss this obvious point for computers, yet you're not nearly so generous with bestowing consciousness on non-human organisms as consistency would require. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 5 14:50:40 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 01:50:40 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <369222.38842.qm@web36505.mail.mud.yahoo.com> References: <369222.38842.qm@web36505.mail.mud.yahoo.com> Message-ID: On 6 February 2010 01:12, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >>> Not impossible at all! Weak AI that passes the Turing >>> test is entirely possible. It will just take a lot of hard >>> work to get there. >> >> Yes, but then when pressed you say that such a brain or >> brain component would *not* behave exactly like the natural >> equivalent! > > I've said that such an artificial neuron/brain will require a lot of work before it behaves like the natural equivalent. This is why the surgeon in your thought experiment must keep replacing and re-programming your artificial neurons until finally he creates a patient that passes the TT.
You agree that the artificial neuron will perfectly replicate the behaviour of the natural neuron it replaces, and in the same breath you say that the brain will start behaving differently and the surgeon will have to make further adjustments! Do you really not see that this is a blatant contradiction? -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Feb 5 15:00:11 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 07:00:11 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <204176.88244.qm@web36508.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: > > --- On Fri, 2/5/10, Stathis Papaioannou > wrote: > > > >>> Not impossible at all! Weak AI that passes the > Turing > >>> test is entirely possible. It will just take a > lot of hard > >>> work to get there. > >> > >> Yes, but then when pressed you say that such a > brain or > >> brain component would *not* behave exactly like > the natural > >> equivalent! > > > > I've said that such an artificial neuron/brain will > require a lot of work before it behaves like the natural > equivalent. This is why the surgeon in your thought > experiment must keep replacing and re-programming your > artificial neurons until finally he creates a patient that > passes the TT. > > You agree that the artificial neuron will perfectly > replicate the behaviour of the natural neuron it replaces, and in the > same breath you say that the brain will start behaving differently and > the surgeon will have to make further adjustments! Do you really not > see that this is a blatant contradiction? I think you've misrepresented or misunderstood me here. Where in the same breath did I say these things? In your thought experiment, the artificial program-driven neurons will require a lot of work for the same reason that programming weak AI will require a lot of work. We're not there yet, but it's within the realm of programming possibility. -gts From rpwl at lightlink.com Fri Feb 5 15:03:25 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 05 Feb 2010 10:03:25 -0500 Subject: [ExI] Symbol Grounding [WAS Re: The digital nature of brains] In-Reply-To: <805514.15115.qm@web36501.mail.mud.yahoo.com> References: <805514.15115.qm@web36501.mail.mud.yahoo.com> Message-ID: <4B6C333D.7090205@lightlink.com> Gordon Swobe wrote: > --- On Fri, 2/5/10, Dave Sill wrote: > >>> True or false, Stathis: >>> >>> When a program running on a digital computer associates a >>> sense-datum (say, an image of an object taken with its web-cam) >>> with the appropriate word-symbol, the system running that program >>> has now by virtue of that association grounded the word-symbol >>> and now has understanding of the meaning of that word-symbol. >> That depends entirely upon the nature of the program. > > I see. So then let us say programmer A writes a program that fails > but that programmer B writes one that succeeds. > > What programming tricks did B use such that his program instantiated > an entity capable of having subjective understanding of words? (And > where can I find him? I want to hire him.) [I doubt that you could afford me, but I am open to pleasant surprises.] As to the earlier question, you are asking about the fundamental nature of "grounding". Since there is a huge amount of debate and confusion on the topic, I will save you the trouble of searching the mountain of prior art and come straight to the answer. 
If a system builds a set of symbols that purport to be "about" things in the world, then the only way to decide if those symbols are properly grounded is to look at (a) the mechanisms that build those symbols, (b) the mechanisms that use those symbols (to do, e.g., thinking), (c) the mechanisms that adapt or update the symbols over time, (d) the interconnectedness of the symbols. If these four aspects of the symbol system are all coherently engaged with one another, so that the building mechanisms generate symbols that the deployment mechanisms then use in a way that is consistent, and the development mechanisms also modify the symbols in a coherent way, and the connectedness makes sense, then the symbols are grounded. The key to understanding this last paragraph is that Harnad's contention was that, as a purely practical matter, this kind of global coherence can only be achieved if ALL the mechanisms are working together from the get-go .... which means that the building mechanisms, in particular, are primarily responsible for creating the symbols (using real world interaction). So the normal way for symbols to get grounded is for there to be meaningful "pickup" mechanisms that extract the symbols autonomously, as a result of the system interacting with the environment. But notice that pickup of the trivial kind you implied above (the system just has an object detector attached to its webcam, and a simple bit of code that forms an association with a word) is not by itself enough to satisfy the requirements of grounding. Direct pickup from the senses is a NECESSARY condition for grounding, it is not a SUFFICIENT condition. Why not? Because if this hypothetical system is going to be intelligent, then you need a good deal more than just the webcam and a simple association function - and all that other machinery that is lurking in the background has to be coherently connected to the rest. Only if the whole lot is built and allowed to develop in a coherent, autonomous manner, can the system be said to be grounded. So, because you only mentioned a couple of mechanisms at the front end (webcam and association function) you did not give enough information to tell if the symbols are grounded or not. The correct answer was, then, "it depends on the program". The point of symbol grounding is that if the symbols are connected up by hand, the subtle relationships and mechanism-interactions are almost certainly not going to be there. But be careful about what is claimed here: in principle someone *could* be clever enough to hand-wire an entire intelligent system to get global coherence, and in that case it could actually be grounded, without the symbols being picked up by the system itself. But that is such a difficult task that it is for all practical purposes impossible. Much easier to give the system a set of mechanisms that include the pickup (symbol-building) mechanisms and let the system itself find the symbols that matter. It is worth noting that although Harnad did not say it this way, the problem is really an example of the complex systems problem (cf my 2007 paper on the subject). Complex-system issues are what make it practically impossible to hand-wire a grounded system. You make one final comment, which is about building a system that has a "subjective" understanding of words. That goes beyond grounding, to philosophy of mind issues about subjectivity. 
A properly grounded system will talk about having subjective comprehension or awareness of meanings, not because it is grounded per se, but because it has "analysis" mechanisms that adjudicate on subjectivity issues, and these mechanisms have systemic issues that give rise to subjectivity. For more details about that, see my 2009 paper on Consciousness, which was given at the AGI conference last year. Richard Loosemore From gts_2000 at yahoo.com Fri Feb 5 15:12:42 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 5 Feb 2010 07:12:42 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <800656.29106.qm@web36502.mail.mud.yahoo.com> I think you've just confused consciousness with behavior that seems as-if conscious, and that we need to be more precise in our language. I absolutely disagree with your contention that things (neurons, brains, whatever) that behave as-if they have consciousness must by virtue of that fact have consciousness. -gts From pharos at gmail.com Fri Feb 5 15:23:05 2010 From: pharos at gmail.com (BillK) Date: Fri, 5 Feb 2010 15:23:05 +0000 Subject: [ExI] Semiotics and Computability In-Reply-To: <211266.88332.qm@web36506.mail.mud.yahoo.com> References: <211266.88332.qm@web36506.mail.mud.yahoo.com> Message-ID: On 2/5/10, Gordon Swobe wrote: > I barely find time to respond to sincere posts from people like Stathis. > I have no time to respond to childish insults from the peanut gallery. > Please stop. > > I see it more as a statement of fact rather than an insult. You and responders to you have produced over 500 messages going round in circles while you keep repeating the same *belief*, that it is impossible for computers to ever understand anything. Your thought experiments and strawman arguments get wilder and wilder as you desperately try to repeat the same *belief* using different words. Which in turn causes more confusion as it appears that you might be saying something different, when you're not. Stathis's (and others') arguments have certainly clarified the situation so that working towards creating human level (and greater) intelligence in computers appears a worthwhile objective. For that we should thank him. BillK From stathisp at gmail.com Fri Feb 5 15:25:34 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 02:25:34 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <204176.88244.qm@web36508.mail.mud.yahoo.com> References: <204176.88244.qm@web36508.mail.mud.yahoo.com> Message-ID: On 6 February 2010 02:00, Gordon Swobe wrote: > I think you've misrepresented or misunderstood me here. Where in the same breath did I say these things? > > In your thought experiment, the artificial program-driven neurons will require a lot of work for the same reason that programming weak AI will require a lot of work. We're not there yet, but it's within the realm of programming possibility. The artificial neurons (or subneuronal or multineuronal structures, it doesn't matter) exhibit the same behaviour as the natural equivalents, but lack consciousness. That's all you need to know about them: you don't have to worry how difficult it was to make them, just that they have been made (provided it is logically possible). Now it seems that you allow that such components are possible, but then you say that once they are installed the rest of the brain will somehow malfunction and need to be tweaked.
That is the blatant contradiction: if the brain starts behaving differently, then the artificial components lack the defining property you agreed they have. -- Stathis Papaioannou From aware at awareresearch.com Fri Feb 5 15:51:59 2010 From: aware at awareresearch.com (Aware) Date: Fri, 5 Feb 2010 07:51:59 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On Fri, Feb 5, 2010 at 4:35 AM, BillK wrote: > On 2/5/10, Stathis Papaioannou wrote: >> By definition it isn't measurable, since (according to Searle and >> Gordon) it would be possible to perfectly reproduce the behaviour of >> the brain, but leave out understanding. It is only possible to observe >> behaviour, so if behaviour is separable from understanding, you can't >> observe it. I'm waiting for Gordon to say, OK, I've changed my mind, >> it is *not* possible to reproduce the behaviour of the brain and leave >> out understanding, but he just won't do it. > > "You cannot reason people out of a position that they did not reason > themselves into." > -- Ben Goldacre (Bad Science) Ironically, nearly EVERYONE in this discussion is defending the "obvious, indisputable, common-sense position" that this [qualia | consciousness | meaning | intentionality...(name your 1st-person essence)] actually exists as an ontological attribute of certain systems. It's strongly reminiscent of belief in phlogiston or élan vital, but so much trickier because of the epistemological factor. Nearly everyone here, with righteous rationality, is defending a position they did not reason themselves into, even though, when pressed, they will admit they don't know how to model it or even clearly define it. Gordon presents Searle's argument, and no one here gets that the logic is right, but the premise is wrong--because they are True Believers sharing that premise. The "consciousness" you're looking for--that you assume drives your thinking and receives your experience--doesn't exist. The illusion of an essential self (present always only when the system asks) is simply the necessary behavior of any system referring to references to itself. - Jef From jonkc at bellsouth.net Fri Feb 5 16:04:28 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 5 Feb 2010 11:04:28 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <565163.72063.qm@web65609.mail.ac4.yahoo.com> References: <969214.43756.qm@web36506.mail.mud.yahoo.com> <317129.32350.qm@web65616.mail.ac4.yahoo.com> <4C1944CD-40AD-48A1-A274-2CF40D90CA77@bellsouth.net> <565163.72063.qm@web65609.mail.ac4.yahoo.com> Message-ID: <3B0B158F-40D3-4F38-A7FD-BFD538221EFB@bellsouth.net> Me: >> Stripped to its essentials, intentionality means someone or something that can change its internal state, a state that predisposes it to do one thing rather than another; or at least that's what I mean by the word. I like it because it lacks circularity. >> The Avantguardian > While I understand your dislike of circularity, the definition you give is far too broad. Almost everything has an internal state that can be changed. The discovery of this and the mathematics behind it made Ludwig Boltzmann famous. You are making my case for me! Certainly looking at things that way proved to be very productive indeed for Mr. Boltzmann. > A rock has a temperature which is an "internal state". If the temperature of the rock is higher than that of its surroundings, its internal state predisposes the rock to cool down.
I have no problem with that, if it's good enough for Boltzmann it's good enough for me. > evolution by natural selection is based on a circular argument as well. Species evolve by the differential survival and reproduction of the fittest members of the species. The circularity is in the redundant nature of your sentence, not in the Theory of Evolution; it could be better stated just by saying "Species evolve by differential survival". And even then it would only be 50% true because it says nothing about random mutation. And speaking of Evolution, I have pointed out many times that Gordon's ideas are totally incompatible with Darwin's; nobody has disputed this, but they just shrug and continue to argue about some arcane point in his latest thought experiment. I don't get it. > all software currently in existence exhibits only the intentionality of the programmer and not any native or implicit intentionality of its own. You are advising us in the above to get right back on the good old circular express; I believe that looking at things that way will bring us about as much enlightenment as Gordon has given us in the last month or so. I also think you are making an error in assuming that intentionality is an all or nothing thing. Yes, a Turing Machine finding a zero or a one may seem simple and un-mysterious compared with our deepest desires, but as I said before that is in the very nature of explanations, or at least it is of good ones. > The way of reductionism is fraught with the peril of oversimplification. For some reason nowadays it's very fashionable to bad mouth reductionism, but it is at the heart of nearly every scientific discovery made in the last 500 years; waiting until you understand everything before you try to understand anything has not proven to be productive. If you refuse to break down consciousness into smaller, easier to understand parts you are doomed to circularity as Gordon has ably demonstrated. > I prefer empirical science to philosophy. Me too. > I think experimentation is the only hope of settling this argument. But that would not change Gordon's mind; he specifically said that no matter what a robot did, no matter how brilliantly it behaved he would not treat it as conscious because... well... because it's a robot. What really got me was that the other day he had the gall to mention the word "Evolution". John K Clark From jonkc at bellsouth.net Fri Feb 5 17:04:27 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 5 Feb 2010 12:04:27 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <591415.38933.qm@web36503.mail.mud.yahoo.com> References: <591415.38933.qm@web36503.mail.mud.yahoo.com> Message-ID: <8829C060-4B3A-43D2-A1A1-6BB31AD968D3@bellsouth.net> Since my last post Gordon Swobe has posted 13 times: > I do not care how the computer behaves. Yes Gordon I know, you don't care about behavior and that of course means you don't care about Evolution either, and I have a real problem with that because, even if your thought experiments were superb rather than absurd, real experiments outrank thought experiments. > My point here is that intelligent behavior does not imply understanding. That is 100% incompatible with Darwin's ideas. > We can construct robots that behave intelligently like humans but which have no subjective understanding of anything whatsoever. That is 100% incompatible with Darwin's ideas. > Does your watch understand what time it is, Stathis?
No, of course not. Yet another wonderful example of how not to make a thought experiment. You want to investigate a property, so you set up a thought experiment. You have absolutely no way of measuring or even detecting this property, so you just arbitrarily state that this property does or does not exist in this thought experiment, and then claim to have proven something profound about the existence or nonexistence of that property. Don't you think that's just a bit ridiculous? John K Clark From carlosehuerta at gmail.com Fri Feb 5 17:23:49 2010 From: carlosehuerta at gmail.com (Carlos Huerta) Date: Fri, 5 Feb 2010 12:23:49 -0500 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: <246CA158-8B20-47A6-AA68-D482AA749F78@gmail.com> Hi, is there somewhere I can find this (Pollock, JL, Ismael J. 2006. Knowledge and reality; So You Think You Exist? In Defense of Nolipsism) paper for online reading? Or more of JL Pollock's work? Thanks Get Technical On Feb 4, 2010, at 10:08 PM, x at extropica.org wrote: > On Thu, Feb 4, 2010 at 1:41 PM, Spencer Campbell > wrote: >> Who was it that wrote the paper describing how a delusion of self is >> unavoidable when implementing a general-purpose consciousness such as >> myself? I liked that paper. It appealed to my nihilistic side, which >> is also the rest of me. >> >> Ugh, this is going to drive me crazy. I have to remember some >> keywords >> to search for. He used a very specific term to refer to that >> delusion. >> "Distributed agent" was used in the paper, I think, but not the >> message that linked to the paper... > > Pollock, JL, Ismael J. 2006. Knowledge and reality; So You Think You > Exist? In Defense of Nolipsism: 35-62. > > I've posted it to this list twice now. This is the first indication > I've seen that anyone read it. > > - Jef From pharos at gmail.com Fri Feb 5 19:00:54 2010 From: pharos at gmail.com (BillK) Date: Fri, 5 Feb 2010 19:00:54 +0000 Subject: [ExI] Semiotics and Computability In-Reply-To: <246CA158-8B20-47A6-AA68-D482AA749F78@gmail.com> References: <246CA158-8B20-47A6-AA68-D482AA749F78@gmail.com> Message-ID: On 2/5/10, Carlos Huerta wrote: > > Hi, is there somewhere I can find this (Pollock, JL, Ismael J. 2006. > Knowledge and reality; So You Think You > Exist? In Defense of Nolipsism) paper for online reading? Or more of JL > Pollock's work? > Isn't Google marvelous? ;) BillK From jrd1415 at gmail.com Fri Feb 5 20:50:34 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 5 Feb 2010 13:50:34 -0700 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: On Thu, Feb 4, 2010 at 8:08 PM, wrote: > Pollock, JL, Ismael J. 2006. Knowledge and reality; So You Think You > Exist? In Defense of Nolipsism: 35-62. > > I've posted it to this list twice now. This is the first indication > I've seen that anyone read it.
Can be found at: http://oscarhome.soc-sci.arizona.edu/ftp/PAPERS/Nolipsism.pdf Best, jeff davis From steinberg.will at gmail.com Fri Feb 5 21:15:32 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Fri, 5 Feb 2010 16:15:32 -0500 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: References: <854080.27226.qm@web36504.mail.mud.yahoo.com> Message-ID: <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> Neurons also encode information based on the relative strengths of connections with adjacent neural structures, as well as in their propensities towards different underlying currents. It seems like a lot of you, perhaps due to the polarizing atmosphere of the land of GetSwobia, have too easily discounted legitimate logic and mathematics. Now--the Chinese Room is an absolutely ridiculous analogy and Searle should be demeaned for it, as should anyone who takes it too seriously, because semantics ARE NOT STATIC. I do not understand how anyone can begin to make any arguments centering around this sort of ill-thought-out, sophomoric idea. It's as if Searle simply took the first thought experiment he could think of, immediately deciding, without considering reality, that the rules of understanding could be "booked"--simply an untruth. Really, the man should have a second set of books which give him rules for erasing and writing in new rules in his first set, and a third set which tells him how to edit the second, and on and on until we asymptotically approach sets of books for which I/O is practically meaningless. Now, in truth, the books of the mind are not truly leveled like this but probably exist on multiple levels at once, with different types of information in the brain having widespread effects on many other types. Even sets of remarkably few neurons will demonstrate very, very complicated recursions. If each neuron has its own rules for varying connections based on input and prior connection strength (much like the rules for a TM), the fact that it can change its own rules perhaps lends itself to the idea of mental non-computability, at least in today's sense of the word. Swobe is still wrong, but brains aren't Turing equivalent because the brain does NOT remain a constant T(n) but instead is composed of innumerable modular T(x); T(y); T(z); each is constantly changing the T-value of itself and adjacent virtual machines. Each module has in it some semblance of UTM-ness allowing it to read others, perhaps owing to a greater mental structure of which we are not yet aware. I understand the physicalist's desire to immediately quash all notions of noncomputability, but this is the same sort of blind partisanship that, if continued, will prevent us from truly learning how we think. A static TM is a limited concept. Understanding of the brain will dictate our need to branch out and explore self-modifying Turturingmachineing Machines... From pharos at gmail.com Fri Feb 5 22:13:35 2010 From: pharos at gmail.com (BillK) Date: Fri, 5 Feb 2010 22:13:35 +0000 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> References: <854080.27226.qm@web36504.mail.mud.yahoo.com> <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> Message-ID: On 2/5/10, Will Steinberg wrote: > Even sets of remarkably few neurons > will demonstrate very, very complicated recursions.
If each neuron has its > own rules for varying connections based on input and prior connection > strength (much like the rules for a TM), the fact that it can change its own > rules perhaps lends itself to the idea of mental non-computability, at least > in today's sense of the word. > > Swobe is still wrong, but brains aren't Turing equivalent because the brain > does NOT remain a constant T(n) but instead is composed of innumerable > modular T(x); T(y); T(z); each is constantly changing the T-value of itself > and adjacent virtual machines. Each module has in it some semblance of > UTM-ness allowing it to read others, perhaps owing to a greater mental > structure of which we are not yet aware. > > I understand the physicalist's desire to immediately quash all notions of > noncomputability, but this is the same sort of blind partisanship that, if > continued, will prevent us from truly learning how we think. A static TM is > a limited concept. Understanding of the brain will dictate our need to > branch out and explore self-modifying Turturingmachineing Machines... > > I strongly agree with this comment. That's why an eternity ago (was it really only a month ago?) I said that our present digital computers didn't work the same way as the brain. My attempt at a description was:- The brain is more like an analogue computer. It is not like a digital computer that runs a program stored in memory. The brain *is* the program and *is* the computer. And it is a constantly changing analogue computer as it grows new paths and links. There are no brain programs that resemble computer programs stored in a coded format since all the programming and all the data is built into neuronal networks. If you want to get really complicated, you can think of the brain as multiple analogue computers running in parallel, processing different functions, all growing and changing and passing signals between themselves. We may need a new generation of a different kind of computer to generate this 'consciousness'. It is a different question whether we need this 'consciousness' in our intelligent computers. ------------------ BillK From stefano.vaj at gmail.com Fri Feb 5 22:44:07 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 5 Feb 2010 23:44:07 +0100 Subject: [ExI] Space based solar power again In-Reply-To: References: Message-ID: <580930c21002051444p34551596o15c5545ff6ec648a@mail.gmail.com> On 4 February 2010 20:09, Keith Henson wrote: > Even at a million tons per year (what's needed for a decent sized SBSP > project) the odds are against the cost being low enough for power > satellites to make sense (i.e., undercut coal and nuclear) if you try > to transport the parts with chemical rockets. > > You either have to go to some non reaction method, magnet launcher, > cannon, launch loop or space elevator, or you have to go to an exhaust > velocity higher than what the energy of chemical fuels will give you. Or - be it just in theory - you can go to some nuclear-reaction, Project Orion-like method, which would in my understanding be much easier to implement, at the state of the art, than any of the alternatives. -- Stefano Vaj From bbenzai at yahoo.com Sat Feb 6 08:37:55 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 6 Feb 2010 00:37:55 -0800 (PST) Subject: [ExI] Nolopsism In-Reply-To: Message-ID: <243641.74772.qm@web113601.mail.gq1.yahoo.com> Carlos Huerta asked: > Hi, is there somewhere I can find this (Pollock, JL, Ismael > J. 2006. > Knowledge and reality; So You Think You > Exist?
In Defense of Nolipsism) paper for online reading? > Or more of > JL Pollock's work? > Thanks Yes, you use a search engine ;> First on the list after searching for 'Nolipsism' in Scroogle: http://www.u.arizona.edu/~jtismael/nolipsism.pdf BTW, this is a great paper. I've been reading through it, and so far, it seems to make perfect sense. The basic idea is simple and elegant, and as far as I can see, completely solves all these circular discussions we've been having on this list. I'm sure certain parties wouldn't agree with that opinion though! I'm pretty impressed. Ben Zaiboc From stathisp at gmail.com Sat Feb 6 09:09:36 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 6 Feb 2010 20:09:36 +1100 Subject: [ExI] The digital nature of brains (was: digital simulations) In-Reply-To: <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> References: <854080.27226.qm@web36504.mail.mud.yahoo.com> <4e3a29501002051315n758e0f95v848faebb866dc406@mail.gmail.com> Message-ID: 2010/2/6 Will Steinberg : > Swobe is still wrong, but brains aren't Turing equivalent because the brain > does NOT remain a constant T(n) but instead is composed of innumerable > modular T(x); T(y); T(z); each is constantly changing the T-value of itself > and adjacent virtual machines. Each module has in it some semblance of > UTM-ness allowing it to read others, perhaps owing to a greater mental > structure of which we are not yet aware. A Turing machine is limited in that it does not handle dynamic interaction with an environment, as brains and digital computers do. However, all digital computers are said to be Turing emulable, because any computation the computer can do a Turing machine could also do. A brain could be emulated on a digital computer provided that there is nothing in the physics of the brain that is not computable. An example of non-computable brain physics would be processes that require actual real numbers or a solution of the halting problem in order to model them. Absent such complications, the brain (or any other part of the universe) could be modelled by any digital computer with the right program and enough memory. -- Stathis Papaioannou From stefano.vaj at gmail.com Sat Feb 6 09:48:41 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Feb 2010 10:48:41 +0100 Subject: [ExI] war is peace In-Reply-To: <004001caa697$66d15db0$ad753644@sx28047db9d36c> References: <004001caa697$66d15db0$ad753644@sx28047db9d36c> Message-ID: <580930c21002060148p119e4ef9h8926a0590fdbf843@mail.gmail.com> 2010/2/5 Frank McElligott : > If your Government tells you it is right, you accept it, as we did with Bush > and his Weapons of Mass Destruction, in Russia they are no different, or in > the EU for that matter. Mmhhh, just for the sake of discussion, there is a difference. If government X says that WMDs exist in a given place or that global warming exists and is largely anthropic, either it is empirically true or it is not. If a given government wants to nationalise a given company, and makes use of its power (or implements legislation granting it the power) to do so, it is hard to say that such nationalisation is not "legal". At most, it may not be "right" according to one's socio-political views. But there again, there are a lot of illegal activities (say, writing Galileo's Dialogue Concerning the Two Chief World Systems in 1632, or starting the US war of independence) that one is ready to condone if one likes or approves of them.
This is why, being a jurist myself, I am especially wary of invoking legality or illegality as a value judgment... :-) -- Stefano Vaj From stefano.vaj at gmail.com Sat Feb 6 10:02:49 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Feb 2010 11:02:49 +0100 Subject: [ExI] The simplest possible conscious system In-Reply-To: <820642.67186.qm@web113613.mail.gq1.yahoo.com> References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> Message-ID: <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com> On 4 February 2010 15:35, Ben Zaiboc wrote: > I'd guess that the simplest possible conscious system would have an embodiment ('real' or virtual) within an environment, sensors, actuators, the ability to build internal representations of both its environment and itself, and by implication some kind of state memory. Hm, maybe we already have conscious robots, and don't realise it! If "conscious" is taken to mean "exhibiting the same information processing features of an average adult, healthy, alert, educated human being" the theoretical answer for me corresponds to the question "what is the simplest possible universal computer". And the answer is discussed quite in depth in A New Kind of Science. OTOH, most universal computers which are in fact not human beings, including all those which are much simpler than brains, would be monstrously inefficient at performing such a task. So, if you are referring to something which may pass a Turing test without its opponent (or perhaps the universe...) dying of old age between one question and its answer, the requirements would of course be much stricter. -- Stefano Vaj From stefano.vaj at gmail.com Sat Feb 6 10:15:21 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Feb 2010 11:15:21 +0100 Subject: [ExI] Personal conclusions In-Reply-To: References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: <580930c21002060215t5cacb91fqe4555291e02cea4f@mail.gmail.com> On 4 February 2010 19:12, Spencer Campbell wrote: > The intelligence of a given system is inversely proportional to the > average action (time * work) which must be expended before the system > achieves a given purpose, assuming that it began in a state as far > away as possible from that purpose. I would say it sounds like a good one to me, in particular since it is not a black-and-white one and does not invoke metaphysical, ineffable entities. (I try to write it out formally at the end of this message.) What about "performance in the execution of a given kind of data processing"? > This would make it impossible to prove its existence, which to my mind > is a pretty solid argument for its nonexistence. Nevertheless, we talk > about it (consciousness) all the time, throughout history and in every culture. So > even if it doesn't exist, it seems reasonable to assume that it is at > least meaningful to think about. Absolutely. In fact, there are a lot of useful and perfectly legitimate concepts which do not correspond to "entities". If I say "horizon", "beauty", "computation", "popular will", "sleep", everybody knows what I am talking about, even though nobody, except perhaps Plato, thinks that they have to be something you can rap your knuckles against.
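As promised, an attempt to write Spencer's definition out formally. The notation is mine, not his, so take it only as a sketch of one possible reading: let G be the given purpose, let s_0 be the admissible starting state farthest from G, and let tau and W be the time and the work the system expends in reaching G from s_0. Then:

    I \propto \frac{1}{\mathbb{E}[\tau \cdot W]},
    \qquad s_0 = \arg\max_{s} d(s, G)

where d is whatever distance on the state space we adopt, and the expectation is taken over the system's runs. A nice side effect: time times work has the units of action (joule-seconds), so on this reading "intelligence" is at least dimensionally well defined, as an inverse action.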
-- Stefano Vaj From bbenzai at yahoo.com Sat Feb 6 12:41:15 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 6 Feb 2010 04:41:15 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <375474.21539.qm@web113618.mail.gq1.yahoo.com> Gordon Swobe wrote: >>> True or false, Stathis: >>> >>> When a program running on a digital computer associates >>> a sense-datum (say, an image of an object taken with its >>> web-cam) with the appropriate word-symbol, the system >>> running that program has now by virtue of that association >>> grounded the word-symbol and now has understanding of the >>> meaning of that word-symbol. >> >> That depends entirely upon the nature of the program. > I see. So then let us say programmer A writes a program that fails but that programmer B writes one that succeeds. Hang on, how would you know? What test would you use to determine whether programmer B's program has understanding whereas A's doesn't? > I do not care how the computer behaves. Does it have conscious > understanding of the meaning of the word by virtue of having > associated it with an image file of the object represented > by the word? Well, if you disregard its behaviour, how can you know what it's doing? Whether or not it's having 'conscious understanding' must surely be reflected in its behaviour, and observing its behaviour is the only kind of test that can be done. If you say "we don't need to look at behaviour, we can look at its structure instead", that presupposes that we know what kinds of structure do and don't give rise to conscious understanding, and as we are creating these systems to investigate this in the first place, we would be assuming the very thing we want to prove. There'd be no point in doing the experiment. That's not science. It may be philosophy, but it's definitely not science. Ben Zaiboc From bbenzai at yahoo.com Sat Feb 6 12:39:06 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 6 Feb 2010 04:39:06 -0800 (PST) Subject: [ExI] The digital nature of brains In-Reply-To: Message-ID: <484433.35998.qm@web113602.mail.gq1.yahoo.com> BillK wrote: > > ... an eternity ago (was it really only a month > ago?) I said > that our present digital computers didn't work the same way > as the brain. Is anybody actually claiming that? I certainly wouldn't. Computers don't work the same way as traffic, but that doesn't stop us from using them to model traffic. This is why levels of abstraction are so crucial. It doesn't matter that digital computers don't work the same way as the brain. What matters is that digital computers can create virtual objects that do (or objects that create objects that create objects that do, etc.). Ben Zaiboc From stefano.vaj at gmail.com Sat Feb 6 16:08:06 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Feb 2010 17:08:06 +0100 Subject: [ExI] Nolopsism In-Reply-To: <243641.74772.qm@web113601.mail.gq1.yahoo.com> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> Message-ID: <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> On 6 February 2010 09:37, Ben Zaiboc wrote: > BTW, this is a great paper. I've been reading through it, and so far, it seems to make perfect sense. The basic idea is simple and elegant, and as far as I can see, completely solves all these circular discussions we've been having on this list. I'm sure certain parties wouldn't agree with that opinion though! Indeed. Even though he is too pessimistic in saying that we "cannot accept that we cannot exist".
We should say "we cannot avoid making use of reflexive indicators", but there are plenty of other useful, understandable and practical concepts to which no "essence" really corresponds. Why should one's "self" be an exception? Most of dualism can easily be reduced to linguistic short-circuits and paradoxes... -- Stefano Vaj From natasha at natasha.cc Sat Feb 6 17:21:31 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 6 Feb 2010 11:21:31 -0600 Subject: [ExI] Polytopia Interview with Natasha Message-ID: http://spacecollective.org/projects/Polytopia/ Or link to article here: http://spacecollective.org/Wildcat/5527/The-Audacious-beauty-of-our-future-Natasha-VitaMore-an-interview Natasha Vita-More From gts_2000 at yahoo.com Sat Feb 6 19:27:40 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 11:27:40 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <156490.82842.qm@web36504.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: >> In your thought experiment, the artificial >> program-driven neurons will require a lot of work for the >> same reason that programming weak AI will require a lot of >> work. We're not there yet, but it's within the realm of >> programming possibility. > The artificial neurons (or subneuronal or multineuronal > structures, it doesn't matter)... If it doesn't matter, then let's keep it straightforward and refer to artificial brains rather than to artificial neurons surgically inserted into the midst of natural neurons. This will eliminate a lot of uncertainties that arise from the present state of ignorance about neuroscience. > exhibit the same behaviour as the natural equivalents, > but lack consciousness. In my view an artificial brain can exhibit the same intelligent behaviors as a natural brain without having subjective mental states, where we define behavior as, for example, acts of speech. > That's all you need to know about them: you don't have to worry how > difficult it was to make them, just that they have been made (provided > it is logically possible). Now it seems that you allow that such > components are possible, but then you say that once they are installed > the rest of the brain will somehow malfunction and needs to be tweaked. > That is the blatant contradiction: if the brain starts behaving > differently, then the artificial components lack > the defining property you agreed they have. As above, let's save a lot of confusion and speak of brains rather than individual neurons. -gts From lacertilian at gmail.com Sat Feb 6 20:09:01 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 12:09:01 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: Message-ID: Stathis Papaioannou : >Spencer Campbell : >> They're extremely different things. We take >> meaning as input and output, or at least feel like we do, but we >> simply HAVE understanding. >> >> And no, it isn't a substance. It's a measurable phenomenon. Not easily >> measurable, but measurable nonetheless. > > By definition it isn't measurable, since (according to Searle and > Gordon) it would be possible to perfectly reproduce the behaviour of > the brain, but leave out understanding.
It is only possible to observe > behaviour, so if behaviour is separable from understanding, you can't > observe it. I'm waiting for Gordon to say, OK, I've changed my mind, > it is *not* possible to reproduce the behaviour of the brain and leave > out understanding, but he just won't do it. Unfortunately for both you and Gordon, both of you are right in this case. Define understanding as I do: to understand a system is to have a model of that system in your mind, which entails the ability to correctly guess past or future states of the system based on an assumed state, or the consequences of an interaction between this and another understood system. It's easy to see how this definition covers understanding things like weather patterns, but it also applies in some unexpected ways. I understand English. I can guess what will happen in your mind when you read this sentence; it'll be a pretty inaccurate guess, by any objective measure, but it will be of a higher quality than pure chance would predict by many, many orders of magnitude. To define understanding in terms of associations between symbols does not make sense to me. I understand that dogs are canines. This has no relationship whatsoever to my understanding of dogs; I can only make that statement based on my understanding of English. It's more a fact about words than it is about animals. Returning to the original point: Stathis is correct in saying that understanding has an effect on behavior, and Gordon is correct in saying that intelligent behavior does not imply understanding. I can argue these points further if they aren't obvious, but to me they are. It should be possible, theoretically, to perfectly reproduce human behavior without reproducing a lick of human understanding. But this isn't entirely true. We can set up an experiment in which the (non-understanding) robot does exactly the same thing as the human, but if we observed the human and robot in their natural environments for a couple of years it would soon become obvious that they approach the world in radically different ways, even if their day-to-day behavior is nearly indistinguishable. (The robot I'm thinking of would be built to "understand" the world right off the bat, rather than learning about things as it goes along, as we do.) From gts_2000 at yahoo.com Sat Feb 6 20:11:56 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 12:11:56 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <22844.25525.qm@web36508.mail.mud.yahoo.com> --- On Fri, 2/5/10, Aware wrote: > Ironically, nearly EVERYONE in this discussion is defending > the "obvious, indisputable, common-sense position" that this > [qualia | consciousness | meaning | intentionality...(name your > 1st-person essence)] actually exists as an ontological attribute of > certain systems. Spencer mentioned a head-ache as an example of something I would call a fact of reality that exists with a first-person ontology. > It's strongly reminiscent of belief in phlogiston... Have you ever had a head-ache, Jef? How about a tooth-ache? It seems to me that these kinds of phenomena really do exist in the world. I actually had a tooth extracted two weeks ago, and I can tell you that few things had more reality to me then than the experience of the tooth-ache that precipitated my desire to see the dentist. Subjective experiences such as these differ from such phenomena as mountains and planets only insofar as they have first-person rather than third-person ontologies.
My dentist agrees that tooth-aches really do exist, and so does the Bayer company. I consider myself a materialist, but in the reaction against mind/matter dualism some of my fellow materialists (e.g., Dennett) go overboard and irrationally deny the plain facts of subjective experience. They try to explain it away in third-person terms, fearing that any recognition of the mental will place them in the same camp with Descartes. They don't understand that in so doing they embrace and acknowledge Descartes' dualistic vocabulary. -gts From gts_2000 at yahoo.com Sat Feb 6 20:55:38 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 12:55:38 -0800 (PST) Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: Message-ID: <684806.48339.qm@web36503.mail.mud.yahoo.com> --- On Fri, 2/5/10, Stathis Papaioannou wrote: > I don't deny subjective experience but I deny that when I > understand something I do anything more than associate it with another > symbol, ultimately grounded in something I have seen in the real > world. That would seem necessary and sufficient for understanding, and > for the subjective experience of understanding, such as it is. When I asked you about a digital computer that did exactly that, you acknowledged that said computer lacked conscious understanding of the symbol and went off on a tangent about amoebas. So then it seems that first you say these sorts of associations are necessary and sufficient for subjective experience of understanding, but then you don't. re: the amoeba As I use the word "consciousness", I believe the amoeba has none whatsoever. This unconscious creature exhibits intelligent behavior, but because it has no nervous system, I doubt very seriously that it has any conscious experience of living. It looks for food intelligently in the same sense that your watch tells the time intelligently and in the same sense in which weak AI systems may one day have the intelligence needed to pass the Turing test; that is, it has intelligence but no consciousness. -gts From aware at awareresearch.com Sat Feb 6 21:05:58 2010 From: aware at awareresearch.com (Aware) Date: Sat, 6 Feb 2010 13:05:58 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <22844.25525.qm@web36508.mail.mud.yahoo.com> References: <22844.25525.qm@web36508.mail.mud.yahoo.com> Message-ID: On Sat, Feb 6, 2010 at 12:11 PM, Gordon Swobe wrote: > Have you ever had a head-ache, Jef? How about a tooth-ache? It seems to me that these kinds of phenomena really do exist in the world. > > I actually had a tooth extracted two weeks ago, and I can tell you that few things had more reality to me then than the experience of the tooth-ache that precipitated my desire to see the dentist. Subjective experiences such as these differ from such phenomena as mountains and planets only insofar as they have first-person rather than third-person ontologies. My dentist agrees that tooth-aches really do exist, and so does the Bayer company. > > I consider myself a materialist, but in the reaction against mind/matter dualism some of my fellow materialists (e.g., Dennett) go overboard and irrationally deny the plain facts of subjective experience. They try to explain it away in third-person terms, fearing that any recognition of the mental will place them in the same camp with Descartes. They don't understand that in so doing they embrace and acknowledge Descartes' dualistic vocabulary.
Gordon, you presented the ostensible puzzle of Searle's Chinese Room, in which YOU are left facing a paradox. I contributed a very simple, clear and coherent (but perhaps jarringly non-intuitive) resolution to your paradox. A resolution that you're unable to accept due to your discomfort with the notion that there is no ESSENTIAL Gordon Swobe to experience ESSENTIAL qualia, despite my reassurances that this in no way denies the very real Gordon Swobe and his experiences as we AND YOU know them. A resolution that I've lived with for nearly thirty years now; one that flipped my world-view inside-out, leaving everything the same but simpler (no singularity of Self) and that costs nothing, while providing a more coherent basis for reasoning and extrapolation. Fine, enjoy your faith in the illusion, and live with the paradox. In everyday life, as long as you're not, for example, trying in vain to find a way to physically implement the qualia you imagine to exist, you should have little trouble. Your limited view does get in the way of more advanced thinking on the topic of agency and its role in metaethics, which I consider crucial to the ongoing growth of what matters to us as a society, but hey, you've got lots of company. This is a very old argument, and all the necessary pieces of the puzzle are strewn about you. If you use all the pieces, they fit together only one way. - Jef From gts_2000 at yahoo.com Sat Feb 6 22:01:52 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 14:01:52 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <264701.53414.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/6/10, Aware wrote: >> Have you ever had a head-ache, Jef? How about a >> tooth-ache? It seems to me that these kinds of phenomena >> really do exist in the world. >> >> I actually had a tooth extracted two weeks ago, and I >> can tell you that few things had more reality to me then >> than the experience of the tooth-ache that precipitated my >> desire to see the dentist. Subjective experiences such as >> these differ from such phenomena as mountains and planets >> only insofar as they have first-person rather than >> third-person ontologies. My dentist agrees that tooth-aches >> really do exist, and so does the Bayer company. > >> I consider myself a materialist, but in the reaction >> against mind/matter dualism some of my fellow materialists >> (e.g., Dennett) go overboard and irrationally deny the plain >> facts of subjective experience. They try to explain it away >> in third-person terms, fearing that any recognition of the >> mental will place them in the same camp with Descartes. They >> don't understand that in so doing they embrace and >> acknowledge Descartes' dualistic vocabulary. > > Gordon, you presented the ostensible puzzle of Searle's > Chinese Room, in which YOU are left facing a paradox. > > I contributed a very simple, clear and coherent (but > perhaps jarringly non-intuitive) resolution to your paradox. > > A resolution that you're unable to accept due to your > discomfort with the notion that there is no ESSENTIAL Gordon Swobe to > experience ESSENTIAL qualia, despite my reassurances that this in no > way denies the very real Gordon Swobe and his experiences as we AND > YOU know them.
> > A resolution that I've lived with for nearly thirty years > now; one that flipped my world-view inside-out, leaving everything > the same but simpler (no singularity of Self) and that costs nothing, > while providing a more coherent basis for reasoning and > extrapolation. > > Fine, enjoy your faith in the illusion, and live with the > paradox. In everyday life, as long as you're not, for example, trying > in vain to find a way to physically implement the qualia you imagine > to exist, you should have little trouble. Your limited view > does get in the way of more advanced thinking on the topic of agency and > its role in metaethics, which I consider crucial to the ongoing growth > of what matters to us as a society, but hey, you've got lots of > company. > > This is a very old argument, and all the necessary pieces > of the puzzle are strewn about you. If you use all the > pieces, they fit together only one way. I'll ask again: have you ever had a tooth-ache? -gts From avantguardian2020 at yahoo.com Sat Feb 6 22:41:03 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 6 Feb 2010 14:41:03 -0800 (PST) Subject: [ExI] Nolopsism In-Reply-To: <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> Message-ID: <641575.48388.qm@web65616.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stefano Vaj > To: ExI chat list > Sent: Sat, February 6, 2010 8:08:06 AM > Subject: Re: [ExI] Nolopsism > > On 6 February 2010 09:37, Ben Zaiboc wrote: > > BTW, this is a great paper. I've been reading through it, and so far, it > seems to make perfect sense. The basic idea is simple and elegant, and as far > as I can see, completely solves all these circular discussions we've been having > on this list. I'm sure certain parties wouldn't agree with that opinion though! > > Indeed. Even though he is too pessimistic in saying that we "cannot > accept that we cannot exist". We should say "we cannot avoid making > use of reflexive indicators", but there are plenty of other useful, > understandable and practical concepts to which no "essence" really > corresponds. Why should one's "self" be an exception? Most of dualism > can easily be reduced to linguistic short-circuits and paradoxes... Philosophically nolipsism bears some resemblance to Buddhism, which is fine from a spiritual point of view. E.g. why fear death when there is no "me" to die, etc. Being an attorney, however, I am sure you are aware of the legal can of worms nolipsism opens up. Human rights are tied to identity. If "I" don't exist, then stealing my stuff or even murdering me is a victimless crime. Doesn't make for a happy outcome in my opinion, especially for libertarians. Probably why the authors back-pedalled from their claims in the conclusion.
From lacertilian at gmail.com Sun Feb 7 00:11:08 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 16:11:08 -0800 Subject: [ExI] The simplest possible conscious system In-Reply-To: <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com> References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com> Message-ID: Stefano Vaj : > If "conscious" is taken to mean "exhibiting the same information > processing features of an average adult, healthy, alert, educated > human being" the theoretical answer for me corresponds to the question > "what is the simplest possible universal computer". Of course "conscious" is not taken to mean that here, nor anything like that, if only for the reasons pointed out by James Choate two or three days ago. Humans are ludicrously complex examples of consciousness (assuming they are conscious at all), and almost completely useless for finding an answer to the question. Unless, of course, we can incrementally subtract non-vital functions, paring down a prototypical human being to nothing more than a consciousness-generating machine. Then we have to think about how humans do it at all. This is an excellent line of inquiry to pursue. Most awake people are conscious, and most asleep people are unconscious. Agreed? Agreed. What about other states? Peculiar cases may be illustrative here. Is a lucid dreamer conscious? Any other dreamer? A blacked-out drunk? A hypnosis subject? An acid head? I've heard of mental states available in meditation that eradicate the subject-object duality, and others that are devoid of all content save for vast diffuse consciousness. The latter is obviously conscious, by definition, but I don't know about the former; if I am meditating on an idol, and then perceive myself to be identical with the idol, am I conscious? Is the idol? There aren't a lot of sharp lines to be drawn here. This much is obvious. Aside from that, I have very little to contribute. Can any experienced psychonauts lend some anecdotal evidence here? From lacertilian at gmail.com Sun Feb 7 00:34:53 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 16:34:53 -0800 Subject: [ExI] Personal conclusions In-Reply-To: References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: Aware : > And despite accumulating evidence of the incoherence of consciousness, > with all its gaps, distortions, fabrication and confabulation, we hang > on to it, and decide it must be a very Hard Problem. Thus inoculated, > and fortified by the biases built in to our language and culture, we > know that when someone comes along and says that it's actually very > simple, cf. Dennett, Metzinger, Pollock, Buddha..., we can be sure, > even though we can't make sense of what they're saying, that they must > be wrong. > > A few deeper thinkers, aiming for greater coherence over greater > context, have suggested that either all entities "have consciousness" > or none do. This is a step in the right direction. Then the > question, clarified, might be decided in simply information-theoretic > terms. But even then, more often they will side with Panpsychism > (even a rock has consciousness, but only a little) than to face the > possibility of non-existence of an essential experiencer. Considering the thread's subject, it seems safe to burn some bytes on personal information. So: I subscribe to panexperientialism myself.
Either everything has subjective experience, or nothing does. Unfortunately this doesn't help me at all when faced with a question like, "is a human more conscious than a pig more conscious than a fly more conscious than a rock?". I want to say yes, really I do, but at the moment I just can't! I see no reason whatsoever why certain amounts or types of information processing should "attract" excess consciousness, if, like me, you want to treat it as a fundamental property of matter. So, my contributions to the discussion are probably incremental at best. I have stepped far enough back to understand that I understand nothing, and just barely further. Dennett, Metzinger, Pollock, Buddha (Gautama?). I have some books to put on my reading list. From lacertilian at gmail.com Sun Feb 7 01:15:55 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 17:15:55 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <264701.53414.qm@web36506.mail.mud.yahoo.com> References: <264701.53414.qm@web36506.mail.mud.yahoo.com> Message-ID: Quoting from the "Personal conclusions" thread, because, on reflection, it seems more relevant over here. Stefano Vaj : >Spencer Campbell : >> The intelligence of a given system is inversely proportional to the >> average action (time * work) which must be expended before the system >> achieves a given purpose, assuming that it began in a state as far >> away as possible from that purpose. > > I would say it sounds like a good one to me, in particular since it is not > a black-and-white one and does not invoke metaphysical, ineffable > entities. What about "performance in the execution of a given kind of > data processing"? Then you crash straight into the concept of FLOPS, and all the terrible, awful difficulties it entails. "Performance" is not well-defined with respect to computing, or at least not to the extent you'd expect, and I shudder to think of how one would go about distinguishing "a given kind of data processing" from any other kind. I might give that definition to some strange thing like "cognitive excellence" which no one ever talks about outside of circles like this one, but certainly not to intelligence. A total idiot can learn to multiply large numbers quickly. The modern computer: a total idiot with astounding cognitive excellence. Gordon Swobe : > Stathis Papaioannou : >> I don't deny subjective experience but I deny that when I >> understand something I do anything more than associate it with another >> symbol, ultimately grounded in something I have seen in the real >> world. That would seem necessary and sufficient for understanding, and >> for the subjective experience of understanding, such as it is. > > When I asked you about a digital computer that did exactly that, you acknowledged that said computer lacked conscious understanding of the symbol and went off on a tangent about amoebas. > > So then it seems that first you say these sorts of associations are necessary and sufficient for subjective experience of understanding, but then you don't. For once, I agree unequivocally with Gordon Swobe. I'm not sure how to feel about that! Ambivalent? Nonplussed? I think I'll go with indifferent. If Stathis continues to conflate intelligence, understanding, consciousness, and, worst of all, symbolic association, I may have a lasting position in the Searle-Gordon camp. It's a shame I believe formal programs are perfectly capable of reproducing human subjective experience. Otherwise, I'd fit in just fine. : > Pollock, JL, Ismael J. 2006.
Knowledge and reality; So You Think You > Exist? In Defense of Nolipsism: 35-62. > > I've posted it to this list twice now. This is the first indication > I've seen that anyone read it. > > - Jef Yes! Thank you. Both for reminding me and for posting it to begin with. I already agreed with the premise, so no tectonic shift of world-view occurred, but hearing some coherent theories as to WHY we must believe ourselves to be real, objective, unchanging entities, not necessarily corresponding to any physical structure, was very rewarding nonetheless. From gts_2000 at yahoo.com Sun Feb 7 01:18:12 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 6 Feb 2010 17:18:12 -0800 (PST) Subject: [ExI] The simplest possible conscious system In-Reply-To: Message-ID: <820785.29759.qm@web36504.mail.mud.yahoo.com> --- On Sat, 2/6/10, Spencer Campbell wrote: > I've heard of mental states available in meditation that > eradicate the subject-object duality, and others that are devoid of > all content save for vast diffuse consciousness. The latter is obviously > conscious, by definition, but I don't know about the former; if I am > meditating on an idol, and then perceive myself to be identical with the > idol, am I conscious? Is the idol? I have since the 70s practiced transcendental meditation. Occasionally while meditating my mind seems to, as you say, eradicate the subject-object duality. However, I can only infer this indirectly, and only after the fact. During those actual moments I have awareness of nothing at all. I don't believe in qualia, as such, because the idea implies the possibility of consciousness-without-an-object or consciousness-sans-qualia. Such states seem to me impossible both in theory and in practice. Instead I believe one experiences various qualities or aspects of one's own unified field of consciousness. -gts From spike66 at att.net Sun Feb 7 04:37:34 2010 From: spike66 at att.net (spike) Date: Sat, 6 Feb 2010 20:37:34 -0800 Subject: [ExI] valentines day advice In-Reply-To: <641575.48388.qm@web65616.mail.ac4.yahoo.com> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com><580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: <9EF6A887A5D648958A8E14DBE5F0912F@spike> In the midst of all the heavy technical discussion of minds and consciousness, do let me here interject a bit of useful advice for the lads as we approach that emotional minefield known as Valentines Day. Perhaps you have already chosen a gift for your sweethearts, but this one can be in addition to that, and will only cost you about 20 bucks. It might even get you a get-out-of-jail-free card for your next romantic screwup. Buy your sweetheart a DVD of the Celtic Woman singing Songs From the Heart. Take my word for it, me lads. Then watch it with her. Be prepared, for at some time during the performance you can count on her asking some form of the question "Which of them is the most beautiful?" You will at that time respond instantly with "You are, my dear." This will likely be followed with a less-than-fully-sincere version of "Bullshit, now which one?" At this time you must stick with your original story like a thrice-convicted felon caught with a smoking pistol in his hand, by uttering "All are beautiful of course, but the radiance of your beauty exceeds them all, as the noonday sun outshines the stars!"
We know, of course, that if measured at equal distance, perhaps as many as half the stars do in fact outshine the sun, for the sensible radiation is proportional to the inverse square of the distance. This is a detail that you and I can keep between just us lads, shall we? Good. Practice the delivery until you can do it without derisive laughter, on your first (and only) try. You can do it: I did, and I was making it up as I went along. But of course I benefit from many years of dedicated practice at this sort of thing. Hey, I too benefit from the inverse square law, being no Rock Hudson myself. You don't even lie, exactly: your own sweetheart does in fact outshine even the stunning Lisa Kelly, assuming Lisa is home in Ireland while you sit beside your sweetheart in, say, Australia, or Neptune. Now get out there and buy that DVD, and practice your lines. You have only one chance at it. Good luck! spike From lacertilian at gmail.com Sun Feb 7 05:31:02 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 21:31:02 -0800 Subject: [ExI] The simplest possible conscious system In-Reply-To: <820785.29759.qm@web36504.mail.mud.yahoo.com> References: <820785.29759.qm@web36504.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > I have since the 70s practiced transcendental meditation. Occasionally while meditating my mind seems to, as you say, eradicate the subject-object duality. However, I can only infer this indirectly, and only after the fact. During those actual moments I have awareness of nothing at all. Gordon! I never knew. My opinion of you is again on the upward part of its fluctuation. This seems to me a good argument for the idea that all consciousness is that of a subject being aware of an object. I'd have said that before, but my source regarding that particular state (from Ken Wilber, I think) wasn't very specific on the matter. The vast-consciousness-without-content state does seem to contradict this theory, though, from what little I know about it. I've heard it described (probably by Ken Wilber again) as a void which is only aware of itself. You could interpret that as saying that there technically is a subject-object duality in the moment, but both positions are occupied by the same thing and the thing in question actually isn't anything. Precisely as nolipsism predicts! And Buddhism, and so on. Does any of this help us to construct the simplest possible conscious system? Self-awareness seems to be a good description of consciousness, but awareness isn't exactly understanding. I'm not sure what sort of mechanism is responsible for awareness. You could claim that a Turing machine is aware of the symbol it's currently reading, and only that symbol. Logically, then, a Turing machine that does nothing but read a tape of symbols denoting itself should be conscious. The symbol grounding problem again: how to cause a symbol to denote anything at all? General consensus dictates that some sort of interaction with the environment is necessary. It's obvious to me that this works when taken to the extremely sophisticated level of human awareness, but I would be hard pressed to define an exact point at which the unconsciousness of an ungrounded Turing machine is replaced by the consciousness of an egotistic Spencer Campbell. Attaching a webcam to associate images with symbols (using complex object recognition software, of course), which are then fed to the machine on tape, does not seem sufficient to produce consciousness even if you point the camera at a mirror.
Yet, I have no good reason to believe it isn't. Sheer anthropocentric prejudice alone makes me say that such a system is incapable of awareness: the Swobe Fallacy. So, I haven't managed to convince myself that a system simpler than a disembodied verbal AI (discussed previously) is capable of consciousness. It must be, though, if I can remain conscious even with duct tape over my mouth. Calling the potential to communicate a feature of consciousness would be extraordinarily counterintuitive, at best. Basically I am talking to myself at this point. Do all possible consciousnesses do that? Hmm. From lacertilian at gmail.com Sun Feb 7 05:54:47 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 6 Feb 2010 21:54:47 -0800 Subject: [ExI] Nolopsism In-Reply-To: <641575.48388.qm@web65616.mail.ac4.yahoo.com> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: The Avantguardian : > Philosophically nolipsism bears some resemblance to Buddhism, which is fine from a spiritual point of view. E.g. why fear death when there is no "me" to die, etc. Being an attorney, however, I am sure you are aware of the legal can of worms nolipsism opens up. Human rights are tied to identity. If "I" don't exist, then stealing my stuff or even murdering me is a victimless crime. Doesn't make for a happy outcome in my opinion, especially for libertarians. Probably why the authors back-pedalled from their claims in the conclusion. Hi, my name is Spencer Campbell; I will be your Stefano Vaj for tonight. Are victimless crimes morally, ethically, and legally acceptable? They ARE crimes, so, no to the last one. The first two are arguable. I feel confident that a coherent system of law could be made without the assumption that any selves exist for it to protect. It would essentially treat people as highly valuable property, no different from houses or cars, owned by entities just as imaginary as corporations. Really it could only streamline everything. We should do this. We should do this right now. From nanite1018 at gmail.com Sun Feb 7 06:23:39 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Sun, 7 Feb 2010 01:23:39 -0500 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: <915E9809-F351-43BE-A6A5-10E53227BD42@GMAIL.COM> > Are victimless crimes morally, ethically, and legally acceptable? They > ARE crimes, so, no to the last one. The first two are arguable. I feel > confident that a coherent system of law could be made without the > assumption that any selves exist for it to protect. It would > essentially treat people as highly valuable property, no different > from houses or cars, owned by entities just as imaginary as > corporations. > > Really it could only streamline everything. We should do this. We > should do this right now. -Spencer Campbell Problem: Who do you punish? This imaginary entity that damaged the property of another imaginary entity? If you do it like that, then I don't see any difference between that and a legal system based on actual "selves." And without a victim, there is no crime.
I can't see the purpose of law without individual rights as its basis (rights based on principles derived from the nature of human beings), and if you eliminate the individual, you'll have a hard time justifying anything, ultimately. Corporations are entities made up of people ultimately, and they are created and owned and controlled by people. Hence a crime against a corporation is a crime against a group of people (the owners or employees). Without individuals, you can't, say, make laws based on happiness, or prosperity, or anything else, because all of those reference individuals and minds. And obviously "rights" go right out the window. I'll finish reading the article and probably get back later. Joshua Job nanite1018 at gmail.com From stathisp at gmail.com Sun Feb 7 07:09:30 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 7 Feb 2010 18:09:30 +1100 Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: <684806.48339.qm@web36503.mail.mud.yahoo.com> References: <684806.48339.qm@web36503.mail.mud.yahoo.com> Message-ID: On 7 February 2010 07:55, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >> I don't deny subjective experience but I deny that when I >> understand something I do anything more than associate it with another >> symbol, ultimately grounded in something I have seen in the real >> world. That would seem necessary and sufficient for understanding, and >> for the subjective experience of understanding, such as it is. > > When I asked you about a digital computer that did exactly that, you acknowledged that said computer lacked conscious understanding of the symbol and went off on a tangent about amoebas. > > So then it seems that first you say these sorts of associations are necessary and sufficient for subjective experience of understanding, but then you don't. These sorts of associations are the basic stuff of which understanding is made, but obviously there are degrees of understanding, involving complex syntax and multiple associations. A human's understanding, perception and intelligence stand in relation to a simple computer system's as they stand in relation to a simple organism's. > re: the amoeba > > As I use the word "consciousness", I believe the amoeba has none whatsoever. This unconscious creature exhibits intelligent behavior but because it has no nervous system I doubt very seriously that it has any conscious experience of living. It looks for food intelligently in the same sense that your watch tells the time intelligently and in the same sense in which weak AI systems may one day have the intelligence needed to pass the Turing test; that is, it has intelligence but no consciousness. The amoeba is not only less conscious than a human, it is also less intelligent. Do you think it is just a coincidence that intelligence and consciousness seem to be directly proportional? A neuron is not essentially different from an amoeba, except in the fact that it cooperates with other neurons to process information that the individual neuron does not understand (rather like the man in the Chinese Room). It is this activity which gives rise to intelligence and consciousness, not anything to do with the biology of the neuron itself.
The biology of the neuron is akin to the workings of an internal combustion engine in a car: it is essential to make the car go and any significant problem with it will make the car stop, but if you replaced the whole thing with an electric motor and battery system of similar characteristics the car would go just as well. -- Stathis Papaioannou From stathisp at gmail.com Sun Feb 7 08:25:43 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 7 Feb 2010 19:25:43 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <156490.82842.qm@web36504.mail.mud.yahoo.com> References: <156490.82842.qm@web36504.mail.mud.yahoo.com> Message-ID: On 7 February 2010 06:27, Gordon Swobe wrote: > --- On Fri, 2/5/10, Stathis Papaioannou wrote: > >>> In your thought experiment, the artificial >>> program-driven neurons will require a lot of work for the >>> same reason that programming weak AI will require a lot of >>> work. We're not there yet, but it's within the realm of >>> programming possibility. >> >> The artificial neurons (or subneuronal or multineuronal >> structures, it doesn't matter)... > > If it doesn't matter, then let's keep it straightforward and refer to artificial brains rather than to artificial neurons surgically inserted into the midst of natural neurons. This will eliminate a lot of uncertainties that arise from the present state of ignorance about neuroscience. It is a basic requirement of the experiment that the brain replacement be *partial*. This is in order to demonstrate that there is a problem with the idea that a brain part could have normal behaviour but lack consciousness. Having demonstrated that the brain parts must have consciousness, it should then be obvious that an entirely artificial brain made out of these parts will also be conscious. It is true that we don't at present have the capability to make such artificial brains or neurons, but I have asked you to assume that we do. Surely this is no more difficult than imagining the Chinese Room! The Chinese Room is logically possible but probably physically impossible, while artificial neurons may even become available in our lifetimes. >> exhibit the same behaviour as the natural equivalents, >> but lack consciousness. > > In my view an artificial brain can exhibit the same intelligent behaviors as a natural brain without having subjective mental states where we define behavior as, for example, acts of speech. > >> That's all you need to know about them: you don't have to worry how >> difficult it was to make them, just that they have been made (provided >> it is logically possible). Now it seems that you allow that such >> components are possible, but then you say that once they are installed >> the rest of the brain will somehow malfunction and needs to be tweaked. >> That is the blatant contradiction: if the brain starts behaving >> differently, then the artificial components lack >> the defining property you agreed they have. > > As above, let's save a lot of confusion and speak of brains rather than individual neurons. Is there anyone out there still following this thread who is confused by my description of the thought experiment or doesn't understand its rationale? Please email me off list if you prefer. 
-- Stathis Papaioannou From stathisp at gmail.com Sun Feb 7 08:32:23 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 7 Feb 2010 19:32:23 +1100 Subject: [ExI] The simplest possible conscious system In-Reply-To: <820785.29759.qm@web36504.mail.mud.yahoo.com> References: <820785.29759.qm@web36504.mail.mud.yahoo.com> Message-ID: On 7 February 2010 12:18, Gordon Swobe wrote: > I don't believe in qualia, as such, because the idea implies the possibility of consciousness-without-an-object or consciousness-sans-qualia. Such states seem to me impossible both in theory and in practice. Instead I believe one experiences various qualities or aspects of one's own unified field of consciousness. There isn't any context in which you could use "qualia" where "experience" or "perception" would not do as well. -- Stathis Papaioannou From bbenzai at yahoo.com Sun Feb 7 10:53:22 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 7 Feb 2010 02:53:22 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <455971.33560.qm@web113605.mail.gq1.yahoo.com> Gordon Swobe wrote: --- On Sat, 2/6/10, Aware wrote: > >> A resolution that you're unable to accept due to your >> discomfort with the notion that there is no ESSENTIAL Gordon Swobe to >> experience ESSENTIAL qualia, despite my reassurances that this in no >> way denies the very real Gordon Swobe and his experiences as we AND >> YOU know them. ... >> This is a very old argument, and all the necessary pieces >> of the puzzle are strewn about you.? If you use all the >> pieces, they fit together only one way. > I'll ask again: have you ever had a tooth-ache? Gordon, your repeated question shows that you're either ignoring or not getting the point of Jef's reply. If it makes no sense to you, please say so, don't just regurgitate the same question that he is replying to. On the other hand, if you're simply ignoring things that you don't like reading, what's the point of continuing the conversation? Ben Zaiboc From bbenzai at yahoo.com Sun Feb 7 11:04:36 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 7 Feb 2010 03:04:36 -0800 (PST) Subject: [ExI] Nolopsism In-Reply-To: Message-ID: <370344.96441.qm@web113608.mail.gq1.yahoo.com> The Avantguardian wrote: > Philosophically nolipsism bears some resemblance to > Buddhism which is fine from a spiritual point of view. > E.g. why fear death when there is no "me" to die, etc.? > Being an attorney however, I am sure you are aware of > the legal can of worms nolipsism opens up. Human rights > are tied to identity. If "I" don't exist, then > stealing my stuff or even murdering me is a victimless crime. > Doesn't make for a happy outcome in my opinion, > especially for libertarians. > Probably why the authors back-pedalled from their > claims in the conclusion. Philosophically, it may be as you say. Practically, though, it's not really that useful because it makes no actual difference to the way we regard things like fear of death, or to the law. It's in the scientific arena that nolipsism is most useful, because it explains what subjectivity actually is, and clears the nonsense and confusion out of the way. We know, at least in theory, that subjectivity can be built into an artificial mind, and we can finally dump the concept of the 'hard problem' in the bin. The concept of a "de se" designator explains why we don't have souls, not why we shouldn't have property rights. 
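For the programmers following this, one way to picture a "de se" designator (my own loose analogy, not Pollock's formalism): Python's 'self' refers to its bearer indexically, with no name or description attached, while a third-person reference goes through an external lookup.

    class Agent:
        def reflect(self):
            # 'self' works like a de se designator: it picks out *this
            # very agent* without carrying any name, description, or
            # global identity.
            return id(self)

    stuart = Agent()
    anon = Agent()

    # First-person, indexical reference: no lookup, no name required.
    print(stuart.reflect() != anon.reflect())  # True: each picks out itself

    # Third-person reference by name goes through an external table.
    registry = {"Stuart": stuart}
    print(registry["Stuart"].reflect() == stuart.reflect())  # True

The point of the analogy is that the indexical reference works even if the agent knows nothing else about itself.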
Ben Zaiboc From nebathenemi at yahoo.co.uk Sun Feb 7 11:21:51 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sun, 7 Feb 2010 11:21:51 +0000 (GMT) Subject: [ExI] Glacier Geoengineering In-Reply-To: Message-ID: <671565.18196.qm@web27001.mail.ukl.yahoo.com> Keith's proposal relies on using a lot of organic liquids with a low vapor point. Keith, how are you proposing to trap the vapor, condense it and re-use it? If this process isn't highly efficient, you get two big problems: 1) you need to add a lot more liquid, which costs energy to make, adding to expense and cost 2) you have to worry about the environmental problems when the vapor condenses somewhere else. In fact, even with a tiny amount of leakage this can become a problem. I'm not sure how much of these it would take to become toxic rather than mild irritants, but in the volumes needed to freeze glaciers there is a risk of a major spill. Seeing as places with the huge glaciers like Antarctica and Greenland have coastlines with fragile polar ecosystems, I can see this being a problem. In Kim Stanley Robinson's recent trilogy of ecothrillers (40 days of rain/ 50 degrees below/ 60 days) one of the protagonists investigates geoengineering for a presidential candidate and advises him in the last book. The scheme they use for direct lowering of sea-levels is pumping sea water on to the West Antarctic where the glaciers are highly stable, and increasing glacier coverage that way. Tom From aware at awareresearch.com Sun Feb 7 13:21:55 2010 From: aware at awareresearch.com (Aware) Date: Sun, 7 Feb 2010 05:21:55 -0800 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: On Sat, Feb 6, 2010 at 9:54 PM, Spencer Campbell wrote: > The Avantguardian : >> Philosophically nolipsism bears some resemblance to Buddhism which is fine from a spiritual point of view. E.g. why fear death when there is no "me" to die, etc.?Being an attorney however, I am sure you are aware of the legal can of worms nolipsism opens up. Human rights are tied to identity. If "I" don't exist, then stealing my stuff or even murdering me is a victimless crime. Doesn't make for a happy outcome in my opinion, especially for libertarians.?Probably why the authors back-pedalled from their claims in the conclusion. > > Hi, my name is Spencer Campbell, I will be your Stefano Vaj for tonight. > > Are victimless crimes morally, ethically, and legally acceptable? They > ARE crimes, so, no to the last one. The first two are arguable. I feel > confident that a coherent system of law could be made without the > assumption that any selves exist for it to protect. It would > essentially treat people as highly valuable property, no different > from houses or cars, owned by entities just as imaginary as > corporations. > > Really it could only streamline everything. We should do this. We > should do this right now. Spencer, a more coherent system of justice is possible, and "we" continue to move, in fits and starts, in this direction already for many millennia. But it's not based on abolishment of the self. As Pollock's paper shows, a sense of self is NECESSARY for situated effectiveness. Rather, it's a matter of identification of self over a GREATER sphere of agency. 
Increasing agreement on the rightness or "morality" of actions corresponds to the extent such actions are assessed as promoting an increasing context of increasingly coherent, hierarchical, fine-grained, evolving but present (subjective) values, via methods increasingly effective, in principle, over increasing (objective) scope of interaction. Lather, rinse, repeat.

Yes, it's a mouthful, and I estimate it takes several hundred pages to unpack in order to accommodate the priors of most here.

- Jef

From jonkc at bellsouth.net  Sun Feb  7 16:54:32 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 7 Feb 2010 11:54:32 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <684806.48339.qm@web36503.mail.mud.yahoo.com>
References: <684806.48339.qm@web36503.mail.mud.yahoo.com>
Message-ID: <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net>

Since my last post Gordon Swobe has posted 5 times.

> I consider myself a materialist

Gordon considers himself a rationalist too, but we all know that is not the case.

> but in the reaction against mind/matter dualism some of my fellow materialists (e.g., Dennett) go overboard and irrationally deny the plain facts of subjective experience

BULLSHIT. Dennett is not a fool and only a fool would deny subjective experience.

> As I use the word "consciousness", I believe the amoeba has none whatsoever. This unconscious creature exhibits intelligent behavior but [...]

But what? If Gordon believes an amoeba acts intelligently (something I will not defend) then he has absolutely no reason to believe it is not conscious, except perhaps for a mysterious little voice in his head whispering otherwise. And let me repeat for the 422nd time that Gordon's ideas and Darwin's are 100% incompatible. If consciousness and intelligence are not linked then science has no explanation of how consciousness came to be on planet Earth, and yet we know with absolute certainty that it did at least once and probably many billions of times. People, this is not a minor point, this is a show stopper as far as Gordon's ideas are concerned. Charles Darwin had the single best idea that any human being ever had and it is at the center of all the biological sciences. Either Gordon Swobe is the greatest genius the human race has ever produced or the man is dead wrong. I must confess to being a little disappointed in Extropians because I seem to be the only one who sees how important Evolution is; instead they dispute Gordon on some obscure point in his latest ridiculous thought experiment. Real experiments take precedence over thought experiments and planet Earth has been conducting one for the last 3 billion years. The results of that experiment are unambiguous: consciousness and intelligence MUST be linked, and if you or I or Gordon don't understand exactly how that could be, it doesn't change the fact that it is.

 John K Clark

From max at maxmore.com  Sun Feb  7 17:23:48 2010
From: max at maxmore.com (Max More)
Date: Sun, 07 Feb 2010 11:23:48 -0600
Subject: [ExI] The Audacious beauty of our future - Natasha Vita-More, an interview
Message-ID: <201002071750.o17Hoark026457@andromeda.ziaspace.com>

It's good to see an extensive and informative interview that's also presented in such an appealing format. Congrats! I haven't come across this site before, but it's nicely designed and appears to have some other intriguing content.
http://spacecollective.org/Wildcat/5527/The-Audacious-beauty-of-our-future-Natasha-VitaMore-an-interview Max From gts_2000 at yahoo.com Sun Feb 7 18:58:26 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 7 Feb 2010 10:58:26 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <455971.33560.qm@web113605.mail.gq1.yahoo.com> Message-ID: <82654.40894.qm@web36501.mail.mud.yahoo.com> How about you, Ben? Have you ever had a toothache? Was it a real toothache? Or was it just an illusion? -gts From bbenzai at yahoo.com Sun Feb 7 19:02:21 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 7 Feb 2010 11:02:21 -0800 (PST) Subject: [ExI] Blue Brain Project In-Reply-To: Message-ID: <558651.23421.qm@web113616.mail.gq1.yahoo.com> Anyone not familiar with this can read about it here: http://seedmagazine.com/content/article/out_of_the_blue/P1/ The next ten years should be interesting! Ben Zaiboc From gts_2000 at yahoo.com Sun Feb 7 19:30:19 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 7 Feb 2010 11:30:19 -0800 (PST) Subject: [ExI] The simplest possible conscious system In-Reply-To: Message-ID: <487344.53568.qm@web36503.mail.mud.yahoo.com> --- On Sun, 2/7/10, Spencer Campbell wrote: > This seems to me a good argument for the idea that all > consciousness is that of a subject being aware of an object. Yes. > I'd have said that before, but my source regarding that particular state > (from Ken Wilber, I think) wasn't very specific on the matter. Mystics including Wilber like to talk about such things as consciousness-without-an-object. I believe they're misguided. > The vast-consciousness-without-content state does seem to > contradict this theory, though, from what little I know about it. I've > heard it described (probably by Ken Wilber again) as a void which is > only aware of itself. Speaking only from my own experience, I can tell you that there does exist for me a state that *seems* like consciousness-without-content. I can see how those who have mystical philosophical biases might interpret that state as consciousness of the "void" or some such. I have no such mystical bias (although I once did) and I believe that, in reality, the experience to which I refer above represents only a very clear and silent state of mind. It appears typically in the moment following that during which subject-object actually does disappear -- after the transcendent moment itself -- and is easily mistaken for it. -gts From spike66 at att.net Sun Feb 7 19:35:34 2010 From: spike66 at att.net (spike) Date: Sun, 7 Feb 2010 11:35:34 -0800 Subject: [ExI] How not to make a thought experiment In-Reply-To: <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net> References: <684806.48339.qm@web36503.mail.mud.yahoo.com> <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net> Message-ID: <8AF4B8E496854465AAD96F0C0D14B279@spike> ...On Behalf Of John Clark ...I must confess to being a little disappointed in Extropians because I seem to be the only one who sees how important Evolution is... John K Clark John you speak great blammisphies! I practically worship evolution. Or so I am told by the religionistas. This is my contribution to your meme: one knowns not misery and cognitive dissonance until one has been an avid observer of wildlife *without* Darwin's dangerous ideas. In my own misspent youth, not really knowing anything about evolution, I struggled to understand how these observations could even be possible. 
They didn't talk about Darwin in the public schools in those days, not even in the biology classes. I listened carefully in biology, always enjoying it, actually reading the textbook even, but I sure do not recall any discussion of evolution. My first real exposure to the notion was from Carl Sagan's Cosmos, in my third year of college. (!) One just cannot get nature without the concept of evolution. Those who have always assumed evolution may not realize what it is like to be an observer of the wilderness without the knowledge of evolution. Learning of Darwin was like being intellectually born again, and doing it right the second time. spike From gts_2000 at yahoo.com Sun Feb 7 20:07:12 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 7 Feb 2010 12:07:12 -0800 (PST) Subject: [ExI] Semiotics and Computability (was: The digital nature of brains) In-Reply-To: Message-ID: <445777.14858.qm@web36505.mail.mud.yahoo.com> --- On Sun, 2/7/10, Stathis Papaioannou wrote: >> As I use the word "consciousness", I believe the > amoeba has none whatsoever. This unconscious creature > exhibits intelligent behavior but because it has no nervous > system I doubt very seriously that it has any conscious > experience of living. It looks for food intelligently in the > same sense that your watch tells the time intelligently and > in the same sense in which weak AI systems may one day have > the intelligence needed to pass the Turing test; that is, it > has intelligence but no consciousness. > > The amoeba is not only less conscious than a human, it is > also less intelligent. The amoeba has no neurons or nervous system, Stathis, so "less conscious" is an understatement. It has no consciousness at all. -gts From lacertilian at gmail.com Sun Feb 7 20:47:20 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 12:47:20 -0800 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: Aware : > Spencer, a more coherent system of justice is possible, and "we" > continue to move, in fits and starts, in this direction already for > many millennia. > > But it's not based on abolishment of the self. ?As Pollock's paper > shows, a sense of self is NECESSARY for situated effectiveness. > > Rather, it's a matter of identification of self over a GREATER sphere of agency. Right: corporations. Total abolishment of the self would ultimately require doing away with recognition of discrete objects entirely, which doesn't make a lot of sense. I would want a system that treats "selves" as legal fabrications, not one which denies their conceptual validity. Phew, almost said "existence" there. Shaky ground. Aware : > Increasing agreement on the rightness or "morality" of actions > corresponds to the extent such actions are assessed as promoting an > increasing context of increasingly coherent, hierarchical, > fine-grained, evolving but present (subjective) values, via methods > methods increasingly effective, in principle, over increasing > (objective) scope of interaction. Lather, rinse, repeat. > > Yes, it's a mouthful, and I estimate it takes several hundred pages to > unpack in order to accommodate the priors of most here. You're correct: for me, it's more than a mouthful. You seem to be giving a definition for "increasing agreement on the rightness or 'morality' of actions", but I can't figure out exactly how that bears on the discussion. 
It wasn't a point of contention.

We seem to be making incremental progress toward Kantland. It's only a matter of time before we come across a roaming herd of categorical imperatives.

From kanzure at gmail.com  Sun Feb  7 20:49:15 2010
From: kanzure at gmail.com (Bryan Bishop)
Date: Sun, 7 Feb 2010 14:49:15 -0600
Subject: Re: [ExI] Blue Brain Project
In-Reply-To: <558651.23421.qm@web113616.mail.gq1.yahoo.com>
References: <558651.23421.qm@web113616.mail.gq1.yahoo.com>
Message-ID: <55ad6af71002071249q64f8f7d2yc4017d23457856a9@mail.gmail.com>

On Sun, Feb 7, 2010 at 1:02 PM, Ben Zaiboc wrote:
> Anyone not familiar with this can read about it here:
> http://seedmagazine.com/content/article/out_of_the_blue/P1/
>
> The next ten years should be interesting!

I know I mentioned these links a few days ago, but it's worth repeating. Noah Sutton is making a documentary:

http://thebeautifulbrain.com/2010/02/bluebrain-film-preview/

"We are very proud to present the world premiere of BLUEBRAIN - Year One, a documentary short which previews director Noah Hutton's 10-year film-in-the-making that will chronicle the progress of The Blue Brain Project, Henry Markram's attempt to reverse-engineer a human brain. Enjoy the piece and let us know what you think."

There's a longer video that explains what he's up to.

The Emergence of Intelligence in the Neocortical Microcircuit
http://video.google.com/videoplay?docid=-2874207418572601262&ei=lghrS6GmG4jCqQLA1Yz7DA

- Bryan
http://heybryan.org/
1 512 203 0507

From stefano.vaj at gmail.com  Sun Feb  7 20:50:56 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sun, 7 Feb 2010 21:50:56 +0100
Subject: Re: [ExI] The simplest possible conscious system
In-Reply-To: 
References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com>
Message-ID: <580930c21002071250u1acda46dnfab88c16064f1445@mail.gmail.com>

On 7 February 2010 01:11, Spencer Campbell wrote:
> Stefano Vaj :
>> If "conscious" is taken to mean "exhibiting the same information
>> processing features of an average adult, healthy, alert, educated
>> human being" the theoretical answer for me corresponds to the question
>> "what is the simplest possible universal computer".
>
> Of course "conscious" is not taken to mean that here, nor anything
> like that, if only for the reasons pointed out by James Choate two or
> three days ago. Humans are ludicrously complex examples of
> consciousness (assuming they are conscious at all), and almost
> completely useless for finding an answer to the question.

No, this was not my point. What I meant is "if conscious does not mean to be entrusted with a soul/operated by an organic brain/being bipedal". I have no doubt that consciousness, which is the (or one of the?) vague concept(s) we adopt to indicate human alertness, can be exhibited by *any* universal computer. That is, if reasonable performance is not a requirement.

-- Stefano Vaj

From lacertilian at gmail.com  Sun Feb  7 21:05:40 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Sun, 7 Feb 2010 13:05:40 -0800
Subject: Re: [ExI] Semiotics and Computability
In-Reply-To: 
References: <156490.82842.qm@web36504.mail.mud.yahoo.com>
Message-ID: 

Stathis Papaioannou :
> Is there anyone out there still following this thread who is confused
> by my description of the thought experiment or doesn't understand its
> rationale? Please email me off list if you prefer.

Seems pretty clear to me, as a neuron-by-neuron replacement is precisely what I've wanted for the past two to five years.
I would advise phrasing it again, simply and concisely, because (a) what you have in mind may have changed since you last did so, (b) I might have overwritten your description with my own, and (c) the point on which Gordon disagrees remains a total mystery. Incidentally, I had a toothache last night. Not that anyone asked, but it was an illusion and I was very frustrated that I couldn't dispel it. Gordon, time for the true or false game: there is a difference between real toothaches and illusionary toothaches. My answer is "false", and I get the impression that yours will be "true". If so, why? How? Is there any way to measure realness in a toothache, scientifically? If not, is it possible to distinguish between the two through purely subjective experience? To both of these questions I give, of course, a decisive "no". But maybe you mean something by the word "illusion" that I haven't yet grasped. Currently, I am using a definition borrowing partly from Buddhism and partly from Dungeons & Dragons. If anyone knows of another dimension I've missed, do tell. From lacertilian at gmail.com Sun Feb 7 21:13:44 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 13:13:44 -0800 Subject: [ExI] Nolopsism In-Reply-To: <915E9809-F351-43BE-A6A5-10E53227BD42@GMAIL.COM> References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> <915E9809-F351-43BE-A6A5-10E53227BD42@GMAIL.COM> Message-ID: JOSHUA JOB : > Problem: Who do you punish? This imaginary entity that damaged the property of another imaginary entity? If you do it like that, then I don't see any difference between that and a legal system based on actual "selves." And without a victim, there is no crime. I can't see the purpose of law without individual rights as its basis (rights based on principles derived from the nature of human beings), and if you eliminate the individual, you'll have a hard time justifying anything, ultimately. Solution: No one needs to be punished. In theory, the only justification for legal punishment right now is to modify future behavior on a societal scale. There are far more effective and less draconian methods of doing this. See: Norwegian open prisons. http://www.globalpost.com/dispatch/europe/091017/norway-open-prison There are other justifications which make punishment a much more attractive option. Prisoners of war in ancient Rome were made to fight as gladiators. Justification: entertainment. An eye-for-an-eye system is self-justifying: justice for the sake of justice. JOSHUA JOB : > Corporations are entities made up of people ultimately, and they are created and owned and controlled by people. Hence a crime against a corporation is a crime against a group of people (the owners or employees). Without individuals, you can't say make laws based on happiness, or prosperity, or anything else, because all of those reference individuals and minds. And obviously "rights" go right out the window. A measurement of prosperity need make no reference to individuals or minds unless corporations, countries, and planets count as individuals. You could define the Earth's prosperity as equivalent to its biodiversity, for example, and just start tracking all the DNA. I don't know why you would, but you could. My argument against happiness is the same as my argument against punishment: it is valuable only as a tool for behavioral modification, heartless as that may sound. Look at how happiness evolved. 
It's just an arbitrary reward for survival. This is the attitude with which I regard my own happiness, and it doesn't seem to impair me in any way practical or philosophical. Finally: obviously "rights" don't go out the window at all! In fact, we would only have more of them. A brand-new car would have the right not to be crushed into a tiny cube, because such would be blatantly wasteful and wrong. Similarly, a brand-new human would have the same right, but a totaled junker or a corpse would not. From aware at awareresearch.com Sun Feb 7 21:03:42 2010 From: aware at awareresearch.com (Aware) Date: Sun, 7 Feb 2010 13:03:42 -0800 Subject: [ExI] Nolopsism In-Reply-To: References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> Message-ID: On Sun, Feb 7, 2010 at 12:47 PM, Spencer Campbell wrote: > Aware : >> Spencer, a more coherent system of justice is possible, and "we" >> continue to move, in fits and starts, in this direction already for >> many millennia. >> >> But it's not based on abolishment of the self. ?As Pollock's paper >> shows, a sense of self is NECESSARY for situated effectiveness. >> >> Rather, it's a matter of identification of self over a GREATER sphere of agency. > > Right: corporations. Total abolishment of the self would ultimately > require doing away with recognition of discrete objects entirely, > which doesn't make a lot of sense. I would want a system that treats > "selves" as legal fabrications, not one which denies their conceptual > validity. > > Phew, almost said "existence" there. Shaky ground. > > Aware : >> Increasing agreement on the rightness or "morality" of actions >> corresponds to the extent such actions are assessed as promoting >> an increasing context of increasingly coherent, hierarchical, >> fine-grained, present but evolving (subjective) values, via >> methods increasingly effective, in principle, over increasing >> (objective) scope of interaction. Lather, rinse, repeat. >> >> Yes, it's a mouthful, and I estimate it takes several hundred pages to >> unpack in order to accommodate the priors of most here. > > You're correct: for me, it's more than a mouthful. You seem to be > giving a definition for "increasing agreement on the rightness or > 'morality' of actions", but I can't figure out exactly how that bears > on the discussion. It wasn't a point of contention. I saw you wrote "I feel confident that a coherent system of law could be made without the assumption that any selves exist for it to protect." I responded to that without recognizing the sarcasm that became obvious further down. Sorry. > We seem to be making incremental progress toward Kantland. It's only a > matter of time before we come across a roaming herd of categorical > imperatives. I'm talking about an upgrade to Kant's Categorical Imperative. Its most significant weakness is its lack of evolutionary perspective. - Jef From lacertilian at gmail.com Sun Feb 7 21:33:20 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 13:33:20 -0800 Subject: [ExI] The simplest possible conscious system In-Reply-To: <580930c21002071250u1acda46dnfab88c16064f1445@mail.gmail.com> References: <820642.67186.qm@web113613.mail.gq1.yahoo.com> <580930c21002060202i4b0917f0m3d5404c7d219d8b2@mail.gmail.com> <580930c21002071250u1acda46dnfab88c16064f1445@mail.gmail.com> Message-ID: Stefano Vaj : > No, this was not my point. 
What I meant is "if conscious does not
> mean to be entrusted with a soul/operated by an organic brain/being
> bipedal".

I was thinking you meant "having thoughts, feelings, opinions, sensory experiences, and every other feature common to the information processing of all adult, healthy, alert, educated human beings". That seemed extremely excessive to me, as a description of the minimum prerequisites of consciousness, hence the strong reaction.

This second definition of yours is deductive, not inductive, so it doesn't tell me much: only what you think consciousness isn't, not what it is.

From lacertilian at gmail.com  Sun Feb  7 22:24:57 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Sun, 7 Feb 2010 14:24:57 -0800
Subject: Re: [ExI] How not to make a thought experiment
In-Reply-To: <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net>
References: <684806.48339.qm@web36503.mail.mud.yahoo.com> <1608D5D4-35BB-4CCC-8E48-7B3A679037B6@bellsouth.net>
Message-ID: 

I am now officially arguing with John Clark. Cover your eyes, everybody, 'cause this is gonna get real ugly, real fast.

John Clark :
> Since my last post Gordon Swobe has posted 5 times.
>
> I consider myself a materialist
>
> Gordon considers himself a rationalist too, but we all know that is not the
> case.

False. I consider him a rationalist. Therefore, we do not all know that is not the case.

Rationalism dictates that reason alone is sufficient to produce knowledge. Gordon trusts absolutely in his powers of reason, even when the material evidence contradicts him: the very model of a rationalist!

> But what? If Gordon believes an amoeba acts intelligently (something I will
> not defend) then he has absolutely no reason to believe it is not conscious,
> except perhaps for a mysterious little voice in his head whispering
> otherwise.

Point one: leukocytes, such as the one in the (fantastic) video that's been floating around here, obviously behave intelligently when it comes to chasing down and devouring foreign bodies. It does not make sense to say they don't. And no, I will not repeat my definition of intelligence again!

Point two: all beliefs are mysterious little voices whispering in someone's head. If you believe otherwise, clearly you are delusional and/or have never read the relevant Lewis Carroll story.

http://www.ditext.com/carroll/tortoise.html

Point three: if you really must insist on a logical-sounding reason to claim that a leukocyte might be intelligent but not conscious, I suggest you ask Gordon for one directly rather than trying to goad him into defending his honor. Of course this assumes you're actually trying to further the discussion, rather than simply shouting libel into the aether for its own sake (as I am now).

> And let me repeat for the 422nd time that Gordon's ideas and Darwin's are 100%
> incompatible.

I take it you've read On the Origin of Species. I have not. Quote a passage that contradicts Gordon's ideas, please? I have my doubts that he says much about consciousness directly, but if you can even come up with something IMPLICITLY incompatible I would be impressed.

> If consciousness and intelligence are not linked then science
> has no explanation of how consciousness came to be on planet Earth, and yet we
> know with absolute certainty that it did at least once and probably many
> billions of times.

Do we, John? Do we know that? With absolute certainty, no less.

> People, this is not a minor point, this is a show stopper
> as far as Gordon's ideas are concerned.
> Charles Darwin had the single best
> idea that any human being ever had and it is at the center of all the
> biological sciences. Either Gordon Swobe is the greatest genius the human
> race has ever produced or the man is dead wrong.

If you say so, if you say so, and if you say so, respectively. No one in the history of the universe has ever come anywhere close to John Clark's astounding mastery of hyperbole. If the loudest philosopher wins, we're done.

> I must confess to being a little disappointed in Extropians because I seem
> to be the only one who sees how important Evolution is; instead they dispute
> Gordon on some obscure point in his latest ridiculous thought experiment.

It disturbs me on a visceral level that you capitalize the E in "evolution" like that, but I'll disregard it for the purposes of argument.

Of course evolution is important. In general. However, it isn't the least bit important for someone concerned with comprehending the logic of Gordon Swobe well enough to start agreeing with him or to show him where he went wrong. It does not matter whether or not a given thought experiment is ridiculous if the person who came up with it believes it isn't, and the only way to even THEORETICALLY convince them otherwise is to pick at the most obscure points in it. You can expect they've already noticed the least obscure.

> Real experiments take precedence over thought experiments and planet Earth
> has been conducting one for the last 3 billion years. The results of that
> experiment are unambiguous: consciousness and intelligence MUST be linked,
> and if you or I or Gordon don't understand exactly how that could be, it
> doesn't change the fact that it is.

One: life is not an experiment. Two: the earth cannot conduct experiments. Three: I'm sorry but I can't stop myself from using a stereotypical southern creationist voice when I read that paragraph. Reel experamints teyk press-a-dents.

Not to a rationalist, they don't.

From stathisp at gmail.com  Sun Feb  7 22:54:47 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 8 Feb 2010 09:54:47 +1100
Subject: Re: [ExI] Semiotics and Computability (was: The digital nature of brains)
In-Reply-To: <445777.14858.qm@web36505.mail.mud.yahoo.com>
References: <445777.14858.qm@web36505.mail.mud.yahoo.com>
Message-ID: 

On 8 February 2010 07:07, Gordon Swobe wrote:
> The amoeba has no neurons or nervous system, Stathis, so "less conscious" is an understatement. It has no consciousness at all.

As far as you're concerned, the function of the nervous system - intelligence, which is due to the interactions between neurons - bears no essential relationship to consciousness. You believe that consciousness is a property of certain specialised cells. That leaves open the possibility that amoebae have these properties also, despite their apparent lack of intelligence.

-- Stathis Papaioannou

From lacertilian at gmail.com  Mon Feb  8 00:08:24 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Sun, 7 Feb 2010 16:08:24 -0800
Subject: Re: [ExI] Nolopsism
In-Reply-To: 
References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com>
Message-ID: 

Aware :
>Spencer Campbell :
>> You're correct: for me, it's more than a mouthful. You seem to be
>> giving a definition for "increasing agreement on the rightness or
>> 'morality' of actions", but I can't figure out exactly how that bears
>> on the discussion. It wasn't a point of contention.
> > I saw you wrote "I feel confident that a coherent system of law could > be made without the assumption that any selves exist for it to protect." > > I responded to that without recognizing the sarcasm that became > obvious further down. ?Sorry. Apology unnecessary, but accepted. To be fair, it was half-sarcasm. On the Internet, I try to avoid saying things that I don't at least partially believe are true. Aware : >Spencer Campbell : >> We seem to be making incremental progress toward Kantland. It's only a >> matter of time before we come across a roaming herd of categorical >> imperatives. > > I'm talking about an upgrade to Kant's Categorical Imperative. ?Its > most significant weakness is its lack of evolutionary perspective. Request that you start a new thread to elaborate on that. If I understand you correctly, which I suspect I do not, you're implying that evolution is purpose-generating. To my mind this is a massive teleological mistake; the evolution of life on Earth is just one elaborate ongoing accident, not a directed effort toward better things. You know what humanity has evolved lately? The ability to burn fat faster. http://www.cracked.com/blog/lose-weight-the-natural-way-by-slowly-evolving I'm not sure how I feel about Cracked.com being the first hit for "brown fat evolution" on Google, but there you go. We have evolved to become less fuel-efficient. Not in exchange for higher peak power output or anything. Just because. From max at maxmore.com Mon Feb 8 00:23:17 2010 From: max at maxmore.com (Max More) Date: Sun, 07 Feb 2010 18:23:17 -0600 Subject: [ExI] Theories, medicine, and government Message-ID: <201002080023.o180NQdN028663@andromeda.ziaspace.com> I liked this quote from Taleb's excellent book: "A theory is like medicine (or government): often useless, sometimes necessary, always self-serving, and on occasion lethal. So it needs to be used with care, moderation, and close adult supervision." -- Taleb, The Black Swan, p. 285. Max From lacertilian at gmail.com Mon Feb 8 00:56:24 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 7 Feb 2010 16:56:24 -0800 Subject: [ExI] Theories, medicine, and government In-Reply-To: <201002080023.o180NQdN028663@andromeda.ziaspace.com> References: <201002080023.o180NQdN028663@andromeda.ziaspace.com> Message-ID: Max More : > I liked this quote from Taleb's excellent book: > > "A theory is like medicine (or government): often useless, sometimes > necessary, always self-serving, and on occasion lethal. So it needs to be > used with care, moderation, and close adult supervision." > -- Taleb, The Black Swan, p. 285. > > Max Magnificent. This does imply that I am roughly equivalent to a morphine addict (or senator), of course. From max at maxmore.com Mon Feb 8 03:19:14 2010 From: max at maxmore.com (Max More) Date: Sun, 07 Feb 2010 21:19:14 -0600 Subject: [ExI] The Surrogates, graphic novel Message-ID: <201002080325.o183PRUR020245@andromeda.ziaspace.com> I have added this comment on The Surrogates to my list of "Comics of Transhumanist Interest" here: The Surrogates, by Robert Venditti & Brett Weldele I found the movie mildly entertaining, but not terribly engaging or intellectually stimulating. The original graphic novel is a little more interesting. I didn't much like the illustration by Weldele, though it might be more to your taste (too lacking in detail for my liking). 
The strongest parts, for me, were the fictional ads for surrogate bodies (which seemed to have very much in common with Natasha Vita-More's earlier "Primo Posthuman") and related text on the ad campaign. We're all extremely familiar with the idea of virtual bodies in virtual space. Surrogates differs from the standard by envisioning a world of physical surrogate bodies that often look like de-aged and enhanced versions of people's "real" physical bodies. The tone is lightly anti-transhumanist, alas. In reality, transhumanists might like to have such surrogate bodies, but surely they would also prefer to enhance their primary bodies, rather than to leave their sluggish, slobbish physical primaries stacked ungainly in the closet.

Max

-------------------------------------
Max More, Ph.D.
Strategic Philosopher
The Proactionary Project
Extropy Institute Founder
www.maxmore.com
max at maxmore.com
-------------------------------------

From avantguardian2020 at yahoo.com  Mon Feb  8 03:11:54 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Sun, 7 Feb 2010 19:11:54 -0800 (PST)
Subject: [ExI] Nolopsism
Message-ID: <292696.68811.qm@web65611.mail.ac4.yahoo.com>

----- Original Message ----
> From: Ben Zaiboc
> To: extropy-chat at lists.extropy.org
> Sent: Sun, February 7, 2010 3:04:36 AM
> Subject: Re: [ExI] Nolopsism

> Philosophically, it may be as you say. Practically, though, it's not really
> that useful because it makes no actual difference to the way we regard things
> like fear of death, or to the law.
>
> It's in the scientific arena that nolipsism is most useful, because it explains
> what subjectivity actually is, and clears the nonsense and confusion out of the
> way.

How so? Siddhartha Gautama said that the self was an illusion some 2,500 years ago; only he called it the skandha of consciousness instead of a "de se designator". What makes nolipsism more scientifically useful than Buddhism? Are you suggesting that by sweeping consciousness under a rug, it can be scientifically ignored? I can imagine the dialog:

Chalmers: What is the neurological basis of phenomenal consciousness?

Pollock: Phenomenal consciousness doesn't actually exist. It is simply a necessary illusion of subjectivity. It allows you to think about yourself without knowing anything about yourself. Which would be useful if you went on a bender and passed out in the Stanford Library. That is, of course, if there were a you to do the thinking and a you to think about. Which there isn't. But human minds weren't meant to go there, so you can pretend to exist if you want.

Julie Andrews [waltzing by]: Me... the name... I call myself...

Chalmers: Oookaaay... So what is the neurological basis of the *illusion* of phenomenal consciousness?

Pollock: [glances at watch] Well, would you look at the time. It's been nice chatting but I must be going now.

> We know, at least in theory, that subjectivity can be built into an
> artificial mind, and we can finally dump the concept of the 'hard problem' in
> the bin.

So you think that programming a computer to falsely believe itself to be conscious is easier than programming one to actually be so. Or do you think that programming a computer to use "de se" designators necessarily makes it think itself conscious? A person could get by without the use of "de se" designators yet still retain a sense of self.
It might sound funny, but a person could consistently refer to themselves in the third person by name, even in their thoughts. Stuart doesn't think that "de se" designators are particularly profound. Stuart doesn't need them. Do you see what Stuart means?

> The concept of a "de se" designator explains why we don't have souls, not why we
> shouldn't have property rights.

Property rights are no less abstract than souls. Neither seems to have a physical basis beyond metaphysical/philosophical fiat. Communists tend not to believe in either.

Stuart LaForge

"Never express yourself more clearly than you think." - Niels Bohr

From Frankmac at ripco.com  Mon Feb  8 03:43:08 2010
From: Frankmac at ripco.com (Frank McElligott)
Date: Sun, 7 Feb 2010 22:43:08 -0500
Subject: [ExI] belief in Karma,
Message-ID: <001e01caa870$d8f5ee80$ad753644@sx28047db9d36c>

In 2005 the city of New Orleans was under water. I was there during that time helping out, and yes, it was REALLY under water and dying a slow death. Now, within 5 years of that event, that city can rejoice in its football team as a sign from the CONTROLLER (that's a new name for you-know-who) that the worst is now over, and that from now on everything will be on the upside instead of on the slope to nowhere.

Now if Haiti had a football team...

By the bye, the sun is acting up again: large solar flares, in the "X" range, are expected to peak this week. Some people believe the flares drive people here on earth to do strange things and become very, very fearful. Watch the stock market go crazy this week: not because of Greece, it's the sun doing the damage. The flares affect the magnetic waves and telephone communications, I am told, but I am retired without a job, so what do I know?

Frank

From max at maxmore.com  Mon Feb  8 04:01:27 2010
From: max at maxmore.com (Max More)
Date: Sun, 07 Feb 2010 22:01:27 -0600
Subject: [ExI] Joy in the transcendence of the species
Message-ID: <201002080401.o1841ase025933@andromeda.ziaspace.com>

I know many of you are fascinated by the intricacies and peculiarities of language. Perhaps you might be motivated to help me figure this out:

I have long enjoyed the word "Schadenfreude", meaning pleasure derived from the misfortunes of others. [Note: I've enjoyed the *term* "schadenfreude", not the thing it refers to.] I got to thinking what word (if one could be coined) would mean "pleasure in the transcendence of the species" (i.e., transcendence of the human condition). It may be asking a lot of a word (even a German word) to do all this work, but I'd like to give it a try.

According to Wikipedia: The Buddhist concept of mudita, "sympathetic joy" or "happiness in another's good fortune," is cited as an example of the opposite of schadenfreude. However, that doesn't do the job that I'm looking for.

On a first stab, exclusively thinking about German, I came up with the rather unsatisfactory "erhabenheitschaude" which would mean (I think) joy in transcendence. That's part of what I'm looking for, but doesn't fit the bill. Any thoughts?

(Please, *anything* to dilute the Searle/semantics/syntax discussion...)

Max

-------------------------------------
Max More, Ph.D.
Strategic Philosopher
The Proactionary Project
Extropy Institute Founder
www.maxmore.com
max at maxmore.com
-------------------------------------

From hkeithhenson at gmail.com  Mon Feb  8 04:23:42 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Sun, 7 Feb 2010 21:23:42 -0700
Subject: Re: [ExI] extropy-chat Digest, Vol 77, Issue 12
In-Reply-To: 
References: 
Message-ID: 

On Sun, Feb 7, 2010 at 5:00 AM, Tom Nowell wrote:
> Keith's proposal relies on using a lot of organic liquids with a low vapor point.
>
> Keith, how are you proposing to trap the vapor, condense it and re-use it? If this process isn't highly efficient, you get two big problems:

Heat pipes are sealed, passive devices. It's hard to put a number on their efficiency other than 100%.

http://en.wikipedia.org/wiki/Heat_pipe

> 1) you need to add a lot more liquid, which costs energy to make, adding to expense and cost
> 2) you have to worry about the environmental problems when the vapor condenses somewhere else. In fact, even with a tiny amount of leakage this can become a problem. I'm not sure how much of these it would take to become toxic rather than mild irritants, but in the volumes needed to freeze glaciers there is a risk of a major spill. Seeing as places with the huge glaciers like Antarctica and Greenland have coastlines with fragile polar ecosystems, I can see this being a problem.

Propane and ammonia (two good choices) don't cause environmental problems in the small amounts used.

> In Kim Stanley Robinson's recent trilogy of ecothrillers (40 days of rain/ 50 degrees below/ 60 days) one of the protagonists investigates geoengineering for a presidential candidate and advises him in the last book. The scheme they use for direct lowering of sea-levels is pumping sea water on to the West Antarctic where the glaciers are highly stable, and increasing glacier coverage that way.

I think you mean East Antarctic, but I am not sure this would be a good idea. The salt in seawater might cause the glaciers to soften and slide off into the ocean. Of course, if you are going to pump water upwards more than a few hundred meters, it only costs 200-300 meters of head to take the salt out with osmosis.

Interesting concept though. To put numbers on it, the area of the earth is ~5.1 x 10^14 square meters. 3/4 of that is water, so the oceans cover ~3.8 x 10^14 square meters. To lower the oceans by a meter in a year would require pumping at 1.21 x 10^7 cubic meters per second. 12,100,000 cubic meters per second. Hmm. The flow of the Amazon is 219,000 cubic meters per second, so it would take 55 times the flow of the Amazon.

Pumping it up some 3000 meters to the ice sheet would take considerable energy: P = rho*Q*g*h / pump efficiency (0.9). 1000 * 1.21 x 10^7 * 9.8 * 3000 / 0.9 = ~4 x 10^14 W, about 400 TW. That would take some 400,000 one-GW reactors, not 400, so the pumping power alone looks prohibitive. (Please check this number.)

Keith

> Tom
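The arithmetic above, redone as a quick Python check (a minimal sketch; the water density of 1000 kg/m^3 and the 3.15 x 10^7 seconds per year are assumed constants, everything else comes from the post):

    # Back-of-the-envelope check of the ocean-lowering pump power.
    ocean_area = 0.75 * 5.1e14      # m^2, ~3/4 of Earth's surface
    year = 3.15e7                   # seconds per year (assumed)
    density = 1000.0                # kg/m^3 for water (assumed)
    g, head, pump_eff = 9.8, 3000.0, 0.9

    flow = ocean_area * 1.0 / year                 # m^3/s for a 1 m/year drop
    power = flow * density * g * head / pump_eff   # watts

    print(f"flow  = {flow:.2e} m^3/s")   # ~1.2e7, about 55 Amazons
    print(f"power = {power:.2e} W")      # ~4.0e14 W, i.e. ~400 TW

At roughly 400 TW, the scheme would need hundreds of thousands of one-GW reactors rather than hundreds.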
From spike66 at att.net  Mon Feb  8 04:34:26 2010
From: spike66 at att.net (spike)
Date: Sun, 7 Feb 2010 20:34:26 -0800
Subject: [ExI] Joy in the transcendence of the species
In-Reply-To: <201002080401.o1841ase025933@andromeda.ziaspace.com>
References: <201002080401.o1841ase025933@andromeda.ziaspace.com>
Message-ID: <12C91DE16C6A47ADB3E4696EC08C4B18@spike>

> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Max More
> ...I came up with
> the rather unsatisfactory "erhabenheitschaude" which would
> mean (I think) joy in transcendence. That's part of what I'm
> looking for, but doesn't fit the bill. Any thoughts?

Erhebungaufregung? Approximately ~ horny for uplifting.

> (Please, *anything* to dilute the Searle/semantics/syntax
> discussion...)
>
> Max

Hey I tried to give them Valentine's Day advice. Did they laugh at my jokes? No! They all seem to have slammed the whole bottle of serious pills lately. I do commend the Searlers for at least maintaining a most civil tone throughout the marathon discussion, however.

You guys were aware that occasional lighthearted silliness is an extropian tradition, ja?

spike

From aware at awareresearch.com  Mon Feb  8 04:50:41 2010
From: aware at awareresearch.com (Aware)
Date: Sun, 7 Feb 2010 20:50:41 -0800
Subject: Re: [ExI] Joy in the transcendence of the species
In-Reply-To: <12C91DE16C6A47ADB3E4696EC08C4B18@spike>
References: <201002080401.o1841ase025933@andromeda.ziaspace.com> <12C91DE16C6A47ADB3E4696EC08C4B18@spike>
Message-ID: 

On Sun, Feb 7, 2010 at 8:34 PM, spike wrote:
> You guys were aware that occasional lighthearted silliness is an extropian
> tradition, ja?

I was certainly Aware...

I suppose I could confuse those who get my position, and reinforce the thinking of those who don't, by appropriating the old Behaviorist gag on the inaccessibility of subjective experience:

Jef: It was good for you; was it good for me?
Gordon: Don't bother me. I have a headache...

Or maybe we could spend the next couple of months debating where one's lap goes when one stands up.

- Jef

From nanite1018 at gmail.com  Mon Feb  8 05:10:07 2010
From: nanite1018 at gmail.com (JOSHUA JOB)
Date: Mon, 8 Feb 2010 00:10:07 -0500
Subject: Re: [ExI] Nolopsism
In-Reply-To: 
References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com> <915E9809-F351-43BE-A6A5-10E53227BD42@GMAIL.COM>
Message-ID: 

On Feb 7, 2010, at 4:13 PM, Spencer Campbell wrote:
> Solution: No one needs to be punished. In theory, the only
> justification for legal punishment right now is to modify future
> behavior on a societal scale. There are far more effective and less
> draconian methods of doing this...

I agree that the point of a justice system is to prevent the violation of people's rights, and that whatever means are best for doing this are obviously to be preferred (such as, I suppose, open prisons, though I am wary of such a thing).

> A measurement of prosperity need make no reference to individuals or
> minds unless corporations, countries, and planets count as
> individuals... My argument against happiness is the same as my argument against
> punishment: it is valuable only as a tool for behavioral modification,
> heartless as that may sound. Look at how happiness evolved. It's just
> an arbitrary reward for survival. This is the attitude with which I
> regard my own happiness, and it doesn't seem to impair me in any way
> practical or philosophical.

Seeing as how the goal of all living organisms is to live (as any other goal leads to death and the end of all possible goals), and seeing as we are conscious entities (whatever that means in nolipsism, even if it is an imaginary thing in the imagination of an imaginary thing, haha) that need motivation to survive, emotions, including happiness, are an important part of that. Sure, it is behavior modification, of a kind, but it is still important to human beings, at least for now (and I hope indefinitely).

> Finally: obviously "rights" don't go out the window at all! In fact,
> we would only have more of them.
> A brand-new car would have the right
> not to be crushed into a tiny cube, because such would be blatantly
> wasteful and wrong. Similarly, a brand-new human would have the same
> right, but a totaled junker or a corpse would not.

A car can't have rights because it isn't self-aware. It isn't even alive. It doesn't have choices, and nothing matters to it. It doesn't have the quality of being this weird self-referencing thing with a de se operator; it lacks the capacity to reason. And thus, it can't have any rights. A corpse isn't alive or rational or aware either. A brand-new human is, or at least will be in short order.

Proliferating rights in the manner you suggest devalues the word and destroys its meaning, allowing evil people to appropriate it for their own ends, as they did in socialist countries all across the globe. The result? The deaths of tens of millions from starvation, disease, and brutal suppression of dissent. Rights like those you suggest would likely lead to chaos and would support the rise of oppressive regimes, just as the proliferation of "rights" to jobs, health care, income, education, etc. has caused problems by creating a "need" for ever more oppressive regulations. The result? More chaos, and more regulation.

Networks of rational agents generate spontaneous order through rational self-interest. I don't see how you can have any such thing as rational agents, or self-interest, without some "thing" which is an agent and has an interest in its own existence. Even if there is no such thing as a "self", there is a thing which employs a de se operator to describe "itself", whatever "it" is, and I'm not clear on what the difference is between such an entity and a "self". It obviously has memory, reasons, and is self-aware (i.e. aware of the thing that is speaking, thinking, etc., whatever it is). Doesn't some "thing" have to exist to employ such an operator?

Joshua Job
nanite1018 at gmail.com

From thespike at satx.rr.com  Mon Feb  8 05:10:59 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Sun, 07 Feb 2010 23:10:59 -0600
Subject: Re: [ExI] Joy in the transcendence of the species
In-Reply-To: 
References: <201002080401.o1841ase025933@andromeda.ziaspace.com> <12C91DE16C6A47ADB3E4696EC08C4B18@spike>
Message-ID: <4B6F9CE3.1020701@satx.rr.com>

On 2/7/2010 10:50 PM, Aware wrote:
> Or maybe we could spend the next couple of months debating where one's
> lap goes when one stands up.

It goes into abeyance. Or, in the case of the herniated, into hiatus.

Damien Broderick

From max at maxmore.com  Mon Feb  8 06:33:49 2010
From: max at maxmore.com (Max More)
Date: Mon, 08 Feb 2010 00:33:49 -0600
Subject: Re: [ExI] Joy in the transcendence of the species
Message-ID: <201002080634.o186Xxrd020842@andromeda.ziaspace.com>

spike wrote:
> Erhebungaufregung? Approximately ~ horny for uplifting.

Young man, wash your mouth out with soap. There will be no uplifting of your horn in my presence! Why, the very idea.

Now, if you all can keep your bungaufregung in your pants, I'm still looking for a word that would combine joy + posthuman transcension + everyone.

M

From max at maxmore.com  Mon Feb  8 06:39:47 2010
From: max at maxmore.com (Max More)
Date: Mon, 08 Feb 2010 00:39:47 -0600
Subject: Re: [ExI] Joy in the transcendence of the species
Message-ID: <201002080639.o186duCL019011@andromeda.ziaspace.com>

I just remembered a term that is close to (but not quite) what I'm looking for: "extrosattva", which is a play on "bodhisattva".
See: http://www.gregburch.net/writing/Extrosattva.htm

Max

-------------------------------------
Max More, Ph.D.
Strategic Philosopher
The Proactionary Project
Extropy Institute Founder
www.maxmore.com
max at maxmore.com
-------------------------------------

From spike66 at att.net Mon Feb 8 07:09:06 2010
From: spike66 at att.net (spike)
Date: Sun, 7 Feb 2010 23:09:06 -0800
Subject: [ExI] Joy in the transcendence of the species
In-Reply-To: <201002080634.o186Xxrd020842@andromeda.ziaspace.com>
References: <201002080634.o186Xxrd020842@andromeda.ziaspace.com>
Message-ID: <8DEAFD88CD6D4BDEAC0B05D6DEA9AD2F@spike>

> ...On Behalf Of Max More
> Subject: Re: [ExI] Joy in the transcendence of the species
>
> spike wrote:
>
> >Erhebungaufregung? Approximately ~ horny for uplifting.
>
> Young man, wash your mouth out with soap...

Young? Vas ist dieses young? We have "dirty old man" and the "young horndog" but we need a term for those of us who fall somewhere in the middle.

Regarding another topic on which I posted earlier today: evolution, and the fact that it isn't discussed much in US public schools. Or is it? Some time ago Max complained that his US students knew so little about Darwin that he had to waste valuable time in his philosophy lectures explaining the basics of evolution before he could even start on the material. Max, am I remembering it correctly?

The insight I had on this topic was given to me by one of our regulars, who was visiting me at my house last week. He commented that his friend had been in NASA in the early years, but had grown discouraged, for after the Apollo program was over, the organization was taken over by jesus freaks. In retrospect, my own early years in a space town reflect exactly that notion. I never really realized that everywhere wasn't like my own home town. Now I know it isn't like that everywhere. I already know you British guys get the right story, since Darwin was one of your own. So, a question please, USians: did your public school teach you about Darwin? Did they teach it right?

spike

From spike66 at att.net Mon Feb 8 06:55:36 2010
From: spike66 at att.net (spike)
Date: Sun, 7 Feb 2010 22:55:36 -0800
Subject: [ExI] Joy in the transcendence of the species
In-Reply-To: <201002080639.o186duCL019011@andromeda.ziaspace.com>
References: <201002080639.o186duCL019011@andromeda.ziaspace.com>
Message-ID: <10FC50E8D8EF496F87F9E1DB26856F34@spike>

> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org
> Subject: Re: [ExI] Joy in the transcendence of the species
>
> I just remembered a term that is close to (but not quite) what I'm looking for: "extrosattva", which is a play on "bodhisattva". See:
> http://www.gregburch.net/writing/Extrosattva.htm
>
> Max

So where the heck has Greg Burch been hiding out?

spike

From stefano.vaj at gmail.com Mon Feb 8 10:48:16 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 8 Feb 2010 11:48:16 +0100
Subject: [ExI] Nolopsism
In-Reply-To: <641575.48388.qm@web65616.mail.ac4.yahoo.com>
References: <243641.74772.qm@web113601.mail.gq1.yahoo.com> <580930c21002060808t6996b91ehb19f34a6c5767e10@mail.gmail.com> <641575.48388.qm@web65616.mail.ac4.yahoo.com>
Message-ID: <580930c21002080248t60f6e9d7ubd5d4f73275f1fcd@mail.gmail.com>

On 6 February 2010 23:41, The Avantguardian wrote:
> Being an attorney, however, I am sure you are aware of the legal can of worms nolipsism opens up. Human rights are tied to identity. If "I" don't exist, then stealing my stuff or even murdering me is a victimless crime.
> Doesn't make for a happy outcome in my opinion, especially for libertarians. Probably why the authors back-pedalled from their claims in the conclusion.

Besides the fact that victimless crimes do exist in positive law, one certainly can steal from a legal entity, and yet we are well aware of the conventional nature of concepts such as its identity, will, good faith, responsibility, etc. Moreover, we do *not* consider crimes affecting unconscious human beings to be victimless. Why should it be necessary to take a stance on some metaphysical quality of the consciousness of human beings in order to regulate social life so that harming them outside the circumstances provided for in law is a crime?

But you are right on one point: the POV according to which only an "essentialist humanist polipsism" can be the ground for granting us personhood status would be an argument for keeping out other entities that persuasively exhibit similar behaviours: say, the great apes, or computers passing the Turing test, or uploaded humans.

-- Stefano Vaj

From stathisp at gmail.com Mon Feb 8 10:59:02 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 8 Feb 2010 21:59:02 +1100
Subject: [ExI] Semiotics and Computability
In-Reply-To:
References: <156490.82842.qm@web36504.mail.mud.yahoo.com>
Message-ID:

On 8 February 2010 08:05, Spencer Campbell wrote:
> Seems pretty clear to me, as a neuron-by-neuron replacement is precisely what I've wanted for the past two to five years. I would advise phrasing it again, simply and concisely, because (a) what you have in mind may have changed since you last did so, (b) I might have overwritten your description with my own, and (c) the point on which Gordon disagrees remains a total mystery.

The premise is that it is possible to make an artificial neuron which behaves exactly the same as a biological neuron, but lacks consciousness. We've been discussing computer consciousness, but it doesn't have to be a computerised neuron; we could say that it is animated by a little demon, and the conclusion from the thought experiment remains unchanged. These zombie neurons are then put into your head, replacing normal neurons that play some important role in a conscious process, such as visual perception or understanding of language. Before going further, is it perfectly clear that the behaviour of the remaining biological parts of your brain, and your overall behaviour, will remain unchanged? If not, then the artificial neuron does not work as claimed.

OK, so your behaviour is unchanged and your thoughts are unchanged as a result of the substitution; for if your thoughts changed, you would be able to say "my thoughts have changed", and therefore your behaviour would have changed. What of your consciousness? If your consciousness changes as a result of the substitution, you would be unable to notice a change, since again if you noticed the change you would be able to say "I've noticed a change", and that would be a change in your behaviour, which is impossible. So: if your consciousness changes as a result of the substitution, you would be unable to notice any change. You would lose all visual perception and not only behave as if you had normal vision, but also honestly believe that you had normal vision. Or you would lose the ability to understand words starting with "r" but you would be able to use these words appropriately and honestly believe that you understood what these words meant. You would be partially zombified but you wouldn't know it.
In which case, how do you know you don't now have zombie vision, zombie understanding or a zombie toothache? If zombie consciousness is indistinguishable objectively *or* subjectively (i.e. by the unzombified part of a partly zombified mind) from real consciousness, then the claim that there is nevertheless a distinction is meaningless.

The conclusion, therefore, is that the original premise is false: it is not possible to make an artificial neuron which behaves exactly the same as a biological neuron, but lacks consciousness. Either such a neuron would not really behave like a biological neuron, or it would behave like a biological neuron and also have the consciousness inherent in a biological neuron. This is a statement of the functionalist position, of which computationalism is a subset. It is possible that computationalism is false but functionalism is still true.

Note that the above argument assumes no theory of consciousness. Its conclusion is just that if consciousness exists at all, whatever it is, it is ineluctably linked to brain function.

-- Stathis Papaioannou

From stefano.vaj at gmail.com Mon Feb 8 11:18:23 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 8 Feb 2010 12:18:23 +0100
Subject: [ExI] Semiotics and Computability
In-Reply-To: <264701.53414.qm@web36506.mail.mud.yahoo.com>
References: <264701.53414.qm@web36506.mail.mud.yahoo.com>
Message-ID: <580930c21002080318q5c3cfda6h920b50b14ebbfe9b@mail.gmail.com>

On 6 February 2010 23:01, Gordon Swobe wrote:
> I'll ask again: have you ever had a tooth-ache?

I for one never have, but I have had my fair share of experience of physical pain.

Now, absolutely *nothing* in such experience ever told me that "pain" is anything other than a word describing a computational feature, programmed by natural selection, along the lines of "if <threatened by fire>, then <withdraw hand>".

But perhaps it does tell that to everybody else, and I am the only philosophical zombie in the world... :-D

-- Stefano Vaj
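That "computational feature" reading can be put in a few lines of code. This is only a toy sketch of the claim as just stated; the names, the threshold, and the avoidance counter are all invented for illustration, and nothing here comes from any real model of nociception:

def step(agent, stimulus):
    # Reflex first: withdraw as soon as damage crosses a threshold.
    if stimulus["damage"] > agent["threshold"]:
        agent["action"] = "withdraw hand"
        # On this reading, "pain" is just the label for the negative
        # weight the event leaves behind, biasing future behaviour.
        source = stimulus["source"]
        agent["avoid"][source] = agent["avoid"].get(source, 0) + 1

agent = {"threshold": 0.5, "avoid": {}, "action": None}
step(agent, {"source": "fire", "damage": 0.9})
print(agent)  # {'threshold': 0.5, 'avoid': {'fire': 1}, 'action': 'withdraw hand'}

Whether a counter like this exhausts what the word "pain" names is, of course, exactly what the thread is disputing.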
From stefano.vaj at gmail.com Mon Feb 8 11:29:34 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 8 Feb 2010 12:29:34 +0100
Subject: [ExI] Glacier Geoengineering
In-Reply-To: <847034.14281.qm@web113617.mail.gq1.yahoo.com>
References: <847034.14281.qm@web113617.mail.gq1.yahoo.com>
Message-ID: <580930c21002080329q5f76a769u138e142b1965e22a@mail.gmail.com>

On 2 February 2010 16:08, Ben Zaiboc wrote:
> I need to ask a question here, please indulge me if the answer should be obvious:
>
> What's the point of sticking glaciers to their bedrock?

Because we can? :-) Or, in the words of Sir Edmund Hillary, "because they are there"? :-)

But admittedly I have not the foggiest idea whether we should make it a habit...

-- Stefano Vaj

From stathisp at gmail.com Mon Feb 8 11:29:32 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 8 Feb 2010 22:29:32 +1100
Subject: [ExI] Semiotics and Computability
In-Reply-To: <580930c21002080318q5c3cfda6h920b50b14ebbfe9b@mail.gmail.com>
References: <264701.53414.qm@web36506.mail.mud.yahoo.com> <580930c21002080318q5c3cfda6h920b50b14ebbfe9b@mail.gmail.com>
Message-ID:

On 8 February 2010 22:18, Stefano Vaj wrote:
> On 6 February 2010 23:01, Gordon Swobe wrote:
>> I'll ask again: have you ever had a tooth-ache?
>
> I for one never have, but I have had my fair share of experience of physical pain.
>
> Now, absolutely *nothing* in such experience ever told me that "pain" is anything other than a word describing a computational feature, programmed by natural selection, along the lines of "if <threatened by fire>, then <withdraw hand>".

You do know that first you withdraw your hand, through reflex, and then experience the pain? No doubt the purpose of the pain is so that you will remember not to do it again. But why pain; why not disgust, or horror, or just reluctance?

-- Stathis Papaioannou

From bbenzai at yahoo.com Mon Feb 8 11:06:36 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Mon, 8 Feb 2010 03:06:36 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To:
Message-ID: <711259.1066.qm@web113612.mail.gq1.yahoo.com>

Gordon Swobe wrote:
> Ben Zaiboc wrote:
>> Gordon Swobe wrote:
>>> I'll ask again: have you ever had a tooth-ache?
>> Gordon, your repeated question shows that you're either ignoring or not getting the point of Jef's reply. If it makes no sense to you, please say so, don't just regurgitate the same question that he is replying to. On the other hand, if you're simply ignoring things that you don't like reading, what's the point of continuing the conversation?
> How about you, Ben? Have you ever had a toothache? Was it a real toothache? Or was it just an illusion?

Ah, ignoring, then. Fine. Also ironic, as you're the one who's said, more than once, "I wonder now if I can count on you for an honest discussion?"

Ben Zaiboc

From stefano.vaj at gmail.com Mon Feb 8 11:39:45 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 8 Feb 2010 12:39:45 +0100
Subject: [ExI] Semiotics and Computability (was: The digital nature of brains)
In-Reply-To: <445777.14858.qm@web36505.mail.mud.yahoo.com>
References: <445777.14858.qm@web36505.mail.mud.yahoo.com>
Message-ID: <580930c21002080339s52487cc2hcc656db8a99da769@mail.gmail.com>

On 7 February 2010 21:07, Gordon Swobe wrote:
> The amoeba has no neurons or nervous system, Stathis, so "less conscious" is an understatement. It has no consciousness at all.

This is becoming increasingly circular. Why should a nervous system produce "consciousness", whatever it may be? And how would it be different not to have one? Have you ever tried?

-- Stefano Vaj

From stefano.vaj at gmail.com Mon Feb 8 11:51:51 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 8 Feb 2010 12:51:51 +0100
Subject: [ExI] Semiotics and Computability
In-Reply-To:
References: <264701.53414.qm@web36506.mail.mud.yahoo.com> <580930c21002080318q5c3cfda6h920b50b14ebbfe9b@mail.gmail.com>
Message-ID: <580930c21002080351l52934842xc6c3a3f5890c31c3@mail.gmail.com>

On 8 February 2010 12:29, Stathis Papaioannou wrote:
> You do know that first you withdraw your hand, through reflex, and then experience the pain? No doubt the purpose of the pain is so that you will remember not to do it again.

Yes. Or no. It might even be a byproduct; what is Gould's word for that? A "spandrel", I believe. But I do not see how it would change my original remark.

> But why pain; why not disgust, or horror, or just reluctance?

By definition. Because "pain" is simply the name we give to our reactions to the fact of being burnt, which may well be different (in its causes, and perhaps also in its consequences or intensity) from that generated by "horrorful" rather than "painful" experiences.
The whole paradox of qualia, of course, is that in such a perspective you may well "feel" horror, or for that matter pleasure, when I feel pain. You would obviously call it "pain" anyway, as long as you speak English; and as long as your reaction thereto was identical, there would be no way ever to know it. As a consequence, it would seem obvious that since "feelings abstracted from reactions" are not part of the phenomenal reality, their concept is only a philosophical Fata Morgana dictated by a few centuries of dualism.

-- Stefano Vaj

From bbenzai at yahoo.com Mon Feb 8 11:31:58 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Mon, 8 Feb 2010 03:31:58 -0800 (PST)
Subject: [ExI] The Surrogates, graphic novel
In-Reply-To:
Message-ID: <938582.59281.qm@web113609.mail.gq1.yahoo.com>

Seems to me that the story sacrifices common sense for the sake of having a story.

That kind of technology would make remote surrogate bodies unnecessary. People would just, as Max says, upgrade their own bodies, and be cyborgs.

Ben Zaiboc

From gts_2000 at yahoo.com Mon Feb 8 12:43:45 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 8 Feb 2010 04:43:45 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To:
Message-ID: <468928.31819.qm@web36508.mail.mud.yahoo.com>

--- On Sun, 2/7/10, Spencer Campbell wrote:
> Gordon, time for the true or false game: there is a difference between real toothaches and illusionary toothaches.
>
> My answer is "false", and I get the impression that yours will be "true". If so, why? How?

No, I answer false also. I asked that question about toothaches to point out that subjective facts exist and that consciousness exists. It makes no difference whether your toothache exists as a result of a cavity or as an effect caused by a stage-hypnotist; if you feel the pain then it exists with as much reality as does a heart attack.

It should seem obvious that the world contains both subjective facts like toothaches and objective facts like mountains. It should seem equally obvious that consciousness exists, and that consciousness has certain qualities. The majority of people do in fact consider these things perfectly obvious. And contrary to the bafflegab promulgated by some quasi-intellectual pseudo-philosophers, on these subjects the majority of people have it exactly right.

-gts

From bbenzai at yahoo.com Mon Feb 8 13:11:03 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Mon, 8 Feb 2010 05:11:03 -0800 (PST)
Subject: [ExI] Joy in the transcendence of the species
In-Reply-To:
Message-ID: <584597.9024.qm@web113602.mail.gq1.yahoo.com>

Transzendenzjedenfreude

Ben Zaiboc

From msd001 at gmail.com Mon Feb 8 14:42:24 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Mon, 8 Feb 2010 09:42:24 -0500
Subject: [ExI] The Surrogates, graphic novel
In-Reply-To: <938582.59281.qm@web113609.mail.gq1.yahoo.com>
References: <938582.59281.qm@web113609.mail.gq1.yahoo.com>
Message-ID: <62c14241002080642j56bb72c2r969b4ff798608e69@mail.gmail.com>

On Mon, Feb 8, 2010 at 6:31 AM, Ben Zaiboc wrote:
> Seems to me that the story sacrifices common sense for the sake of having a story.
>
> That kind of technology would make remote surrogate bodies unnecessary. People would just, as Max says, upgrade their own bodies, and be cyborgs.

Sure, but the general population has a hard time understanding even the dumbed-down parts. I was a bit annoyed by the suddenly super-human acts of robot jumping - as if being a robot allows a surri to violate gravity at will.
So yes, it was Hollywood for the sake of telling a story.

In those scenes where surri were mangled, my wife had a visceral emotional response. I asked her if that was because she was thinking about the surrogates as people. She admitted that she was. I found that interesting because people rarely have such a reaction to a car accident. It's the same loss of personal property, but if the human operator walks away the response is usually "Well, at least nobody was harmed." Why does a machine that looks like a person warrant the extra attention?

I commented that the "dread camp" were effectively neo-Amish. They really made no sense to the culture, but provided a plot device that was easy to understand. I think it might have made a more interesting story for us to imagine the clash between the embodied real-world presences and the disembodied/uploaded virtual-world presences (the usual competition for resource utilization, etc.). I think Greg Egan's "Diaspora" has a nice treatment of fleshers, gleisner robots and citizens. [http://en.wikipedia.org/wiki/Diaspora_(novel)]

From aware at awareresearch.com Mon Feb 8 16:10:11 2010
From: aware at awareresearch.com (Aware)
Date: Mon, 8 Feb 2010 08:10:11 -0800
Subject: [ExI] Joy in the transcendence of the species
In-Reply-To: <201002080639.o186duCL019011@andromeda.ziaspace.com>
References: <201002080639.o186duCL019011@andromeda.ziaspace.com>
Message-ID:

On Sun, Feb 7, 2010 at 10:39 PM, Max More wrote:
> I just remembered a term that is close to (but not quite) what I'm looking for: "extrosattva", which is a play on "bodhisattva". See:
> http://www.gregburch.net/writing/Extrosattva.htm

It's close, but crosses the line into the mythical. [Re-reading Greg's essay triggered strong nostalgia for those heady days of Extropian discussion past.]

I'll contribute for brainstorming that (Hofstadter's) "super-rationality" encompasses much of the concept: acting on the basis of identification with a future context of meaning and value greater than one's present context. ==> superrationalist

- Jef

From natasha at natasha.cc Mon Feb 8 16:21:02 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Mon, 8 Feb 2010 10:21:02 -0600
Subject: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"
Message-ID: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1>

It's an informative article. My issue with it, however, is that it seems to be black or white and misrepresents Max.

http://ieet.org/index.php/IEET/more/3670/

Natasha Vita-More

From bbenzai at yahoo.com Mon Feb 8 16:08:51 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Mon, 8 Feb 2010 08:08:51 -0800 (PST)
Subject: [ExI] Nolipsism [was Re: Nolopsism]
In-Reply-To:
Message-ID: <57273.49114.qm@web113615.mail.gq1.yahoo.com>

The Avantguardian wrote:
> From: Ben Zaiboc
>> Philosophically, it may be as you say. Practically, though, it's not really that useful because it makes no actual difference to the way we regard things like fear of death, or to the law.
>
>> It's in the scientific arena that nolipsism is most useful, because it explains what subjectivity actually is, and clears the nonsense and confusion out of the way.
>
> How so?
> Siddhartha Gautama said that the self was an illusion some 3,000 years ago, only he called it the skandha of consciousness instead of a "de se designator". What makes nolipsism more scientifically useful than Buddhism? Are you suggesting that by sweeping consciousness under a rug, it can be scientifically ignored?

Buddhism says nothing about neuroscience. Nevertheless, maybe the skandha of consciousness is just as scientifically useful as the "de se" designator. Maybe they are the same thing. This is not sweeping consciousness under a rug; it's subjecting it to the light of day. Far from ignoring it, this is an attempt to explain it. An attempt that, to me at least, makes pretty good sense.

>> We know, at least in theory, that subjectivity can be built into an artificial mind, and we can finally dump the concept of the 'hard problem' in the bin.
>
> So you think that programming a computer to falsely believe itself to be conscious is easier than programming one to actually be so? Or do you think that programming a computer to use "de se" designators necessarily makes it think itself conscious? A person could get by without the use of "de se" designators yet still retain a sense of self. It might sound funny, but a person could consistently refer to themselves in the third person by name, even in their thoughts. Stuart doesn't think that "de se" designators are particularly profound. Stuart doesn't need them. Do you see what Stuart means?

We seem to be getting different things from this paper. I'm not at all suggesting that a computer (or robot) be programmed to 'falsely believe itself to be conscious', and neither are the authors (how would it even be possible? It would have to already be conscious in order to believe anything, so the belief wouldn't be false). The suggestion is that a non-descriptive reflexive designator is necessary for general-purpose cognition, and that this is what "I" is. Inasmuch as we regard "I" as the thing that is conscious, the "de se" designator is at the heart of consciousness. This is not a 'false' consciousness; it's what consciousness is, whether it be in a robot or a human. It's the ungrounded symbol that gives personal meaning to everything else.

By definition, a person could *not* get by without a "de se" designator yet still retain a sense of self, because it is the very essence of the sense of self. Where is this third-person Stuart? What location does he occupy? Not right now, as in some temporary location defined by an external coordinate system, but at any time? There's only one answer: "Here" (Stuart points to self). Third-person Stuart has nowhere to point. He has no self-centred coordinate system. Only first-person Stuart can have such a thing. If there is no self for Stuart to point to, he cannot answer the question; it means nothing to him.

>> The concept of a "de se" designator explains why we don't have souls, not why we shouldn't have property rights.
>
> Property rights are no less abstract than souls. Neither seems to have a physical basis beyond metaphysical/philosophical fiat. Communists tend not to believe in either.

The difference is that the term "property rights" signifies things that exist, as sets of rules, and are useful. The term "souls" signifies only a fantasy, and is not useful. Property rights have an effect on the world. Even though they aren't material things, they still exist. Souls don't.

I think it's very, very easy to misunderstand what is meant by "the self is an illusion". It needs a fair bit of pondering.
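A programming analogy may help here (this is a toy illustration of my own, not anything from the paper): the "self" parameter in Python behaves like a non-descriptive reflexive designator. It carries no description of the object it denotes; it simply picks out whichever instance happens to be tokening it.

class Agent:
    def report(self):
        # "self" refers without describing: the same line of code
        # picks out a different referent in each instance.
        return "I am " + hex(id(self))

a = Agent()
b = Agent()
print(a.report())  # two agents run identical code...
print(b.report())  # ...yet each "self" denotes a different system

Nothing over and above the instances has to exist for "self" to do its work, which is roughly the nolipsist point about "I".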
For me, at least, the concept of "de se" designators makes it much clearer.

Ben Zaiboc

From thespike at satx.rr.com Mon Feb 8 16:49:30 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 08 Feb 2010 10:49:30 -0600
Subject: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"
In-Reply-To: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1>
References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1>
Message-ID: <4B70409A.9070807@satx.rr.com>

On 2/8/2010 10:21 AM, Natasha Vita-More wrote:
> My issue with it, however, is that it seems to be black or white and misrepresents Max.

That would be this, I assume:

In what way does this misrepresent Max? By snipping the larger context of his comment, presumably, but did he say and mean that?

Damien Broderick

From jonkc at bellsouth.net Mon Feb 8 16:32:25 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 8 Feb 2010 11:32:25 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <468928.31819.qm@web36508.mail.mud.yahoo.com>
References: <468928.31819.qm@web36508.mail.mud.yahoo.com>
Message-ID:

Since my last post Gordon Swobe has posted 4 times.

> It should seem obvious that the world contains both subjective facts like toothaches and objective facts like mountains. It should seem equally obvious that consciousness exists, and that consciousness has certain qualities. The majority of people do in fact consider these things perfectly obvious. And contrary to the bafflegab promulgated by some quasi-intellectual pseudo-philosophers, on these subjects the majority of people have it exactly right.

Swobe has used this straw man argument many times, and I believe he is being disingenuous. Nobody on this list, or any rational person for that matter, seriously thinks he doesn't think. True, I have heard some say that consciousness is an illusion, and yes that is a bit dumb, as it's not at all clear why it's more "illusionary" than any other perfectly respectable mental phenomenon, but that's not as bad as saying it doesn't exist. No sane person thinks consciousness doesn't exist, although some very silly people may say so when they try (unsuccessfully) to sound sophisticated and provocative.

John K Clark

From natasha at natasha.cc Mon Feb 8 17:00:57 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Mon, 8 Feb 2010 11:00:57 -0600
Subject: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"
In-Reply-To: <4B70409A.9070807@satx.rr.com>
References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com>
Message-ID: <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1>

"Libertarian transhumanists like Thiel and More..."

Please read: RU's interview with Max: http://www.acceleratingfuture.com/people/Max-More/?interview=32

Natasha Vita-More

-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick
Sent: Monday, February 08, 2010 10:50 AM
To: ExI chat list
Subject: Re: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"

On 2/8/2010 10:21 AM, Natasha Vita-More wrote:
> My issue with it, however, is that it seems to be black or white and misrepresents Max.

That would be this, I assume:

In what way does this misrepresent Max? By snipping the larger context of his comment, presumably, but did he say and mean that?
Damien Broderick

From ablainey at aol.com Mon Feb 8 17:14:37 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Mon, 08 Feb 2010 12:14:37 -0500
Subject: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"
In-Reply-To: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1>
References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1>
Message-ID: <8CC76F9532BE5FA-4614-1CE8@webmail-d042.sysops.aol.com>

I lost interest at 'Liberal democracy' in the first paragraph. The clue to it being bipolar to the nth degree is in the title. Divide and conquer. NLP. Exclude all other possibility. I've read it all before and, living in the UK, I see it, hear it, read it and get shoveled it every day. I'll try and wade through it later. Maybe I'm wrong?

A

-----Original Message-----
From: Natasha Vita-More
To: 'ExI chat list' ; extrobritannia at yahoogroups.com
Sent: Mon, 8 Feb 2010 16:21
Subject: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"

It's an informative article. My issue with it, however, is that it seems to be black or white and misrepresents Max.

http://ieet.org/index.php/IEET/more/3670/

From thespike at satx.rr.com Mon Feb 8 17:21:11 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 08 Feb 2010 11:21:11 -0600
Subject: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"
In-Reply-To: <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1>
References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com> <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1>
Message-ID: <4B704807.5090907@satx.rr.com>

On 2/8/2010 11:00 AM, Natasha Vita-More wrote:
> "Libertarian transhumanists like Thiel and More..."
>
> Please read: RU's interview with Max:
> http://www.acceleratingfuture.com/people/Max-More/?interview=32

That would be the following 2004 discussion:

As far as the actual quote in James Hughes' essay goes, does it misrepresent Max's thinking on the topic at issue? This next quote would suggest that Hughes' implication of antidemocratic or top-down bias (if that's what he means) is wrong:

Still, Max is quoted as saying "Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom. Surely, as we strive to transcend the biological limitations of human nature, we can also improve upon monkey politics?"

Damien Broderick

From natasha at natasha.cc Mon Feb 8 17:26:05 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Mon, 8 Feb 2010 11:26:05 -0600
Subject: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"
In-Reply-To: <4B704807.5090907@satx.rr.com>
References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com> <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com>
Message-ID: <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1>

So what?

Natasha Vita-More

-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick
Sent: Monday, February 08, 2010 11:21 AM
To: ExI chat list
Subject: Re: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"

On 2/8/2010 11:00 AM, Natasha Vita-More wrote:
> "Libertarian transhumanists like Thiel and More..."
> Please read: RU's interview with Max:
> http://www.acceleratingfuture.com/people/Max-More/?interview=32

That would be the following 2004 discussion:

As far as the actual quote in James Hughes' essay goes, does it misrepresent Max's thinking on the topic at issue? This next quote would suggest that Hughes' implication of antidemocratic or top-down bias (if that's what he means) is wrong:

Still, Max is quoted as saying "Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom. Surely, as we strive to transcend the biological limitations of human nature, we can also improve upon monkey politics?"

Damien Broderick

From hkeithhenson at gmail.com Mon Feb 8 17:51:34 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Mon, 8 Feb 2010 10:51:34 -0700
Subject: [ExI] Refreezing the Arctic ocean
Message-ID:

On another list someone wrote

> The bigger problem right now is solar insolation during polar summers. This is the Arctic problem in the immediate term and an intermediate problem for the West Antarctic Ice Sheet. A blue water Arctic is starting. I don't want to use the words "chain reaction" (it's not a nuclear process), but at least non-linear warming (oceanic).

Scaled down by a factor of 25, say a 2.5-3 inch pipe 50 feet long, the heat pipe I analyzed would suck out 4 kW with the same delta T for the heat exchanger. The Arctic isn't as cold as the Antarctic, but at least 5 months of the year it is cold enough to make this work.

A kWh is 3600 kJ. So such a heat pipe would freeze 4*3600/333 kJ/kg, ~43 kg of water per hour. After 5 months, that would be ~160,000 kg of ice, or 160 tonnes, or roughly 160 cubic meters. This would be a cylinder of ice ~16 m high by ~4 m across. Such a slender ratio would have to be examined for stability (sinking weight on a cable perhaps). The interior would be at -15 C, which seems to me likely to last without much loss till the next winter, but this would need to be calculated. In 5 years the block would be 8 meters across.

I don't know what would be the optimum heat pipe size or spacing, or how spacing could be maintained, but this is the general idea of how to refreeze the Arctic ocean. What it would do is raise the effective temperature in the Arctic during the coldest part of the year, making the radiation into space higher. I also don't know if this is economically feasible, or even desirable (shipping). But this is the kind of thought that would go into an engineering solution if anyone cared to look in this direction.
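Keith's figures survive a quick back-of-envelope check. A sketch (the 4 kW, 333 kJ/kg, and ~16 m column height are Keith's numbers; the 30-day months and ~917 kg/m^3 ice density are my added assumptions):

import math

power_kw = 4.0                    # heat extracted per pipe
latent_kj_per_kg = 333.0          # heat of fusion, freezing water to ice
kg_per_hour = power_kw * 3600 / latent_kj_per_kg   # ~43.2 kg/h

hours = 5 * 30 * 24               # ~5 winter months
ice_kg = kg_per_hour * hours      # ~156,000 kg, i.e. ~160 tonnes

height_m = 16.0                   # the column height quoted above
volume_m3 = ice_kg / 917.0        # ~170 m^3 of ice
diameter_m = 2 * math.sqrt(volume_m3 / (math.pi * height_m))

print(round(kg_per_hour, 1), round(ice_kg / 1000), round(diameter_m, 1))
# 43.2 156 3.7

So ~43 kg/h, ~156 tonnes per season, and a diameter a little under 4 m, consistent with the numbers in the post above.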
The West Antarctic Ice Sheet is made for the trick of freezing it to bedrock.

>> this motion would stop if the glacier was frozen to the bedrock.
>
> Then, it's not a glacier.

It's not going to stop them moving entirely, just slow them down. Ice is still plastic.

(mondo snip to another post)

>> FACT: All glaciers are retreating, many already gone. Glaciers are the source of the majority of the world's fresh water, directly or indirectly. ...
>
> Sure, Keith compares them to farm land.

Only for the point that the area of glaciers isn't orders of magnitude larger than the area humans have massively affected. This is an economic argument: if humans really were desperate, we could afford to pin glaciers and slow them down.

It's not just melting in sunlight. Glaciers flow down to lower and warmer altitudes and into the sea, where they melt. Getting dark particulates out of the air by switching away from coal and dirty-burning engines would also help slow down melting.

>> All we can do is band-aid what we can and live with the pain,

I have been aware since the early 70s. Dr Peter Vajk and I were nearly thrown out of a Limits to Growth conference in 1975 for having the gall to suggest there might be a way out of the problem. Now, 35 years later, and perhaps too late, space based solar power is finally getting serious attention. They still are not taking a systems approach, which tells you that chemical exhaust velocity is not enough for low cost energy. But the methods, for example air breathing part way up and laser heated hydrogen above that, are obvious if you go back to the basic physics. Of course, economically it only works for a large traffic model.

Keith

From thespike at satx.rr.com Mon Feb 8 17:58:44 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 08 Feb 2010 11:58:44 -0600
Subject: [ExI] IEET piece re "Problems of Transhumanism"
In-Reply-To: <4B704B75.5010709@satx.rr.com>
References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com> <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1> <4B704B75.5010709@satx.rr.com>
Message-ID: <4B7050D4.8050802@satx.rr.com>

On 2/8/2010 11:35 AM, Damien Broderick wrote:
> Natasha Vita-More:
>> So what?
> Eh?

To expand on that: Isn't what I quoted to the point of Hughes' essay, which starts: BUT, he goes on below, and follows with the quote from Max I've now cited twice, which you don't think is relevant (I suppose). Is it really irrelevant? Perhaps so. Here it is again:

"Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom. Surely, as we strive to transcend the biological limitations of human nature, we can also improve upon monkey politics?"

I certainly don't think that Max's comment on "monkey politics" is a disparagement of *democracy* but rather of tribal power plays, hierarchies of force and authority, etc. And his views are certainly inconsistent with Hughes' glib opening comment about a supposed >H "tendency to disparage liberal democracy in favor of the rule by dei ex machina and technocratic elites."

But it doesn't sound like a ringing endorsement of the position that "liberal democracy is the best path to betterment," which is where Hughes started.

Damien Broderick

From max at maxmore.com Mon Feb 8 18:18:26 2010
From: max at maxmore.com (Max More)
Date: Mon, 08 Feb 2010 12:18:26 -0600
Subject: [ExI] If anyone wants to Respond to the IEET piece re "Promblems of Transhumanism"
Message-ID: <201002081818.o18IIX6U008758@andromeda.ziaspace.com>

Damien, you seem to be suggesting ("Still, Max is quoted as saying") that Hughes' "implication of antidemocratic or top-down bias" is understandable because of my statement that "Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom."

If so, I don't understand how you can say that. Saying that democratic arrangements (as they exist at any particular time) have no intrinsic value is not in the least equivalent to saying that authoritarian control is better.
Should we not strive for something better than the ugly system of democracy that currently exists? Are authoritarian arrangements the only conceivable alternative?

Perhaps you use "democracy" far more broadly than I do. If you mean it such that it covers all possible non-authoritarian systems, then your interpretation makes sense. I prefer to restrict it to the systems we have seen historically, which are crude attempts to convert the desires of people in general into governance through various forms of majority voting.

Max

Damien wrote:
>As far as the actual quote in James Hughes' essay goes, does it misrepresent Max's thinking on the topic at issue? This next quote would suggest that Hughes' implication of antidemocratic or top-down bias (if that's what he means) is wrong:
>
>freedom of action, and experimentation. Opposing authoritarian social control and favoring the rule of law and decentralization of power. Preferring bargaining over battling, and exchange over compulsion. Openness to improvement rather than a static utopia. [...] I find it both amusing and revolting to observe socialist transhumanists who characterize themselves as "democratic transhumanists" but use the term "democracy" as a cover for using governmental power to compel everyone to fit into their notion of "equality." Democracy, in the more generally accepted sense, is an important way of implementing the principle of Open Society.
>
>Still, Max is quoted as saying
>
>Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom. Surely, as we strive to transcend the biological limitations of human nature, we can also improve upon monkey politics?

-------------------------------------
Max More, Ph.D.
Strategic Philosopher
The Proactionary Project
Extropy Institute Founder
www.maxmore.com
max at maxmore.com
-------------------------------------

From natasha at natasha.cc Mon Feb 8 18:19:04 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Mon, 8 Feb 2010 12:19:04 -0600
Subject: [ExI] IEET piece re "Problems of Transhumanism"
In-Reply-To: <4B7050D4.8050802@satx.rr.com>
References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com> <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1> <4B704B75.5010709@satx.rr.com> <4B7050D4.8050802@satx.rr.com>
Message-ID: <1A5A60391CF9407A9B10772A59F5C938@DFC68LF1>

Sorry, I was so busy working on something. (I was just writing an article for The Scavenger and was referencing _The Judas Mandala_ and had to go to Amazon to get an image in case ...)

Now to answer your post: I was simply referring to the "libertarian transhumanist" phrase, which takes away from the value of James' article because this framing is old and worn-out.

Natasha Vita-More

-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick
Sent: Monday, February 08, 2010 11:59 AM
To: ExI chat list
Subject: Re: [ExI] IEET piece re "Problems of Transhumanism"

On 2/8/2010 11:35 AM, Damien Broderick wrote:
> Natasha Vita-More:
>> So what?
> Eh?

To expand on that: Isn't what I quoted to the point of Hughes' essay, which starts: BUT, he goes on below, and follows with the quote from Max I've now cited twice, which you don't think is relevant (I suppose). Is it really irrelevant? Perhaps so.
Here it is again:

"Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom. Surely, as we strive to transcend the biological limitations of human nature, we can also improve upon monkey politics?"

I certainly don't think that Max's comment on "monkey politics" is a disparagement of *democracy* but rather of tribal power plays, hierarchies of force and authority, etc. And his views are certainly inconsistent with Hughes' glib opening comment about a supposed >H "tendency to disparage liberal democracy in favor of the rule by dei ex machina and technocratic elites." But it doesn't sound like a ringing endorsement of the position that "liberal democracy is the best path to betterment," which is where Hughes started.

Damien Broderick

From ablainey at aol.com Mon Feb 8 18:30:58 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Mon, 08 Feb 2010 13:30:58 -0500
Subject: [ExI] IEET piece re "Problems of Transhumanism"
In-Reply-To: <4B7050D4.8050802@satx.rr.com>
References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com> <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1> <4B704B75.5010709@satx.rr.com> <4B7050D4.8050802@satx.rr.com>
Message-ID: <8CC7703FD00403A-1250-225@webmail-d042.sysops.aol.com>

I agree, Damien. Max's quote has little or nothing to do with democracy itself, but rather with the way it is played. And now that I have read the whole thing, I can see I was right in my first impression. Polarise the argument even though the argument itself is invalid. Since when is Liberal democracy the opposite of totalitarianism? And how can Libertarianism be the opposite of liberal democracy, while arguing that Libertarian transhumanists support totalitarianism? Neither is correct. Contrived. Like I said, I've read it before.

A

-----Original Message-----
From: Damien Broderick
To: ExI chat list
Sent: Mon, 8 Feb 2010 17:58
Subject: Re: [ExI] IEET piece re "Problems of Transhumanism"

On 2/8/2010 11:35 AM, Damien Broderick wrote:
> Natasha Vita-More:
>> So what?
> Eh?

To expand on that: Isn't what I quoted to the point of Hughes' essay, which starts: BUT, he goes on below, and follows with the quote from Max I've now cited twice, which you don't think is relevant (I suppose). Is it really irrelevant? Perhaps so. Here it is again:

"Democratic arrangements have no intrinsic value; they have value only to the extent that they enable us to achieve shared goals while protecting our freedom. Surely, as we strive to transcend the biological limitations of human nature, we can also improve upon monkey politics?"

I certainly don't think that Max's comment on "monkey politics" is a disparagement of *democracy* but rather of tribal power plays, hierarchies of force and authority, etc. And his views are certainly inconsistent with Hughes' glib opening comment about a supposed >H "tendency to disparage liberal democracy in favor of the rule by dei ex machina and technocratic elites." But it doesn't sound like a ringing endorsement of the position that "liberal democracy is the best path to betterment," which is where Hughes started.

Damien Broderick
From thespike at satx.rr.com Mon Feb 8 18:37:24 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 08 Feb 2010 12:37:24 -0600
Subject: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism"
In-Reply-To: <201002081818.o18IIX6U008758@andromeda.ziaspace.com>
References: <201002081818.o18IIX6U008758@andromeda.ziaspace.com>
Message-ID: <4B7059E4.1010402@satx.rr.com>

On 2/8/2010 12:18 PM, Max More wrote:
> Saying that democratic arrangements (as they exist at any particular time) have no intrinsic value is not in the least equivalent to saying that authoritarian control is better. Should we not strive for something better than the ugly system of democracy that currently exists?

Certainly; see Krugman's essay in the NYT today for a truly egregious example of "democracy" in inaction in the US.

> Are authoritarian arrangements the only conceivable alternative?

No, and I think I'd favor a decision structure closer to a rhizome than a ladder or pyramid, which is why I'm a sort of communitarian anarchist. (A utopian prospect, admittedly, because people have largely been conned into becoming--as the gibe has it--sheeple.) But as I said a moment ago in another post, your statement doesn't sound like a ringing endorsement of the position that "liberal democracy is the best path to betterment," which is the point Hughes is making about >Humanism. Where he goes from there is questionable if not absurd *as a generalization*--but there's certainly a technocratic, elitist tendency in a lot of >H discourse I've read here over the last 15 years or so.

Damien Broderick

From thespike at satx.rr.com Mon Feb 8 18:39:08 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 08 Feb 2010 12:39:08 -0600
Subject: [ExI] IEET piece re "Problems of Transhumanism"
In-Reply-To: <1A5A60391CF9407A9B10772A59F5C938@DFC68LF1>
References: <53226DE6C0DE42C689797B6CFF4DE80B@DFC68LF1> <4B70409A.9070807@satx.rr.com> <4FD738C2D52E41A69BDFE397923AFDDB@DFC68LF1> <4B704807.5090907@satx.rr.com> <7BBCE87398FB428484B3A0D5FAACAD95@DFC68LF1> <4B704B75.5010709@satx.rr.com> <4B7050D4.8050802@satx.rr.com> <1A5A60391CF9407A9B10772A59F5C938@DFC68LF1>
Message-ID: <4B705A4C.1000907@satx.rr.com>

On 2/8/2010 12:19 PM, Natasha Vita-More wrote:
> I was simply referring to the "libertarian transhumanist" phrase, which takes away from the value of James' article because this framing is old and worn-out.

Fair enough, and Max certainly deals with that in his interview with RU.

From rpwl at lightlink.com Mon Feb 8 18:37:59 2010
From: rpwl at lightlink.com (Richard Loosemore)
Date: Mon, 08 Feb 2010 13:37:59 -0500
Subject: [ExI] Blue Brain Project
In-Reply-To: <558651.23421.qm@web113616.mail.gq1.yahoo.com>
References: <558651.23421.qm@web113616.mail.gq1.yahoo.com>
Message-ID: <4B705A07.1040209@lightlink.com>

Ben Zaiboc wrote:
> Anyone not familiar with this can read about it here:
> http://seedmagazine.com/content/article/out_of_the_blue/P1/
>
> The next ten years should be interesting!

Or not.

Markram is NOT, as many people seem to assume, developing a biologically accurate model of a cortical column circuit. He is instead developing a model that contains neurons that are biologically accurate, down to a certain level of detail, but with random connections between those neurons. The statistical distribution of wires is supposed to be the same as that in a cortical column, but the actual wires... not so much.
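To make that distinction concrete, here is a toy sketch (my own illustration; it has nothing to do with Markram's actual codebase). A wiring diagram can be shuffled so that every degree statistic survives while every specific circuit disappears:

import random
random.seed(0)

n = 100
# "Biological" wiring: neurons grouped into ten-neuron loops,
# each neuron wired to the next one around its loop.
edges = [(i, i - i % 10 + (i + 1) % 10) for i in range(n)]

# "Statistically matched" rewiring: identical out-degree for every
# neuron and an identical in-degree distribution, but random targets.
targets = [t for _, t in edges]
random.shuffle(targets)
shuffled = [(s, t) for (s, _), t in zip(edges, targets)]

def neurons_on_ten_loops(es):
    """Count neurons that return to themselves in exactly ten hops."""
    nxt = dict(es)
    count = 0
    for start in nxt:
        node = start
        for _ in range(10):
            node = nxt.get(node, -1)
        count += (node == start)
    return count

print(neurons_on_ten_loops(edges))     # 100: every neuron sits on a loop
print(neurons_on_ten_loops(shuffled))  # typically a handful at most

Matched statistics, in other words, are perfectly compatible with the total loss of exactly the kind of specific structure discussed below.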
So, to anyone who thinks that a random model of an i86 computer chip, in which all the wiring was replaced by random connections, would be a fantastically interesting thing, worth spending a billion dollars to construct: the Blue Brain project must make you delirious with joy.

Markram's entire project, then, rests on his hope that if he builds a randomly wired column model, the model will "self-assemble" and do something interesting. He produces no arguments for what those self-assembly mechanisms actually look like, nor does he demonstrate that his model includes those mechanisms.

Further, he ignores the possibility that the self-assembly mechanisms are dependent on such factors as (a) specific wiring circuits in the column, or (b) specific wiring in outside structures (subcortical mechanisms, for example) which act as drivers of the self-assembly process.

(To couch this in terms of an example, suppose the biology causes loops of ten neurons to be set up all over the column, with the strength of synapses around each loop being extremely specific (say, high, high, low, high, high, low, high, high, low, low). Now suppose that the self-organizing capability of the system is crucially dependent on the presence of these loops. Since Markram is blind to exact wiring he will never see the loops. He certainly would not see the pattern of synaptic strengths, and he probably would never notice the physical pattern of the ten-neuron loops, either.)

As far as I can tell, Markram's only reason to believe that his model columns will self-assemble is ... well, just a hunch. If his hunch is wrong, he will have built the world's most expensive white-noise generator.

Notice that so far, in the test runs he has done, his evidence that the model circuit actually works has all been based on a low-level statistical correspondence between the patterns of firing in the model and in the original. Given that he went to great trouble to ensure the same distributions in his model, this result gives practically no information at all. Markram does not hesitate to publicize these achievements with words that imply that his model column does actually "function" like a biological column. (Going back to the i86 chip analogy: would a statistically similar signal pattern in a random model of such a chip indicate that the random model was "functioning" like a normal chip?)

There are plenty of other criticisms that could be leveled at the Blue Brain project, but this should be enough. Can you say "Neuroscience Star Wars Project", children?

Richard Loosemore

From lacertilian at gmail.com Mon Feb 8 19:18:57 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Mon, 8 Feb 2010 11:18:57 -0800
Subject: [ExI] Rights without selves (was: Nolopsism)
Message-ID:

JOSHUA JOB:
> A car can't have rights because it isn't self-aware. It isn't even alive. It doesn't have choices, and nothing matters to it. It doesn't have the quality of being this weird self-referencing thing with a de se operator; it lacks the capacity of reason. And thus, it can't have any rights. A corpse isn't alive or rational or aware either. A brand-new human is, or at least will be in short order. Proliferating rights in the manner you suggest devalues the word and destroys its meaning, allowing evil people to appropriate it for their own ends, as they did in socialist countries all across the globe. The result? The deaths of tens of millions from starvation, disease, and brutal suppression of dissent.
There are people who believe that every single thing in the universe is technically alive. Since cars actually have the ability to burn fuel and propel themselves, traits generally associated with animals, they have a better claim to the title than most inanimate objects. You could argue that a car isn't alive because it can't move intentionally, and the obvious counterargument would be to point demonstratively at a plant; but many plants lean towards the sun. I would prefer coral polyps as an example, or, even better, phytoplankton.

On the subject of awareness, use of "de se" designators, et cetera: Stefano Vaj points out that unconscious human beings retain the rights of their conscious selves. I would equate this with a house retaining the right not to be demolished even when its residents are away.

You're correct that my position devalues and/or destroys the current meaning of "rights", but I don't see how a fungal bloom of evil would follow logically from that. Elaborate please?

JOSHUA JOB:
> Rights of the kind you suggest would likely lead to chaos, and that would support the rise of oppressive regimes, just as the proliferation of "rights" to jobs, health care, income, education, etc. has caused problems by creating a "need" for ever more oppressive regulations. The result? More chaos, and more regulation. Networks of rational agents generate spontaneous order through rational self-interest. I don't see how you can have any such thing as rational agents, or self-interest, without some "thing" which is an agent and has an interest in its own existence.

I'm broadening the definition with the intention of making law in general more sensible: rights prevent wrongs. It is wrong to dump garbage in the ocean. Therefore, the ocean should have rights. At the moment the ocean is effectively piggybacking on tortuously ill-defined rights of humans, such as "the right to live on a planet that isn't broken", which are not explicitly enumerated in any written document that I'm aware of.

I'm not saying it would be more morally correct to give the ocean rights, as though it were a living, feeling entity. I'm not even saying it would be more intuitive. I'm saying it would be simpler and more efficient.

JOSHUA JOB:
> Even if there is no such thing as a "self", there is a thing which employs a de se operator to describe "itself", whatever "it" is, and I'm not clear on what the difference is between such an entity and a "self". It obviously has memory, reasons, and is self-aware (i.e. aware of the thing that is speaking, thinking, etc., whatever it is). Doesn't some "thing" have to exist to employ such an operator?

I don't actually think the optimal system of law would exclude the concept of a rational agent from playing a part. For the sake of argument, I'm simply saying it's possible, and that such a system could be made to work just as well, in effect, as any other. It would likely employ much less concise language to do so, so it wouldn't be as efficient as a "de se"-enabled system with all of the same laws.

The difference between "selves" and "de se systems", if that term is accurate, is largely a matter of abstraction. A self is highly complex, and theoretically atomic: it persists from one moment to another, and one can correctly attribute memory, thought, self-awareness, and so on to it. Memory is inextricable from self. All of the parts add up to an indivisible whole.

Conversely, a "de se" designator is just a symbol.
It points to a greater system, which we normally call a self, but that system is composed of a great many independent parts that interact in complicated ways. Memories are only a part of that system, and purely optional. The symbol doesn't care: it just refers to whatever's present at the time.

I talk about "my arm" and "my foot" in exactly the same way that I talk about "my house" and "my computer", as though all of these things were part of me and I would be incomplete without them. This is obviously false for the latter two, and not-so-obviously false for the former two. My arm is part of my self, just as surely as is my computer, but neither is part of me. I don't have any parts; I don't technically exist outside of the moment in which I write this.

There's a more tenuous relationship between quoted passages and responses in this post than I normally display. I found myself copying and pasting paragraphs from beneath one quote to beneath another, because they worked equally well in both places and I wanted to spread things out a little. So it might not sound as if I was "listening" very carefully. Sorry about that; I actually was, but I got a bit sidetracked.

You argue against cars having rights, and claim, indirectly, that granting them would cause a great many terrible unintended consequences. Request that you expound on a few of those consequences. You can choose something other than a car, if easier.

From jameschoate at austin.rr.com Mon Feb 8 19:42:48 2010
From: jameschoate at austin.rr.com (jameschoate at austin.rr.com)
Date: Mon, 8 Feb 2010 19:42:48 +0000
Subject: [ExI] Blue Brain Project
Message-ID: <20100208194249.YWLBW.497187.root@hrndva-web17-z02>

There is very little actual 'suppose' in your commentary; the homeobox and related genetic effects are critical to this sort of organization. It is far from stochastic. Besides that, this model completely ignores the chemotaxis effects of development as well.

---- Richard Loosemore wrote:
> Markram is NOT, as many people seem to assume, developing a biologically accurate model of a cortical column circuit. He is instead developing a model that contains neurons that are biologically accurate, down to a certain level of detail, but with random connections between those neurons. The statistical distribution of wires is supposed to be the same as that in a cortical column, but the actual wires... not so much.
>
> Further, he ignores the possibility that the self-assembly mechanisms are dependent on such factors as (a) specific wiring circuits in the column, or (b) specific wiring in outside structures (subcortical mechanisms, for example) which act as drivers of the self-assembly process.
>
> (To couch this in terms of an example, suppose the biology causes loops of ten neurons to be set up all over the column, with the strength of synapses around each loop being extremely specific (say, high, high, low, high, high, low, high, high, low, low). Now suppose that the self-organizing capability of the system is crucially dependent on the presence of these loops. Since Markram is blind to exact wiring he will never see the loops. He certainly would not see the pattern of synaptic strengths, and he probably would never notice the physical pattern of the ten-neuron loops, either.)
-- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From lacertilian at gmail.com Mon Feb 8 19:42:36 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 11:42:36 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <468928.31819.qm@web36508.mail.mud.yahoo.com> References: <468928.31819.qm@web36508.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > No, I answer false also. I asked that question about toothaches to point out that subjective facts exist and that consciousness exists. It makes no difference whether your toothache exists as a result of a cavity or as an effect caused by a stage-hypnotist; if you feel the pain then it exists with as much reality as does a heart attack. I reject the notion that toothaches and heart attacks are equally real, on the basis that inherent "reality" does indeed lie on a spectrum rather than being binary. Gordon Swobe : > It should seem obvious that the world contains both subjective facts like toothaches and objective facts like mountains. It should seem equally obvious that consciousness exists, and that consciousness has certain qualities. The majority of people do in fact consider these things perfectly obvious. And contrary to the bafflegab promulgated by some quasi-intellectual pseudo-philosophers, on these subjects the majority of people have it exactly right. I feel like rejecting the notion of quasi-intellectual pseudo-philosophers, as well, but that's really only because of how accurately the term describes me. I do, however, reject the notion of a mountain being an objective fact. They're all just hills which we've decided, by a subjective value judgement, are simply too tall to be called hills! And hills are merely ground that's too bumpy for us to responsibly name as ground. In fact, I'm having difficulty convincing myself that there could be any such thing as an objective fact at all; even electrons are a mathematical abstraction, based on our subjective interpretation of a mountain of empirical evidence. Note that "empirical" means "based on experience", not "inviolably true". The difference between empirical and anecdotal is one of degree, not kind; there is empirical evidence for the existence of alien life. We usually call very consistent findings empirical, and very inconsistent findings, or those with an insufficient sample size to say either way, anecdotal. Truth has never been found outside the laboratory, nor survived without artificial support. It's a little like antimatter. As soon as you turn off the containment field, things go back to being just as they are. Now that I think about it, the distinction between the objective and the subjective is fallacious and misleading. Dualistic thinking at its worst. I propose to do away with it. From thespike at satx.rr.com Mon Feb 8 19:47:49 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 13:47:49 -0600 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: Message-ID: <4B706A65.7010403@satx.rr.com> On 2/8/2010 1:18 PM, Spencer Campbell wrote: > There are people who believe that every single thing in the universe > is technically alive. Yes, there are an awful lot of clueless idiots. 
I've taken to listening to Coast to Coast while doing evening exercises, and my mind genuinely boggles at the sheer stupidity of its gullible, desperate-to-believe listeners. Damien Broderick From lacertilian at gmail.com Mon Feb 8 19:47:37 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 11:47:37 -0800 Subject: [ExI] The Surrogates, graphic novel In-Reply-To: <62c14241002080642j56bb72c2r969b4ff798608e69@mail.gmail.com> References: <938582.59281.qm@web113609.mail.gq1.yahoo.com> <62c14241002080642j56bb72c2r969b4ff798608e69@mail.gmail.com> Message-ID: Mike Dougherty : > In those scenes where surri were mangled, my wife had a visceral > emotional response. I asked her if that was because she was thinking > about the surrogates as people. She admitted that she was. I found > that interesting because people rarely have such reaction to a car > accident. It's the same loss of personal property, but if the human > operator walks away the response is usually "Well at least nobody was > harmed" - Why does a machine that looks like a person warrant the > extra attention? Why indeed. Join in on the "Rights without selves" thread! It's alarmingly relevant, car analogy and all. From lacertilian at gmail.com Mon Feb 8 20:01:23 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 12:01:23 -0800 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: <201002080401.o1841ase025933@andromeda.ziaspace.com> References: <201002080401.o1841ase025933@andromeda.ziaspace.com> Message-ID: Max More : > I know many of you are fascinated by the intricacies and peculiarities of > language. Perhaps you might be motivated to help me figure this out: Yes! Me! Me! Well, except that the one area of linguistics I actively dislike is the art of the portmanteau. I think the word you're thinking of would refer also to "joy in the existence of humanity", "joy in the existence of animals", "joy in the existence of macroscopic life", and on down the evolutionary ladder. It would also explicitly celebrate all progress in the arts and sciences. Joy in evolution and revolution, joy in the complexity of life and technology past, present and future, without any distinction made between the two. It'd be a good word. If you wouldn't mind, I'd be keen on arrogating it for my imaginary language. You know, the one with a complete set of grammatical rules but no words to speak of. The one no one, anywhere, is the least bit interested in but me. Not bitter! Now if you'll excuse me, I have to go pop a few more seriousness pills. From bbenzai at yahoo.com Mon Feb 8 19:51:47 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 8 Feb 2010 11:51:47 -0800 (PST) Subject: [ExI] Blue Brain Project In-Reply-To: Message-ID: <480152.31985.qm@web113604.mail.gq1.yahoo.com> Richard Loosemore wrote: > Ben Zaiboc wrote: > > Anyone not familiar with this can read about it here: > > http://seedmagazine.com/content/article/out_of_the_blue/P1/ > > > > The next ten years should be interesting! > > Or not. > > Markram is NOT, as many people seem to assume, developing a > biologically > accurate model of a cortical column circuit. He is > instead developing a > model that contains neurons that are biologically accurate, > down to a > certain level of detail, but with random connections > between those > neurons. The statistical distribution of wires is > supposed to be the > same as that in a cortical column, but the actual wires... > not so much. ... 
> Markram's entire project, then, rests on his hope that if > he builds a > randomly wired column model, the model will "self-assemble" > and do > something interesting. He produces no arguments for > what those > self-assembly mechanisms actually look like, nor does he > demonstrate > that his model includes those mechanisms. ... > As far as I can tell, Markram's only reason to believe that > his model > columns will self-assemble is ... well, just a hunch. > This is pretty much what a human brain starts out as. The brain of a baby is *massively* overconnected, and 90% of the connections get pruned away as the baby starts to learn. Markram's reason to believe that his model columns will self-assemble is that that's what they do in biological systems. Even so, suppose this is not the case: suppose Markram's random interconnections fail to capture something intrinsic to the biological situation. It's still not a pointless exercise. We are constantly mapping the connections in the brain (the connectome project), and while hand-wiring them is out of the question, we will surely extract significant statistical patterns that should help with setting up the blue brain with more successful starting patterns. The blue brain project might not produce the results Markram hopes it will, but it can't fail to produce useful information, even if it's just "how not to build a brain". Ben Zaiboc From jonkc at bellsouth.net Mon Feb 8 20:25:56 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Feb 2010 15:25:56 -0500 Subject: [ExI] Blue Brain Project. In-Reply-To: <558651.23421.qm@web113616.mail.gq1.yahoo.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> Message-ID: On Feb 7, 2010, Ben Zaiboc wrote: > Anyone not familiar with this can read about it here: > http://seedmagazine.com/content/article/out_of_the_blue/P1/ > > The next ten years should be interesting! Great article, thanks Ben! Sort of makes Gordon Swobe's remark "I look inside your head I see nothing even remotely resembling a digital computer" seem rather medieval. These are some of my favorite quotes: "In ten years, this computer will be talking to us." "Once the team is able to model a complete rat brain -- that should happen in the next two years -- Markram will download the simulation into a robotic rat, so that the brain has a body. He's already talking to a Japanese company about constructing the mechanical animal. 'The only way to really know what the model is capable of is to give it legs,' he says. 'If the robotic rat just bumps into walls, then we've got a problem.'" 'Now we just have to scale it up.' Blue Brain scientists are confident that, at some point in the next few years, they will be able to start simulating an entire brain. 'If we build this brain right, it will do everything,' Markram says. I ask him if that includes self-consciousness: Is it really possible to put a ghost into a machine? 'When I say everything, I mean everything,' he says, and a mischievous smile spreads across his face." John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Feb 8 20:53:40 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Feb 2010 14:53:40 -0600 Subject: [ExI] Blue Brain Project. 
In-Reply-To: References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> Message-ID: <4B7079D4.1090103@satx.rr.com> On 2/8/2010 2:25 PM, John Clark quoted: > "Once the team is able to model a complete rat brain -- that should happen > in the next two years -- Markram will download the simulation into a > robotic rat, so that the brain has a body. He's already talking to a > Japanese company about constructing the mechanical animal. A decade or more ago, Hugo de Garis was promising a robot CAM-brain puddytat. He lost several sponsors along the way. Anyone know if he's doing anything along those lines today? (No, I've never heard of Google--how does that work? His own site informs us excitedly of things due to happen in 2006 and 2007...) Damien Broderick From rpwl at lightlink.com Mon Feb 8 20:59:17 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 08 Feb 2010 15:59:17 -0500 Subject: [ExI] Blue Brain Project In-Reply-To: <480152.31985.qm@web113604.mail.gq1.yahoo.com> References: <480152.31985.qm@web113604.mail.gq1.yahoo.com> Message-ID: <4B707B25.1040006@lightlink.com> Ben Zaiboc wrote: > Richard Loosemore wrote: > >> Ben Zaiboc wrote: >>> Anyone not familiar with this can read about it here: >>> http://seedmagazine.com/content/article/out_of_the_blue/P1/ >>> >>> The next ten years should be interesting! >> Or not. >> >> Markram is NOT, as many people seem to assume, developing a >> biologically accurate model of a cortical column circuit. He is >> instead developing a model that contains neurons that are >> biologically accurate, down to a certain level of detail, but with >> random connections between those neurons. The statistical >> distribution of wires is supposed to be the same as that in a >> cortical column, but the actual wires... not so much. > ... > >> Markram's entire project, then, rests on his hope that if he builds >> a randomly wired column model, the model will "self-assemble" and >> do something interesting. He produces no arguments for what those >> self-assembly mechanisms actually look like, nor does he >> demonstrate that his model includes those mechanisms. > ... > >> As far as I can tell, Markram's only reason to believe that his >> model columns will self-assemble is ... well, just a hunch. >> > > This is pretty much what a human brain starts out as. The brain of a > baby is *massively* overconnected, and 90% of the connections get > pruned away as the baby starts to learn. Markram's reason to believe > that his model columns will self-assemble is that that's what they do > in biological systems. This is a non sequitur, surely? Just because the connections are pruned, it does not follow that they were random to begin with. > Even so, suppose this is not the case: suppose Markram's random > interconnections fail to capture something intrinsic to the > biological situation. It's still not a pointless exercise. We are > constantly mapping the connections in the brain (the connectome > project), and while hand-wiring them is out of the question, we will > surely extract significant statistical patterns that should help with > setting up the blue brain with more successful starting patterns. > > The blue brain project might not produce the results Markram hopes it > will, but it can't fail to produce useful information, even if it's > just "how not to build a brain". 
I wouldn't think much of a billion-dollar project, using some of the world's largest supercomputers, whose goal was to understand computer chips by modeling randomly wired versions of them, with some vague promises that in the future some other project will be supplying more accurate wiring diagrams, and with the fallback position that it would be "bound to produce some valuable information, even if it's just 'how not to build a computer'". Collecting statistical patterns is just an excuse to burn research money without having to think. Richard Loosemore From gts_2000 at yahoo.com Mon Feb 8 21:52:59 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 8 Feb 2010 13:52:59 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <785979.10766.qm@web36505.mail.mud.yahoo.com> --- On Mon, 2/8/10, Spencer Campbell wrote: > I reject the notion that toothaches and heart attacks are > equally real, on the basis that inherent "reality" does indeed lie > on a spectrum rather than being binary. I think it fair to say that either x is real or else x is not real. I also understand that some people here do not distinguish or care about the difference between reality and ~reality. They have medications for that. :) > I do, however, reject the notion of a mountain being an > objective fact. They're all just hills which we've decided, by a > subjective value judgement, are simply too tall to be called hills! But mountains do exist, no matter how you describe them. Yes? > In fact, I'm having difficulty convincing myself that there > could be any such thing as an objective fact at all You've fallen into the hallucination. :) > Note that "empirical" means "based on experience", not > "inviolably true". The word "empirical" needs some disambiguation. On the one hand people mean by the word "empirical" something like "objectively existent facts in the world, which any observer can verify". But on the other hand people mean by it something like "facts which exist in the world, including for example the facts of an entity's subjective experience". Clearly some things exist in the second sense that do not exist in the first sense. We can consider it an empirical fact that your dentist, for example, considers it true that you have a real toothache. Right? -gts From jonkc at bellsouth.net Mon Feb 8 21:55:14 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Feb 2010 16:55:14 -0500 Subject: [ExI] Blue Brain Project In-Reply-To: <4B705A07.1040209@lightlink.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> <4B705A07.1040209@lightlink.com> Message-ID: <4FBF0F0F-4486-4EDA-B705-CE3906028F36@bellsouth.net> On Feb 8, 2010, Richard Loosemore wrote: > Markram is NOT, as many people seem to assume, developing a biologically accurate model of a cortical column circuit. He is instead developing a model that contains neurons that are biologically accurate, down to a certain level of detail That is a legitimate concern, and it's true we don't know everything there is to know about neurons, but we know a lot, and this is by far the best simulation of a large number of neurons ever made. And remember, most of the things that neurons do have nothing to do with thought, they are just the routine (though fabulously complex) housekeeping functions that any cell needs to do to stay alive. 
> So, to anyone who thinks that a randomly wired model of an i86 computer chip in which all the wiring was replaced by random connections would be a fantastically interesting thing, worth spending a billion dollars to construct, the Blue Brain project must make you delirious with joy. But the boosters of biology are always pointing out that 3/4 of an i86 chip is a completely worthless object, while a dog with only 3 legs is not immobile, he can still limp around. Markram wants to incorporate the same robustness into a computer program, and I think he has a fighting chance of pulling it off. > Markram's entire project, then, rests on his hope that if he builds a randomly wired column model, the model will "self-assemble" and do something interesting. Markram says that already his simulation is acting in ways that remind him of the ways real neurons act. OK maybe he's talking Bullshit, but I'm very impressed that in his very next utterance he shows us a way to prove him wrong. He says that in the next 2 to 3 years he will be able to synthesize an entire rat brain. He also says he can link that computer model to a mechanical rat. If that robot rat moves at random then he has failed. If the mechanism moves in more interesting ways then the man is onto something. > Further, he ignores the possibility that the self-assembly mechanisms are dependent on such factors as (a) specific wiring circuits in the column, or (b) specific wiring in outside structures (subcortical mechanisms, for example) which act as drivers of the self-assembly process. To couch this in terms of an example, suppose the biology causes loops of ten neurons to be set up all over the column, with the strength of synapses around each loop being extremely specific (say, high, high, low, high, high, low, high, high, low, low). Now suppose that the self-organizing capability of the system is crucially dependent on the presence of these loops. Since Markram is blind to exact wiring he will never see the loops. He certainly would not see the pattern of synaptic strengths, and he probably would never notice the physical pattern of the ten-neuron loops, either. My neurons are not making the proper connections. If you put a gun to my head I couldn't tell you what the hell you're talking about. > As far as I can tell, Markram's only reason to believe that his model columns will self-assemble is ... well, just a hunch. If his hunch is wrong, he will have built the world's most expensive white-noise generator. If he fails it will be a heroic failure; if he succeeds it will be the most important work ever done, not just scientific work, but work in general. > On Sun Ben Zaiboc wrote: > > I know I mentioned these links a few days ago, but it's worth repeating. > Noah Sutton is making a documentary: > http://thebeautifulbrain.com/2010/02/bluebrain-film-preview/ > There's a longer video that explains what he's up to. > The Emergence of Intelligence in the Neocortical Microcircuit > http://video.google.com/videoplay?docid=-2874207418572601262&ei=lghrS6GmG4jCqQLA1Yz7DA Thanks again Ben, yet more great stuff! John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nanite1018 at gmail.com Mon Feb 8 22:39:10 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Mon, 8 Feb 2010 17:39:10 -0500 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: Message-ID: On Feb 8, 2010, at 2:18 PM, Spencer Campbell wrote: > There are people who believe that every single thing in the universe > is technically alive. Since cars actually have the ability to burn > fuel and propel themselves, traits generally associated with animals, > they actually have a better claim to the title than most inanimate > objects. You could argue that a car isn't alive because it can't move > intentionally, and the obvious counterargument would be to point > demonstratively at a plant; but many plants lean towards the sun. I > would prefer coral polyps as an example, or, even better, > phytoplankton. Anyone who thinks everything is alive is, as Damien said, an idiot, or, I will add, severely epistemologically confused. Life also maintains homeostasis, grows, etc., things which a car does not do. Also, while rights, in my view, only apply to life, that is not a sufficient condition. The sufficient condition is that they are self-aware and rational, something cars, plants, etc. are not. > On the subject of awareness, use of "de se" designators, et cetera, > Stefano Vaj points out that unconscious human beings retain the rights > of their conscious selves. I would equate this with a house retaining > the right not to be demolished even when its residents are away. Perhaps, though I generally view that as simply the fact that from experience we know the person still exists, and must merely be "woken up" in order to resume reasoning, etc. And damaging that physical object that that entity "resides" in will cause damage to the entity's capacity to continue to exist, and thus, is a violation of its rights (more on that in a moment). > I'm broadening the definition with the intention of making law in > general more sensible: rights prevent wrongs. It is wrong to dump > garbage in the ocean. Therefore, the ocean should have rights. For similar reasons, I argue that things which are not rational cannot have anything hold personal meaning for them. The ocean does not employ de se operators, it is not self-aware, and in fact isn't even alive, so nothing can "wrong" it. I'll agree, basically, that rights prevent things from wronging other things, in a very specific sense, but since the ocean does not have any way it can be "wronged", it cannot possibly have rights. > The difference between "selves" and "de se systems", if that term is > accurate, is largely a matter of abstraction. A self is highly > complex, and theoretically atomic: it persists from one moment to > another, and one can correctly attribute memory, thought, > self-awareness, and so on to it. Memory is inextricable from self. All > of the parts add up to an indivisible whole. > > Conversely, a "de se" designator is just a symbol. It points to a > greater system, which we normally call a self, but that system is > composed of a great many independent parts that interact in > complicated ways. Memories are only a part of that system, and purely > optional. The symbol doesn't care: it just refers to whatever's > present at the time. I did not find their argument convincing, because the de se operator refers to the system of which it is a part, i.e. the computational structure (the program, essentially) of their hypothetical robot, or of us human beings' brains/minds. 
That isn't exactly physical, as it is merely a pattern. It's an abstraction of sorts, and "I" is strange in that it cannot be particularly descriptive (it leads to infinite regress, since "I" includes the meaning of "I", which.... and so on). But I have thought, for a long time now, that that is exactly what the "self" is. I understand that this is similar, at least in part, to the thesis of Hofstadter's book "I Am a Strange Loop", and while I own it, I have not read it yet. I just thought I should address that point, before diving into rights. > You argue against cars having rights, and claim, indirectly, that > doing so would cause a great many terrible unintended consequences. > Request that you expound on a few of those consequences. You can > choose something other than a car, if easier. Well, first, I say that only entities with de se operators can have things of true "value", i.e. mental representations of things of personal importance, arranged in a hierarchy which is understood conceptually. Such an entity must work in order to continue to exist (at least any entity I have ever been able to conceive). By its nature, it has to choose whether to "live" or "die" (i.e. continue to exist or cease to exist), and does so continuously in all its actions (by pursuing things that help its life or harm it). Now, by its nature as such an entity, its standard of morality is its own life (i.e. it should do things which help it live, and not do things which harm it), since if it isn't working for its life, it will die, and cease to be able to do anything at all. Now, critically important is that entities such as this (that is, entities that are "self-aware", rational, and operate using concepts) have to decide what is in their own interest, because obviously each one's value structure is particular to it and determined by it (that is, it employs de se operators, to use language from the paper). So it is impossible, by definition, for one entity to interfere with another (i.e. forcibly prevent it from taking an action, by threatening its existence), without getting in that entity's way in trying to survive. So you cannot initiate force without interfering with a fundamental requirement for the survival of all entities like you, and thereby rejecting a principle upon which your own survival is based. That is my basic view of rights. I don't see how an ocean or a car can have rights, because rights need wrongs, and wrongs need values, and values need rational conceptual entities that employ de se operators (and it needs those entities to exist in some manner). What sort of horrible consequences would come if you gave a car rights (like the right to exist, for example)? Well, besides dumping that whole structure of rights, one, in my mind, based on logic, you lose the power of its base of logic, and rejecting logic leaves open any number of possible groundings for "rights", like faith or racism or random whim, etc. And that is bad. But let's be concrete about it. If a car has a right to exist, then that means I can't scrap it if I don't want it (and own it). But if that is the case, that means I am forced to give it to someone else, even if I don't want to (breaching my right to control my life, because I purchased the car with my money, which I used some of my life to acquire). 
Moreover, let us say that a deer with big antlers jumps in front of my vehicle and I can hit the deer (and quite possibly be killed or at least gored by its antlers, it's happened a good bit around where I live to other people), or I can veer off the side of the road, and hit a lamp-post which I know I will likely survive (as I have a wonderfully safe car), but my car will be completely totalled. What should I do? My car has a right to exist, but if I veer off to the side, it will be destroyed, and I will have destroyed it. I have a right to live if I can, but if I want to live, I must destroy my car. So, do you suggest, as you do in an earlier post, that a car has a right to exist? If it does, then I would have to go gently into that good night in my hypothetical situation above (or, more likely, scream and then gurgle as I choke on my own blood). Or, if I have a right to live, and do not have to do this, then the car cannot have a right to exist. Rights are universals; they cannot be contextual, or else they aren't "rights." Everyone can have the right not to initiate force against others, as it leads to no contradictions. My car cannot have a right to exist, because it leads to a contradiction with a logically derived principle that I have a right to my life. Similarly with the ocean, trees, dirt, space shuttles, and asteroids. No inanimate object can possibly have rights, and most living organisms cannot have them either (like bacteria, sea sponges, fish, insects, etc., as they are not even conceivably entities with a conceptual faculty and de se operators). Btw, if my argument sounds similar-ish to the Objectivist argument, it is because I am heavily influenced by Objectivism (and may, though I am not certain, end up subscribing to that view fully). Joshua Job nanite1018 at gmail.com From possiblepaths2050 at gmail.com Tue Feb 9 00:05:36 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 8 Feb 2010 17:05:36 -0700 Subject: [ExI] Joy in the transcendence of the species In-Reply-To: References: <201002080401.o1841ase025933@andromeda.ziaspace.com> Message-ID: <2d6187671002081605p10af2129ha5b58cd7672b0e41@mail.gmail.com> Max More wrote: I have long enjoyed the word "Schadenfreude", meaning pleasure derived from the misfortunes of others. [Note: I've enjoyed the *term* "schadenfreude", not the thing it refers to.] I got to thinking what word (if one could be coined) would mean "pleasure in the transcendence of the species" (i.e., transcendence of the human condition). It may be asking a lot of a word (even a German word) to do all this work, but I'd like to give it a try. >> I think of the German language as being very guttural and ugly to the native English speaker's ear. But perhaps that is because I have seen so many movies that had yelling Nazis in them! I recently watched the film "Downfall" and this opinion was only reinforced (but it is a great work of cinema). I think the languages we should be looking at in this "coin a new word quest" are English, Greek and Latin, among others. How about this... *Gaudiumhumanitastranscendia* In other words, "joy at humanity transcending!" "Gaudium-humanitas-transcendia!" I like it! Or another possibility would be "gaudium-humanitas-transcendere," but I think "transcendia" rolls off the tongue better. Taken from the Merriam-Webster online dictionary: Etymology: Middle English, from Latin *transcendere* to climb across, transcend, from *trans-* + *scandere* to climb. Date: 14th century >> Max, what do you think? Everyone else? 
John Grigg : ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafal.smigrodzki at gmail.com Tue Feb 9 00:26:22 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Mon, 8 Feb 2010 19:26:22 -0500 Subject: [ExI] Personal conclusions In-Reply-To: References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> Message-ID: <7641ddc61002081626ocf720du28a01091680bb3d9@mail.gmail.com> On Sat, Feb 6, 2010 at 7:34 PM, Spencer Campbell wrote: > > Considering the thread's subject, it seems safe to burn some bytes on > personal information. So: I subscribe to panexperientialism myself. > Either everything has subjective experience, or nothing does. ### Eh, why? Does everything have the quality of "threeness"? Maybe there is a place in this world for an infinity of qualities, such as "being the number 3", "being a quark", or "feeling blue". Experience is just one of an infinity of flavors that parts of reality can have, and there is no reason to insist that all reality has it. There is nothing really special about the human qualia, except us being interested in them. And why wouldn't a syntax have qualia? After all, a human brain is a formal syntax, a concatenation of symbols (synaptic spikes, mostly) produced by chemical and electrical processes, so why not? Rafal From lacertilian at gmail.com Tue Feb 9 01:33:33 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 17:33:33 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <785979.10766.qm@web36505.mail.mud.yahoo.com> References: <785979.10766.qm@web36505.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > I think it fair to say that either x is real or else x is not real. I also understand that some people here do not distinguish or care about the difference between reality and ~reality. They have medications for that. :) I very nearly fall into that category. On closer inspection, however, I go further than distinguishing between real and ~real. I say ~real is synonymous with imaginary, but not with non-existent; there are real things which do not exist, and imaginary things which do. The only trouble is, I don't categorize things consistently. Is consciousness real or imaginary? Does it exist or not? I give different answers at different times. Perhaps I should define my terms better. Gordon Swobe : > But mountains do exist, no matter how you describe them. Yes? In the sense of large mounds of dirt and rocks, yes. I assume so. Personally I've never stepped on an official mountain, myself. So: mountains are real and existing. A toothache caused by a cavity is real, whereas one caused by hypnosis is imaginary. The second toothache certainly exists, for as long as the hypnosis lasts, but the existence of the first toothache is highly variable: if you aren't noticing it, it doesn't exist in that moment. Mine is caused by clenching my teeth while I sleep. I'm not sure what that means. Perhaps it's a complex toothache, in the algebraic sense. Gordon Swobe : > You've fallen into the hallucination. :) The blue pill again! Curse my deuteranomaly! Gordon Swobe : > The word "empirical" needs some disambiguation. > > On the one hand people mean by the word "empirical" something like "objectively existent facts in the world, which any observer can verify". But on the other hand people mean by it something like "facts which exist in the world, including for example the facts of an entity's subjective experience". 
> > Clearly some things exist in the second sense that do not exist in the first sense. We can consider it an empirical fact that your dentist, for example, considers it true that you have a real toothache. Right? Well, no, wrong. I haven't told my dentist. And I don't think she's especially psychic. For the sake of argument I'm going to pretend I'm in the worldline where I did, though. Right. That would invoke the second sense: my dentist's belief is inaccessible to an outside observer, but perfectly obvious to the dentist herself. She has empirical evidence for her own belief; that is, she can experience it whenever she likes. This is all rather far away from the thread subject. It seems the original topic died off quite a while ago. From lacertilian at gmail.com Tue Feb 9 02:37:13 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 8 Feb 2010 18:37:13 -0800 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: Message-ID: JOSHUA JOB : > Anyone who thinks everything is alive is, as Damien said, an idiot, or, I will add, severely epistemologically confused. Life also maintains homeostasis, grows, etc., things which a car does not do. Agreed, agreed. > Also, while rights, in my view, only apply to life, that is not a sufficient condition. The sufficient condition is that they are self-aware and rational, something cars, plants, etc. are not. As it stands now, agreed. I've been saying that a new definition, including unaware and irrational creatures or what have you, could be more useful. Not certainly, but possibly. Either way I'm confident that it would be self-consistent at the very least, and probably more so than our current conception of ethics. >> On the subject of awareness, use of "de se" designators, et cetera, >> Stefano Vaj points out that unconscious human beings retain the rights >> of their conscious selves. I would equate this with a house retaining >> the right not to be demolished even when its residents are away. > Perhaps, though I generally view that as simply the fact that from experience we know the person still exists, and must merely be "woken up" in order to resume reasoning, etc. And damaging that physical object that that entity "resides" in will cause damage to the entity's capacity to continue to exist, and thus, is a violation of its rights (more on that in a moment). Thus we come to the problem of comatose patients and undesired fetuses. Invoking the possibility of consciousness or the expectation of future consciousness as a basis for inviolable rights leads very quickly to some major complications in a world where we can't predict the future with much accuracy. > For similar reasons, I argue that things which are not rational cannot have anything hold personal meaning for them. The ocean does not employ de se operators, it is not self-aware, and in fact isn't even alive, so nothing can "wrong" it. I'll agree, basically, that rights prevent things from wronging other things, in a very specific sense, but since the ocean does not have any way it can be "wronged", it cannot possibly have rights. As I noted before: "I'm not saying it would be more morally correct to give the ocean rights, as though it were a living, feeling entity". A system of rights that makes no mention of selves would necessarily describe rights in a very diffuse sense; it would be impossible to "wrong" something in particular, anything at all, though it would be quite easy to commit a wrong. 
"It is wrong to dump garbage in the ocean", but the ocean is not wronged if you dump garbage in it. >> You argue against cars having rights, and claim, indirectly, that >> doing so would cause a great many terrible unintended consequences. >> Request that you expound on a few of those consequences. You can >> choose something other than a car, if easier. (snip) > That is my basic view of rights. I don't see how an ocean or a car can have rights, because rights need wrongs, and wrongs need values, and values need rational conceptual entities that employ de se operators (and it needs those entities to exist in some manner). Normally I agree with you. But for now, I disagree vehemently! Rights need wrongs: granted. Wrongs need values: granted. Values need rational conceptual entities that employ de se operators: contested. Laws are loaded with values, and remain loaded with the same values long after whoever wrote them ceases to exist in any manner. It doesn't matter where the values come from. I could write a quick program that randomly generates grammatically correct value judgements ("it is wrong for jellyfish to vomit"), and it would naturally instantiate a whole litany of injustices in the world. The exigencies of survival are an equally valid source of values, and obviously far more natural. Of course I shouldn't have to point out that "more natural" does not automatically equal "better". It depends entirely on how you go about determining the ultimate good. On this list I generally go under the assumption that everyone agrees increasing extropy is the ultimate good, and nature plays only an incidental part there. > What sort of horrible consequences would come if you gave a car rights (like the right to exist, for example)? Well, besides dumping that whole structure of rights, one, in my mind, based on logic, you lose the power of its base of logic, and rejecting logic leaves open any number of possible groundings for "rights", like faith or racism or random whim, etc. And that is bad. OOH IT'S A SLIPPERY SLOPE, SOMEBODY GRAB THE SNOW TIRES. Couldn't resist. Logic can never serve as the basis of anything. It can only be used to elaborate a perfectly arbitrary set of assumptions to its logical conclusion. Faith and racism are arbitrary, but so is survival. > But lets be concrete about it. Let's! > If a car has a right to exist, then that means I can't scrap it if I don't want it (and own it). But if that is the case, that means I am forced to give it to someone else, even if I don't want to (breaching my right to control my life, because I purchased the car with my money, which I used some of my life to acquire). This seems like a perfectly sensible law, which I would not object to instituting as-is. It's just mandatory recycling really. > Moreover, let us say that a deer with big antlers jumps in front of my vehicle and I can hit the deer (and quite possibly be killed or at least gored by its antlers, its happened a good bit around where I live to other people), or I can veer off the side of the road, and hit a lamp-post which I know I will likely survive (as I have a wonderfully safe car), but my car will be completely totalled. What should I do? My car has a right to exist, but if I veer off to the side, it will be destroyed, and I will have destroyed it. I have a right to live if I can, but if I want to live, I must destroy my car. You and your friend are placed in adjacent cages, with no hope of escape except for two switches reachable only by you. 
One of them releases you, but kills your friend. The other kills you, but releases your friend. What do you do? Logic dictates weighing the absolute value of you against the absolute value of your friend, which is extraordinarily difficult if both of you are typical healthy human adults. If one of you is only a car, however, the choice should be obvious. > So, do you suggest, as you do in an earlier post, that a car has a right to exist? If it does, then I would have to go gently into that good night in my hypothetical situation above (or, more likely, scream and then gurgle as I choke on my own blood). Or, if I have a right to live, and do not have to do this, then the car cannot have a right to exist. Actually, I was careful to stress the difference between a brand-new car and a complete wreck. But go on. There's something for me to refute here, but now is not the time. > Rights are universals; they cannot be contextual, or else they aren't "rights." Everyone can have the right not to initiate force against others, as it leads to no contradictions. My car cannot have a right to exist, because it leads to a contradiction with a logically derived principle that I have a right to my life. Incorrect, as illustrated by the earlier case of the caged friends! In practice, rights which are guaranteed to be universal and inviolable are pretty much always either impossible or worthless. The right to life is a perfect example. Someday the universe will end, and your right will be null and void. Realistically, though, you can expect it to be violated a good deal before it comes to that. > Btw, if my argument sounds similar-ish to the Objectivist argument, it is because I am heavily influenced by Objectivism (and may, though I am not certain, end up subscribing to that view fully). Ayn Rand? That explains it! Objectivism, according to the Wikipedia at least, explicitly endorses happiness (and rational self-interest) as a valid gauge of morality. My position is known, from an earlier post, to be in stark contrast with this. But that's a mere quibble; I qualify unequivocally as an ethical subjectivist, and even border on metaphysical subjectivism at times. I'll have to post my Napoleon argument to the list soon. This is an argument that sparked the following statement an hour or two after I finished it: "Yesterday I would have said yes, but this morning Spencer shattered my objectivism". (This is paraphrased. He actually didn't say "Spencer", he said "Gomodo", for what I'm sure is a perfectly rational reason.) From spike66 at att.net Tue Feb 9 04:09:16 2010 From: spike66 at att.net (spike) Date: Mon, 8 Feb 2010 20:09:16 -0800 Subject: [ExI] funny headlines again In-Reply-To: <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com><602376332A22429C851265448732CF56@spike><8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com><75AD3D26B9E24931B21F63C77D5043FE@spike><4100958D1881411A8A5932681CE1980A@spike> <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> Message-ID: Rep. Murtha is being rembered: Why the heck wouldn't spell check have caught this? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Outlook.jpg Type: image/jpeg Size: 38626 bytes Desc: not available URL: From jonkc at bellsouth.net Tue Feb 9 06:18:54 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 9 Feb 2010 01:18:54 -0500 Subject: [ExI] Blue Brain Project. In-Reply-To: <4B707B25.1040006@lightlink.com> References: <480152.31985.qm@web113604.mail.gq1.yahoo.com> <4B707B25.1040006@lightlink.com> Message-ID: <89E309D6-1567-4498-9094-7D2A34050E6F@bellsouth.net> If I were God (and I just don't understand why I was never offered the job) I would offer Markram a blank check to pursue his research, but first I would ask him who he thinks is trying to do the same thing but is going about it in entirely the wrong way. If he said professor X I would then go to professor X and give him a blank check too. But first I would ask him who other than Markram he thinks is trying to do the same thing but is going about it in entirely the wrong way. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From alito at organicrobot.com Tue Feb 9 09:49:49 2010 From: alito at organicrobot.com (Alejandro Dubrovsky) Date: Tue, 09 Feb 2010 20:49:49 +1100 Subject: [ExI] Blue Brain Project. In-Reply-To: <4B7079D4.1090103@satx.rr.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> <4B7079D4.1090103@satx.rr.com> Message-ID: <1265708989.6916.77.camel@localhost> On Mon, 2010-02-08 at 14:53 -0600, Damien Broderick: > A decade or more ago, Hugo de Garis was promising a robot CAM-brain > puddytat. He lost several sponsors along the way. Anyone know if he's > doing anything along those lines today? (No, I've never heard of > Google--how does that work? His own site informs us excitedly of things > due to happen in 2006 and 2007...) > By a decade ago, Robokoneko was practically dead IIRC. Goertzel meets with him when he goes to China, and I think they are working together on something. From what I gather, he's still doing evolvable neural networks and robotics. From pharos at gmail.com Tue Feb 9 10:34:21 2010 From: pharos at gmail.com (BillK) Date: Tue, 9 Feb 2010 10:34:21 +0000 Subject: [ExI] Blue Brain Project. In-Reply-To: <1265708989.6916.77.camel@localhost> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> <4B7079D4.1090103@satx.rr.com> <1265708989.6916.77.camel@localhost> Message-ID: On 2/9/10, Alejandro Dubrovsky wrote: > Goertzel meets with him when he goes to China, and I think they are > working together on something. From what I gather, he's still doing > evolvable neural networks and robotics. > > Aha! That's enough of a clue. Searching on -- Goertzel Garis China -- produces interesting results. This is a presentation by Garis at the AGI-09 conference Here, if you have PowerPoint - or here in HTML version - BillK From mbb386 at main.nc.us Tue Feb 9 14:19:20 2010 From: mbb386 at main.nc.us (MB) Date: Tue, 9 Feb 2010 09:19:20 -0500 (EST) Subject: [ExI] funny headlines again In-Reply-To: References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com><4B5FD28B.50707@satx.rr.com><602376332A22429C851265448732CF56@spike><8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com><75AD3D26B9E24931B21F63C77D5043FE@spike><4100958D1881411A8A5932681CE1980A@spike> <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> Message-ID: <36057.12.77.169.53.1265725160.squirrel@www.main.nc.us> > > Rep. Murtha is being rembered: > > > > Why the heck wouldn't spell check have caught this? spike > Perhaps someone entered "rembered" into the spellcheck dictionary's additional "correct" list... 
whatever that's called in their word processor. I found this happening sometimes at my job. Very annoying. Regards, MB From pharos at gmail.com Tue Feb 9 14:28:01 2010 From: pharos at gmail.com (BillK) Date: Tue, 9 Feb 2010 14:28:01 +0000 Subject: [ExI] funny headlines again In-Reply-To: <36057.12.77.169.53.1265725160.squirrel@www.main.nc.us> References: <201001270043.o0R0hbVs006730@andromeda.ziaspace.com> <4B5FD28B.50707@satx.rr.com> <602376332A22429C851265448732CF56@spike> <8CC6DBDE4D74716-26A4-3B99@webmail-d031.sysops.aol.com> <75AD3D26B9E24931B21F63C77D5043FE@spike> <4100958D1881411A8A5932681CE1980A@spike> <6F67645A59C7453EAF1E5DD3DDD2F70E@spike> <36057.12.77.169.53.1265725160.squirrel@www.main.nc.us> Message-ID: On 2/9/10, MB wrote: > Perhaps someone entered "rembered" into the spellcheck dictionary's additional > "correct" list... whatever that's called in their word processor. > > I found this happening sometimes at my job. Very annoying. > > Yea, well obviously it is the opposite of 'dismembered'. BillK From gts_2000 at yahoo.com Tue Feb 9 14:59:44 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 9 Feb 2010 06:59:44 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <523017.75621.qm@web36506.mail.mud.yahoo.com> --- On Sun, 2/7/10, Stathis Papaioannou wrote: >> The amoeba has no neurons or nervous system, Stathis, >> so "less conscious" is an understatement. It has no >> consciousness at all. > As far as you're concerned the function of the nervous > system - intelligence - which is due to the interactions between > neurons bears no essential relationship to consciousness. As I define intelligence, some of it is encoded into DNA. Every organism has some intelligence including the lowly amoeba. This hapless creature has enough intelligence to find food and replicate but it has no idea of its own existence. > You believe that consciousness is a property of certain specialised > cells. Yes. And I think you most likely believe so also, except when you have reason to argue for the imaginary mental states of amoebas to support your theory about the imaginary mental states of computers. :) We can suppose theories of alien forms of consciousness that might exist in computers and in amoebas and in other entities that lack nervous systems, as you seem wont to do, but it seems to me that there we cross over the line from science to science-fiction. -gts From estropico at gmail.com Tue Feb 9 16:27:50 2010 From: estropico at gmail.com (estropico) Date: Tue, 9 Feb 2010 16:27:50 +0000 Subject: [ExI] ExtroBritannia: The future of politics. Can politicians prepare society for the major technology challenges ahead? With Darren Reynolds Message-ID: <4eaaa0d91002090827m7a6a3f49y18caf3f378a4cda8@mail.gmail.com> The future of politics. Can politicians prepare society for the major technology challenges ahead? With Darren Reynolds. Venue: Room 416, Birkbeck College, Torrington Square, London WC1E 7HX Date: Saturday 20th February 2010 Time: 2pm-4pm About the talk: With the rapidly accelerating changes in many fields of technology and society, there's a risk we'll wake up one morning in (say) 2015 and realise that our politicians have failed to play their role in anticipating and preparing for major risks and opportunities - politicians were focusing on issues of the past, and neglecting issues of the future. What should we be doing, now, to change the topic of political debate, to bring more focus on the transformative potential of emerging technologies? 
How should the role of politicians evolve, over the near future, to improve the technological leadership of this country (and beyond)? And what role can non-politicians play, to improve the way society makes collective choices about the allocation of funding and resources? About the main speaker: Darren Reynolds is a pro-technology campaigner and local government councillor for the UK's Liberal Democrats. In his professional career he has helped many public and private sector organisations to introduce new technology and improve the way they work. Darren believes in putting choices in the hands of ordinary people and ensuring that tomorrow's technological developments flourish in a balanced regulatory framework. Darren is Chair of the Burnley Liberal Democrats. In 1998, Darren was part of the international team of philosophers and activists who produced the original "Transhumanist Declaration". Opportunity for additional speakers: The meeting will also feature a number of 5-minute pitches from audience members (agreed in advance) stating cases for specific changes in the allocation of the national budget - for example, which areas of research deserve a larger share of funding (and which areas deserve less). Anyone wishing to take part in this section of the meeting should get in touch asap. There's no charge to attend this meeting, and everyone is welcome. There will be plenty of opportunity to ask questions and to make comments. Discussion will continue after the event, in a nearby pub, for those who are able to stay. ** Why not join some of the Extrobritannia regulars for a drink and/or light lunch beforehand, any time after 12.30pm, in The Marlborough Arms, 36 Torrington Place, London WC1E 7HJ. To find us, look out for a table where there's a copy of the book "Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future" displayed. ** About the venue: Room 416 is on the fourth floor (via the lift near reception) in the main Birkbeck College building, in Torrington Square (which is a pedestrian-only square). Torrington Square is about 10 minutes' walk from either Russell Square or Goodge St tube stations. www.extrobritannia.blogspot.com www.uktranshumanistassociation.org From lacertilian at gmail.com Tue Feb 9 16:38:41 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 9 Feb 2010 08:38:41 -0800 Subject: [ExI] Personal conclusions In-Reply-To: <7641ddc61002081626ocf720du28a01091680bb3d9@mail.gmail.com> References: <580930c21002040359u3141931am117614683d4f67a3@mail.gmail.com> <7641ddc61002081626ocf720du28a01091680bb3d9@mail.gmail.com> Message-ID: Rafal Smigrodzki : > ### Eh, why? Does everything have the quality of "threeness"? Maybe > there is a place in this world for an infinity of qualities, such as > "being the number 3", "being a quark", or "feeling blue". Experience > is just one of an infinity of flavors that parts of reality can have, > and there is no reason to insist that all reality has it. The trick is that every other quality is pretty much meaningless without the corresponding experience of that quality. The universe doesn't recognize three, and three doesn't recognize itself. Something else has to experience the threeness of three. Otherwise it isn't even worth considering, regardless of any potential truth in the statement! My reason to insist that all reality has subjective experience is simply this: I think of myself as a machine, but I don't feel like a machine. 
I feel like a conscious entity, which seems like a fundamentally different sort of thing. So I have to resolve the discrepancy between these two views if I want to stay sane. There are only two options that I can think of: one is that consciousness is an epiphenomenon, purely an illusion generated by the underlying workings of reality. The other is that consciousness is a real substance built into the fabric of spacetime, as atoms were purported to be, and nucleons and quarks long after them. I'm inclined to believe both of these things, and that they only appear mutually exclusive to my limited mortal logic. It seems to me that a perfectly coherent interpretation of reality could be made from either stance, just as easily as light can be described as waves or particles. Two ways of phrasing the same incomprehensible thing. Supporting that idea to an extent, in a very inconvenient way, both conceptions run into the very same problem. If consciousness is an epiphenomenon: of what? If consciousness is a substance: what attracts it? The only things I see in a brain that I don't see in a stone are intelligence and understanding, and only the latter is unique to brains in general. Both of these are obvious epiphenomena to my mind, which means there must be some simpler phenomenon underlying them. But what? The ability to process information? That's only the ability to exhibit a non-random response to stimuli, and everything in the universe does that constantly. Thus: panexperientialism. My hand is forced. (Everything here is rather compressed, taking all of my personal axioms as universal axioms. So it would be easy to disagree with. I'm not looking to convince anyone here, just giving a window into my worldview. The professorial cadence is an epiphenomenon of my personality.) From gts_2000 at yahoo.com Tue Feb 9 17:04:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 9 Feb 2010 09:04:21 -0800 (PST) Subject: [ExI] Semiotics and Computability Message-ID: <782068.69808.qm@web36507.mail.mud.yahoo.com> --- On Mon, 2/8/10, Spencer Campbell wrote: > Right. That would invoke the second sense: my dentist's > belief is inaccessible to an outside observer, but perfectly obvious > to the dentist herself. She has empirical evidence for her own > belief; that is, she can experience it whenever she likes. I want to get at the idea here that to both you and to your dentist, your toothache exists as an empirical fact of reality. Your dentist may have no interest in philosophy but she will operate on that philosophical assumption: she might ask you, "Where does it hurt, Spencer? Does it hurt more when I press here?" and so on. Your dentist, presumably an educated woman of science, approaches the subject of your toothache as she would any other empirical fact of reality. She does this even though neither she nor anyone else can feel the pain of your toothache. We should, I think, emulate that approach to the subjective mental states of others in general. Mental states really do exist as empirical facts. They differ from other empirical facts such as Super Bowl games and gumball machines only insofar as they have subjective ontologies; instead of existing in the public domain, as it were, someone in particular must "have" them. > This is all rather far away from the thread subject. I think it relates to the subject in that some people seem philosophically inclined to reduce the first-person mental to the third-person physical. 
Fearing any association with mind/matter dualism, they reject the notion of consciousness and try to explain subjective first-person facts in objective third-person terms. They imagine that if only they could derive a complete objective scientific description of a toothache, they would then know everything one can know about the subject of toothaches. But nothing in their descriptions will capture what it feels like to have one. -gts From nanite1018 at gmail.com Tue Feb 9 19:25:42 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Tue, 9 Feb 2010 14:25:42 -0500 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: Message-ID: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> > On Feb 8, 2010, at 9:37 PM, Spencer Campbell wrote: > Thus we come to the problem of comatose patients and undesired > fetuses. Invoking the possibility of consciousness or the expectation > of future consciousness as a basis for inviolable rights leads very > quickly to some major complications in a world where we can't predict > the future with much accuracy. A fetus is not yet a person, nor has ever been a person (as it is not nor ever has been a rational conceptual entity). So it cannot have any rights. At least until 34 weeks, they cannot possibly be regarded as people, at least from what I've read of neural development in the brain (the neocortex doesn't connect up until around then). After that it gets a little more fuzzy. As for comatose patients, if they have the capacity for brain function (i.e., they are not totally brain damaged), then they count as people, though others have to make decisions for them as that is a state which it is very difficult to recover from. If they are brain dead, then that entity no longer exists nor can it exist again (or at least, if it could, the body itself is likely irrelevant), and so no longer has rights. > "It is wrong to dump garbage in the ocean", but the ocean is not > wronged if you dump garbage in it. I am saying that it cannot be wrong if it does not violate the nature of other conscious entities. The ocean cannot be wronged; only rational conceptual "self"-aware entities can be, because they are the things that can conceivably understand right and wrong. So it can't be wrong (as in, a violation of rights) to do something unless it infringes on the rights of other such entities. I argue that the only way to do this is to infringe on their ability to make decisions for themselves based on the facts of reality (so no force or fraud). Ocean dumping does not do this (unless someone owns the ocean or part of it, etc.), so it cannot be wrong, from the perspective of a discussion of rights. > Laws are loaded with values, and remain loaded with the same values > long after whoever wrote them ceases to exist in any manner. It > doesn't matter where the values come from. I could write a quick > program that randomly generates grammatically correct value judgements > ("it is wrong for jellyfish to vomit"), and it would naturally > instantiate a whole litany of injustices in the world. The exigencies of survival are an equally valid source of values, and obviously far more natural. They aren't equally valid; they have a basis in reality. Statements have to be connected to reality in some way, or they mean literally nothing. "It is wrong for jellyfish to vomit" is totally disconnected from reality, as "wrong" can't be applied objectively to jellyfish vomit, but only in relation to entities for which the words "right" and "wrong" can have any meaning.
Laws cease to exist (as laws) when there are no people who obey or enforce them. They apply to entities which can understand the meaning of "law." And yes, they are loaded with values, but they do not retain those values when they cease to be laws. They are, at best, a description of values from the past. But they are no longer laws, and have nothing to do with rights today (as those laws are no longer in effect). > Of course I shouldn't have to point out > that "more natural" does not automatically equal "better". It depends > entirely on how you go about determining the ultimate good. On this > list I generally go under the assumption that everyone agrees > increasing extropy is the ultimate good, and nature plays only an > incidental part there. The idea of rights without selves (including rights of cars, the ocean to not be dumped in or vomited in by jellyfish) cannot possibly increase extropy. To demonstrate, I'll quote the definition of extropy from the ExI website: "Extropy- The extent of a living or organizational system's intelligence, functional order, vitality, and capacity and drive for improvement." By making life of rational conceptual "self"-aware entities the basis for your system of rights, you establish vitality and capacity/drive for improvement at the center of your system of rights. Along with that, you generate order (as in, spontaneous order, haha) by barring the initiation of force, and you set in place a framework that creates a strong drive for self-improvement, including intellectual improvement, in everyone in the society. Granting cars and the ocean rights has nothing to do with increasing intelligence, improving life, or increasing order or vitality of a living/organizational system. So I don't see how, in the context of extropy, one could argue that a system of rights not based on selves (and more properly, rational self-interest), could be extropic. > OOH IT'S A SLIPPERY SLOPE, SOMEBODY GRAB THE SNOW TIRES. > > Couldn't resist. That's fine, I busted a gut at 1:30am when I read this. Tickled my funny bone, haha. > Logic can never serve as the basis of anything. It can only be used to > elaborate a perfectly arbitrary set of assumptions to its logical > conclusion. Faith and racism are arbitrary, but so is survival. True, I'm sorry. Reason is the basis, i.e. a combination of information about reality coupled with the application of logic/reason on that information. In my case, it is the nature of life, rational conceptual self-aware entities in particular, that leads to my conclusion that the only right you have is to not have force initiated against you. Everything else is in the province of personal morality (what should or shouldn't I do, what should my goals be, etc.), not essential right. That needs a self in order to work. Or at least, something equivalent, if you don't want to use "self". > You and your friend are placed in adjacent cages, with no hope of > escape except for two switches reachable only by you. One of them > releases you, but kills your friend. The other kills you, but releases > your friend. What do you do? > ... > Incorrect, as illustrated by the earlier case of the caged friends! In > practice, rights which are guaranteed to be universal and inviolable > are pretty much always either impossible or worthless. > > The right to life is a perfect example. Someday the universe will end, > and your right will be null and void. Realistically, though, you can > expect it to be violated a good deal before it comes to that. 
Okay, I gave a bad example, because rights, in my view, are based on life, and if life is impossible (like two caged friends where one must die), rights don't really apply anymore (all options lead to death). And perhaps I should not have gotten concrete in the way I proposed. Somewhat more abstract would be: I buy a car, but I don't want it after a month. In fact, I really hate it, because, say, my hypothetical girlfriend had a heart attack in it because, I don't know, she slipped over the middle line and had a good scare. Whatever. But I want it destroyed. The law prevents me from doing so. I do it anyway. Then I go to jail, for a good while, because I violated the rights of a car. To me, that makes no sense at all. The car isn't alive. I committed no wrong against it. I merely crushed it and sold it for the metal it had in it. The only way you can justify this (or try to, anyway) is if you relate the destruction of the car to me harming other people (like, they didn't get to have it, though in my opinion, that isn't harm, but that's beside the point). Any "right" would have to be connected to something that can be "wronged". Without "wrongs", i.e., actions where something is wronged, you have ludicrous situations such as the above. Moreover, you don't have a right to life. You have a right to your life. Big difference. In the one case, I have a right to never die. In the other, no one else can take my life from me. The universe isn't a person, so it can't "take" my life. I must die eventually (or do I? bum bum bum...) but I can be guaranteed the right not to be killed (unless I'm in a special life-boat type situation). > But that's a mere quibble; I qualify unequivocally as an ethical > subjectivist, and even border on metaphysical subjectivism at times. > I'll have to post my Napoleon argument to the list soon. This is an > argument that sparked the following statement an hour or two after I > finished it: "Yesterday I would have said yes, but this morning > Spencer shattered my objectivism". I'd like to hear it. Joshua Job nanite1018 at gmail.com From cluebcke at yahoo.com Tue Feb 9 18:18:06 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 9 Feb 2010 10:18:06 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <782068.69808.qm@web36507.mail.mud.yahoo.com> References: <782068.69808.qm@web36507.mail.mud.yahoo.com> Message-ID: <275932.16385.qm@web111206.mail.gq1.yahoo.com> Hi, noob lurker (me) is unlurking. > ...they reject the notion of consciousness and try to explain subjective first-person facts in objective third-person terms. You'll hopefully forgive my newness to some of the topics covered in this amazing group, but is it the case that an attempt to objectively describe first-person facts requires or equates to a rejection of the notion of consciousness? > But nothing in their descriptions will capture what it feels like to have one. A description is intended to describe, not to emulate. A scientific description of a toothache--including the brain states involved--no more fails because it cannot transmit pain, than a blueprint of a boat fails because it does not sail across the ocean, or impose upon the reader the smell of brine and a difficulty in keeping one's balance. How would one capture a feeling?
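A toy way to see the describe/emulate distinction in code (everything in this sketch is invented for illustration): a record that describes an ache carries the facts about it, but instantiating the record makes nothing hurt.

    from dataclasses import dataclass

    @dataclass
    class ToothacheDescription:
        # Third-person facts about an ache: a description, not an emulation.
        location: str            # e.g. "upper left molar"
        reported_intensity: int  # self-reported, on a 0-10 scale
        c_fiber_activity: float  # a hypothetical measured correlate

    ache = ToothacheDescription("upper left molar", 7, 0.83)
    print(ache)  # conveys the facts; nothing here feels pain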
From scerir at libero.it Tue Feb 9 21:09:47 2010 From: scerir at libero.it (scerir) Date: Tue, 9 Feb 2010 22:09:47 +0100 (CET) Subject: [ExI] quantum brains Message-ID: <15438936.1102881265749787403.JavaMail.defaultUser@defaultHost> > But I've read arxiv papers recently arguing that photosynthesis > functions via entanglement, so something that basic might be operating > in other bio systems. > Damien Broderick http://arxiv.org/abs/1001.5108 http://www.nature.com/nature/journal/v463/n7281/full/nature08811.html http://blogs.discovermagazine.com/cosmicvariance/2010/02/05/quantum-photosynthesis/ From gts_2000 at yahoo.com Tue Feb 9 21:27:17 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 9 Feb 2010 13:27:17 -0800 (PST) Subject: [ExI] Semiotics and Computability Message-ID: <740405.96745.qm@web36503.mail.mud.yahoo.com> --- On Tue, 2/9/10, Christopher Luebcke wrote: Welcome Christopher. > You'll hopefully forgive my newness to some of the topics > covered in this amazing group, but is it the case that an > attempt to objectively describe first-person facts requires > or equates to a rejection of the notion of consciousness? I had asserted that the world contains both subjective and objective facts or states-of-affairs. The question came up about how or why subjective facts like toothaches can qualify as empirical facts.
Often when we speak of "empirical facts" we mean something like "objectively existent facts, verifiable by any observer". I have no objection to that use of the word, but if we understand empirical only in that limited sense then we may find ourselves dismissing real first-person subjective facts as non-empirical and thus somehow unreal or less real than other facts. I contend that the word empirical often does and should also apply to subjective first-person facts. There exists for example an actual fact of the matter whether or not you feel hungry at this moment. I cannot know that fact without an honest report from you, but this in no way disqualifies it from having status as a real empirical fact. The fact of your feeling hungry or not has as much reality as does anything objectively verifiable. -gts From spike66 at att.net Tue Feb 9 21:44:14 2010 From: spike66 at att.net (spike) Date: Tue, 9 Feb 2010 13:44:14 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <275932.16385.qm@web111206.mail.gq1.yahoo.com> References: <782068.69808.qm@web36507.mail.mud.yahoo.com> <275932.16385.qm@web111206.mail.gq1.yahoo.com> Message-ID: > ...On Behalf Of Christopher Luebcke > Subject: Re: [ExI] Semiotics and Computability > > Hi, noob lurker (me) is unlurking... Hi Chris, welcome! Do tell us something about you, if you wish. Otherwise, welcome anyway. {8-] spike From cluebcke at yahoo.com Tue Feb 9 23:27:11 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 9 Feb 2010 15:27:11 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <740405.96745.qm@web36503.mail.mud.yahoo.com> References: <740405.96745.qm@web36503.mail.mud.yahoo.com> Message-ID: <519473.46888.qm@web111205.mail.gq1.yahoo.com> Thank you for the background and the welcome. > There exists for example an actual fact of the matter whether or not you feel hungry at this moment. I cannot know that fact without an honest report from you, but this in no way disqualifies it from having status as a real empirical fact. The fact of your feeling hungry or not has as much reality as does anything objectively verifiable. I'm not convinced that a subject's sense of "feeling hungry" cannot be objectively verified. I can verify with perfect accuracy whether a light is red without having to experience "purple", by using instruments; in the same way I expect that it will shortly (in historical terms) be possible to verify, via real-time monitoring of the brain, whether a subject is experiencing "feeling hungry", without the observer needing to experience an identical state of hunger. I could quite easily be wrong, of course.
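In code, that kind of instrument-based check might look like the following minimal sketch (the nanometre cutoffs are loose assumptions, not real colorimetry): a program certifies "the light is red" from a measured wavelength while nothing anywhere experiences redness.

    # Classify a light's colour from an instrument reading alone.
    # The wavelength bands are illustrative assumptions, not colorimetry.
    def classify(wavelength_nm: float) -> str:
        if 620 <= wavelength_nm <= 750:
            return "red"
        if 450 <= wavelength_nm < 495:
            return "blue"
        return "other"

    print(classify(680.0))  # -> "red": verified, though nothing here sees red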
From cluebcke at yahoo.com Tue Feb 9 23:36:23 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 9 Feb 2010 15:36:23 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: References: <782068.69808.qm@web36507.mail.mud.yahoo.com> <275932.16385.qm@web111206.mail.gq1.yahoo.com> Message-ID: <618013.65100.qm@web111215.mail.gq1.yahoo.com> Thanks Spike, Nothing terribly interesting to tell. I'm a software developer, science and space enthusiast, and armchair philosopher (I opted not to go pro after suffering through a semester-long graduate-level course on utilitarianism). I don't often identify myself as a member of any particular group that isn't defined by biology, but broadly I have strong transhumanist leanings. I'm kinda non-committal there as I'm really just now getting deep into the subject, and I don't think I have a good enough definition of "transhumanist" to know if I fit the bill. I'm also *very* interested in finding groups of very smart people who are working on the kinds of problems and projects that really excite me, and hanging on like a stubborn barnacle, leeching one small unit of knowledge at a time while hoping not to excessively irritate my hosts. Lastly, I'm deeply interested in the future and would quite like to continue to be a part of it :) - Chris From jrd1415 at gmail.com Wed Feb 10 05:28:22 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Tue, 9 Feb 2010 22:28:22 -0700 Subject: [ExI] quantum brains In-Reply-To: <15438936.1102881265749787403.JavaMail.defaultUser@defaultHost> References: <15438936.1102881265749787403.JavaMail.defaultUser@defaultHost> Message-ID: http://arxivblog.com/?p=370 Looks like quantum effects may be ubiquitous. On Tue, Feb 9, 2010 at 2:09 PM, scerir wrote: >> But I've read arxiv papers recently arguing that photosynthesis >> functions via entanglement, so something that basic might be operating >> in other bio systems.
>> Damien Broderick > > http://arxiv.org/abs/1001.5108 > http://www.nature.com/nature/journal/v463/n7281/full/nature08811.html > http://blogs.discovermagazine.com/cosmicvariance/2010/02/05/quantum-photosynthesis/ From stathisp at gmail.com Wed Feb 10 07:18:00 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 10 Feb 2010 18:18:00 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <523017.75621.qm@web36506.mail.mud.yahoo.com> References: <523017.75621.qm@web36506.mail.mud.yahoo.com> Message-ID: On 10 February 2010 01:59, Gordon Swobe wrote: >> As far as you're concerned the function of the nervous >> system - intelligence - which is due to the interactions between >> neurons bears no essential relationship to consciousness. > > As I define intelligence, some of it is encoded into DNA. Every organism has some intelligence including the lowly amoeba. This hapless creature has enough intelligence to find food and replicate but it has no idea of its own existence. I'm not arguing for amoeba consciousness, although I think that consciousness and intelligence are roughly proportional to each other, and that if anything has any intelligence or consciousness there must be a gradation between the amoeba, the human and the Jupiter brain. >> You believe that consciousness is a property of certain specialised >> cells. > > Yes. And I think you most likely believe so also, except when you have reason to argue for the imaginary mental states of amoebas to support your theory about the imaginary mental states of computers. :) > > We can suppose theories of alien forms of consciousness that might exist in computers and in amoebas and in other entities that lack nervous systems, as you seem wont to do, but it seems to me that there we cross over the line from science to science-fiction. The most important property of the nervous system is its ability to process information. Brainstem functions, subcellular functions and low level cortical functions do not manifest as intelligence or as consciousness. However, you believe that consciousness is only contingently related to intelligence, and you have also implied that the NCC is something other than the complex pattern of neural firings, since that can be reproduced by a computer. Thus there is no logical reason for you to insist that consciousness should be attached to nervous systems. It could be something that is secreted by neurons not associated in systems, or in non-neural cells. -- Stathis Papaioannou From stathisp at gmail.com Wed Feb 10 07:22:18 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 10 Feb 2010 18:22:18 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <519473.46888.qm@web111205.mail.gq1.yahoo.com> References: <740405.96745.qm@web36503.mail.mud.yahoo.com> <519473.46888.qm@web111205.mail.gq1.yahoo.com> Message-ID: On 10 February 2010 10:27, Christopher Luebcke wrote: > Thank you for the background and the welcome. > >> There exists for example an actual fact of the matter whether or not you feel hungry at this moment. I cannot know that fact without an honest report from you, but this in no way disqualifies it from having status as a real empirical fact. The fact of your feeling hungry or not has as much reality as does anything objectively verifiable.
> > I'm not convinced that a subject's sense of "feeling hungry" cannot be objectively verified. I can verify with perfect accuracy whether a light is red without having to experience "purple", by using instruments; in the same way I expect that it will shortly (in historical terms) be possible to verify, via real-time monitoring of the brain, whether a subject is experiencing "feeling hungry", without the observer needing to experience an identical state of hunger. We can objectively verify it if we make a correlation between a self-described mental state and a brain state, but only if we have that sort of brain state ourselves can we guess as to what it is like. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Feb 10 13:41:15 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 10 Feb 2010 05:41:15 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <557161.41950.qm@web36505.mail.mud.yahoo.com> --- On Wed, 2/10/10, Stathis Papaioannou wrote: > I'm not arguing for amoeba consciousness But you did make such an argument: when I asked you if a digital computer had conscious understanding of a symbol by virtue of associating that symbol with an image file, you compared the computer's conscious understanding to that of an amoeba, suggesting that both amoebas and computers have a wee bit of consciousness. Now you seem to agree that amoebas have no consciousness. So I'll ask you again: do you agree that digital computers cannot obtain conscious understanding of symbols by virtue of associating those symbols with image data? > The most important property of the nervous system is its > ability to process information. So they say. > However, you believe that consciousness is only > contingently related to intelligence, I believe consciousness evolved because it enhances intelligence. > and you have also implied that the NCC is something other than the > complex pattern of neural firings, since that can be reproduced by a > computer. Computers can reproduce just about any pattern. But a computerized pattern of a thing does not equal the thing patterned. I can for example reproduce the pattern of a tree leaf on my computer. That digital leaf will not have the properties of a real leaf. No matter what natural things we simulate on a computer, the simulations will always lack the real properties of the things simulated. Digital simulations of things can do no more than *simulate* those things. It mystifies me that people here believe simulations of organic brains should somehow qualify for an exception to this rule. Neuroscientists should someday have at their disposal perfect digital simulations of brains to use as tools for doing computer-simulated brain surgeries. But according to you and some others, those digitally simulated brains will have consciousness and so might qualify as real people. This would mean medical students will have access to computer simulations of hearts to do simulated heart surgeries, but they won't have access to the same kinds of computerized tools for doing simulated brain surgeries. Those darned computer simulated brains won't sign the consent forms. People like me will want to do the simulated surgeries anyway. The Society for the Prevention of Simulated Cruelty to Simulated Brains will oppose me. 
-gts From jonkc at bellsouth.net Wed Feb 10 16:17:19 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 10 Feb 2010 11:17:19 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <557161.41950.qm@web36505.mail.mud.yahoo.com> References: <557161.41950.qm@web36505.mail.mud.yahoo.com> Message-ID: Since my last post Gordon Swobe has posted 4 times: > when I asked you [Stathis Papaioannou] if a digital computer had conscious understanding of a symbol by virtue of associating that symbol with an image file, you compared the computer's conscious understanding to that of an amoeba, suggesting that both amoebas and computers have a wee bit of consciousness. I can make guesses but there is no way I can know or will ever know if an amoeba is conscious; hell there is no way I can know if a rock is conscious, or even if Gordon Swobe is. Of course some things don't act as if they were conscious, as in people when they are sleeping; the universal rule of thumb for detecting consciousness is intelligent action; even Gordon Swobe uses this rule every day of his life and every hour of his day except when he's sleeping or debating the matter on the Extropian list. I personally wouldn't endow the way an amoeba behaves with the grand title "intelligence", but reasonable people can differ on this matter; Swobe takes a somewhat more liberal view and thinks amoebas are intelligent. What is NOT reasonable is claiming that it is intelligent but not conscious. That is complete gibberish from an evolutionary viewpoint. > do you agree that digital computers cannot obtain conscious understanding of symbols by virtue of associating those symbols with image data? No I don't agree, and I must confess to having a certain feeling of contempt for the idea. I think contempt is the proper word for something that is not only inconsistent with the discoveries made by science but is also inconsistent with the way we live our daily lives when we are not debating on the Extropian List. > I believe consciousness evolved because it enhances intelligence. And in a magnificent demonstration of doublethink Swobe also believes that a behavioral demonstration like the Turing test cannot detect consciousness. > Digital simulations of things can do no more than *simulate* those things. Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago; think of the time we could have saved. Oh wait he did. John K Clark From pharos at gmail.com Wed Feb 10 16:20:37 2010 From: pharos at gmail.com (BillK) Date: Wed, 10 Feb 2010 16:20:37 +0000 Subject: [ExI] Google Buzz has arrived for Gmail users Message-ID: Gmail users will soon notice a Buzz folder appearing next to their Inbox. (You need to be using the newer version of Gmail, not the simpler older version). So far it seems to be like an instant messaging system to which you can add pictures or videos. It has 'Friends' and 'Followers' to organise the distribution of your Buzz messages. Messages appear immediately, so a conversation can be maintained, or more like a conference call with selected 'Friends'. Google claim that their anti-spam software and their 'Don't like' button will cut down on the amount of garbage that these social systems usually generate. As with all these sharing systems, you should immediately edit your Buzz profile to make sure you are only sharing what you want to share.
;) Commentators are hinting at Google trying to replace Facebook, Twitter, etc., but we will have to wait and see how many people use it. There are more features expected to be rolled out in the coming weeks. More info here: BillK From steinberg.will at gmail.com Wed Feb 10 18:07:04 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 10 Feb 2010 13:07:04 -0500 Subject: [ExI] Mary passes the test Message-ID: <4e3a29501002101007i46d887eeqe3ce2f150dd60b86@mail.gmail.com> Upon emerging from her room, The Scientists present Mary with two large colored squares. One is red, and one is blue. I wholeheartedly believe Mary will be able to tell which one is red. When asked why, Mary says "I had a feeling that's what it would look like." From steinberg.will at gmail.com Wed Feb 10 18:48:57 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 10 Feb 2010 13:48:57 -0500 Subject: [ExI] Mary passes the test. Message-ID: <4e3a29501002101048t3fa58f24t9c2b5e834559afdc@mail.gmail.com> "Mary the Color Scientist" is often used as an iron defense of the magicalness or platonic reality of qualia. To some, it is enough to say that Mary learns something COMPLETELY new about the color red when she sees it, giving the sense that, even for all fact and physicality, something is missing until Mary sees that color. Why is this thesis so readily accepted? I think that qualia, however weird or outside of normal fact, are, at heart, inextricable from those facts. For example: Upon emerging from her room, The Scientists present Mary with two large colored squares. One is red, and one is blue. I wholeheartedly believe Mary will be able to tell which one is red. When asked why, Mary says "I had a feeling that's what it would look like." I would say that Mary, after learning about red and its neural pathways and physical properties, is able to form some conception of the color in her mind's eye, regardless of whether it has been presented to her, because red is "in there" somewhere. Here is another example: The Scientists invent a fake color called "bread." They teach Mary all there is to know about red and about bread. When asked which one is the real color, Mary tells the scientists that it is obviously red, because her mind is able to, unexplainably but surely, delve deeper into the understanding of this. My point is that there is, at least to me, something in the facts that will, however minutely, betray the idea of redness to her. There was recently an article in Discover about the loss of a man's mind's eye wherein he exhibited some blindsight-like phenomena, knowing without seeing. I believe that, in this sense, Mary can KNOW all there is about red, INCLUDING its qualic perception. She still may gain something when she sees the red, but it is, in this case, akin to brushing the dirt off a treasure chest she has found in a hole to reveal it in its totality. The hole is there and some of the dirt is scraped away, and she has a very good idea of what red is. Seeing it merely puts her mind at ease, knowing she was right all along.
From max at maxmore.com Wed Feb 10 18:48:48 2010 From: max at maxmore.com (Max More) Date: Wed, 10 Feb 2010 12:48:48 -0600 Subject: [ExI] The Future of Markets Message-ID: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> An excellent piece by the former head of Oxford University's business school: The Future of Markets John Kay 20 October 2009, Wincott Foundation http://www.johnkay.com/2009/10/20/the-future-of-markets/ ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From lacertilian at gmail.com Wed Feb 10 21:07:39 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 13:07:39 -0800 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> Message-ID: JOSHUA JOB : > A fetus is not yet a person, nor has ever been a person (as it is not nor > ever has been a rational conceptual entity). So it cannot have any rights. > At least until 34 weeks, they cannot possibly be regarded as people, at > least from what I've read of neural development in the brain (the neocortex > doesn't connect up until around then). After that it gets a little more > fuzzy. As for comatose patients, if they have the capacity for brain > function (i.e., they are not totally brain damaged), then they count as > people, though others have to make decisions for them as that is a state > which it is very difficult to recover from. If they are brain dead, then > that entity no longer exists nor can it exist again (or at least, if it > could, the body itself is likely irrelevant), and so no longer has rights. You speak as though these issues have long since been resolved! In reality, all you're giving me are opinions: facts which are currently in contention. Not to say I don't agree with you, but we can't brush the problem of potential consciousness under the table just because neither of us happens to consider it a major problem. It simply wouldn't be prudent. > I am saying that it cannot be wrong if it does not violate the nature of > other conscious entities. The ocean cannot be wronged, only rational > conceptual "self"-aware entities can be, because they are the things that > can conceivably understand right and wrong. So it can't be wrong (as in, a > violation of rights) to do something unless it infringes on the rights of > other such entities. Well, I think we've taken this argument as far as it can go then. Clearly you refuse to even entertain the idea that rights could be anything other than personal rights. I suppose it's an issue of semantics. What if I say cars could have an explicitly-defined correct way to be treated, that is, could be covered by a system of "corrects"? Surely you would agree that, looking at a galaxy-spanning machine devoid of anything remotely resembling rational self-aware thought, we could arbitrarily suppose that it is "meant" to further some purpose and from that assumption determine how well it is operating -- that is, how correctly, or how rightly. I'll start a new thread for the Napoleon argument later on, if I have the time.
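To make the "system of corrects" concrete, here is a minimal sketch (every name and rule in it is invented for illustration): an explicitly defined right way to treat a car, checked mechanically, with no self anywhere in the loop.

    from dataclasses import dataclass

    @dataclass
    class Car:
        km_since_service: int
        fuel_fraction: float
        garaged: bool

    def treated_correctly(car: Car) -> bool:
        # The "corrects": an arbitrary but explicit treatment spec.
        return all([
            car.km_since_service < 10_000,  # serviced on schedule
            car.fuel_fraction > 0.1,        # never run dry
            car.garaged,                    # kept out of the weather
        ])

    print(treated_correctly(Car(4000, 0.6, True)))    # True: treated "rightly"
    print(treated_correctly(Car(25000, 0.0, False)))  # False: "wronged", per the spec

Whether such a spec amounts to a genuine right is exactly what is in dispute here; the sketch only shows that the evaluation itself requires no evaluator's awareness.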
From lacertilian at gmail.com Wed Feb 10 21:44:15 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 13:44:15 -0800 Subject: Re: [ExI] How not to make a thought experiment In-Reply-To: References: <557161.41950.qm@web36505.mail.mud.yahoo.com> Message-ID: John Clark : > Gordon Swobe : >> I believe consciousness evolved because it enhances intelligence. > > And in a magnificent demonstration of doublethink Swobe also believes that a > behavioral demonstration like the Turing test cannot detect consciousness. This is actually a very good point. I don't have clearly articulated beliefs regarding the relationship between consciousness and evolution, myself, but clearly Gordon does. Two propositions and two conclusions, based on the inferred worldview of John Clark, to be accepted, rejected, or amended by the reader as seems appropriate: P(a): All behavior is measurable. P(b): Evolution influences behavior and only behavior. C(c): If and only if consciousness is measurable, it is subject to evolution. C(d): If consciousness is subject to evolution, it is necessarily measurable. From spike66 at att.net Wed Feb 10 21:57:00 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 13:57:00 -0800 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> Message-ID: <617F6A986BE2417B92FC9A187E1A4D71@spike> > ...On Behalf Of Spencer Campbell > Subject: Re: [ExI] Rights without selves (was: Nolopsism) > > JOSHUA JOB : > > A fetus is not yet a person, nor has ever been a person (as > it is not > > nor ever has been a rational conceptual entity). So it > cannot have any rights... > > You speak as though these issues have long since been > resolved! In reality, all you're giving me are opinions... I used to find the most hardcore right-to-lifers and argue that my sperm have rights. Their ova did as well, since both types of gametes fit every definition of "alive." Somehow that argument never went anywhere. {8^D spike From lacertilian at gmail.com Wed Feb 10 21:54:14 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 13:54:14 -0800 Subject: Re: [ExI] Mary passes the test. In-Reply-To: <4e3a29501002101048t3fa58f24t9c2b5e834559afdc@mail.gmail.com> References: <4e3a29501002101048t3fa58f24t9c2b5e834559afdc@mail.gmail.com> Message-ID: Will Steinberg : > The Scientists invent a fake color called "bread." They teach Mary all > there is to know about red and about bread. When asked which one is the > real color, Mary tells the scientists that it is obviously red, because her > mind is able to, unexplainably but surely, delve deeper into the > understanding of this. My point is that there is, at least to me, something > in the facts that will, however minutely, betray the idea of redness to her. This is actually within the realm of possibility, surprisingly enough, so your theory is subject to a certain degree of testing. http://en.wikipedia.org/wiki/Impossible_colors Can you imagine bluish-yellow or greenish-red? The latter is unusually easy for me, because I'm deuteranomalous. Without giving more than a few minutes of effort to it, I'm willing to say that I find the former completely inconceivable.
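Campbell's propositions P(a)-C(d) a few messages up admit a quick formal check. A minimal sketch in Lean (the predicate encoding over a type of traits is an assumption of this sketch, not anything posted above): C(d) follows from P(a) and P(b) alone, while the "only if" half of C(c) would need extra premises, e.g. that everything measurable is behavior and that all behavior is subject to evolution.

    -- Traits as a type; the three notions as predicates over traits.
    variable {Trait : Type} (Behavior Measurable Evolvable : Trait → Prop)

    -- P(a): all behavior is measurable.
    -- P(b): evolution influences behavior and only behavior
    --       (read here as: whatever evolution touches is behavior).
    -- C(d): anything subject to evolution is measurable.
    theorem Cd (Pa : ∀ t, Behavior t → Measurable t)
               (Pb : ∀ t, Evolvable t → Behavior t) :
        ∀ t, Evolvable t → Measurable t :=
      fun t ht => Pa t (Pb t ht)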
From spike66 at att.net Thu Feb 11 00:08:25 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 16:08:25 -0800 Subject: [ExI] too funny to not pass along In-Reply-To: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> References: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> Message-ID: In Washington DC today, it was so cold that the flashers are describing themselves. From nanite1018 at gmail.com Thu Feb 11 00:10:24 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 10 Feb 2010 19:10:24 -0500 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> Message-ID: <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> On Feb 10, 2010, at 4:07 PM, Spencer Campbell wrote: > You speak as though these issues have long since been resolved! In > reality, all you're giving me are opinions: facts which are currently > in contention. Not to say I don't agree with you, but we can't brush > the problem of potential consciousness under the table just because > neither of us happens to consider it a major problem. It simply > wouldn't be prudent. A fetus (at least in the first 6 or 7 months) cannot be conscious (in the human meaning, not like ants), nor can a brain dead person, this has pretty much been proven. I don't see how anyone can contend that it is not the case. That's why I spoke as if it was fact. > Well, I think we've taken this argument as far as it can go then. > Clearly you refuse to even entertain the idea that rights could be > anything other than personal rights. I suppose it's an issue of > semantics. What if I say cars could have an explicitly-defined correct > way to be treated, that is, could be covered by a system of > "corrects"? > > Surely you would agree that, looking at a galaxy-spanning machine > devoid of anything remotely resembling rational self-aware thought, we > could arbitrarily suppose that it is "meant" to further some purpose > and from that assumption determine how well it is operating -- that > is, how correctly, or how rightly. The problem with "corrects" or your machine is that you are referencing consciousness and entities even when discussing them. I agree fully that if we were to look at some huge machine without anything resembling rational self-aware thought, we could figure out what it seems to do and determine how well it is doing it. Not a problem at all. The problem is that "we" have to determine how well it is doing it. "We" as in us rational, self-aware entities. Without that, anywhere along the line, I can't see how you can create the idea of a "correct" or a "right" or a "purpose" or a "meaning." You've got to have someone who can actually evaluate things, or else there is no meaning, value, or purpose to anything at all. In other words, you need rational self-aware entities, or there can be no idea of meaning or purpose, and thus no rights. Without assuming selves exist (as we perceive them), I don't see how you can get anywhere. I hope I've made my point more clear (I don't think I expressed my position in this paragraph very clearly before). If I refuse to entertain the thought, it is because the thought cuts itself off at the knees, as far as I can tell, haha.
Joshua Job nanite1018 at gmail.com From pharos at gmail.com Thu Feb 11 00:28:37 2010 From: pharos at gmail.com (BillK) Date: Thu, 11 Feb 2010 00:28:37 +0000 Subject: Re: [ExI] too funny to not pass along In-Reply-To: References: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> Message-ID: On Thu, Feb 11, 2010 at 12:08 AM, wrote: > In Washington DC today, it was so cold that the flashers are describing > themselves. > > You'll probably like this as well, then. BillK From emlynoregan at gmail.com Thu Feb 11 01:09:54 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 11 Feb 2010 11:39:54 +1030 Subject: [ExI] google buzz Message-ID: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Hi all, I'm trolling for contacts on Google Buzz. Any gmailers want to put their hands up as buzzers / potential buzzers? -- Emlyn http://www.productx.net - free rss to email gateway, zero signup http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From pharos at gmail.com Thu Feb 11 01:23:15 2010 From: pharos at gmail.com (BillK) Date: Thu, 11 Feb 2010 01:23:15 +0000 Subject: Re: [ExI] google buzz In-Reply-To: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Message-ID: On 2/11/10, Emlyn wrote: > I'm trolling for contacts on Google Buzz. Any gmailers want to put > their hands up as buzzers / potential buzzers? > > The best way is to look in your contacts list, or on exi-chat, for gmail addresses, then set your Buzz to 'Follow' those you would like to hear from. Some of them, in turn, might start 'Following' you. This then gets into the Facebook type of etiquette problems. If nobody wants to 'Follow' me should I feel insulted or relieved? If you 'Un-follow' someone, does that make an enemy for life? :) BillK From emlynoregan at gmail.com Thu Feb 11 01:27:57 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 11 Feb 2010 11:57:57 +1030 Subject: Re: [ExI] google buzz In-Reply-To: References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Message-ID: <710b78fc1002101727q7fd1cf7eu3f1da0cc16c18848@mail.gmail.com> On 11 February 2010 11:53, BillK wrote: > On 2/11/10, Emlyn wrote: >> I'm trolling for contacts on Google Buzz. Any gmailers want to put >> their hands up as buzzers / potential buzzers? >> >> > > The best way is to look in your contacts list, or on exi-chat, for > gmail addresses, then set your Buzz to 'Follow' those you would like > to hear from. Yeah, gmail seems to have chosen a bunch of people from my contacts already, I'm not really sure how/why. > > Some of them, in turn, might start 'Following' you. It's more like twitter than facebook in that way; it's an asymmetrical relationship. > > This then gets into the Facebook type of etiquette problems. > If nobody wants to 'Follow' me should I feel insulted or relieved? > If you 'Un-follow' someone, does that make an enemy for life? :) > > BillK I don't have all that many enemies at the moment (that I know of), so I'll risk it.
-- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From spike66 at att.net Thu Feb 11 01:32:23 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 17:32:23 -0800 Subject: [ExI] too funny to not pass along In-Reply-To: References: <201002101849.o1AIn1YP011304@andromeda.ziaspace.com> Message-ID: <639A19F962994900B075B5E001256BBA@spike> > ...On Behalf Of BillK > Subject: Re: [ExI] too funny to not pass along > > On Thu, Feb 11, 2010 at 12:08 AM, wrote: > > In Washington DC today, it was so cold that the flashers are > > describing themselves. > > > > > > You'll probably like this as well, then. > > snowstorms-freak-out-dc/> > > > BillK Thanks BillK! The whole thing is still really bugging me: the perfect storm in DC, the collapse of the Copenhagen conflab, the new embarrassments coming nearly every day for the world climate experts. I had a great niche business ready to go, and all this is spoiling it. I was going to be a carbon credit provider for global warming skeptics. Oh it would have been fine, and I would have made a cubic buttload of money. I could have a special line of products, such as a square meter of land, growing something, anything. Then if a city slicker wanted to drive a pickup truck and someone gave her a bunch of trash about it, she could pull out the certificate and say "Hey, I own a farm and this is a farm truck. Do yooou own a farm?" Just think of all the stuff we could sell. Looks to me like I could earn subsidies for growing anything carbon-intensive instead of cash crops for instance. It looks to me like the wheels are coming off of the inconvenient truth at a remarkable pace. I fear there will be no more fortunes to be made here. spike From lacertilian at gmail.com Thu Feb 11 01:31:52 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 17:31:52 -0800 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> Message-ID: JOSHUA JOB : > A fetus (at least in the first 6 or 7 months) cannot be conscious (in the human meaning, not like ants), nor can a brain dead person, this has pretty much been proven. I don't see how anyone can contend that it is not the case. That's why I spoke as if it was fact. Nevertheless, many people make precisely that contention. It isn't an entirely sane position, but it does imply one good point: it's pretty much impossible to draw a precise line between conscious and not-conscious, as indicated by your 6 or 7 months figure. Two weeks is a huge margin of error for just about everything short of geology and astronomy. Sometimes history and biology, depending on how far back you go. > The problem is that "we" have to determine how well it is doing it. "We" as in us rational, self-aware entities. Without that, anywhere along the line, I can't see how you can create the idea of a "correct" or a "right" or a "purpose" or a "meaning." You've got to have someone who can actually evaluate things, or else there is no meaning, value, or purpose to anything at all. In other words, you need rational self-aware entities, or there can be no idea of meaning or purpose, and thus no rights. Without assuming selves exist (as we perceive them), I don't see how you can get anywhere. I hope I've made my point more clear (I don't think I expressed my position in this paragraph very clearly before). 
If I refuse to entertain the thought, it is because the thought cuts itself off at the knees, as far as I can tell, haha. Yes: this is the best description of your point yet, I think. Nevertheless I still disagree. To say that judgements made by rational, self-aware entities are more valid than those made by mechanistic automatons is entirely arbitrary. I suppose it derives from the claim that an internal experience of deeming something right or wrong is somehow important, but that experience is fleeting and currently inaccessible to all but the mind of origin. Morality and ethics are methods of categorizing actions in the world. Nothing more, nothing less. There isn't anything especially conscious about categorization. http://www.youtube.com/watch?v=ioylPVTwvV4 That's right: coin sorting machine. No philosophical argument can stand up in the face of a coin sorting machine. I'll write up that Napoleon thing, now. Actually I think it will be rather short. From lacertilian at gmail.com Thu Feb 11 01:52:41 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 17:52:41 -0800 Subject: [ExI] The Napoleon problem Message-ID: Suppose you are a doctor in an insane asylum. It's your typical insane asylum, all things considered, hailing from an era before anyone thought to call such things "mental health facilities". Electroshock treatment is the gold standard. You've helped a good number of patients that way. One, however, has proved challenging. This is a man who, for at least as long as he has been capable of speech, has firmly believed that he is Napoleon Bonaparte. He claims to have traveled to the future during the battle of Waterloo, leaving a time-double in his place. Naturally the time-double is a philosophical zombie, he explains, but that is neither here nor there. There has never been a time when the man did not believe he was Napoleon, and today he died from complications during a drastic (and unsuccessful) treatment. He looked nothing like Napoleon, and hardly spoke a word of French. When informed of these facts, he consistently shrugged them off as absurd. He had memorized every known detail of Napoleon's life, to the point where he might have been a better authority on the subject than Napoleon himself ever was. His delusional belief has never wavered to the slightest degree, and now we know for certain that it never will. Having declared time of death, you're left with one niggling question in the back of your mind: Was this man Napoleon? From nanite1018 at gmail.com Thu Feb 11 01:54:13 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 10 Feb 2010 20:54:13 -0500 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> Message-ID: <80F2A3B3-E11C-4DAA-89D1-A0C60092F02F@GMAIL.COM> On Feb 10, 2010, at 8:31 PM, Spencer Campbell wrote: > Yes: this is the best description of your point yet, I think. > Nevertheless I still disagree. To say that judgements made by > rational, self-aware entities are more valid than those made by > mechanistic automatons is entirely arbitrary. I suppose it derives > from the claim that an internal experience of deeming something right > or wrong is somehow important, but that experience is fleeting and > currently inaccessible to all but the mind of origin. > > Morality and ethics are methods of categorizing actions in the world. > Nothing more, nothing less.
There isn't anything especially conscious > about categorization. Well, while morality and ethics are categorizations of actions, I think they are a special type. They are "this is good" or "this is bad." Now you need "good or bad for or to whom or what?" I don't see any answer to that question besides "to 'people.'" Or, perhaps, living things. I don't see how something can be good or bad to something that doesn't have an awareness of good or bad, or even an alternative that changes anything. A car, without a person, is useless; it has no meaning, no value. It certainly doesn't value itself, since "it" is incapable of generating evaluations, and doesn't employ de se operators (which I think follows from the paper that started this all off). So it doesn't matter to the car, or anything else (because nothing can "matter" without an evaluator), whether it gets blown up, or rusts, or runs out of gas. Whereas, to life, it is really important. It determines whether it continues to exist or not. A car without evaluating entities is just a piece of matter, and matter just changes forms; it doesn't wink out of existence (I'm including energy as a form of matter). Without something which can say "this is good or bad for/to 'me,'" I'm not sure how you would build something that can say something is good or bad in a non-arbitrary fashion. I say in a non-arbitrary fashion, because while you can build something which says "it is wrong for jellyfish to vomit", it really makes no difference to anything that a jellyfish vomits. But if you say "going and starting to kill people is bad because it hurts everyone's ability to live, including me," then you have an objective basis for that statement: your existence is threatened by people murdering each other. It actually makes a difference whether you live or die, because you can live or die, you can wink out of existence, even if the matter you are composed of is never destroyed. I can't wait to read the Napoleon argument. I'm sure it will, at least, be interesting. Joshua Job nanite1018 at gmail.com From nanite1018 at gmail.com Thu Feb 11 02:01:36 2010 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 10 Feb 2010 21:01:36 -0500 Subject: Re: [ExI] The Napoleon problem In-Reply-To: References: Message-ID: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> On Feb 10, 2010, at 8:52 PM, Spencer Campbell wrote: > Was this man Napoleon? That was underwhelming, haha. Of course not. You have huge amounts of evidence to show he was born, him growing up, etc., and so obviously he is not actually Napoleon. He is also explicitly said not to speak French, nor look like him. Even if he had read everything Napoleon had ever written, every word written about him, there would still be information he did not know (since obviously, every single experience he had ever had was not written down, and even if it was, there would necessarily be detail missing). So he doesn't even have all the knowledge Napoleon had. So while he might have believed whatever he liked, he certainly was not the man Napoleon Bonaparte, Emperor of France. He was not born at the same time, nor had all the memories and experiences of the man. An expert on him, certainly. But not the man himself. I don't mean to sound, well, mean, but this strikes me as not a problem at all. Am I missing something?
Joshua Job nanite1018 at gmail.com From lacertilian at gmail.com Thu Feb 11 02:26:58 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 18:26:58 -0800 Subject: Re: [ExI] The Napoleon problem In-Reply-To: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> References: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> Message-ID: JOSHUA JOB : > I don't mean to sound, well, mean, but this strikes me as not a problem at all. Am I missing something? Probably not. I was fifteen or something the last time I used it, and now it seems a little trite even to me. There's always a chance, though, so I can elaborate. It's basically an epistemological question: how do we KNOW that he isn't Napoleon? > You have huge amounts of evidence showing his birth, his growing up, etc., and so obviously he is not actually Napoleon. A hole in the experiment, easily patched: of course he believes he regressed to an embryo during the jump through time, shoving aside the soul that otherwise would have occupied it. Not the heart of the problem, though, either way. There is an assumption, built in, that he and Napoleon really were different people. No weird magic took place. Even having established that, there is a (non-intuitive) way to view the question "is he Napoleon" so that it has a very interestingly uncertain answer. > He is also explicitly said not to speak French, nor to look like him. Even if he had read everything Napoleon had ever written, every word written about him, there would still be information he did not know (since obviously, not every single experience he ever had was written down, and even if it had been, there would necessarily be detail missing). So he doesn't even have all the knowledge Napoleon had. And how, precisely, do we know he doesn't? He knows more than we do. If anyone on Earth knew every detail of Napoleon's life, it would be him. > So while he might have believed whatever he liked, he certainly was not the man Napoleon Bonaparte, Emperor of France. He was not born at the same time, nor did he have all the memories and experiences of the man. An expert on him, certainly. But not the man himself. It's much more likely that your knowledge of Napoleon is false than it is that his knowledge of Napoleon is false. He is not only an expert on Napoleon, but a super-expert; no one can answer Napoleon-related questions with anywhere near the precision or accuracy that he can, even if he blithely confabulates unverifiable details to do so. So, when it comes to the question, "who is Napoleon", who are you going to believe? You or him? The obvious answer is you, because the man is insane. But one has to wonder what makes this Napoleon question so fundamentally different from every other Napoleon question, so that no degree of knowledge about Napoleon is the least bit helpful in answering it. From lacertilian at gmail.com Thu Feb 11 02:53:16 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 10 Feb 2010 18:53:16 -0800 Subject: Re: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <80F2A3B3-E11C-4DAA-89D1-A0C60092F02F@GMAIL.COM> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <8370C045-E609-43A7-A51F-D408D15214AF@GMAIL.COM> <80F2A3B3-E11C-4DAA-89D1-A0C60092F02F@GMAIL.COM> Message-ID: JOSHUA JOB : > Without something which can say "this is good or bad for/to 'me,'" I'm not sure how you would build something that can say something is good or bad in a non-arbitrary fashion.
I say in a non-arbitrary fashion, because while you can build something which says "it is wrong for jellyfish to vomit", it really makes no difference to anything that a jellyfish vomits. We agree. > But if you say "going and starting to kill people is bad because it hurts everyone's ability to live, including me," then you have an objective basis for that statement: your existence is threatened by people murdering each other. It actually makes a difference whether you live or die, because you can live or die, you can wink out of existence, even if the matter you are composed of is never destroyed. We disagree. If it does not make a difference when a jellyfish is made to vomit, then it does not make a difference when you are made to die. I'm going to take the materialist route here: both of these things are just thermodynamical processes, at root, and thus equally important in metaphysical terms. It is arbitrary to say that death (or anything else) is bad, no matter how rational your basis for saying so is. To a subjectivist, an objective basis is just a more-easily-rationalized subjective basis. Both are imposed on reality (by us self-aware folk, if you insist); they are not in any way inherent to reality itself, no matter how neatly they appear to fit. All of this becomes overwhelmingly clear when you accept the premise that (a) you can't wink out of existence because (b) "you" don't exist to begin with. I hinted at an embryonic theory in an earlier post to this list. I'm now calling it the ontological plane. There's a real-imaginary axis, which you could also call a physical-virtual axis, and there's an existent-nonexistent axis. My computer is real and exists; if I made my computer emulate itself or another computer then the virtual computer would be imaginary, but it would still exist. Among the whole field of things possible or impossible, the vast majority are both imaginary and nonexistent. According to this theory, I, in the sense of my quintessential self and not my physical manifestation, am one of the very few things that falls in the "real but nonexistent" quadrant. There is an imaginary symbol, "I", and that exists whenever it's invoked in my brain; but it points to a thing, me, which doesn't. I am regaining hope in the potential for this argument to become productive. What I've written here is far more clear and compelling to me, at least, than the Napoleon problem ever was. One of us may just experience a change of mind, one of these days! Hint hint! (I am implying that it will be you. That is the hint.) From spike66 at att.net Thu Feb 11 03:01:08 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 19:01:08 -0800 Subject: [ExI] google buzz In-Reply-To: References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Message-ID: > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of BillK > ... > > Some of them, in turn, might start 'Following' you. > > This then gets into the Facebook type of etiquette problems. > If nobody wants to 'Follow' me should I feel insulted or relieved? > If you 'Un-follow' someone, does that make an enemy for life? :) > > BillK Facebook has a lot of creepy stuff like that, which is why I flatly refused to play. This is ME talking, Mr. Openness, the one with a G rated life and few if any former girlfriends. What bothered me when my wife started using it is that you get all these requests from old acquaintances wanting to be friends. 
Well, in the meat world I never turn away anyone wanting to be friends, never. But in the e-world, you almost need to turn away most of these requests, for the lack of time to write. You might get a bunch of people who read your online comments who you never met and are not sure you want to. Then if you say no or ignore their request you feel like a heel. These please-be-my-friend people remind me of the Strangers of America: http://www.theonion.com/content/news/nations_strangers_decry_negative spike From emlynoregan at gmail.com Thu Feb 11 03:41:53 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 11 Feb 2010 14:11:53 +1030 Subject: [ExI] google buzz In-Reply-To: References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> Message-ID: <710b78fc1002101941r444649c3j6f2c7262939e1e6f@mail.gmail.com> On 11 February 2010 13:31, spike wrote: > > >> -----Original Message----- >> From: extropy-chat-bounces at lists.extropy.org >> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of BillK >> ... >> >> Some of them, in turn, might start 'Following' you. >> >> This then gets into the Facebook type of etiquette problems. >> If nobody wants to 'Follow' me should I feel insulted or relieved? >> If you 'Un-follow' someone, does that make an enemy for life? :) >> >> BillK > > Facebook has a lot of creepy stuff like that, which is why I flatly refused > to play. This is ME talking, Mr. Openness, the one with a G rated life and > few if any former girlfriends. What bothered me when my wife started using > it is that you get all these requests from old acquaintances wanting to be > friends. Well, in the meat world I never turn away anyone wanting to be > friends, never. But in the e-world, you almost need to turn away most of > these requests, for the lack of time to write. You might get a bunch of > people who read your online comments who you never met and are not sure you > want to. Then if you say no or ignore their request you feel like a heel. > > These please-be-my-friend people remind me of the Strangers of America: > > http://www.theonion.com/content/news/nations_strangers_decry_negative > > spike The friending thing is odd, but it's a misnomer. "Will you be my friend?" should be "Will you agree to form an edge between our nodes?" As to weird unknown freaks reading your comments... how many years have you been posting on this list???? -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From msd001 at gmail.com Thu Feb 11 03:45:36 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 10 Feb 2010 22:45:36 -0500 Subject: Re: [ExI] google buzz In-Reply-To: <710b78fc1002101941r444649c3j6f2c7262939e1e6f@mail.gmail.com> References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com> <710b78fc1002101941r444649c3j6f2c7262939e1e6f@mail.gmail.com> Message-ID: <62c14241002101945r2ca74ca2pf869cb266dafd0c2@mail.gmail.com> On Wed, Feb 10, 2010 at 10:41 PM, Emlyn wrote: > As to weird unknown freaks reading your comments... how many years > have you been posting on this list???? what of the well known freaks reading your comments?
(you know who you are :) From steinberg.will at gmail.com Thu Feb 11 03:55:42 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 10 Feb 2010 22:55:42 -0500 Subject: Re: [ExI] The Napoleon problem In-Reply-To: References: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> Message-ID: <4e3a29501002101955u456edc1cuaa2154e49b49df8a@mail.gmail.com> I think Spencer will soon tell us why our believing that the man is not Napoleon is irreconcilable with some other belief many of us hold. From steinberg.will at gmail.com Thu Feb 11 03:56:49 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 10 Feb 2010 22:56:49 -0500 Subject: Re: [ExI] The Napoleon problem In-Reply-To: <4e3a29501002101955u456edc1cuaa2154e49b49df8a@mail.gmail.com> References: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> <4e3a29501002101955u456edc1cuaa2154e49b49df8a@mail.gmail.com> Message-ID: <4e3a29501002101956s4ad74326j1a5adf642864d6ba@mail.gmail.com> Oh, I suppose that sort of happened. I am ignorant. From spike66 at att.net Thu Feb 11 05:05:17 2010 From: spike66 at att.net (spike) Date: Wed, 10 Feb 2010 21:05:17 -0800 Subject: Re: [ExI] google buzz In-Reply-To: <62c14241002101945r2ca74ca2pf869cb266dafd0c2@mail.gmail.com> References: <710b78fc1002101709u37d9e48fl1452ee9c093afb40@mail.gmail.com><710b78fc1002101941r444649c3j6f2c7262939e1e6f@mail.gmail.com> <62c14241002101945r2ca74ca2pf869cb266dafd0c2@mail.gmail.com> Message-ID: <4C147272924742F0BA9060FCF41EDA0F@spike> > ...On Behalf Of Mike Dougherty > Subject: Re: [ExI] google buzz > > On Wed, Feb 10, 2010 at 10:41 PM, Emlyn wrote: > > As to weird unknown freaks reading your comments... how many years > > have you been posting on this list???? > > what of the well known freaks reading your comments? > (you know who you are :) Ja, well a friend is a person who knows how you really are, and likes you anyway. Regarding the freaks, known and unknown, who hang out on ExI-chat, I have to admit, anyone who wants to hang out with me, I am not sure I want to hang out with them. It is so hard to stay away from the riff-raff when one is riff-raff oneself. My sister-in-law paid me the best compliment ever. She sent me an article from one of her many women's magazines called 100 Things That Are Getting Better. She said it reminded her of my outlook on life, since pretty much everything I really care about is good and getting better all the time. So this magazine had some stuff in there I would never have thought of, as well as things I never heard of, such as shapewear. What the hell is shapewear? That was number 2 on their list of things getting better, and I haven't a clue what it is. Their number 1 was floral arrangements. Huh? Some of the more notable comments: 3) your chances of visiting the moon. Cool. 4) apps to help you lose weight. OK, but my BMI would make me too thin to qualify to be a fashion model, so let us move on. 5) Polyester. Something to do with shapewear perhaps? 6) TV dinners. 8) Our lungs. They talk about the fact that smoking is way down, which means second hand smoking is way down. 13) catching bad guys. Once needed a tissue sample the size of a quarter dollar coin. Now requires only a few cells. 14) e-cards. Absurd of course, no mention of e-MAIL which is how everyone communicates now. 18) dads. They help with the kids now.
Cool, thanks for noticing. Four out of five children surveyed still prefer the mom, but the father figure at least exists in their lives. 19) robots. Agreed, cool. 20) Hillary Clinton. Hmmm, no comment. So OK I google, I find shapewear is girdles and such. I haven't a clue why this magazine thinks that is improving, or why it matters, but perhaps I just don't understand, and I am at a total loss why this is their big number 1 on the list of things getting better. So now, I ask you, those not involved in the Searle discussion, what things are getting better and why? I offer these few: 1) Computers, in every way, 2) software generally, in terms of availability, capability, stability, even price. 3) Cars. 4) Motorcycles handle waaay better than they used to, better and faster, even if not cheaper. 5) Phones. 6) Availability of useful information in general, the ease of finding things out. In retrospect, I might move this to number one. 7) Health care: neighbor had a heart attack, paramedics here within minutes, connected her to electronic devices to communicate with two cardiologists at Stanford Hospital. She was better off in the back of that ambulance than she would have been in the most advanced hospital even just 30 yrs ago. They gave her all the right meds before they even started towards the ER. Little or no permanent heart damage. 8) The environment. It is much cleaner than it used to be. What are your thoughts? spike From stathisp at gmail.com Thu Feb 11 11:28:56 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 11 Feb 2010 22:28:56 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <557161.41950.qm@web36505.mail.mud.yahoo.com> References: <557161.41950.qm@web36505.mail.mud.yahoo.com> Message-ID: On 11 February 2010 00:41, Gordon Swobe wrote: > Computers can reproduce just about any pattern. But a computerized pattern of a thing does not equal the thing patterned. > > I can for example reproduce the pattern of a tree leaf on my computer. That digital leaf will not have the properties of a real leaf. No matter what natural things we simulate on a computer, the simulations will always lack the real properties of the things simulated. > > Digital simulations of things can do no more than *simulate* those things. It mystifies me that people here believe simulations of organic brains should somehow qualify for an exception to this rule. I'll only respond to this as John Clark has responded to your other points. Would you say that a robot that seems to walk isn't really walking because it is not identical to, and therefore lacks all of the properties of, a human walking? The argument is not that a digital computer is *identical* with a biological brain, otherwise it would be a biological brain and not a digital computer. The argument is that the computer can reproduce the consciousness of the brain if it is able to reproduce the brain's behaviour. If it can't, you can't explain what would happen instead, and your solution is to advise that I change the question to one more to your liking. > Neuroscientists should someday have at their disposal perfect digital simulations of brains to use as tools for doing computer-simulated brain surgeries. But according to you and some others, those digitally simulated brains will have consciousness and so might qualify as real people.
This would mean medical students will have access to computer simulations of hearts to do simulated heart surgeries, but they won't have access to the same kinds of computerized tools for doing simulated brain surgeries. Those darned computer simulated brains won't sign the consent forms. > > People like me will want to do the simulated surgeries anyway. The Society for the Prevention of Simulated Cruelty to Simulated Brains will oppose me. What if a race of robots landed on Earth and decided to do cruel experiments on humans, on the assumption that mere organic matter couldn't have a mind or feelings, despite behaving as if it did? -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Feb 11 14:06:43 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 06:06:43 -0800 (PST) Subject: Re: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <896407.10281.qm@web36503.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: > Would you say that a robot that seems to walk isn't really > walking because it is not identical to, and therefore lacks all of > the properties of, a human walking? My point here concerns digital simulations of brains, not robots. > The argument is that the computer can reproduce the consciousness of the > brain if it is able to reproduce the brain's behaviour. Characters in video games behave as if they have consciousness. Seems to me that digitally simulated brains will also behave as if they have consciousness, but that they will have no more consciousness than do those characters in video games. I don't play video games myself but I've known children who did. They often spoke of the characters in their video games as if those characters really existed as conscious entities. Then they matured. -gts From gts_2000 at yahoo.com Thu Feb 11 14:37:35 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 06:37:35 -0800 (PST) Subject: Re: [ExI] evolution of consciousness In-Reply-To: Message-ID: <967863.26874.qm@web36503.mail.mud.yahoo.com> --- On Wed, 2/10/10, Spencer Campbell wrote: > This is actually a very good point. I don't have clearly > articulate beliefs regarding the relationship between consciousness > and evolution, myself, but clearly Gordon does. I think consciousness aids and enhances intelligence, something like the way a flashlight helps one move about in the dark. Unconscious animals like amoebas exhibit a low level of intelligent behavior. Those instinctive behaviors are encoded by DNA. In higher organisms we see nervous systems, and we see how the resulting consciousness increases the intelligence and flexibility of the organisms. It seems probable to me that conscious intelligence involves less biological overhead than instinctive unconscious intelligence, especially when considering complex behaviors such as social behaviors. Perhaps nature selected it for that reason only. -gts From gts_2000 at yahoo.com Thu Feb 11 15:10:29 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 07:10:29 -0800 (PST) Subject: [ExI] Mary passes the test. Message-ID: <155252.92470.qm@web36504.mail.mud.yahoo.com> --- On Wed, 2/10/10, Will Steinberg wrote: > "Mary the Color Scientist" is > often used as an iron defense of the magicalness or platonic > reality of qualia. To some, it is enough to say that Mary > learns something COMPLETELY new about the color red when she > sees it, giving the sense that, even for all fact and > physicality, something is missing until Mary sees that > color.
Why is this thesis so readily expected? I think > that qualia, however weird or outside of normal fact, is, at > heart, inextricable from those facts. For example: > > Upon emerging from her room, The Scientists present Mary > with two large colored squares. One is red, and one is blue. I > wholeheartedly believe Mary will be able to tell which one is red. > When asked why, Mary says "I had a feeling that's what it would > look like." > > I would say that Mary, after learning about red and its > neural pathways and physical properties, is able to form > some conception of the color in her mind's eye, > regardless of whether it has been presented to her, because > red is "in there" somewhere. This is very interesting, Will. Most curious to me is that you seem to want to refute a supposed argument for what you call the "platonic reality of qualia", but you do so with an argument that Plato himself would likely agree with. Plato taught that we never learn anything important that we don't already in some sense know, just as Mary in your story in some sense knows the color of red before seeing it. The Greek word for "truth" is "aletheia", which one can translate literally as "un-forgetting" or "remembering". -gts From bbenzai at yahoo.com Thu Feb 11 16:54:05 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 11 Feb 2010 08:54:05 -0800 (PST) Subject: [ExI] Film Script! In-Reply-To: Message-ID: <547545.73654.qm@web113605.mail.gq1.yahoo.com> Stathis Papaioannou wrote in a thread that shall remain nameless: > What if a race of robots landed on Earth and decided to do > cruel > experiments on humans, on the assumption that mere organic > matter > couldn't have a mind or feelings, despite behaving as if it > did? Yay, Film script! It has everything to be a successful 'thinking person's' sf film that is nevertheless popular: The alien robots are advanced AIs (not too advanced of course, or they wouldn't bother invading), plenty of peril, gore and anguish, maybe nanotech dissection devices that they use to slice and dice and otherwise torture the hapless philosophical zombie humans, in an attempt to find out how we can function, when we clearly possess just an organic simulation of 'real' consciousness. The hero could be a dashing scientist who uses the awesome power of his human brain to save the pretty young lab assistant who is secretly in love with him, when he notices the power lead coming from the flying saucer... Yes, it's just The War of the Worlds/Independence Day all over again, but with a philosophical twist. It could end by leaving you wondering: Are we actually the zombies, and have we just unplugged the only *really* conscious beings in the universe??? Oh, yes, we could widen the audience demographic by having the pretty young lab assistant be a transsexual. And throw in a couple of lesbian bishops, only one of whom survives the alien robots' grisly experiments, which incidentally turn her into an atheist. This could be bigger than "Plan 9 from Outer Space"! Ben Zaiboc From jonkc at bellsouth.net Thu Feb 11 17:00:32 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 11 Feb 2010 12:00:32 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <967863.26874.qm@web36503.mail.mud.yahoo.com> References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: <3B7B32F3-20F4-4ABA-A20E-7837B734CCFF@bellsouth.net> Since my last post Gordon Swobe has posted 3 times.
> I think consciousness aids and enhances intelligence, something like the way a flashlight helps one move about in the dark. I've said this many, many times before, but that doesn't prevent it from being true: despite believing the above, in a magnificent demonstration of doublethink, Swobe also believes that a behavioral demonstration like the Turing test cannot detect consciousness. > It seems probable to me that conscious intelligence involves less biological overhead than instinctive unconscious intelligence Then the logical implication is crystal clear: it's harder to make an unconscious intelligence than a conscious intelligence. So if you encounter an intelligent machine, your default assumption should be that it is conscious. > especially when considering complex behaviors such as social behaviors. Perhaps nature selected it for that reason only. So if Swobe met a robot with greater social intelligence than he has, would he consider it conscious? No, of course he would not, because, because... well, just because. Actually that is what Swobe would say today but I don't think that's what would really happen. If someone ever met up with such a machine I think it would understand us so well, better than we understand ourselves, that it could convince anyone to believe in anything and could quite literally charm the pants off us. As Swobe points out, even today characters in video games seem to be conscious to some; a robot with a Jupiter Brain would convince even the most sophisticated among us. We would believe the robot was conscious even if we couldn't prove it. I have the same belief regarding Gordon Swobe and the same lack of a proof. John K Clark From natasha at natasha.cc Thu Feb 11 17:29:54 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 11 Feb 2010 11:29:54 -0600 Subject: [ExI] Blue Brain Project. In-Reply-To: <4B7079D4.1090103@satx.rr.com> References: <558651.23421.qm@web113616.mail.gq1.yahoo.com> <4B7079D4.1090103@satx.rr.com> Message-ID: <9B95A2D531B8453685C11772840F74F6@DFC68LF1> Yes. This is from a month or so ago: http://www.thoughtware.tv/videos/watch/4651-China-Radio-International-Interview-cri-Vita-more-De-Garis I think he explains what he is up to in this show. Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick Sent: Monday, February 08, 2010 2:54 PM To: ExI chat list Subject: Re: [ExI] Blue Brain Project. On 2/8/2010 2:25 PM, John Clark quoted: > "Once the team is able to model a complete rat brain--that should > happen in the next two years--Markram will download the simulation into > a robotic rat, so that the brain has a body. He's already talking to a > Japanese company about constructing the mechanical animal. A decade or more ago, Hugo de Garis was promising a robot CAM-brain puddytat. He lost several sponsors along the way. Anyone know if he's doing anything along those lines today? (No, I've never heard of Google--how does that work? His own site informs us excitedly of things due to happen in 2006 and 2007...)
Damien Broderick From thespike at satx.rr.com Thu Feb 11 18:01:03 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 11 Feb 2010 12:01:03 -0600 Subject: [ExI] Film Script! Message-ID: <4B7445DF.3040800@satx.rr.com> Stathis wrote: > What if a race of robots landed on Earth and decided to do > cruel experiments on humans, on the assumption that mere organic > matter couldn't have a mind or feelings, despite behaving as if it > did? This is one of the drivers in my two linked novels GODPLAYERS and K-MACHINES. The AIs despise humans as nothing more than organic number-crunchers, insufficiently passionate. Damien Broderick From lacertilian at gmail.com Thu Feb 11 18:38:13 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 10:38:13 -0800 Subject: Re: [ExI] The Napoleon problem In-Reply-To: <4e3a29501002101956s4ad74326j1a5adf642864d6ba@mail.gmail.com> References: <5FFD48E2-0FC0-493D-AE25-88F1C3068614@GMAIL.COM> <4e3a29501002101955u456edc1cuaa2154e49b49df8a@mail.gmail.com> <4e3a29501002101956s4ad74326j1a5adf642864d6ba@mail.gmail.com> Message-ID: Will Steinberg : >I think Spencer will soon tell us why our believing that the man is not Napoleon is irreconcilable with some other belief many of us hold. (later) > Oh, I suppose that sort of happened. I am ignorant. Aren't we all, Will? Aren't we all. (I'm not sure I find my own argument strong enough to warrant the word "irreconcilable" coming up, but I am proud nonetheless that it did.) From steinberg.will at gmail.com Thu Feb 11 19:05:43 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 11 Feb 2010 14:05:43 -0500 Subject: [ExI] Very long lifespans and accompanying mental milieus Message-ID: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> When human lifespans lengthen, new mental paradigms are born. This tends to occur when a sufficiently large share of the population reaches a certain age for it to be common. For example, humans of the past would have a hard time understanding both the jaded, crotchety old man and the freaking-out fifty year old, simply by virtue of the fact that anyone who lived to be this old was usually revered for their luck. Very old people, when there were not a lot of very old people, were Methuselahs. But as medicine advanced, being these ages has become increasingly more common, and as the uniqueness associated with these ages has disappeared, a slew of mental crises have developed. Now, some, as in the case of the elderly, are physiological--an old man is cranky because he is arthritic or forgetful. Yet there are aspects of aging that cannot be associated in this manner, and instead lie completely in the mental realm. The mid-life crisis is an example of a phenomenon that is distinctly new. In the next fifty or so years (and I hope this is overestimating), there is a good chance that human lifespans will lengthen significantly, perhaps going so far as to double. Now, many of us simply think to ourselves: "More time to think! More time to work!" But who knows what happens when the metaprograms of the brain reshuffle connections for far longer than nature "intended"?
Though it is fine, for now, to treat ourselves as having overcome nature and evolution, we must remember that consciousness and intelligence were successful at producing offspring, which are produced relatively early in life, and have far less of a connection to the later years. Is this problem one of value? What if, at one hundred and fifty years of age, man is suddenly compelled to end his life? What if longer life will dictate to us the most obvious example of human pathos--that, for all we love about ourselves, the buck stops for the brain sooner than we might have hoped? It seems in this case that the recent discussions on mental being that have overwhelmed the list are indeed incredibly important, if only for the fact that the mental processes of humans must be understood in order to design even better processes that don't hit a wall after extended periods of time. This is Transhumanism, not in the often-held idea of letting "humanness" transcend our current physical limitations, but in scrapping many aspects of that humanness entirely in favor of something unfathomable and better. There is a good chance that we will, at some point, be faced with the problem that the confusing tangle of yarn in our heads produced by evolution is simply not good enough to deal with whatever comes next. And then what? From max at maxmore.com Thu Feb 11 19:13:29 2010 From: max at maxmore.com (Max More) Date: Thu, 11 Feb 2010 13:13:29 -0600 Subject: [ExI] Very long lifespans and accompanying mental milieus Message-ID: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> Will Steinberg wrote: >The mid-life crisis is an example of a phenomenon that is distinctly new. But how real is it? Just because it's become such a familiar phrase doesn't mean it's particularly correct. I recently came across some doubts, for instance: Mid-Life Crisis: An Outdated Myth? http://www.foxnews.com/story/0,2933,584133,00.html Max From steinberg.will at gmail.com Thu Feb 11 20:16:48 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 11 Feb 2010 15:16:48 -0500 Subject: Re: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> References: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> Message-ID: <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> Well then you may recast my argument, filling in "mid-life-sense-of-pride-and-fulfillment" for "mid-life crisis;" it's still something that people a few centuries ago would not understand very well. I still think we will surely face unprecedented places of the mind as lifespans grow longer. From lacertilian at gmail.com Thu Feb 11 21:25:02 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 13:25:02 -0800 Subject: Re: [ExI] Semiotics and Computability In-Reply-To: <896407.10281.qm@web36503.mail.mud.yahoo.com> References: <896407.10281.qm@web36503.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Characters in video games behave as if they have consciousness. No, they don't. Not even close. Not even REMOTELY close. To someone who actually grew up playing video games, this is a completely outrageous statement that can hardly even be addressed for how absurd it is.
All I have to do to refute you is point at the most sophisticated simulation of humanity ever to appear in a computer game, and say, look. Look at it. It can only converse by following a script, and beyond that it has no apparent reaction to anything short of being shot. Half-Life 2 famously had a mind-bogglingly realistic cast and immersive world. Basically, this means that the "people" in it would turn their heads to look at you if you stood within a defined radius of them. They also had excellent pre-programmed facial expressions to go with their dialogue, recorded from actual human beings. Gordon Swobe : > I don't play video games myself but I've known children who did. They often spoke of the characters in their video games as if those characters really existed as conscious entities. Then they matured. Exactly the same argument applies to characters in movies. Right now, the game industry is passing through a movie phase. The latest and greatest games, at the height of technology, are just interactive stories. Allowing for a branching storyline, including more than one possible ending, is to this day considered the absolute cutting edge of the medium. It's been that way for decades. This is pretty much the reason I no longer keep up with the high-end games. If you are making money, you are not making games; you are making movies, and I don't much care for movies. Gordon, if you are unable to imagine any simulation of life more sophisticated than that in a computer game (or, much worse, a console game), then you are not qualified to participate in this discussion. Please tell me this is not the case. From lacertilian at gmail.com Thu Feb 11 21:40:58 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 13:40:58 -0800 Subject: Re: [ExI] evolution of consciousness In-Reply-To: <967863.26874.qm@web36503.mail.mud.yahoo.com> References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Unconscious animals like amoebas exhibit a low level of intelligent behavior. Those instinctive behaviors are encoded by DNA. In higher organisms we see nervous systems, and we see how the resulting consciousness increases the intelligence and flexibility of the organisms. Much like how a laptop is more flexible than a PDA, which in turn is more flexible than a pocket calculator. We are, to a substantially greater extent than any other animal, general-purpose information processors. This does not relate to consciousness in any clear way. The only roughly coherent theory behind the evolution of consciousness, in my mind, is embedded within Pollock and Ismael's paper on nolipsism. It goes: "we are conscious because we use de se designators, and we use de se designators so that we can function intelligently in every possible situation". I don't like it, and I don't know if I agree with it, when it comes to the question of subjective experience. I don't see why de se designators should be special, among other symbols, in that particular way. Even so, it's as close as I can come to explaining the evolutionary origins of consciousness. Great for explaining the illusory nature of the self, not so great for explaining the illusory nature of consciousness -- since consciousness is not an illusion. It might be made of illusions, sure, but it isn't one itself. We can measure it. I'm not sure how we can measure it, but if it's favored by evolution then we must necessarily be able to.
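A side note on Spencer's Semiotics message above: his claim that a game character "can only converse by following a script" is concrete enough to sketch in code. Below is a minimal toy in Python; the character, the lines, and the menu keys are all invented for illustration, not taken from Half-Life 2 or any real game. The entire "conversation" is one fixed lookup table, and anything the player tries that is not in the table produces no reaction at all, which is exactly the behavior Spencer describes.

    # A toy scripted NPC: a fixed lookup from dialogue nodes to canned
    # lines and menu choices. All names and lines here are invented.
    DIALOGUE = {
        "start": ("Citizen: Stay out of the old harbor district.",
                  {"1": ("Ask why", "why"),
                   "2": ("Walk away", "end")}),
        "why": ("Citizen: I only know what I was written to say.",
                {"1": ("Walk away", "end")}),
        "end": ("Citizen: Mind how you go.", {}),
    }

    def converse():
        node = "start"
        while True:
            line, choices = DIALOGUE[node]
            print(line)
            if not choices:
                return  # the script has run out; the "person" falls silent
            for key, (prompt, _target) in sorted(choices.items()):
                print("  " + key + ". " + prompt)
            pick = input("> ").strip()
            if pick in choices:
                node = choices[pick][1]
            # anything unscripted is simply ignored: no reaction at all

    if __name__ == "__main__":
        converse()

Scaling this up to thousands of entries, with recorded voices and facial animation attached to each node, changes the size of the table but not its nature.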
From stathisp at gmail.com Thu Feb 11 21:43:31 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 08:43:31 +1100 Subject: [ExI] evolution of consciousness In-Reply-To: <967863.26874.qm@web36503.mail.mud.yahoo.com> References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: On 12 February 2010 01:37, Gordon Swobe wrote: > --- On Wed, 2/10/10, Spencer Campbell wrote: > >> This is actually a very good point. I don't have clearly >> articulate beliefs regarding the relationship between consciousness >> and evolution, myself, but clearly Gordon does. > > I think consciousness aids and enhances intelligence, something like the way a flashlight helps one move about in the dark. > > Unconscious animals like amoebas exhibit a low level of intelligent behavior. Those instinctive behaviors are encoded by DNA. In higher organisms we see nervous systems, and we see how the resulting consciousness increases the intelligence and flexibility of the organisms. > > It seems probable to me that conscious intelligence involves less biological overhead than instinctive unconscious intelligence, especially when considering complex behaviors such as social behaviors. Perhaps nature selected it for that reason only. But you have clearly stated that consciousness plays no role in behaviour, since you agree that the brain's behaviour can be emulated by a computer and the computer will be unconscious. The computer doesn't have to be tricked up to behave as if it's conscious: all you need do is follow the structural and functional relationships of the brain, and the intelligence will emerge without (you claim) any of the consciousness. -- Stathis Papaioannou From stathisp at gmail.com Thu Feb 11 21:48:01 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 08:48:01 +1100 Subject: [ExI] evolution of consciousness In-Reply-To: References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: On 12 February 2010 08:40, Spencer Campbell wrote: > Great for explaining the illusory nature of the self, not so great for > explaining the illusory nature of consciousness -- since consciousness > is not an illusion. It might be made of illusions, sure, but it isn't > one itself. We can measure it. I'm not sure how we can measure it, but > if it's favored by evolution then we must necessarily be able to. The idea is that it *isn't* favoured by evolution: it is a necessary side-effect of intelligence, as walking is a necessary side-effect of putting one foot in front of the other in a coordinated fashion. -- Stathis Papaioannou From lacertilian at gmail.com Thu Feb 11 21:53:22 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 13:53:22 -0800 Subject: [ExI] Film Script! In-Reply-To: <547545.73654.qm@web113605.mail.gq1.yahoo.com> References: <547545.73654.qm@web113605.mail.gq1.yahoo.com> Message-ID: Ben Zaiboc : > (a whole bunch of brilliant nonsense) Yes. Yes! Maybe run two perspectives through the thing, so that we're following both an organic human hero and a robotic alien antihero. There can be a scene where the two of them enter into an interminable Socratic dialogue! "Why are you trying to kill us?!" "What are you talking about? We can't kill you. You aren't even real." "Of course we're real! We have qualia and everything!" "No, you are only predisposed to claim that you do. You don't actually know what qualia are. 
If you did, you would be able to tell me what it is like to be swept by a ten millisecond pulse of electromagnetic radiation in the five to six gigahertz spectrum." "Five to... wait, gigahertz? I can't see anything below four hundred terahertz!" "Exactly." Then, a gun fight. (The guns should shoot explosions instead of bullets.) From thespike at satx.rr.com Thu Feb 11 22:26:49 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 11 Feb 2010 16:26:49 -0600 Subject: Re: [ExI] Film Script! In-Reply-To: References: <547545.73654.qm@web113605.mail.gq1.yahoo.com> Message-ID: <4B748429.5050404@satx.rr.com> On 2/11/2010 3:53 PM, Spencer Campbell wrote: > Ben Zaiboc : > (a whole bunch of brilliant nonsense) > > Yes. Yes! Maybe run two perspectives through the thing, so that we're > following both an organic human hero and a robotic alien antihero. There > can be a scene where the two of them enter into an interminable Socratic > dialogue! Oh, sort of like this? Lune steps carefully, keeping her balance with outstretched arms. On every side, matted kelp and seaweed coat the sluggish surface of the ocean between the trapped hulks that have drifted here across hundreds, thousands of kilometers, fetching up at the still center of an indefinitely slow vortex on a cognate world of oceans seized by locking land masses. Other spoiled vessels hang trapped in the feral vegetation's embrace, moving slightly, rocking against each other's hulls with low, grinding vibrations and deep clangs, scarcely audible, like the booming of whales. This ruined ship, the Argyle, must have dangled here in the jaws of the sea for at least a century and a half. "I can't do this any longer," she says. "I won't." "How touching." The K-machine wears a heavy yellow canvas mariner's coat and black rubber boots, leaning against the stump of Argyle's fractured main mast. Broken spars and fragments of a fallen sail and tangled rigging cling about it. "You love him." The timber planks beneath her foot are pulpy, sagging with every careful step she takes. The ocean has invaded the vessel from within, seeping upward through the wood, rusting and corroding the iron work, without yet swallowing it down into the depths. Perhaps that fate is certain, but it has been delayed for many decades by the matted pelagic vegetation flattening the surface of the water, locking all these marooned vessels into a graveyard without burial. Lune grimaces. The setting is the perfect preference of the thing that regards her approach with deep, gratified irony. "Yes. I do love him. I did not expect--" "Because he brought you back from death. This is not love, it's supine and self-interested gratitude. Get over it." "You would have been quite happy to sacrifice my life to extinguish his." "Your life, like everyone's life, is illusory. Is this not what you believe and argue, philosopher? The Schmidhuber ontology, the blasphemy that computation is the basis of reality?" She stares at the thing with disgust and a certain enduring fright that she has known since childhood, when it first made itself known to her. The K-machine possesses power over her, in a measure she does not truly understand. Perhaps it and its kin had slaughtered her parents. Or perhaps, as it argues, that destruction had been, instead, and wickedly, the work of cold humans themselves, intent upon creating their own hell world.
The Ensemble instructors had evaded that issue whenever she'd attempted, as an acolyte, to raise it. "If life is illusory," she says, "I lose nothing by living it in the way that I choose. I'm done with you." But still she stands there. The thing reaches forth an arm and hand made all of black metal, tenderly strokes her cheek. A kind of joyous revulsion rises within her. A memory from just beyond infancy: a figure all in black, shielded against the foul fumes rising to the shattered surface from the piled dead in the concrete caverns below. Dark-clad arms plucking her up, carrying her to safety, her pulse roaring, her terrified childish voice locked in her throat. Septimus, or one of his assistants, she sometimes thinks. Or perhaps, as it claims, this thing slouching at ease before her, or one of its kindred. It is a salvation she can hardly regret, either way, and yet she detests its memory. She waits stock-still as the thing draws a line down her face, withdraws. From thespike at satx.rr.com Thu Feb 11 23:16:17 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 11 Feb 2010 17:16:17 -0600 Subject: [ExI] skyhook elevator Message-ID: <4B748FC1.8090001@satx.rr.com> It occurs to me (can't recall ever seeing this discussed, although it must be an ancient sub-topic of skyhook dynamics) that as your elevator climbs or is shoved up the thread, you'd not only be pressed against the floor but also against the west wall. Maybe you wouldn't notice if the trip took a couple of days, but you're going from rest to 11,000 km/hr. Is that especially noticeable? What say the space gurus? Damien Broderick From ablainey at aol.com Thu Feb 11 23:33:53 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Thu, 11 Feb 2010 18:33:53 -0500 Subject: Re: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> References: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> Message-ID: <8CC7989CE4091C7-607C-1543@webmail-d011.sysops.aol.com> Odd, I was mulling this very issue over this afternoon while thinking about going to the dentist. I was wondering how I would cope with being immortal if the rigours of age still catch up with you. Not the grey hair, wrinkled skin and other signs of a long life. More the niggles like toothache from a dodgy crown. Backache from that slipped disc. Arthritis, migraines, bad eyesight and all the other things that detract from life's quality. Alzheimer's and other degenerative diseases would have been unheard of a couple of hundred years ago. No one ever lived long enough to develop them. It makes me wonder what new ailments we will discover. Perhaps the equivalent of a mid-life crisis every 50 years due to bicentennial cell regeneration? Who knows.
From lacertilian at gmail.com Thu Feb 11 23:33:56 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 15:33:56 -0800 Subject: Re: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> References: <4e3a29501002111105o3ac9433ax703bde1d0b653f6f@mail.gmail.com> Message-ID: Will Steinberg : > There is a good chance that we will, at some point, be faced with the > problem that the confusing tangle of yarn in our heads produced by evolution > is simply not good enough to deal with whatever comes next. And then what? I would argue that we hit that point somewhere between five and ten thousand years ago. We've been flying by the seat of our pants ever since, racing to evolve the next big species before we utterly destroy ourselves in the process (along with everything else on Earth that wasn't smart enough to be so stupid). So, yes: what's the solution? I don't think we're close enough to cerebral engineering for that to enter into the discussion as a serious possibility.
Before we can really address the hardware problems, I think we have to work out the software problems. The history of religion is basically equivalent to the history of operating systems. I use Linux myself, and I don't identify with any named belief system, including atheism, for precisely the same reason. Actually I want to stop using Linux as soon as possible. Too many stratified layers of ad-hoc compatibility. Have to write my own computer architecture from scratch, at some point. I simply can't be genuinely comfortable until then! Let's say someone invents a religion that is scientifically proven to improve the health, intelligence, and emotional stability of its believers, but it assumes the existence of demonstrably false supernatural phenomena. Let's also say that it comes with an effective method of hypnosis to induce the required delusions. Would you convert? Another take on it: someone invents a novel surgical procedure, crossing the wires in your skull in just such a way that you become a more effective transhuman being. Would you sign up for it? How long would you wait, before having it done, to see if there are any unexpected side effects in the early adopters? Both of these are within the realm of possibility right now, though I have my doubts about how effective that hypnosis would be. Either one could come up within the next five years. In the former case, I might even be the one to design the calculatedly-misleading metaphysics! From pharos at gmail.com Thu Feb 11 23:55:59 2010 From: pharos at gmail.com (BillK) Date: Thu, 11 Feb 2010 23:55:59 +0000 Subject: Re: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> References: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> Message-ID: 2010/2/11 Will Steinberg : > Well then you may recast my argument, filling in > "mid-life-sense-of-pride-and-fulfillment" for "mid-life crisis;" it's still > something that people a few centuries ago would not understand very well. I > still think we will surely face unprecedented places of the mind as > lifespans grow longer. > > I have the feeling that extended lifespans will make people more risk averse. If you know that the only thing that is likely to kill you in the next 200 years is an accident then it seems likely that avoiding dangerous situations will become prevalent. Wars were popular when nobody was likely to live beyond 30 or 40 years anyway. But a risk-averse society ..... Everyone driving golf carts surrounded by air bags, no dangerous sports, health and safety regulations everywhere, no motorcycles, etc. I'm sure you can think of more possibilities. BillK
They often spoke of the characters in their video games as if those characters really existed as conscious entities. Then they matured. Really? It still amazes me that anyone continues to engage these discussions with you. Your comment about children and maturity speaks more to me about the loss of creative imagination and the subjugation of young minds to the expectation of "growing up" and abandoning 'childish things'. Has anyone you've discussed this with finally admitted their ideas were wrong and adopted whole cloth your understanding of this issue? There may be a reason for that. From steinberg.will at gmail.com Fri Feb 12 00:38:54 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 11 Feb 2010 19:38:54 -0500 Subject: Re: [ExI] Semiotics and Computability In-Reply-To: <62c14241002111632m53d7fdafvd57cdf9b8f325006@mail.gmail.com> References: <896407.10281.qm@web36503.mail.mud.yahoo.com> <62c14241002111632m53d7fdafvd57cdf9b8f325006@mail.gmail.com> Message-ID: <4e3a29501002111638s71abe067va2e2b213b5ef708f@mail.gmail.com> In stride with what Mike just said, could we perhaps (since most of us seem to agree) discuss the actually important notions of semiotics and computability, instead of more pointless antiswobian banter? Nobody is going to budge, I promise. Unstoppable force, immovable post sort of thing. From msd001 at gmail.com Fri Feb 12 00:39:59 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 11 Feb 2010 19:39:59 -0500 Subject: Re: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: References: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> Message-ID: <62c14241002111639w6d7bb9fg75780778fcb41453@mail.gmail.com> On Thu, Feb 11, 2010 at 6:55 PM, BillK wrote: > But a risk-averse society ..... Everyone driving golf carts > surrounded by air bags, no dangerous sports, health and safety > regulations everywhere, no motorcycles, etc. I'm sure you can think of > more possibilities. Driving? What kind of reckless madman are you? There are so many mechanical failures that could give rise to your untimely death. Better to move into a comfortably controlled cell at the Life Facility to ensure your biological computing substrate is ideally maintained while you flit safely about the many virtual worlds of experience coming soon(tm). From lacertilian at gmail.com Fri Feb 12 00:53:17 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 11 Feb 2010 16:53:17 -0800 Subject: Re: [ExI] evolution of consciousness In-Reply-To: References: <967863.26874.qm@web36503.mail.mud.yahoo.com> Message-ID: Stathis Papaioannou : >Spencer Campbell : >>... if [consciousness is] favored by evolution then we must necessarily be able to [measure it]. > > The idea is that it *isn't* favoured by evolution: it is a necessary > side-effect of intelligence, as walking is a necessary side-effect of > putting one foot in front of the other in a coordinated fashion. If that's the case, I once again run straight into the realm of panpsychism due to the fact it is impossible to create an absolutely unintelligent system. Even rocks are good at staying on the ground. So I'm inclined to believe you. Yeah. There's not a lot I can contribute to this thread. I have no evidence before me indicating that human-level consciousness is an evolutionary inevitability, by any measure, so I don't have a lot to go on.
From stathisp at gmail.com Fri Feb 12 00:53:44 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 11:53:44 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <896407.10281.qm@web36503.mail.mud.yahoo.com> References: <896407.10281.qm@web36503.mail.mud.yahoo.com> Message-ID: On 12 February 2010 01:06, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >> Would you say that a robot that seems to walk isn't really >> walking because it is not identical to, and therefore lacks all of >> the properties of, a human walking? > > My point here concerns digital simulations of brains, not robots. > >> The argument is that the computer can reproduce the consciousness of the >> brain if it is able to reproduce the brain's behaviour. > > Characters in video games behave as if they have consciousness. Seems to me that digitally simulated brains will also behave as if they have consciousness, but that they will have no more consciousness than do those characters in video games. > > I don't play video games myself but I've known children who did. They often spoke of the characters in their video games as if those characters really existed as conscious entities. Then they matured. Characters in video games do certain things such as shoot other characters. If they were connected to a robot arm and camera, they might be able to shoot real people, really dead. So it is not a valid objection to say that because the simulation is not identical with the original, it cannot do anything that the original can do. You have to show that consciousness is beyond the power of a simulation to reproduce, not that a computer differs from a brain, which no-one is disputing. -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Feb 12 00:59:52 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 16:59:52 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <769806.60768.qm@web36505.mail.mud.yahoo.com> --- On Thu, 2/11/10, Spencer Campbell wrote: >> I don't play video games myself but I've known >> children who did. They often spoke of the characters in >> their video games as if those characters really existed as >> conscious entities. Then they matured. > > Exactly the same argument applies to characters in movies. Yes indeed. The characters you see on the silver screen, or on your TV screen or on your computer monitor, have no more consciousness than do the characters you see in video games and comic books. And they have no more consciousness than will the characters we might one day create with perfect digital simulations of humans and their brains. Such digital simulations of humans will exist only as mere hi-tech movies of real or imaginary people, as mere models of real or imaginary people, as mere animations of real or imaginary people, as mere caricatures of real or imaginary people, as mere descriptions of real or imaginary people. No matter whether we create those simulations with real or imaginary persons in mind, the simulations themselves will have no more reality than does Fred Flintstone. Yabadabadoo.
-gts From spike66 at att.net Fri Feb 12 00:48:16 2010 From: spike66 at att.net (spike) Date: Thu, 11 Feb 2010 16:48:16 -0800 Subject: [ExI] skyhook elevator In-Reply-To: <4B748FC1.8090001@satx.rr.com> References: <4B748FC1.8090001@satx.rr.com> Message-ID: > ...On Behalf Of Damien Broderick > Subject: [ExI] skyhook elevator > > It occurs to me (can't recall ever seeing this discussed, > although it must be an ancient sub-topic of skyhook dynamics) > that as your elevator climbs or is shoved up the thread, > you'd not only be pressed against the floor but also against > the west wall. Maybe you wouldn't notice if the trip took a > couple of days, but you're going from rest to 11,000 km/hr. > Is that especially noticeable? What say the space gurus? > > Damien Broderick Coriolis effect sounds like what you are describing, and it would be durn near negligible. I will calculate it if you wish. To give you an idea by using only numbers in my head and single digit BOTECs, geo is about 36000 km from the surface as I recall, so add 6300 km earth radius and that's close enough to about 40,000 km, so the circumference of the orbit is about 6 and some change times that, so 250000 km in 24 hrs, so you accelerate to 10000 km per hour or about 3 km per second or so. How long do you guess it would take to haul you up to GEO? A few hours? Let's say 10. To accelerate to 3 km per second eastward in 10 hrs would take about 0.1 meters per second squared, or about a 100th of a G. The elevator passengers would scarcely notice. It is proportional of course. If you theorize they get there in 1 hour, then the Coriolis component is about a tenth of a G, but if they get all the way to GEO in an hour, there is some serious upward velocity involved. spike From possiblepaths2050 at gmail.com Fri Feb 12 01:19:08 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 11 Feb 2010 18:19:08 -0700 Subject: [ExI] Very long lifespans and accompanying mental milieus In-Reply-To: <62c14241002111639w6d7bb9fg75780778fcb41453@mail.gmail.com> References: <201002111913.o1BJDeuf001898@andromeda.ziaspace.com> <4e3a29501002111216j5b2c85b4j2777468c3ac61d96@mail.gmail.com> <62c14241002111639w6d7bb9fg75780778fcb41453@mail.gmail.com> Message-ID: <2d6187671002111719v4cd08365v5ab279243d0306f2@mail.gmail.com> BillK wrote: If you know that the only thing that is likely to kill you in the next 200 years is an accident then it seems likely that avoiding dangerous situations will become prevalent. >>> I imagine a society somewhat along the lines of Larry Niven's alien species known as the puppeteers, who go to extremes to protect their lives. But because of this humans prize their advanced and nearly fool-proof technology! He continues: Wars were popular when nobody was likely to live beyond 30 or 40 years anyway. >>> But war will be fought largely with machines, and a strong military will be even more a product of lots of bright and highly educated humans (and of course also supersmart AI's!). I do always envision humanity taking part in battlefield "up-close" operations. But the "human" infantryman/special forces of the future will be hugely enhanced to survive and succeed! lol Joining the military will be strictly voluntary (to avoid social disruption that would dwarf the Vietnam protests), but be seen as vastly more "daring and macho" than it is now. Of course I still see political tyrannies (and even some democracies) playing mind/meme games with their young people to try to get them to throw their indefinite lives away in largely pointless wars.
But it won't be nearly as easy to convince them as it is now. He continues: But a risk-averse society... Everyone driving golf carts surrounded by air bags, no dangerous sports, health and safety regulations everywhere, no motorcycles, etc. I'm sure you can think of more possibilities. >>> I think we will see a society of extremes at both ends of the personal safety spectrum. "Mature" nanotech clothes & bodies that can save us from most tragedies that now take lives by the millions will be utterly commonplace, but at the same time there will be the nearly suicidally daring and bored people, who will risk their lives by engaging in extremely dangerous sports and "leisure pursuits." John Grigg From gts_2000 at yahoo.com Fri Feb 12 01:49:42 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 17:49:42 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <840800.79193.qm@web36503.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: >> I don't play video games myself but I've known >> children who did. They often spoke of the characters in >> their video games as if those characters really existed as >> conscious entities. Then they matured. > > Characters in video games do certain things such as shoot > other characters. If they were connected to a robot arm and > camera, they might be able to shoot real people, really dead. No sir. The supposed lawyer for such a character in such a video game has a good defense: his client could not have shot anyone because his client does not exist except in someone's overly vivid imagination. The sensible people on the jury will find that a mixed-up child or perhaps a philosophically-challenged adult used a hi-tech weapon disguised as a computer game to shoot a real person. -gts From gts_2000 at yahoo.com Fri Feb 12 01:26:44 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 11 Feb 2010 17:26:44 -0800 (PST) Subject: [ExI] evolution of consciousness In-Reply-To: Message-ID: <438778.21885.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: > But you have clearly stated that consciousness plays no > role in behaviour... I can hardly believe you wrote that. I spent hours explaining to you why and how I reject epiphenomenalism. >... since you agree that the brain's behaviour can be emulated > by a computer and the computer will be unconscious. I argued that we can program an artificial brain to act as if it has consciousness and that said artificial brain will still lack consciousness. This is not the same as arguing that consciousness plays no role in human behavior! By the way, what exactly do you mean by "behavior of brains", anyway? When I refer to the brain's behavior I usually mean observable behavior of the organism it controls, behavior such as acts of speech. -gts From steinberg.will at gmail.com Fri Feb 12 02:14:49 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Thu, 11 Feb 2010 21:14:49 -0500 Subject: [ExI] evolution of consciousness In-Reply-To: <438778.21885.qm@web36508.mail.mud.yahoo.com> References: <438778.21885.qm@web36508.mail.mud.yahoo.com> Message-ID: <4e3a29501002111814u37d41ae4w7790eae6edbfec9d@mail.gmail.com> > > I don't see why de se > designators should be special, among other symbols, in that particular > way. Not necessarily special, just useful up to a point.
One can see conscious systems providing at least SOME benefit to organisms, perhaps in order to make decisions affecting the self in the future, understanding how "I" fits into its surroundings. If conscious systems were developing at the same time as, or as a cause/effect of, social systems, it is again simple to see how using awareness of the self to make decisions about the future in SOCIAL situations would, in turn, INCREASE SURVIVAL CHANCES. Any social animal which is able to modify the self and plan for the self in order to set up ideal conditions for mating will have an advantage in mating. Intelligence would also be important, a separate property that enhanced the organization and quality of actions. Having high levels of both would lead to decisions that both pertained more intimately to the self than in other animals, and also were of a higher quality and thus more likely to succeed. Since both systems as simple notions are easy enough to see emerging in very small ways, the existence of both would lead to a greater number of animals with slightly higher amounts of both, etc, etc. This would explain why intelligence seems highly correlated with consciousness, with most animals who show signs of self-awareness (many birds, many primates, dolphins, elephants) also performing well in social situations, problem-solving and memory situations, and even having creative functions through crude art. Sociability, insight and creativity are essential components of a blanket intelligence (street smarts, school smarts, art smarts) and seem to correlate with consciousness. This could be both a product of an epi-evolutionary dual helpfulness and the fact that awareness of the self will actually improve the magnitudes of all three. From stathisp at gmail.com Fri Feb 12 02:22:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 13:22:47 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <840800.79193.qm@web36503.mail.mud.yahoo.com> References: <840800.79193.qm@web36503.mail.mud.yahoo.com> Message-ID: On 12 February 2010 12:49, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >>> I don't play video games myself but I've known >>> children who did. They often spoke of the characters in >>> their video games as if those characters really existed as >>> conscious entities. Then they matured. >> >> Characters in video games do certain things such as shoot >> other characters. If they were connected to a robot arm and >> camera, they might be able to shoot real people, really dead. > > No sir. The supposed lawyer for such a character in such a video game has a good defense: his client could not have shot anyone because his client does not exist except in someone's overly vivid imagination. > > The sensible people on the jury will find that a mixed-up child or perhaps a philosophically-challenged adult used a hi-tech weapon disguised as a computer game to shoot a real person. I was simply pointing out that the shooting would be a REAL shooting resulting in a REAL death, even though the character is simulated. Whether the character understood what it was doing is a different question, but in general you cannot use the argument that it was a simulation to preclude this possibility, because the claim that, a priori, a simulation cannot have ANY property of the thing it is simulating is obviously ridiculous.
-- Stathis Papaioannou From stathisp at gmail.com Fri Feb 12 02:52:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Feb 2010 13:52:47 +1100 Subject: [ExI] evolution of consciousness In-Reply-To: <438778.21885.qm@web36508.mail.mud.yahoo.com> References: <438778.21885.qm@web36508.mail.mud.yahoo.com> Message-ID: On 12 February 2010 12:26, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >> But you have clearly stated that consciousness plays no >> role in behaviour... > > I can hardly believe you wrote that. I spent hours explaining to you why and how I reject epiphenomenalism. > >>... since you agree that the brain's behaviour can be emulated >> by a computer and the computer will be unconscious. > > I argued that we can program an artificial brain to act as if it has consciousness and that said artificial brain will still lack consciousness. This is not the same as arguing that consciousness plays no role in human behavior! > > By the way what exactly do you mean by "behavior of brains", anyway? > > When I refer to the brain's behavior I usually mean observable behavior of the organism it controls, behavior such as acts of speech. A computer model of the brain is made that controls a body, i.e. a robot. The robot will behave exactly like a human. Moreover, it will behave exactly like a human due to an isomorphism between the structure and function of the brain and the structure and function of the computer model, since that is what a model is. Now, you claim that this robot would lack consciousness. This means that there is nothing about the intelligent behaviour of the human that is affected by consciousness. For if consciousness were a separate thing that affected behaviour, there would be some deficit in behaviour if you reproduced the functional relationship between brain components while leaving out the consciousness. Therefore, consciousness must be epiphenomenal. You might have said that you rejected epiphenomenalism, but you cannot do so consistently. The only way you can consistently maintain your position that computers can't reproduce consciousness is to say that they can't reproduce intelligence either. If you don't agree with this you must explain why I am wrong when I point out the self-contradictions that zombies would lead to, and you simply avoid doing this, which is no way to comport yourself in a philosophical debate. I asked if the thought experiments I proposed were clear to everyone else and no-one contacted me to say that they were not. -- Stathis Papaioannou From thespike at satx.rr.com Fri Feb 12 06:07:47 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 12 Feb 2010 00:07:47 -0600 Subject: [ExI] better self-transcendence through selective brain damage Message-ID: <4B74F033.702@satx.rr.com> Links to Spirituality Found in the Brain By LiveScience.com Staff Scientists have identified areas of the brain that, when damaged, lead to greater spirituality. The findings hint at the roots of spiritual and religious attitudes, the researchers say. The study, published in the Feb. 11 issue of the journal Neuron, involves a personality trait called self-transcendence, which is a somewhat vague measure of spiritual feeling, thinking, and behaviors. Self-transcendence "reflects a decreased sense of self and an ability to identify one's self as an integral part of the universe as a whole," the researchers explain. Before and after surgery, the scientists surveyed patients who had brain tumors removed. 
The surveys generated self-transcendence scores. Selective damage to the left and right posterior parietal regions of the brain induced a specific increase in self-transcendence, or ST, the surveys showed. "Our symptom-lesion mapping study is the first demonstration of a causative link between brain functioning and ST," said Dr. Cosimo Urgesi from the University of Udine in Italy. "Damage to posterior parietal areas induced unusually fast changes of a stable personality dimension related to transcendental self-referential awareness. Thus, dysfunctional parietal neural activity may underpin altered spiritual and religious attitudes and behaviors." Previous neuroimaging studies had linked activity within a large network in the brain that connects the frontal, parietal, and temporal cortexes with spiritual experiences, "but information on the causative link between such a network and spirituality is lacking," said lead study author Urgesi. One study, reported in 2008, suggested that the brain's right parietal lobe defines "Me," and people with less active Me-Definers are more likely to lead spiritual lives. The finding could lead to new strategies for treating some forms of mental illness. "If a stable personality trait like ST can undergo fast changes as a consequence of brain lesions, it would indicate that at least some personality dimensions may be modified by influencing neural activity in specific areas," said Dr. Salvatore M. Aglioti from Sapienza University of Rome. "Perhaps novel approaches aimed at modulating neural activity might ultimately pave the way to new treatments of personality disorders." From stefano.vaj at gmail.com Fri Feb 12 13:12:10 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 12 Feb 2010 14:12:10 +0100 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> Message-ID: <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> 2010/2/9 JOSHUA JOB : > A fetus is not yet a person, nor has ever been a person (as it is not nor > ever has been a rational conceptual entity). So it cannot have any rights. Legally wrong. A fetus can inherit, for instance, in a number of circumstances, at least in continental jurisdictions. Even though its capacity is restricted, the same applies to legal entities, for instance. Or even to adult humans (say, a life prisoner). > I am saying that it cannot be wrong if it does not violate the nature of > other conscious entities. The ocean cannot be wronged, only rational > conceptual "self"-aware entities can be, because they are the things that > can conceivably understand right and wrong. Let's say somebody cannot conceivably understand right and wrong (say, out of certified "moral folly", or whatever the psychiatric terms may be in English). Does it stop being a natural person under existing laws? No. Do great apes with a greater ability to distinguish right and wrong than, say, a human infant or a severely mentally handicapped human being, have rights? Again no, at least for the time being. In legal terms, a person is what you say it is. Thus, I would not start from personhood to deduce rights, but rather from rights provided for by a legal system, on the basis of value judgments, to infer the personhood status they involve.
-- Stefano Vaj From gts_2000 at yahoo.com Fri Feb 12 13:41:46 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 12 Feb 2010 05:41:46 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <957422.65114.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: >> The sensible people on the jury will find that a >> mixed-up child or perhaps a philosophically-challenged adult >> used a hi-tech weapon disguised as a computer game to shoot >> a real person. >... the claim that, a priori, a simulation cannot have ANY property of > the thing it is simulating is obviously ridiculous. Is it? In the incident you describe, only a *depiction* of a shooter exists in the game, and depictions of people have no reality. Or to put it another way: they have the same kind of reality, and the same legal standing, as photographs and drawings and other depictions of people. The human game-developer or the human game-player will go to prison or to a psychiatric facility for the criminally insane. The simulated shooter in the game will never know or care; he has no real existence. -gts From bbenzai at yahoo.com Fri Feb 12 14:50:33 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 12 Feb 2010 06:50:33 -0800 (PST) Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: Message-ID: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Damien Broderick informed us: > Links to Spirituality Found in the Brain ... > The finding could lead to new strategies for treating some > forms of > mental illness. > ... > "Perhaps novel approaches aimed at modulating > neural activity > might ultimately pave the way to new treatments of > personality disorders." Wow. Could it be? Might we find a cure for religion? Ben Zaiboc From stathisp at gmail.com Fri Feb 12 15:25:39 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 13 Feb 2010 02:25:39 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <957422.65114.qm@web36508.mail.mud.yahoo.com> References: <957422.65114.qm@web36508.mail.mud.yahoo.com> Message-ID: On 13 February 2010 00:41, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >>> The sensible people on the jury will find that a >>> mixed-up child or perhaps a philosophically-challenged adult >>> used a hi-tech weapon disguised as a computer game to shoot >>> a real person. > >>... the claim that, a priori, a simulation cannot have ANY property of >> the thing it is simulating is obviously ridiculous. > > Is it? In the incident you describe, only a *depiction* of a shooter exists in the game, and depictions of people have no reality. Or to put it another way: they have the same kind of reality, and the same legal standing, as photographs and drawings and other depictions of people. > > The human game-developer or the human game-player will go to prison or to a psychiatric facility for the criminally insane. The simulated shooter in the game will never know or care; he has no real existence. I'll say it again: the claim that, a priori, a simulation cannot have ANY property of the thing it is simulating is obviously ridiculous. This does not entail that a simulation will necessarily have ALL the properties of the thing being simulated. For example, if a simulation of a human is as intelligent and conscious as a real human that does not mean it will weigh the same and smell the same as a real human.
-- Stathis Papaioannou From stathisp at gmail.com Fri Feb 12 15:41:05 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 13 Feb 2010 02:41:05 +1100 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <226303.69206.qm@web113612.mail.gq1.yahoo.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Message-ID: On 13 February 2010 01:50, Ben Zaiboc wrote: > Wow. Could it be? Might we find a cure for religion? In psychiatry, patients with religious delusions are very common, but occasionally there is doubt as to whether they are really psychotic. The only decent diagnostic test we have for this problem is a therapeutic trial of an antipsychotic medication. If the religious ideas go away then almost certainly they were part of a psychosis: the test has a very low false positive rate. It's harder to interpret the test if the religious ideas do not go away, since about 20-30% of patients with clearly psychotic symptoms and 100% of patients who are just religious do not respond to medication. In other words, reasonably effective treatments are available at present for the crazy but not for the merely gullible. -- Stathis Papaioannou From pharos at gmail.com Fri Feb 12 15:57:25 2010 From: pharos at gmail.com (BillK) Date: Fri, 12 Feb 2010 15:57:25 +0000 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <226303.69206.qm@web113612.mail.gq1.yahoo.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Message-ID: On Fri, Feb 12, 2010 at 2:50 PM, Ben Zaiboc wrote: > Wow. Could it be? Might we find a cure for religion? > > Sounds like the opposite to me. Damage certain parts of the brain and you get transcendental experiences. Once the technique is formalized, religious sects have the ability to guarantee profound religious experiences to their members. We just stick a wire in ... here..... "Ohhhhh Gawd" ........ and another follower is reborn. BillK From gts_2000 at yahoo.com Fri Feb 12 16:25:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 12 Feb 2010 08:25:34 -0800 (PST) Subject: [ExI] Semiotics and Computability Message-ID: <964516.40828.qm@web36504.mail.mud.yahoo.com> "Depiction" seems like the perfect word for conveying my meaning. ------ Main Entry: de·pict Pronunciation: \di-ˈpikt, dē-\ Function: transitive verb Etymology: Latin depictus, past participle of depingere, from de- + pingere to paint; more at paint Date: 15th century 1 : to represent by or as if by a picture 2 : describe ------ If and when we develop the technology to create complete digital simulations of people, we will then have only the capacity to perfectly depict people in digital form. Those digital depictions of people will only *represent* the real or imaginary people they depict. They will have the same reality status as do less sophisticated kinds of depictions, e.g., digital photographs, digital paintings, digital drawings and digital cartoons. It seems to me that no matter how hi-tech and life-like our depictions become, there will always exist an important difference between the depiction of the thing and the thing depicted. Some people will, however, become so mesmerized by the life-like realism of the digital depictions that they will conflate the depictions with the real or imaginary things they depict. They will forget the difference between the photographs of people and the people in the photographs.
-gts From aware at awareresearch.com Fri Feb 12 16:15:32 2010 From: aware at awareresearch.com (Aware) Date: Fri, 12 Feb 2010 08:15:32 -0800 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <4B74F033.702@satx.rr.com> References: <4B74F033.702@satx.rr.com> Message-ID: On Thu, Feb 11, 2010 at 10:07 PM, Damien Broderick wrote: > Links to Spirituality Found in the Brain > > By LiveScience.com Staff > > Scientists have identified areas of the brain that, when damaged, lead to > greater spirituality. The findings hint at the roots of spiritual and > religious attitudes, the researchers say. > > The study, published in the Feb. 11 issue of the journal Neuron, involves a > personality trait called self-transcendence, which is a somewhat vague > measure of spiritual feeling, thinking, and behaviors. Self-transcendence > "reflects a decreased sense of self and an ability to identify one's self as > an integral part of the universe as a whole," the researchers explain. It's probably worth pointing out, despite a high probability of being misunderstood, that these experiences of "spirituality" and "self-transcendence" and other phenomena such as great joy or bliss orient one's thinking in a manner virtually opposite and certain to exclude that of Zen enlightenment (which never claims to be religious or spiritual). Such misconceived expectations are key impediments to those who aim to attain a *coherent* understanding of the relationship of the observer to the observed (even, and especially, when the observer IS the observed). Zen awakening is accompanied by none of these phenomena, and quite likely only by a laugh or smile at the realization of how simple and how close it was all along. "Before I had studied Zen for thirty years, I saw mountains as mountains, and waters as waters. When I arrived at a more intimate knowledge, I came to the point where I saw that mountains are not mountains, and waters are not waters. But now that I have got its very substance I am at rest. For it's just that I see mountains once again as mountains, and waters once again as waters." -- Ch'uan Teng Lu, quoted in The Way of Zen, p. 126 - Jef From thespike at satx.rr.com Fri Feb 12 17:10:08 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 12 Feb 2010 11:10:08 -0600 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> Message-ID: <4B758B70.3040101@satx.rr.com> On 2/12/2010 7:12 AM, Stefano Vaj wrote: > A fetus can inherit, for instance, in a number of > circumstances, at least in continental jurisdictions. Would that apply to a frozen embryo? If a billionaire had an embryo put on ice with the expectation of having it implanted later (perhaps in a host mother) and then died, could an estate be locked up for decades while the small mass of cells hung changelessly inside a Dewar? Damien Broderick From jonkc at bellsouth.net Fri Feb 12 17:24:53 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 12 Feb 2010 12:24:53 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <438778.21885.qm@web36508.mail.mud.yahoo.com> References: <438778.21885.qm@web36508.mail.mud.yahoo.com> Message-ID: <1136ABC5-0122-4736-ACC2-6DF2006A8A24@bellsouth.net> Since my last post Gordon Swobe has posted 5 times.
> >> But you have clearly stated that consciousness plays no >> role in behaviour... > > I can hardly believe you [Stathis Papaioannou] wrote that. I spent hours explaining to you why and how I reject epiphenomenalism. I can hardly believe Swobe is surprised that Stathis doesn't understand his position, I don't believe that Swobe himself understands his position. Consciousness affects behavior enough for evolution to produce it but not enough for the Turing Test to detect it. Nuts. > > I argued that we can program an artificial brain to act as if it has consciousness and that said artificial brain will still lack consciousness. Even his contradictions are contradictory. We are intelligent and we are conscious, I am anyway; if the 2 are separate then consciousness must just be tacked on, a sort of consciousness circuit. But Evolution has absolutely no reason to develop a consciousness circuit. > This is not the same as arguing that consciousness plays no role in human behavior! And yet in his next breath Swobe will tell us that consciousness plays no role in the Turing Test! > No matter whether we create those simulations with real or imaginary persons in mind, the simulations themselves will have no more reality than does Fred Flintstone. Fred Flintstone certainly isn't real as I am real and probably isn't real as Gordon Swobe is real, but one can't help wondering why he used that as an example rather than, say, "Krotchly Q Kumberbun". Presumably it's because one meme has enough reality to allow for communication while the other has so little reality that his readers wouldn't even know what he's referring to or the point he's trying to make. > there will always exist an important difference between the depiction of the thing and the thing depicted. But not if the "thing" in question is not a thing at all and is in fact not even a noun. > Those digital depictions of people will only *represent* the real or imaginary people they depict. Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did. John K Clark From pharos at gmail.com Fri Feb 12 18:59:00 2010 From: pharos at gmail.com (BillK) Date: Fri, 12 Feb 2010 18:59:00 +0000 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <4B758B70.3040101@satx.rr.com> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> <4B758B70.3040101@satx.rr.com> Message-ID: On Fri, Feb 12, 2010 at 5:10 PM, Damien Broderick wrote: > Would that apply to a frozen embryo? If a billionaire had an embryo put on > ice with the expectation of having it implanted later (perhaps in a host > mother) and then died, could an estate be locked up for decades while the > small mass of cells hung changelessly inside a Dewar? > > I think it is safe to say that the law has not yet decided on that possibility. :) Inheritance law differs greatly between countries. For example, in some jurisdictions females cannot inherit. Strictly speaking, a fetus has no inheritance rights. (Controversial - you are getting into the abortion debate here).
After-birth children (or posthumous children) born after the death of the parents can inherit and have rights after they are born live. Some laws say that the child has to be born within the gestation period following the death of the father. (i.e. frozen embryos not considered). If an estate was waiting on a birth to see if a child could inherit, then a trustee would be appointed to look after the estate. Presumably this could also be done for a frozen embryo, but I think it would be unlikely. An embryo might never be implanted and a live baby produced, so it only has a conditional chance of life. And there might be twenty or thirty frozen embryos. Could the first inherit, when more might arrive later? And if a woman freezes her eggs, she could produce many half-siblings who would also have inheritance rights. It all sounds too complicated to me. If I was a lawyer, I'd say forget about frozen embryos and eggs inheriting. BillK From stefano.vaj at gmail.com Fri Feb 12 19:04:33 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 12 Feb 2010 20:04:33 +0100 Subject: [ExI] Rights without selves (was: Nolopsism) In-Reply-To: <4B758B70.3040101@satx.rr.com> References: <60CE7310-2BE1-47BE-AE8A-9BFC69A659B4@GMAIL.COM> <580930c21002120512u4aafa7c7kdf8f5f14e77d5fcd@mail.gmail.com> <4B758B70.3040101@satx.rr.com> Message-ID: <580930c21002121104u40de5b2uf47fb432bbdd55c2@mail.gmail.com> On 12 February 2010 18:10, Damien Broderick wrote: > On 2/12/2010 7:12 AM, Stefano Vaj wrote: > >> A fetus can inherit, for instance, in a number of >> circumstances, at least in continental jurisdictions. > > Would that apply to a frozen embryo? If a billionaire had an embryo put on > ice with the expectation of having it implanted later (perhaps in a host > mother) and then died, could an estate be locked up for decades while the > small mass of cells hung changelessly inside a Dewar? The common wisdom is that only an in situ embryo can be considered as a subject. You cannot for instance have a special attorney appointed to protect the interest of an egg. You can however make a not-yet-conceived child your conditional heir... -- Stefano Vaj From spike66 at att.net Fri Feb 12 19:34:37 2010 From: spike66 at att.net (spike) Date: Fri, 12 Feb 2010 11:34:37 -0800 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Message-ID: > Subject: Re: [ExI] better self-transcendence through > selective brain damage > > On Fri, Feb 12, 2010 at 2:50 PM, Ben Zaiboc wrote: > > Wow. Could it be? Might we find a cure for religion? There would be very little demand for a cure for religion. Just the opposite: the market would be in stimulating that part of the brain to induce a religious experience. Ohhh my, just thinking of the money to be made here makes my butt hurt. It's a good hurt. Do let me be perfectly clear on that last commentary: I do not wish to be anything like L. Ron, who had more or less the same idea, along with his countless predecessors down thru the ages. These charlatans made money from religion unethically, deceptively. The way we would do this is to market absolutely truthfully: as a service for those who are atheists but want euphoric religious experiences. We would make it clear right up front that this is not about some random deity, and we do not solicit donations. The client must supply her own random deity. We arrange communication with that or them.
This idea is rather more analogous to the office of a chiropractor, except without all the odd notions of aligning the body to focus energy and releasing tension and all that. This would be absolutely truthful: we discovered the part of the brain that is responsible for the epiphany phenomenon, and we think we can cause one in you. A hundred bucks, hop in the chair. spike From hkeithhenson at gmail.com Fri Feb 12 19:36:07 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 12 Feb 2010 12:36:07 -0700 Subject: [ExI] skyhook elevator Message-ID: On Fri, Feb 12, 2010 at 5:00 AM, "spike" wrote: >> ...On Behalf Of Damien Broderick >> Subject: [ExI] skyhook elevator >> >> It occurs to me (can't recall ever seeing this discussed, >> although it must be an ancient sub-topic of skyhook dynamics) >> that as your elevator climbs or is shoved up the thread, >> you'd not only be pressed against the floor but also against >> the west wall. Maybe you wouldn't notice if the trip took a >> couple of days, but you're going from rest to 11,000 km/hr. >> Is that especially noticeable? What say the space gurus? >> >> Damien Broderick > > Coriolis effect sounds like what you are describing, and it would be durn > near negligible. I will calculate it if you wish. To give you an idea by > using only numbers in my head and single digit BOTECs, geo is about 36000 km > from the surface as I recall so add 6300 km earth radius and that's close > enough to about 40,000 km so the circumference of the orbit is about 6 and > some change times that, so 250000 km in 24 hrs, so you accelerate to 10000 > km per hour or about 3 km per second or so. > > How long do you guess it would take to haul you up to GEO? A few hours? > Let's say 10. To accelerate to 3 km per second eastward in 10 hrs would take > about 0.1 meters per second squared, or about a 100th of a G. The elevator passengers would > scarcely notice. > > It is proportional of course. If you theorize they get there in 1 hour, > then the Coriolis component is about a tenth of a G, but if they get all the > way to GEO in an hour, there is some serious upward velocity involved. > > spike Spike got it right. Some years ago I spent some effort on this in the context of a story and wrote a paper for an ESA conference from that work. I analyzed a driven endless loop cable moving about 1000 mph--which will get you to GEO in 22 hours. Lifting 100 tons per hour, it took 1.5 GW. While the Coriolis effect isn't noticeable to a passenger, it provides plenty of force to keep the up and down cables well separated. The acceleration to 3 km/sec at GEO (from under 1/2 km/sec on the surface) is "free." The cable leans west from the rotation to the east, and extracts energy from the rotation of the earth, slowing the earth down by an extremely small amount. Importing more than you are exporting would have the opposite effect. People opposed to extracting rotational energy from the earth could use the slogan "Conserve Angular Momentum!" Keith PS. Lunar elevators really can be made of dental floss. From cluebcke at yahoo.com Fri Feb 12 18:28:47 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Fri, 12 Feb 2010 10:28:47 -0800 (PST) Subject: [ExI] skyhook elevator In-Reply-To: References: <4B748FC1.8090001@satx.rr.com> Message-ID: <887255.3523.qm@web111208.mail.gq1.yahoo.com> > How long do you guess it would take to haul you up to GEO? A few hours? Let's say 10. From what I've read it'll be closer to 3 days. Even that view could get tiresome after three days on the bus.
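[A quick numeric cross-check of the back-of-envelope figures in this thread, as a minimal Python sketch. The trip times (1, 10, 22 hours and 3 days) and the 100 tons/hour lift rate are assumptions taken from the posts above; the constants are standard Earth parameters, and the output is only as rough as the inputs.]

import math

MU = 3.986e14                # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6            # mean Earth radius, m
R_GEO = 4.216e7              # geostationary orbit radius, m
OMEGA = 2 * math.pi / 86164  # Earth's sidereal rotation rate, rad/s

def coriolis_g(trip_hours):
    # Eastward Coriolis acceleration (in g) on a climber rising at
    # constant radial speed from the surface to GEO.
    v_radial = (R_GEO - R_EARTH) / (trip_hours * 3600.0)
    return 2 * OMEGA * v_radial / 9.81

def lift_power_gw(tons_per_hour):
    # Power to raise mass continuously to GEO: gain in gravitational
    # potential energy plus gain in kinetic energy of co-rotation.
    kg_per_s = tons_per_hour * 1000.0 / 3600.0
    de_pot = MU / R_EARTH - MU / R_GEO
    de_kin = 0.5 * ((OMEGA * R_GEO) ** 2 - (OMEGA * R_EARTH) ** 2)
    return kg_per_s * (de_pot + de_kin) / 1e9

for hours in (1, 10, 22, 72):
    print("%3d hr trip: sideways load ~%.3f g" % (hours, coriolis_g(hours)))
print("100 tons/hr to GEO: ~%.1f GW" % lift_power_gw(100))

[This prints roughly 0.015 g of sideways load for a 10-hour trip and 0.15 g for a 1-hour trip, in line with spike's estimates, and about 1.6 GW for 100 tons per hour, close to Keith's 1.5 GW figure.]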
From spike66 at att.net Fri Feb 12 19:56:25 2010 From: spike66 at att.net (spike) Date: Fri, 12 Feb 2010 11:56:25 -0800 Subject: Re: [ExI] better self-transcendence through selective brain damage In-Reply-To: References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> Message-ID: <7A7A0CB2766A4C879426F3F66CD94F0C@spike> > ...On Behalf Of spike ... > > ...I do not wish to be anything like L. Ron...made money from religion > unethically, deceptively. The way we would do this is to > market absolutely truthfully: as a service for those who are > atheists but want euphoric religious experiences... spike Just after I hit send, I recognized the fatal flaw in this argument. Did you spot it? There are areas of our lives in which we have become so accustomed to spin and outright deception that we do not even know what the truth sounds like. If told the actual truth, we naturally assume some kind of deception. Analogy: consider a case where a cop says "Move along citizens, there is nothing to see here." Citizens: Where? Cop: Here. Citizens: What? Cop: Nothing. Citizens: But there is nothing here! Cop: Exactly, just what I said. Citizens: All right, so what's the catch, officer?... etc. Religion and love are two areas where the actual truth is really never uttered, or if so it fails so spectacularly that it isn't attempted a second time. If we could stimulate the religion center of the brain, and tell the client or patient *exactly* what is being done, and why, and the expected outcome, that would represent a true scientific and ethical breakthrough.
spike From lacertilian at gmail.com Fri Feb 12 21:05:44 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 12 Feb 2010 13:05:44 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <4e3a29501002111638s71abe067va2e2b213b5ef708f@mail.gmail.com> References: <896407.10281.qm@web36503.mail.mud.yahoo.com> <62c14241002111632m53d7fdafvd57cdf9b8f325006@mail.gmail.com> <4e3a29501002111638s71abe067va2e2b213b5ef708f@mail.gmail.com> Message-ID: Will Steinberg : > In stride with what Mike just said, could we perhaps (since most of us seem > to agree) discuss the actually important notions of semiotics and > computability, instead of more pointless antiswobian banter? Apparently not. Being the guy who started the thread to begin with, I would theoretically be all over such a discussion. I already said all I could in my first post, though. If anyone else wants to revive that line of reasoning, I would certainly be with them. (From what I recall, I was mainly talking about the inextricability of any one branch of semiotics, say semantics, from any other, such as syntax.) From spike66 at att.net Fri Feb 12 21:39:02 2010 From: spike66 at att.net (spike) Date: Fri, 12 Feb 2010 13:39:02 -0800 Subject: Re: [ExI] better self-transcendence through selective brain damage In-Reply-To: <7A7A0CB2766A4C879426F3F66CD94F0C@spike> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> Message-ID: <87670FEF1360437981B4B84931C5FF94@spike> > ...On Behalf Of spike > Subject: Re: [ExI] better self-transcendence through selective brain damage > > > ...On Behalf Of spike > ... > There are areas of our lives in > which we have become so accustomed to spin and outright > deception that we do not even know what the truth sounds > like... Religion and love are two areas where the actual truth is > really never uttered... spike For instance, imagine giving your sweetheart a Valentine that says: Sweetheart, the depth of my love for you is equal to that which is induced by several milligrams of endorphin-precursor pheromones which are responsible for this class of emotions in humans. I mean it from the bottom of my brain.
Once again, xkcd has this covered, with a comic about overly scientific valentines: http://xkcd.com/701/ -eric From spike66 at att.net Fri Feb 12 22:28:47 2010 From: spike66 at att.net (spike) Date: Fri, 12 Feb 2010 14:28:47 -0800 Subject: [ExI] better self-transcendence through selective brain damage References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> Message-ID: <0752CA13909A46049BF86B410F43711C@spike> > ... > Getting back to religion, I am intrigued by the notion of > being able to advertise something like > > Faith-R-Us: We do not believe your religion, but for a > modest fee, we can help you believe in it with ever greater fervor. > > spike Last reply to my own comment, the I need to run, and will be away for a few. It is clear that entire religions can be built around some particular psychological phenomenon. Two examples, much of that sector of modern christianity that has the notion of born-again-ism can be explained by the poorly understood phenomenon sometimes called the epiphany experience. The Apostle Paul may have been describing it in the book of Acts chapter 9: http://www.biblegateway.com/passage/?search=Acts+9&version=kjv Modern science recognizes the phenomenon of de javu as a signal path abnormality that causes the brain to perceive information coming from the senses as coming from the memory. The Hindu religion explains de javu by theorizing shadow memories from previous lives. Monte Python take: http://www.youtube.com/watch?v=QWKdokcvM7A It looks to me like if we find how to stimulate the right neural centers in the brain, we should be able to induce any religious experience we want, while being perfectly honest, scientific and ethical about it. And make a cubic buttload of money of course. spike From cluebcke at yahoo.com Fri Feb 12 21:24:31 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Fri, 12 Feb 2010 13:24:31 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence Message-ID: <780009.58882.qm@web111203.mail.gq1.yahoo.com> I was wondering, given the lively back and forth I've seen on this list, whether the participants are agreed on the meanings of the terms "consciousness" and "intelligence". For example, were someone to posit that you cannot have consciousness without intelligence, are the meanings of those two terms widely agreed on? I don't mean to be presumptuous and the question is not intended to be sarcastic or ironic, but I have wondered, reading the ongoing debates about matters such as whether a computer (or a computer program) can have "intelligence" or "consciousness", whether the people debating various aspects of those questions are actually in agreement on the terms. If there are commonly-agreed-upon definitions of these terms, I'd appreciate it if somebody could provide me references. If not...well, that might explain the apparent imperviousness of certain positions to apparently well-formed critiques. Regards, Chris Luebcke From lacertilian at gmail.com Fri Feb 12 22:42:13 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 12 Feb 2010 14:42:13 -0800 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <780009.58882.qm@web111203.mail.gq1.yahoo.com> References: <780009.58882.qm@web111203.mail.gq1.yahoo.com> Message-ID: Christopher Luebcke : > I was wondering, given the lively back and forth I've seen on this list, whether the participants are agreed on the meanings of the terms "consciousness" and "intelligence". 
I've been bandying about my own personal definition of intelligence, at least, for something on the order of two weeks. Below is a segment of one of my posts from eight days ago. It is startlingly apropos. Spencer Campbell : > Stefano Vaj : >> ... very poorly defined Aristotelic essences would per se exist >> corresponding to the symbols "mind", "consciousness", "intelligence" ... > > Actually, I gave a fairly rigorous definition for intelligence in an > earlier message. I've refined it since then: > > The intelligence of a given system is inversely proportional to the > average action (time * work) which must be expended before the system > achieves a given purpose, assuming that it began in a state as far > away as possible from that purpose. > > (As I said before, this definition won't work unless you assume an > arbitrary purpose for the system in question. Purposes are roughly > equivalent to attractors here, but the system may itself be part of a > larger system, like us. Humans are tricky: the easiest solution is to > say they swap purposes many times a day, which means their measured > intelligence would change depending on what they're currently doing. > Which is consistent with observed reality.) > > I can't give similarly precise definitions for "mind" or > consciousness, and I wouldn't be able to describe the latter at all. > Tentatively, I think consciousness is devoid of measurable qualities. > This would make it impossible to prove its existence, which to my mind > is a pretty solid argument for its nonexistence. Nevertheless, we talk > about it all the time, throughout history and in every culture. So > even if it doesn't exist, it seems reasonable to assume that it is at > least meaningful to think about. From ablainey at aol.com Sat Feb 13 02:43:27 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Fri, 12 Feb 2010 21:43:27 -0500 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: References: <780009.58882.qm@web111203.mail.gq1.yahoo.com> Message-ID: <8CC7A6D7452E660-E28-6F3C@webmail-d019.sysops.aol.com> hmm, my own person 'vox pop' definition of Inteligence is simple. Given two know facts inteligence allows for a third derived fact to be created/known. The level of inteligence can be measured by how far apart the initial facts are, also the leap to and level of confidence in this third fact. Conciousness can party be defined by the self awareness of the above. For this 'Fact' is flexible and can be thought of in literal terms or as sensory input. -----Original Message----- From: Spencer Campbell To: ExI chat list Sent: Fri, 12 Feb 2010 22:42 Subject: Re: [ExI] Newbie Question: Consciousness and Intelligence Christopher Luebcke : > I was wondering, given the lively back and forth I've seen on this list, whether the participants are agreed on the meanings of the terms "consciousness" and "intelligence". I've been bandying about my own personal definition of intelligence, at least, for something on the order of two weeks. Below is a segment of one of my posts from eight days ago. It is startlingly apropos. Spencer Campbell : > Stefano Vaj : >> ... very poorly defined Aristotelic essences would per se exist >> corresponding to the symbols "mind", "consciousness", "intelligence" ... > > Actually, I gave a fairly rigorous definition for intelligence in an > earlier message. 
I've refined it since then: > > The intelligence of a given system is inversely proportional to the > average action (time * work) which must be expended before the system > achieves a given purpose, assuming that it began in a state as far > away as possible from that purpose. > > (As I said before, this definition won't work unless you assume an > arbitrary purpose for the system in question. Purposes are roughly > equivalent to attractors here, but the system may itself be part of a > larger system, like us. Humans are tricky: the easiest solution is to > say they swap purposes many times a day, which means their measured > intelligence would change depending on what they're currently doing. > Which is consistent with observed reality.) > > I can't give similarly precise definitions for "mind" or > consciousness, and I wouldn't be able to describe the latter at all. > Tentatively, I think consciousness is devoid of measurable qualities. > This would make it impossible to prove its existence, which to my mind > is a pretty solid argument for its nonexistence. Nevertheless, we talk > about it all the time, throughout history and in every culture. So > even if it doesn't exist, it seems reasonable to assume that it is at > least meaningful to think about. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Sat Feb 13 03:42:56 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 12 Feb 2010 21:42:56 -0600 Subject: [ExI] Fwd: [Comp-neuro] Blue Brain is hiring a Postdoctoral Researcher in Neuron Modeling In-Reply-To: References: Message-ID: <55ad6af71002121942ka96f000yae310cd06dce8a74@mail.gmail.com> ---------- Forwarded message ---------- From: Sean Hill Date: Fri, Feb 12, 2010 at 6:06 AM Subject: [Comp-neuro] Blue Brain is hiring a Postdoctoral Researcher in Neuron Modeling To: comp-neuro at neuroinf.org Job Description: Postdoctoral Researcher in Neuron Modeling The Blue Brain Project, headquartered in Lausanne, Switzerland, is an international research venture to reverse-engineer the brain and enable next-generation fundamental and medical research through simulation. BBP is now seeking for immediate hire a Postdoctoral Researcher in Neuron Modeling, for immediate hire, to strengthen the project?s computational neuroscience team and to prepare it for the next steps of growth. The primary objective for this position is to contribute to the ongoing large-scale detailed neuron modeling and validation efforts and to advance the model generation and validation of the full diversity of neuron electrical properties and dendritic integration. Scientific leadership is expected to cosupervise computational neuroscience students and the software expertise to interface with the technical teams. The position will involve a close interaction between the electrophysiology lab and computer simulations. Detailed Requirements: ? PhD in the field of computational neuroscience ? expert knowledge in NEURON and multi-compartment conductance-based modeling ? expert knowledge in whole cell electrophysiology, ion channel experiments and models ? expert knowledge in Python and Matlab ? profound knowledge of model specification languages such as NeuroML ? profound knowledge in other programming languages (C++) and parallel computing is of advantage ? ?can-do? 
What we offer:
- An internationally visible and rising project successfully connecting the demanding challenges of research with industry-strength solutions
- Supervision of research projects and publication opportunities
- A young, dynamic, inter-disciplinary, and international working environment
- Competitive salary

Interested applicants please send a CV, 3 references and a statement of research interests to: Felix Schuermann (felix.schuermann at epfl.ch) -- Sean Hill, Ph.D., Blue Brain Project, Project Manager for Computational Neuroscience, Brain Mind Institute, EPFL - Station 15, CH-1015 Lausanne, Switzerland, Tel +41 21 693 96 78, Fax +41 21 693 18 00, sean.hill at epfl.ch

-- - Bryan http://heybryan.org/ 1 512 203 0507

From dharris at livelib.com Sat Feb 13 10:37:28 2010 From: dharris at livelib.com (David C. Harris) Date: Sat, 13 Feb 2010 02:37:28 -0800 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <0752CA13909A46049BF86B410F43711C@spike> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <0752CA13909A46049BF86B410F43711C@spike> Message-ID: <4B7680E8.8070702@livelib.com> spike wrote: > we should be able to induce any religious experience we want, while being perfectly honest, scientific and ethical about it. > And make a cubic buttload of money of course. I trust you meant "a boatload of money"? Much more pleasant way to carry the stuff. - David

From cluebcke at yahoo.com Fri Feb 12 22:25:33 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Fri, 12 Feb 2010 14:25:33 -0800 (PST) Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <87670FEF1360437981B4B84931C5FF94@spike> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> Message-ID: <811542.89157.qm@web111209.mail.gq1.yahoo.com> > Faith-R-Us: We do not believe your religion, but for a modest fee, we can help you believe in it with ever greater fervor. Could there be something in the contract where the customer agrees to refrain from voting until the effect wears off? ----- Original Message ---- From: spike To: ExI chat list Sent: Fri, February 12, 2010 1:39:02 PM Subject: Re: [ExI] better self-transcendence through selective brain damage > ...On Behalf Of spike > Subject: Re: [ExI] better self-transcendence through selective brain damage > > > ...On Behalf Of spike > ... > There are areas of our lives in > which we have become so accustomed to spin and outright > deception that we do not even know what the truth sounds > like... Religion and love are two areas where the actual truth is > really never uttered... spike For instance, imagine giving your sweetheart a Valentine that says: Sweetheart, the depth of my love for you is equal to that which is induced by several milligrams of endorphin-precursor pheromones which are responsible for this class of emotions in humans. I mean it from the bottom of my brain.
This message might be technically "true"; however, your ass is technically "dead meat." Your sperm will never get the opportunity to express their right to life in this manner. Getting back to religion, I am intrigued by the notion of being able to advertise something like Faith-R-Us: We do not believe your religion, but for a modest fee, we can help you believe in it with ever greater fervor. spike

From gts_2000 at yahoo.com Sat Feb 13 18:42:05 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 10:42:05 -0800 (PST) Subject: [ExI] evolution of consciousness Message-ID: <810447.49620.qm@web36507.mail.mud.yahoo.com> --- On Thu, 2/11/10, Stathis Papaioannou wrote: > A computer model of the brain is made that controls a body, > i.e. a robot. The robot will behave exactly like a human. Such a robot may be made, yes. > Moreover, it will behave exactly like a human due to an isomorphism > between the structure and function of the brain and the structure and > function of the computer model, since that is what a model is. Now, you > claim that this robot would lack consciousness. I never made that claim. In fact I don't recall that you and I ever discussed robots until now. I suppose that you've made this claim above on my behalf. > This means that there is nothing about the intelligent behaviour of the > human that is affected by consciousness. For if consciousness were a > separate thing that affected behaviour, there would be some deficit in > behaviour if you reproduced the functional relationship between brain > components while leaving out the consciousness. Therefore, consciousness > must be epiphenomenal. You might have said that you rejected > epiphenomenalism, but you cannot do so consistently. No, I reject epiphenomenalism consistently. In the *actual* thought experiment that you proposed, in which the human patient presented with a lesion in Wernicke's area causing a semantic deficit and in which the surgeon used p-neurons, the patient DID suffer from a deficit after the surgery. And that deficit was due precisely to the fact that he would lack the experience of his own understanding, which would in turn affect his behavior. I.e., epiphenomenalism is false. This is why I stated multiple times that the surgeon would need to keep working until he finally would get the patient's behavior right. In the end his patient would pass the Turing test yet still have no conscious understanding of words. > The only way you can consistently maintain your position > that computers can't reproduce consciousness is to say that they > can't reproduce intelligence either. Not so. > If you don't agree with this you must explain why I am wrong when I > point out the self-contradictions that zombies would lead to, and you > simply avoid doing this, I just don't see the contradictions that you see, Stathis. Let me ask you this: your general position seems to be that weak AI alone is impossible; that if weak AI is possible then strong AI must also be possible, because the distinction between weak and strong is false and anything that passes the Turing test must have strong AI. Is that your position?
-gts

From stefano.vaj at gmail.com Sat Feb 13 19:52:58 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Feb 2010 20:52:58 +0100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <780009.58882.qm@web111203.mail.gq1.yahoo.com> References: <780009.58882.qm@web111203.mail.gq1.yahoo.com> Message-ID: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> On 12 February 2010 22:24, Christopher Luebcke wrote: > I was wondering, given the lively back and forth I've seen on this list, whether the participants are agreed on the meanings of the terms "consciousness" and "intelligence". For example, were someone to posit that you cannot have consciousness without intelligence, are the meanings of those two terms widely agreed on? I do not think so. This may be the crux of the problem. Intelligence comes easier (btw, even stupid people are sometimes conscious...). Consciousness would be easy enough as well, but only as long as you do not charge it with some transcendental meaning having to do with self-perception of the self, etc. -- Stefano Vaj

From nebathenemi at yahoo.co.uk Sat Feb 13 20:34:13 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sat, 13 Feb 2010 20:34:13 +0000 (GMT) Subject: [ExI] Glacier Geoengineering In-Reply-To: Message-ID: <324067.57495.qm@web27005.mail.ukl.yahoo.com> Keith put numbers on pumping ocean water onto the East Antarctic plateau as follows: Interesting concept though. To put numbers on it, the area of the earth is ~5.1 x 10E14 square meters. 3/4 of that is water, so ~3.8 x 10E14 square meters. To lower the oceans by a meter in a year would require pumping at 1.21 x 10E7 cubic meters per second. 12,100,000 cubic meters per second. Hmm. The flow of the Amazon is 219,000 cubic meters per second, so it would take 55 times the flow of the Amazon. Pumping it up some 3000 meters to the ice sheet would take considerable energy, P=Q*g*h*1/pump efficiency (0.9). 1.21*10E7*9.8*3000/0.9 = 396 GW. 400 one-GW reactors would do the job. (Please check this number.) Keith

First, a quick double-check - Wikipedia reckons ~3.61 x 10E14 square meters of earth's surface is ocean, so pretty close to Keith's figure. I just plugged 3.61E14 / (60x60x24x365) to get 11,447,235 cubic meters per second, so within about 5% of Keith's figure. So, even with pumps converting 90% of the energy going in to potential energy of the water, we're looking at the 350-400 GW range to do one meter/year. But do we really need to pump a whole meter in a year? 1993 to 2003 had an average sea level rise of 3.1mm/year. The IPCC estimates http://www.ipcc.ch/publications_and_data/ar4/syr/en/spms3.html come out at from 1.1m to 6.4m higher in 2090-99 compared to 1980-99. So, say we start planning now and are ready to start the pumping in ten years - 2020. Then we need to do 1.1m to 6.4m over 80 years. Let's say 4m as a middling figure - then instead of 400 GW for 4 years, we can do 20 GW over 80 years. Sure, the power plants will need rebuilding more than once. I'm assuming we're going with nuclear to avoid shipping large quantities of fuel to the Antarctic, and to keep it soot-free and low carbon (don't want to accelerate that Antarctic coastal glacier breakup!).
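Since Keith explicitly asks for the power figure to be checked, here is a quick sanity check, a minimal Python sketch using only standard physical constants plus the figures quoted above. One caution: the quoted formula P=Q*g*h/efficiency appears to omit the density of water (roughly 1000 kg/m^3); with that factor included, the result lands near 400 TW rather than 396 GW.

```python
# Sanity check of the pumping numbers quoted above ("Please check this
# number."). Constants are standard values; the flow follows the posts.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
OCEAN_AREA_M2 = 3.61e14   # ocean surface area, m^2 (Tom's Wikipedia figure)
RHO = 1000.0              # density of water, kg/m^3 (approximate)
G = 9.8                   # gravitational acceleration, m/s^2
HEAD_M = 3000.0           # lift to the East Antarctic plateau, m
EFFICIENCY = 0.9          # assumed pump efficiency

# Flow needed to lower the oceans by one meter per year:
q = OCEAN_AREA_M2 / SECONDS_PER_YEAR   # ~1.14e7 m^3/s, matching the post

# Pumping power with the density factor included: P = rho*g*h*Q / eta
p = RHO * G * HEAD_M * q / EFFICIENCY
print(f"flow:  {q:.3e} m^3/s")
print(f"power: {p:.3e} W (~{p / 1e12:.0f} TW)")
# ~374 TW with this flow; ~395 TW with Keith's 1.21e7 m^3/s figure
```

If the density factor does belong in the formula, the one-meter-per-year scheme needs hundreds of terawatts rather than hundreds of gigawatts, and the stretched-out 80-year version scales to roughly 20 TW rather than 20 GW.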
UK nuclear plants built since the 1970s have expected operating lives of 30-40 years http://www.world-nuclear.org/info/inf84.html If we're being optimistic and following Sizewell B, you're getting 1.1 GW for 40 years, but a couple of reactors on that table are "running at 70% indefinitely", so instead of 18 of these needing replacing once, we'll probably need 25 replacing once or twice. This will cost many tens of billions, but should keep pace with ocean sea level rise and avoid the risk of losing places like London, New York, most of the Netherlands (worth a lot in terms of real estate) and the human misery of trying to resettle tens of millions from Bangladesh and low-lying areas around the Indian Ocean. Tom (enjoying geoengineering chat as a change from philosophical musings)

From jrd1415 at gmail.com Sat Feb 13 21:02:48 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sat, 13 Feb 2010 14:02:48 -0700 Subject: [ExI] Semiotics and Computability In-Reply-To: <964516.40828.qm@web36504.mail.mud.yahoo.com> References: <964516.40828.qm@web36504.mail.mud.yahoo.com> Message-ID: I think maybe I've figured out where Gordon is coming from, so I'll attempt an explanation. Also, I find myself in agreement with him, partly. I think a substantial portion of the problem other posters have had with Gordon stems from an INCOMPLETE understanding of and consequently an INCOMPLETE addressing of his thesis. I'm gonna try to not get all flowery and elaborate here, just semi-sort-of bare bones. Gordon says he rejects the concept of mind-body dualism. Rather, he asserts a timeworn philosophical alternative, that of an inseparable unity, a mind-body unity if you will. Then when others propose a simulation of mind, Gordon objects, seeming to me, and logically, to be saying you can't get a faithful recreation of mind if you leave out the body part. The recent comments regarding amoeba consciousness and the nature of pain, combined with a years-long backlog of free-floating yet related bits, finally coalesced for me into an epiphany. And so I have come to agree with Gordon, partly. All the talk of neuron by neuron replacements is fine as far as it goes, but Gordon is reasonable in rejecting this -- though I wish he would have explained himself better -- based on the principle of incompleteness. Half a thing is not the thing. The 'body' part is missing. To faithfully reproduce the mind/persona/organism you have to reproduce the whole body, all the somatic cells, neural and non, with their particular and varied influences on the persona. At this point I will state, without elaboration, that I have come to believe that consciousness arises at the cellular level, and that any variant of consciousness in highly complex multi-cellular organisms -- in particular, its ultimate form in humans, of cognitive ability and an awareness of self and universe -- arises from a combination of somatic and cerebral consciousness. To make things worse -- again without elaboration -- it is difficult for me to avoid the further conclusion that the bulk of the phenomenon of consciousness comes from the contribution of the somatic cells. To soften this seemingly outrageous assertion -- that the God-like nature of man ... "What a piece of work is a man, how noble in reason, how infinite in faculties; in form and moving how express and admirable, in action how like an angel, in apprehension how like a god: the beauty of the world, the paragon of animals!
...is more about the influence of gut, bone, blood, and sinew, than brain -- let me remind that the mammalian brain with all its glorious capability is a relatively recent add-on to the ancient partnership of sensory apparatus and the less glamorous support soma. That said, returning to the idea of an authentic simulation, I believe if the simulation includes the somatic contribution, thus comprehensively simulating both mind and body, that there is no reason the simulation won't fully and faithfully remanifest the original mind-body persona. What do you think Gordon? Does this work for you, or no? Now I'll elaborate a little on how I got here. Some previously unconnected bits. Years ago, I chanced to wonder what it must be like to be a liver cell; to live in a world where all the glory was reserved for 'elitist' neural tissue. Cells are cells, and logically should be equal. Some conundrum there. I used to visualize the human body devoid of all but neural tissue -- remember the old plastic educational toy, the visible man? I would think of this -- the brain and the neural filigree extending out from it -- as the "real" person, and would demote the remainder to a lower order, mechanistic, almost lifeless status. Same conundrum as above, but unrecognized at the time. Then I chanced upon Paul Pietsch's "Shufflebrain" http://www.indiana.edu/~pietsch/home.html where the author writes of memory and seemingly-deliberative behavior in single-celled organisms. From this I concluded that information processing need not be the exclusive province of multicellular neural tissue. Then Nova or the National Geographic channel produced a program about the microscopic world. They advertised it with a bit of video showing living paramecium, about three or four seconds worth. I never saw the program, but I saw that video clip five or six times, and it had a huge impact. A paramecium swims along, impressively vigorous and vital in its movement, then it stops for a moment, deliberates (processes information?) and then heads off in a different direction. Call me a fool, call it anthropomorphic projection, but I swear I saw deliberation and intentionality. The scene I saw was a scene of life, and life is recognizable from just these features. Then Jef Albright posted re Nolopsism http://oscarhome.soc-sci.arizona.edu/ftp/PAPERS/Nolipsism.pdf where, on page three (lucky, since I haven't read the whole thing yet) the author, speaking of mental events, writes: "...having or feeling a pain is identified with a neurological event, but the pain itself is distinct from the having of the pain - it is not an event." It was at this point that Gordon mentioned amoebas and pain, and all the bits fell into place. That pain is a scream from the distant somatic cells, conveyed by neural tissue, yes, but an example of the tangible distant assertiveness of non-neural tissue under assault. But this new notion of an active somatic consciousness had further implications. Biology is ***EVOLVED***. The evolutionary process takes place in an environment where ***ALL*** extant physical mechanisms are at play. So three and a half billion years ago, when bacterial life first appeared, any quirk of physical mechanism and morphology which might afford a selection advantage would have been evolutionarily selected and genetically preserved. It is on this basis that I conclude that deliberation (information processing) and intentionality emerged very early in biological evolution because of its clear survival advantage.
Emerged in bacteria probably, and then over the next 3 billion years was subject to further refinement, because evolution never sleeps. Then eukaryotic cells emerged, single cells at first -- amoeba and paramecium -- followed, as we know, by the Cambrian explosion. Which led to macroscopic multi-cellular organisms -- humans among them -- creatures composed of the descendants of those first single-cell creatures, and bringing with them the advantages of cellular consciousness, even further refined by the unstinting influence of evolution. Enough embarrassment for one day. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles

From stefano.vaj at gmail.com Sat Feb 13 21:15:48 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Feb 2010 22:15:48 +0100 Subject: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <201002081818.o18IIX6U008758@andromeda.ziaspace.com> References: <201002081818.o18IIX6U008758@andromeda.ziaspace.com> Message-ID: <580930c21002131315u676a38f1y9f019fbe33850291@mail.gmail.com> On 8 February 2010 19:18, Max More wrote: > Damien, you seem to be suggesting ("Still, Max is quoted as saying") that > Hughes "implication of antidemocratic or top-down bias" is understandable > because of my statement that "Democratic arrangements have no intrinsic > value; they have value only to the extent that they enable us to achieve > shared goals while protecting our freedom." If so, I don't understand how > you can say that. Saying that democratic arrangements (as they exist at any > particular time) have no intrinsic value is not in the least equivalent to > saying that authoritarian control is better. Should we not strive for > something better than the ugly system of democracy that currently exists? > Are authoritarian arrangements the only conceivable alternative? I have another remark. How come self-declared technoprogressives and socialists find themselves aligned with a ritual, mechanical defence of what Marx used to call the "board of directors of the bourgeoisie"? :-) Speaking of democracy more in general, a feel-good concept which has been used with so many different meanings, I am personally rather fond of the concept of "popular sovereignty", or rather "sovereignties": firstly in terms of collective self-determination (a principle having much to do with transhumanism, IMHO); secondly as it suggests that we have the freedom to adopt the norms and legal systems of our choice, rather than simply having to recognise or enforce a set of universal and eternal laws; thirdly because it implies political pluralism (meaning a radical wariness of dreams of world governments of any kind which would be entitled to ignore the willingness or not of a community to participate in them). All in all, this also sounds like the best bet for transhumanism, including in terms of "national Darwinism", which would strongly discourage the implementation of neoluddite policies even by governments who might be ideologically tempted by them.
-- Stefano Vaj

From stefano.vaj at gmail.com Sat Feb 13 21:23:29 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Feb 2010 22:23:29 +0100 Subject: [ExI] If anyone wants to Respond to the IEET piece re "Problems of Transhumanism" In-Reply-To: <4B7059E4.1010402@satx.rr.com> References: <201002081818.o18IIX6U008758@andromeda.ziaspace.com> <4B7059E4.1010402@satx.rr.com> Message-ID: <580930c21002131323j13c21d41ma0f41cf3e9da4b7c@mail.gmail.com> On 8 February 2010 19:37, Damien Broderick wrote: > But as I said a moment ago in another post, your statement doesn't sound > like a ringing endorsement of the position that "liberal democracy is the > best path to betterment," which is the point Hughes is making about >>Humanism. Where he goes from there is questionable if not absurd *as a > generalization*--but there's certainly a technocratic, elitist tendency in a > lot of >H discourse I've read here over the last 15 years or so. One could contend, exactly from what I have always taken as James Hughes' position, that "liberal democracy" *is* an élitist system, only one where the circulation of the élites is very slow and limited, and where the selection criteria thereof are very debatable... :-) -- Stefano Vaj

From stathisp at gmail.com Sat Feb 13 21:49:08 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 08:49:08 +1100 Subject: [ExI] evolution of consciousness In-Reply-To: <810447.49620.qm@web36507.mail.mud.yahoo.com> References: <810447.49620.qm@web36507.mail.mud.yahoo.com> Message-ID: On 14 February 2010 05:42, Gordon Swobe wrote: > --- On Thu, 2/11/10, Stathis Papaioannou wrote: > >> A computer model of the brain is made that controls a body, >> i.e. a robot. The robot will behave exactly like a human. > > Such a robot may be made, yes. > >> Moreover, it will behave exactly like a human due to an isomorphism >> between the structure and function of the brain and the structure and >> function of the computer model, since that is what a model is. Now, you >> claim that this robot would lack consciousness. > > I never made that claim. In fact I don't recall that you and I ever discussed robots until now. I suppose that you've made this claim above on my behalf. You posted a link to a site that purported to show that not even a robot which had input from the real world would have understanding. Have you changed your mind about this? >> This means that there is nothing about the intelligent behaviour of the >> human that is affected by consciousness. For if consciousness were a >> separate thing that affected behaviour, there would be some deficit in >> behaviour if you reproduced the functional relationship between brain >> components while leaving out the consciousness. Therefore, consciousness >> must be epiphenomenal. You might have said that you rejected >> epiphenomenalism, but you cannot do so consistently. > > No, I reject epiphenomenalism consistently. In the *actual* thought experiment that you proposed, in which the human patient presented with a lesion in Wernicke's area causing a semantic deficit and in which the surgeon used p-neurons, the patient DID suffer from a deficit after the surgery. And that deficit was due precisely to the fact that he would lack the experience of his own understanding, which would in turn affect his behavior. I.e., epiphenomenalism is false. This is why I stated multiple times that the surgeon would need to keep working until he finally would get the patient's behavior right.
In the end his patient would pass the Turing test yet still have no conscious understanding of words. The patient COULD NOT suffer from a deficit after the surgery. Otherwise you would be saying that both P and ~P are true, where P = "the p-neurons exactly reproduce the I/O behaviour of the biological neurons". >> The only way you can consistently maintain your position >> that computers can't reproduce consciousness is to say that they >> can't reproduce intelligence either. > > Not so. > >> If you don't agree with this you must explain why I am wrong when I >> point out the self-contradictions that zombies would lead to, and you >> simply avoid doing this, > > I just don't see the contradictions that you see, Stathis. > > Let me ask you this: your general position seems to be that weak AI alone is impossible; that if weak AI is possible then strong AI must also be possible, because the distinction between weak and strong is false and anything that passes the Turing test must have strong AI. Is that your position? Yes. -- Stathis Papaioannou

From gts_2000 at yahoo.com Sat Feb 13 21:23:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 13:23:34 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> Message-ID: <405824.5794.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stefano Vaj wrote: >> I was wondering, given the lively back and forth I've >> seen on this list, whether the participants are agreed on >> the meanings of the terms "consciousness" and >> "intelligence". > I do not think so. This may be the crux of the problem. I think you have a point there. I wonder for example why some people here have trouble with my observation that amoebas (and most organisms on earth) have intelligence but no consciousness. It seems to me obvious that amoebas and other single-celled organisms have some intelligence: they can find food and procreate and so on. But because they lack nervous systems, it looks to me like these simple creatures live out their entire lives unconsciously. -gts

From stathisp at gmail.com Sat Feb 13 22:04:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 09:04:12 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <405824.5794.qm@web36506.mail.mud.yahoo.com> References: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> <405824.5794.qm@web36506.mail.mud.yahoo.com> Message-ID: On 14 February 2010 08:23, Gordon Swobe wrote: > It seems to me obvious that amoebas and other single-celled organisms have some intelligence: they can find food and procreate and so on. But because they lack nervous systems, it looks to me like these simple creatures live out their entire lives unconsciously. What about flatworms? -- Stathis Papaioannou

From thespike at satx.rr.com Sat Feb 13 22:08:58 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 13 Feb 2010 16:08:58 -0600 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: References: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> <405824.5794.qm@web36506.mail.mud.yahoo.com> Message-ID: <4B7722FA.6050100@satx.rr.com> On 2/13/2010 4:04 PM, Stathis Papaioannou wrote: > What about flatworms? They have a very one-dimensional consciousness.
From gts_2000 at yahoo.com Sat Feb 13 22:15:33 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 14:15:33 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <23758.31471.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/13/10, Jeff Davis wrote: > Gordon says he rejects the concept of mind-body dualism. Rather, he > asserts a timeworn philosophical alternative, that of an inseparable > unity, a mind-body unity if you will. Yes. > Then when others propose a simulation of mind, Gordon objects, seeming > to me, and logically, to be saying you can't get a faithful recreation > of mind if you leave out the body part. Yes. > And so I have come to agree with Gordon, partly. Be careful. My views haven't helped me win any popularity contests. :) > All the talk of neuron by neuron replacements is fine as > far as it goes, but Gordon is reasonable in rejecting this -- though > I wish he would have explained himself better -- based on the > principle of incompleteness. Half a thing is not the thing. > > The 'body' part is missing. To faithfully reproduce > the mind/persona/organism you have to reproduce the whole body, > all the somatic cells, neural and non, with their particular and > varied influences on the persona. Yes. I think we cannot extract the mind from the nervous system. In my view the mind exists as a *high-level physical feature* of the nervous system. Of interest to extropians, the mind does not exist as something separate from the brain, like software running on hardware. I see the human brain/mind as all hardware. Now at this point some people will think: "How could the mind exist as a physical feature of anything? Can't we make a real distinction between the mental and the physical?" Most people think this way, including even many philosophical materialists who claim not to. But it's only the dualistic voice of Descartes speaking to us from beyond the grave. We are his intellectual descendants. I finally shook off that Cartesian illusion and now the world makes a lot more sense. Unfortunately this world-view does not fit well with the extropian vision of uploading and so on. Oh well. > That said, returning to the idea of an authentic > simulation, I believe if the simulation includes the somatic > contribution, thus comprehensively simulating both mind and body, that > there is no reason the simulation won't fully and faithfully remanifest > the original mind-body persona. > > What do you think Gordon? Does this work for you, or > no? I think we basically agree, though I need to know more what you mean by "somatic contribution". As I stated at the outset, I have no objection to strong AI per se. I just don't think it can happen on the dualistic software/hardware model. I wonder also if what you've described here even qualifies as anything I would call a simulation. Thanks for the thoughtful post, Jeff. -gts

From stefano.vaj at gmail.com Sat Feb 13 22:40:57 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Feb 2010 23:40:57 +0100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <4B7722FA.6050100@satx.rr.com> References: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> <405824.5794.qm@web36506.mail.mud.yahoo.com> <4B7722FA.6050100@satx.rr.com> Message-ID: <580930c21002131440h6e465c19l914d43d89343e96f@mail.gmail.com> On 13 February 2010 23:08, Damien Broderick wrote: > On 2/13/2010 4:04 PM, Stathis Papaioannou wrote: >> What about flatworms?
> > They have a very one-dimensional consciousness. "Flat"worms... Shouldn't it be bi-dimensional? ;-) -- Stefano Vaj From gts_2000 at yahoo.com Sat Feb 13 22:42:06 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 14:42:06 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: Message-ID: <158383.1762.qm@web36504.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: > What about flatworms? I can't know what it's like to be a flatworm, assuming there is something it is like to be one, but clearly full-scale consciousness begins to appear somewhere on the evolutionary scale from flatworms to humans. I feel comfortable saying 1) humans have consciousness, 2) some other organisms with highly developed nervous systems almost certainly have consciousness (chimps, etc) and 3) simple organisms that completely lack nervous systems do not have it. -gts From stathisp at gmail.com Sat Feb 13 22:49:34 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 09:49:34 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <23758.31471.qm@web36506.mail.mud.yahoo.com> References: <23758.31471.qm@web36506.mail.mud.yahoo.com> Message-ID: On 14 February 2010 09:15, Gordon Swobe wrote: > Yes. I think we cannot extract the mind from the nervous system. In my view the mind exists as a *high-level physical feature* of the nervous system. Of interest to extropians, the mind does not exist as something separate from the brain, like software running on hardware. I see the human brain/mind as all hardware. Software exists separately from hardware in the same way as an architectural drawing exists separately from the building it depicts. > Now at this point some people will think: "How could the mind exist as a physical feature of anything? Can't we make a real distinction between the mental and the physical?" Most people think this way, including even many philosophical materialists who claim not to. But it's only the dualistic voice of Descartes speaking to us from beyond the grave. We are his intellectual descendants. > > I finally shook off that Cartesian illusion and now the world makes a lot more sense. Unfortunately this world-view does not fit well with the extropian vision of uploading and so on. Oh well. There is no real distinction between software and hardware. When you program a computer you make actual physical changes to it, and the "software" is just a scheme that you have in mind to help you make the right physical changes so that the hardware does what you want it to do. The computer is just dumb matter which has no understanding whatsoever of the program, the programmer, its own design, the existence of the world or anything else. Its parts follow the laws of physics but even this they don't understand: they just do it. Exactly the same is true of human brains. But when the hardware is set up just right, in a brain or a computer, it behaves in an intelligent manner, and intelligence from the point of view of the system displaying it is consciousness. 
-- Stathis Papaioannou

From thespike at satx.rr.com Sat Feb 13 22:53:20 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 13 Feb 2010 16:53:20 -0600 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <580930c21002131440h6e465c19l914d43d89343e96f@mail.gmail.com> References: <580930c21002131152t392ef477y5765a2cb90b278ce@mail.gmail.com> <405824.5794.qm@web36506.mail.mud.yahoo.com> <4B7722FA.6050100@satx.rr.com> <580930c21002131440h6e465c19l914d43d89343e96f@mail.gmail.com> Message-ID: <4B772D60.2060204@satx.rr.com> On 2/13/2010 4:40 PM, Stefano Vaj wrote: >> They have a very one-dimensional consciousness. > > "Flat"worms... Shouldn't it be bi-dimensional? ;-) Only just. Not enough for semantics, though, merely syntax, poor little syntagmatic things.

From stathisp at gmail.com Sat Feb 13 22:57:55 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 09:57:55 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <964516.40828.qm@web36504.mail.mud.yahoo.com> Message-ID: On 14 February 2010 08:02, Jeff Davis wrote: > All the talk of neuron by neuron replacements is fine as far as it > goes, but Gordon is reasonable in rejecting this -- though I wish he > would have explained himself better -- based on the principle of > incompleteness. Half a thing is not the thing. But half a thing may still perform the function of the thing. > The 'body' part is missing. To faithfully reproduce the > mind/persona/organism you have to reproduce the whole body, all the > somatic cells, neural and non, with their particular and varied > influences on the persona. > > At this point I will state, without elaboration, that I have come to > believe that consciousness arises at the cellular level, and that any > variant of consciousness in highly complex multi-cellular organisms > -- in particular, its ultimate form in humans, of cognitive > ability and an awareness of self and universe -- arises from a > combination of somatic and cerebral consciousness. To make things > worse -- again without elaboration -- it is difficult for me to avoid > the further conclusion that the bulk of the phenomenon of > consciousness comes from the contribution of the somatic cells. To > soften this seemingly outrageous assertion -- that the God-like nature > of man ... > > "What a piece of work is a man, how noble in reason, how infinite in > faculties; in form and moving how express and admirable, in action how > like an angel, in apprehension how like a god: the beauty of the > world, the paragon of animals! > > ...is more about the influence of gut, bone, blood, and sinew, than > brain -- let me remind that the mammalian brain with all its glorious > capability is a relatively recent add-on to the ancient partnership of > sensory apparatus and the less glamorous support soma. Then there would be a problem with the consciousness of people who have lost limbs or various internal organs. -- Stathis Papaioannou

From stathisp at gmail.com Sat Feb 13 23:01:21 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 10:01:21 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <158383.1762.qm@web36504.mail.mud.yahoo.com> References: <158383.1762.qm@web36504.mail.mud.yahoo.com> Message-ID: On 14 February 2010 09:42, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> What about flatworms?
> I can't know what it's like to be a flatworm, assuming there is something it is like to be one, but clearly full-scale consciousness begins to appear somewhere on the evolutionary scale from flatworms to humans. > > I feel comfortable saying 1) humans have consciousness, 2) some other organisms with highly developed nervous systems almost certainly have consciousness (chimps, etc) and 3) simple organisms that completely lack nervous systems do not have it. It's the same with computers. There aren't any yet which match the processing ability of a mouse brain, let alone that of a chimp or human. -- Stathis Papaioannou

From gts_2000 at yahoo.com Sat Feb 13 23:21:36 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 15:21:36 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <620471.11317.qm@web36505.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: > There is no real distinction between software and hardware. > When you program a computer you make actual physical changes to it, > and the "software" is just a scheme that you have in mind to help > you make the right physical changes so that the hardware does what you > want it to do. Okay, I'll go along with that. > The computer is just dumb matter which has no understanding > whatsoever of the program, the programmer, its own design, > the existence of the world or anything else. Right, that's what I've been trying to tell you! :) > Its parts follow the laws of physics but even this they don't > understand: they just do it. Right. Your logic looks perfect so far. > Exactly the same is true of human brains. I can't speak for you, but it sure seems like my brain has conscious understanding of things. And according to your logic above, computers do not have this understanding. So then if computers don't have it, but my brain does, then logic forces me to conclude that my brain does not equal a computer. -gts

From stefano.vaj at gmail.com Sat Feb 13 23:58:54 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 14 Feb 2010 00:58:54 +0100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <158383.1762.qm@web36504.mail.mud.yahoo.com> References: <158383.1762.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c21002131558m9e6df01w7fb9dbc96ffbe339@mail.gmail.com> On 13 February 2010 23:42, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: >> What about flatworms? > > I can't know what it's like to be a flatworm Nor can you know what it is like to be a woman, a Chinese person, an octogenarian, or for that matter an identical twin of yourself. What you can tell, however, is that they are in principle able to succeed in a Turing test. ;-) Do you really need to make further assumptions as to hypothetical non-phenomenical features shared by them but not by other entities who may be able to pass it just as well? I am not trying to persuade you, as others seem unrelentingly to be doing, that a computer can be "conscious". I am simply insisting that any such immaterial feature that can be "projected" on other living organisms, or on the restricted subset thereof represented by alert, non-infant, educated, Turing-test qualified human beings can be with identical plausibility projected on anything else exhibiting a good enough analogy of the interactions you can have with them.
-- Stefano Vaj From gts_2000 at yahoo.com Sun Feb 14 00:02:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 16:02:02 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: Message-ID: <789829.39304.qm@web36502.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: > > I feel comfortable saying 1) humans have > consciousness, 2) some other organisms with highly developed > nervous systems almost certainly have consciousness (chimps, > etc) and 3) simple organisms that completely lack nervous > systems do not have it. > > It's the same with computers. There aren't any yet which > match the processing ability of a mouse brain, let alone that of a > chimp or human. If and when someone builds a human-like nervous system in a box with sense organs, I'll call that box conscious but I won't call it a digital computer. Neither will you, because it won't be one. -gts From gts_2000 at yahoo.com Sun Feb 14 00:37:47 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 16:37:47 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <580930c21002131558m9e6df01w7fb9dbc96ffbe339@mail.gmail.com> Message-ID: <502441.97360.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stefano Vaj wrote: >> I can't know what it's like to be a flatworm > > Nor can you know what it is like to be a woman, a chinese, > an octuagenarian, or for that matter an identical twin of > yourself. Actually I can infer a great deal about what it's like to be them. I see that they have nervous systems very much like mine, eyes and skin and noses and ears very much like mine, and so on, and from these biological facts *in combination* with their intelligent communications I can know with near certainty that they have consciousness very much like mine. I cannot do the same for flatworms or computers or amoebas. > Do you really need to make further assumptions as to > hypothetical non-phenomenical features shared by them but not by other > entities who may be able to pass it just as well? As you probably know by now if you've read my posts, I think the Turing test will give false positives for certain computers that we may develop in the not too distant future. By "false positives" I mean that weak AI computers will pass the test, but that the test does not measure the existence of consciousness or subjective experience or intentionality; it does not and cannot test for strong AI. The Turing test will give false positives for strong AI. > I am simply insisting that any such immaterial feature that can be > "projected" on other living organisms, or on the restricted subset > thereof represented by alert, non-infant, educated, Turing-test > qualified human beings can be with identical plausibility projected on > anything else exhibiting a good enough analogy of the interactions you > can have with them. Again, the Turing test does not measure the strong AI attributes that concern me. It is true that I "project" consciousness onto other humans but I don't do this based solely on their ability to pass the Turing test or on their interactions with me. As above, I do it based also on their physiology. 
-gts

From jrd1415 at gmail.com Sun Feb 14 00:38:48 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sat, 13 Feb 2010 17:38:48 -0700 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <964516.40828.qm@web36504.mail.mud.yahoo.com> Message-ID: On Sat, Feb 13, 2010 at 3:57 PM, Stathis Papaioannou wrote: > On 14 February 2010 08:02, Jeff Davis wrote: >> Half a thing is not the thing. > But half a thing may still perform the function of the thing. If the nature of the half thing is profoundly different from that of the whole, the nature of its performance may also be radically altered/diminished. >> ...the God-like nature of man is more about the influence >> of gut, bone, blood, and sinew, than brain... > Then there would be a problem with the > consciousness of people who > have lost limbs or various internal organs. Regarding the loss of limbs, kidney, gall bladder, stomach, lengths of intestine, a lung, etc.... I agree the persona and consciousness appear unaltered. That said, I have heard of people who lose a portion of their visual field, but are unaware of the alteration in their consciousness. However, that may be the result of brain damage, not somatic damage. But mostly I was thinking of basic human impulses and feelings: hunger and the urge to feed, the reproductive impulse, acquisitiveness (greed?), the various behaviors arising from the instinct for survival: fight or flight, fear, anger, hatred, dominance, submission, anxiety, depression, shock. These things are primitive, and certainly pre-date the features of mammalian (i.e. higher) brain function. I view these impulses as the foundation AND BULK of animal and human behavior, and gut-centered, with higher-level mental activity a more recent development. I wonder if feelings in the gut aren't in fact real -- like pain -- and our awareness of them just an additional fact, a mental fact. So what would consciousness be, what would a mind be without this foundational context built up over three and a half billion years? That's why I think the gut (soma) may be critical in defining mind. But, to be honest with you, I feel way out on a limb here. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles

From gts_2000 at yahoo.com Sun Feb 14 00:49:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 13 Feb 2010 16:49:20 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <431764.52879.qm@web36501.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: >> But when the hardware is set up just right, in a brain or a computer, it >> behaves in an intelligent manner, and intelligence from the point of view >> of the system displaying it is consciousness. My watch tells the time intelligently. Does it therefore have consciousness from its "point of view"? I don't think so. I don't think my watch has a point of view. But then maybe it isn't set up right.
:) -gts

From stefano.vaj at gmail.com Sun Feb 14 01:17:13 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 14 Feb 2010 02:17:13 +0100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <502441.97360.qm@web36506.mail.mud.yahoo.com> References: <580930c21002131558m9e6df01w7fb9dbc96ffbe339@mail.gmail.com> <502441.97360.qm@web36506.mail.mud.yahoo.com> Message-ID: <580930c21002131717j6f50fa59k1e39bbe3cb0aee3b@mail.gmail.com> On 14 February 2010 01:37, Gordon Swobe wrote: > It is true that I "project" consciousness onto other humans but I don't do this based solely on their ability to pass the Turing test or on their interactions with me. As above, I do it based also on their physiology. If the ghost of one's wife, or her upload on a digital computer, manages to persuade somebody that she is his wife, one will identify her as his wife, irrespective of the physiology, if any. Sociologically, there is nothing else to say. Sure, somebody may refuse this conclusion for philosophical and ultimately arbitrary reasons, on the same basis that he may contend that she is not the same person anymore after seven years, because on average all her atoms have been replaced over such a period. But this would end up being nothing more than a very idiosyncratic POV. -- Stefano Vaj

From stathisp at gmail.com Sun Feb 14 03:30:22 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 14:30:22 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <964516.40828.qm@web36504.mail.mud.yahoo.com> Message-ID: On 14 February 2010 11:38, Jeff Davis wrote: > On Sat, Feb 13, 2010 at 3:57 PM, Stathis Papaioannou wrote: >> On 14 February 2010 08:02, Jeff Davis wrote: > >>> Half a thing is not the thing. > >> But half a thing may still perform the function of the thing. > > If the nature of the half thing is profoundly different from that of > the whole, the nature of its performance may also be radically > altered/diminished. Yes it may, but it depends on what the function is. An artificial joint is not *identical* with a natural joint, but it can function just as well. >>> ...the God-like nature of man is more about the influence >>> of gut, bone, blood, and sinew, than brain... > >> Then there would be a problem with the >> consciousness of people who >> have lost limbs or various internal organs. > > Regarding the loss of limbs, kidney, gall bladder, stomach, lengths of > intestine, a lung, etc.... I agree the persona and consciousness > appear unaltered. That said, I have heard of people who lose a > portion of their visual field, but are unaware of the alteration in > their consciousness. However, that may be the result of brain damage, > not somatic damage. Some patients with damage to the visual cortex are completely blind but insist that they can see normally, even while they stagger around bumping into things. They are not lying or even "in denial", they honestly believe it. That is, they are delusional, and this is an example of a type of delusional disorder called anosognosia (meaning inability to recognise that you have an illness). Interestingly, it usually doesn't happen if the lesion is in the eye or optic nerve. > But mostly I was thinking of basic human impulses and feelings: > hunger and the urge to feed, the reproductive impulse, > acquisitiveness (greed?), the various behaviors arising from the > instinct for survival: fight or flight, fear, anger, hatred, > dominance, submission, anxiety, depression, shock.
These things are > primitive, and certainly pre-date the features of mammalian (i.e. > higher) brain function. I view these impulses as the foundation AND > BULK of animal and human behavior, and gut-centered, with higher-level > mental activity a more recent development. I wonder if feelings in > the gut aren't in fact real -- like pain -- and our awareness of them > just an additional fact, a mental fact. > > So what would consciousness be, what would a mind be without this > foundational context built up over three and a half billion years? > That's why I think the gut (soma) may be critical in defining mind. > > But, to be honest with you, I feel way out on a limb here. There is no doubt that many of our feelings are based in the body, but they are *felt* in the brain. If you could reproduce the inputs the brain receives from the body, you would reproduce the associated feelings. -- Stathis Papaioannou

From stathisp at gmail.com Sun Feb 14 03:41:45 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 14:41:45 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <431764.52879.qm@web36501.mail.mud.yahoo.com> References: <431764.52879.qm@web36501.mail.mud.yahoo.com> Message-ID: On 14 February 2010 11:49, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> But when the hardware is set up just right, in a brain or a computer, it >> behaves in an intelligent manner, and intelligence from the point of view >> of the system displaying it is consciousness. > > My watch tells the time intelligently. Does it therefore have consciousness from its "point of view"? I don't think so. I don't think my watch has a point of view. But then maybe it isn't set up right. :) The watch performs the function of telling the time just fine. It simulates a sundial or hourglass in this respect. However, when a human tells the time there are thousands of extra nuances which a watch just doesn't have. So the watch tells the time, but it doesn't understand the concept of time. Comparing a watch with a human is like comparing a nematode with a human, only more so. What would you say to the non-organic alien visitors who make the case that since a nematode is not conscious, neither can a human be conscious, since basically a human is just a more complex nematode? -- Stathis Papaioannou

From stathisp at gmail.com Sun Feb 14 05:41:33 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 16:41:33 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <620471.11317.qm@web36505.mail.mud.yahoo.com> References: <620471.11317.qm@web36505.mail.mud.yahoo.com> Message-ID: On 14 February 2010 10:21, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> There is no real distinction between software and hardware. >> When you program a computer you make actual physical changes to it, >> and the "software" is just a scheme that you have in mind to help >> you make the right physical changes so that the hardware does what you >> want it to do. > > Okay, I'll go along with that. > >> The computer is just dumb matter which has no understanding >> whatsoever of the program, the programmer, its own design, >> the existence of the world or anything else. > > Right, that's what I've been trying to tell you! :) > >> Its parts follow the laws of physics but even this they don't >> understand: they just do it. > > Right. Your logic looks perfect so far. > >> Exactly the same is true of human brains.
> > I can't speak for you, but it sure seems like my brain has conscious understanding of things. And according to your logic above, computers do not have this understanding. So then if computers don't have it, but my brain does, then logic forces me to conclude that my brain does not equal a computer. Your brain, when it is working properly, has understanding as an emergent property of the system, even though the matter in your brain, each individual neuron, is completely stupid. The Chinese Room thought experiment should make that clear to you. Since it doesn't, I have proposed a variation in which your neurons *do* have an understanding of their own basic tasks, but still no understanding of the big picture. You haven't responded to this. -- Stathis Papaioannou

From stathisp at gmail.com Sun Feb 14 06:17:11 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 14 Feb 2010 17:17:11 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <789829.39304.qm@web36502.mail.mud.yahoo.com> References: <789829.39304.qm@web36502.mail.mud.yahoo.com> Message-ID: On 14 February 2010 11:02, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> > I feel comfortable saying 1) humans have >> consciousness, 2) some other organisms with highly developed >> nervous systems almost certainly have consciousness (chimps, >> etc) and 3) simple organisms that completely lack nervous >> systems do not have it. >> >> It's the same with computers. There aren't any yet which >> match the processing ability of a mouse brain, let alone that of a >> chimp or human. > > If and when someone builds a human-like nervous system in a box with sense organs, I'll call that box conscious but I won't call it a digital computer. Neither will you, because it won't be one. That's what is being attempted by Henry Markram's group. They may fail, but probably only because they are taking shortcuts in the modelling in order to reduce by orders of magnitude the required amount of processing. If they model a complete mouse brain, connect it to a mouse avatar or robot mouse, and it displays mouselike behaviour, that would be an indication at the very least that any separate mouse consciousness can only be epiphenomenal. As John Clark keeps reminding us, it would then be very difficult to explain how consciousness could have evolved. -- Stathis Papaioannou

From bbenzai at yahoo.com Sun Feb 14 09:39:30 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 14 Feb 2010 01:39:30 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <783130.46814.qm@web113618.mail.gq1.yahoo.com> Jeff Davis wrote: > But mostly I was thinking of basic human impulses and feelings: > hunger and the urge to feed, the reproductive impulse, > acquisitiveness (greed?), the various behaviors arising from the > instinct for survival: fight or flight, fear, anger, hatred, > dominance, submission, anxiety, depression, shock. These things are > primitive, and certainly pre-date the features of mammalian (i.e. > higher) brain function. I view these impulses as the foundation AND > BULK of animal and human behavior, and gut-centered, with higher-level > mental activity a more recent development. I wonder if feelings in > the gut aren't in fact real -- like pain -- and our awareness of them > just an additional fact, a mental fact. > > So what would consciousness be, what would a mind be without this > foundational context built up over three and a half billion years?
> That's why I think the gut (soma) may be critical in > defining mind. > > But, to be honest with you, I feel way out on a limb here. > I wouldn't argue with this view in principle, but would point out that the contribution of the actual body parts involved (actual gut tissue, muscles, etc.) is likely to be very, very small. What's almost certainly more important is the maps in the brain that represent these body parts, and they could be hooked up to 'fake' body parts that produce the same signals with no loss of, or change in, any mental functions, as long as the fake parts behaved in a manner consistent with the real equivalent (produced hunger signals when blood glucose is low, etc.). I definitely agree that the lower brain functions, to do with somatic sensing and control and emotional states, are probably a vital component in any attempt to build an artificial mind, and this seems to be largely neglected by the AI community. Maybe actual sex just isn't as sexy for them as maze-navigation! Ben Zaiboc From bbenzai at yahoo.com Sun Feb 14 13:41:48 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 14 Feb 2010 05:41:48 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: Message-ID: <348836.89645.qm@web113619.mail.gq1.yahoo.com> Stathis Papaioannou > On 14 February 2010 11:02, Gordon Swobe > > If and when someone builds a human-like nervous system > in a box with sense organs, I'll call that box conscious but > I won't call it a digital computer. Neither will you, > because it won't be one. > > That's what is being attempted by Henry Markram's group. > They may > fail, but probably only because they are taking shortcuts > in the > modelling in order to reduce by orders of magnitude the > required > amount of processing. If they model a complete mouse brain, > connect it > to a mouse avatar or robot mouse, and it displays mouselike > behaviour, > that would be an indication at the very least that any > separate > mouse consciousness can only be epiphenomenal. As John > Clark keeps > reminding us, it would then be very difficult to explain > how > consciousness could have evolved. Ah, but don't you see Stathis? Blue Brain is a *digital computer*. Therefore it can't possibly produce consciousness, because, you know, because it's *digital*. And digital computers can't produce consciousness. QED. Ben Zaiboc From hkeithhenson at gmail.com Sun Feb 14 14:45:48 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 14 Feb 2010 07:45:48 -0700 Subject: [ExI] extropy-chat Digest, Vol 77, Issue 26 In-Reply-To: References: Message-ID: > On 14 February 2010 11:38, Jeff Davis wrote: > Regarding the loss of limbs, kidney, gall bladder, stomach, lengths of > intestine, a lung, etc.... I agree the persona and consciousness > appear unaltered. In a brain transplant operation, you want to be the donor. Keith From msd001 at gmail.com Sun Feb 14 18:20:33 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 14 Feb 2010 13:20:33 -0500 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <348836.89645.qm@web113619.mail.gq1.yahoo.com> References: <348836.89645.qm@web113619.mail.gq1.yahoo.com> Message-ID: <62c14241002141020g2278ddb5r5ee36d550b5567b@mail.gmail.com> On Sun, Feb 14, 2010 at 8:41 AM, Ben Zaiboc wrote: > Ah, but don't you see Stathis? > Blue Brain is a *digital computer*. Therefore it can't possibly produce consciousness, because, you know, because it's *digital*. > And digital computers can't produce consciousness. 
Digital computers produce discrete consciousness. Quantum computers produce indeterminate consciousness until observed. From jonkc at bellsouth.net Sun Feb 14 18:28:33 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 14 Feb 2010 13:28:33 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <405824.5794.qm@web36506.mail.mud.yahoo.com> References: <405824.5794.qm@web36506.mail.mud.yahoo.com> Message-ID: <5F9A9DFE-77FA-44C4-B8AE-53B5496EB99C@bellsouth.net> Since my last post Gordon Swobe has posted 8 times. > I wonder for example why some people here have trouble with my observation that amoebas (and most organisms on earth) have intelligence but no consciousness. The answer is that unlike Swobe some people on this list have knowledge about how Evolution works and realize that intelligence without consciousness is biological gibberish. They also know it's rather dumb to talk about observing consciousness. > In my view the mind exists as a *high-level physical feature* of the nervous system. Rather like "fast" is a high-level feature of a racing car. > the mind does not exist as something separate from the brain Fast is separate from a racing car. > Can't we make a real distinction between the mental and the physical? Yes. > Most people think this way, including even many philosophical materialists who claim not to. But it's only the dualistic voice of Descartes speaking to us from beyond the grave. We are his intellectual descendants. I finally shook off that Cartesian illusion and now the world makes a lot more sense. I have not observed Swobe shaking off any of his illusions, Cartesian or otherwise; and just like anybody else he continues to use both the words "mind" and "brain", and in his usage he carefully distinguishes between the two. Swobe would never say "he had an operation to remove a mind tumor" or "I have changed my brain" but if he really wasn't a dualist he should be comfortable with both those sentences. > I feel comfortable saying 1) humans have consciousness I am quite certain that Swobe is NOT comfortable in saying that all humans are conscious, just humans that act intelligently. I am quite certain Swobe doesn't think sleeping people are conscious, or people in a deep coma, or dead people. > the Turing test does not measure the strong AI attributes that concern me. Nothing can measure "strong AI", not the Turing Test, not Evolution, and not even Gordon Swobe; and if you can't measure something Science has no use for it, although religion might. > It is true that I "project" consciousness onto other humans but I don't do this based solely on their ability to pass the Turing test or on their interactions with me. As above, I do it based also on their physiology. So before Gordon Swobe learned physiology in college he thought he was the only conscious being in the universe, but that changed when he learned physiology even though he freely and frequently tells us that neither he nor anybody else has any idea how consciousness is produced. Interestingly just a century ago when almost no physiology was known nobody thought other minds existed, and even today most are ignorant on the subject so they must believe they are alone too. John K Clark 
From lacertilian at gmail.com Sun Feb 14 20:18:55 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 14 Feb 2010 12:18:55 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <23758.31471.qm@web36506.mail.mud.yahoo.com> Message-ID: Stathis Papaioannou : > Software exists separately from hardware in the same way as an > architectural drawing exists separately from the building it depicts. When I read this, a billion neurons in unison screeched "NOOO!!" inside my skull. It was an interesting experience. Now that it's over, I find myself with a whole litany of much more verbose corrections and objections. I feel I must share. One: software does not exist separately from the underlying hardware. It is epiphenomenal, which in my terminology is synonymous with virtual and imaginary, meaning that it relies on a more real substrate to support its existence. When you pull the plug, the hardware only gets quieter; the software ceases to exist. Two: a blueprint is hardware, not software, unless it is encoded on a hard drive and displayed on a monitor. Even in this case, there is no relationship between the blueprint and the building except in the imagination of a mind considering the two of them. Demolishing the building has no effect whatsoever on the blueprint, except perhaps to increase the number of people giving it wistful looks. Three: in stark contrast to Gordon's views, I believe that software is capable of (and is in practice) far more than the mere depiction of things. A blueprint depicts a building, but a structural simulation instantiates an actual virtual building that obeys its own laws of physics (which, we hope, are similar to our own). There is a surprisingly subtle difference between these two things. A virtual building has no more relationship to the building it emulates than does a blueprint of that building. However, unlike the blueprint, the virtual building really is a building. You could put little virtual people in it if you wanted (and if you could figure out how to make virtual people to begin with). It's a difficult conclusion to convey because I didn't arrive at it through wholly rational means. As shorthand, I might refer to the two things as a depiction and a simulation. A depiction can only be related to what it depicts by a third party, which might be identical with the thing depicted (as in the case of looking at a picture of myself). A simulation has no such inherent limitation; it can relate itself to what it simulates, just as I can relate myself to a person whom I am doing a rather good impression of. This has nothing at all to do with whether or not simulator and simulated are identical. Obviously, they are not. Even a perfect simulation of a mind is not that mind. However, it would be *a* mind; it would be able to do everything that a mind can be expected to do, and be everything a mind could possibly be. Including conscious. I was careful to say "mind" instead of "brain", here. We could make a virtual brain that does everything a real brain does, but it would probably be a waste of processing power: there isn't much sense in granting the ability to be squished, unless you want specifically to perform various questionably-ethical stress tests on your imaginary brain construct. 
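Spencer's depiction/simulation distinction can be made concrete with a small code sketch. What follows is a minimal illustration in Python, not anyone's actual code from this thread; the names (blueprint, simulate_fall) and the toy physics are invented for the example. A depiction is inert data whose relationship to a building exists only in an observer's mind; a simulation carries its own state and update rule, so the virtual building does things on its own terms.

# A depiction: a static record of a building. Nothing ever happens in it;
# only an outside observer can relate it to a real or virtual building.
blueprint = {"floors": 3, "height_m": 9.0, "material": "brick"}

def simulate_fall(height_m, dt=0.001):
    """A simulation: state evolving under its own (toy) law of physics.
    Drop a virtual object from the virtual roof and step gravity forward."""
    g = 9.81                # assumed Earth-like virtual gravity, m/s^2
    y, v, t = height_m, 0.0, 0.0
    while y > 0.0:          # the virtual object obeys the virtual law
        v += g * dt
        y -= v * dt
        t += dt
    return t

print(simulate_fall(blueprint["height_m"]))   # roughly 1.35 s for a 9 m drop

However crude, the second object instantiates behaviour where the first merely records structure, which is the subtle difference between a blueprint and a virtual building that the post above is pointing at.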
From jrd1415 at gmail.com Sun Feb 14 22:25:00 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sun, 14 Feb 2010 15:25:00 -0700 Subject: [ExI] Semiotics and Computability In-Reply-To: <783130.46814.qm@web113618.mail.gq1.yahoo.com> References: <783130.46814.qm@web113618.mail.gq1.yahoo.com> Message-ID: On Sun, Feb 14, 2010 at 2:39 AM, Ben Zaiboc wrote: > What's almost certainly more important is the maps in the brain that represent these body parts, and they could be hooked up to 'fake' body parts that produce the same signals with no loss of, or change in, any mental functions, as long as the fake parts behaved in a manner consistent with the real equivalent (produced hunger signals when blood glucose is low, etc.) Yes. This solves the original problem -- which came about, as I see it, due to incompleteness in defining the problem, and a consequent incompleteness in the simulation -- by completing the simulation. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From emlynoregan at gmail.com Mon Feb 15 00:26:31 2010 From: emlynoregan at gmail.com (Emlyn) Date: Mon, 15 Feb 2010 10:56:31 +1030 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: <20100212221046.5.qmail@syzygy.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> Message-ID: <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> On 13 February 2010 08:40, Eric Messick wrote: > Spike writes: >>For instance, imagine giving your sweetheart a Valentine that says: >> >>Sweetheart, the depth of my love for you is equal to that which is induced >>by several milligrams of endorphine-precursor pheromones which are >>responsible for this class of emotions in humans. I mean it from the bottom >>of my brain. > > Once again, xkcd has this covered, with a comic about overly > scientific valentines: > > http://xkcd.com/701/ > > -eric I was listening to a physicist being interviewed on This American Life recently. He was talking about how he and his girlfriend, in the very early stages of their relationship, were talking about how great it was that they were in love and that they had found each other. IIRC, she asked him if he thought she was the only woman for him, and he considered it and said "I think you're 1 in 100,000". Apparently this started their first fight. Hell of a time for the rational machinery to kick in :-) -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From gts_2000 at yahoo.com Mon Feb 15 00:46:53 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 14 Feb 2010 16:46:53 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <580930c21002131717j6f50fa59k1e39bbe3cb0aee3b@mail.gmail.com> Message-ID: <409052.58624.qm@web36506.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stefano Vaj wrote: >> It is true that I "project" consciousness onto other >> humans but I don't do this based solely on their ability to >> pass the Turing test or on their interactions with me. As >> above, I do it based also on their physiology. 
> > If the ghost of one's wife, or her upload on a digital > computer, manages to persuade somebody that she is his wife, one will > identify her as his wife, irrespective of the physiology, if any, It seems to me a digital simulation of a person, i.e., an upload existing on a computer, will have no more reality than does a digital movie of that person on a computer today. Depictions of things, digital or otherwise, do not equal the things they depict no matter how complete and realistic the depiction. This does not preclude the possibility that a complete digital description of a person might serve as a reliable blueprint for reconstructing that person in material form, but I see that as a separate question. > Sure, somebody may refuse his conclusion for philosophical > and ultimately arbitrary reasons Come the singularity, some people will lose their grip on reality and find themselves believing such absurdities as that digital depictions of people have real mental states. A few lonely philosophers of my stripe will try in vain to restore their sanity. :) -gts From thespike at satx.rr.com Mon Feb 15 00:49:09 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 14 Feb 2010 18:49:09 -0600 Subject: [ExI] Valentine's probability factor In-Reply-To: <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> Message-ID: <4B789A05.8030806@satx.rr.com> On 2/14/2010 6:26 PM, Emlyn wrote: > she asked him if he thought she was the only woman for him, and he > considered it and said "I think you're 1 in 100,000". Apparently this > started their first fight. Hell of a time for the rational machinery > to kick in :-) Nah, it's a perfect test of compatibility with others (the 1 in 100K or fewer of them) of like mind. Barbara and I were boggled at the apparent unlikelihood of our having found each other (and we talked about this right from the start)--which would have been wildly unlikely before the internet, ExI, affordable international travel, etc. Suddenly we had the whole English speaking population of the planet to trawl through--rather than the workplace, university, church, club, etc--presorted by handy detectors for IQ, personality type, unusual interests, etc. Damien Broderick From x at extropica.org Mon Feb 15 00:28:06 2010 From: x at extropica.org (x at extropica.org) Date: Sun, 14 Feb 2010 16:28:06 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <783130.46814.qm@web113618.mail.gq1.yahoo.com> Message-ID: On Sun, Feb 14, 2010 at 2:25 PM, Jeff Davis wrote: > On Sun, Feb 14, 2010 at 2:39 AM, Ben Zaiboc wrote: > >> What's almost certainly more important is the maps in the brain that represent these body parts, and they could be hooked up to 'fake' body parts that produce the same signals with no loss of, or change in, any mental functions, as long as the fake parts behaved in a manner consistent with the real equivalent (produced hunger signals when blood glucose is low, etc.) > > Yes. This solves the original problem -- which came about, as I see > it, due to incompleteness in defining the problem, and a consequent > incompleteness in the simulation -- by completing the simulation. Seems to me your extension makes no qualitative difference in regard to the issue at hand. 
You already had much of the machinery, and you added some you realized you left out. Still nowhere in any formal description of that machinery, no matter how carefully you look, will you find any actual "meaning." You'll find only patterns of stimulus and response, syntactically complete, semantically vacant. I. You're missing the basic systems-theoretic understanding that the behavior of any system is meaningful only within the context of its environment of interaction. Take the "whole human", e.g. a description of everything within the boundaries of the skin, and execute its syntax and you won't get human-like behavior--unless you also provide (simulate) a suitable environment of interaction. II. Now go ahead and simulate the human, within an appropriate environment. You'll get human-like behavior, indistinguishable in principle from the real thing. Now you're back to the very correct point of Searle's Chinese Room Argument: There is no "meaning" to be found anywhere in the system, no matter how precise your simulation. Now Daniel Dennett or Thomas Metzinger or John Pollock (when feeling bold enough to say it) or Siddhārtha Gautama, or Jef will say "Of course. The "consciousness" you seek is a function of the observer, and you've removed the observer role from the system under observation. There is no *essential* consciousness. Never had it, never will. The very suggestion is incoherent: it can't be defined." The logic of the CRA is correct. But it reasons from a flawed premise: That the human organism has this somehow ontologically special thing called "consciousness." So restart the music, and the merry-go-round. I'm surprised no one's mentioned the Giant Look Up Table yet. - Jef From olga.bourlin at gmail.com Mon Feb 15 01:11:06 2010 From: olga.bourlin at gmail.com (Olga Bourlin) Date: Sun, 14 Feb 2010 17:11:06 -0800 Subject: [ExI] Valentine's probability factor In-Reply-To: <4B789A05.8030806@satx.rr.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> <4B789A05.8030806@satx.rr.com> Message-ID: On Sun, Feb 14, 2010 at 4:49 PM, Damien Broderick wrote: > On 2/14/2010 6:26 PM, Emlyn wrote: > >> she asked him if he thought she was the only woman for him, and he >> considered it and said "I think you're 1 in 100,000". Apparently this >> started their first fight. Hell of a time for the rational machinery >> to kick in :-) > > Nah, it's a perfect test of compatibility with others (the 1 in 100K or > fewer of them) of like mind. Barbara and I were boggled at the apparent > unlikelihood of our having found each other (and we talked about this right > from the start)--which would have been wildly unlikely before the internet, > ExI, affordable international travel, etc. Suddenly we had the whole English > speaking population of the planet to trawl through--rather than the > workplace, university, church, club, etc--presorted by handy detectors for > IQ, personality type, unusual interests, etc. > > Damien Broderick Me, three! (... even though Patrick and I met just a few years before the Internet swept into our homes) I was born in Nanking, China - my husband was born in Roscrea, Ireland. We met in Kansas City (attending a Free Inquiry magazine convention in 1991), and now reside in the emerald city of Seattle. 
I am neither young nor impressionable, but find myself amazed anew each day at the improbable event of Patrick and me having found each other. Olga From gts_2000 at yahoo.com Mon Feb 15 01:25:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 14 Feb 2010 17:25:45 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <744080.66241.qm@web36507.mail.mud.yahoo.com> --- On Sat, 2/13/10, Stathis Papaioannou wrote: > The watch performs the function of telling the time just > fine. It simulates a sundial or hour glass in this respect. However, > when a human tells the time there are thousands of extra nuances > which a watch just doesn't have. So the watch tells the time, but > it doesn't understand the concept of time. Comparing a watch with a > human is like comparing a nematode with a human, only more so. I compare my watch to a computer, not to a human. My digital watch has intelligence in the same sense as does my digital computer, and in the same sense as does the most powerful digital computer conceivable. I think you want me to believe that my watch has a small amount of consciousness by virtue of it having a small amount of intelligence. But I don't think that makes even a small amount of sense. It seems to me that my watch has no consciousness whatsoever, and that to say otherwise is to conflate science with science-fiction. > What would you say to the non-organic alien visitors who make the case > that since a nematode is not conscious, neither can a human be > conscious, since basically a human is just a more complex nematode? You beg the question of non-organic consciousness. As far as we know, "non-organic alien visitors" amounts to a completely meaningless concept. As for nematodes, I have no idea whether their primitive nervous systems support what I mean by consciousness. I doubt it but I don't know. I classify them in the gray area between unconscious amoebas and conscious humans. -gts From aware at awareresearch.com Mon Feb 15 01:30:16 2010 From: aware at awareresearch.com (Aware) Date: Sun, 14 Feb 2010 17:30:16 -0800 Subject: [ExI] Valentine's probability factor In-Reply-To: <4B789A05.8030806@satx.rr.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> <4B789A05.8030806@satx.rr.com> Message-ID: On Sun, Feb 14, 2010 at 4:49 PM, Damien Broderick wrote: > On 2/14/2010 6:26 PM, Emlyn wrote: > >> she asked him if he thought she was the only woman for him, and he >> considered it and said "I think you're 1 in 100,000". Apparently this >> started their first fight. Hell of a time for the rational machinery >> to kick in :-) > > Nah, it's a perfect test of compatibility with others (the 1 in 100K or > fewer of them) of like mind. Barbara and I were boggled at the apparent > unlikelihood of our having found each other (and we talked about this right > from the start)--which would have been wildly unlikely before the internet, > ExI, affordable international travel, etc. Suddenly we had the whole English > speaking population of the planet to trawl through--rather than the > workplace, university, church, club, etc--presorted by handy detectors for > IQ, personality type, unusual interests, etc. Happily works for me and Lizbeth. We found each other through an online matching service. I was her #1 match and she was my #2 match. 
It may have been in our favor that we both answered the profile questions honestly and in detail. :-) From gts_2000 at yahoo.com Mon Feb 15 01:54:22 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 14 Feb 2010 17:54:22 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <896902.79613.qm@web36507.mail.mud.yahoo.com> --- On Sun, 2/14/10, x at extropica.org wrote: > The logic of the CRA is correct. But it reasons from > a flawed premise: That the human organism has this somehow > ontologically special thing called "consciousness." It does not matter whether you believe in some "special thing called 'consciousness." Call it what you will, or call it nothing at all. It matters only that you understand that the man cannot grok the symbols by virtue of manipulating them according to the rules of syntax specified in the program. -gts From ablainey at aol.com Mon Feb 15 02:15:04 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Sun, 14 Feb 2010 21:15:04 -0500 Subject: [ExI] Valentine's probability factor In-Reply-To: <4B789A05.8030806@satx.rr.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com><710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> <4B789A05.8030806@satx.rr.com> Message-ID: <8CC7BFBD25C775F-108C-BD05@webmail-d073.sysops.aol.com> I just grabbed my wife's backside in a crowded pub. The fact that I didn't get a slap proved it was true love and that was 17 years ago. We were very different people back then and we have both changed so much since. We still like very different things but are fundamentally similar and we are still together. Go figure?!? If you get the basic compatibility right it will work regardless of matching up your likes and dislikes. -----Original Message----- From: Damien Broderick Nah, it's a perfect test of compatibility with others (the 1 in 100K or fewer of them) of like mind. Barbara and I were boggled at the apparent unlikelihood of our having found each other (and we talked about this right from the start)--which would have been wildly unlikely before the internet, ExI, affordable international travel, etc. Suddenly we had the whole English speaking population of the planet to trawl through--rather than the workplace, university, church, club, etc--presorted by handy detectors for IQ, personality type, unusual interests, etc. Damien Broderick From stathisp at gmail.com Mon Feb 15 09:28:24 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 15 Feb 2010 20:28:24 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <409052.58624.qm@web36506.mail.mud.yahoo.com> References: <580930c21002131717j6f50fa59k1e39bbe3cb0aee3b@mail.gmail.com> <409052.58624.qm@web36506.mail.mud.yahoo.com> Message-ID: On 15 February 2010 11:46, Gordon Swobe wrote: > It seems to me a digital simulation of a person, i.e., an upload existing on a computer, will have no more reality than does a digital movie of that person on a computer today. Depictions of things, digital or otherwise, do not equal the things they depict no matter how complete and realistic the depiction. > > This does not preclude the possibility that a complete digital description of a person might serve as a reliable blueprint for reconstructing that person in material form, but I see that as a separate question. 
> >> Sure, somebody may refuse his conclusion for philosophical >> and ultimately arbitrary reasons > > Come the singularity, some people will lose their grip on reality and find themselves believing such absurdities as that digital depictions of people have real mental states. A few lonely philosophers of my stripe will try in vain to restore their sanity. :) You keep repeating this as a fact but you don't explain why a digital depiction of a person won't have real mental states. A pile of bricks can duplicate the mass of a person; a car can duplicate the speed of a person; an artificial joint can duplicate the function of a human's joint. These devices are all very different from the thing they are copying. Why should the mind resist copying in anything other than the original substrate? -- Stathis Papaioannou From stathisp at gmail.com Mon Feb 15 09:43:57 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 15 Feb 2010 20:43:57 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <744080.66241.qm@web36507.mail.mud.yahoo.com> References: <744080.66241.qm@web36507.mail.mud.yahoo.com> Message-ID: On 15 February 2010 12:25, Gordon Swobe wrote: > --- On Sat, 2/13/10, Stathis Papaioannou wrote: > >> The watch performs the function of telling the time just >> fine. It simulates a sundial or hour glass in this respect. However, >> when a human tells the time there are thousands of extra nuances >> which a watch just doesn't have. So the watch tells the time, but >> it doesn't understand the concept of time. Comparing a watch with a >> human is like comparing a nematode with a human, only more so. > > I compare my watch to a computer, not to a human. My digital watch has intelligence in the same sense as does my digital computer, and in the same sense as does the most powerful digital computer conceivable. > > I think you want me to believe that my watch has a small amount of consciousness by virtue of it having a small amount of intelligence. But I don't think that makes even a small amount of sense. It seems to me that my watch has no consciousness whatsoever, and that to say otherwise is to conflate science with science-fiction. If you don't have a problem with a continuous increase in intelligence and consciousness between a nematode and a human, then why do you have a problem with a continuous increase in intelligence and consciousness between a watch and an AI of the future? >> What would you say to the non-organic alien visitors who make the case >> that since a nematode is not conscious, neither can a human be >> conscious, since basically a human is just a more complex nematode? > > You beg the question of non-organic consciousness. As far as we know, "non-organic alien visitors" amounts to a completely meaningless concept. What?? > As for nematodes, I have no idea whether their primitive nervous systems support what I mean by consciousness. I doubt it but I don't know. I classify them in the gray area between unconscious amoebas and conscious humans. At some point, either gradually or abruptly, consciousness will happen in the transition from nematode to human or watch to AI. 
-- Stathis Papaioannou From gts_2000 at yahoo.com Mon Feb 15 13:49:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 05:49:20 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: Message-ID: <930797.89044.qm@web36503.mail.mud.yahoo.com> --- On Mon, 2/15/10, Stathis Papaioannou wrote: > You keep repeating this as a fact but you don't explain why > a digital depiction of a person won't have real mental states. If I make a jpeg of you with my digital camera, that digital depiction of you will have no mental states. If I make a digital movie of you on my webcam, that digital depiction of you will have no mental states. A complete three-dimensional animated digital depiction of you made with futuristic digital simulation technology will amount to just another kind of depiction of you, and so it will likewise have no mental states. It does not matter whether we create our depictions of things on the walls of caves or on computers. Depictions of things do not equal the things they depict. -gts From gts_2000 at yahoo.com Mon Feb 15 15:28:29 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 07:28:29 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <153083.39349.qm@web36503.mail.mud.yahoo.com> --- On Mon, 2/15/10, Stathis Papaioannou wrote: >> I think you want me to believe that my watch has a >> small amount of consciousness by virtue of it having a small >> amount of intelligence. But I don't think that makes even a >> small amount of sense. It seems to me that my watch has no >> consciousness whatsoever, and that to say otherwise is to >> conflate science with science-fiction. > > If you don't have a problem with a continuous increase in > intelligence and consciousness between a nematode and a human, then why > do you have a problem with a continuous increase in intelligence and > consciousness between a watch and an AI of the future? I know a priori that the human nervous system supports consciousness. I cannot say anything like that about my watch or about computers without stepping outside the bounds of science to science-fiction. >> You beg the question of non-organic consciousness. As >> far as we know, "non-organic alien visitors" amounts to a >> completely meaningless concept. > > What?? You ask what I would say to non-organic alien visitors, and I suppose you assume those non-organic alien visitors have consciousness. But non-organic consciousness is what is at issue here. >> As for nematodes, I have no idea whether their >> primitive nervous systems support what I mean by >> consciousness. I doubt it but I don't know. I classify them >> in the gray area between unconscious amoebas and conscious >> humans. > > At some point, either gradually or abruptly, consciousness > will happen in the transition from nematode to human or watch to AI. I consider it a scientific fact that consciousness arises somewhere between the nematode and the human. But only in science-fiction does consciousness happen in digital watches or digital computers. 
-gts From natasha at natasha.cc Mon Feb 15 15:12:52 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 15 Feb 2010 09:12:52 -0600 Subject: [ExI] Valentine's probability factor In-Reply-To: <8CC7BFBD25C775F-108C-BD05@webmail-d073.sysops.aol.com> References: <226303.69206.qm@web113612.mail.gq1.yahoo.com> <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com><710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com><4B789A05.8030806@satx.rr.com> <8CC7BFBD25C775F-108C-BD05@webmail-d073.sysops.aol.com> Message-ID: Emlyn, Damien, Olga, Aware, Ablainey -- all such beautiful stories! Natasha Vita-More _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of ablainey at aol.com Sent: Sunday, February 14, 2010 8:15 PM To: extropy-chat at lists.extropy.org Subject: Re: [ExI] Valentine's probability factor I just grabbed my wife's backside in a crowded pub. The fact that I didn't get a slap proved it was true love and that was 17 years ago. We were very different people back then and we have both changed so much since. We still like very different things but are fundamentally similar and we are still together. Go figure?!? If you get the basic compatibility right it will work regardless of matching up your likes and dislikes. -----Original Message----- From: Damien Broderick Nah, it's a perfect test of compatibility with others (the 1 in 100K or fewer of them) of like mind. Barbara and I were boggled at the apparent unlikelihood of our having found each other (and we talked about this right from the start)--which would have been wildly unlikely before the internet, ExI, affordable international travel, etc. Suddenly we had the whole English speaking population of the planet to trawl through--rather than the workplace, university, church, club, etc--presorted by handy detectors for IQ, personality type, unusual interests, etc. Damien Broderick From gts_2000 at yahoo.com Mon Feb 15 16:10:30 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 08:10:30 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <466012.12578.qm@web111205.mail.gq1.yahoo.com> Message-ID: <963975.24812.qm@web36502.mail.mud.yahoo.com> --- On Mon, 2/15/10, Christopher Luebcke wrote: > If my understanding of the CRA is > correct (it may not be), it seems to me that Searle is > arguing that because one component of the system does not > understand the symbols, the system doesn't understand the > symbols. This to me is akin to claiming that because my > fingers do not understand the words they are currently > typing out, neither do I. Searle speaks for himself: My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. 
All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him. Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html -gts From cluebcke at yahoo.com Mon Feb 15 07:54:13 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Sun, 14 Feb 2010 23:54:13 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <409052.58624.qm@web36506.mail.mud.yahoo.com> References: <409052.58624.qm@web36506.mail.mud.yahoo.com> Message-ID: <841756.99168.qm@web111212.mail.gq1.yahoo.com> Thanks for all the responses to my queries. At least among the respondents, it seems that there are not sufficiently-agreed-upon definitions of "consciousness" or "intelligence" (nor, I suspect, of "mind") for truly effective communication to take place. Moreover, it seems that there is wide disagreement over how "intelligence" and "consciousness" are related to one another. I am no cognitive scientist, biologist or psychologist (or any sort of useful -ist) and I wouldn't be nearly so bold as to propose definitions of these terms that I think are "correct". I would, however, humbly suggest that if the application of these terms is important, yet their definitions are not agreed-upon, it may be that the definitions can be decomposed into small enough constituent parts that a fruitful conversation could be had about the genesis, relationship, importance and reproducibility of those constituents. It does seem, from what I've read, that there is no consensus definition of "intelligence" or "consciousness" even among the experts in these fields. Assuming that is the case, then the definitions could vary so widely that the statements "Intelligence cannot exist without consciousness" and "Intelligence can exist without consciousness" could in fact both be true, because the persons making these statements could have divergent enough definitions for "intelligence" and "consciousness" that the two statements, inflated to replace the terms in question with their intended meanings, might not be contradictory. I won't be quite so humble as to not give my two cents, though, which are essentially that if it can't be detected or measured, it's not fruitful to have a debate about anything empirical relating to it. In the can-be-measured category I believe we can place "intelligence", as the various definitions, while divergent, do seem to share an empirical bent. In the can't-be-measured category I suspect we'll find "consciousness"--though we may well find that several of the things we're thinking of when we say "consciousness" can in fact be measured or detected. 
Cheers, Chris From cluebcke at yahoo.com Mon Feb 15 08:13:50 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Mon, 15 Feb 2010 00:13:50 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <896902.79613.qm@web36507.mail.mud.yahoo.com> References: <896902.79613.qm@web36507.mail.mud.yahoo.com> Message-ID: <466012.12578.qm@web111205.mail.gq1.yahoo.com> If my understanding of the CRA is correct (it may not be), it seems to me that Searle is arguing that because one component of the system does not understand the symbols, the system doesn't understand the symbols. This to me is akin to claiming that because my fingers do not understand the words they are currently typing out, neither do I. If a system is said to understand symbols, it does not follow that all components of the system understand the symbols. My pinky finger, or a particular neuron, or a transistor, or a man in a room following orders about moving squiggly cards around, need not understand symbols for the system they compose a part of to understand symbols. It is the system as a whole that is said to understand symbols, not necessarily any of the parts. As a demonstration of my point, I ask you to simply modify the CRA a bit. You place a Chinese-speaking person in the room, and don't provide him any of the rules. He executes the test perfectly. Surely you don't then draw the conclusion that the room does indeed understand symbols, do you? (It is also, I think, far from clear that such a system as the CRA could be executed by a finite set of discrete rules.) As a final note, forgive me for repeatedly saying "understanding symbols", but while imperfect, I think it'll be less prone to misunderstandings due to ambiguity than "intelligence" or "consciousness". ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Sun, February 14, 2010 5:54:22 PM Subject: Re: [ExI] Semiotics and Computability --- On Sun, 2/14/10, x at extropica.org wrote: > The logic of the CRA is correct. But it reasons from > a flawed premise: That the human organism has this somehow > ontologically special thing called "consciousness." It does not matter whether you believe in some "special thing called 'consciousness." Call it what you will, or call it nothing at all. It matters only that you understand that the man cannot grok the symbols by virtue of manipulating them according to the rules of syntax specified in the program. -gts From gts_2000 at yahoo.com Mon Feb 15 18:10:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 10:10:21 -0800 (PST) Subject: [ExI] glutamine and life-extension Message-ID: <505300.77015.qm@web36506.mail.mud.yahoo.com> Stathis, All this talk of neurons reminds me of a paper I wrote circa 1999: Glutamine Based Growth Hormone Releasing Products: A Bad Idea? http://www.newtreatments.org/loadlocal.php?hid=974 My article above created a surprising amount of controversy among life-extensionists. I closed down my website, but recently found my paper republished on the site above without my permission. Thought you might find it interesting given your profession and the general theme. 
-gts From avantguardian2020 at yahoo.com Mon Feb 15 18:12:14 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Mon, 15 Feb 2010 10:12:14 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence Message-ID: <955672.41841.qm@web65607.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stathis Papaioannou > To: gordon.swobe at yahoo.com; ExI chat list > Sent: Sat, February 13, 2010 2:04:12 PM > Subject: Re: [ExI] Newbie Question: Consciousness and Intelligence > > On 14 February 2010 08:23, Gordon Swobe <gts_2000 at yahoo.com> wrote: > > It seems to me obvious that amoebas and other single-celled organisms have some > intelligence: they can find food and procreate and so on. But because they lack > nervous systems, it looks to me like these simple creatures live out their > entire lives unconsciously. Intelligence and consciousness are not independent phenomena. I tend to view consciousness as awareness. When somebody is anesthetised, they become unconscious because they lack awareness of external or internal phenomena. In so far as an amoeba is more aware of its environment than a deeply anesthetised human, I would say it does have a small measure of consciousness. If one puts a drop of acid on the skin of an anesthetised person that person would not react. If I put a droplet of acid on a microscope slide adjacent to an amoeba, the amoeba would run as fast and as far as its little pseudopodia could carry it. Amoebae do have sensory apparatus and they do process sensory information. Their senses are not as rich as those of people, being mostly chemical receptors and the like, but the same could be said of a single neuron. Indeed an amoeba is probably more intelligent/conscious than an individual neuron. Why? I would bet an amoeba could survive in someone's brain longer than a neuron could survive in a pond. At what point does intelligence lead to consciousness? That is like asking, "at what temperature does something get hot?" It's completely relative. For a person whose body temperature is 37 degrees Centigrade, boiling water is hot. But a person is hotter relative to the rings of Saturn than boiling water is relative to a person. I am starting to think that a similar argument could be used for intelligence and consciousness. And consequently zombies are bogus. > What about flatworms? They are capable of learning. That has been demonstrated. http://www.mnstate.edu/wisenden/reprint%20pdfs/2001%20planaria%20An%20Beh%2062%20761-766.pdf Incidentally, I don't think consciousness is either an evolutionary "spandrel" or simply an epiphenomenon. Women pointedly do choose mates who *pay attention* to them. As such natural selection does assess consciousness as a fitness function via sexual selection if not by the more primitive fitness function of figuring out that one is being stalked by a predator. Stuart LaForge "Never express yourself more clearly than you think." 
- Niels Bohr From cluebcke at yahoo.com Mon Feb 15 16:47:22 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Mon, 15 Feb 2010 08:47:22 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <963975.24812.qm@web36502.mail.mud.yahoo.com> References: <963975.24812.qm@web36502.mail.mud.yahoo.com> Message-ID: <752079.20710.qm@web111202.mail.gq1.yahoo.com> I've just read the entirety of Searle's response to the systems argument (thank you for the link), and near as I can tell, it is that the system as a whole is not intelligent because, unlike the man, it doesn't really understand the symbols. Yet determining whether a given system understands symbols ought to be what we're finding out, not what we presume prior to beginning the experiment. Double-quoting Searle: "All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him" Again, how one determines whether a system--a man, a room, a man in a room (or a savant in the woods)--"understands symbols" is crucial to the point, yet I find nothing in what I've read so far where Searle sets out the criteria by which he makes this judgement. It seems to me that Searle is rather making an a priori argument, believing that he (and we) all know ahead of time that the system comprised of the man and the rules is not intelligent, and then scoffs at any suggestion that it might be. My response to the quote above is twofold: 1. I believe that understanding symbols is a process, not a state, and therefore to talk about what things comprise a system isn't nearly so useful as to talk about what it is they're doing 2. I believe that it is fruitless to debate whether a system understands symbols without having a common agreement on a. What it means to "understand symbols" b. How we measure or detect whether a system (a man, a room, a computer) can understand symbols To me the problem is fundamentally empirical. ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Mon, February 15, 2010 8:10:30 AM Subject: Re: [ExI] Semiotics and Computability --- On Mon, 2/15/10, Christopher Luebcke wrote: > If my understanding of the CRA is > correct (it may not be), it seems to me that Searle is > arguing that because one component of the system does not > understand the symbols, the system doesn't understand the > symbols. This to me is akin to claiming that because my > fingers do not understand the words they are currently > typing out, neither do I. Searle speaks for himself: My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him. Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. 
It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html -gts From lacertilian at gmail.com Mon Feb 15 18:43:11 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 10:43:11 -0800 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) Message-ID: Gordon Swobe : > I know a priori that the human nervous system supports consciousness. I cannot say anything like that about my watch or about computers without stepping outside the bounds of science to science-fiction. (snip) > I consider it a scientific fact that consciousness arises somewhere between the nematode and the human. But only in science-fiction does consciousness happen in digital watches or digital computers. Well, clearly you are wrong to consider that a scientific fact. Science is based on experiments, not on reason. A priori knowledge has no scientific validity. If you're going to bring the scientific method into this, then the burden is on you to provide an experiment which tests for the existence of consciousness. The Chinese Room does nothing like this. The Turing test at least looks like it does, but you have stated repeatedly that it will give false positives when exposed to a weak AI. The way I see it, Gordon, you have only two choices: stay within the realm of a priori reason and logic, wherein intuition and subjective experience may be invoked as compelling pieces of evidence; or take the argument into a purely scientific frontier, wherein only objective measurements count for anything. Treating the two with a mix-and-match attitude is disingenuous at best, fallacious at worst. A little bit is unavoidable, but it disappoints me to see you doing so in such a flagrantly shameless way. From gts_2000 at yahoo.com Mon Feb 15 18:54:11 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 10:54:11 -0800 (PST) Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <955672.41841.qm@web65607.mail.ac4.yahoo.com> Message-ID: <877779.10984.qm@web36502.mail.mud.yahoo.com> --- On Mon, 2/15/10, The Avantguardian wrote: > I tend to view consciousness as awareness. You might recall that you and I actually discussed this very subject here a number of years ago, and that we agreed there exists a sense in which amoebas and similar organisms have awareness. We made a distinction then, or at least I did, between the awareness of amoebas and the kind of consciousness that I refer to here several years later. Consciousness, as I mean it today, entails the ability to have conscious intentional states. That is, it entails the ability to have something consciously "in mind" as opposed to merely having the ability to respond intelligently to the environment in the way that amoebas do. So far as we know, intentionality so defined requires a nervous system and most likely a well-developed brain, both of which amoebas lack. So then either amoebas have no consciousness, so defined, or else they have mysterious nervous systems that we cannot see or understand. 
-gts From lacertilian at gmail.com Mon Feb 15 20:04:18 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 12:04:18 -0800 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <877779.10984.qm@web36502.mail.mud.yahoo.com> References: <955672.41841.qm@web65607.mail.ac4.yahoo.com> <877779.10984.qm@web36502.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Consciousness, as I mean it today, entails the ability to have conscious intentional states. I'll note for the record that I now have the concept of "consciousness" clear enough in my mind to state, confidently, that consciousness, intelligence, and intentionality are all completely different things. Artificial systems could be made to possess any one in the absence of the other two. However, I am by now convinced that it is not even theoretically possible to scientifically confirm that a given system has consciousness. This leads to the inevitable conclusion that consciousness does not exist, by my definition: anything which exists can potentially influence everything else that exists. All things in existence can, *by definition*, be measured. So, I create new conscious systems all the time. Pretty much every time I make use of a toilet. I defy any of you to prove otherwise. I've even started a separate thread for it! From gts_2000 at yahoo.com Mon Feb 15 20:12:32 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 12:12:32 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: Message-ID: <275598.51562.qm@web36502.mail.mud.yahoo.com> --- On Mon, 2/15/10, Spencer Campbell wrote: > If you're going to bring the scientific method into this, > then the burden is on you to provide an experiment which tests for > the existence of consciousness. Can you see these words, Spencer? If so then you have what I mean by consciousness. -gts From sparge at gmail.com Mon Feb 15 20:16:08 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 15 Feb 2010 15:16:08 -0500 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: References: <955672.41841.qm@web65607.mail.ac4.yahoo.com> <877779.10984.qm@web36502.mail.mud.yahoo.com> Message-ID: On Mon, Feb 15, 2010 at 3:04 PM, Spencer Campbell wrote: > > However, I am by now convinced that it is not even theoretically > possible to scientifically confirm that a given system has > consciousness. This leads to the inevitable conclusion that > consciousness does not exist, by my definition: anything which exists > can potentially influence everything else that exists. All things in > existence can, *by definition*, be measured. Wow, what an excellent way to create interminable threads--just start using your own definitions for common terms. -Dave From scerir at libero.it Mon Feb 15 20:37:37 2010 From: scerir at libero.it (scerir) Date: Mon, 15 Feb 2010 21:37:37 +0100 (CET) Subject: [ExI] QET Message-ID: <8885036.1601541266266257004.JavaMail.defaultUser@defaultHost> Masahiro Hotta (Tohoku University, Japan) has come up with an exotic idea. Why not use the known quantum principles to teleport energy? 
The idea is not so simple and not so sound, but more or less this process of teleportation seems to involve making a measurement on one particle (of an entangled pair of particles), which would inject quantum energy into the system, then it seems to involve making a measurement on the second and distant particle (of the entangled pair), and this would extract the original energy, while retaining relativistic causality and conservation of energy. http://www.technologyreview.com/blog/arxiv/24759/ http://arxiv.org/abs/0908.2674 http://arxiv.org/abs/0911.3430 http://arxiv.org/abs/1002.0200 No idea about this specific process of Dr. Hotta. But it seems to me that the strange nature of quantum mechanical phenomena, and in particular quantum non-locality and quantum non-separability, could be easily extended - at least heuristically - to different contexts (e.g. gravitational fields) to get new and relevant results (e.g. non-local gravitational fields). See e.g. Adrian Kent here http://arxiv.org/abs/gr-qc/0507045 From lacertilian at gmail.com Mon Feb 15 20:38:00 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 12:38:00 -0800 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <275598.51562.qm@web36502.mail.mud.yahoo.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Can you see these words, Spencer? If so then you have what I mean by consciousness. Great! Now, how do I know that you do? From gts_2000 at yahoo.com Mon Feb 15 20:49:20 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 12:49:20 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: Message-ID: <431085.67926.qm@web36504.mail.mud.yahoo.com> --- On Mon, 2/15/10, Spencer Campbell wrote: >> Can you see these words, Spencer? If so then you have >> what I mean by consciousness. > > Great! Now, how do I know that you do? Sorry, I forgot to mention that nothing in the universe aside from you has consciousness. -gts From steinberg.will at gmail.com Mon Feb 15 21:15:21 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 15 Feb 2010 16:15:21 -0500 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <431085.67926.qm@web36504.mail.mud.yahoo.com> References: <431085.67926.qm@web36504.mail.mud.yahoo.com> Message-ID: <4e3a29501002151315o33b97318w676b530f396ad850@mail.gmail.com> I don't really understand what some of you think with regard to consciousness. "Is consciousness real?" is a bunk question. Some people take physicality and extend it into the notion of consciousness not being real, which I don't understand. If consciousness is not real, what would it be like if it were real? If real is being aware of something, I am surely aware of my own awareness. Even in the event of the mind's total epiphenomenalism, our being able to differentiate between consciousness and unconsciousness means that some physical factor is changing, GIVEN PHYSICALISM. Consciousness, whether of discrete or indiscrete locus, whether of a quantum or classical nature, is an observation based on physical or mathematical factors. The axiom of consciousness can be seen as on par with the anthropic principle or the fact that we can know G is true without proving it; it is unprovable because the means of proof lie outside the system. Proof is based on awareness of observation, and consciousness IS awareness. It is our G, as has been supposed by a few forward-minded folk. 
When you ask for a proof of consciousness, you validate the existence of it in your question. I suggest rephrasing it in the manner you mean, which seems to boil down to "is any sort of dualism real?", an important question but not the same one. From jonkc at bellsouth.net Mon Feb 15 20:56:01 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 15 Feb 2010 15:56:01 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <409052.58624.qm@web36506.mail.mud.yahoo.com> References: <409052.58624.qm@web36506.mail.mud.yahoo.com> Message-ID: Since my last post Gordon Swobe has posted 10 times. > > Come the singularity, some people will lose their grips on reality and find themselves believing such absurdities as that digital depictions of people have real mental states. A few lonely philosophers of my stripe will try in vain to restore their sanity. As far as the future is concerned it really doesn't matter if Swobe's ideas are right or wrong, either way they're as dead as the Dodo. Even if he's 100% right and I am 100% wrong, people with my ideas will have vastly more influence than people like him because we will not be held back by superstitious ideas about "THE ORIGINAL". So it's pedal to the metal upgrading, Jupiter brain ahead. Swobe just won't be able to keep up with the electronic competition. Only a few axons in the brain can send signals as fast as 100 meters per second; non-myelinated axons are only able to go about 1 meter per second. Light moves at 300,000,000 meters per second (see the quick latency sketch below). Perhaps after the singularity the more conservative and superstitious among us could still survive in some little backwater somewhere, like the Amish do today, but I doubt it. > I think you want me to believe that my watch has a small amount of consciousness by virtue of it having a small amount of intelligence. But I don't think that makes even a small amount of sense. It seems to me that my watch has no consciousness I'm not surprised Swobe can't make sense of it all, nothing in the Biological sciences makes any sense without Evolution, and he has shown a profound ignorance not only of that theory but of the fossil record in general. Evolution found it far harder to come up with intelligence than consciousness: the brain structures that produce the basic emotions we share with many other animals are many hundreds of millions of years old, while the higher brain structures that produce language, mathematics and abstract thought in general, things that make humans unique, are less than a million years old and possibly much less. Swobe does not use his higher brain structures to think with and prefers to think with his gut; but many animals have an intestinal tract and to my knowledge none of them are particularly good philosophers. > Consciousness, as I mean it today, entails the ability to have conscious intentional states. That is, it entails the ability to have something consciously "in mind" So consciousness means the ability to be conscious, that is to say the ability to consciously think about stuff. Thank you so much for those words of wisdom! > If I make a jpeg of you with my digital camera, that digital depiction of you will have no mental states. Swobe may very well be right in this particular instance, but it illustrates the useless nature of the grotesque exercises he gives the grandiose name "thought experiment".
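To put rough numbers on that gap, here is a back-of-the-envelope sketch in Python; the 0.1 meter path length is just an assumed brain-scale distance, not a measured figure:

    # Signal latency over an assumed 0.1 m path, at the speeds quoted above.
    path_m = 0.1
    for name, speed_m_per_s in [("non-myelinated axon", 1.0),
                                ("fast myelinated axon", 100.0),
                                ("light / electronic signalling", 3.0e8)]:
        print(f"{name}: {path_m / speed_m_per_s:.1e} s")

That works out to roughly 0.1 s, 1 ms, and 0.3 ns respectively: six or more orders of magnitude in favour of the electronics.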
Swobe has no way to directly measure the mental states even of his fellow human beings, much less those of a digital camera; and yet over the last few months he has made grand pronouncements about the mental states of literally hundreds of things. To add insult to injury the mental state of things is exactly what he's trying to prove; he just doesn't understand that saying X has no consciousness is not the same as proving X has no consciousness. > The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. Swobe admits, and in fact seems delighted by the fact, that he has absolutely no idea what causes consciousness; nevertheless he thinks he can always determine a priori what has consciousness and what does not, and it has nothing to do with the way they behave. The conjunction of a person with bits of paper might display intelligence, in fact there is no doubt that it could, but it could never be conscious because, because, well just because; but Swobe thinks 3 pounds of grey goo being conscious is perfectly logical. Can Swobe explain why one thing is ridiculous and the other logical? Nope, it's just that he's accustomed to one and not the other. That's it. > Depictions of things, digital or otherwise, do not equal the things they depict Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did. > the man cannot grok the symbols by virtue of manipulating them according to the rules of syntax Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did. > Depictions of things do not equal the things they depict. Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did. John K Clark From lacertilian at gmail.com Mon Feb 15 21:45:48 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 13:45:48 -0800 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <4e3a29501002151315o33b97318w676b530f396ad850@mail.gmail.com> References: <431085.67926.qm@web36504.mail.mud.yahoo.com> <4e3a29501002151315o33b97318w676b530f396ad850@mail.gmail.com> Message-ID: 2010/2/15 Will Steinberg : > When you ask for a proof of consciousness, you validate the existence of it > in your question. I suggest rephrasing it in the manner you mean, which > seems to boil down to "is any sort of dualism real?", an important question > but not the same one. But I don't mean it in that manner! I'm not asking for proof that consciousness exists *at all*. I'm asking for proof that consciousness exists *in a given system*. So, thought experiment. I'm going to jump in a box, and then ask you to prove that the box contains consciousness. Can you? Can anyone? (Obviously, if you jump in the box instead, I can not. But perhaps this only means that I make a bad metrologist.)
From stefano.vaj at gmail.com Mon Feb 15 22:02:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Feb 2010 23:02:50 +0100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <275598.51562.qm@web36502.mail.mud.yahoo.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> Message-ID: <580930c21002151402r6b60b32es4188cb333aac8c61@mail.gmail.com> On 15 February 2010 21:12, Gordon Swobe wrote: > Can you see these words, Spencer? If so then you have what I mean by consciousness. Come on. Any trivial detector can differentiate black words on a white background. The burden of proof concerns the ineffable difference that one claims for one's own perception (and for the other entities one cares to project that difference onto). -- Stefano Vaj From stefano.vaj at gmail.com Mon Feb 15 22:12:15 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Feb 2010 23:12:15 +0100 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: References: Message-ID: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> On 15 February 2010 19:43, Spencer Campbell wrote: > If you're going to bring the scientific method into this, then the > burden is on you to provide an experiment which tests for the > existence of consciousness. This is an unreasonable demand. The scientific method cannot offer evidence of something which in somebody's view is not phenomenal by definition, but is an a priori of his worldview. -- Stefano Vaj From steinberg.will at gmail.com Mon Feb 15 22:28:16 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 15 Feb 2010 17:28:16 -0500 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <580930c21002151402r6b60b32es4188cb333aac8c61@mail.gmail.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> <580930c21002151402r6b60b32es4188cb333aac8c61@mail.gmail.com> Message-ID: <4e3a29501002151428t5d0bd99coa29a9495e2e3bc6e@mail.gmail.com> Sorry, I assumed by Gordon's response that the argument was about its true existence. But it would seem that, since behavioral observation cannot prove zombism, one needs to know the mental structure or emergent properties leading to consciousness, which leaves us in the same place with the same question of "What is consciousness?", though I'm sure there exist some tests that happen to have scores that correlate very strongly with consciousness. From stefano.vaj at gmail.com Tue Feb 16 00:24:20 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Feb 2010 01:24:20 +0100 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <431764.52879.qm@web36501.mail.mud.yahoo.com> Message-ID: <580930c21002151624m41cefb81x57ce57745c46560c@mail.gmail.com> On 14 February 2010 04:41, Stathis Papaioannou wrote: > The watch performs the function of telling the time just fine. It > simulates a sundial or hourglass in this respect. Why, you should know by now that this does not mean that it emulates the qualia of a sundial or hourglass...
:-D -- Stefano Vaj From lacertilian at gmail.com Tue Feb 16 01:09:29 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 17:09:29 -0800 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> References: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> Message-ID: Stefano Vaj : > On 15 February 2010 19:43, Spencer Campbell wrote: > > If you're going to bring the scientific method into this, then the > > burden is on you to provide an experiment which tests for the > > existence of consciousness. > > This is an unreasonable demand. The scientific method cannot offer > evidence of something which in somebody's view is not phenomenal by > definition, but is an a priori of his worldview. For reference, the remark that spurred this reaction from me was: Gordon Swobe : > I consider it a scientific fact that consciousness arises between the nematode and the human. To demand that a person provide a scientific basis for what they themselves consider a scientific fact is the very definition* of reasonableness! If he had said "objective fact" instead, I would only have been very dubious about it. It's very close to synonymous, but just fuzzy enough to avoid the fundamental error made by conflating "scientific" with, say, "inarguable". In case it isn't clear: the fact in question is not objective, scientific, or inarguable. Certainly not inarguable. Not on Extropy-Chat, at least. *No not really. I'm being... metaphorical! Yeah, let's go with that. From gts_2000 at yahoo.com Tue Feb 16 01:36:11 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 15 Feb 2010 17:36:11 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: Message-ID: <742345.22234.qm@web36504.mail.mud.yahoo.com> To deny the existence of consciousness one must deny one's own awareness of one's own experience. Some muddled thinkers try to do this. I find it hard to take them seriously. -gts From lacertilian at gmail.com Tue Feb 16 02:01:36 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Mon, 15 Feb 2010 18:01:36 -0800 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <742345.22234.qm@web36504.mail.mud.yahoo.com> References: <742345.22234.qm@web36504.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > To deny the existence of consciousness one must deny one's own awareness of one's own experience. Some muddled thinkers try to do this. I find it hard to take them seriously. To deny consciousness in general, yes. To deny a specific case of consciousness, no. We've been over this! I can't deny that I myself am conscious (according to Pollock). Such would be irresponsible at best. However, I can easily deny that my computer is conscious; maybe not as easily as Searle, but still pretty easily. By exactly the same token, I can deny that you (the reader) are conscious. For this thread only, consider me a solipsist. I hold that only Spencer Campbell brains are capable of sustaining consciousness. I grant that other human brains are similar to Spencer Campbell brains, in precisely the same way that a correctly-programmed digital computer is similar to SC brains, but I reject the notion that they generate even the most rudimentary of subjective experiences. What logic could you possibly bring to bear on a person with this set of beliefs? Call me irrational if you will, but you can't call me illogical. Solipsism is a self-consistent position.
I don't want it to be, so I would appreciate it if someone could show me otherwise, but I am presently convinced that it is. Have I gone over my eight-post limit today? I should start keeping track of that. From avantguardian2020 at yahoo.com Tue Feb 16 03:45:04 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Mon, 15 Feb 2010 19:45:04 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: References: <742345.22234.qm@web36504.mail.mud.yahoo.com> Message-ID: <948496.39262.qm@web65615.mail.ac4.yahoo.com> ----- Original Message ---- > From: Spencer Campbell > To: ExI chat list > Sent: Mon, February 15, 2010 6:01:36 PM > Subject: Re: [ExI] The alleged existence of consciousness > I can't deny that I myself > am conscious (according to Pollock). Such would be irresponsible at best. > However, I can easily deny that my computer is conscious; maybe not as easily > as Searle, but still pretty easily. By exactly the same token, I can deny > that you (the reader) are conscious. Since you seem to be buying into Descartes' "evil daemon" argument, allow me to play the devil's advocate. You could deny the consciousness of other beings, but it is extraordinarily difficult to do, because it goes against your survival instincts. You may *claim* that you are the only conscious person in existence but I *dare* you to act on that belief even for a single day. You will find yourself reacting to and anticipating the actions of people around you, no matter how hard you try. If you are brave enough to scientifically test your hypothesis, the easiest experiment to see if someone else is conscious or not is to pinch them. I guarantee you will get some data on which to base your conclusion. > For this thread only, consider me > a solipsist. I hold that only Spencer Campbell brains are capable of > sustaining consciousness. I grant that other human brains are similar to > Spencer Campbell brains, in precisely the same way that a > correctly-programmed digital computer is similar to SC brains, but I reject > the notion that they generate even the most rudimentary of subjective > experiences. From worm to man, *pain* is possibly the most rudimentary of subjective experiences. Therefore I challenge you to pinch every creature you meet for just a single day and you will have your answer. > What logic could you possibly bring to bear on a person with > this set of beliefs? The logic of pain. That which reacts to pain *must* have the subjective experience of pain because there can be no such thing as objective pain or objective suffering of any kind for that matter. > Call me irrational if you will, but you can't > call me illogical. Solipsism is a self-consistent position. I don't want it > to be, so I would appreciate it if someone could show me otherwise, but I > am presently convinced that it is. It is a hypocritical position where a person's arguments and a person's actions are in contradiction. > Have I gone over my eight-post > limit today? I should start keeping track of > that. You need not answer me until you have performed your experiment or chickened out of it. ;-) Stuart LaForge "Never express yourself more clearly than you think."
- Niels Bohr From cluebcke at yahoo.com Mon Feb 15 20:22:42 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Mon, 15 Feb 2010 12:22:42 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: <275598.51562.qm@web36502.mail.mud.yahoo.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> Message-ID: <330027.14333.qm@web111201.mail.gq1.yahoo.com> The question, I believe, is how you determine that somebody (or something) else is "conscious", not how you determine that you yourself are. If there isn't a reproducible, agreed-upon method for determining the truth of the statement "System X is conscious", then can there be any point in having a conversation about it? In other words, don't tell me how to determine if I'm conscious; tell me how to determine that you are. ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Mon, February 15, 2010 12:12:32 PM Subject: Re: [ExI] The alleged existence of consciousness --- On Mon, 2/15/10, Spencer Campbell wrote: > If you're going to bring the scientific method into this, > then the burden is on you to provide an experiment which tests for > the existence of consciousness. Can you see these words, Spencer? If so then you have what I mean by consciousness. -gts From cluebcke at yahoo.com Tue Feb 16 02:58:50 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Mon, 15 Feb 2010 18:58:50 -0800 (PST) Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: References: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> Message-ID: <361849.34294.qm@web111207.mail.gq1.yahoo.com> Indeed, if it's a scientific fact that consciousness arises between the nematode and the human, then consciousness must be detectable. If not, then there are some very nonstandard versions of the terms "scientific" and/or "fact" in play. ----- Original Message ---- From: Spencer Campbell To: ExI chat list Sent: Mon, February 15, 2010 5:09:29 PM Subject: Re: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) Stefano Vaj : > On 15 February 2010 19:43, Spencer Campbell wrote: > > If you're going to bring the scientific method into this, then the > > burden is on you to provide an experiment which tests for the > > existence of consciousness. > > This is an unreasonable demand. The scientific method cannot offer > evidence of something which in somebody's view is not phenomenal by > definition, but is an a priori of his worldview. For reference, the remark that spurred this reaction from me was: Gordon Swobe : > I consider it a scientific fact that consciousness arises between the nematode and the human. To demand that a person provide a scientific basis for what they themselves consider a scientific fact is the very definition* of reasonableness! If he had said "objective fact" instead, I would only have been very dubious about it. It's very close to synonymous, but just fuzzy enough to avoid the fundamental error made by conflating "scientific" with, say, "inarguable". In case it isn't clear: the fact in question is not objective, scientific, or inarguable. Certainly not inarguable. Not on Extropy-Chat, at least. *No not really. I'm being... metaphorical! Yeah, let's go with that.
From stathisp at gmail.com Tue Feb 16 08:35:54 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 16 Feb 2010 19:35:54 +1100 Subject: [ExI] glutamine and life-extension In-Reply-To: <505300.77015.qm@web36506.mail.mud.yahoo.com> References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On 16 February 2010 05:10, Gordon Swobe wrote: > Stathis, > > All this talk of neurons reminds me of a paper I wrote circa 1999: > > Glutamine Based Growth Hormone Releasing Products: A Bad Idea? > http://www.newtreatments.org/loadlocal.php?hid=974 > > My article above created a surprising amount of controversy among life-extensionists. I closed down my website, but recently found my paper republished on the site above without my permission. > > Thought you might find it interesting given your profession and the general theme. Thank you for this; it is not something I knew much about. It's important to know about the *negative* effects of any treatment, and this is often overlooked by those into non-conventional medicine. Of course, it is sometimes overlooked by those prescribing conventional treatments as well. Nevertheless, my bias is to follow conventional medicine, and only rarely does conventional medicine consider that there is enough evidence to recommend treatments with dietary supplements.
A common response to this is that medical professionals are unduly influenced by drug companies, and there may be some truth to that, but I work in the public health system where there is an explicit emphasis on using the cheapest effective treatment: amino acids in bulk are very cheap compared to most drugs, and they would be used if there were good evidence for their efficacy. The other point to make is that much of what doctors do is longevity treatment, even if it isn't seen as that. Preventing heart disease, diabetes, cancer, dementia etc. is equivalent to preventing the patient from getting physiologically old and decrepit and dying early. -- Stathis Papaioannou From stathisp at gmail.com Tue Feb 16 10:56:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 16 Feb 2010 21:56:12 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <963975.24812.qm@web36502.mail.mud.yahoo.com> References: <466012.12578.qm@web111205.mail.gq1.yahoo.com> <963975.24812.qm@web36502.mail.mud.yahoo.com> Message-ID: On 16 February 2010 03:10, Gordon Swobe wrote: > --- On Mon, 2/15/10, Christopher Luebcke wrote: > >> If my understanding of the CRA is >> correct (it may not be), it seems to me that Searle is >> arguing that because one component of the system does not >> understand the symbols, the system doesn't understand the >> symbols. This to me is akin to claiming that because my >> fingers do not understand the words they are currently >> typing out, neither do I. > > Searle speaks for himself: > > My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him. > > Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. > > http://www.bbsonline.org/Preprints/OldArchive/bbs.searle2.html I have proposed the example of a brain which has enough intelligence to know what the neurons are doing: "neuron no. 15,576,456,757 in the left parietal lobe fires in response to noradrenaline, then breaks down the noradrenaline by means of MAO and COMT", and so on, for every brain event. That would be the equivalent of the man in the CR: there is understanding of the low level events, but no understanding of the high level intelligent behaviour which these events give rise to. Do you see how there might be *two* intelligences here, a high level and a low level one, with neither necessarily being aware of the other? 
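To make the two-level picture concrete, here is a toy sketch in Python (illustrative only, with hand-picked weights; it is not a model of real neurochemistry). Each unit applies nothing but its local firing rule, while the network as a whole computes XOR, a fact represented nowhere in any single unit:

    # Each unit knows only: "fire iff my weighted input reaches my threshold".
    def unit(weights, threshold):
        return lambda inputs: int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

    or_unit = unit([1, 1], 1)       # fires if either input fires
    nand_unit = unit([-1, -1], -1)  # fires unless both inputs fire
    and_unit = unit([1, 1], 2)      # fires only if both inputs fire

    def network(a, b):
        # The high-level behaviour (XOR) emerges from the local firings.
        return and_unit([or_unit([a, b]), nand_unit([a, b])])

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, network(a, b))  # prints the XOR truth table

Ask any single unit what it is doing and the answer is only its local rule; ask the network and the answer is XOR. Two true descriptions at two levels, and neither mentions the other.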
-- Stathis Papaioannou From stathisp at gmail.com Tue Feb 16 11:09:22 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 16 Feb 2010 22:09:22 +1100 Subject: [ExI] Newbie Question: Consciousness and Intelligence In-Reply-To: <930797.89044.qm@web36503.mail.mud.yahoo.com> References: <930797.89044.qm@web36503.mail.mud.yahoo.com> Message-ID: On 16 February 2010 00:49, Gordon Swobe wrote: > --- On Mon, 2/15/10, Stathis Papaioannou wrote: > >> You keep repeating this as a fact but you don't explain why >> a digital depiction of a person won't have real mental states. > > If I make a jpeg of you with my digital camera, that digital depiction of you will have no mental states. If I make a digital movie of you on my webcam, that digital depiction of you will have no mental states. A complete three-dimensional animated digital depiction of you made with futuristic digital simulation technology will amount to just another kind of depiction of you, and so it will likewise have no mental states. > > It does not matter whether we create our depictions of things on the walls of caves or on computers. Depictions of things do not equal the things they depict. But the picture has some *properties* of the thing it represents. A statue has other properties, and a computer simulation has other properties still. There is no reason why, unique among all the properties that a human has, "mind" should not be duplicable in anything other than the original substrate. -- Stathis Papaioannou From stathisp at gmail.com Tue Feb 16 11:30:26 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 16 Feb 2010 22:30:26 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <153083.39349.qm@web36503.mail.mud.yahoo.com> References: <153083.39349.qm@web36503.mail.mud.yahoo.com> Message-ID: On 16 February 2010 02:28, Gordon Swobe wrote: >>> You beg the question of non-organic consciousness. As >>> far as we know, "non-organic alien visitors" amounts to a >>> completely meaningless concept. >> >> What?? > > You ask what I would say to non-organic alien visitors, and I suppose you assume those non-organic alien visitors have consciousness. But non-organic consciousness is what is at issue here. I thought you said before that you did not rule out the possibility of non-organic consciousness. If the aliens had clockwork brains, you might allow that they are conscious, but not if they have digital computers as brains. Of course, their technology might be so weird that you would be unable to tell what was clockwork and what was a digital computer, especially if there was a mixture of the two. Nevertheless, even if they are zombies, you can still have a discussion with them once you have figured out each other's language. The aliens would insist that they were conscious, and question whether you were conscious. What could you say to them to convince them otherwise? >>> As for nematodes, I have no idea whether their >>> primitive nervous systems support what I mean by >>> consciousness. I doubt it but I don't know. I classify them >>> in the gray area between unconscious amoebas and conscious >>> humans. >> >> At some point, either gradually or abruptly, consciousness >> will happen in the transition from nematode to human or watch to AI. > > I consider it a scientific fact that consciousness arises between the nematode and the human. But only in science-fiction does consciousness happen in digital watches or digital computers.
I guess you mean that you know that you're conscious - that is your empirical evidence. But the alien visitors would say the same, and there is no test you could do on them (or they on you) to prove consciousness. If they presented you with an argument purporting to prove that organic matter can't have a mind, you would dismiss it out of hand, no matter how clever it was; and similarly they would dismiss any of your arguments purporting to show that they cannot be conscious. -- Stathis Papaioannou From stefano.vaj at gmail.com Tue Feb 16 12:27:04 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Feb 2010 13:27:04 +0100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <742345.22234.qm@web36504.mail.mud.yahoo.com> References: <742345.22234.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c21002160427m18cd353es7e757faf68e8bf77@mail.gmail.com> On 16 February 2010 02:36, Gordon Swobe wrote: > To deny the existence of consciousness one must deny one's own awareness of one's own experience. Some muddled thinkers try to do this. I find it hard to take them seriously. OK, this is your mantra. But the argument that since nobody denies the existence of, say, "entrepreneurial spirit", or the "spirit of the old days", we have to infer that spirits fluctuate around us is not very compelling. Consciousness is a perfectly useful concept to describe the set of phenomena leading us to conclude, e.g., that the perpetrator had not actually been knocked unconscious when the crime was committed. Its "entification" is however a Platonic petitio principii, similar to that maintaining that "Evil exists" in the sense of a horned monster living deep underground. Entia non sunt multiplicanda sine necessitate. -- Stefano Vaj From stefano.vaj at gmail.com Tue Feb 16 12:34:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Feb 2010 13:34:50 +0100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <330027.14333.qm@web111201.mail.gq1.yahoo.com> References: <275598.51562.qm@web36502.mail.mud.yahoo.com> <330027.14333.qm@web111201.mail.gq1.yahoo.com> Message-ID: <580930c21002160434n3e8bfa0cm21b709ed15eae54f@mail.gmail.com> On 15 February 2010 21:22, Christopher Luebcke wrote: > The question, I believe, is how you determine that somebody (or something) else is "conscious", not how you determine that you yourself are. I would go a little farther. Consciousness is a social construct even when it refers to oneself (and, btw, social constructs *do* exist; they simply are not homunculi). And a priori evidence does not really cut it otherwise. Somebody may well be persuaded that he is conscious, and even be subvocalising on the subject, e.g., while dreaming, having a near-death experience, or being under some drugs, even though in fact he or she is not. -- Stefano Vaj From gts_2000 at yahoo.com Tue Feb 16 12:45:56 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 16 Feb 2010 04:45:56 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <534964.96008.qm@web36503.mail.mud.yahoo.com> --- On Tue, 2/16/10, Stathis Papaioannou wrote: > I have proposed the example of a brain which has enough > intelligence to know what the neurons are doing: "neuron no. > 15,576,456,757 in the left parietal lobe fires in response to > noradrenaline, then breaks down the noradrenaline by means of MAO and > COMT", and so on, for every brain event.
That would be the equivalent of > the man in the CR: there is understanding of the low level events, but no > understanding of the high level intelligent behaviour which these events > give rise to. Do you see how there might be *two* intelligences here, a > high level and a low level one, with neither necessarily being aware of > the other? Doesn't matter. If you cannot see yourself understanding the symbols as either the man considered as the program (IN the room or AS a neuron) or as the man considered as the system (AS the room or AS a brain) then Searle has proved his point. And it seems he has proved his point to you, but that you want nevertheless to fabricate some imaginary way around the conclusion. These attempts of yours amount to saying "Suppose that even though Searle is right that the man cannot understand the symbols either as the program or as the system, pink unicorns on the moon do nevertheless understand the symbols." :) -gts From gts_2000 at yahoo.com Tue Feb 16 13:20:00 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 16 Feb 2010 05:20:00 -0800 (PST) Subject: [ExI] The alleged existence of consciousness In-Reply-To: <580930c21002160434n3e8bfa0cm21b709ed15eae54f@mail.gmail.com> Message-ID: <801240.98026.qm@web36505.mail.mud.yahoo.com> --- On Tue, 2/16/10, Stefano Vaj wrote: > Consciousness is a social construct... A social construct? Is digestion a social construct? Seems to me consciousness amounts to just another biological process like digestion. The brain does this thing called consciousness when in the awake state or in the state of sleep+dreaming. It stops doing it temporarily after physical shocks such as blows to the head, or when in the presence of too much alcohol, or when it sleeps but does not dream. It stops for the last time at death. -gts From stathisp at gmail.com Tue Feb 16 13:54:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 00:54:27 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <534964.96008.qm@web36503.mail.mud.yahoo.com> References: <534964.96008.qm@web36503.mail.mud.yahoo.com> Message-ID: On 16 February 2010 23:45, Gordon Swobe wrote: > --- On Tue, 2/16/10, Stathis Papaioannou wrote: > >> I have proposed the example of a brain which has enough >> intelligence to know what the neurons are doing: "neuron no. >> 15,576,456,757 in the left parietal lobe fires in response to >> noradrenaline, then breaks down the noradrenaline by means of MAO and >> COMT", and so on, for every brain event. That would be the equivalent of >> the man in the CR: there is understanding of the low level events, but no >> understanding of the high level intelligent behaviour which these events >> give rise to. Do you see how there might be *two* intelligences here, a >> high level and a low level one, with neither necessarily being aware of >> the other? > > Doesn't matter. If you cannot see yourself understanding the symbols as either the man considered as the program (IN the room or AS a neuron) or as the man considered as the system (AS the room or AS a brain) then Searle has proved his point. > > And it seems he has proved his point to you, but that you want nevertheless to fabricate some imaginary way around the conclusion. These attempts of yours amount to saying "Suppose that even though Searle is right that the man cannot understand the symbols either as the program or as the system, pink unicorns on the moon do nevertheless understand the symbols." 
:) I think you have missed the point: even though we agree that the man who internalises the room has no understanding, this does *not* mean that the system has no understanding. The man's intelligence is only a component of the system even if the man internalises the room. As a general comment, it is normal in philosophical debate to set up a sometimes complex argument, thought experiment, etc. in order to prove a point which the parties disagree on. I might think that the whole CRA and the idea it purports to prove is ridiculous, but it's bad form to just dismiss an argument like that. Instead, I have to pick it apart, show where there are hidden assumptions, or think of a variation which leads to the opposite conclusion. This sometimes leads to the pursuit of what you may consider a minor technical point, while you are eager to return to restating what you consider the big picture. But it is important to pursue these apparently minor technical points, since if they fall, the whole argument falls. That does not necessarily mean the initial proposition was wrong, but it does mean that the particular argument chosen to support it is wrong, and cannot be used any more. -- Stathis Papaioannou From stathisp at gmail.com Tue Feb 16 13:57:53 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 00:57:53 +1100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <801240.98026.qm@web36505.mail.mud.yahoo.com> References: <580930c21002160434n3e8bfa0cm21b709ed15eae54f@mail.gmail.com> <801240.98026.qm@web36505.mail.mud.yahoo.com> Message-ID: On 17 February 2010 00:20, Gordon Swobe wrote: > --- On Tue, 2/16/10, Stefano Vaj wrote: > >> Consciousness is a social construct... > > A social construct? Is digestion a social construct? > > Seems to me consciousness amounts to just another biological process like digestion. The brain does this thing called consciousness when in the awake state or in the state of sleep+dreaming. It stops doing it temporarily after physical shocks such as blows to the head, or when in the presence of too much alcohol, or when it sleeps but does not dream. It stops for the last time at death. But digestion is nothing over and above the chemical breakdown of food in the gut. Similarly, consciousness is nothing over and above the enacting of intelligent behaviour. -- Stathis Papaioannou From sparge at gmail.com Tue Feb 16 15:03:09 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Feb 2010 10:03:09 -0500 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On Tue, Feb 16, 2010 at 3:35 AM, Stathis Papaioannou wrote: > ... Nevertheless, my bias is to follow conventional > medicine, and only rarely does conventional medicine consider that > there is enough evidence to recommend treatments with dietary > supplements. A common response to this is that medical professionals > are unduly influenced by drug companies, and there may be some truth > to that, but I work in the public health system where there is an > explicit emphasis on using the cheapest effective treatment: amino > acids in bulk are very cheap compared to most drugs, and they would be > used if there were good evidence for their efficacy. So who is motivated to research the therapeutic effects of cheap supplements? Drug companies are motivated by the profit potential of a new, exclusive drug. There are no big payoffs in researching cheap supplements.
-Dave From cluebcke at yahoo.com Tue Feb 16 16:15:31 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 16 Feb 2010 08:15:31 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <534964.96008.qm@web36503.mail.mud.yahoo.com> References: <534964.96008.qm@web36503.mail.mud.yahoo.com> Message-ID: <782982.71028.qm@web111207.mail.gq1.yahoo.com> Gordon, if I may ask directly: How do you determine whether someone, or something, besides yourself is conscious? ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Tue, February 16, 2010 4:45:56 AM Subject: Re: [ExI] Semiotics and Computability --- On Tue, 2/16/10, Stathis Papaioannou wrote: > I have proposed the example of a brain which has enough > intelligence to know what the neurons are doing: "neuron no. > 15,576,456,757 in the left parietal lobe fires in response to > noradrenaline, then breaks down the noradrenaline by means of MAO and > COMT", and so on, for every brain event. That would be the equivalent of > the man in the CR: there is understanding of the low level events, but no > understanding of the high level intelligent behaviour which these events > give rise to. Do you see how there might be *two* intelligences here, a > high level and a low level one, with neither necessarily being aware of > the other? Doesn't matter. If you cannot see yourself understanding the symbols as either the man considered as the program (IN the room or AS a neuron) or as the man considered as the system (AS the room or AS a brain) then Searle has proved his point. And it seems he has proved his point to you, but that you want nevertheless to fabricate some imaginary way around the conclusion. These attempts of yours amount to saying "Suppose that even though Searle is right that the man cannot understand the symbols either as the program or as the system, pink unicorns on the moon do nevertheless understand the symbols." :) -gts From jonkc at bellsouth.net Tue Feb 16 16:27:11 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 16 Feb 2010 11:27:11 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <801240.98026.qm@web36505.mail.mud.yahoo.com> References: <801240.98026.qm@web36505.mail.mud.yahoo.com> Message-ID: <4CE641C3-70EE-4B6C-AA81-0EEBA87A36DC@bellsouth.net> Since my last post Gordon Swobe has posted 2 times. > > Seems to me consciousness amounts to just another biological process like digestion. No, Swobe is incorrect even about what things seem like to him. Swobe can explain how digestion came to occur on planet Earth but he has no explanation of how consciousness did. I have an explanation for both, but then unlike Swobe I think intelligent behavior implies consciousness. Swobe says he believes in Evolution but the fact that he thinks intelligence and consciousness are separate things and yet Evolution produced them both is clear proof that the man has no comprehension of how that vitally important process works. > If you cannot see yourself understanding the symbols as either the man considered as the program (IN the room or AS a neuron) or as the man considered as the system (AS the room or AS a brain) then Searle has proved his point.
True, I can not see myself understanding those Chinese symbols, but I can't see myself manipulating them to produce intelligent output either; perhaps something could do that but whatever it is it wouldn't be human. > These attempts of yours amount to saying "Suppose that even though Searle is right that the man cannot understand the symbols either as the program or as the system, pink unicorns on the moon do nevertheless understand the symbols." This shows that not only does Swobe misunderstand Evolution, he doesn't understand understanding either. Swobe seems obsessed with determining the exact spatial coordinates of consciousness, which makes about as much sense as demanding to know where fast is, or the number eleven. John K Clark From stefano.vaj at gmail.com Tue Feb 16 17:13:51 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Feb 2010 18:13:51 +0100 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: References: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> Message-ID: <580930c21002160913l51d29a10sab21e88db547a5fe@mail.gmail.com> On 16 February 2010 02:09, Spencer Campbell wrote: > Stefano Vaj : >> This is an unreasonable demand. The scientific method cannot offer >> evidence of something which in somebody's view is not phenomenal by >> definition, but is an a priori of his worldview. > > For reference, the remark that spurred this reaction from me was: > > Gordon Swobe : >> I consider it a scientific fact that consciousness arises between the nematode and the human. > > To demand that a person provide a scientific basis for what they > themselves consider a scientific fact is the very definition* of > reasonableness! I did not write "unfair", I said "unreasonable". If somebody were to tell you "I consider it a fact that Allah exists", would you take him at his own word? :-) -- Stefano Vaj From max at maxmore.com Tue Feb 16 17:50:20 2010 From: max at maxmore.com (Max More) Date: Tue, 16 Feb 2010 11:50:20 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled Message-ID: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> Interesting: Phil Jones' momentous Q&A with BBC reopens the "science is settled" issues; the emperor is, if not naked, scantily clad, vindicating key skeptic arguments http://wattsupwiththat.com/2010/02/14/phil-jones-momentous-qa-with-bbc-reopens-the-science-is-settled-issues/ Columnist Indur Goklany summarizes: Specifically, the Q-and-As confirm what many skeptics have long suspected: Neither the rate nor magnitude of recent warming is exceptional. There was no significant warming from 1998-2009. According to the IPCC we should have seen a global temperature increase of at least 0.2°C per decade. The IPCC models may have overestimated the climate sensitivity for greenhouse gases, underestimated natural variability, or both. This also suggests that there is a systematic upward bias in the impacts estimates based on these models just from this factor alone. The logic behind attribution of current warming to well-mixed man-made greenhouse gases is faulty. The science is not settled, however unsettling that might be. There is a tendency in the IPCC reports to leave out inconvenient findings, especially in the part(s) most likely to be read by policy makers.
Compare the above to the "orthodox" view: http://www.realclimate.org/index.php/archives/2010/02/daily-mangle/ ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From lacertilian at gmail.com Tue Feb 16 19:07:46 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 16 Feb 2010 11:07:46 -0800 Subject: [ExI] The alleged existence of consciousness In-Reply-To: <948496.39262.qm@web65615.mail.ac4.yahoo.com> References: <742345.22234.qm@web36504.mail.mud.yahoo.com> <948496.39262.qm@web65615.mail.ac4.yahoo.com> Message-ID: The Avantguardian : >Spencer Campbell : >> What logic could you possibly bring to bear on a person with >> this set of beliefs? > > The logic of pain. That which reacts to pain *must* have the subjective experience of pain because there can be no such thing as objective pain or objective suffering of any kind for that matter. The ouch-test! Okay, I'll play your little game. First, it's obvious that false negatives are possible. Even if a reaction to pinching implies consciousness, it does not follow that no reaction implies no consciousness. If I concentrate on it, I can ignore a pinch of virtually any intensity. Even so, it's still scientifically useful in theory. Let's try it out on a Furby! http://www.youtube.com/watch?v=aPgfP5UO8o8 Furbies do not appear to be capable of pain. Perhaps if we programmed one to scream in agony and gave it a linear actuator so that it can propel itself away whenever touched? These sedentary, polysensuously perverse monstrosities are clearly unconscious; yet with the addition of just a few strong negative reactions, we could turn one into the perfect consciousness generator. Too sarcastic? You see my point, anyway. It's trivially easy to reproduce the outward appearance of pain without the corresponding subjective experience. Consciousness does not imply a reaction to pinching. A reaction to pinching does not imply consciousness. Thank you, try again! The Avantguardian : > It is a hypocritical position where a person's arguments and a person's actions are in contradiction. Granted. Hypocrisy does not imply falsehood, you'll note. The Avantguardian : > You need not answer me until you have performed your experiment or chickened out of it. ;-) That's okay, I have time! I'll note that your test would work if we had rayguns that shoot pure ionized pain, but sadly qualia have not yet been weaponized. C'est la vie. From lacertilian at gmail.com Tue Feb 16 20:28:40 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 16 Feb 2010 12:28:40 -0800 Subject: [ExI] The alleged existence of consciousness (was: Semiotics and Computability) In-Reply-To: <580930c21002160913l51d29a10sab21e88db547a5fe@mail.gmail.com> References: <580930c21002151412s6465c9bbnf596dc063b010a4a@mail.gmail.com> <580930c21002160913l51d29a10sab21e88db547a5fe@mail.gmail.com> Message-ID: Stefano Vaj : > Spencer Campbell : >> To demand that a person provide a scientific basis for what they >> themselves consider a scientific fact is the very definition* of >> reasonableness! > > I did not write "unfair", I said "unreasonable". If somebody were to > tell you "I consider it a fact that Allah exists", would you take him > at his own word? :-) I would. I find it completely believable that someone would consider it a fact that Allah exists, and I have no objections to the phrasing.
I don't agree with them, personally, but I have no reason to call their belief logically invalid. The word "unreasonable" means something like "lacking reason", or "irrational". Of course I had some reason to make my ultimatum, otherwise I wouldn't have done so, and if I'm being irrational then I'm not even sane enough to see it. To insist on correct usage of terminology, when that terminology is being twisted just to lend false credibility to a statement, seems perfectly rational to me. I think it was a well-reasoned position, but you could conceivably convince me otherwise; I can be reasoned with. I am nothing if not reasonable. OH WAIT A SMILEY FACE I didn't need to deliver a stern dissertation at all! Your criticism was a mere jibe, some harmless japery, all in good fun. Ha ha. Ha. From cluebcke at yahoo.com Tue Feb 16 20:52:35 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 16 Feb 2010 12:52:35 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> References: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> Message-ID: <354088.25240.qm@web111202.mail.gq1.yahoo.com> I'll continue to rely mainly on what peer-reviewed science has to say on the matter, not the IPCC or the BBC. I do find it sad that suddenly, after months of having his character shat upon, there's a great rush to take something Phil Jones has said as unquestionably accurate. ----- Original Message ---- From: Max More To: Extropy-Chat Sent: Tue, February 16, 2010 9:50:20 AM Subject: [ExI] Phil Jones acknowledging that climate science isn't settled Interesting: Phil Jones' momentous Q&A with BBC reopens the "science is settled" issues; the emperor is, if not naked, scantily clad, vindicating key skeptic arguments http://wattsupwiththat.com/2010/02/14/phil-jones-momentous-qa-with-bbc-reopens-the-science-is-settled-issues/ Columnist Indur Goklany summarizes: Specifically, the Q-and-As confirm what many skeptics have long suspected: Neither the rate nor magnitude of recent warming is exceptional. There was no significant warming from 1998-2009. According to the IPCC we should have seen a global temperature increase of at least 0.2°C per decade. The IPCC models may have overestimated the climate sensitivity for greenhouse gases, underestimated natural variability, or both. This also suggests that there is a systematic upward bias in the impacts estimates based on these models just from this factor alone. The logic behind attribution of current warming to well-mixed man-made greenhouse gases is faulty. The science is not settled, however unsettling that might be. There is a tendency in the IPCC reports to leave out inconvenient findings, especially in the part(s) most likely to be read by policy makers. Compare the above to the "orthodox" view: http://www.realclimate.org/index.php/archives/2010/02/daily-mangle/ ------------------------------------- Max More, Ph.D.
Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From bbenzai at yahoo.com Tue Feb 16 20:36:57 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 16 Feb 2010 12:36:57 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <916006.14487.qm@web113610.mail.gq1.yahoo.com> Jef wrote: > The logic of the CRA is correct. But it reasons from a flawed > premise: That the human organism has this somehow ontologically > special thing called "consciousness." > So restart the music, and the merry-go-round. I'm surprised no one's > mentioned the Giant Look Up Table yet. I was thinking about this very thing (even though I said I'd no longer discuss the dread CRA), and I disagree that the logic is correct. It suffers from a fundamental flaw, as I see it: the (usually unquestioned) assumption that it's possible, even in principle, to have a set of rules that can answer any possible question about a set of data (the 'story'), in a consistently sensible fashion, without having any 'understanding'. Searle just casually tosses this assertion out as though it was obvious that it was possible, when it seems to me to be so unlikely that it's a simply outrageous assumption. Before anyone uses it in an argument, they need to demonstrate that it's possible. Without doing this, any argument based on it is totally invalid. If you think about it, we actually use this principle to test for understanding of a subject. We put people through exams where they're supposed to demonstrate their understanding by answering questions. If the questions are good ones (difficult to anticipate, posing a variety of different problems, etc.), and the answers are good ones (clearly stating how to solve the problems posed), we conclude that the person has demonstrated understanding of the subject. Our education system pretty much depends on this. So why on earth would anyone suggest that exactly the same setup - asking questions about a set of data, and seeing if the answers are correct and consistent - could be used in an argument to claim that understanding is absent? Absurd. Ben From max at maxmore.com Tue Feb 16 21:11:23 2010 From: max at maxmore.com (Max More) Date: Tue, 16 Feb 2010 15:11:23 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled Message-ID: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> >I'll continue to rely mainly on what peer-reviewed science has to >say on the matter, not the IPCC or the BBC. The peer-reviewed science (which, as has been clear for some time, is not always a guarantee of accuracy), does not all agree. So that doesn't settle the issue. >I do find it sad that suddenly, after months of having his character >shat upon, there's a great rush to take something Phil Jones has >said as unquestioningly accurate. Which of the things Jones is saying do you think are inaccurate? What I thought was a little encouraging was some small sign of uncertainty from one of those representing apparent certainty concerning a matter revolving around unreliable models. Perhaps, one day, more economists and risk managers will also become a little more modest and less dogmatic regarding their clearly non-scientific discipline.
Max From stathisp at gmail.com Tue Feb 16 21:38:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 08:38:12 +1100 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On 17 February 2010 02:03, Dave Sill wrote: > On Tue, Feb 16, 2010 at 3:35 AM, Stathis Papaioannou wrote: >> ... Nevertheless, my bias is to follow conventional >> medicine, and only rarely does conventional medicine consider that >> there is enough evidence to recommend treatments with dietary >> supplements. A common response to this is that medical professionals >> are unduly influenced by drug companies, and there may be some truth >> to that, but I work in the public health system where there is an >> explicit emphasis on using the cheapest effective treatment: amino >> acids in bulk are very cheap compared to most drugs, and they would be >> used if there were good evidence for their efficacy. > > So who is motivated to research therapeutic effects of cheap > supplements? Drug companies are motivated by the profit potential of a > new, exclusive drug. There are no big payoffs in researching cheap > supplements. Medical researchers, usually publicly or not-for-profit funded. -- Stathis Papaioannou From cluebcke at yahoo.com Tue Feb 16 22:25:00 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 16 Feb 2010 14:25:00 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> Message-ID: <412543.58639.qm@web111214.mail.gq1.yahoo.com> > The peer-reviewed science (which, as has been clear for some time, is not always a guarantee of accuracy), does not all agree. So that doesn't settle the issue. No, but almost all of it supports the positions that the Earth has been warming over the last century, that the warming has primarily been caused by mankind's introduction of greenhouse gasses into the atmosphere, and that this warming trend will continue, with projected results over the next 100 years ranging, roughly, from pretty bad to catastrophic in terms of human suffering. Naturally there are scientists who take a minority view of this, but the number of qualified climatologists taking the minority view is quite small. This does not mean that they are incorrect, but they're likely to be. I feel the same way about this as I do about the Big Bang theory and its modest, dwindling competitor, plasma cosmology. > Which of the things Jones is saying do you think are inaccurate? I'm actually not deeply interested in what Phil Jones has to say on the matter, outside (once again) of his scientific work. I just find it sadly ironic, and a dark reflection on the petty, fiercely partisan, politicized nature of what should in fact be a sober, scientific discussion, that a man accused (in the court of political opinion) of fraud should suddenly be taken at his word by those same accusers, when that word apparently agrees with their position. I'm not, of course, accusing you (Max) of doing so. But the whole affair just saddens me. > What I thought was a little encouraging was some small sign of uncertainty from one of those representing apparent certainty concerning a matter revolving around unreliable models. Perhaps, one day, more economists and risk managers will also become a little more modest and less dogmatic regarding their clearly non-scientific discipline.
I don't disagree, and from what I've read Phil Jones is somewhat less than the perfect scientist. But the whole swirl of activity around this indicates a personalization of the subject of AGW to Phil Jones, or CRU, when in fact there are thousands of scientists, thousands of peer-reviewed papers, and dozens if not hundreds of organizations working on this problem. The AGW-denying crowd's recent focus on Phil Jones reminds me of nothing so much as creationists' pathological obsession with, and hatred of, Charles Darwin (and I find little surprise that the two groups share a large intersection). From max at maxmore.com Wed Feb 17 01:03:24 2010 From: max at maxmore.com (Max More) Date: Tue, 16 Feb 2010 19:03:24 -0600 Subject: [ExI] Shifting demand suggests a future of endless oil Message-ID: <201002170103.o1H13W90026666@andromeda.ziaspace.com> MSNBC, which is usually home to the usual catastrophist messages, has a story that reflects information that's been around for a while, suggesting a non-disastrous energy shift ahead: Shifting demand suggests a future of endless oil http://www.msnbc.msn.com/id/34770285/ns/business-oil_and_energy/ And another hopeful sign (although I'm no fan of any kind of government subsidy -- I'd rather reduce artificially-imposed costs on nuclear energy): Obama renews commitment to nuclear energy http://www.msnbc.msn.com/id/35421517/ns/business-oil_and_energy/ Max From sparge at gmail.com Wed Feb 17 01:13:51 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Feb 2010 20:13:51 -0500 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On Tue, Feb 16, 2010 at 4:38 PM, Stathis Papaioannou wrote: > On 17 February 2010 02:03, Dave Sill wrote: >> >> So who is motivated to research therapeutic effects of cheap >> supplements? Drug companies are motivated by the profit potential of a >> new, exclusive drug. There are no big payoffs in researching cheap >> supplements. > > Medical researchers, usually publicly or not-for-profit funded. That was sort of a rhetorical question, but since you answered it, I have to point out that there are multiple orders of magnitude of difference in funding between public/non-profit and for-profit. The drug companies are frantically inventing new stuff, spending billions on R&D.
Compare that to what's spent on supplement research. I think it's highly likely that there are effective therapeutic uses of supplements that aren't being investigated due to lack of funding. -Dave From stathisp at gmail.com Wed Feb 17 01:29:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 12:29:47 +1100 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On 17 February 2010 12:13, Dave Sill wrote: > On Tue, Feb 16, 2010 at 4:38 PM, Stathis Papaioannou wrote: >> On 17 February 2010 02:03, Dave Sill wrote: >>> >>> So who is motivated to research therapeutic effects of cheap >>> supplements? Drug companies are motivated by the profit potential of a >>> new, exclusive drug. There are no big payoffs in researching cheap >>> supplements. >> >> Medical researchers, usually publicly or not-for-profit funded. > > That was sort of a rhetorical question, but since you answered it, I > have to point out that there are multiple orders of magnitude of > difference in funding between public/non-profit and for-profit. The > drug companies are frantically inventing new stuff, spending billions > on R&D. Compare that to what's spent on supplement research. > > I think it's highly likely that there are effective therapeutic uses > of supplements that aren't being investigated due to lack of funding. I can't easily find actual figures but I think in the world as a whole most medical research is publicly funded. The purpose of publicly funded research is to discover things that are interesting or useful, which is not always the same as discovering things that can be sold for a lot of money. Drug companies generally won't spend money researching dietary supplements or doing basic research because they have nothing to gain from it, even though society does. If public researchers do not think it is worthwhile investigating something then it is probably because they think it is unlikely it will yield useful results. -- Stathis Papaioannou From sparge at gmail.com Wed Feb 17 02:08:10 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Feb 2010 21:08:10 -0500 Subject: [ExI] glutamine and life-extension In-Reply-To: References: <505300.77015.qm@web36506.mail.mud.yahoo.com> Message-ID: On Tue, Feb 16, 2010 at 8:29 PM, Stathis Papaioannou wrote: > > I can't easily find actual figures but I think in the world as a whole > most medical research is publicly funded. Most *medical* research? Maybe. Most *drug* research? No way. > The purpose of publicly > funded research is to discover things that are interesting or useful, > which is not always the same as discovering things that can be sold > for a lot of money. Drug companies generally won't spend money > researching dietary supplements or doing basic research because they > have nothing to gain from it, even though society does. If public > researchers do not think it is worthwhile investigating something then > it is probably because they think it is unlikely it will yield useful > results. Exactly. That was the point of my first posting in this thread.
-Dave From alfio.puglisi at gmail.com Wed Feb 17 02:36:27 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 03:36:27 +0100 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> References: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> Message-ID: <4902d9991002161836w487577b5iefaac10dc3ba51a4@mail.gmail.com> On Tue, Feb 16, 2010 at 6:50 PM, Max More wrote: > Interesting: > > Phil Jones momentous Q&A with BBC reopens the "science is settled" issues: > emperor is, if not naked, scantily clad, vindicating key skeptic arguments > > http://wattsupwiththat.com/2010/02/14/phil-jones-momentous-qa-with-bbc-reopens-the-science-is-settled-issues/ > > Columnist Indur Goklany summarizes: > > Specifically, the Q-and-As confirm what many skeptics have long suspected: > Neither the rate nor magnitude of recent warming is exceptional. > There was no significant warming from 1998-2009. According to the IPCC we > should have seen a global temperature increase of at least 0.2°C per decade. > The IPCC models may have overestimated the climate sensitivity for > greenhouse gases, underestimated natural variability, or both. > This also suggests that there is a systematic upward bias in the impacts > estimates based on these models just from this factor alone. > The logic behind attribution of current warming to well-mixed man-made > greenhouse gases is faulty. > The science is not settled, however unsettling that might be. > There is a tendency in the IPCC reports to leave out inconvenient findings, > especially in the part(s) most likely to be read by policy makers. > Reading the article, you discover that most of those points are just made up by the columnist. Why do you think that they are interesting? Alfio From femmechakra at yahoo.ca Wed Feb 17 03:58:41 2010 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 16 Feb 2010 19:58:41 -0800 (PST) Subject: [ExI] Consciouness and paracrap Message-ID: <517167.25357.qm@web110408.mail.gq1.yahoo.com> I know I'm simple minded but I don't understand why consciousness is such a philosophical debate. I wonder why science sometimes complicates things. Let's say hypothetically I could change the definitions. (This is against academic rules but if Gordon gets to play master of definitions why shouldn't I:). Let's say the word conscious meant alive versus dead. Anything that is conscious is alive and awake. Could everyone agree on that? The problem lies within determining the levels of consciousness. Does a worm possess consciousness? Will an AI? I'm rather curious to understand why the scientific community is taboo against using the term the "subconscious" as it would be much easier to explain if they did. What if the subconscious is Darwin's Theory of Evolution? The basic instinct for survival. A tree requires many things to keep it alive but it doesn't know it. It depends on the sun, the earth, the rain and it will continue to grow or it will die. We need trees and they are part of the evolutionary process. I would say they are subconsciously alive. In humans it's the instinct to take your hand off a hot stove. The embedded codes that evolution has installed. A person who is under anaesthesia should therefore be conscious and subconsciously alive. The person requires oxygen, food and water yet has no idea.
This would then mean that consciousness and awareness go hand in hand. What if consciousness is to be awake allowing the memory's capability of recalling, extracting and processing information while awareness is intelligence, experience and sense? A worm doesn't live in a subconscious state, it recalls, extracts and processes information but does it recall the experience or have a sense as to why it does the things it does? Shouldn't this be an underlying question? If we knew the worm felt something when we poked it with a knife would we declare it "aware"? I know my cat is aware because once he decided to stick his head too far into a bottle, he never did it again. I think to be human is to have awareness as well as consciousness. I believe a strong as well as weak AI will not be conscious or have any subconscious (well at least until technology merges with biology then at least that will be a great philosophical discussion) but only strong AI will have consciousness and somewhat awareness. What is scary about Strong AI is that it may have the maximum capacity of intelligence yet have no experience or sense. We had better hope that the programmer is fully aware. Stathis Papaioannou stathisp at gmail.com Sun Dec 13 23:08:02 UTC 2009 questioned gts_2000 at yahoo.com >To address the strong AI / weak AI distinction I put to you a question you haven't yet answered: what do you think would happen if part of your brain, say your visual cortex, were replaced with components that behaved normally in their interaction with the remaining biological neurons, but lacked the essential ingredient for consciousness? My observation: Well my contacts work fine. My memory would recall, extract and process the information. If the contacts are too weak and I can't see then yes one of my awareness factors would be limited but it would not stop me from being conscious or have consciousness. Btw...Even if all my crazy posts don't amount to anything I have to say that the Extropy Chat creates a whirl of imagination. I can read something that may lead me to investigate something truly beneficial to my understanding. Thanks. Ok back to music... Anna:) From emlynoregan at gmail.com Wed Feb 17 05:37:24 2010 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 17 Feb 2010 16:07:24 +1030 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <4902d9991002161836w487577b5iefaac10dc3ba51a4@mail.gmail.com> References: <201002161818.o1GIIWkS006446@andromeda.ziaspace.com> <4902d9991002161836w487577b5iefaac10dc3ba51a4@mail.gmail.com> Message-ID: <710b78fc1002162137g524be15ai31739f5cc1769949@mail.gmail.com> 2010/2/17 Alfio Puglisi : > > > On Tue, Feb 16, 2010 at 6:50 PM, Max More wrote: >> >> Interesting: >> >> Phil Jones momentous Q&A with BBC reopens the "science is settled" issues: >> emperor is, if not naked, scantily clad, vindicating key skeptic arguments >> >> http://wattsupwiththat.com/2010/02/14/phil-jones-momentous-qa-with-bbc-reopens-the-science-is-settled-issues/ >> >> Columnist Indur Goklany summarizes: >> >> Specifically, the Q-and-As confirm what many skeptics have long suspected: >> Neither the rate nor magnitude of recent warming is exceptional. >> There was no significant warming from 1998-2009.
According to the IPCC we >> should have seen a global temperature increase of at least 0.2°C per decade. >> The IPCC models may have overestimated the climate sensitivity for >> greenhouse gases, underestimated natural variability, or both. >> This also suggests that there is a systematic upward bias in the impacts >> estimates based on these models just from this factor alone. >> The logic behind attribution of current warming to well-mixed man-made >> greenhouse gases is faulty. >> The science is not settled, however unsettling that might be. >> There is a tendency in the IPCC reports to leave out inconvenient >> findings, especially in the part(s) most likely to be read by policy makers. > > Reading the article, you discover that most of those points are just made up > by the columnist. Why do you think that they are interesting? > > Alfio > I concur. This is largely commentary by the columnist, not what actually transpired in the interview. I haven't read the whole thing in detail, but one early piece stuck out like a sore thumb, which is the old saw that there is no significant warming since 1998. That's just flat out wrong, and it's wrong because 1998 itself stuck out like a sore thumb; it was a statistical anomaly, which anyone who wasn't being entirely disingenuous would agree with. Here's a discussion of that issue, along with graphs showing the 1998 sore thumb: http://scienceblogs.com/illconsidered/2006/04/warming-stopped-in-1998.php -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From stathisp at gmail.com Wed Feb 17 10:27:15 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 17 Feb 2010 21:27:15 +1100 Subject: [ExI] Consciouness and paracrap In-Reply-To: <517167.25357.qm@web110408.mail.gq1.yahoo.com> References: <517167.25357.qm@web110408.mail.gq1.yahoo.com> Message-ID: On 17 February 2010 14:58, Anna Taylor wrote: > I know I'm simple minded but I don't understand why consciousness is such > a philosophical debate. I wonder why science sometimes complicates > things. > > Let's say hypothetically I could change the definitions. (This is > against academic rules but if Gordon gets to play master of definitions > why shouldn't I:). Let's say the word conscious meant alive versus > dead. Anything that is conscious is alive and awake. Could everyone > agree on that? > The problem lies within determining the levels of consciousness. > Does a worm possess consciousness? Will an AI? > > I'm rather curious to understand why the scientific community is > taboo against using the term the "subconscious" as it would be much > easier to explain if they did. What if the subconscious is Darwin's > Theory of Evolution? The basic instinct for survival. A tree > requires many things to keep it alive but it doesn't know it. > It depends on the sun, the earth, the rain and it will continue to > grow or it will die. We need trees and they are part of the > evolutionary process. I would say they are subconsciously alive. > In humans it's the instinct to take your hand off a hot stove. > The embedded codes that evolution has installed. > > A person who is under anaesthesia should therefore be conscious and > subconsciously alive. The person requires oxygen, food and water yet > has no idea. This would then mean that consciousness and awareness > go hand in hand.
> > What if consciousness is to be awake allowing the memory's capability > of recalling, extracting and processing information while awareness is > intelligence, experience and sense? A worm doesn't live in a > subconscious state, it recalls, extracts and processes information > but does it recall the experience or have a sense as to why it does > the things it does? Shouldn't this be an underlying question? If we > knew the worm felt something when we poked it with a knife would we > declare it "aware"? I know my cat is aware because once he decided to > stick his head too far into a bottle, he never did it again. I think > to be human is to have awareness as well as consciousness. > > I believe a strong as well as weak AI will not be conscious or > have any subconscious (well at least until technology merges with > biology then at least that will be a great philosophical discussion) > but only strong AI will have consciousness and somewhat awareness. > What is scary about Strong AI is that it may have the maximum capacity > of intelligence yet have no experience or sense. We had better hope > that the programmer is fully aware. > > Stathis Papaioannou stathisp at gmail.com > Sun Dec 13 23:08:02 UTC 2009 questioned gts_2000 at yahoo.com > >>To address the strong AI / weak AI distinction I put to you a > question you haven't yet answered: what do you think would happen > if part of your brain, say your visual cortex, were replaced with components that behaved normally in their interaction with the > remaining biological neurons, but lacked the essential ingredient for > consciousness? > > My observation: > Well my contacts work fine. My memory would recall, extract and process > the information. If the contacts are too weak and I can't see then > yes one of my awareness factors would be limited but it would not stop > me from being conscious or have consciousness. > > Btw...Even if all my crazy posts don't amount to anything I have to > say that the Extropy Chat creates a whirl of imagination. I can read > something that may lead me to investigate something truly beneficial > to my understanding. Thanks. Ok back to music... Anna, here are some definitions that I use: Consciousness - hard to define, but if you have it you know it; Strong AI - an artificial intelligence that is both intelligent and conscious; Weak AI - an artificial intelligence that is intelligent but lacks consciousness; Philosophical zombie - same as weak AI. Several people have commented that we need a definition of consciousness to proceed, but I disagree. I think everyone knows what is meant by the word and so we can have a complete discussion without at any point defining it. For those who say that consciousness does not really exist: consciousness is that thing you are referring to when you say that consciousness does not really exist. With the brain replacement experiment, the idea is that the visual cortex is where visual perceptions (visual experiences/ consciousness/ qualia) occur. If your visual cortex is destroyed then you are blind, even if your eyes and optic nerve are intact. When you see something and describe it, information goes from your eyes to your visual cortex, from your visual cortex to your speech centre, and finally from your speech centre to your vocal cords. The question is, what would happen if your visual cortex were replaced with an artificial part that sent the same signals to the rest of your brain in response to signals from your eyes, but lacked visual perception?
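A toy sketch may make the stipulation concrete. This is purely illustrative, in Python; the two functions below are hypothetical stand-ins for any pair of components with identical input-output behaviour, which is all the experiment assumes:

def biological_cortex(retinal_signal):
    # stands in for whatever the real tissue does, experience included
    return "percept:" + retinal_signal

def artificial_cortex(retinal_signal):
    # same input-output mapping by stipulation; whether experience
    # accompanies it is exactly the question at issue
    return "percept:" + retinal_signal

def speech_centre(cortex_output):
    # downstream parts receive only signals, never experiences
    return "I see a " + cortex_output.split(":", 1)[1]

for cortex in (biological_cortex, artificial_cortex):
    print(speech_centre(cortex("red apple")))  # same sentence both times

Because everything downstream of the swap receives identical signals, no difference could ever show up in the person's behaviour or reports.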
By definition, you would see nothing; but also by definition, you would describe everything put in front of you correctly and you would claim and honestly believe that you could see normally. How could you be completely blind but not notice you were blind and behave as if you had normal vision? And if you think that is a coherent state of affairs, how do you know you are not currently blind? The purpose of the above is to show that it is impossible (logically impossible, not just physically impossible) to make a brain part, and hence a whole brain, that behaves exactly like a biological brain but lacks consciousness. Either it isn't possible to make such an artificial component at all, or else it is possible to make such a component but it will necessarily also have consciousness. The alternative is to say that you're happy with the idea that you may be blind, deaf, unable to understand English etc. but neither you nor anyone else has noticed. Gordon Swobe's response is that this thought experiment is ridiculous and I should come up with another one that doesn't challenge the self-evident fact that digital computers cannot be conscious. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Feb 17 14:05:51 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 17 Feb 2010 06:05:51 -0800 (PST) Subject: [ExI] Semiotics and Computability Message-ID: <579192.36870.qm@web36507.mail.mud.yahoo.com> --- On Tue, 2/16/10, Christopher Luebcke wrote: > Gordon, if I may ask directly: How do you determine whether someone, or > something, besides yourself is conscious? I believe we know enough about the brain and nervous system to infer the existence of subjective experience in other animals that have the same kind of apparatus. We see that other people and other primates and certain other animals have nervous systems very much like ours, eyes and skin and noses and ears very much like ours, and so on, and from these physiological facts in combination with their behaviors and reports of subjective experiences we can infer with near certainty that they do in fact have subjective experiences. -gts From gts_2000 at yahoo.com Wed Feb 17 14:45:31 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 17 Feb 2010 06:45:31 -0800 (PST) Subject: [ExI] glutamine and life-extension In-Reply-To: Message-ID: <836720.57934.qm@web36507.mail.mud.yahoo.com> --- On Tue, 2/16/10, Stathis Papaioannou wrote: >> All this talk of neurons reminds me of a paper I wrote >> circa 1999: >> >> Glutamine Based Growth Hormone Releasing Products: A >> Bad Idea? >> http://www.newtreatments.org/loadlocal.php?hid=974 >> >> My article above created a surprising amount of >> controversy among life-extensionists. I closed down my >> website, but recently found my paper republished on the site >> above without my permission. >> >> Thought you might find it interesting given your >> profession and the general theme. > > Thank-you for this, it is not something I knew much about. You're welcome. > It's important to know about the *negative* effects of any > treatment, and this is often overlooked by those into non-conventional I agree completely. You might find it surprising how much controversy my paper above created. At the time (1999-2000) some people interested in life-extension considered megadoses of glutamine as a means for delaying the effects of aging. My paper exposed the risks associated with such megadosing. I took my research over to the LEF forum. 
As I describe in the article, they put a temporary moratorium on the subject of glutamine in their online forum (no surprise there: LEF sells the stuff) until they could get their facts straight. To LEF's credit, they took my findings seriously enough to stop promoting megadosing of glutamine on their forum, and invited me back to educate people about the risks. A small victory for me. As you can see I'm no stranger to controversy. :-) -gts From cetico.iconoclasta at gmail.com Wed Feb 17 16:27:34 2010 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado (CI)) Date: Wed, 17 Feb 2010 14:27:34 -0200 Subject: [ExI] QET References: <8885036.1601541266266257004.JavaMail.defaultUser@defaultHost> Message-ID: <013201caafee$1dec8ec0$fd00a8c0@cpdhemm> No idea about this specific process of Dr. Hotta. But it seems to me that the strange nature of quantum mechanical phenomena, and in particular quantum non-locality and quantum non-separability, could be easily extended - at least heuristically - to different contexts (i.e. gravitational fields) to get new and relevant results (i.e. non-local gravitational fields). See i.e. Adrian Kent here http://arxiv.org/abs/gr-qc/0507045 By non-local gravitational fields you mean putting gravity on a spaceship for instance? Now that would be interesting. From natasha at natasha.cc Wed Feb 17 16:55:51 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Wed, 17 Feb 2010 10:55:51 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <412543.58639.qm@web111214.mail.gq1.yahoo.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> Message-ID: <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> Christopher Luebcke writes: "... I just find it sadly ironic, and a dark reflection on the petty, fiercely partisan, politicized nature of what should in fact be a sober, scientific discussion, that a man accused (in the court of political opinion) of fraud should suddenly be taken at his word by those same accusers, when that word apparently agrees with their position." Many of us find it frustrating. Nevertheless the biggest problem and disappointment is that, while everyone seems to be annoyed and saddened, there is a lack of intelligent communication between all parties in viewing the situation from diverse perspectives. Max is working to develop a level ground for discussion. Bravo to him. Best, Natasha From rafal.smigrodzki at gmail.com Wed Feb 17 16:59:10 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 17 Feb 2010 11:59:10 -0500 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <412543.58639.qm@web111214.mail.gq1.yahoo.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> Message-ID: <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> On Tue, Feb 16, 2010 at 5:25 PM, Christopher Luebcke wrote: >> The peer-reviewed science (which, as has been clear for some time, is not always a guarantee of accuracy), does not all agree. So that doesn't settle the issue. > > No, but almost all of it supports the positions that the Earth has been warming over the last century, that the warming has primarily been caused by mankind's introduction of greenhouse gasses into the atmosphere, and that this warming trend will continue, with projected results over the next 100 years ranging, roughly, from pretty bad to catastrophic in terms of human suffering.
### Did you ever read any of this peer-reviewed literature? Most likely not, since if you did (as I did), you wouldn't have written the paragraph. In fact, only a minority of peer-reviewed literature actively endorses the statements you made, and most of this has been produced by environmental activists who infiltrated CRU, GISS, and NOAA. Give me a reference to a peer-reviewed primary research paper showing manmade global warming and I'll give you two disagreeing with it. Rafal From pharos at gmail.com Wed Feb 17 17:05:48 2010 From: pharos at gmail.com (BillK) Date: Wed, 17 Feb 2010 17:05:48 +0000 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> Message-ID: On Wed, Feb 17, 2010 at 4:55 PM, Natasha Vita-More wrote: > Many of us find it frustrating. Nevertheless the biggest problem and > disappointment is that, while everyone seems to be annoyed and saddened, there > is a lack of intelligent communication between all parties in viewing the > situation from diverse perspectives. > > Max is working to develop a level ground for discussion. Bravo to him. > > The discussion is over and science has lost. Once the argument was moved to lobbying and politics and PR campaigns then Big Carbon can do this sooooo much better than scientists that science was swept aside. So, carry on as normal for the big corporations. Once New York floods, then there will be more big profits to be made. BillK From cluebcke at yahoo.com Wed Feb 17 16:44:15 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 08:44:15 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <579192.36870.qm@web36507.mail.mud.yahoo.com> References: <579192.36870.qm@web36507.mail.mud.yahoo.com> Message-ID: <218661.53873.qm@web111208.mail.gq1.yahoo.com> Could one detect or measure consciousness on the basis of "behaviors and reports of subjective experiences" alone, without direct anatomical knowledge of a "brain and nervous system"? Conversely, does a system with a "brain and nervous system" necessarily have consciousness, even in the absence of "behaviors and reports of subjective experience"?
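To put those two questions in schematic form, here is a toy enumeration. The rule below is my own hypothetical rendering of the physiology-plus-behaviour criteria under discussion, not anyone's stated position:

def infer_consciousness(brainlike_physiology, behaviour_and_reports):
    # hypothetical rule: infer subjective experience only when both
    # kinds of evidence are present
    return brainlike_physiology and behaviour_and_reports

for physiology in (True, False):
    for behaviour in (True, False):
        print(physiology, behaviour, "->", infer_consciousness(physiology, behaviour))

The two mixed rows are exactly the cases asked about: behaviour without the physiology (a convincing robot) and physiology without the behaviour (say, under anaesthesia) both come out False under such a rule.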
From natasha at natasha.cc Wed Feb 17 17:19:11 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Wed, 17 Feb 2010 11:19:11 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com><412543.58639.qm@web111214.mail.gq1.yahoo.com><81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> Message-ID: <26B286FC724A4FF2A04AAF4F353B882B@DFC68LF1> BillK wrote: > Many of us find it frustrating. Nevertheless the biggest problem and > disappointment is that, while everyone seems to be annoyed and saddened, > there is a lack of intelligent communication between all parties in > viewing the situation from diverse perspectives. > > Max is working to develop a level ground for discussion. Bravo to him. "The discussion is over and science has lost. Once the argument was moved to lobbying and politics and PR campaigns then Big Carbon can do this sooooo much better than scientists that science was swept aside. So, carry on as normal for the big corporations. Once New York floods, then there will be more big profits to be made." Science has not lost because science is not a fait accompli. Just because we lack a skill set for finding solutions to the problem does not mean anyone wins. Natasha From x at extropica.org Wed Feb 17 17:29:34 2010 From: x at extropica.org (x at extropica.org) Date: Wed, 17 Feb 2010 09:29:34 -0800 Subject: [ExI] Being No One: The Self-Model Theory of Subjectivity Message-ID: On Sun Nov 23 17:39:04 UTC 2003, Jef wrote: > I received my copy of this book from Amazon and it exceeded my expectations. > The first chapter describes what I think are the key issues today in > undertanding the illusion of self that colors so much of the thinking and > discussion on this list. The text is dense, not for the casual reader. >Highly recommended for those seeking a wider view of consciousness that > encompasses the "paradoxes" of qualia and subjectivity and David Chalmers' > so-called "hard problem of consciousness." > > - Jef [Note that I'm replying to a thread on this list from 2003.] For those *serious* participants in this discussion who for some reason aren't already familiar with Metzinger's work, there is now an easy one hour introduction by video available at http://www.youtube.com/watch?v=mthDxnFXs9k. > > > Human Nature Review wrote: >> Human Nature Review 2003 Volume 3: 450-454 ( 17 November ) >> URL of this document http://human-nature.com/nibbs/03/metzinger.html >> >> Book Review >> >> Being No One: The Self-Model Theory of Subjectivity >> by Thomas Metzinger >> MIT Press, (2003), pp. 699, ISBN: 0-262-13417-9 >> >> Reviewed by Marcello Ghin. >> >> The notion of consciousness has been suspected of being too vague for >> being a topic of scientific investigation. Recently, consciousness >> has become more interesting in the light of new neuroscientific >> imaging studies. Scientists from all over the world are searching for >> neural correlates of consciousness. However, finding the neural basis >> is not enough for a scientific explanation of conscious experience. >> After all, we are still facing the 'hard problem', as David Chalmers >> dubbed it: why are those neural processes accompanied by conscious >> experience at all?
Maybe we can reformulate the question in this way: >> Which constraints does a system have to satisfy in order to generate >> conscious experience? Being No One is an attempt to give an answer to >> the latter question. To be more precise: it is an attempt to give an >> answer to the question of how information processing systems generate >> the conscious experience of being someone. >> >> We all experience ourselves as being someone. For example, at this >> moment you will have the impression that it is you who is actually >> reading this review. And it is you who is forming thoughts about it. >> Could it be otherwise? Could I be wrong about what I myself am >> experiencing? Our daily experiences make us think that we are someone >> who is experiencing the world. We commonly refer to this phenomenon >> by speaking of the 'self'. Metzinger claims that no such things as >> 'selves' exist in the world. All that exists are phenomenal >> self-models, that is continuously updated dynamic >> self-representational processes of biological organisms. Conscious >> beings constantly confuse themselves with the content of their actual >> phenomenal self-model, thinking that they are identical with a self. >> According to Metzinger, this is due to the nature of the >> representational process generating the self-model. The self-model is >> mostly transparent - the information that it is a model is not >> carried on the level of content - we are looking through it, having >> the impression of being in direct contact with our own body and the >> world. If you are now thinking that this idea is at least >> counterintuitive, you should read Being No One and find out why it is >> counterintuitive, and yet that there are good reasons to believe that >> it is correct. >> >> Full text >> http://human-nature.com/nibbs/03/metzinger.html >> >> Being No One: The Self-Model Theory of Subjectivity >> by Thomas Metzinger (Author) >> Hardcover: 584 pages ; Dimensions (in inches): 1.56 x 9.25 x 7.32 >> Publisher: MIT Press; (January 24, 2003) ISBN: 0262134179 >> AMAZON - US >> http://www.amazon.com/exec/obidos/ASIN/0262134179/darwinanddarwini/ >> AMAZON - UK >> http://www.amazon.co.uk/exec/obidos/ASIN/0262134179/humannaturecom/ >> >> Editorial Reviews >> Book Info >> Johannes Gutenberg-Universität, Mainz, Germany. Text introduces two >> theoretical entities that may form the decisive conceptual link >> between first-person and third-person approaches to the conscious >> mind. Explores evolutionary roots of intersubjectivity, artificial >> subjectivity, and future connections between philosophy of mind and >> ethics. >> >> Book Description >> According to Thomas Metzinger, no such things as selves exist in the >> world: nobody ever had or was a self. All that exists are phenomenal >> selves, as they appear in conscious experience. The phenomenal self, >> however, is not a thing but an ongoing process; it is the content of >> a "transparent self-model." In Being No One, Metzinger, a German >> philosopher, draws strongly on neuroscientific research to present a >> representationalist and functional analysis of what a consciously >> experienced first-person perspective actually is. Building a bridge >> between the humanities and the empirical sciences of the mind, he >> develops new conceptual toolkits and metaphors; uses case studies of >> unusual states of mind such as agnosia, neglect, blindsight, and >> hallucinations; and offers new sets of multilevel constraints for the >> concept of consciousness.
Metzinger's central question is: How >> exactly does strong, consciously experienced subjectivity emerge out >> of objective events in the natural world? His epistemic goal is to >> determine whether conscious experience, in particular the experience >> of being someone that results from the emergence of a phenomenal >> self, can be analyzed on subpersonal levels of description. He also >> asks if and how our Cartesian intuitions that subjective experiences >> as such can never be reductively explained are themselves ultimately >> rooted in the deeper representational structure of our conscious >> minds. Metzinger introduces two theoretical entities--the "phenomenal >> self-model" and the "phenomenal model of the intentionality >> relation"--that may form the decisive conceptual link between >> first-person and third-person approaches to the conscious mind and >> between consciousness research in the humanities and in the sciences. >> He also discusses the roots of intersubjectivity, artificial >> subjectivity (the issue of nonbiological phenomenal selves), and >> connections between philosophy of mind and ethics. >> >> Human Nature Review http://human-nature.com >> Evolutionary Psychology http://human-nature.com/ep >> Human Nature Daily Review http://human-nature.com/nibbs From cluebcke at yahoo.com Wed Feb 17 17:33:16 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 09:33:16 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> Message-ID: <154647.10956.qm@web111212.mail.gq1.yahoo.com> Let me just add: I am not a climatologist, and therefore even if I had read a wide variety of peer-reviewed papers on the subject, I would not be qualified to determine whether I had made an accurate sampling, much less judge the papers on their merits. My claim comes in fact from paying attention to those organizations who are responsible for gathering and summarizing professional research and judgement on the subject: Not just IPCC, but NAS, AMS, AGU and AAAS are all organizations, far as I can tell, that are both qualified to comment on the matter, and who have supported the general position that AGW is real. If you are going to dismiss well-respected scientific bodies that hold positions contrary to your own as necessarily having been "infiltrated by environmental activists", then it is incumbent upon you to provide some evidence that such infiltration by such people has actually taken place. I wonder if it wouldn't be possible to disagree with their positions, though, without also presuming that the people you disagree with are wicked? That's the larger point of what I was trying to get at.
From cluebcke at yahoo.com Wed Feb 17 17:18:53 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 09:18:53 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> Message-ID: <723938.90546.qm@web111204.mail.gq1.yahoo.com> > most of this has been produced by environmental activists who infiltrated CRU, GISS, and NOAA. Infiltrators? Oh my. Sounds like we need a purge.
From alfio.puglisi at gmail.com Wed Feb 17 18:10:00 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 19:10:00 +0100 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> Message-ID: <4902d9991002171010q488c79e1td46df27ddcd96f77@mail.gmail.com> On Wed, Feb 17, 2010 at 5:59 PM, Rafal Smigrodzki <rafal.smigrodzki at gmail.com> wrote: > On Tue, Feb 16, 2010 at 5:25 PM, Christopher Luebcke > wrote: > >> The peer-reviewed science (which, as has been clear for some time, is > not always a guarantee of accuracy), does not all agree. So that doesn't > settle the issue. > > > > No, but almost all of it supports the positions that the Earth has been > warming over the last century, that the warming has primarily been caused by > mankind's introduction of greenhouse gasses into the atmosphere, and that > this warming trend will continue, with projected results over the next 100 > years ranging, roughly, from pretty bad to catastrophic in terms of human > suffering. > > ### Did you ever read any of this peer-reviewed literature? > > Most likely not, since if you did (as I did), you wouldn't have > written the paragraph. In fact, only a minority of peer-reviewed > literature actively endorses the statements you made, > > Give me a reference to a peer-reviewed primary research paper showing > manmade global warming and I'll give you two disagreeing with it. > Mmm.... each chapter of the IPCC report has dozens of references. Can you really find two times that amount? > and most of this > has been produced by environmental activists who infiltrated CRU, > GISS, and NOAA. And they managed to convince the UK Royal Society, many national academies of science, and even the American Association of Petroleum Geologists. "infiltration" doesn't begin to describe it. Alfio > Rafal From alfio.puglisi at gmail.com Wed Feb 17 18:15:04 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 19:15:04 +0100 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> Message-ID: <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> On Wed, Feb 17, 2010 at 5:55 PM, Natasha Vita-More wrote: > Christopher Luebcke writes: > > "... I just find it sadly ironic, and a dark reflection on the petty, > fiercely partisan, politicized nature of what should in fact be a sober, > scientific discussion, that a man accused (in the court of political > opinion) of fraud should suddenly be taken at his word by those same > accusers, when that word apparently agrees with their position." > > Many of us find it frustrating.
Nevertheless the biggest problem and > disappointment is that, while everyone seems to be annoyed and saddened, there > is a lack of intelligent communication between all parties in viewing the > situation from diverse perspectives. > > Max is working to develop a level ground for discussion. Bravo to him. > I don't agree. Posting all those links to garbage blogs like WUWT, and quoting them like they had any value instead of laughing at them (or despairing, depending on the mood), does a disservice to the Extropy list. It shows how a group of intelligent people can be easily manipulated by PR spin. I find it depressing. Alfio > Best, > Natasha From natasha at natasha.cc Wed Feb 17 19:49:45 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 17 Feb 2010 14:49:45 -0500 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> Message-ID: <20100217144945.t2astxtwhwowssg8@webmail.natasha.cc> Your observation is irrational. Natasha Quoting Alfio Puglisi : > I don't agree. Posting all those links to garbage blogs like WUWT, and > quoting them like they had any value instead of laughing at them (or > despairing, depending on the mood), does a disservice to the Extropy list. It > shows how a group of intelligent people can be easily manipulated by PR > spin. I find it depressing.
> > Alfio > > > >> Best, >> Natasha >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > From thespike at satx.rr.com Wed Feb 17 19:51:14 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Feb 2010 13:51:14 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> Message-ID: <4B7C48B2.1030303@satx.rr.com> On 2/17/2010 12:15 PM, Alfio Puglisi wrote: >> Max is working to develop a level ground for discussion. Bravo to him. > I don't agree. Posting all those links to garbage blogs like WUWT, and > quoting them as if they had any value instead of laughing at them (or > despairing, depending on the mood), does a disservice to the Extropy > list. It shows how a group of intelligent people can be easily > manipulated by PR spin. I find it depressing. Have to agree with Alfio--sorry. Quoting should at least have url'd the full BBC interview text, which gives a rather different impression (to me, anyway): Damien Broderick From natasha at natasha.cc Wed Feb 17 20:07:39 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 17 Feb 2010 15:07:39 -0500 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <4B7C48B2.1030303@satx.rr.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <4B7C48B2.1030303@satx.rr.com> Message-ID: <20100217150739.8qmf6mzz4gokc8cc@webmail.natasha.cc> You agree with the irrational claim of Alfio that Max is easily manipulated by PR spins? We'd have to ask why the other url was not included; but I hardly think it is sufficient evidence that one is being manipulated. Natasha Quoting Damien Broderick : > On 2/17/2010 12:15 PM, Alfio Puglisi wrote: > >>> Max is working to develop a level ground for discussion. Bravo to him. > >> I don't agree. Posting all those links to garbage blogs like WUWT, and >> quoting them as if they had any value instead of laughing at them (or >> despairing, depending on the mood), does a disservice to the Extropy >> list. It shows how a group of intelligent people can be easily >> manipulated by PR spin. I find it depressing. > > Have to agree with Alfio--sorry.
Quoting should at least have url'd the > full BBC interview text, which gives a rather different impression (to > me, anyway): > > > > Damien Broderick > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From thespike at satx.rr.com Wed Feb 17 20:36:29 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Feb 2010 14:36:29 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <20100217150739.8qmf6mzz4gokc8cc@webmail.natasha.cc> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <4B7C48B2.1030303@satx.rr.com> <20100217150739.8qmf6mzz4gokc8cc@webmail.natasha.cc> Message-ID: <4B7C534D.5040807@satx.rr.com> On 2/17/2010 2:07 PM, natasha at natasha.cc wrote: > You agree with the irrational claim of Alfio that Max is easily manipulated by > PR spins? Alfio was making a general and quite rational point about some posters to the list, I think, and in this case it does look as if Max jumped the gun in citing spin stories rather than the original interview. Calling Alfio "irrational" doesn't get us very far in advancing the discussion. Christopher commented: "I wonder if it wouldn't be possible to disagree with their positions, though, without also presuming that the people you disagree with are wicked?" Ditto "irrational." HOWEVER... that doesn't mean some players in the supposed debate *aren't* wicked--consider, by analogy, the decades-long and perhaps equivalent role of corporate advocates for carcinogenic smoking. If that was not wickedness, what is? It is arguable that climate change deniers are in a similar position. That said, I agree with Barbara Lamar, who comments: Damien Broderick From lacertilian at gmail.com Wed Feb 17 20:38:24 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 17 Feb 2010 12:38:24 -0800 Subject: [ExI] Consciouness and paracrap In-Reply-To: <517167.25357.qm@web110408.mail.gq1.yahoo.com> References: <517167.25357.qm@web110408.mail.gq1.yahoo.com> Message-ID: Anna Taylor : > I know I'm simple minded but I don't understand why consciousness is such > a philosophical debate. I wonder why science sometimes complicates > things. Simplicity is a virtue. But, have you seen much scientific reasoning going on here? I haven't! Anna Taylor : > Let's say the word conscious meant alive versus > dead. Anything that is conscious is alive and awake. Could everyone > agree on that? I have no problem with that definition, but then you also need to define "alive". Lots of disagreement there. I think "awake" is sufficiently unambiguous. You might run into the question of what androids dream of, though. Does it make sense to speak of sleeping rocks? Being alive may be a prerequisite for being awake or asleep. Anna Taylor : > My observation: > Well my contacts work fine. My memory would recall, extract and process > the information. If the contacts are too weak and I can't see then > yes one of my awareness factors would be limited but it would not stop > me from being conscious or have consciousness. This isn't actually what Stathis was talking about.
He wants the answer to this question: If I were to replace your visual cortex with a device* wired to exactly reproduce its normal inputs and outputs, would you still have the subjective experience of seeing? *There is an implication that the device in question is a digital computer, which I take to mean something with registers and clock cycles. Stathis Papaioannou : > Several people have commented that we need a definition of > consciousness to proceed, but I disagree. I think everyone knows what > is meant by the word and so we can have a complete discussion without > at any point defining it. Dude, I barely know what I mean by the word when I use it in my own head. Are you talking about access consciousness? Phenomenal consciousness? Reflexive consciousness? All of the above? http://www.def-logic.com/articles/silby011.html The reason I haven't supplied a rigorous definition for consciousness, as I have for intelligence, is because I can't articulate the meaning of it for myself. This, to me, does not seem ancillary to the discussion; it seems to be the very root of the discussion, namely the question, "what is consciousness?". Stathis Papaioannou : > For those who say that consciousness does > not really exist: consciousness is that thing you are referring to > when you say that consciousness does not really exist. That's fair. There isn't any question of what I'm talking about when I refer to the Flying Spaghetti Monster. I can describe the FSM to you in great detail, however. I can't do the same with consciousness, except perhaps to say that, if it exists, it occasionally compels normally sane people to begin a sentence with "dude". Stathis Papaioannou : > The purpose of the above is to show that it is impossible (logically > impossible, not just physically impossible) to make a brain part, and > hence a whole brain, that behaves exactly like a biological brain but > lacks consciousness. Either it isn't possible to make such an > artificial component at all, or else it is possible to make such a > component but it will necessarily also have consciousness. The > alternative is to say that you're happy with the idea that you may be > blind, deaf, unable to understand English etc. but neither you nor > anyone else has noticed. > > Gordon Swobe's response is that this thought experiment is ridiculous > and I should come up with another one that doesn't challenge the > self-evident fact that digital computers cannot be conscious. Gordon doesn't disagree with that proposition as-stated, even if he sometimes claims that he does (for some reason). He's consistently said that we should be able to engineer artificial consciousness, but that to do so requires more than a clever piece of software in a digital computer. So, I suggest that you rephrase the experiment so that it explicitly involves replacing neurons, cortices, or whole brains with microprocessor-driven prosthetics. We know that he believes the whole-brain version will be a zombie, but I haven't been able to discern any clear conclusions from him on the other two. He has said before that partial replacement only confuses the matter, implying that it's a useless thought experiment. I do not see why he would think that, though. The only coherent answer of his I remember goes something like this: a man has a damaged language center, and a surgeon replaces neurons with artificial substitutes one by one. 
This works so poorly that the surgeon must replace the entire brain before language function is returned, at which point the man is a philosophical zombie. But we always start with the assumption that computerized neurons do not work poorly, indeed that they "depict" ordinary neurons perfectly (using that depiction as a guide to manipulate their synthetic axons and such), and I've never seen him explain why he considers this assumption to be inherently false. From joe.dalton23 at yahoo.com Wed Feb 17 20:14:07 2010 From: joe.dalton23 at yahoo.com (Joe Dalton) Date: Wed, 17 Feb 2010 12:14:07 -0800 (PST) Subject: [ExI] Test from new subscriber Message-ID: <939179.93622.qm@web113907.mail.gq1.yahoo.com> Testing -- Subscribed to this list a few months ago... only got one message thru. Trying again from a new address. Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Wed Feb 17 20:49:31 2010 From: max at maxmore.com (Max More) Date: Wed, 17 Feb 2010 14:49:31 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled Message-ID: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> Damien says: >Alfio was making a general and quite rational point about some posters >to the list, I think, and in this case it does look as if Max jumped the >gun in citing spin stories rather than the original interview. Oh give me a break. The piece cited included a link to the original BBC story in the *first sentence*. I should note that I will not engage Alfio in discussion on this issue, since it's clear to me that anything that disagrees with his view is automatically dismissed. The latest evidence for that is his calling WUWT a "garbage blog". It is not. I could reply -- equally unreasonably -- by calling RealClimate a garbage blog. Neither of them are garbage, though both may contain some mistaken material. Max From lacertilian at gmail.com Wed Feb 17 21:03:24 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 17 Feb 2010 13:03:24 -0800 Subject: [ExI] Test from new subscriber In-Reply-To: <939179.93622.qm@web113907.mail.gq1.yahoo.com> References: <939179.93622.qm@web113907.mail.gq1.yahoo.com> Message-ID: Joe Dalton : > > Testing -- Subscribed to this list a few months ago... only got one message thru. Trying again from a new address. > > Joe Seems to be working on our side. From joe.dalton23 at yahoo.com Wed Feb 17 20:37:25 2010 From: joe.dalton23 at yahoo.com (Joe Dalton) Date: Wed, 17 Feb 2010 12:37:25 -0800 (PST) Subject: [ExI] Fw: Test from new subscriber Message-ID: <534353.36367.qm@web113909.mail.gq1.yahoo.com> Arg. Trying again. Is someone trying to keep me out?? ----- Forwarded Message ---- From: Joe Dalton To: Extropy-Chat Sent: Wed, February 17, 2010 2:14:07 PM Subject: Test from new subscriber Testing -- Subscribed to this list a few months ago... only got one message thru. Trying again from a new address. Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Wed Feb 17 21:11:03 2010 From: mbb386 at main.nc.us (MB) Date: Wed, 17 Feb 2010 16:11:03 -0500 (EST) Subject: [ExI] Test from new subscriber In-Reply-To: <939179.93622.qm@web113907.mail.gq1.yahoo.com> References: <939179.93622.qm@web113907.mail.gq1.yahoo.com> Message-ID: <38630.12.77.168.255.1266441063.squirrel@www.main.nc.us> Received here. Looks fine. Regards, MB > Testing -- Subscribed to this list a few months ago... only got one message thru. > Trying again from a new address. 
> From thespike at satx.rr.com Wed Feb 17 21:11:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Feb 2010 15:11:57 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> References: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> Message-ID: <4B7C5B9D.7010502@satx.rr.com> On 2/17/2010 2:49 PM, Max More wrote: > Oh give me a break. The piece cited included a link to the original BBC > story in the *first sentence*. My goof, sorry. Didn't notice that, because the url was embedded behind a bolded phrase in the "Annotated Version of the Phil & Roger Show". Damien Broderick From natasha at natasha.cc Wed Feb 17 21:15:09 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 17 Feb 2010 16:15:09 -0500 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <4B7C534D.5040807@satx.rr.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <4B7C48B2.1030303@satx.rr.com> <20100217150739.8qmf6mzz4gokc8cc@webmail.natasha.cc> <4B7C534D.5040807@satx.rr.com> Message-ID: <20100217161509.hvzuebfszockcwkg@webmail.natasha.cc> Quoting Damien Broderick : > On 2/17/2010 2:07 PM, natasha at natasha.cc wrote: > >> You agree with the irrational claim of Alfio that Max is easily manipulated by >> PR spins? > > Alfio was making a general and quite rational point about some posters > to the list, I think, and in this case it does look as if Max jumped > the gun in citing spin stories rather than the original interview. This was not my take on it at all: The first sentence of the first paragraph of the article has a link to the original BBC Q&A interview. And in Max's post, he said that the piece was "interesting". Interesting to me means that it gets a person's attention - whether positive or negative - and does not mean that one agrees or disagrees. Further, asking why a person did not find it interesting invites an exchange into thinking processes (maybe not totally Socratic, but it opens dialogue). But, let's step back a moment: this topic is fascinating and, while I was out of the country (in Aussie land) when it broke, it got a heck of a lot of attention in lots of circles and this BBC-WUWT is the first I personally have read about it since that time ... so for me anyway, it is "interesting" and, mind you, I am not saying that I agree with it or disagree with it. I am merely absorbing. > Calling Alfio "irrational" doesn't get us very far in advancing the > discussion. Let's step back a moment: A perspective that lacks understanding (in this case an understanding of a person's intention, which we do not know yet because Max is not in this conversation and I do not speak for him) is not rational, especially when taking a post to court because it did not include a URL and taking "interesting" as more powerful than it was intended to be. If you read my first post in this thread concerning the issue of global warming and discourse surrounding global warming, -- I wrote: "Nevertheless the biggest problem and disappointment is that, while everyone seems to be annoyed and saddened, there is a lack of intelligent communication between all parties in viewing the situation from diverse perspectives." This sums up my view.
> Christopher commented: "I wonder if it wouldn't be possible to disagree > with their positions, though, without also presuming that the people > you disagree with are wicked?" Ditto "irrational." This is incorrect. Wicked does not equal irrational. And again here is another supposition. Irrational in this instance means a "LACK OF UNDERSTANDING". However, it is very true that irrational can be taken as pejorative - which is not how I meant it - I simply found the lack of dialogue, stated in an absolutist fashion, to be the result of a missed understanding. Nevertheless, let's move on (or should I say backwards): What does "manipulated" mean? Let's see: I "assume" that Christopher means being "controlled shrewdly" or maybe "deviously". But let's suppose the article does have a devious characteristic - does this result in a person being influenced by deception just because s/he says it is interesting? I think not. > HOWEVER... that > doesn't mean some players in the supposed debate *aren't* > wicked--consider, by analogy, the decades-long and perhaps equivalent > role of corporate advocates for carcinogenic smoking. If that was not > wickedness, what is? It is arguable that climate change deniers are in > a similar position. I'd love to have a darn good discussion about players of debates, advertising, PR, marketing, etc. under a different thread subject line which is more appropriate to its contents. Best, Natasha From joe.dalton23 at yahoo.com Wed Feb 17 20:52:50 2010 From: joe.dalton23 at yahoo.com (Joe Dalton) Date: Wed, 17 Feb 2010 12:52:50 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled Message-ID: <960175.20814.qm@web113903.mail.gq1.yahoo.com> Puglisi: Mmm.... each chapter of the IPCC report has dozens of references. Can you really find two times that amount? Doubt it. But there's no shortage of skeptical peer-reviewed papers. e.g.: http://wattsupwiththat.com/2009/11/15/reference-450-skeptical-peer-reviewed-papers/#more-12801 JoeD -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Wed Feb 17 21:24:54 2010 From: max at maxmore.com (Max More) Date: Wed, 17 Feb 2010 15:24:54 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled Message-ID: <201002172125.o1HLP3dO006994@andromeda.ziaspace.com> Thanks, Damien. Since the article I did cite linked so immediately to the BBC piece, I didn't think it necessary to separately include it in my post. Still, in one way, I may have to agree that I might have "jumped the gun" in posting that. I did think it was interesting that a central figure was moderating his position, but perhaps I should have gone to the trouble of analyzing where I agreed, disagreed, or doubted the summary and inferences made by Goklany. I should also have anticipated the renewed firestorm it might set off. Or perhaps this was a sinister plot by me to draw attention away from the endless consciousness/Searle discussion... >On 2/17/2010 2:49 PM, Max More wrote: > > > Oh give me a break. The piece cited included a link to the original > > BBC story in the *first sentence*. > >My goof, sorry. Didn't notice that, because the url was embedded behind >a bolded phrase in the "Annotated Version of the Phil & Roger Show". > >Damien Broderick Max ------------------------------------- Max More, Ph.D.
Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From max at maxmore.com Wed Feb 17 21:35:26 2010 From: max at maxmore.com (Max More) Date: Wed, 17 Feb 2010 15:35:26 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled Message-ID: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Emlyn: >I haven't read the whole thing in detail, but >one early piece stuck out like a sore thumb, >which is the old saw that there is no >significant warming since 1998. That's just flat >out wrong, and it's wrong because 1998 itself >stuck out like a sore thumb; it was a >statistical anomaly, which anyone who wasn't >being entirely disingenuous would agree with. > >Here's a discussion of that issue, along with >graphs showing the 1998 sore thumb: > >http://scienceblogs.com/illconsidered/2006/04/warming-stopped-in-1998.php The source you cite (which is almost four years old now) seems to rely for the recent period exclusively on NASA GISS analysis. (References to CRU data are for other periods. I didn't see any comparison with UAH or RSS.) In contrast, the following piece.. http://wattsupwiththat.com/2008/03/08/3-of-4-global-metrics-show-nearly-flat-temperature-anomaly-in-the-last-decade/ ...compares that analysis to three other sources and notes "NASA GISS land-ocean anomaly data showing a ten year trend of 0.151°C, which is about 5 times larger than the largest of the three metrics above, which is UAH at 0.028°C/ten years." Is there a good reason to rely completely on the source that seems out of alignment with the others? (I'm going to look at the two contrasting sources more closely when I have more time.) I'm not clear whether any or all of the four sources count as showing "statistically significant warming" (though RSS obviously does not, since it shows a very slight decline), but they do at least show warming greatly below IPCC trend. To be clear: Whether or not warming since 1995 or 1998 has stopped or considerably slowed down is of little importance. The orthodoxy and the skeptics can agree that one decade is too short to show anything significant about long-term trends. It does, however, raise additional doubts about AGW models. I haven't seen a good explanation of why the models completely fail to account for this -- and previous multi-decade, industrial-age pauses in warming, if CO2 really is the main driver. Not only is the one-decade/12-year record of little importance, I still have not seen adequate reason to maintain my doubts about the claim that century-long warming is definitely and entirely due to human activity rather than to a natural cyclical recovery from a cold period. But, obviously, that must be because I'm either stupid, evil, or probably both. ;-) Max ------------------------------------- Max More, Ph.D. Strategic Philosopher The Proactionary Project Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From scerir at libero.it Wed Feb 17 21:55:03 2010 From: scerir at libero.it (scerir) Date: Wed, 17 Feb 2010 22:55:03 +0100 (CET) Subject: [ExI] QET Message-ID: <18113742.1858661266443703358.JavaMail.defaultUser@defaultHost> Henrique: > By non-local gravitational fields you mean putting gravity on a spaceship for instance? Now that would be interesting. Paul Simon sings: "The problem is all inside your head", she said to me / The answer is easy if you take it logically [...]
> (from '50 Ways To Leave Your Lover', 1975) So, let us start from the beginning. In "Relativity and the Problem of Space" (1952), Albert Einstein wrote: "When a smaller box s is situated, relatively at rest, inside the hollow space of a larger box S, then the hollow space of s is a part of the hollow space of S, and the same "space", which contains both of them, belongs to each of the boxes. When s is in motion with respect to S, however, the concept is less simple. One is then inclined to think that s encloses always the same space, but a variable part of the space S. It then becomes necessary to apportion to each box its particular space, not thought of as bounded, and to assume that these two spaces are in motion with respect to each other. Before one has become aware of this complication, space appears as an unbounded medium or container in which material objects swim around. But it must now be remembered that there is an infinite number of spaces, which are in motion with respect to each other. The concept of space as something existing objectively and independent of things belongs to pre-scientific thought, but not so the idea of the existence of an infinite number of spaces in motion relatively to each other." If we follow Einstein, and encode gravity in the geometry of space-time, matter curves space-time, and its metric is no longer fixed. However, space-time is still somehow represented by a *smooth continuum*. To restore coherence of physics or - to say it better - to get a perfect coherence between GR and QT (not just the present "peaceful coexistence") one has to abandon the idea that space-time is fixed, immune to change. One has to encode gravity into the very geometry of space-time, thereby making this geometry *dynamical*. Thus, while spacetime can be defined by the objects themselves, and their dynamics, the nonlocal (rectius: nonseparable) behaviour of entangled particles is well known, and these entangled particles should live not only in Hilbert spaces but also in a well-designed space-time. Now, a simple question would be: if spacetime is defined by objects, and if the nature of these objects may be quantal, can we say that spacetime may be 'nonlocal' (or 'nonlocally causal')? Does it make any sense? For, general relativity completely ignores quantum effects and we have learned that these effects become important both in the physics of the *small* and in the physics of long distance *correlations* (even between *spacelike separated* regions of the universe, at least in principle). It has been said that the primary goal of *quantum gravity* is to uncover the quantal structure of spacetime, and coarse-graining, backreaction, fluctuations and correlations may play an essential role in such a quest. Quantum gravity is not equivalent to a local field theory in the (bulk) spacetime and there's a lot of powerful evidence that quantum gravity is not strictly local or causal (holography; getting the information out of the black hole; there is no connection operator in LQG and as a result the curvature operator has to be expressed in terms of holonomies and becomes non-local, etc.). Summing up. It is not about 'putting gravity on a space-ship'. It is more about thinking of space-time as something strictly dependent on the dynamics of massive objects and of quantal objects, it is more about the possibility of changing the gravitational field at-a-distance, via quantum entanglement correlations coupled to massive objects, or via more efficient quantum gravity mechanisms.
(Quantal randomness and related a-causality might still preserve the no-signaling postulate.) From alfio.puglisi at gmail.com Wed Feb 17 22:02:17 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 23:02:17 +0100 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> References: <201002172049.o1HKnewH023195@andromeda.ziaspace.com> Message-ID: <4902d9991002171402p44079507p9ebe64e9576a6d9e@mail.gmail.com> On Wed, Feb 17, 2010 at 9:49 PM, Max More wrote: I should note that I will not engage Alfio in discussion on this issue, > since it's clear to me that anything that disagrees with his view is > automatically dismissed. > Let's rewrite that: anything that disagrees with the current scientific understanding is dismissed. And not automatically. I usually try to show why, and I think that I posted more than enough links in the previous global warming thread. I am very disappointed that you think that I'm just defending a personal opinion. English is not my mother language, but I thought that I was writing at a decent enough level to be understood. We just got another link to WUWT from JoeD a few minutes ago. Oh, and another one from you while I'm writing this. I have seen so many links to it on this list, from multiple people, that it seems to have become the primary source of information for many people. Or at least, the source that they are going to cite. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfio.puglisi at gmail.com Wed Feb 17 22:07:57 2010 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 17 Feb 2010 23:07:57 +0100 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <20100217144945.t2astxtwhwowssg8@webmail.natasha.cc> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <20100217144945.t2astxtwhwowssg8@webmail.natasha.cc> Message-ID: <4902d9991002171407p5ef97706sd378917a44e17434@mail.gmail.com> On Wed, Feb 17, 2010 at 8:49 PM, wrote: > Your observation is irrational. > > It's not. The global warming debate only exists because of PR, media and political spin. The scientific debate was over many years ago. I highly suggest Spencer Weart's history of global warming science: http://www.aip.org/history/climate/ Alfio > Natasha > > Quoting Alfio Puglisi : > > On Wed, Feb 17, 2010 at 5:55 PM, Natasha Vita-More > >wrote: >> >> Christopher Luebcke writes: >>> >>> "... I just find it sadly ironic, and a dark reflection on the petty, >>> fiercely partisan, politicized nature of what should in fact be a sober, >>> scientific discussion, that a man accused (in the court of political >>> opinion) of fraud should suddenly be taken at his word by those same >>> accusers, when that word apparently agrees with their position." >>> >>> Many of us find it frustrating. Nevertheless the biggest problem and >>> disappointment is that, while everyone seems to be annoyed and saddened, >>> there >>> is a lack of intelligent communication between all parties in viewing the >>> situation from diverse perspectives. >>> >>> Max is working to develop a level ground for discussion. Bravo to him. >>> >>> >> I don't agree.
Posting all those links to garbage blogs like WUWT, and >> quoting them as if they had any value instead of laughing at them (or >> despairing, depending on the mood), does a disservice to the Extropy list. >> It >> shows how a group of intelligent people can be easily manipulated by PR >> spin. I find it depressing. >> >> Alfio >> >> >> >> Best, >>> Natasha >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >>> >> > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lacertilian at gmail.com Wed Feb 17 22:40:56 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 17 Feb 2010 14:40:56 -0800 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Message-ID: Max More : > Not only is the one-decade/12-year record of little importance, I still have not seen adequate reason to maintain my doubts about the claim that century-long warming is definitely and entirely due to human activity rather than to a natural cyclical recovery from a cold period. Haven't seen reason to maintain your doubts, eh? Perhaps you should abandon them, and jump on the mankind-driven bandwagon instead! Couldn't resist. Personally I am more-or-less with Max here in saying that we do not yet have nearly enough evidence to support either view. Truly conclusive proof would require, at the very least, an alternate Earth (real or simulated) in which no species ever evolved to the point where it became a good idea to just start burning everything. It's fairly obvious, though, that if we are the chief cause of global warming then we have no idea what we should do differently to prevent it or even slow it down. http://en.wikipedia.org/wiki/Greenhouse_gas#Greenhouse_effects_in_Earth.27s_atmosphere http://www.aip.org/history/climate/othergas.htm http://www.geocraft.com/WVFossils/greenhouse_data.html Ugh, I actually hadn't read any of these before now (except the Wikipedia one, maybe, but I only skimmed it). Just Googled them up on the spot. We don't know anything! Damien Broderick : > I would far rather see money spent on developing plants that are not > sensitive to heat & cold (interestingly, when plants are bred for cold > tolerance, they often have heat tolerance as well, as "side effect."), on > efficient energy production (so we can create affordable microclimates and > deal with rising sea levels, if we have to), etc. In other words - figure > out how to DEAL with the problem, not STOP it. Seconded. More heat just means more energy. Let's load up on Stirling engines* and start building cities underwater! No sense in waiting for the sea level to rise and do it for us. Or, you know, ideas that work. Like hardier plants. This is not my job. *Actually, as I understand it, global warming could more accurately be called global climate change. The hots get hotter and the colds get colder; everything, everywhere, grows extreme. So a continent-spanning array of Stirling engines might actually be useful, taking advantage of temperature differentials on global scales. Any of our geoengineers want to run the numbers on that?
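Here's my own zeroth-order stab, anyway; a minimal sketch in Python, where the 15-degree differential and everything else are numbers I made up for illustration, not measurements of anything:

# Carnot upper bound for a heat engine running between two
# atmospheric reservoirs. All figures are assumed for illustration.
T_hot = 303.0   # K, roughly 30 C on the warm side (assumed)
T_cold = 288.0  # K, roughly 15 C on the cool side (assumed)
eta = 1.0 - T_cold / T_hot  # no real engine can beat this bound
print("Carnot limit: %.1f%%" % (100 * eta))  # prints ~5.0%

Even a perfect engine on a differential like that converts only about one joule in twenty, and real Stirling hardware would do considerably worse.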
I'll bet you it would only be economical if we could build such a device for almost nothing. From thespike at satx.rr.com Wed Feb 17 23:01:48 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Feb 2010 17:01:48 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Message-ID: <4B7C755C.6060104@satx.rr.com> On 2/17/2010 4:40 PM, Spencer Campbell wrote: > Actually, as I understand it, global warming could more accurately be > called global climate change. The hots get hotter and the colds get > colder; everything, everywhere, grows extreme. Yes, but the increased volatility is caused by an overall increase in trapped heat. So "global climate change" is a handy term to block idiots from waving "IT CAN'T BE GETTING HOTTER--THERE WAS SNOW HERE THIS WEEK!" mid-winter signs. But it's still warming on a global scale. Damien Broderick From lacertilian at gmail.com Wed Feb 17 23:17:01 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Wed, 17 Feb 2010 15:17:01 -0800 Subject: [ExI] Dresden Codak has our number Message-ID: I think Aaron Diaz might be reading the ongoing (and going, and going) Extropy-Chat consciousness debate. http://dresdencodak.com/2010/02/16/artificial-flight-and-other-myths-a-reasoned-examination-of-af-by-top-birds/ So uh, just in case: Hi Aaron. Update more. Thanks. Bye. (I am walking on thin ice by linking to something with so few qualifying statements. Of this, I am aware. What's the worst that could happen?) From kanzure at gmail.com Wed Feb 17 23:21:14 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 17 Feb 2010 17:21:14 -0600 Subject: [ExI] Dresden Codak has our number In-Reply-To: References: Message-ID: <55ad6af71002171521k4fed6cf2p7aa423fb2528ca4e@mail.gmail.com> On Wed, Feb 17, 2010 at 5:17 PM, Spencer Campbell wrote: > I think Aaron Diaz might be reading the ongoing (and going, and going) > Extropy-Chat consciousness debate. > > http://dresdencodak.com/2010/02/16/artificial-flight-and-other-myths-a-reasoned-examination-of-af-by-top-birds/ > > So uh, just in case: > > Hi Aaron. Update more. Thanks. Bye. I saw Aaron at the last Singularity Summit. I was talking with him for a while until I realized who he was. I had to interrupt him mid-sentence with "YOU ARE A GOD". Also, he's too humble. - Bryan http://heybryan.org/ 1 512 203 0507 From cluebcke at yahoo.com Wed Feb 17 23:30:20 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 15:30:20 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Message-ID: <758946.78766.qm@web111202.mail.gq1.yahoo.com> > we do not > yet have nearly enough evidence to support either view. I think you may be in the minority view there; a lot of people seem to think that there's quite enough evidence. > Truly > conclusive proof would require, at the very least, an alternate Earth > (real or simulated) in which no species ever evolved to the point > where it became a good idea to just start burning everything. This is in fact what climate modelers attempt to do, with imperfect but increasing accuracy (the accuracy of models being measured by "backcasting"--starting at a point some time in the past, running the model, and seeing if the predicted results match observations).
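As a toy illustration of the scoring step only (the six-point series and the trivial linear "model" below are invented for the example; real GCMs are nothing like this simple):

# Toy backcast: start a (deliberately trivial) model in the past,
# run it forward, and score it against the observed record.
# All numbers are invented for illustration only.
observed = [0.10, 0.12, 0.18, 0.21, 0.30, 0.33]  # anomalies, deg C
hindcast = [0.08 + 0.05 * t for t in range(6)]   # model run from t=0
rmse = (sum((h - o) ** 2 for h, o in zip(hindcast, observed))
        / len(observed)) ** 0.5
print("hindcast RMSE: %.3f deg C" % rmse)  # lower = better skill

A model that can't keep that error down against data it never saw is a model you trust less going forward; that's the whole idea.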
See http://en.wikipedia.org/wiki/Global_climate_model#Accuracy_of_models_that_predict_global_warming for some examples of how this problem is addressed. No models are perfect and climate is notoriously difficult to model, but that doesn't mean we know nothing, and it doesn't mean we can't improve our knowledge. I will also second the proposition that the best use of our resources, outside of research dollars to improve our understanding and forecasting abilities, is to start planning adaptation now. Given where I live, this involves adding flippers to my earthquake preparedness kit :P ----- Original Message ---- > From: Spencer Campbell > To: ExI chat list > Sent: Wed, February 17, 2010 2:40:56 PM > Subject: Re: [ExI] Phil Jones acknowledging that climate science isn'tsettled > > Max More : > > Not only is the one-decade/12-year record of little importance, I still have > not seen adequate reason to maintain my doubts about the claim that century-long > warming is definitely and entirely due to human activity rather than to a > natural cyclical recovery from a cold period. > > Haven't seen reason to maintain your doubts, eh? Perhaps you should > abandon them, and jump on the mankind-driven bandwagon instead! > > Couldn't resist. > > Personally I am more-or-less with Max here in saying that we do not > yet have nearly enough evidence to support either view. Truly > conclusive proof would require, at the very least, an alternate Earth > (real or simulated) in which no species ever evolved to the point > where it became a good idea to just start burning everything. > > It's fairly obvious, though, that if we are the chief cause of global > warming then we have no idea what we should do differently to prevent > it or even slow it down. > > http://en.wikipedia.org/wiki/Greenhouse_gas#Greenhouse_effects_in_Earth.27s_atmosphere > > http://www.aip.org/history/climate/othergas.htm > > http://www.geocraft.com/WVFossils/greenhouse_data.html > > Ugh, I actually hadn't read any of these before now (except the > Wikipedia one, maybe, but I only skimmed it). Just Googled them up on > the spot. We don't know anything! > > Damien Broderick : > > I would far rather see money spent on developing plants that are not > > sensitive to heat & cold (interestingly, when plants are bred for cold > > tolerance, they often have heat tolerance as well, as "side effect."), on > > efficient energy production (so we can create affordable microclimates and > > deal with rising sea levels, if we have to), etc. In other words - figure > > out how to DEAL with the problem, not STOP it.> > > Seconded. > > More heat just means more energy. Let's load up on Stirling engines* > and start building cities underwater! No sense in waiting for the sea > level to rise and do it for us. > > Or, you know, ideas that work. Like hardier plants. > > This is not my job. > > > *Actually, as I understand it, global warming could more accurately be > called global climate change. The hots get hotter and the colds get > colder; everything, everywhere, grows extreme. So a continent-spanning > array of Stirling engines might actually be useful, taking advantage > of temperature differentials on global scales. Any of our geoengineers > want to run the numbers on that? I'll bet you it would only be > economical if we could build such a device for almost nothing. 
> _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From gts_2000 at yahoo.com Wed Feb 17 23:16:48 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 17 Feb 2010 15:16:48 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <218661.53873.qm@web111208.mail.gq1.yahoo.com> Message-ID: <760637.43302.qm@web36501.mail.mud.yahoo.com> --- On Wed, 2/17/10, Christopher Luebcke wrote: > Could one detect or measure consciousness on the basis of "behaviors and > reports of subjective experiences" alone, without direct anatomical > knowledge of a "brain and nervous system"? No, I don't think so. Under those circumstances we could only speculate. > Conversely, does a system with a "brain and nervous system" > necessarily have consciousness, even in the absence of > "behaviors and reports of subjective experience"? If you have a brain and a nervous system but exhibit no associated behaviors then it seems to me that you have some serious neurological issues. What about you, Chris? Do you have a physical brain capable of having conscious thoughts? -gts From cluebcke at yahoo.com Wed Feb 17 23:31:36 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 15:31:36 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <4B7C755C.6060104@satx.rr.com> References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> <4B7C755C.6060104@satx.rr.com> Message-ID: <320317.89684.qm@web111209.mail.gq1.yahoo.com> Yes, every spring propels us towards Waterworld, and every fall sees us skidding towards the next ice age. ----- Original Message ---- > From: Damien Broderick > To: ExI chat list > Sent: Wed, February 17, 2010 3:01:48 PM > Subject: Re: [ExI] Phil Jones acknowledging that climate science isn'tsettled > > On 2/17/2010 4:40 PM, Spencer Campbell wrote: > > > Actually, as I understand it, global warming could more accurately be > > called global climate change. The hots get hotter and the colds get > > colder; everything, everywhere, grows extreme. > > Yes, but the increased volatility is caused by an overall increase in trapped > heat. So "global climate change" is a handy term to block idiots from waving "IT > CAN'T BE GETTING HOTTER--THERE WAS SNOW HERE THIS WEEK!" mid-winter signs. But > it's still warming on a global scale. > > Damien Broderick > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From emlynoregan at gmail.com Thu Feb 18 00:37:00 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 18 Feb 2010 11:07:00 +1030 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> References: <201002172135.o1HLZYFR020721@andromeda.ziaspace.com> Message-ID: <710b78fc1002171637v76c67b76s781d9960af2faa4e@mail.gmail.com> On 18 February 2010 08:05, Max More wrote: > Emlyn: > >> I haven't read the whole thing in detail, but one early piece stuck out >> like a sore thumb, which is the old saw that there is no significant warming >> since 1998. That's just flat out wrong, and it's wrong because 1998 itself >> stuck out like a sore thumb; it was a statistical anomaly, which anyone who >> wasn't being entirely disingenuous would agree with.
>> >> Here's a discussion of that issue, along with graphs showing the 1998 sore >> thumb: >> >> http://scienceblogs.com/illconsidered/2006/04/warming-stopped-in-1998.php > > The source you cite (which is almost four years old now) seems to rely for > the recent period exclusively on NASA GISS analysis. (References to CRU data > are for other periods. I didn't see any comparison with UAH or RSS.) > > In contrast, the following piece.. > http://wattsupwiththat.com/2008/03/08/3-of-4-global-metrics-show-nearly-flat-temperature-anomaly-in-the-last-decade/ > > ...compares that analysis to three other sources and notes "NASA GISS > land-ocean anomaly data showing a ten year trend of 0.151°C, which is about > 5 times larger than the largest of the three metrics above, which is UAH at > 0.028°C/ten years." You know, what struck me about all the data presented in that article is that they include the anomalous data from approx '98, and nothing from before it. Have a look at those graphs, and tell me if you wouldn't see a more positive trend if you either excluded the oldest 12 months (the anomaly), or included a few more years before that? *Clearly* that would be the case. It's just exactly the same cherry picking as before. Have a look at the first graph: http://wattsupwiththat.files.wordpress.com/2008/03/elnino-vs-hadcrut.png?w=510 and look at the clear spike in the data around '98. See it rises higher than all the other points? If you include that anomaly and begin there, of course you'll get a flat or shallow gradient on average over a ten year period, even though the temperature (excluding that anomaly) is clearly continuing to trend strongly upward. The irony here is that people are using an anomalously hot year in a particular way to obscure the upward trend. Hilarious. I'll bet you that in a couple of years, these kinds of "skeptics" stop using the 10 year data, and start finding an excuse to use a 12 year window, 13 years, etc, for some reason, *or* inexplicably keep using the current data sets (which stop in 2007). > > Is there a good reason to rely completely on the source that seems out of > alignment with the others? (I'm going to look at the two contrasting sources > more closely when I have more time.) I don't think it's out of alignment. If he'd taken the CRU data over the same period, wouldn't it be similarly crippled by the initial anomaly? > > I'm not clear whether any or all of the four sources count as showing > "statistically significant warming" (though RSS obviously does not, since it > shows a very slight decline), but they do at least show warming greatly > below IPCC trend. really? > > To be clear: Whether or not warming since 1995 or 1998 has stopped or > considerably slowed down is of little importance. The orthodoxy and the > skeptics can agree that one decade is too short to show anything significant > about long-term trends. It does, however, raise additional doubts about AGW > models. I haven't seen a good explanation of why the models completely fail > to account for this -- and previous multi-decade, industrial-age pauses in > warming, if CO2 really is the main driver. > > Not only is the one-decade/12-year record of little importance, I still have > not seen adequate reason to maintain my doubts about the claim that > century-long warming is definitely and entirely due to human activity rather > than to a natural cyclical recovery from a cold period. As to that, it's a different claim, and none of the previous stuff speaks to it, absolutely.
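On the regression point, here's how little it takes. A toy least-squares fit, where the ten values are invented to mimic the shape of those graphs (one hot spike, then a steady rise) rather than taken from any real data set:

# Toy demo: one anomalously hot first year flattens an ordinary
# least-squares trend fitted from that year onward. Invented data.
def ols_slope(ys):
    xs = range(len(ys))
    n = float(len(ys))
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

temps = [0.70, 0.30, 0.33, 0.38, 0.42, 0.45, 0.47, 0.50, 0.52, 0.54]
print(ols_slope(temps))      # spike year included: ~0.008 deg C/yr
print(ols_slope(temps[1:]))  # spike excluded: ~0.030 deg C/yr

Same series minus one point, and the fitted trend nearly quadruples.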
All I was pointing out was that there is an anomaly at 1998, and it is clearly disingenuous to include it at the start of a period then do a simple regression. Why that is important is that it speaks to the motive of the person using the argument. Someone with solid ground to stand on, with a rational basis for their position, simply won't also include discredited arguments like the no warming since 1998 one, because it would undermine their otherwise grounded opinion and make them look like a liar. > But, obviously, that must be because I'm either stupid, evil, or probably > both. ;-) > > Max I'd never call you any of those things, that'd be crazy. But, doesn't the difference in styles of argument alone tell you something about the global warming hypothesis? Doesn't the way that the no-warming side keeps using discredited arguments, slipping from position to position ("there is no warming" becomes "well, ok, maybe there is warming, but it's not anthropogenic" becomes "well ok maybe there is some anthropogenic warming, but it's too late to act"), doesn't all that raise any red flags about what is going on here? -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From cluebcke at yahoo.com Thu Feb 18 00:28:21 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 16:28:21 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <760637.43302.qm@web36501.mail.mud.yahoo.com> References: <760637.43302.qm@web36501.mail.mud.yahoo.com> Message-ID: <209813.32982.qm@web111211.mail.gq1.yahoo.com> > > Could one detect or measure consciousness on the basis of "behaviors and > > reports of subjective experiences" alone, without direct anatomical > > knowledge of a "brain and nervous system"? > > No, I don't think so. Under those circumstances we could only speculate. I don't see how adding a knowledge of anatomy moves our position from speculation to certainty. > > Conversely, does a system with a "brain and nervous system" > > necessarily have consciousness, even in the absence of > > "behaviors and reports of subjective experience"? > > If you have a brain and a nervous system but exhibit no associated behaviors > then it seems to me that you have some serious neurological issues. Doubtless, but you didn't answer the question. > What about you, Chris? Do you have a physical brain capable of having conscious > thoughts? I would prefer not to partake in rhetorical questions; life's too short. I suspect you have a point primed and ready to fire for the answer that I'm bound to give to such a question, and it would save time if you simply made it. For the record, I am not interested in detecting consciousness in myself, but in other systems. ----- Original Message ---- > From: Gordon Swobe > To: ExI chat list > Sent: Wed, February 17, 2010 3:16:48 PM > Subject: Re: [ExI] Semiotics and Computability > > --- On Wed, 2/17/10, Christopher Luebcke wrote: > > > Could one detect or measure consciousness on the basis of "behaviors and > > reports of subjective experiences" alone, without direct anatomical > > knowledge of a "brain and nervous system"? > > No, I don't think so. Under those circumstances we could only speculate. > > > Conversely, does a system with a "brain and nervous system" > > necessarily have consciousness, even in the absence of > > "behaviors and reports of subjective experience"? 
> > If you have a brain and a nervous system but exhibit no associated behaviors > then it seems to me that you have some serious neurological issues. > > What about you, Chris? Do you have a physical brain capable of having conscious > thoughts? > > -gts > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From joe.dalton23 at yahoo.com Thu Feb 18 00:33:58 2010 From: joe.dalton23 at yahoo.com (Joe Dalton) Date: Wed, 17 Feb 2010 16:33:58 -0800 (PST) Subject: [ExI] K2, fake pot, a new massive threat to society? Message-ID: <925660.90709.qm@web113904.mail.gq1.yahoo.com> Ahhhh! Noooo! People are getting high and having fun and we don't have no law that lets us throw them in jail! The world's coming to an end! http://blogs.pitch.com/plog/2009/11/product_review_will_k2_synthetic_marijuana_get_you_high.php and Fake pot that acts real stymies law enforcement: http://www.msnbc.msn.com/id/35444158/ns/health-addictions/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Thu Feb 18 01:13:34 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 17 Feb 2010 20:13:34 -0500 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <466012.12578.qm@web111205.mail.gq1.yahoo.com> <963975.24812.qm@web36502.mail.mud.yahoo.com> Message-ID: <62c14241002171713h7a1ad5c1xf6a6107846dfd396@mail.gmail.com> On Tue, Feb 16, 2010 at 5:56 AM, Stathis Papaioannou wrote: > I have proposed the example of a brain which has enough intelligence > to know what the neurons are doing: "neuron no. 15,576,456,757 in the > left parietal lobe fires in response to noradrenaline, then breaks > down the noradrenaline by means of MAO and COMT", and so on, for every > brain event. That would be the equivalent of the man in the CR: there > is understanding of the low level events, but no understanding of the > high level intelligent behaviour which these events give rise to. Do > you see how there might be *two* intelligences here, a high level and > a low level one, with neither necessarily being aware of the other? I had a thought to add the following twist to CR: The man in the box has no knowledge of the symbols he's manipulating on his first day on the job. Over time, he notices a correlation between certain values in his lookup table(s) and the food slot opening and a tray being slid in... I understand the man in the room is a metaphor for rules-processing by rote, but what if we take the literal approach that he IS a man - even a supremely gifted intellectual who is informed that eventually these symbols will reveal the means by which he can escape? This scenario segues to the boxing problem of keeping a recursively improving AI constrained by 'friendliness' or some other artificially added bounds. (I understand that FAI is about being inherently friendly and remaining friendly after infinite recursion) So assuming the man in the box has an infinite supply of pen/paper with which to keep notes on the relationship of input and output (as well as his lookup table for I/O transformations) - does it change the thought experiment considerably if there is motivation for escaping the room by learning how to manipulate the symbols?
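P.S. To pin down what I mean by "rules-processing by rote": the man's whole job is this kind of lookup, sketched here with made-up symbols standing in for the cards (nothing in the table represents what any symbol means):

# The rulebook as pure lookup: symbol in, symbol out, and no state
# anywhere that stands for the meaning of either. Invented symbols.
RULES = {"squiggle": "squoggle", "blorp": "fleep"}

def room_step(symbol):
    return RULES.get(symbol, "no rule")

print(room_step("squiggle"))  # -> squoggle

The notes he keeps on correlations (food tray, escape clues) would be extra state layered on top of that table, which is exactly where the twist comes in.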
From steinberg.will at gmail.com Thu Feb 18 01:23:17 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Wed, 17 Feb 2010 20:23:17 -0500 Subject: [ExI] K2, fake pot, a new massive threat to society? In-Reply-To: <925660.90709.qm@web113904.mail.gq1.yahoo.com> References: <925660.90709.qm@web113904.mail.gq1.yahoo.com> Message-ID: <4e3a29501002171723y211a33d6v41baab037a81e949@mail.gmail.com> What's better, if one really wanted they could order a gram or two of pure JWH-018 or some of the other synthetic cannabinoids, soak some cigarettes/smokable herbs in a solution of it and sell the "buds" at a huge premium, which is pretty much what those K2 folk are doing (and the Spice guys too.) Of course, methinks money would be perhaps better allocated to one of the wonderful 2C compounds...and it's only a matter of time before those sweeter RCs start looking different enough from illegal chemicals that they'll be legal too, and the age of legal 'cid cometh... A boy can dream, can't he? -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Feb 18 01:48:10 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 18 Feb 2010 12:18:10 +1030 Subject: [ExI] Chemists deserve more credit: Atoms, Einstein, and the Matthew Effect Message-ID: <710b78fc1002171748l73316747xa75fa249927c6770@mail.gmail.com> Some more non-controversial controversy for the list seems like a good idea. I thought this article was interesting. ---------- Forwarded message ---------- From: Newsfeed to Email Gateway Date: 18 February 2010 12:03 Subject: Metamodern (1 new item) To: emlynoregan at gmail.com Metamodern (1 new item) Item 1 (02/17/10 23:41:52 UTC): Chemists deserve more credit: Atoms, Einstein, and the Matthew Effect [image: Cork cells, from Hooke's Micrographia] Johann Josef Loschmidt Chemist, atomic scientist Chemists understood the atomic structure of molecules in the 1800s, yet many say that Einstein established the existence of atoms in a paper on Brownian motion, "Die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen", published in 1905. This is perverse, and has seemed strange to me ever since I began reading the history of organic chemistry. Chemists often don't get the credit they deserve, and this provides an outstanding example. For years, I've read statements like this: [Einstein] offered an experimental test for the theory of heat and proof of the existence of atoms... ["The Hundredth Anniversary of Einstein's Annus Mirabilis"] Perhaps this was so for physicists in thrall (or opposition) to the philosophical ideas of another physicist, Ernst Mach; he had odd convictions about the relationship between primate eyes and physical reality, and denied the reality of invisible atoms. Confusion among physicists, however, gives reason for more (not less!) respect for the chemists who had gotten the facts right long before, and in more detail: that matter consists of atoms of distinct chemical elements, that the atoms of different elements have specific ratios of mass, and that molecules consist not only of groups of atoms, but of atoms linked by bonds ("Verwandtschaftseinheiten") to form specific structures. When I say "more detail", I mean a *lot* more detail than merely inferring that atoms exist. For example, organic chemists had deduced that carbon atoms form four bonds, typically (but not always) directed tetrahedrally, and that the resulting molecules can as a consequence have left- and right-handed forms. The chemists' understanding of bonding had many non-trivial consequences.
For example, it made the atomic structure of benzene a problem, and made a six-membered ring of atoms with alternating single and double bonds a solution to that problem. Data regarding chemical derivatives of benzene indicated a further problem, leading to the inference that the six bonds are equivalent. Decades later, quantum mechanics provided the explanation. The evidence for these detailed and interwoven facts about atoms included a range of properties of gases, the compositions of compounds, the symmetric and asymmetric shapes of crystals, the rotation of polarized light, and the specific numbers of chemically distinct forms of molecules with related structures and identical numbers of atoms. And chemists not only understood many facts about atoms, they understood how to make new molecular structures, pioneering the subtle methods of organic synthesis that are today an integral part of the leading edge of atomically precise nanotechnology. All this atom-based knowledge and capability was in place, as I said, before 1900, courtesy of chemical research by scientists including Dalton, van 't Hoff, Kekulé, and Pasteur. But was it really *knowledge?* By "knowledge", I don't mean to imply that universal consensus had been achieved at the time, or that knowledge can ever be philosophically and absolutely certain, but I think the term fits: A substantial community of scientists had a body of theory that explained a wide range of phenomena, including the many facets of the kinetic theory of gases and a host of chemical transformations, and more. That community of scientists grew, and progressively elaborated this body of atom-based theory and technology up to the present day, and it was confirmed, explained, and extended by physics along the way. Should we deny that this constituted knowledge, brush it all aside, and credit 20th century physics with establishing that atoms even exist? As I said: perverse. But what about *quantitative* knowledge? There is a more modest claim for Einstein's 1905 paper: "the bridge between the microscopic and macroscopic world was built by A. Einstein: his fundamental result expresses a macroscopic quantity - the coefficient of diffusion - in terms of microscopic data (elementary jumps of atoms or molecules)." ["One and a Half Centuries of Diffusion: Fick, Einstein, Before and Beyond"] This claim for the primacy of physics also seems dubious. A German chemist, Johann Josef Loschmidt, had already used macroscopic data to deduce the size of molecules in a gas. He built this quantitative bridge in a paper, "Zur Grösse der Luftmoleküle", published in 1865. ------------------------------ I had overlooked Loschmidt's accomplishment before today. I knew of Einstein's though, and of a phenomenon that the sociologists of science call the Matthew Effect. ------------------------------ *See also:* - A Map of Science - How to Learn About Everything ------------------------------ -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stathisp at gmail.com Thu Feb 18 01:49:01 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 18 Feb 2010 12:49:01 +1100 Subject: [ExI] Consciouness and paracrap In-Reply-To: References: <517167.25357.qm@web110408.mail.gq1.yahoo.com> Message-ID: On 18 February 2010 07:38, Spencer Campbell wrote: > Stathis Papaioannou : >> Several people have commented that we need a definition of >> consciousness to proceed, but I disagree. I think everyone knows what >> is meant by the word and so we can have a complete discussion without >> at any point defining it. > > Dude, I barely know what I mean by the word when I use it in my own > head. Are you talking about access consciousness? Phenomenal > consciousness? Reflexive consciousness? All of the above? > > http://www.def-logic.com/articles/silby011.html > > The reason I haven't supplied a rigorous definition for consciousness, > as I have for intelligence, is because I can't articulate the meaning > of it for myself. This, to me, does not seem ancillary to the > discussion; it seems to be the very root of the discussion, namely the > question, "what is consciousness?". > > Stathis Papaioannou : >> For those who say that consciousness does >> not really exist: consciousness is that thing you are referring to >> when you say that consciousness does not really exist. > > That's fair. There isn't any question of what I'm talking about when I > refer to the Flying Spaghetti Monster. > > I can describe the FSM to you in great detail, however. I can't do the > same with consciousness, except perhaps to say that, if it exists, it > occasionally compels normally sane people to begin a sentence with > "dude". You can't define it, but when I ask you if you are conscious now do you have to stop and think? It is this immediately understood sense I am referring to. This is not to say that further elaboration is useless, but you can go a long way discussing it without explicit definition. > Stathis Papaioannou : >> The purpose of the above is to show that it is impossible (logically >> impossible, not just physically impossible) to make a brain part, and >> hence a whole brain, that behaves exactly like a biological brain but >> lacks consciousness. Either it isn't possible to make such an >> artificial component at all, or else it is possible to make such a >> component but it will necessarily also have consciousness. The >> alternative is to say that you're happy with the idea that you may be >> blind, deaf, unable to understand English etc. but neither you nor >> anyone else has noticed. >> >> Gordon Swobe's response is that this thought experiment is ridiculous >> and I should come up with another one that doesn't challenge the >> self-evident fact that digital computers cannot be conscious. > > Gordon doesn't disagree with that proposition as-stated, even if he > sometimes claims that he does (for some reason). He's consistently > said that we should be able to engineer artificial consciousness, but > that to do so requires more than a clever piece of software in a > digital computer. > > So, I suggest that you rephrase the experiment so that it explicitly > involves replacing neurons, cortices, or whole brains with > microprocessor-driven prosthetics. We know that he believes the > whole-brain version will be a zombie, but I haven't been able to > discern any clear conclusions from him on the other two. 
The thought experiment involves replacing brain components with artificial components that perfectly reproduce the I/O behaviour of the original components, but not the consciousness. Gordon agrees that this is possible. However, he then either claims that the artificial components will not behave the same as the biological components (even though it is an assumption of the experiment that they will) or else says the experiment is ridiculous. > He has said before that partial replacement only confuses the matter, > implying that it's a useless thought experiment. I do not see why he > would think that, though. Perhaps because he can see that it shows that his thesis that it is possible to separate consciousness from behaviour is false. It's either that or accept the possibility of partial zombies. > The only coherent answer of his I remember goes something like this: a > man has a damaged language center, and a surgeon replaces neurons with > artificial substitutes one by one. This works so poorly that the > surgeon must replace the entire brain before language function is > returned, at which point the man is a philosophical zombie. > > But we always start with the assumption that computerized neurons do > not work poorly, indeed that they "depict" ordinary neurons perfectly > (using that depiction as a guide to manipulate their synthetic axons > and such), and I've never seen him explain why he considers this > assumption to be inherently false. That's the problem: he could say that they can't work properly on the grounds that there is something non-computable about neuronal behaviour, but he does not. Instead, he agrees that they will work properly, then in the next breath says they will not work properly. -- Stathis Papaioannou From max at maxmore.com Thu Feb 18 02:00:26 2010 From: max at maxmore.com (Max More) Date: Wed, 17 Feb 2010 20:00:26 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled Message-ID: <201002180200.o1I20aeI019649@andromeda.ziaspace.com> Emlyn, See this is what happens... I'm spending way too much time on a point that really doesn't matter... I just find it frustrating that it's difficult to come to a conclusion even about a relatively narrow issue like temperature trends in the most recent years. >You know, what struck me about all the data >presented in that article is that they include >the anomalous data from approx '98, and nothing from before it. Yes, you're right. That is a problem. Picking different base years would affect the results. But how much? Look at the charts again -- it seems clear that the moderated warming/actual cooling picks up in the last years. So, just start in 1999, 2000, or 2001, and recompute the numbers. I think the point would be essentially the same, although the analysis would yield somewhat different numbers. Although I don't have them at hand right now, I know I have seen other analyses which definitely and explicitly avoided starting at 1998. Googling around, I see this: http://rankexploits.com/musings/2008/ipcc-projections-overpredict-recent-warming/ -- which argues that temperature trends *since 2001* fall well below IPCC projections. I remember reading other reasonably credible sources that agree with that, and others that disputed it. One commentator says: John V (Comment#1009) March 10th, 2008 at 8:53 pm Hmmm, my numbers don't quite match yours.
I hope you don't mind a question or two to track down the discrepancies: Using monthly data, I get the following global trends from Jan 2001 to Feb 2008:
GISS: +0.83 C/century
HadC: -0.55 C/century
RSS: +0.41 C/century
UAH: -0.07 C/century
AVERAGE: +0.16 C/century
However, when I compute the trends from Jan 2002 to Feb 2008 I get:
GISS: -0.29 C/century
HadC: -1.67 C/century
RSS: -0.91 C/century
UAH: -1.71 C/century
AVERAGE: -1.14 C/century
I cannot evaluate those numbers, since I don't know Cochrane-Orcutt. Anyway, as I said before (and I think you agreed), these short-term trends really don't tell us anything, so I'm going to try not to spend more time on this particular point. Actually, it's really quite annoying that the author did use 1998 as a base year. That just obscures the point that was to be made by opening him (rightly) to cherry-picking charges. It is clear to me that there are plenty of analyses suggesting that warming has recently been below trend that do NOT depend on starting with 1998. For example: The following post seems quite interesting and helpful. It explains why Lindzen's claim that there has been no "statistically significant" warming over the last 14 years (since 1995, not 1998, note) is "not wrong, per se, but neither are they particularly robust": A Cherry-Picker's Guide to Temperature Trends (down, flat - even up) http://masterresource.org/?p=5240 and, related: http://rogerpielkejr.blogspot.com/2009/10/cherry-pickers-guide-to-global.html BTW, I notice that even Gavin at RealClimate acknowledges that "(2) It is highly questionable whether this 'pause' is even real. It does show up to some extent (no cooling, but reduced 10-year warming trend) in the Hadley Center data" http://www.realclimate.org/index.php/archives/2009/10/a-warming-pause/comment-page-7/#comment-138126 Again the GISS data gives a different result. (At a quick look, I don't see him discuss the other two datasets.) Of course RealClimate attacks the analysis, then that attack is attacked... There's more commentary on the disagreement here (but, note, using only Hadley and GISS): http://rankexploits.com/musings/2009/adding-apples-and-oranges-to-cherry-picking/ Among "lucia's" conclusions: "It does look like both RC [RealClimate] and Lindzen are doing some cherry picking of different sorts as suggested by Chip in his article." I don't think you can rightly dismiss doubts about claims of continued or accelerated recent warming by looking only for those who start their charts with 1998. I agree completely, however, that it's right to criticize those who do so. Sorry for wasting your time and mine on this rather insignificant (but annoyingly nagging) issue. Max From cluebcke at yahoo.com Thu Feb 18 01:43:17 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 17:43:17 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <62c14241002171713h7a1ad5c1xf6a6107846dfd396@mail.gmail.com> References: <466012.12578.qm@web111205.mail.gq1.yahoo.com> <963975.24812.qm@web36502.mail.mud.yahoo.com> <62c14241002171713h7a1ad5c1xf6a6107846dfd396@mail.gmail.com> Message-ID: <465007.28439.qm@web111210.mail.gq1.yahoo.com> It's also worth considering in what way teaching the man to follow all the rules for manipulating symbols is a different activity than teaching him Chinese. It may be different at the start, but I suspect that, if successful, it amounts to the same thing at the end.
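As a concrete picture of the rote rule-following under discussion, here is a minimal sketch in Python. The rulebook entries are invented for illustration (Searle's thought experiment specifies no particular rules), and a real Chinese Room rulebook would also track conversation state rather than doing single lookups.

    # A minimal sketch of rote symbol manipulation, assuming a toy rulebook.
    # The operator matches symbol shapes against a table; it never consults
    # what the symbols denote. All entries here are hypothetical.
    RULEBOOK = {
        "你好吗": "我很好",      # hypothetical rule: "how are you" -> "I am fine"
        "你几岁": "我不知道",    # hypothetical rule: "how old are you" -> "I don't know"
    }

    def operator(symbols: str) -> str:
        # Unrecognized input gets a fixed fallback, again purely by rule.
        return RULEBOOK.get(symbols, "请再说一遍")  # fallback: "please say it again"

    print(operator("你好吗"))  # prints 我很好 - looks like understanding from outside

Whether the man could bootstrap from regularities in such a table to real understanding - Dougherty's escape scenario, quoted below - is exactly the point in dispute.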
----- Original Message ---- > From: Mike Dougherty > To: ExI chat list > Sent: Wed, February 17, 2010 5:13:34 PM > Subject: Re: [ExI] Semiotics and Computability > > On Tue, Feb 16, 2010 at 5:56 AM, Stathis Papaioannou wrote: > > I have proposed the example of a brain which has enough intelligence > > to know what the neurons are doing: "neuron no. 15,576,456,757 in the > > left parietal lobe fires in response to noradrenaline, then breaks > > down the noradrenaline by means of MAO and COMT", and so on, for every > > brain event. That would be the equivalent of the man in the CR: there > > is understanding of the low level events, but no understanding of the > > high level intelligent behaviour which these events give rise to. Do > > you see how there might be *two* intelligences here, a high level and > > a low level one, with neither necessarily being aware of the other? > > I had a thought to add the following twist to CR: The man in the box > has no knowledge of the symbols he's manipulating on his first day on > the job. Over time, he notices a correlation between certain values in his > lookup table(s) and the food slot opening and a tray being slid in... > I understand the man in the room is a metaphor for rules-processing by > rote, but what if we take the literal approach that he IS a man - > even a supremely gifted intellectual who is informed that eventually > these symbols will reveal the means by which he can escape? This > scenario segues to the boxing problem of keeping a recursively > improving AI constrained by 'friendliness' or some other artificially > added bounds. (I understand that FAI is about being inherently > friendly and remaining friendly after infinite recursion) > > So assuming the man in the box has an infinite supply of pen/paper > with which to keep notes on the relationship of input and output (as > well as his lookup table for I/O transformations) - does it change the > thought experiment considerably if there is motivation for escaping > the room by learning how to manipulate the symbols? > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From rafal.smigrodzki at gmail.com Thu Feb 18 03:55:34 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 17 Feb 2010 22:55:34 -0500 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <154647.10956.qm@web111212.mail.gq1.yahoo.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> <154647.10956.qm@web111212.mail.gq1.yahoo.com> Message-ID: <7641ddc61002171955i33b65d34i80dcef79dc21669b@mail.gmail.com> On Wed, Feb 17, 2010 at 12:33 PM, Christopher Luebcke wrote: > Let me just add: I am not a climatologist, and therefore even if I had read a wide variety of peer-reviewed papers on the subject, I would not be qualified to determine whether I had made an accurate sampling, much less judge the papers on their merits. > > My claim comes in fact from paying attention to those organizations who are responsible for gathering and summarizing professional research and judgement on the subject: Not just IPCC, but NAS, AMS, AGU and AAAS are all organizations, far as I can tell, that are both qualified to comment on the matter, and who have supported the general position that AGW is real.
> > If you are going to dismiss well-respected scientific bodies that hold positions contrary to your own as necessarily having been "infiltrated by environmental activists", then it is incumbent upon you to provide some evidence that such infiltration by such people has actually taken place. ### Actually it's the other way around - once I started having doubts (a long time ago) about the catastrophic AGW scare, I decided to check some crucial publications myself, including their reviews by minority scientists. Luckily, climate science is not rocket science or string theory, so a mentally agile layperson can get a good idea of the plausibility of claims presented there, and can follow lines of argumentation laid out by opposing sides sufficiently well to spot gross aberrations, and very importantly, separate the actual claims advanced in primary publications from the distortions introduced by secondary publications (i.e. reviews in peer-reviewed literature), and complete garbage spouted by tertiary publications (which is unfortunately the only source of climate information for 99.9% of participants in the debate). Now, once I became in this way convinced that peer-reviewed literature emphatically does not support AGW, I had to explain why the tertiary literature and non-peer-reviewed communications of some climate scientists seem to be telling a dramatically different story. So far my best explanation is that there is a clique of environmental activists (James Hansen, Thomas Karl, Michael Mann) who were appointed to a few key positions in the science establishment, including review groups, and have since then manufactured the AGW scare. Of course, scientific organizations, such as NAS or APS, don't have an opinion about science - they always rely on the input of a small number of active researchers on any particular issue, and if their input (produced by Mann et al.) is corrupt, their output in the form of policy statements will be corrupt as well. GIGO. But, the question of exactly what mechanisms ("infiltration" or others) caused the science establishment to fail so badly here is just a side issue - the key question is whether AGW is real, and for that I can only urge you to delve into the primary literature and form an opinion directly. --------------------------------- > > I wonder if it wouldn't be possible to disagree with their positions, though, without also presuming that the people you disagree with are wicked? That's the larger point of what I was trying to get at. ### The core group of about 50 climate activists (The Team, as they refer to themselves) are wicked. They intentionally forged and misrepresented data to advance a preconceived position. The remaining "thousands of scientists" who lent their support to the AGW scare just failed to read and critically analyze the literature, which makes them incompetent but not wicked. Rafal From emlynoregan at gmail.com Thu Feb 18 04:01:03 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 18 Feb 2010 14:31:03 +1030 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <201002180200.o1I20aeI019649@andromeda.ziaspace.com> References: <201002180200.o1I20aeI019649@andromeda.ziaspace.com> Message-ID: <710b78fc1002172001w2274af4dyaf24e6ee712f09bf@mail.gmail.com> On 18 February 2010 12:30, Max More wrote: > Emlyn, > > See this is what happens... I'm spending way too much time on a point that > really doesn't matter...
I just find it frustrating that it's difficult to > come to a conclusion even about a relatively narrow issue like temperature > trends in the most recent years. Hey, here we come to one of the really fundamental points about this. How much time can you spend on stuff like this? On one hand, you want to be an informed person, be able to hold only opinions for which you have a rational basis. On the other hand, the world is really vastly too complex for any one person to do that and also function effectively. We're stuck with this efficiency tradeoff - do I try to "focus across the board", and fail miserably, or do I trust other people regarding the vast majority of stuff, the stuff about which I know too little to comment. Of course we do the latter. We temper it as much as possible with meta-level techniques for judging knowledge not by the too-complex content but by the approach and structure of those who work with it (eg: religious "knowledge" can be rejected in part because of the unsupportable approach and structure), debating techniques, prodding for inconsistencies, etc etc. With regard to the climate issue, it's clear at least to me that it is far too complex to understand as a lay person. You either dive in deep and become at least an amateur climatologist (like Darwin was an amateur biologist, ie: you live it), or you trust the people who do. As far as I can tell, the various minor scandals notwithstanding, there are a lot of specialised people in this field who know their stuff, and they say there's a serious, anthropogenic warming problem. Most of what they argue about is the details, the scale. > I don't think you can rightly dismiss doubts about claims of continued or > accelerated recent warming by looking only for those who start their charts > with 1998. I agree completely, however, that it's right to criticize those > who do so. There's so much written about all facets of this problem that good broad filters make sense IMO, and the first one I use is to remove anything where people are boldly making specious claims, which they must either know to be specious, or else they are incompetent. I did consider for a while simply ignoring all content from the US, and I still think that might provide a clearer picture, but since Lord Monckton turned up it doesn't quite look as viable. > > Sorry for wasting your time and mine on this rather insignificant (but > annoyingly nagging) issue. > > Max Even though I cut a lot of what you wrote, I enjoyed your response Max, there's a lot of meat in it. This whole climate change issue is depressing, fundamentally because there's not much upside, but it's important enough to be spending a few cycles on, so it's not a waste of time. And, Searle's not involved, which is a relief, no? -- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From emlynoregan at gmail.com Thu Feb 18 04:29:05 2010 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 18 Feb 2010 14:59:05 +1030 Subject: [ExI] IPCC errors: facts and spin Message-ID: <710b78fc1002172029q7d329f3bp34d66da0c13dc0be@mail.gmail.com> Apologies if this has turned up before. I haven't read AR4 (because I'm not a masochistic freak), so I have to take this mostly at face value. It seems reasonable though.
If I were to level any criticism at the IPCC report based on this, it would be at the process; this laborious effort to coordinate the input of thousands of volunteers to make several dense books, which then inevitably have some problems, seems like a poor approach in the modern world. Surely an online collaborative approach, with the ability to amend (bug fix), would be vastly superior??

----

IPCC errors: facts and spin
http://www.realclimate.org/index.php/archives/2010/02/ipcc-errors-facts-and-spin/

Currently, a few errors - and supposed errors - in the last IPCC report ("AR4") are making the media rounds - together with a lot of distortion and professional spin by parties interested in discrediting climate science. Time for us to sort the wheat from the chaff: which of these putative errors are real, and which not? And what does it all mean, for the IPCC in particular, and for climate science more broadly?

Let's start with a few basic facts about the IPCC. The IPCC is not, as many people seem to think, a large organization. In fact, it has only 10 full-time staff in its secretariat at the World Meteorological Organization in Geneva, plus a few staff in four technical support units that help the chairs of the three IPCC working groups and the national greenhouse gas inventories group. The actual work of the IPCC is done by unpaid volunteers - thousands of scientists at universities and research institutes around the world who contribute as authors or reviewers to the completion of the IPCC reports. A large fraction of the relevant scientific community is thus involved in the effort. The three working groups are:
Working Group 1 (WG1), which deals with the physical climate science basis, as assessed by the climatologists, including several of the Realclimate authors.
Working Group 2 (WG2), which deals with impacts of climate change on society and ecosystems, as assessed by social scientists, ecologists, etc.
Working Group 3 (WG3), which deals with mitigation options for limiting global warming, as assessed by energy experts, economists, etc.

Assessment reports are published every six or seven years and writing them takes about three years. Each working group publishes one of the three volumes of each assessment. The focus of the recent allegations is the Fourth Assessment Report (AR4), which was published in 2007. Its three volumes are almost a thousand pages each, in small print. They were written by over 450 lead authors and 800 contributing authors; most were not previous IPCC authors. There are three stages of review involving more than 2,500 expert reviewers who collectively submitted 90,000 review comments on the drafts. These, together with the authors' responses to them, are all in the public record.

Errors in the IPCC Fourth Assessment Report (AR4)

As far as we're aware, so far only one - or at most two - legitimate errors have been found in the AR4:

Himalayan glaciers: In a regional chapter on Asia in Volume 2, written by authors from the region, it was erroneously stated that 80% of Himalayan glacier area would very likely be gone by 2035. This is of course not the proper IPCC projection of future glacier decline, which is found in Volume 1 of the report. There we find a 45-page, perfectly valid chapter on glaciers, snow and ice (Chapter 4), with the authors including leading glacier experts (such as our colleague Georg Kaser from Austria, who first discovered the Himalaya error in the WG2 report). There are also several pages on future glacier decline in Chapter 10 ("Global Climate Projections"), where the proper projections are used e.g. to estimate future sea level rise. So the problem here is not that the IPCC's glacier experts made an incorrect prediction. The problem is that a WG2 chapter, instead of relying on the proper IPCC projections from their WG1 colleagues, cited an unreliable outside source in one place. Fixing this error involves deleting two sentences on page 493 of the WG2 report.

Sea level in the Netherlands: The WG2 report states that "The Netherlands is an example of a country highly susceptible to both sea-level rise and river flooding because 55% of its territory is below sea level". This sentence was provided by a Dutch government agency - the Netherlands Environmental Assessment Agency, which has now published a correction stating that the sentence should have read "55 per cent of the Netherlands is at risk of flooding; 26 per cent of the country is below sea level, and 29 per cent is susceptible to river flooding". It surely will go down as one of the more ironic episodes in its history when the Dutch parliament last Monday derided the IPCC, in a heated debate, for printing information provided by ... the Dutch government. In addition, the IPCC notes that there are several definitions of the area below sea level. The Dutch Ministry of Transport uses the figure 60% (below high water level during storms), while others use 30% (below mean sea level). Needless to say, the actual number mentioned in the report has no bearing on any IPCC conclusions and has nothing to do with climate science, and it is questionable whether it should even be counted as an IPCC error.

Some other issues

African crop yields: The IPCC Synthesis Report states: "By 2020, in some countries, yields from rain-fed agriculture could be reduced by up to 50%." This is properly referenced back to chapter 9.4 of WG2, which says: "In other countries, additional risks that could be exacerbated by climate change include greater erosion, deficiencies in yields from rain-fed agriculture of up to 50% during the 2000-2020 period, and reductions in crop growth period (Agoumi, 2003)." The Agoumi reference is correct and reported correctly. The Sunday Times, in an article by Jonathan Leake, labels this issue "Africagate" - the main criticism being that Agoumi (2003) is not a peer-reviewed study (see below for our comments on "gray" literature), but a report from the International Institute for Sustainable Development and the Climate Change Knowledge Network, funded by the US Agency for International Development. The report, written by Moroccan climate expert Professor Ali Agoumi, is a summary of technical studies and research conducted to inform Initial National Communications from three countries (Morocco, Algeria and Tunisia) to the United Nations Framework Convention on Climate Change, and is a perfectly legitimate IPCC reference. It is noteworthy that chapter 9.4 continues with "However, there is the possibility that adaptation could reduce these negative effects (Benhin, 2006)." Some examples thereof follow, and then it states: "However, not all changes in climate and climate variability will be negative, as agriculture and the growing seasons in certain areas (for example, parts of the Ethiopian highlands and parts of southern Africa such as Mozambique), may lengthen under climate change, due to a combination of increased temperature and rainfall changes (Thornton et al., 2006). Mild climate scenarios project further benefits across African croplands for irrigated and, especially, dryland farms." (Incidentally, the Benhin and Thornton references are also "gray", but nobody has complained about them. Could there be double standards amongst the IPCC's critics?) Chapter 9.4 to us sounds like a balanced discussion of potential risks and benefits, based on the evidence available at the time - hardly the stuff for shrill "Africagate!" cries. If the IPCC can be criticized here, it is that in condensing these results for its Synthesis Report, important nuance and qualification were lost - especially the point that the risk of drought (defined as a 50% downturn in rainfall) "could be exacerbated by climate change", as chapter 9.4 wrote - rather than being outright caused by climate change.

Trends in disaster losses: Jonathan Leake (again) in The Sunday Times accused the IPCC of wrongly linking global warming to natural disasters. The IPCC in a statement points out errors in Leake's "misleading and baseless story", and maintains that the IPCC provided "a balanced treatment of a complicated and important issue". While we agree with the IPCC here, WG2 did include a debatable graph provided by Robert Muir-Wood (although not in the main report but only as Supplementary Material). It cited a paper by Muir-Wood as its source although that paper doesn't include the graph, only the analysis that it is based on. Muir-Wood himself has gone on record to say that the IPCC has fairly represented his research findings and that it was appropriate to include them in the report. In our view there is no IPCC error here; at best there is a difference of opinion. Obviously, not every scientist will always agree with assessments made by the IPCC author teams.

Amazon forest dieback: Leake (yet again), with "research" by skeptic Richard North, has also promoted "Amazongate" with a story regarding a WG2 statement on the future of Amazonian forests under a drying climate. The contested IPCC statement reads: "Up to 40% of the Amazonian forests could react drastically to even a slight reduction in precipitation; this means that the tropical vegetation, hydrology and climate system in South America could change very rapidly to another steady state, not necessarily producing gradual changes between the current and the future situation (Rowell and Moore, 2000)." Leake's problem is with the Rowell and Moore reference, a WWF report. The roots of the story are in two blog pieces by North, in which he first claims that the IPCC assertions attributed to the WWF report are not actually in that report. Since this claim was immediately shown to be false, North then argued that the WWF report's basis for their statement (a 1999 Nature article by Nepstad et al.) dealt only with the effects of logging and fire - not drought - on Amazonian forests. To these various claims Nepstad has now responded, noting that the IPCC statement is in fact correct. The only issue is that the IPCC cited the WWF report rather than the underlying peer-reviewed papers by Nepstad et al. These studies actually provide the basis for the IPCC's estimate on Amazonian sensitivity to drought. Investigations of the correspondence between Leake, scientists, and a BBC reporter (see here and here and here) show that Leake ignored or misrepresented explanatory information given to him by Nepstad and another expert, Simon Lewis, and published his incorrect story anyway. This "issue" is thus completely without merit.

Gray literature: The IPCC cites 18,000 references in the AR4; the vast majority of these are peer-reviewed scientific journal papers. The IPCC maintains a clear guideline on the responsible use of so-called "gray" literature, which are typically reports by other organizations or governments. Especially for Working Groups 2 and 3 (but in some cases also for 1) it is indispensable to use gray sources, since many valuable data are published in them: reports by government statistics offices, the International Energy Agency, World Bank, UNEP and so on. This is particularly true when it comes to regional impacts in the least developed countries, where knowledgeable local experts exist who have little chance, or impetus, to publish in international science journals. Reports by non-governmental organizations like the WWF can be used (as in the Himalaya glacier and Amazon forest cases) but any information from them needs to be carefully checked (this guideline was not followed in the former case). After all, the role of the IPCC is to assess information, not just compile anything it finds. Assessment involves a level of critical judgment, double-checking, weighing supporting and conflicting pieces of evidence, and a critical appreciation of the methodology used to obtain the results. That is why leading researchers need to write the assessment reports - rather than, say, hiring graduate students to compile a comprehensive literature review.

Media distortions

To those familiar with the science and the IPCC's work, the current media discussion is in large part simply absurd and surreal. Journalists who have never even peeked into the IPCC report are now outraged that one wrong number appears on page 493 of Volume 2. We've met TV teams coming to film a report on the IPCC reports' errors, who were astonished when they held one of the heavy volumes in hand, having never even seen it. They told us frankly that they had no way to make their own judgment; they could only report what they were being told about it. And there are well-organized lobby forces with proper PR skills that make sure these journalists are being told the "right" story. That explains why some media stories about what is supposedly said in the IPCC reports can easily be falsified simply by opening the report and reading. Unfortunately, as a broad-based volunteer effort with only minimal organizational structure the IPCC is not in a good position to rapidly counter misinformation.

One near-universal meme of the media stories on the Himalaya mistake was that this was "one of the most central predictions of the IPCC" - apparently in order to make the error look more serious than it was. However, this prediction does not appear in any of the IPCC Summaries for Policy Makers, nor in the Synthesis Report (which at least partly explains why it went unnoticed for years). None of the media reports that we saw properly explained that Volume 1 (which is where projections of physical climate changes belong) has an extensive and entirely valid discussion of glacier loss.

What apparently has happened is that interested quarters, after the Himalayan glacier story broke, have sifted through the IPCC volumes with a fine-toothed comb, hoping to find more embarrassing errors. They have actually found precious little, but the little they did find was promptly hyped into Seagate, Africagate, Amazongate and so on. This has some similarity to the CRU email theft, where precious little was discovered from among thousands of emails, but a few sentences were plucked out of context, deliberately misinterpreted (like "hide the decline") and then hyped into "Climategate".

As lucidly analysed by Tim Holmes, there appear to be a few active leaders of this misinformation parade in the media. Jonathan Leake is carrying the ball on this, but his stories contain multiple errors, misrepresentations and misquotes. There also is a sizeable contingent of me-too journalism that is simply repeating the stories but not taking the time to form a well-founded view on the topics. Typically they report on various "allegations", such as these against the IPCC, similar to reporting that the CRU email hack led to "allegations of data manipulation". Technically it isn't even wrong that there were such allegations. But isn't it the responsibility of the media to actually investigate whether allegations have any merit before they decide to repeat them?

Leake incidentally attacked the scientific work of one of us (Stefan) in a Sunday Times article in January. This article was rather biased and contained some factual errors that Stefan asked to be corrected. He has received no response, nor was any correction made. Two British scientists quoted by Leake - Jonathan Gregory and Simon Holgate - independently wrote to Stefan after the article appeared to say they had been badly misquoted. One of them wrote that the experience with Leake had made him "reluctant to speak to any journalist about any subject at all".

Does the IPCC need to change?

The IPCC has done a very good job so far, but certainly there is room for improvement. The review procedures could be organized better, for example. Until now, anyone has been allowed to review any part of the IPCC drafts they liked, but there was no coordination in the sense that, say, a glacier expert was specifically assigned to double-check parts of the WG2 chapter on Asia. Such a practice would likely have caught the Himalayan glacier mistake. Another problem has been that reports of all three working groups had to be completed nearly at the same time, making it hard for WG2 to properly base their discussions on the conclusions and projections from WG1. This has already been improved on for the AR5, for which the WG2 report can be completed six months after the WG1 report. Also, these errors revealed that the IPCC had no mechanism to publish errata. Since a few errors will inevitably turn up in a 2800-page report, obviously an avenue is needed to publish errata as soon as errors are identified.

Is climate science sound?

In some media reports the impression has been given that even the fundamental results of climate change science are now in question, such as whether humans are in fact changing the climate, causing glacier melt, sea level rise and so on. The IPCC does not carry out primary research, and hence any mistakes in the IPCC reports do not imply that any climate research itself is wrong. A reference to a poor report or an editorial lapse by IPCC authors obviously does not undermine climate science. Doubting basic results of climate science based on the recent claims against the IPCC is particularly ironic since none of the real or supposed errors being discussed are even in the Working Group 1 report, where the climate science basis is laid out.

To be fair to our colleagues from WG2 and WG3, climate scientists do have a much simpler task. The system we study is ruled by the well-known laws of physics, there is plenty of hard data and peer-reviewed studies, and the science is relatively mature. The greenhouse effect was discovered in 1824 by Fourier, the heat trapping properties of CO2 and other gases were first measured by Tyndall in 1859, the climate sensitivity to CO2 was first computed in 1896 by Arrhenius, and by the 1950s the scientific foundations were pretty much understood.

Do the above issues suggest "politicized science", deliberate deceptions or a tendency towards alarmism on the part of IPCC? We do not think there is any factual basis for such allegations. To the contrary, large groups of (inherently cautious) scientists attempting to reach a consensus in a societally important collaborative document is a prescription for reaching generally "conservative" conclusions. And indeed, before the recent media flash broke out, the real discussion amongst experts was about the AR4 having underestimated, not exaggerated, certain aspects of climate change. These include such important topics as sea level rise and sea ice decline (see the sea ice and sea level chapters of the Copenhagen Diagnosis), where the data show that things are changing faster than the IPCC expected.

Overall then, the IPCC assessment reports reflect the state of scientific knowledge very well. There have been a few isolated errors, and these have been acknowledged and corrected. What is seriously amiss is something else: the public perception of the IPCC, and of climate science in general, has been massively distorted by the recent media storm. All of these various "gates" - Climategate, Amazongate, Seagate, Africagate, etc. - do not represent scandals of the IPCC or of climate science. Rather, they are the embarrassing battle-cries of a media scandal, in which a few journalists have misled the public with grossly overblown or entirely fabricated pseudogates, and many others have naively and willingly followed along without seeing through the scam. It is not up to us as climate scientists to clear up this mess - it is up to the media world itself to put this right again, e.g. by publishing proper analysis pieces like the one by Tim Holmes and by issuing formal corrections of their mistaken reporting. We will follow with great interest whether the media world has the professional and moral integrity to correct its own errors.

PS. A new book by Realclimate-authors David Archer and Stefan Rahmstorf critically discussing the main findings of the AR4 (all three volumes) is just out: The Climate Crisis. None of the real or alleged errors are in this book, since none of those contentious statements plucked from the thousands of pages appeared to be "main findings" that needed to be discussed in a 250-page summary.

PPS. Same thing for Mike's book Dire Predictions: Understanding Global Warming, which bills itself as "The illustrated guide to the findings of the IPCC". Or Gavin's "Climate Change: Picturing the Science" - which does include a few pictures of disappearing glaciers though!
-- Emlyn http://www.songsofmiseryanddespair.com - My show, Fringe 2010 http://point7.wordpress.com - My blog From cluebcke at yahoo.com Thu Feb 18 04:41:10 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 17 Feb 2010 20:41:10 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <7641ddc61002171955i33b65d34i80dcef79dc21669b@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> <154647.10956.qm@web111212.mail.gq1.yahoo.com> <7641ddc61002171955i33b65d34i80dcef79dc21669b@mail.gmail.com> Message-ID: <588495.24073.qm@web111212.mail.gq1.yahoo.com> > The core group of about 50 climate activists (The Team, as they > refer to themselves) are wicked. They intentionally forged and > misrepresented data to advance a preconceived position. I wonder if you could cite some examples of this intentional forgery? ----- Original Message ---- From: Rafal Smigrodzki To: ExI chat list Sent: Wed, February 17, 2010 7:55:34 PM Subject: Re: [ExI] Phil Jones acknowledging that climate science isn't settled On Wed, Feb 17, 2010 at 12:33 PM, Christopher Luebcke wrote: > Let me just add: I am not a climatologist, and therefore even if I had read a wide variety of peer-reviewed papers on the subject, I would not be qualified to determine whether I had made an accurate sampling, much less judge the papers on their merits. > > My claim comes in fact from paying attention to those organizations who are responsible for gathering and summarizing professional research and judgement on the subject: Not just IPCC, but NAS, AMS, AGU and AAAS are all organizations, far as I can tell, that are both qualified to comment on the matter, and who have supported the general position that AGW is real. > > If you are going to dismiss well-respected scientific bodies that hold positions contrary to your own as necessarily having been "infiltrated by environmental activists", then it is incumbent upon you to provide some evidence that such infiltration by such people has actually taken place. ### Actually it's the other way around - once I started having doubts (a long time ago) about the catastrophic AGW scare, I decided to check some crucial publications myself, including their reviews by minority scientists. Luckily, climate science is not rocket science or string theory, so a mentally agile layperson can get a good idea of the plausibility of claims presented there, and can follow lines of argumentation laid out by opposing sides sufficiently well to spot gross aberrations, and very importantly, separate the actual claims advanced in primary publications from the distortions introduced by secondary publications (i.e. reviews in peer-reviewed literature), and complete garbage spouted by tertiary publications (which is unfortunately the only source of climate information for 99.9% of participants in the debate). Now, once I became in this way convinced that peer-reviewed literature emphatically does not support AGW, I had to explain why the tertiary literature and non-peer-reviewed communications of some climate scientists seem to be telling a dramatically different story. So far my best explanation is that there is a clique of environmental activists (James Hansen, Thomas Karl, Michael Mann) who were appointed to a few key positions in the science establishment, including review groups, and have since then manufactured the AGW scare.
Of course, scientific organizations, such as NAS or APS, don't have an opinion about science - they always rely on the input of a small number of active researchers on any particular issue, and if their input (produced by Mann et al.) is corrupt, their output in the form of policy statements will be corrupt as well. GIGO. But, the question of exactly what mechanisms ("infiltration" or others) caused the science establishment to fail so badly here is just a side issue - the key question is whether AGW is real, and for that I can only urge you to delve into the primary literature and form an opinion directly. --------------------------------- > > I wonder if it wouldn't be possible to disagree with their positions, though, without also presuming that the people you disagree with are wicked? That's the larger point of what I was trying to get at. ### The core group of about 50 climate activists (The Team, as they refer to themselves) are wicked. They intentionally forged and misrepresented data to advance a preconceived position. The remaining "thousands of scientists" who lent their support to the AGW scare just failed to read and critically analyze the literature, which makes them incompetent but not wicked. Rafal _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From gts_2000 at yahoo.com Thu Feb 18 12:46:09 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 18 Feb 2010 04:46:09 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <209813.32982.qm@web111211.mail.gq1.yahoo.com> Message-ID: <384148.34945.qm@web36503.mail.mud.yahoo.com> --- On Wed, 2/17/10, Christopher Luebcke wrote: > I don't see how adding a knowledge of anatomy moves our > position from speculation to certainty. If I want to know if you have blood pumping through your veins, it helps me to know if you have a heart in your chest. Similarly, if I want to know if you have subjective mental states, it helps me to know if you have a brain in your head. But even then the sceptics will have room to doubt. Some philosophers abandon common sense and dive deep down into the sceptic's rabbit hole such that they doubt even their own existence. Funny thing is that they take the time to try to convince others to do the same. I don't know who they think they are or who they hope to convince. >> Conversely, does a system with a "brain and nervous system" >> necessarily have consciousness, even in the >> absence of "behaviors and reports of subjective >> experience"? >> >> If you have a brain and a nervous system but exhibit >> no associated behaviors then it seems to me that you have some serious >> neurological issues. > > Doubtless, but you didn't answer the question. Sorry, I only implied the answer: if you have a brain and nervous system but no behaviors or reports (including self-reports) of subjective experience then you have serious neurological issues, meaning that you effectively do not have a healthy brain and nervous system in the first place.
-gts From gts_2000 at yahoo.com Thu Feb 18 13:46:58 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 18 Feb 2010 05:46:58 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: <465007.28439.qm@web111210.mail.gq1.yahoo.com> Message-ID: <170982.84025.qm@web36502.mail.mud.yahoo.com> > I understand the man in the room is a metaphor for rules-processing by > rote, but what if we take the literal approach that he IS a man - even a > supremely gifted intellectual who is informed that eventually these > symbols will reveal the means by which he can escape? Lots of unread messages on my end and I'm in a rush this morning and I don't know who wrote the above (Mike?) but I wanted to take a moment to encourage this approach above. The CRA tells us as much about the philosophy of mind as it does about computers. The man cannot understand the symbols - no way, no how, not in a million years - and when you realize this you'll learn something important about yourself. -gts From stathisp at gmail.com Thu Feb 18 14:00:48 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 19 Feb 2010 01:00:48 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <170982.84025.qm@web36502.mail.mud.yahoo.com> References: <465007.28439.qm@web111210.mail.gq1.yahoo.com> <170982.84025.qm@web36502.mail.mud.yahoo.com> Message-ID: On 19 February 2010 00:46, Gordon Swobe wrote: >> I understand the man in the room is a metaphor for rules-processing by >> rote, but what if we take the literal approach that he IS a man - even a >> supremely gifted intellectual who is informed that eventually these >> symbols will reveal the means by which he can escape? > > Lots of unread messages on my end and I'm in a rush this morning and I don't know who wrote the above (Mike?) but I wanted to take a moment to encourage this approach above. The CRA tells us as much about the philosophy of mind as it does about computers. > > The man cannot understand the symbols - no way, no how, not in a million years - and when you realize this you'll learn something important about yourself. Amazingly the brain does understand symbols, even though it is in the same position as the CR, except worse since the neurons are far dumber than even the dumbest man. When you understand this you will understand something important about yourself.
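A toy illustration of how dumb components can yield system-level competence (a sketch with hand-picked weights, not a model of real neurons): each unit below applies only a fixed threshold rule, yet the three-unit ensemble computes XOR, a function no single unit computes.

    # Each "neuron" knows only its fixed threshold rule; the ensemble
    # nevertheless computes XOR, which no individual unit computes.
    # Weights are hand-picked for illustration, not learned.
    def unit(weights, bias, inputs):
        # A unit's entire job: weighted sum, then threshold.
        return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

    def xor(a, b):
        h1 = unit([1, 1], -0.5, [a, b])       # fires if a OR b
        h2 = unit([1, 1], -1.5, [a, b])       # fires if a AND b
        return unit([1, -2], -0.5, [h1, h2])  # fires if h1 AND NOT h2

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor(a, b))  # prints the XOR truth table

The point is only the layering: the unit-level rule contains no trace of the ensemble-level function, which exists only in how the units are wired together.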
-- Stathis Papaioannou From jameschoate at austin.rr.com Thu Feb 18 14:23:29 2010 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 18 Feb 2010 14:23:29 +0000 Subject: [ExI] CogPrint - Cognitive Science pre-Prints Message-ID: <20100218142329.A97ZH.12952.root@hrndva-web10-z02> http://cogprints.org/view/subjects/comp-sci-robot.html -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From msd001 at gmail.com Thu Feb 18 14:26:24 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 18 Feb 2010 09:26:24 -0500 Subject: [ExI] Semiotics and Computability In-Reply-To: <170982.84025.qm@web36502.mail.mud.yahoo.com> References: <465007.28439.qm@web111210.mail.gq1.yahoo.com> <170982.84025.qm@web36502.mail.mud.yahoo.com> Message-ID: <62c14241002180626m7f05bee0tbccfdb1b5fe442f1@mail.gmail.com> On Thu, Feb 18, 2010 at 8:46 AM, Gordon Swobe wrote: >> I understand the man in the room is a metaphor for rules-processing by >> rote, but what if we take the literal approach that he IS a man - even a >> supremely gifted intellectual who is informed that eventually these >> symbols will reveal the means by which he can escape? > > Lots of unread messages on my end and I'm in a rush this morning and I don't know who wrote the above (Mike?) but I wanted to take a moment to encourage this approach above. The CRA tells us as much about the philosophy of mind as it does about computers. > > The man cannot understand the symbols - no way, no how, not in a million years - and when you realize this you'll learn something important about yourself. Are you saying "man cannot understand" as a philosophical point? I'm talking about the fact that I may not know what the Chinese symbols are on the take-out menu, but I can notice the shapes are similar from one dish to another. Eventually I might observe the symbol for "chicken" and have some idea that there's a pattern of usage. Sure, that symbol might occur in an Enigma Machine that constantly changes the context for the symbol's use - but that suggests only that it takes more effort to establish the pattern (if one exists). If you are suggesting that there is NO order at all to the CR experiment and that the IO transformation is arbitrary and the signals chaotic - then what's the point? You'd be starting with chaos and claiming that order & meaning is not present by definition. I think an experiment like that has less usefulness than "I feel like XYZ is true" From gts_2000 at yahoo.com Thu Feb 18 14:33:37 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 18 Feb 2010 06:33:37 -0800 (PST) Subject: [ExI] Semiotics and Computability In-Reply-To: Message-ID: <597645.99584.qm@web36501.mail.mud.yahoo.com> --- On Thu, 2/18/10, Stathis Papaioannou wrote: >> The man cannot understand the symbols - no way, no >> how, not in a million years - and when you realize this >> you'll learn something important about yourself. > > Amazingly the brain does understand symbols, even though it > is in the same position as the CR, except worse since the neurons are > far dumber than even the dumbest man. When you understand this you > will understand something important about yourself. That's also true.
The man cannot understand the symbols and he does no more than implement a program. But the human brain understands symbols. So, either 1) the brain does not implement programs, or 2) the brain implements programs and does something else also. -gts From aware at awareresearch.com Thu Feb 18 15:12:40 2010 From: aware at awareresearch.com (Aware) Date: Thu, 18 Feb 2010 07:12:40 -0800 Subject: [ExI] Semiotics and Computability In-Reply-To: <597645.99584.qm@web36501.mail.mud.yahoo.com> References: <597645.99584.qm@web36501.mail.mud.yahoo.com> Message-ID: On Thu, Feb 18, 2010 at 6:33 AM, Gordon Swobe wrote: > --- On Thu, 2/18/10, Stathis Papaioannou wrote: > >>> The man cannot understand the symbols - no way, no >>> how, not in a million years - and when you realize this >>> you'll learn something important about yourself. >> >> Amazingly the brain does understand symbols, even though it >> is in the same position as the CR, except worse since the neurons are >> far dumber than even the dumbest man. When you understand this you >> will understand something important about yourself. > > That's also true. > > The man cannot understand the symbols and he does no more than implement a program. Yes. This is clear and consistent. > But the human brain understands symbols. Despite the understandably seductive nature of the belief, and its natural and expected origins in the necessity that any situated agent, to be effective, must have a model of itself to which it can refer, that assertion is unsupported. It can't even be modeled in any way that can be tested. > So, either > > 1) the brain does not implement programs, or > 2) the brain implements programs and does something else also. Or, the simpler, more coherent explanation that NO SYSTEM "has" any essential understanding (an essence which you can not even define, but only point to in de se terms) but many systems demonstrate appropriate behavior, meaningful only in terms of an observer, even when that observer is considered a part of the observed. There is no reason that we should reject functional equivalence, substrate independence, or computational models of self-aware systems, nor is there any reason to postulate the existence of some mysterious "something else." - Jef From natasha at natasha.cc Thu Feb 18 16:31:39 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 18 Feb 2010 10:31:39 -0600 Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <4902d9991002171407p5ef97706sd378917a44e17434@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com><412543.58639.qm@web111214.mail.gq1.yahoo.com><81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1><4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com><20100217144945.t2astxtwhwowssg8@webmail.natasha.cc> <4902d9991002171407p5ef97706sd378917a44e17434@mail.gmail.com> Message-ID:
Best,
Natasha

_____

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Alfio Puglisi
Sent: Wednesday, February 17, 2010 4:08 PM
To: ExI chat list
Subject: Re: [ExI] Phil Jones acknowledging that climate science isn't settled

On Wed, Feb 17, 2010 at 8:49 PM, wrote:

Your observation is irrational.

It's not. The global warming debate only exists because of PR, media and political spin. The scientific debate was over many years ago. I highly suggest Spencer Weart's history of global warming science:
http://www.aip.org/history/climate/

Alfio

Natasha

Quoting Alfio Puglisi :

On Wed, Feb 17, 2010 at 5:55 PM, Natasha Vita-More wrote:

Christopher Luebcke writes: "... I just find it sadly ironic, and a dark reflection on the petty, fiercely partisan, politicized nature of what should in fact be a sober, scientific discussion, that a man accused (in the court of political opinion) of fraud should suddenly be taken at his word by those same accusers, when that word apparently agrees with their position."

Many of us find it frustrating. Nevertheless, the biggest problem and disappointment is that, while everyone seems annoyed and saddened, there is a lack of intelligent communication between all parties in viewing the situation from diverse perspectives.

Max is working to develop a level ground for discussion. Bravo to him.

I don't agree. Posting all those links to garbage blogs like WUWT, and quoting them like they had any value instead of laughing at them (or despairing, depending on the mood), does a disservice to the Extropy list. It shows how a group of intelligent people can be easily manipulated by PR spin. I find it depressing.

Alfio

Best,
Natasha

From pharos at gmail.com  Thu Feb 18 16:52:27 2010
From: pharos at gmail.com (BillK)
Date: Thu, 18 Feb 2010 16:52:27 +0000
Subject: [ExI] Phil Jones acknowledging that climate science isn't settled
In-Reply-To: 
References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <20100217144945.t2astxtwhwowssg8@webmail.natasha.cc> <4902d9991002171407p5ef97706sd378917a44e17434@mail.gmail.com>
Message-ID: 

2010/2/18 Natasha Vita-More :
> This is not what the irrational observation was referring to. Urg.
>
> Anyway, on a *different point*: that the "global warming debate only exists because of PR, media and political spin" could be very true. Although, I'm not entirely sure the world and society feed off of PR. Some folks are sequestered in their studies reading peer-reviewed papers, and still others have withdrawn to their labs, working on experiments about the effects of temperature rises on life forms. I do very much agree with you on the influence of political positioning, and its PR, on the GW debate.
>

No doubt that's true. But the PR, media and political spin is all about power. Power to stop governments taking any action to restrain the corporations' behavior.
The mass of public opinion and the creation of 'popular dissenting views' sway elected representatives against initiating new projects to alleviate global warming and the effects thereof.

All the folks reading peer-reviewed papers and working on research in labs have no political lobbying power. That's why the corporations have won and science has lost.

BillK

From hkeithhenson at gmail.com  Thu Feb 18 17:08:57 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 18 Feb 2010 10:08:57 -0700
Subject: [ExI] Climate change doesn't matter
Message-ID: 

On Thu, Feb 18, 2010 at 5:00 AM, Max More wrote:
> I'm spending way too much time on a point that really doesn't matter...

No kidding. Climate change should not even be an issue. Solving the energy problem is the big issue. If we don't, up to 6 out of 7 people on the planet will die in famines and the resultant resource wars.

If we solve the energy problem, which absolutely requires a switch away from fossil fuels, then it makes no difference how much human-released (or any other) CO2 contributes to global warming. If we don't, then whatever problems global warming causes will be submerged in far more serious problems.

CNN reports that Bill Gates ended his remarks at the TED conference with: "If he could wish for anything in the world, Gates said he would not pick the next 50 years' worth of presidents or wish for a miracle vaccine. He would choose energy that is half as expensive as coal and doesn't warm the planet."

Though it isn't a very interesting topic here, I have been working on this for years. There are multiple engineering approaches; most of what I have been working on is reducing the cost of SBSP. There is another one I now know about and, though I am still going through the numbers, it too looks like it will fall into the "half as expensive as coal" class.

Keith

From natasha at natasha.cc  Thu Feb 18 17:26:40 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Thu, 18 Feb 2010 11:26:40 -0600
Subject: [ExI] "Transhumanism The Way of the Future" in The Scavenger - Natasha
Message-ID: 

My article is now available online in "The Scavenger" under Media & Technology.
http://www.thescavenger.net/media-a-technology/transhumanism-the-way-of-the-future-98432.html

Natasha Vita-More

From jonkc at bellsouth.net  Thu Feb 18 17:26:26 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 18 Feb 2010 12:26:26 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <597645.99584.qm@web36501.mail.mud.yahoo.com>
References: <597645.99584.qm@web36501.mail.mud.yahoo.com>
Message-ID: <1A0CA64B-57C4-4B7E-B68D-53920CB39A35@bellsouth.net>

Since my last post Gordon Swobe has posted 5 times.

> The [Chinese Room] man cannot understand the symbols and he does no more than implement a program.

Swobe's blunder has been pointed out many times but he still doesn't get it.
Never mind that the man is only a trivially small part of this cosmologically large Chinese Room; never mind that the consciousness in the room runs a hundred thousand million billion trillion times slower than anything humans have experience with. Swobe continues to blindly insist that if consciousness exists it must be in that silly little man; so if the man reports that he doesn't understand the Chinese symbols, not only does Swobe instantly believe him, he thinks that settles the question, since if the man doesn't understand the symbols then "obviously" nothing else in that enormous room could. But determining what has understanding and what does not is the entire point of the Chinese Room fiasco in the first place! If it was already "obvious" then what is the point of inventing the Chinese Room?

> But the human brain understands symbols. So, either
> 1) the brain does not implement programs, or
> 2) the brain implements programs and does something else also.

Swobe thinks that if one neuron running a program can't have any understanding then 100 billion neurons working together can't either. One water molecule is not wet, so using Swobe's way of thinking we conclude that the Pacific Ocean is not wet either.

> if I want to know if you have subjective mental states, it helps me to know if you have a brain in your head.

Anything that behaves intelligently will have something that corresponds to a brain, although not necessarily in the head; it need not even have a head.

> Some philosophers abandon common sense and dive deep down into the sceptic's rabbit hole such that they doubt even their own existence.

Swobe delights in bringing up this mythical moronic philosopher as a straw man, but that's all it is. And my nominations for the two most foolish philosophers of the last century are:

1) Those who say Evolution produced consciousness even though it doesn't affect behavior.
2) Those who say consciousness does affect behavior but the Turing Test still can't detect it.

> if you have a brain and nervous system but no behaviors or reports (including self-reports) of subjective experience

It would be enormously helpful if Swobe could explain what a report of subjective experience made by someone other than the subject in question is. In fact, not only would it be helpful, it would elevate Swobe to being by far the greatest philosopher who ever lived. I'm also a little curious why Swobe takes at face value a report from a human being that he has subjective experience, but if a robot, regardless of how intelligent, reports the same thing, Swobe is certain he is lying.

John K Clark

From alfio.puglisi at gmail.com  Thu Feb 18 19:06:29 2010
From: alfio.puglisi at gmail.com (Alfio Puglisi)
Date: Thu, 18 Feb 2010 20:06:29 +0100
Subject: [ExI] Phil Jones acknowledging that climate science isn't settled
In-Reply-To: 
References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <81DEE08EF6054121B466A3303CF6ADF9@DFC68LF1> <4902d9991002171015k3a0124c2qc19aaf1abc14f3bb@mail.gmail.com> <20100217144945.t2astxtwhwowssg8@webmail.natasha.cc> <4902d9991002171407p5ef97706sd378917a44e17434@mail.gmail.com>
Message-ID: <4902d9991002181106n7a1316c3q90aa953ca2e529d6@mail.gmail.com>

2010/2/18 Natasha Vita-More
> This is not what the irrational observation was referring to. Urg.

Then I misunderstood you. Sorry for that.
> Anyway, on a *different point*: that the "global warming debate only exists because of PR, media and political spin" could be very true. Although, I'm not entirely sure the world and society feed off of PR. Some folks are sequestered in their studies reading peer-reviewed papers, and still others have withdrawn to their labs, working on experiments about the effects of temperature rises on life forms. I do very much agree with you on the influence of political positioning, and its PR, on the GW debate.

Since I read Dawkins' The Selfish Gene, and the chapter about memes, my worldview has changed greatly. It's one of those moments when you slap yourself on the forehead and say, "Why didn't I think of this before?" The thought of human minds as an arena for competing memes is unsettling, but self-evidently true, at least to me.

The GW debate seems to be a competition between memes, some grounded in evidence, others instead floating on thin air but finding good support in other pre-existing memes like political orientation, mankind's place in this world (and thus what it can and cannot do), and so on. It's very difficult to discuss these kinds of things, since most people are already entrenched in one position and can interpret any discussion as a personal critique. I am open to suggestions.

Alfio
From gts_2000 at yahoo.com  Thu Feb 18 20:17:03 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 18 Feb 2010 12:17:03 -0800 (PST)
Subject: [ExI] How not to make a thought experiment
Message-ID: <204187.95641.qm@web36502.mail.mud.yahoo.com>

This conversation thread amounts to nothing more than an obsessive and malicious attempt by a certain netizen who goes by the name John K Clark to slander me and mischaracterize my views. I have no association with John K Clark.

Gordon Swobe

From gts_2000 at yahoo.com  Thu Feb 18 20:33:57 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 18 Feb 2010 12:33:57 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To: 
Message-ID: <528966.46922.qm@web36505.mail.mud.yahoo.com>

--- On Thu, 2/18/10, Aware wrote:

>> But the human brain understands symbols.
>
> Despite the understandably seductive nature of the belief, and its natural and expected origins in the necessity that any situated agent, to be effective, must have a model of itself to which it can refer, that assertion is unsupported.

I support my assertion by pointing to your understanding of the symbols in my claim. You understand them well enough even to make a counter-claim. I think your brain deserves the credit.

-gts

From lacertilian at gmail.com  Thu Feb 18 20:49:40 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 18 Feb 2010 12:49:40 -0800
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <1A0CA64B-57C4-4B7E-B68D-53920CB39A35@bellsouth.net>
References: <597645.99584.qm@web36501.mail.mud.yahoo.com> <1A0CA64B-57C4-4B7E-B68D-53920CB39A35@bellsouth.net>
Message-ID: 

John Clark :
> I'm also a little curious why Swobe takes at face value a report from a human being that he has subjective experience, but if a robot, regardless of how intelligent, reports the same thing, Swobe is certain he is lying.

Because human beings have brains!

HUH HUH SWOBE WRONG

No. It's because Gordon is a human, and Gordon can detect his own consciousness, and so Gordon assumes that other humans can do the same. It's a reasonable assumption, even if it does vaccinate him against a whole class of extremely relevant thought.

If you must keep picking on him, at least try to distinguish between the strong parts of his argument and the weak parts. This was a strong part. You only made it stronger.
From cluebcke at yahoo.com  Thu Feb 18 21:20:34 2010
From: cluebcke at yahoo.com (Christopher Luebcke)
Date: Thu, 18 Feb 2010 13:20:34 -0800 (PST)
Subject: [ExI] How not to make a thought experiment
In-Reply-To: 
References: <597645.99584.qm@web36501.mail.mud.yahoo.com> <1A0CA64B-57C4-4B7E-B68D-53920CB39A35@bellsouth.net>
Message-ID: <862214.84724.qm@web111207.mail.gq1.yahoo.com>

Certainty that the human's report is accurate does not provide the basis for certainty that the robot's report is not. This is crucial to the point--while it may be said that a functioning organic brain and nervous system is sufficient for consciousness, it has not at all been shown that it is necessary.

From gts_2000 at yahoo.com  Thu Feb 18 21:35:41 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 18 Feb 2010 13:35:41 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To: <62c14241002180626m7f05bee0tbccfdb1b5fe442f1@mail.gmail.com>
Message-ID: <767720.61984.qm@web36508.mail.mud.yahoo.com>

--- On Thu, 2/18/10, Mike Dougherty wrote:

> Are you saying "man cannot understand" as a philosophical point? I'm talking about the fact that I may not know what the Chinese symbols on the take-out menu are, but I can notice that the shapes are similar from one dish to another. Eventually I might observe the symbol for "chicken" and have some idea that there's a pattern of usage. Sure, that symbol might occur in an Enigma Machine that constantly changes the context for the symbol's use - but that suggests only that it takes more effort to establish the pattern (if one exists)

It might add clarity if we break the process of understanding the CRA into two steps:

1) Understanding that the man cannot understand the symbols from manipulating them according to their forms, i.e., that formal syntax does not give semantics in any language, including any programming language, and

2) Understanding that to an actual computer, the digital 1's and 0's or on/off states in image data (as in the digitized photos on your Chinese menu) look just like any other kind of symbol.

Idea 2) becomes important when considering the so-called "Robot Reply" to the CRA, in which some of Searle's critics added external sensors to the Chinese Room and then tried to make the man inside understand the symbols.
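To make idea 2) concrete, here is a toy sketch (Python; my own illustration with made-up pixel values, not anything from Searle) showing that the bytes of a word and the bytes of an image look exactly alike to the machine:

# Toy illustration: to the machine, text bytes and pixel bytes are
# just integers. The "pixel" values below are invented.
text_symbols = "squiggle".encode("ascii")
pixel_symbols = bytes([115, 113, 117, 105])  # four made-up pixel values

print(list(text_symbols))   # [115, 113, 117, 105, 103, 103, 108, 101]
print(list(pixel_symbols))  # [115, 113, 117, 105]

# Nothing in the integers marks one sequence as "language" and the
# other as "picture"; that interpretation comes from outside the data.
print(text_symbols[:4] == pixel_symbols)  # True

The syntax engine shuffles the same kind of tokens either way, which is why bolting a camera onto the room does not, by itself, add understanding.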
I haven't spent much time discussing the robot reply because I can hardly find agreement from people here that digital computers without sensors have no understanding.

> If you are suggesting that there is NO order at all to the CR experiment

Syntactic order exists in the CR, just as syntactic order exists in your computer. But syntactic order does not give understanding. Think of how the grammatical order of a sentence does not reveal the meanings of the words.

-gts

From gts_2000 at yahoo.com  Thu Feb 18 22:39:51 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 18 Feb 2010 14:39:51 -0800 (PST)
Subject: [ExI] The alleged existence of consciousness
Message-ID: <762702.96421.qm@web36504.mail.mud.yahoo.com>

--- On Tue, 2/16/10, Stathis Papaioannou wrote:

>> Seems to me consciousness amounts to just another biological process like digestion...
>
> But digestion is nothing over and above the chemical breakdown of food in the gut. Similarly, consciousness is nothing over and above the enacting of intelligent behaviour.

I think we can and will one day create unconscious robots that *act* like they have consciousness. We already have some unconscious robots that act conscious for certain tasks. Do you really disagree with this?

In any case I want to take a moment to address your claim that "consciousness is a necessary by-product of intelligence".

I notice first that the claim needs some disambiguation. I don't believe evolution scientists use that vocabulary. Generally they classify traits as either adaptive or non-adaptive.

Adaptive traits increase the probability that the host's genes will propagate. Eyesight serves as a good example of an adaptive trait.

Non-adaptive traits do not increase the probability that the host's genes will propagate. The red color of human blood serves as a good example of a non-adaptive trait. Blood looks red because hemoglobin contains iron, and hemoglobin carries precious oxygen to the cells. The redness of blood appears only as a "byproduct" of something else that has adaptive value.

Now then should we consider consciousness an adaptive trait or a non-adaptive trait? I think it clearly qualifies as an adaptive trait, and I think you will agree. Conscious awareness enhances intelligence and gives the organism more flexibility in dealing with multiple stimuli simultaneously.

As evidence of this we need only look at nature: conscious organisms like humans exhibit more complex and intelligent behaviors than do unconscious organisms like plants and microbes.

-gts

From msd001 at gmail.com  Thu Feb 18 23:31:17 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 18 Feb 2010 18:31:17 -0500
Subject: [ExI] Climate change doesn't matter
In-Reply-To: 
References: 
Message-ID: <62c14241002181531m1c92f8bcod55111e778d84b4a@mail.gmail.com>

On Thu, Feb 18, 2010 at 12:08 PM, Keith Henson wrote:
> Though it isn't a very interesting topic here, I have been working on this for years. There are multiple engineering approaches; most of what I have been working on is reducing the cost of SBSP. There is another one I now know about and, though I am still going through the numbers, it too looks like it will fall into the "half as expensive as coal" class.

Maybe I don't have enough history to know what you are alluding to, so I'll just ask...

If not SBSP, what?
From msd001 at gmail.com  Thu Feb 18 23:42:08 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 18 Feb 2010 18:42:08 -0500
Subject: [ExI] Semiotics and Computability
In-Reply-To: <767720.61984.qm@web36508.mail.mud.yahoo.com>
References: <62c14241002180626m7f05bee0tbccfdb1b5fe442f1@mail.gmail.com> <767720.61984.qm@web36508.mail.mud.yahoo.com>
Message-ID: <62c14241002181542q33487b6fg6791e48ba5c853bb@mail.gmail.com>

On Thu, Feb 18, 2010 at 4:35 PM, Gordon Swobe wrote:
> Syntactic order exists in the CR, just as syntactic order exists in your computer. But syntactic order does not give understanding. Think of how the grammatical order of a sentence does not reveal the meanings of the words.

There exist hundreds of writings in ancient languages, many of which have not been deciphered yet. Are you saying that there is no way to ever unravel those patterns of markings to glean insight from ancient writing? Would you have acid-washed the walls of Egypt because the pictures are irrelevant scribbling? Would you have destroyed the Rosetta Stone because it held no purpose after you destroyed the other writings you couldn't understand?

I would concede that the dots in a pointillist painting have no distinction between the "blue dot" in a person's head and the "blue dot" in the sky when examined too closely and without context. In the same way, there's no distinction between an atom of hydrogen in a molecule of water and one fueling the sun - so examining the hydrogen atom tells you nothing about its purpose in either a beverage or a sunny day.

Could we agree that the scope of language can create contexts that may simply be invalid, and that too much focus on invalid contexts generates only confusion?

From gts_2000 at yahoo.com  Fri Feb 19 00:15:40 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 18 Feb 2010 16:15:40 -0800 (PST)
Subject: [ExI] Consciouness and paracrap
In-Reply-To: 
Message-ID: <244245.17387.qm@web36502.mail.mud.yahoo.com>

--- On Wed, 2/17/10, Stathis Papaioannou wrote:

> The thought experiment involves replacing brain components with artificial components that perfectly reproduce the I/O behaviour of the original components, but not the consciousness. Gordon agrees that this is possible. However, he then either claims that the artificial components will not behave the same as the biological components (even though it is an assumption of the experiment that they will) or else says the experiment is ridiculous.

You make what I consider an over-simplification when you assume here, as you do, that the i/o behavior of a brain/neuron is all there is to the brain. To you it just seems "obvious" but I consider it anything but obvious. On your view, artificial neurons stuffed with mashed potatoes and gravy would work just fine provided they had the right i/o's.

It does not even seem to occur to you, for example, that consciousness may involve the electrical signals that travel down the axons internal to the neurons, or involve any number of a million other electrical or chemical processes *internal* to natural neurons.

When pressed you say that your argument applies to the whole brain and not only to individual neurons, so let's take a look at that:

Let us say that we created an artificial brain that contained a cubic foot of warm leftover mashed potatoes and gravy. Only the neurons on the exterior exist, but they have the i/o's of the exterior neurons of a natural brain, so the brain as a whole has the same i/o behavior as a natural brain.
Would your mister potato-head have consciousness? After all, it has the same i/o's as a natural brain, and you think nothing else matters.

Food for thought.

-gts

From stathisp at gmail.com  Fri Feb 19 00:35:14 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 19 Feb 2010 11:35:14 +1100
Subject: [ExI] Semiotics and Computability
In-Reply-To: <597645.99584.qm@web36501.mail.mud.yahoo.com>
References: <597645.99584.qm@web36501.mail.mud.yahoo.com>
Message-ID: 

On 19 February 2010 01:33, Gordon Swobe wrote:
> --- On Thu, 2/18/10, Stathis Papaioannou wrote:
>
>>> The man cannot understand the symbols - no way, no how, not in a million years - and when you realize this you'll learn something important about yourself.
>>
>> Amazingly the brain does understand symbols, even though it is in the same position as the CR, except worse since the neurons are far dumber than even the dumbest man. When you understand this you will understand something important about yourself.
>
> That's also true.
>
> The man cannot understand the symbols and he does no more than implement a program. But the human brain understands symbols.
>
> So, either
>
> 1) the brain does not implement programs, or
> 2) the brain implements programs and does something else also.

Or 3) implementing programs leads to understanding.

It seems that you just can't get past the very obvious point that although the man has no understanding of language, he is just a trivial part of the system, even if he internalises all the components of the system. His intelligence is in fact mostly superfluous. What he does is something a punchcard machine could do. In fact, the same could be said of the intelligence of the man with respect to knowledge of Chinese: it isn't a part of his cognitive competence, not even as zombie intelligence. It's as if you had a being of godlike intelligence (and consciousness) in your head whose only job was to make the neurons fire in the correct sequence. Do you see that such a being would not necessarily know anything about what you were thinking about, and you would not necessarily know anything about what it was thinking about?

-- 
Stathis Papaioannou

From hkeithhenson at gmail.com  Fri Feb 19 00:53:18 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 18 Feb 2010 17:53:18 -0700
Subject: [ExI] Climate change doesn't matter
Message-ID: 

On Thu, Feb 18, 2010 at 4:31 PM, Mike Dougherty wrote:
>
> On Thu, Feb 18, 2010 at 12:08 PM, Keith Henson wrote:
>> Though it isn't a very interesting topic here, I have been working on this for years. There are multiple engineering approaches; most of what I have been working on is reducing the cost of SBSP. There is another one I now know about and, though I am still going through the numbers, it too looks like it will fall into the "half as expensive as coal" class.
>
> Maybe I don't have enough history to know what you are alluding to, so I'll just ask...
>
> If not SBSP, what?

Under an NDA. Will let the list know when I can talk about it.

Keith

From gts_2000 at yahoo.com  Fri Feb 19 00:54:45 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 18 Feb 2010 16:54:45 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To: <62c14241002181542q33487b6fg6791e48ba5c853bb@mail.gmail.com>
Message-ID: <940700.83573.qm@web36505.mail.mud.yahoo.com>

--- On Thu, 2/18/10, Mike Dougherty wrote:

> There exist hundreds of writings in ancient languages, many of which have not been deciphered yet.
I found this account of the deciphering of the Rosetta Stone on wikipedia:

"In 1814, Briton Thomas Young finished translating the enchorial (demotic) text, and began work on the hieroglyphic script. From 1822 to 1824 the French scholar, philologist, and orientalist Jean-François Champollion greatly expanded on this work and is credited as the principal translator of the Rosetta Stone. Champollion could read both Greek and Coptic, and figured out what the seven Demotic signs in Coptic were. By looking at how these signs were used in Coptic, he worked out what they meant. Then he traced the Demotic signs back to hieroglyphic signs. By working out what some hieroglyphs stood for, he transliterated the text from the Demotic (or older Coptic) and Greek to the hieroglyphs by first translating Greek names which were originally in Greek, then working towards ancient names that had never been written in any other language. Champollion then created an alphabet to decipher the remaining text.[3]"

http://en.wikipedia.org/wiki/Rosetta_Stone

As you can see, this Champollion fellow started with some understanding. He built on that understanding to decipher more symbols. Presumably Thomas Young did the same before him.

Too bad our man in the room has no understanding of any symbols and so no knowledge base to build on. He can do no more than follow the syntactic instructions in the program: if input = "squiggle" then output "squoogle".

-gts

From gts_2000 at yahoo.com  Fri Feb 19 01:41:38 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 18 Feb 2010 17:41:38 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To: 
Message-ID: <794676.58785.qm@web36502.mail.mud.yahoo.com>

--- On Thu, 2/18/10, Stathis Papaioannou wrote:

> Or 3) implementing programs leads to understanding.
>
> It seems that you just can't get past the very obvious point that although the man has no understanding of language, he is just a trivial part of the system, even if he internalises all the components of the system. His intelligence is in fact mostly superfluous. What he does is something a punchcard machine could do. In fact, the same could be said of the intelligence of the man with respect to knowledge of Chinese: it isn't a part of his cognitive competence, not even as zombie intelligence. It's as if you had a being of godlike intelligence (and consciousness) in your head whose only job was to make the neurons fire in the correct sequence. Do you see that such a being would not necessarily know anything about what you were thinking about, and you would not necessarily know anything about what it was thinking about?

As if I had a "being with godlike intelligence in my head who makes the neurons fire"? Honestly, Stathis, I have no idea what you're talking about.

The CRA thought experiment involves *you the reader* imagining *yourself* in the room (or as the room) using *your* mind to attempt to understand the Chinese symbols. Nobody wants to know about strange speculations of *something else* in or about your brain that might understand the symbols when you don't understand them. I mentioned the pink unicorns the other day for that reason. If mysterious pink unicorns in some mysterious place understand the symbols, but you have no access to their understanding, then Searle still got it right.
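In case anyone wants the man's whole job spelled out, here is a minimal sketch (Python; the rule table and symbols are invented - a real rule book would be astronomically larger, but no less empty of meaning):

# Minimal sketch of pure rule-following. Shapes go in, shapes come
# out, and at no step does the meaning of any symbol enter into it.
rule_book = {
    "squiggle": "squoogle",
    "squoogle": "squiggle squiggle",
}

def man_in_room(symbol):
    # Match the shape of the input against the rule book and copy
    # out whatever the rules dictate. Nothing more.
    return rule_book.get(symbol, "squiggle")

print(man_in_room("squiggle"))  # squoogle

Run it all day; no understanding of "squiggle" ever appears anywhere in the procedure.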
-gts

From msd001 at gmail.com  Fri Feb 19 02:00:23 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 18 Feb 2010 21:00:23 -0500
Subject: [ExI] Consciouness and paracrap
In-Reply-To: <244245.17387.qm@web36502.mail.mud.yahoo.com>
References: <244245.17387.qm@web36502.mail.mud.yahoo.com>
Message-ID: <62c14241002181800u1b3209acgd20e74520005d92e@mail.gmail.com>

On Thu, Feb 18, 2010 at 7:15 PM, Gordon Swobe wrote:
> Let us say that we created an artificial brain that contained a cubic foot of warm leftover mashed potatoes and gravy. Only the neurons on the exterior exist, but they have the i/o's of the exterior neurons of a natural brain, so the brain as a whole has the same i/o behavior as a natural brain.
>
> Would your mister potato-head have consciousness? After all, it has the same i/o's as a natural brain, and you think nothing else matters.
>
> Food for thought.

Was the whole email a setup for that punchline?

From msd001 at gmail.com  Fri Feb 19 02:20:34 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 18 Feb 2010 21:20:34 -0500
Subject: [ExI] Semiotics and Computability
In-Reply-To: <940700.83573.qm@web36505.mail.mud.yahoo.com>
References: <62c14241002181542q33487b6fg6791e48ba5c853bb@mail.gmail.com> <940700.83573.qm@web36505.mail.mud.yahoo.com>
Message-ID: <62c14241002181820k79985471ha1857a566cdd2a89@mail.gmail.com>

On Thu, Feb 18, 2010 at 7:54 PM, Gordon Swobe wrote:
> As you can see, this Champollion fellow started with some understanding. He built on that understanding to decipher more symbols. Presumably Thomas Young did the same before him.
>
> Too bad our man in the room has no understanding of any symbols and so no knowledge base to build on. He can do no more than follow the syntactic instructions in the program: if input = "squiggle" then output "squoogle".

Again I ask if the man is a metaphor or an actual man. If a metaphor, then you might as well replace him with the mashed potatoes or some other dumb machine - his person status only confuses the issue. If he is an actual person, then we should assume he has enough of an a-priori "knowledge base" for "interpret symbol" and "transform input to output" etc.

Suppose further that there is some amount of processing ability beyond that required of the mashed-potato driven machine. With that pattern recognition the man is able to notice the pattern "squiggle->squoogle", so that he no longer needs to compare the shape to the lookup/transformation rules because he's internalized that knowledge (committed it to memory, established an onboard knowledge base, whatever). From outside the box we might observe that the operation has become optimized, because it no longer takes 20 seconds to 'compute' and now takes only 2. The man in the box might rather spend those 18 seconds pursuing other goals - like checking his notes for what has historically come next after the squiggle->squoogle transformation. Even if there is an 80% observed case of input A and 20% B, there may be a time when a third option is encountered. But the optimization to expect A or B following squiggle->squoogle gives the man another advantage in signal processing. I didn't expect to describe it as such, but this process is very similar to what (I imagine) is going on in either a data compression or realtime audio encoding application.

Are you going to argue that every action performed by the man in the box is part of the transformation process? Are you taking away his apparent optimization intelligence by scripting his actions?
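(In programming terms, the self-optimization I'm describing is basically memoization. A rough sketch in Python - the 20-second and 2-second figures from my example are invented stand-ins, and here the cache simply skips the slow lookup on repeat symbols:)

# Rough sketch of the man's shortcut as memoization.
cache = {}

def slow_rule_lookup(symbol):
    # Stands in for laboriously comparing shapes against the rule
    # book - the "20 second" operation in my example.
    return "squoogle" if symbol == "squiggle" else "squiggle"

def man_with_memory(symbol):
    if symbol not in cache:
        cache[symbol] = slow_rule_lookup(symbol)  # first time: slow
    return cache[symbol]                          # repeats: fast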
If you remove any responsibility from the man in the box, you move the responsibility to the author of the script. That's programming. I thought you said programming isn't intelligence... If I haven't stated this position clearly, please explain.

From lacertilian at gmail.com  Fri Feb 19 05:57:57 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 18 Feb 2010 21:57:57 -0800
Subject: [ExI] The alleged existence of consciousness
In-Reply-To: <762702.96421.qm@web36504.mail.mud.yahoo.com>
References: <762702.96421.qm@web36504.mail.mud.yahoo.com>
Message-ID: 

Gordon Swobe :
> Now then should we consider consciousness an adaptive trait or a non-adaptive trait? I think it clearly qualifies as an adaptive trait, and I think you will agree. Conscious awareness enhances intelligence and gives the organism more flexibility in dealing with multiple stimuli simultaneously.

See, it's remarks like these that make me think you and I mean completely different things when we use the word "consciousness". I give it a more metaphysical position. I say philosophical zombies are conceivable, but, as far as my reasoning goes, physically impossible.

Imagine the whole physical universe as a bubble. Next to it, place a "godhead" bubble: an entirely distinct universe which is built out of the stuff of phenomenal experience rather than spacetime and such. Now arbitrarily suppose a one-way line of communication between the two bubbles, so that the mechanistic universe is feeding content to the phenomenal universe. There is no way to determine that the downloading bubble exists from within the uploading bubble, even if you suppose that it does.

That's about what I think consciousness is right now, but I'm not anywhere near satisfied with it as a theory. It's inelegant and naive. I see no reason why reality should be set up in that way; the only utility I'm getting out of believing this is to ward off any troubling thoughts about the role of consciousness in the world. I am not a consciousness researcher, and I do not intend to become one, so thoughts like those are less than productive for me.

Not that chatting about it here is any less a waste of time, of course; I've long since come to regard Extropy-Chat as purely a source of idle entertainment. At least it's better brainfood than TV.

From lacertilian at gmail.com  Fri Feb 19 07:01:38 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Thu, 18 Feb 2010 23:01:38 -0800
Subject: [ExI] Semiotics and Computability
In-Reply-To: <794676.58785.qm@web36502.mail.mud.yahoo.com>
References: <794676.58785.qm@web36502.mail.mud.yahoo.com>
Message-ID: 

Gordon Swobe :
> As if I had a "being with godlike intelligence in my head who makes the neurons fire"? Honestly, Stathis, I have no idea what you're talking about.

It boggles my mind that you don't understand what he's talking about yet. He's mentioned this to you several times; I'll call it Stathis' daemon. It's quite late and I don't have time to reiterate it completely, but it involves turning off the ability of your neurons to fire in response to other firings. The daemon does that for you instead, manually operating the electrical patterns of your brain so that they match a programmatic simulation (which, of course, is internal to the daemon).

Gordon Swobe :
> The CRA thought experiment involves *you the reader* imagining *yourself* in the room (or as the room) using *your* mind to attempt to understand the Chinese symbols.

No one cares! Do *you* understand *why* no one cares?
I will give you a hint: it isn't because everyone who isn't Gordon Swobe is an ignorant fool. I don't think lack of comprehension is the problem either, and I severely doubt that you can chalk up all of the resistance you're getting to the willful rejection of inconvenient ideas.

Mostly what we're looking at here is bad communication and debilitating frustration. It's a vicious cycle. The more frustrated people get, the worse they communicate; the worse they communicate, the more frustrating things become.

Gordon Swobe :
> Nobody wants to know about strange speculations of *something else* in or about your brain that might understand the symbols when you don't understand them. I mentioned the pink unicorns the other day for that reason. If mysterious pink unicorns in some mysterious place understand the symbols, but you have no access to their understanding, then Searle still got it right.

EVERYONE wants to know about those strange speculations! That's the entire point of the Chinese room!

Has anyone yet addressed the very obvious point that there is someone talking to the Chinese room from outside of it? It's blatantly obvious that that person is talking to *something*. The question is: does that something understand Chinese? If so, does it have consciousness?

If you simply jump straight to the conclusion that the someone is talking to the man in the room, then it's perfectly clear (to me, at least) that the answer to the first question is no; in which case we never reach the second question. However, it is far from obvious that the man is a participant in the conversation at all. Certainly the man doesn't know that he's talking to anyone, and no one knows that they're talking to the man.

So, if the man is not in the conversation, who or what is communicating the things that he is writing down? If there is such a thing at all, it must be a strange thing indeed: a delocalized virtual entity lacking any physical reality to speak of*. I, for one, do not take for granted that this disqualifies it in any way for the possession of understanding, intelligence, or consciousness.

*Reminiscent of a nolipsistic description of the self, wouldn't you say?

From stathisp at gmail.com  Fri Feb 19 08:17:59 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 19 Feb 2010 19:17:59 +1100
Subject: [ExI] Semiotics and Computability
In-Reply-To: <62c14241002181542q33487b6fg6791e48ba5c853bb@mail.gmail.com>
References: <62c14241002180626m7f05bee0tbccfdb1b5fe442f1@mail.gmail.com> <767720.61984.qm@web36508.mail.mud.yahoo.com> <62c14241002181542q33487b6fg6791e48ba5c853bb@mail.gmail.com>
Message-ID: 

On 19 February 2010 10:42, Mike Dougherty wrote:
> On Thu, Feb 18, 2010 at 4:35 PM, Gordon Swobe wrote:
>> Syntactic order exists in the CR, just as syntactic order exists in your computer. But syntactic order does not give understanding. Think of how the grammatical order of a sentence does not reveal the meanings of the words.
>
> There exist hundreds of writings in ancient languages, many of which have not been deciphered yet. Are you saying that there is no way to ever unravel those patterns of markings to glean insight from ancient writing? Would you have acid-washed the walls of Egypt because the pictures are irrelevant scribbling? Would you have destroyed the Rosetta Stone because it held no purpose after you destroyed the other writings you couldn't understand?

"Rygyiglop" is a word in a language I have just created. Can you translate it into English?
It is in general impossible to decipher an unknown language, no matter how intelligent you are. You can only decipher it in special cases, where you have a translation or partial translation, or where the symbols or words bear some resemblance to a known language or to the objects they represent.

-- 
Stathis Papaioannou

From stathisp at gmail.com  Fri Feb 19 10:51:35 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 19 Feb 2010 21:51:35 +1100
Subject: [ExI] Consciouness and paracrap
In-Reply-To: <244245.17387.qm@web36502.mail.mud.yahoo.com>
References: <244245.17387.qm@web36502.mail.mud.yahoo.com>
Message-ID: 

On 19 February 2010 11:15, Gordon Swobe wrote:
> --- On Wed, 2/17/10, Stathis Papaioannou wrote:
>
>> The thought experiment involves replacing brain components with artificial components that perfectly reproduce the I/O behaviour of the original components, but not the consciousness. Gordon agrees that this is possible. However, he then either claims that the artificial components will not behave the same as the biological components (even though it is an assumption of the experiment that they will) or else says the experiment is ridiculous.
>
> You make what I consider an over-simplification when you assume here, as you do, that the i/o behavior of a brain/neuron is all there is to the brain. To you it just seems "obvious" but I consider it anything but obvious. On your view, artificial neurons stuffed with mashed potatoes and gravy would work just fine provided they had the right i/o's.

The experiment generalises to any scale: replace the cell nucleus, a ribosome, a cubic centimetre of brain tissue with a functionally identical component. It would be hard to do this with mashed potato and gravy, but if all the behaviour of the cell can be described algorithmically, according to Church's thesis it can be modelled by a digital computer. The computer would need sensors and effectors in order to interact with neighbouring brain structures or the rest of the environment.

> It does not even seem to occur to you, for example, that consciousness may involve the electrical signals that travel down the axons internal to the neurons, or involve any number of a million other electrical or chemical processes *internal* to natural neurons.

The idea is to reproduce every detail of the neuron that affects its behaviour as seen by another neuron. If you were to make a robot that passes as human it would not do just to make it look human and have a recording of a human voice: it would have to, for example, move like a human and participate in a conversation like a human. There are two ways in which this could be done. One way is to model the internal workings of a human brain; the other way is to make a detailed model from the observed external behaviour. It would not be easy to do this for the whole brain or for any subset of the brain, but for the purposes of the thought experiment this is not an issue.

Imagine that we are being studied by extremely advanced aliens who have no idea whether we are conscious or not. The aliens scan individual neurons in the hapless human's brain and from this information make little robot neurons which they use to replace the original neurons one by one. There is only one design requirement for the artificial neurons: that they behave just like the biological neurons from the point of view of the remaining biological neurons with which they interact.
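To be concrete about that single design requirement, here is a toy sketch (Python, with invented parameters; real neurons are incomparably more complicated). All a neighbouring neuron ever "sees" is the spike train coming out:

# Toy sketch of a drop-in neuron: neighbours observe only whether it
# spikes in response to its inputs. Any internals -- silicon, clockwork,
# whatever -- that preserve this input/output mapping are, from the
# neighbours' point of view, the same neuron. Parameters are invented.
class ArtificialNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # internal state, invisible outside
        self.threshold = threshold
        self.leak = leak

    def receive(self, weighted_input):
        # Integrate the input, leak a little, fire if over threshold.
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True     # a spike: the only observable output
        return False

neuron = ArtificialNeuron()
print([neuron.receive(0.4) for _ in range(5)])
# [False, False, True, False, False]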
It may be, for example, that the tiny electric field created by an electrical impulse travelling down an axon affects in some subtle way the behaviour of neurons up to a millimetre away. The aliens would figure this out and might decide to reproduce the effect by controlling the current in a solenoid mounted in the centre of each artificial neuron. The important point is that they do not put the solenoid there because it might have something to do with consciousness, since they neither know nor care about our consciousness. They put the solenoid there because otherwise the artificial neuron would behave abnormally, causing the remaining biological neurons to behave abnormally, causing the human to behave abnormally.

Now the question which I have asked several times is this: is it possible for the aliens to make artificial brain components which behave exactly the same as the biological components but lack consciousness? Searle believes it is possible but I still don't know what you believe. You say that it is possible, but then you claim (I think) that the brain would start behaving abnormally if the artificial components are installed, which could only happen if these components behave abnormally. Could you please clarify your position?

> When pressed you say that your argument applies to the whole brain and not only to individual neurons, so let's take a look at that:
>
> Let us say that we created an artificial brain that contained a cubic foot of warm leftover mashed potatoes and gravy. Only the neurons on the exterior exist, but they have the i/o's of the exterior neurons of a natural brain, so the brain as a whole has the same i/o behavior as a natural brain.
>
> Would your mister potato-head have consciousness? After all, it has the same i/o's as a natural brain, and you think nothing else matters.

It would have to be very special mashed potato and gravy because it would have to do enough processing to sustain normal intelligence, but if it did have this property then yes, Mr. Potato Head would have normal consciousness.

This may seem counter-intuitive, which is why in all my posts I have started by assuming that you are right and the artificial components would have normal behaviour but not consciousness. This assumption leads to the conclusion that it is possible to selectively remove an important aspect of a person's consciousness and not only would he behave as if nothing had changed, he would also not notice that he had become a part zombie. You yourself have said that this is absurd. I agree that it is absurd, which is why I am led to the conclusion that it is *not* possible to create such zombie brain components. Either the artificial components won't work properly and the person's behaviour will change, or they will work properly and the person will have normal consciousness.

-- 
Stathis Papaioannou

From pharos at gmail.com  Fri Feb 19 10:58:53 2010
From: pharos at gmail.com (BillK)
Date: Fri, 19 Feb 2010 10:58:53 +0000
Subject: [ExI] Financial Crisis - What really happened
Message-ID: 

Rolling Stone magazine has published Matt Taibbi's latest rant against the Wall Street takeover of the US economy.

Even if you feel like nit-picking some of the detail, the overall broad-sweep explanation of what Wall Street has done is remarkably clear. It explains why Main Street USA is broke, unemployed and homeless while Wall Street is paying unbelievable sums of money in bonuses as though the crash never happened.
Some quotes:

the financial crisis of 2008 was very much caused by a perverse series of legal incentives that often made failed investments worth more than thriving ones. Our economy was like a town where everyone has juicy insurance policies on their neighbors' cars and houses. In such a town, the driving will be suspiciously bad, and there will be a lot of fires.

------------------------

In fact, the Fed became not just a source of emergency borrowing that enabled Goldman and Morgan Stanley to stave off disaster -- it became a source of long-term guaranteed income. Borrowing at zero percent interest, banks like Goldman now had virtually infinite ways to make money. In one of the most common maneuvers, they simply took the money they borrowed from the government at zero percent and lent it back to the government by buying Treasury bills that paid interest of three or four percent. It was basically a license to print money -- no different than attaching an ATM to the side of the Federal Reserve.

------------------------

To sum up, this is what Lloyd Blankfein meant by "performance": Take massive sums of money from the government, sit on it until the government starts printing trillions of dollars in a desperate attempt to restart the economy, buy even more toxic assets to sell back to the government at inflated prices -- and then, when all else fails, start driving us all toward the cliff again with a frank and open endorsement of bubble economics. I mean, shit -- who wouldn't deserve billions in bonuses for doing all that?

------------------------

BillK

From stefano.vaj at gmail.com  Fri Feb 19 12:56:08 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Fri, 19 Feb 2010 13:56:08 +0100
Subject: [ExI] "Transhumanism The Way of the Future" in The Scavenger - Natasha
In-Reply-To: 
References: 
Message-ID: <580930c21002190456le70eaew5409b6f80a5a66b2@mail.gmail.com>

Concise, eloquent, to the point...

2010/2/18 Natasha Vita-More
> My article is now available online in "The Scavenger" under Media & Technology.
> http://www.thescavenger.net/media-a-technology/transhumanism-the-way-of-the-future-98432.html
>
> Natasha Vita-More

-- 
Stefano Vaj

From gts_2000 at yahoo.com  Fri Feb 19 13:19:58 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Fri, 19 Feb 2010 05:19:58 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To: <62c14241002181820k79985471ha1857a566cdd2a89@mail.gmail.com>
Message-ID: <359662.89635.qm@web36501.mail.mud.yahoo.com>

--- On Thu, 2/18/10, Mike Dougherty wrote:

> Again I ask if the man is a metaphor or an actual man.

You should consider him an actual man, even a brilliant man, but one who has no tools available to him other than those specified in the experiment.

> Suppose further that there is some amount of processing ability beyond that required of the mashed-potato driven machine.
> With that pattern recognition the man is able to notice the pattern "squiggle->squoogle", so that he no longer needs to compare the shape to the lookup/transformation rules because he's internalized that knowledge (committed it to memory, established an onboard knowledge base, whatever).

Sure, we've discussed that already. Go ahead and imagine the man as having committed all the rules to memory and optimized his performance. He becomes the entire system.

> Are you going to argue that every action performed by the man in the box is part of the transformation process? Are you taking away his apparent optimization intelligence by scripting his actions? If you remove any responsibility from the man in the box, you move the responsibility to the author of the script. That's programming. I thought you said programming isn't intelligence... If I haven't stated this position clearly, please explain.

The man has responsibility for his own actions, but neither the man nor the programmer can use English semantics to translate the Chinese to English in such a way that the man has access to those translations. In other words we cannot simply hand the man a Chinese/English database. We cannot do this because the point of the experiment is to see if the man or the program he represents can glean semantics from syntactical rules only.
> I hope that answers your question.
>
> I wanted to encourage you to consider the man as literally a man for this reason: the experiment tells us something about how real people think. The man has a normal human brain but he cannot get semantics from syntax.

Maybe semantics can't be gleaned directly from a syntactical rule - but over the course of the man's time in the box, he will observe patterns and hypothesize meaning which can be reinforced by repeated observation. Call it symbol grounding in experience. If you also constrain this man with a memory that only lasts for the duration of a single lookup / IO transform then you are back to a simple state machine - which isn't particularly interesting or worthy of this much discussion.

From bbenzai at yahoo.com Fri Feb 19 14:31:44 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Fri, 19 Feb 2010 06:31:44 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To:
Message-ID: <581023.87170.qm@web113618.mail.gq1.yahoo.com>

Gordon wrote:

> Too bad our man in the room has no understanding of any
> symbols and so no knowledge
> base to build on. He can do no more than follow the
> syntactic instructions in the program:
> if input = "squiggle" then output "squoogle".

'if input = "squiggle" then output "squoogle"' cannot answer random questions on any given topic in a meaningful way. To do that requires understanding of the topic.

The man is irrelevant: either your CR understands Chinese or it doesn't produce any comprehensible output, and the whole argument is invalid. There is no way it can give satisfactory answers to any questions put to it without actually understanding both the language and the subject (and the questions).

If you dispute this, then all you need to do is produce a giant look-up table that can sensibly answer any possible questions on 'The Magnificent Seven', and you'll have proven your point, and we can then take the CRA, and your conclusions from it, seriously.

Ben Zaiboc

From stathisp at gmail.com Fri Feb 19 15:48:32 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sat, 20 Feb 2010 02:48:32 +1100
Subject: [ExI] The alleged existence of consciousness
In-Reply-To: <762702.96421.qm@web36504.mail.mud.yahoo.com>
References: <762702.96421.qm@web36504.mail.mud.yahoo.com>
Message-ID:

On 19 February 2010 09:39, Gordon Swobe wrote:
> --- On Tue, 2/16/10, Stathis Papaioannou wrote:
>
>>> Seems to me consciousness amounts to just another
>>> biological process like digestion...
>>
>> But digestion is nothing over and above the chemical breakdown of food
>> in the gut. Similarly, consciousness is nothing over and above the
>> enacting of intelligent behaviour.
>
> I think we can and will one day create unconscious robots that *act* like they have consciousness. We already have some unconscious robots that act conscious for certain tasks. Do you really disagree with this?

The robots we have are probably on a par with animals with very simple nervous systems.

> In any case I want to take a moment to address your claim that "consciousness is a necessary by-product of intelligence".
>
> I notice first that the claim needs some disambiguation. I don't believe evolution scientists use that vocabulary. Generally they classify traits as either adaptive or non-adaptive.
>
> Adaptive traits increase the probability that the host's genes will propagate. Eyesight serves as a good example of an adaptive trait.
>
> Non-adaptive traits do not increase the probability that the host's genes will propagate.
The red color of human blood serves as a good example of a non-adaptive trait. Blood looks red because hemoglobin contains iron, and hemoglobin carries precious oxygen to the cells. The redness of blood appears only as a "byproduct" of something else that has adaptive value. > > Now then should we consider consciousness an adaptive trait or a non-adaptive trait? I think it clearly qualifies as an adaptive trait, and I think you will agree. Conscious awareness enhances intelligence and gives the organism more flexibility in dealing with multiple stimuli simultaneously. > > As evidence of this we need only look at nature: conscious organisms like humans exhibit more complex and intelligent behaviors than do unconscious organisms like plants and microbes. Philosophical zombies by definition exhibit the same complex behaviour as conscious beings. If nature could have produced zombies then why aren't we zombies? It can't be that zombies take more effort to make, since if the patterns of neuronal firing are reproduced in a different substrate that would reproduce the behaviour but, you claim, not necessarily the consciousness. So the brain could have evolved similarly to the way it actually did, but without the added complication of consciousness. It is difficult to imagine that something as elaborate and non-adaptive as consciousness could have evolved if there were so many other pathways to the same end without consciousness. The best explanation is that the brain we happen to have ended up with is not specially blessed, and any other brain based on similar patterns resulting in similar behaviour would have also had a similar consciousness. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 19 16:04:48 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 20 Feb 2010 03:04:48 +1100 Subject: [ExI] Consciouness and paracrap In-Reply-To: <625897.99325.qm@web36501.mail.mud.yahoo.com> References: <625897.99325.qm@web36501.mail.mud.yahoo.com> Message-ID: On 20 February 2010 00:38, Gordon Swobe wrote: > --- On Fri, 2/19/10, Stathis Papaioannou wrote: > >>> Would your mister potato-head have consciousness? >>> After all it has the same i/o's of a natural brain and you >>> think nothing else matters. >> >> It would have to be very special mashed potato and gravy >> because it would have to do enough processing to sustain normal >> intelligence, but if it did have this property then yes, Mr. Potato Head >> would have normal consciousness. > > Nothing special about my mashed potatoes and gravy. In fact I made the potatoes with that dry-powdered potato mix -- not even with fresh potatoes -- and I got the gravy from Colonel Saunders. > > The brain has no internal neurons but the artificial neurons on the perimeter have the right i/o's. Conscious? or not? The surface neurons would not be receiving the appropriate inputs from the mashed potato and gravy, so they would not behave normally. Also, there would not be enough surface neurons to interface with sensory and effector organs, so there would not be any interaction with the environment. You would end up with an inert lump, neither conscious nor intelligent. 
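To put the talk of "the right i/o's" in concrete terms, here is a toy sketch of the input/output behaviour at issue -- a leaky integrate-and-fire neuron in Python, with made-up parameters, not a model of real neurons. Everything the rest of a network can ever see of such a unit is its mapping from input currents to output spikes; feed it nothing, as with the surface neurons above, and it simply falls silent:

def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Output spike train of a toy leaky integrate-and-fire neuron."""
    v = 0.0                       # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current    # integrate the input, with leak
        if v >= threshold:        # fire and reset on crossing threshold
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.6, 0.2, 0.9, 0.1, 0.8]))  # [0, 1, 0, 1, 0, 0]
print(lif_spikes([0.0] * 6))                       # [0, 0, 0, 0, 0, 0] -- no input, no activity

On the view being argued here, anything that preserved this mapping -- silicon, gears, potatoes -- would be interchangeable as a component.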
-- Stathis Papaioannou

From painlord2k at libero.it Fri Feb 19 16:31:29 2010
From: painlord2k at libero.it (Mirco Romanato)
Date: Fri, 19 Feb 2010 17:31:29 +0100
Subject: [ExI] Financial Crisis - What really happened
In-Reply-To:
References:
Message-ID: <4B7EBCE1.40300@libero.it>

On 19/02/2010 11.58, BillK wrote:
> Rolling Stone magazine has published Matt Taibbi's latest rant against
> the Wall Street takeover of the US economy. Even if you feel like
> nit-picking on some of the detail, the overall broad sweep explanation
> of what Wall Street has done is remarkably clear. It explains why
> Main Street USA is broke, unemployed and homeless while Wall Street is
> paying unbelievable sums of money in bonuses as though the crash never
> happened.

The solution to this is to move from an economy where banks and governments can print paper money to an economy where the money is representative of something, like gold and silver (or whatever the market prefers). I would like to see Ben Bernanke or any other Fed banker try to conjure gold and silver from thin air to bail out Goldman Sachs, AIG, Freddie Mac, Fannie Mae and the others.

When they short-circuited the feedback between money lent and money saved, printing money and giving it to bankers to loan out and stimulate the economy, did these geniuses never think of the consequences? Austrian economics and economists keep repeating the same arguments because this simply kicks the can down the road and makes it bigger.

Do you want a government that interferes with the economy? Then don't complain when it always screws the economy up. It is not as though you were not warned about what would happen.

Now the governments are all trapped in a debt trap. With low interest rates (now zero), they are forcing the economy to stagnate, as people have no reason to save and every reason to take on debt. And with higher interest rates the governments would be unable to pay the interest on their debts without raising taxes and killing the economy. This can be solved only by defaulting, by inflating the debt away and paying the creditors with printed paper, or by taxing the people and cutting welfare. Or all of them.

But, hey, the motives of the politicians were good; they wanted people to own their homes, to stimulate economic growth, to have their cake and eat it. If you don't change the politicians, there is no way you will change their policies. No surprise you will get more of the same.

Mirco

From pharos at gmail.com Fri Feb 19 16:58:34 2010
From: pharos at gmail.com (BillK)
Date: Fri, 19 Feb 2010 16:58:34 +0000
Subject: [ExI] Financial Crisis - What really happened
In-Reply-To: <4B7EBCE1.40300@libero.it>
References: <4B7EBCE1.40300@libero.it>
Message-ID:

2010/2/19 Mirco Romanato wrote:
> The solution to this is to move from an economy where banks and governments
> can print paper money to an economy where the money is representative of
> something, like gold and silver (or whatever the market prefers).
> I would like to see Ben Bernanke or any other Fed banker try to conjure gold
> and silver from thin air to bail out Goldman Sachs, AIG, Freddie Mac,
> Fannie Mae and the others.
>
> When they short-circuited the feedback between money lent and money saved,
> printing money and giving it to bankers to loan out and stimulate the
> economy, did these geniuses never think of the consequences?
> Austrian economics and economists keep repeating the same arguments because
> this simply kicks the can down the road and makes it bigger.
>
> Do you want a government that interferes with the economy?
> Then don't complain when it always screws the economy up.
> It is not as though you were not warned about what would happen.
>
> Now the governments are all trapped in a debt trap.
> With low interest rates (now zero), they are forcing the economy to
> stagnate, as people have no reason to save and every reason to take on
> debt.
> And with higher interest rates the governments would be unable to pay the
> interest on their debts without raising taxes and killing the economy.
> This can be solved only by defaulting, by inflating the debt away and paying
> the creditors with printed paper, or by taxing the people and cutting welfare.
> Or all of them.
>
> But, hey, the motives of the politicians were good; they wanted people to
> own their homes, to stimulate economic growth, to have their cake and
> eat it. If you don't change the politicians, there is no way you will change
> their policies. No surprise you will get more of the same.
>

I agree with most of your comments, though I doubt that a return to the gold standard will solve all our problems. It would probably create just as many different problems.

Although the government must accept a lot of the blame, it doesn't let Wall Street off the hook. As Matt Taibbi says:

It isn't so much that we have inadequate rules or incompetent regulators, although both of these things are certainly true. The real problem is that it doesn't matter what regulations are in place if the people running the economy are rip-off artists. The system assumes a certain minimum level of ethical behavior and civic instinct over and above what is spelled out by the regulations. If those ethics are absent -- well, this thing isn't going to work, no matter what we do. Sure, mugging old ladies is against the law, but it's also easy. To prevent it, we depend, for the most part, not on cops but on people making the conscious decision not to do it.
-----------------

I think we are talking about the breakdown of society here. If the cops have disappeared (because of 'too big to fail') and Wall Street has no ethical values left, then regulations and laws don't matter. You will end up with lots more cases like that man who became so desperate that he flew his plane into the IRS offices in Austin, Texas.

BillK

From Frankmac at ripco.com Fri Feb 19 17:52:47 2010
From: Frankmac at ripco.com (Frank McElligott)
Date: Fri, 19 Feb 2010 12:52:47 -0500
Subject: [ExI] Financial Crisis
Message-ID: <004801cab18c$5d13b1b0$ad753644@sx28047db9d36c>

For a complete discussion of the past financial crisis one must read pages 199 to the end of the recent book "THIS TIME IT IS DIFFERENT" by REINHART and ROGOFF. Great read, and if the government does not stop spending your money the crisis will rise from the grave and spit fire and brimstone at you again.

First, CDOs and CMOs have been around since the early 80's of the last century. It was not their fault that the crisis began; it was the PhDs, who used faulty REASONING, who created this crisis.
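A toy version of that arithmetic, in Python, makes the point concrete. It is a one-period sketch using the figures cited below in this post (an 8% premium, a modeled 1-in-200 default chance, a realized default rate of 40%, and 500 billion of protection written); the zero-recovery assumption and the idea that the realized mortgage default rate passes straight through to the insured paper are my own simplifications, not anything from the thread:

notional = 500e9   # $500 billion of protection written, as cited below
premium = 0.08     # 8% premium income on the notional
recovery = 0.0     # assume total loss on default (a simplification)

for p_default in (1 / 200, 0.40):   # the modeled odds vs. the realized rate
    expected_payout = p_default * notional * (1.0 - recovery)
    expected_pnl = premium * notional - expected_payout
    print(f"default prob {p_default:.1%}: expected P&L {expected_pnl / 1e9:+,.1f} billion")

At the modeled 1-in-200 rate the trade looks like 37.5 billion of nearly free money; at the realized rate the same book is roughly 160 billion under water. That swing is the whole story of the flawed inputs.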
When you create computer models which use historical information that is flawed (for example, sub-prime mortgages assumed to default at a rate of 7% on the high end), and instead of 7% the rate goes to 40%, you have a problem.

Second, credit swaps (insurance) based again on these wrong numbers were easy money at AIG. If you were in their shoes it looked like a no-brainer chance of a lifetime. The chance of AAA bonds (tranches of CMOs and CDOs) going into default was put at 1 in 200, and by writing insurance (credit swaps) against that number and getting an 8% premium, would you not take those odds? And they did, to the tune of 500 billion.

The people who bought the insurance were Goldman and other Wall Street firms. So when Lehman Bros, after 150 years in business, went belly up, AIG was on the hook: the AAA bonds (Lehman Bros) defaulted, and they too were leveraged to the hilt (they had sold their insurance policies from AIG to other banks, who also thought it was easy money without risk). Well, when AIG threatened to default, this house of cards would have collapsed, as AIG did not have the money (500 billion) to pay off its credit swaps (insurance) on Lehman Bros and the others. If AIG went down, Goldman would have gone down, and most of the core banking industry would have failed.

It had its greed factor, true, but it was just a case of bad numbers sold to a group of people who did not understand the risks. If these PhDs had found jobs teaching instead of on Wall Street, there probably would still have been a crisis, though a lot smaller. It was all OK with the laws as they were, and still none were broken. Can't break what does not exist.

What would make these PhDs use wrong numbers? Again, because they were taught that method at major colleges in this country. They believed in their numbers, but they did not take into account what is now referred to as a NINJA loan, brokered by some high school graduates in the real world who found a gold mine in sub-prime loans :) Taleb calls it a "Black Swan event"; I call it "too smart for their own good".

By the by, Wall Street bonuses were OK with me. They figured out that with the Gov't backing everything and printing money like it was some game being played, and they have re-inflated the stock market by 4000 points in the greatest bull run since 1930. In May of last year, at an inflection point, I asked on this list whether this was a bear market rally or a bull rally. At that time I picked the former, and have watched the market as it rose and rose. It was the safe choice, as I am old and can't afford to lose, but on Wall Street the pros picked the latter and now receive those wonderful bonuses for being right. They used our money, true, but there is no law against that, is there?

What smells is that H. Paulson, former CEO of Goldman, brokered the deal which saved Goldman and the economy. Was he saving us, or Goldman?

One person makes a mess, but it takes a PhD to make a crisis :)

Frank

From jonkc at bellsouth.net Fri Feb 19 17:48:48 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 19 Feb 2010 12:48:48 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <767720.61984.qm@web36508.mail.mud.yahoo.com>
References: <767720.61984.qm@web36508.mail.mud.yahoo.com>
Message-ID: <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net>

Since my last post Gordon Swobe has posted 9 times.
> The CRA thought experiment involves *you the reader* imagining *yourself* in the room (or as the room) using *your* mind to attempt to understand the Chinese symbols.

As is Swobe's habit he is wrong yet again. The Chinese room experiment asks you to imagine yourself as a mechanical relay, that's it. However Swobe is right about one thing: a relay is not conscious. Probably.

> Conscious awareness

As opposed to unconscious awareness.

> enhances intelligence and gives the organism more flexibility in dealing with multiple stimuli simultaneously.

Consciousness enhances intelligence and changes behavior but the Turing Test cannot detect even a whiff of it. Swobe does not see this idea as being world class stupid. I do.

> As evidence of this we need only look at nature: conscious organisms like humans exhibit more complex and intelligent behaviors than do unconscious organisms like plants and microbes.

This is a very rare occasion where, incredible as it sounds, Swobe is actually correct. Another way to express Swobe's words quoted above is to say "The Turing Test works".

> you assume here, as you do, that the i/o behavior of a brain/neuron is all there is to the brain. [...]
> consciousness may involve the electrical signals that travel down the axons internal to the neurons

Swobe is always keen to tell us that nobody including him has any idea what causes consciousness, so it is equally likely that consciousness may involve the size of one's foot; after all, the only being Swobe knows with certainty to be conscious has one particular shoe size. I am not trying to be funny: it's easy to demonstrate that the brain and neurons have something to do with intelligence, but if, as Swobe believes, that has nothing to do with consciousness, then the organ that is the seat of awareness is anybody's guess. The foot is as good a guess as any other.

> Let us say that we created an artificial brain that contained a cubic foot of warm leftover mashed potatoes and gravy[...] Would your mister potato-head have consciousness?

Swobe says he loves the Chinese room crapola because it can objectively determine what is conscious and what is not, and yet when he tries to defend this ridiculous idea he repeatedly dreams up intelligent things that are "obviously" not conscious, such as a computer made of toilet paper and now one made of mashed potatoes and gravy. But if all of this is obvious Swobe does not make it clear what in hell the point of the Chinese room thought experiment is. And Swobe may be interested to know that his brain is in fact the product of last year's mashed potatoes and gravy; it's just a question of rearranging the atoms in a programmable way. DNA does exactly that.

> I think we can and will one day create unconscious robots that *act* like they have consciousness.

Swobe thinks humans can make an environment that produces a being that acts like he's conscious, but only God [or various euphemisms of that word] can create an environment that makes the real deal. I disagree.

> You should consider him [ the Chinese room dude] an actual man [...] I wanted to encourage you to consider the man as literally a man

Swobe says we should consider the Chinese room fellow as literally a man, a man who can live for many trillions of years and "internalize" that book of instructions, an actual man who can memorize a document far larger than the observable universe. I say that remark is idiotic. Does anyone care to dispute my criticism?
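For what it's worth, "far larger than the observable universe" is not hyperbole; a few lines of arithmetic give a crude lower bound on a rulebook that maps whole conversation histories to replies. The alphabet size and conversation length here are arbitrary assumptions of mine, not anything from Searle:

ALPHABET = 3000   # assume a few thousand distinct Chinese characters
TURN_LEN = 20     # assume each utterance runs 20 characters
TURNS = 10        # a ten-exchange conversation

histories = ALPHABET ** (TURN_LEN * TURNS)   # distinct possible histories
print(f"rulebook entries needed: about 10^{len(str(histories)) - 1}")
# -> about 10^695, against roughly 10^80 atoms in the observable universe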
> our man in the room has no understanding of any symbols and so no knowledge base to build on.

Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago; think of the time we could have saved. Oh wait he did.

> He can do no more than follow the syntactic instructions in the program: if input = "squiggle" then output "squoogle".

Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago; think of the time we could have saved. Oh wait he did.

> syntactic order does not give understanding.

Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago; think of the time we could have saved. Oh wait he did.

> formal syntax does not give semantics

Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago; think of the time we could have saved. Oh wait he did.

John K Clark

From lacertilian at gmail.com Fri Feb 19 18:46:55 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Fri, 19 Feb 2010 10:46:55 -0800
Subject: [ExI] Semiotics and Computability
In-Reply-To: <581023.87170.qm@web113618.mail.gq1.yahoo.com>
References: <581023.87170.qm@web113618.mail.gq1.yahoo.com>
Message-ID:

Ben Zaiboc :
> 'if input = "squiggle" then output "squoogle"' cannot answer random questions on any given topic in a meaningful way.

I pointed this out a while ago, in a much less concise way. In fact, it was in the very first post of this thread.

http://lists.extropy.org/pipermail/extropy-chat/2010-January/055668.html

No one seemed to care, back then. Perhaps you'll have better luck.

From jonkc at bellsouth.net Fri Feb 19 18:31:30 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 19 Feb 2010 13:31:30 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To:
References: <597645.99584.qm@web36501.mail.mud.yahoo.com> <1A0CA64B-57C4-4B7E-B68D-53920CB39A35@bellsouth.net>
Message-ID:

Me:
>> I'm also a little curious why Swobe takes at face value a report from a
>> human being that he has subjective experience but if a robot, regardless of
>> how intelligent, reports the same thing Swobe is certain he is lying.

Spencer Campbell:
> Because human beings have brains!

What do brains have to do with the price of eggs? It's easy to demonstrate that brains have something to do with intelligence, but if Swobe is correct and that's unrelated to consciousness, then for all we know the big toe is the key organ of consciousness.

> No. It's because Gordon is a human, and Gordon can detect his own
> consciousness, and so Gordon assumes that other humans can do the
> same.

But he is a male; perhaps females are not conscious and just act like they are. He is a member of a particular race; perhaps members of other races have no feelings and just act like they do. If you don't make it an axiom that intelligent behavior and consciousness are two sides of the same thing then anything is possible. And even Swobe doesn't think other humans are conscious when they are sleeping or dead. Why? Because they don't act like they are.

> It's a reasonable assumption, even if it does vaccinate him
> against a whole class of extremely relevant thought.

You can say that again!

> If you must keep picking on him [...]
I'm not picking on him; he's a grown man, I presume, and if he wishes to counter my criticisms he is free to do so. But if he persists in sending post after post after post full of rubbish he's got to expect to get some heat.

John K Clark

From jonkc at bellsouth.net Fri Feb 19 18:39:59 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 19 Feb 2010 13:39:59 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <862214.84724.qm@web111207.mail.gq1.yahoo.com>
References: <597645.99584.qm@web36501.mail.mud.yahoo.com> <1A0CA64B-57C4-4B7E-B68D-53920CB39A35@bellsouth.net> <862214.84724.qm@web111207.mail.gq1.yahoo.com>
Message-ID: <4B084CA8-A2DD-459F-AE5B-55C7D31A3240@bellsouth.net>

On Feb 18, 2010, Christopher Luebcke wrote:

> Certainty that the human's report is accurate does not provide the basis for certainty that the robot's report is not.

I am certain that when other humans tell me they are conscious they are telling the truth, and I'm probably correct too.

> This is crucial to the point--while it may be said that a functioning organic brain and nervous system is sufficient for consciousness, it has not at all been shown that it is necessary.

A functioning organic brain and nervous system is sufficient for intelligence, but if Swobe is correct there is absolutely no reason to think the brain has anything to do with consciousness. The Egyptians carefully preserved every part of the body EXCEPT for the brain; maybe they were right.

John K Clark

From steinberg.will at gmail.com Fri Feb 19 21:06:27 2010
From: steinberg.will at gmail.com (Will Steinberg)
Date: Fri, 19 Feb 2010 16:06:27 -0500
Subject: [ExI] I have an elementary proof by contradiction of Fermat's last theorem for n=3. What do I do?
Message-ID: <4e3a29501002191306l21fc2a31je4e41c92995de7dd@mail.gmail.com>

Hey ExI, today I was working on some Fermat and, miraculously, I seem to have channeled the dead man himself for a proof by contradiction of his Last Theorem for n=3; what's more, I think, given the nature of the proof, that I may be able to extend it to all n. Naturally, I flipped out and LaTeXed it up into a neat little document. My question is: being seventeen years old, with no real connections to the world of higher academia, how, if this is right, do I go about showing it to people who can do something about it?

Another question: should I post it to ExI? I trust the list, but there are probably many lurkers who merely receive the emails, any of them a potential proof-thief. In addition, the margins here are too narrow to contain it. Heh. But really, if any of you are interested/proficient in math and are not bastard thieves, I would be happy to email it to you for checking. But yes...what now?

From thespike at satx.rr.com Fri Feb 19 22:58:48 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 19 Feb 2010 16:58:48 -0600
Subject: [ExI] Financial Crisis - What really happened
In-Reply-To:
References:
Message-ID: <4B7F17A8.4040200@satx.rr.com>

On 2/19/2010 4:58 AM, BillK wrote:
>
Excellent piece!
From stathisp at gmail.com Sat Feb 20 02:25:15 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 20 Feb 2010 13:25:15 +1100 Subject: [ExI] Semiotics and Computability In-Reply-To: <794676.58785.qm@web36502.mail.mud.yahoo.com> References: <794676.58785.qm@web36502.mail.mud.yahoo.com> Message-ID: On 19 February 2010 12:41, Gordon Swobe wrote: > --- On Thu, 2/18/10, Stathis Papaioannou wrote: > >> Or 3) implementing programs leads to understanding. >> >> It seems that you just can't get past the very obvious >> point that although the man has no understanding of language, he is >> just a trivial part of the system, even if he internalises all the >> components of the system. His intelligence is in fact mostly >> superfluous. What he does is something a punchcard machine could do. >> In fact, the same could be said of the intelligence of the man with >> respect to knowledge of Chinese: it isn't a part of his cognitive >> competence, not even as zombie intelligence. It's as if you had a being >> of godlike intelligence (and consciousness) in your head whose only >> job was to make the neurons fire in the correct sequence. Do you see >> that such a being would not necessarily know anything about what you >> were thinking about, and you would not necessarily know anything about >> what it was thinking about? > > As if I had a "being with godlike intelligence in my head who makes the neurons fire"? Honestly Stathis I have no idea what you're talking about. > > The CRA thought experiment involves *you the reader* imagining *yourself* in the room (or as the room) using *your* mind to attempt to understand the Chinese symbols. > > Nobody wants to know about strange speculations of *something else* in or about your brain that might understand the symbols when you don't understand them. I mentioned the pink unicorns the other day for that reason. If mysterious pink unicorns in some mysterious place understand the symbols, but you have no access to their understanding, then Searle still got it right. I am trying to show you that the fact that a system has an intelligence that understands only the low level processes does not preclude the existence of another intelligence that has higher level understanding. The brain is exactly that sort of system, except that the neurons are much dumber than a man. To even up the competition I propose making the neurons much smarter. Here is what you claim from the CRA: The man in the room has an understanding of the low level processes but not of Chinese, even though the room speaks Chinese. Therefore, the Chinese-speaking room has no understanding of Chinese. Here is my analogous claim: if your brain contained a super-intelligent being that made the neurons fire in the appropriate order it would have an understanding of the low level brain processes but not of English, even though you speak English. Therefore, you wouldn't really understand English. If the latter experiment is silly, then the CRA is also silly. However, both experiments are logically possible, which is what we are interested in. 
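The point about stacked intelligences that don't communicate has a familiar toy analogue in Conway's Game of Life: the update rule "knows" only local neighbour counts, nothing about gliders, yet at a higher level of description a glider really does crawl across the grid. A minimal sketch, offered only as an illustration of levels of description:

from collections import Counter

def step(live):
    """One generation of Life; `live` is a set of (x, y) cells."""
    # The rule consults nothing but each cell's neighbour count.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):    # four generations later...
    cells = step(cells)
# ...the glider has moved one cell diagonally, though no part of the
# rule ever referred to it:
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True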
-- Stathis Papaioannou From msd001 at gmail.com Sat Feb 20 04:32:57 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 19 Feb 2010 23:32:57 -0500 Subject: [ExI] Semiotics and Computability In-Reply-To: References: <794676.58785.qm@web36502.mail.mud.yahoo.com> Message-ID: <62c14241002192032l69df5191h32b84888fc94943f@mail.gmail.com> On Fri, Feb 19, 2010 at 9:25 PM, Stathis Papaioannou wrote: > I am trying to show you that the fact that a system has an > intelligence that understands only the low level processes does not > preclude the existence of another intelligence that has higher level > understanding. The brain is exactly that sort of system, except that > the neurons are much dumber than a man. To even up the competition I > propose making the neurons much smarter. An important point. > Here is my analogous claim: if your brain contained a > super-intelligent being that made the neurons fire in the appropriate > order it would have an understanding of the low level brain processes > but not of English, even though you speak English. Therefore, you > wouldn't really understand English. I tried to make a similar analogy about the working of hydrogen atoms in a glass of water or the sun's fusion - or for the less chemistry and physics inclined - the blue dot in a pointillist painting. At the lowest level of "what it is" there is very little apparent relationship to "what it does" or "how it works." I think organizational levels are sometimes functionally distinct for a reason. From msd001 at gmail.com Sat Feb 20 04:47:57 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 19 Feb 2010 23:47:57 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net> References: <767720.61984.qm@web36508.mail.mud.yahoo.com> <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net> Message-ID: <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com> 2010/2/19 John Clark : > Since my last post?Gordon Swobe has posted 9 times. why do you even count? > And Swobe may be interested to know that his brain is in fact the product of > last years?mashed potatoes and gravy, it's just a question of rearranging > the atoms in a programable way. DNA does exactly that. O.M.G. - so what you're saying is that it's not the mashed potatoes and gravy that has the consciousness, but that the thing that does have consciousness is powered by mashed potatoes and gravy. If we can find a way to scale that up, it could replace data centers all over the country. We could just shovel potatoes and gravy into the hopper at the top and get useful computation out. That's fantastic. I nominate JKC for some kind of prize. :) > Wow, now I see the error of my ways! It's a pity Swobe didn't say that two > months and several hundred posts ago, think of the time we could have saved. > Oh wait he did. (repeat 4x) Now you're just being mean. Oh wait, that's how you roll - carry on... 
From steinberg.will at gmail.com Sat Feb 20 04:59:04 2010
From: steinberg.will at gmail.com (Will Steinberg)
Date: Fri, 19 Feb 2010 23:59:04 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com>
References: <767720.61984.qm@web36508.mail.mud.yahoo.com> <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net> <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com>
Message-ID: <4e3a29501002192059q2903d707oe2416bb53154c9f7@mail.gmail.com>

What has consistently boggled my mind is the fact that THE CRA, EVEN IN THEORY, WILL NOT EVEN MIMIC CONSCIOUSNESS.

Upon closer examination, the idea of a "rulebook for responses" is bogus and impossible. Consciousness necessitates NON-BIJECTIVE, CHANGING RESPONSES, ABSOLUTELY NOT NOT NOT SQUIGGLE FOR SQUAGGLE, or any of this, if I may pull a Clark, BULLSHIT.

The analogy is wrong because it assumes an untruth and hides it behind the neat idea of this rulebook. Please stop using it to back up anything. All it proves is that a super-ELIZA is not conscious. We know this. The rulebook machine is not conscious. Humans are not rulebook machines. No comparing.

From stathisp at gmail.com Sat Feb 20 05:25:16 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sat, 20 Feb 2010 16:25:16 +1100
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <4e3a29501002192059q2903d707oe2416bb53154c9f7@mail.gmail.com>
References: <767720.61984.qm@web36508.mail.mud.yahoo.com> <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net> <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com> <4e3a29501002192059q2903d707oe2416bb53154c9f7@mail.gmail.com>
Message-ID:

2010/2/20 Will Steinberg :
> What has consistently boggled my mind is the fact that THE CRA, EVEN IN
> THEORY, WILL NOT EVEN MIMIC CONSCIOUSNESS.
> Upon closer examination, the idea of a "rulebook for responses" is bogus and
> impossible. Consciousness necessitates NON-BIJECTIVE, CHANGING RESPONSES,
> ABSOLUTELY NOT NOT NOT SQUIGGLE FOR SQUAGGLE, or any of this, if I may pull
> a Clark, BULLSHIT.
> The analogy is wrong because it assumes an untruth and hides it behind the
> neat idea of this rulebook. Please stop using it to back up anything. All
> it proves is that a super-ELIZA is not conscious. We know this. The
> rulebook machine is not conscious. Humans are not rulebook machines. No
> comparing.

I don't see the CRA as necessarily equivalent to a Giant Look-Up Table (GLUT). It could instead run a program that speaks Chinese, functioning as an extremely slow digital computer. Having said that, if a GLUT is good enough to behave intelligently I don't see why it should not also be conscious.

-- Stathis Papaioannou

From steinberg.will at gmail.com Sat Feb 20 06:19:35 2010
From: steinberg.will at gmail.com (Will Steinberg)
Date: Sat, 20 Feb 2010 01:19:35 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To:
References: <767720.61984.qm@web36508.mail.mud.yahoo.com> <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net> <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com> <4e3a29501002192059q2903d707oe2416bb53154c9f7@mail.gmail.com>
Message-ID: <4e3a29501002192219w2f26f962w69f16a0c80c2e2e2@mail.gmail.com>

Given Swobe's previous quote on squiggling and squoggling, it seems that this GLUT is exactly what he considers the CRA to be.
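For concreteness, a static GLUT in the squiggle/squoggle spirit is only a few lines of code. The table entries here are invented; the point is that the table is silent about anything its author never anticipated:

RULEBOOK = {
    "squiggle": "squoggle",
    "squoggle": "squiggle squiggle",
}

def room(symbol):
    # The "man" just matches shapes against the book; there is no
    # state, no learning, and no meaning anywhere in the process.
    return RULEBOOK.get(symbol, "<no rule>")

print(room("squiggle"))   # squoggle
print(room("hello"))      # <no rule>

A table keyed on entire conversation histories instead of single symbols is the same kind of object, only astronomically larger.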
From bbenzai at yahoo.com Sat Feb 20 11:09:13 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Sat, 20 Feb 2010 03:09:13 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To:
Message-ID: <976791.44997.qm@web113610.mail.gq1.yahoo.com>

Here's something I stumbled across that might throw more light on the subject (or maybe I should say more petrol on the fire):

http://www.alphadictionary.com/blog/?p=209

This is telling me that semantics is just another set of rules. These rules apply to the symbols derived from our sensory inputs in a more direct way than the rules of syntax, which apply to the more abstract symbols we use for language processing. In this view, Gordon may actually be right in saying that syntax alone is not sufficient for understanding. You also need a set of semantic rules to make sense of your experience. Of course, there's nothing magical or mysterious about it, just another set of rules, just as amenable to being implemented in any general-purpose computing substrate as are the syntactical rules.

Who knows, there may be other factors that are necessary too. Other kinds of information processing that we haven't yet discovered in the brain.

So we have:
Sensory Inputs (including sensorimotor feedback loops)
Semantic processing
Syntactical processing
Other stuff I don't know much about, that's involved in language processing
And possibly other kinds of processing nobody yet knows about.

But it's all data processing, all the way, and it *has to be that way* (because the universe is only made of 3 kinds of thing, etc., etc.). So if we make something that can do general data-processing, we can do all those things. The very same things that we do in our heads.

Hm, something that can do general data-processing...

I suspect it would be something with lots and lots of brass gear wheels, springs, rods...

Ben Zaiboc

From msd001 at gmail.com Sat Feb 20 14:24:21 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Sat, 20 Feb 2010 09:24:21 -0500
Subject: [ExI] Semiotics and Computability
In-Reply-To: <976791.44997.qm@web113610.mail.gq1.yahoo.com>
References: <976791.44997.qm@web113610.mail.gq1.yahoo.com>
Message-ID: <62c14241002200624u2d2c8854mbaa094759ef36ac8@mail.gmail.com>

On Sat, Feb 20, 2010 at 6:09 AM, Ben Zaiboc wrote:
> Hm, something that can do general data-processing...
>
> I suspect it would be something with lots and lots of brass gear wheels, springs, rods...

and mashed potatoes

From jrd1415 at gmail.com Sat Feb 20 15:45:30 2010
From: jrd1415 at gmail.com (Jeff Davis)
Date: Sat, 20 Feb 2010 08:45:30 -0700
Subject: [ExI] Financial Crisis - What really happened
In-Reply-To:
References:
Message-ID:

I have come to the conclusion that the "investment bankers" and other such "money game" professionals study up on all the aspects of the financial regulatory system and then ruthlessly "game" the system. Essentially, they analyze the maze of regulations until they find a way around them, that is, a legal way to "steal". They push that to the limit and beyond. Then boom goes bust; the worst of the offenders, those who in their enthusiasm crossed the line of legality, get arrested; politicians and pundits flap their respective pie holes till interest fades; and then it starts all over again.

At the nexus of capitalism and human nature, the bigger the money pile, the more uncontrollable the greed.
Best, Jeff Davis "It is as morally bad not to care whether a thing is true or not, so long as it makes you feel good, as it is not to care how you got your money as long as you have got it." -Edmund Way Teale, "Circle of the Seasons" From sparge at gmail.com Sat Feb 20 16:20:55 2010 From: sparge at gmail.com (Dave Sill) Date: Sat, 20 Feb 2010 11:20:55 -0500 Subject: [ExI] The alleged existence of consciousness In-Reply-To: References: <762702.96421.qm@web36504.mail.mud.yahoo.com> Message-ID: On Fri, Feb 19, 2010 at 10:48 AM, Stathis Papaioannou wrote: > > Philosophical zombies by definition exhibit the same complex behaviour > as conscious beings. If nature could have produced zombies then why > aren't we zombies? Who knows? If zombism is possible, maybe we just got lucky that we ended up with consciousness. > It can't be that zombies take more effort to make, Sure it could. > since if the patterns of neuronal firing are reproduced in a different > substrate that would reproduce the behaviour but, you claim, not > necessarily the consciousness. That assumes that Swobe's scenario is the only way to achieve zombism. > So the brain could have evolved > similarly to the way it actually did, but without the added > complication of consciousness. What makes you so sure it's a complication? > It is difficult to imagine that > something as elaborate and non-adaptive as consciousness could have > evolved if there were so many other pathways to the same end without > consciousness. We don't know anything about the number of pathways to intelligence with or without consciousness. > The best explanation is that the brain we happen to > have ended up with is not specially blessed, and any other brain based > on similar patterns resulting in similar behaviour would have also had > a similar consciousness. Intuitively, that seems likely. But we just don't know. -Dave From thespike at satx.rr.com Sat Feb 20 16:56:37 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 20 Feb 2010 10:56:37 -0600 Subject: [ExI] Financial Crisis - What really happened In-Reply-To: References: Message-ID: <4B801445.8060408@satx.rr.com> On 2/20/2010 9:45 AM, Jeff Davis wrote: > At the nexus of capitalism and human nature, the bigger the money > pile, the more uncontrollable the greed. Well, let's say: at the nexus of (a) any seriously large aggregation of humans and (b) species small-tribal** nature. (Doesn't roll off the tongue as readily, I know.) **for values of around 50-200 humans Damien Broderick From lacertilian at gmail.com Sat Feb 20 17:33:26 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 20 Feb 2010 09:33:26 -0800 Subject: [ExI] How not to make a thought experiment In-Reply-To: <4e3a29501002192219w2f26f962w69f16a0c80c2e2e2@mail.gmail.com> References: <767720.61984.qm@web36508.mail.mud.yahoo.com> <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net> <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com> <4e3a29501002192059q2903d707oe2416bb53154c9f7@mail.gmail.com> <4e3a29501002192219w2f26f962w69f16a0c80c2e2e2@mail.gmail.com> Message-ID: Will Steinberg : > Given Swobe's previous quote on squiggling and squoggling, it seems that this GLUT is exactly what he considers the CRA to be. I haven't been able to figure that out, myself. It's entirely likely that he considers the specifics of the implementation irrelevant, and so has pointedly refused to give such questions any thought. I would not be surprised to learn that Gordon has never written a computer program in his life. 
Stathis Papaioannou :
> I don't see the CRA as necessarily equivalent to a Giant Look-Up Table
> (GLUT). It could instead run a program that speaks Chinese,
> functioning as an extremely slow digital computer. Having said that,
> if a GLUT is good enough to behave intelligently I don't see why it
> should not also be conscious.

The thing is, it would have to be a self-modifying GLUT. That's a fundamentally different sort of thing, and there is nothing in the CRA to indicate that the man is editing his rulebooks as he goes.

A static GLUT cannot learn, obviously, so it can't be intelligent in the same way that humans are. Maybe in some other, less interesting way. You have to go shockingly far down the phylogenetic tree before learning disappears entirely; it's a pretty basic trait of earthly lifeforms.

From jonkc at bellsouth.net Sat Feb 20 17:52:06 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sat, 20 Feb 2010 12:52:06 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com>
References: <767720.61984.qm@web36508.mail.mud.yahoo.com> <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net> <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com>
Message-ID: <3A86D776-7E5B-49A7-A8CD-B1120994686A@bellsouth.net>

On Feb 19, 2010 Mike Dougherty wrote:

> O.M.G. - so what you're saying is that it's not the mashed potatoes and gravy that has the consciousness

No. Although from an engineering viewpoint mashed potatoes and gravy may not be the ideal material to build a computer out of, theoretically it's possible and theoretically it could even be made intelligent. And of course intelligence is far harder to make than consciousness.

> but that the thing that does have consciousness is powered by mashed potatoes and gravy.

That is not only possible, it happens every day, except that the brain is not just powered but actually constructed out of mashed potatoes and gravy, and it happens by means of a PROGRAM, the very thing so reviled by Swobe.

> If we can find a way to scale that up, it could replace data centers all over
> the country. We could just shovel potatoes and gravy into the hopper
> at the top and get useful computation out.

Exactly.

> That's fantastic. I nominate JKC for some kind of prize.

And you, sir, are a remarkably good judge of character.

John K Clark
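Spencer's static/self-modifying distinction above is easy to state in code: a static table becomes self-modifying the moment the lookup is allowed to write entries back. The update rule here is a deliberately silly stand-in for whatever a learning system actually does:

rulebook = {"squiggle": "squoggle"}

def room(symbol):
    if symbol not in rulebook:
        # Invent-and-record: future behaviour now depends on past inputs.
        rulebook[symbol] = "echo:" + symbol
    return rulebook[symbol]

print(room("hello"))     # echo:hello -- first contact writes a new rule
print(room("hello"))     # echo:hello -- now the book *has* a rule for it
print(len(rulebook))     # 2: the book has grown

Whether any such update rule deserves to be called learning in an interesting sense is, of course, exactly what is in dispute.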
From jonkc at bellsouth.net Sat Feb 20 18:33:30 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 20 Feb 2010 13:33:30 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <767720.61984.qm@web36508.mail.mud.yahoo.com> <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net> <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com> <4e3a29501002192059q2903d707oe2416bb53154c9f7@mail.gmail.com> <4e3a29501002192219w2f26f962w69f16a0c80c2e2e2@mail.gmail.com> Message-ID: Swobe says that regardless of how intelligent it behaves, before we are justified in calling a machine conscious we must understand exactly how consciousness works. Swobe admits he has no understanding of consciousness but that hasn't stopped him from putting a whole bunch of stuff into the conscious or unconscious category. Why is he allowed to play from different rules than me, why the asymmetry? It seems to me that if we want to investigate consciousness we should start with the only 2 things we know for sure about it: 1) Evolution produced consciousness at least once and almost certainly many billions of times. 2) If intelligence and consciousness were not inextricably linked Evolution would never have produced it. From that I conclude that if you happen to run across a intelligent machine logically your default position should be that it is conscious rather than the reverse. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Feb 20 18:39:36 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 20 Feb 2010 13:39:36 -0500 Subject: [ExI] The alleged existence of consciousness In-Reply-To: References: <762702.96421.qm@web36504.mail.mud.yahoo.com> Message-ID: On Feb 20, 2010, at 11:20 AM, Dave Sill wrote: > Who knows? If zombism is possible, maybe we just got lucky that we > ended up with consciousness. Even if we got lucky it wouldn't last because consciousness would have no adaptive value so we would soon lose it through genetic drift, just as animals in dark caves lose their eyes. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sat Feb 20 18:54:05 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 20 Feb 2010 13:54:05 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <767720.61984.qm@web36508.mail.mud.yahoo.com> <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net> <62c14241002192047p1eae3081o669df21917280469@mail.gmail.com> <4e3a29501002192059q2903d707oe2416bb53154c9f7@mail.gmail.com> <4e3a29501002192219w2f26f962w69f16a0c80c2e2e2@mail.gmail.com> Message-ID: <62c14241002201054k55a9a089t719b651730137911@mail.gmail.com> On Sat, Feb 20, 2010 at 12:33 PM, Spencer Campbell wrote: > A static GLUT can not learn, obviously, so it can't be intelligent in > the same way that humans are. Maybe in some other, less interesting > way. You have to go shockingly far down the phylogenetic tree before > learning disappears entirely; it's a pretty basic trait of earthly > lifeforms. And a static GLUT big enough to produce a passable zombie in a given domain of intelligence would be encoding that domain's intelligence AS a program. We would be shifting the responsibility from the zombie to the programmer. Few consider such 'narrow AI' to be very interesting at all. 
We already have a machine that is 'programmed' to be an expert system in the domain of burning bread to make toast (timer connected to on/off) The thermostat has extrahuman intelligence for keeping your house at the correct temperature. Maybe a really advanced toaster would ask the house thermostat for the ambient temperature and humidity of the house and consult the breadbox for the age of the bread in order to more accurately drive the bread-burning function. The as-is toaster is good enough at what it does to preclude such extreme engineering. I think the argument for artificial general intelligence (AGI) is that a cross-domain application of knowledge to solve novel situations is currently reserved for a select few creatures on earth. Humans may be the best of the bunch, but we're still not very good at it. (lets be honest) So we should turn our tool-building skills to making a machine that can solve problems better than any individual human thinker. This is the progression from man pulling a cart, to a horse pulling a cart, to a locomotive pulling several carts, etc. I think the more interesting question will be whether or not mere humans will continue to be able to drive the progression once we successfully complete this next step. From stefano.vaj at gmail.com Sat Feb 20 19:27:40 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 20 Feb 2010 20:27:40 +0100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: References: <762702.96421.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c21002201127g7b0ffe8dm26c0a81f1fa864fc@mail.gmail.com> On 20/02/2010, John Clark wrote: > Even if we got lucky it wouldn't last because consciousness would have no > adaptive value so we would soon lose it through genetic drift, just as > animals in dark caves lose their eyes. Absolutely. Probably we used to be conscious, at one time or another. Then we lost it. Only, being zombies/zimboes as a consequence, we obviously do not realise it. -- Stefano Vaj From stathisp at gmail.com Sun Feb 21 00:47:43 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 21 Feb 2010 11:47:43 +1100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: References: <762702.96421.qm@web36504.mail.mud.yahoo.com> Message-ID: On 21 February 2010 03:20, Dave Sill wrote: > On Fri, Feb 19, 2010 at 10:48 AM, Stathis Papaioannou > wrote: >> >> Philosophical zombies by definition exhibit the same complex behaviour >> as conscious beings. If nature could have produced zombies then why >> aren't we zombies? > > Who knows? If zombism is possible, maybe we just got lucky that we > ended up with consciousness. > >> It can't be that zombies take more effort to make, > > Sure it could. > >> since if the patterns of neuronal firing are reproduced in a different >> substrate that would reproduce the behaviour but, you claim, not >> necessarily the consciousness. > > That assumes that Swobe's scenario is the only way to achieve zombism. There may be other ways to make a zombie (although there are separate arguments against that possibility also) but following the architecture of the brain is not one of them. Any structure that reproduces the pattern of neural firing in the brain will also reproduce the intelligence and the consciousness of the brain. >> So the brain could have evolved >> similarly to the way it actually did, but without the added >> complication of consciousness. > > What makes you so sure it's a complication? 
If the brain's intelligence would remain intact despite changes that would eliminate consciousness, then consciousness would be a useless complication. >> It is difficult to imagine that >> something as elaborate and non-adaptive as consciousness could have >> evolved if there were so many other pathways to the same end without >> consciousness. > > We don't know anything about the number of pathways to intelligence > with or without consciousness. If Searle and Gordon are right and functionalism is wrong, there are trillions of ways to reproduce brain function and intelligence while eliminating consciousness. >> The best explanation is that the brain we happen to >> have ended up with is not specially blessed, and any other brain based >> on similar patterns resulting in similar behaviour would have also had >> a similar consciousness. > > Intuitively, that seems likely. But we just don't know. We do know. The partial brain replacement thought experiment makes it true as a matter of logical necessity; in other words more certainly true than any mere empirical fact, which could be proved false tomorrow. I would really like to hear a rebuttal, but no-one has yet attempted one. -- Stathis Papaioannou From gts_2000 at yahoo.com Sun Feb 21 00:59:38 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 20 Feb 2010 16:59:38 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: Message-ID: <836779.88847.qm@web36502.mail.mud.yahoo.com> --- On Sat, 2/20/10, Spencer Campbell wrote: > I would not be surprised to learn that Gordon has never written a computer > program in his life. Really? I've written many programs. I spend 50+ hours per week programming, deploying or troublshooting computers. My certs tell the story: Microsoft Certified Professional (C++ application development) Dell Certified Systems Expert Sony Certified Technician Apple Certified Macintosh Technician CompTIA A+ Certified PC Technician Alienware Certified Technician Lexmark Certified Technician Cisco Certified Professional Nortel Certified Professional Polycom Visual Telephony Technician In a typical month I work on more computers than most people here will touch in a lifetime. I have yet to meet a conscious computer, but who knows? Perhaps I'll meet one on Monday. -gts From rafal.smigrodzki at gmail.com Sun Feb 21 01:00:40 2010 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sat, 20 Feb 2010 20:00:40 -0500 Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled In-Reply-To: <588495.24073.qm@web111212.mail.gq1.yahoo.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> <154647.10956.qm@web111212.mail.gq1.yahoo.com> <7641ddc61002171955i33b65d34i80dcef79dc21669b@mail.gmail.com> <588495.24073.qm@web111212.mail.gq1.yahoo.com> Message-ID: <7641ddc61002201700n25179a9h63c3667a0e42739@mail.gmail.com> On Wed, Feb 17, 2010 at 11:41 PM, Christopher Luebcke wrote: >> The core group of about 50 climate activists (The Team, as they >> refer to themselves), are wicked. They intentionally forged and >> misrepresented data to advance a preconceived position. > > I wonder if you could cite some examples of this intentional forgery? 
### Probably the most egregious ones are the use of a subset of treering data from the Yamal series that resulted in a spurious lowering of temperatures in the middle ages, the use of inverted graphs of sedimentation in Finnish lakes (both related to calibration of proxies for global temperatures before the instrumental age), and grafting of instrumental data on the proxy graph after 1960, done by the CRU team and by Michael Mann. More recently there appears to be a highly selective use of existing thermometer records to exclude or adjust upwards the rural thermometers which hides the urban heat island effects, and the use of a single thermometer located at an airport to stand for the whole continent of Antarctica (these are more the domain of GISS and NOAA). Rafal From girl.meteor at gmail.com Sun Feb 21 02:31:16 2010 From: girl.meteor at gmail.com (meteor girl) Date: Sat, 20 Feb 2010 20:31:16 -0600 Subject: [ExI] How to Replace Uninjured Parts of the Brain Message-ID: <62ced1af1002201831s22d98d19sd9444160d4ff2a8d@mail.gmail.com> How can we do this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Feb 21 03:15:48 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 21 Feb 2010 14:15:48 +1100 Subject: [ExI] How to Replace Uninjured Parts of the Brain In-Reply-To: <62ced1af1002201831s22d98d19sd9444160d4ff2a8d@mail.gmail.com> References: <62ced1af1002201831s22d98d19sd9444160d4ff2a8d@mail.gmail.com> Message-ID: 2010/2/21 meteor girl : > How can we do this? We can't, but a century ago we couldn't replace knee joints either. -- Stathis Papaioannou From lacertilian at gmail.com Sun Feb 21 03:22:34 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 20 Feb 2010 19:22:34 -0800 Subject: [ExI] How not to make a thought experiment In-Reply-To: <836779.88847.qm@web36502.mail.mud.yahoo.com> References: <836779.88847.qm@web36502.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Spencer Campbell : >> I would not be surprised to learn that Gordon has never written a computer program in his life. > > Really? I've written many programs. I spend 50+ hours per week programming, deploying or troublshooting computers. This, on the other hand, surprises me greatly. Then again, my father is a professional software developer and he doesn't know any more about the vagaries of low-level information processing than a stereotypical soccer mom. I wrote an extremely simple program in Haskell once, to his specifications in fact, and he was utterly blown away. I explained it to him as he was reading it, quickly at first and then more carefully as it rapidly escaped him, until eventually he was forced to re-do it himself based on vague impressions he gleaned from my apparently alien solution. It is a well-documented fact that I am excellent at explaining complex subjects, mind you. I've been told more than once that I'd make a great teacher. I spend less than an hour a week programming. I am certainly not a programmer! So, Gordon, it seems I have reached an impasse: I had assumed that you were pretty much computer-illiterate, based on the way you talk about symbols, and I can't decide whether that gave your intelligence too little credit or too much. None of this is terribly useful. I feel compelled to say it, nevertheless. At this point I think it would be a great idea to abandon the CRA entirely, seeing as it was only meant as an illustration of a broader argument. It's done nothing but obscure the matter, from what I can see. 
A completely unrelated thought experiment is long overdue.

I've just read the Wikipedia article for the CRA in more detail, as well as the following:
http://cogprints.org/4023/1/searlbook.htm

It helped to clear away a few layers of confusion accrued from spending all my time in Extropy-Chat. So, now I'm beginning to agree with Searle. We'll see how things have settled in my mind after I sleep on it.

From lacertilian at gmail.com Sun Feb 21 03:30:53 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Sat, 20 Feb 2010 19:30:53 -0800
Subject: [ExI] How to Replace Uninjured Parts of the Brain
In-Reply-To: 
References: <62ced1af1002201831s22d98d19sd9444160d4ff2a8d@mail.gmail.com>
Message-ID: 

Stathis Papaioannou :
> meteor girl :
>> How can we do this?
>
> We can't, but a century ago we couldn't replace knee joints either.

This is a hypothetical question, then. In that case the sky's the limit (so, somewhere around 100 km).

Replace with what? More brain? A little pointless if there isn't any injury involved, but we'll probably be able to grow substitute cortices in the lab somewhere along the line.

Higher-efficiency components would be more interesting. At that point, though, the question becomes: what would work better than what we have, yet still be compatible enough for parallel implantation?

From stathisp at gmail.com Sun Feb 21 04:11:51 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sun, 21 Feb 2010 15:11:51 +1100
Subject: [ExI] How not to make a thought experiment
In-Reply-To: 
References: <836779.88847.qm@web36502.mail.mud.yahoo.com>
Message-ID: 

On 21 February 2010 14:22, Spencer Campbell wrote:

> I've just read the Wikipedia article for the CRA in more detail, as
> well as the following:
> http://cogprints.org/4023/1/searlbook.htm

Jeez!! These people just can't seem to get that the matter inside a brain is S-T-U-P-I-D. Really, really stupid. Certainly much more stupid than the man in the Chinese Room, who at least understands that he is doing some sort of symbol manipulation. It's an embarrassment to philosophy!

--
Stathis Papaioannou

From cluebcke at yahoo.com Sun Feb 21 06:19:10 2010
From: cluebcke at yahoo.com (Christopher Luebcke)
Date: Sat, 20 Feb 2010 22:19:10 -0800 (PST)
Subject: [ExI] Phil Jones acknowledging that climate science isn'tsettled
In-Reply-To: <7641ddc61002201700n25179a9h63c3667a0e42739@mail.gmail.com>
References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com>
	<412543.58639.qm@web111214.mail.gq1.yahoo.com>
	<7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com>
	<154647.10956.qm@web111212.mail.gq1.yahoo.com>
	<7641ddc61002171955i33b65d34i80dcef79dc21669b@mail.gmail.com>
	<588495.24073.qm@web111212.mail.gq1.yahoo.com>
	<7641ddc61002201700n25179a9h63c3667a0e42739@mail.gmail.com>
Message-ID: <568542.51093.qm@web111205.mail.gq1.yahoo.com>

I was actually looking for independently verifiable evidence of intentional forgery, not a list of accusations that anybody with a keyboard could make. I must presume that you're not actually interested in providing such, because you couldn't possibly believe that the response you gave would sway anybody who didn't already agree with your point of view.
----- Original Message ----
> From: Rafal Smigrodzki
> To: ExI chat list
> Sent: Sat, February 20, 2010 5:00:40 PM
> Subject: Re: [ExI] Phil Jones acknowledging that climate science isn'tsettled
>
> On Wed, Feb 17, 2010 at 11:41 PM, Christopher Luebcke
> wrote:
> >> The core group of about 50 climate activists (The Team, as they
> >> refer to themselves), are wicked. They intentionally forged and
> >> misrepresented data to advance a preconceived position.
> >
> > I wonder if you could cite some examples of this intentional forgery?
>
> ### Probably the most egregious ones are the use of a subset of
> tree-ring data from the Yamal series that resulted in a spurious
> lowering of temperatures in the middle ages, the use of inverted
> graphs of sedimentation in Finnish lakes (both related to calibration
> of proxies for global temperatures before the instrumental age), and
> grafting of instrumental data on the proxy graph after 1960, done by
> the CRU team and by Michael Mann. More recently there appears to be a
> highly selective use of existing thermometer records to exclude or
> adjust upwards the rural thermometers which hides the urban heat
> island effects, and the use of a single thermometer located at an
> airport to stand for the whole continent of Antarctica (these are more
> the domain of GISS and NOAA).
>
> Rafal

From bbenzai at yahoo.com Sun Feb 21 09:48:20 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Sun, 21 Feb 2010 01:48:20 -0800 (PST)
Subject: [ExI] Brain Preservation
In-Reply-To: 
Message-ID: <159995.71251.qm@web113616.mail.gq1.yahoo.com>

This is interesting:
http://www.brainpreservation.org/

I've wondered before about the possibility of plastination as a brain-preservation technique, but always assumed it would cause too much damage to be practical.

If this is possible, it would be far superior to vitrification at liquid N2 temperatures, because once done, the brain would be stable at normal temperatures. You wouldn't be dependent on the continued functioning of a company for your future, with all the risks that implies, such as politics or other social factors, technical failures, etc. All you'd need would be storage space. Your descendants could even keep your brain on the mantelpiece waiting for the technology to scan and resurrect you. If it didn't freak them out.

The only downside is for people who want their original biological tissue to be repaired. That wouldn't be possible with plastination, but for uploaders, it would be perfect, because it solves the problem of maintaining structural integrity during the scan.

I'm now wondering if there aren't other possible methods of preservation, apart from low temperatures and plastination?

Ben Zaiboc

From bbenzai at yahoo.com Sun Feb 21 12:48:41 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Sun, 21 Feb 2010 04:48:41 -0800 (PST)
Subject: [ExI] How not to make a thought experiment
In-Reply-To: 
Message-ID: <806738.55991.qm@web113616.mail.gq1.yahoo.com>

Spencer Campbell wrote:
>
> > I've just read the Wikipedia article for the CRA in
> more detail, as
> > well as the following:
> > http://cogprints.org/4023/1/searlbook.htm

OK, I'm reading the latter, and came across this:

"(1) Mental states are just implementations of (the right) computer program(s). (Otherwise put: Mental states are just computational states)."
Apart from the use of the word "just", which seems both unnecessary and prejudicial, I'm puzzling over the statement.

What else could mental states possibly be? Is this some confusion of terminology? I don't see how a 'mental state' can be anything but an arrangement of information, which is also what a 'computational state' is.

Can anyone throw any light on this? What alternatives are there?

Ben Zaiboc

From stathisp at gmail.com Sun Feb 21 12:52:37 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sun, 21 Feb 2010 23:52:37 +1100
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <806738.55991.qm@web113616.mail.gq1.yahoo.com>
References: <806738.55991.qm@web113616.mail.gq1.yahoo.com>
Message-ID: 

On 21 February 2010 23:48, Ben Zaiboc wrote:
> Spencer Campbell wrote:
>>
>> > I've just read the Wikipedia article for the CRA in
>> more detail, as
>> > well as the following:
>> > http://cogprints.org/4023/1/searlbook.htm
>
> OK, I'm reading the latter, and came across this:
>
> "(1) Mental states are just implementations of (the right) computer program(s). (Otherwise put: Mental states are just computational states)."
>
> Apart from the use of the word "just", which seems both unnecessary and prejudicial, I'm puzzling over the statement.
>
> What else could mental states possibly be? Is this some confusion of terminology? I don't see how a 'mental state' can be anything but an arrangement of information, which is also what a 'computational state' is.
>
> Can anyone throw any light on this? What alternatives are there?

Magic.

--
Stathis Papaioannou

From bbenzai at yahoo.com Sun Feb 21 13:15:42 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Sun, 21 Feb 2010 05:15:42 -0800 (PST)
Subject: [ExI] How not to make a thought experiment
In-Reply-To: 
Message-ID: <304059.67870.qm@web113614.mail.gq1.yahoo.com>

"It remains only to note that if Searle himself were executing the computer program, he would still not be understanding Chinese. Hence (by (2)) neither would the computer, executing the very same program. Q.E.D. Computationalism is false."

Ah, I see. I see where the misunderstanding lies.

The idea the author has is that the thing that implements the program is the same as the thing that has the mental states (which are the result of the running of the program). Which is equivalent to saying that the neurons which implement thinking are themselves thinking. If that were true, then indeed, the whole argument would be valid. But it's not true.

So, this suggests that Searle and his disciples do not see the difference between a bunch of neurons and a mind (or a computer and a running program).

Strictly speaking, the sentence above is quite correct, the computer itself would not understand Chinese, any more than it understands maths when it runs a spreadsheet. The thing that puzzles me is that it's not obvious to these people that a brain doesn't understand anything either. The same argument proves it.

Ben Zaiboc

From gts_2000 at yahoo.com Sun Feb 21 13:50:46 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 21 Feb 2010 05:50:46 -0800 (PST)
Subject: [ExI] Is the brain a digital computer?
In-Reply-To: 
Message-ID: <487531.48733.qm@web36508.mail.mud.yahoo.com>

--- On Sun, 2/21/10, Stathis Papaioannou wrote:

>> I don't see how a 'mental
>> state' can be anything but an arrangement of information,
>> which is also what a 'computational state' is.
>
>> Can anyone throw any light on this? What
>> alternatives are there?
> Magic

The brain exists in nature as just another natural artifact. It has no unique status as a "computer", except in the imaginations of some foggy-headed people who love computers so much that they imagine themselves as them.

Confusion arises also because some people believe, naively, that if we can compute x then a computation of x = x.

Consider a system comprised of a man, a hammer, a nail and a piece of wood. The man drives the nail into the wood with the hammer and we compute that process. The resulting computation will contain such facts as the force with which the man wields the hammer and the density of the wood, and it will predict exactly the depth to which the man drives the nail into the wood with each strike of the hammer.

That computation describes and predicts the event perfectly but the event itself does not equal a computation. We can say the same of computations of any other kind of event in nature, including brain events. The trivial fact that we can compute an event does not make the event itself a computation.

Is the Brain a Digital Computer?
http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html

-gts

From ablainey at aol.com Sun Feb 21 13:53:09 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Sun, 21 Feb 2010 08:53:09 -0500
Subject: [ExI] How to Replace Uninjured Parts of the Brain
In-Reply-To: 
References: <62ced1af1002201831s22d98d19sd9444160d4ff2a8d@mail.gmail.com>
Message-ID: <8CC811455DD17F3-7CBC-14B7D@webmail-m045.sysops.aol.com>

-----Original Message-----
From: Spencer Campbell

>This is a hypothetical question, then. In that case the sky's the
>limit (so, somewhere around 100 km).
>Replace with what? More brain? A little pointless if there isn't any
>injury involved, but we'll probably be able to grow substitute
>cortices in the lab somewhere along the line.
>Higher-efficiency components would be more interesting. At that point,
>though, the question becomes: what would work better than what we
>have, yet still be compatible enough for parallel implantation?
_______________________________________________

A dictionary lobe with spell checker and a dedicated maths lobe for starters. As we know, there are many things that computers can do easily and efficiently that we are pretty useless at. So my number one wish would be a simple brain-computer interface lobe, which would allow mental access to digital data stored externally.

Regarding the original question of how? Here is a quick idea. We already know that donor full heads can be connected to a recipient body (as a live but useless secondary add-on). So it is reasonable to assume that individual lobes could be grafted into an existing brain. If the donor lobe were infused with stem cells prior to implantation, the join could generate some neural links rather than a barrier of scar tissue.

The interesting question for me would be about the blood brain barrier. As the immune system is given little or no access to the brain, perhaps a brain graft would be much less likely to be rejected by the host?

A.

From gts_2000 at yahoo.com Sun Feb 21 13:59:03 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 21 Feb 2010 05:59:03 -0800 (PST)
Subject: [ExI] Is the brain a digital computer?
Message-ID: <961402.63314.qm@web36504.mail.mud.yahoo.com>

"But now if we are trying to take seriously the idea that the brain is a digital computer, we get the uncomfortable result that we could make a system that does just what the brain does out of pretty much anything. Computationally speaking, on this view, you can make a "brain" that functions just like yours and mine out of cats and mice and cheese or levers or water pipes or pigeons or anything else [e.g., beer cans and toilet paper] provided the two systems are, in Block's sense, "computationally equivalent". You would just need an awful lot of cats, or pigeons or waterpipes, or whatever it might be. The proponents of Cognitivism report this result with sheer and unconcealed delight. But I think they ought to be worried about it, and I am going to try to show that it is just the tip of a whole iceberg of problems."

http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html

-gts

From stathisp at gmail.com Sun Feb 21 14:38:38 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Mon, 22 Feb 2010 01:38:38 +1100
Subject: [ExI] Is the brain a digital computer?
In-Reply-To: <487531.48733.qm@web36508.mail.mud.yahoo.com>
References: <487531.48733.qm@web36508.mail.mud.yahoo.com>
Message-ID: 

On 22 February 2010 00:50, Gordon Swobe wrote:

> Confusion arises also because some people believe, naively, that if we can compute x then a computation of x = x.

Who said this? The claim is that a simulated x will have at least some of the properties of x, but not necessarily all the properties. A simulation may look like x but not smell like x, for example.

> Consider a system comprised of a man, a hammer, a nail and a piece of wood. The man drives the nail into the wood with the hammer and we compute that process. The resulting computation will contain such facts as the force with which the man wields the hammer and the density of the wood, and it will predict exactly the depth to which the man drives the nail into the wood with each strike of the hammer.
>
> That computation describes and predicts the event perfectly but the event itself does not equal a computation. We can say the same of computations of any other kind of event in nature, including brain events. The trivial fact that we can compute an event does not make the event itself a computation.

But the computation may, for example, predict how far the nail will be driven into the wood, which is a replication of a property of the real event. And the computer may be harnessed to control a robot hammering nails into wood. In fact, the computer with attached sensor and effector devices might be set up to replicate any property of the real thing whatsoever - except, you claim, its consciousness. Why do you think that consciousness alone of all things in the universe can't be copied by a computer?

--
Stathis Papaioannou

From gts_2000 at yahoo.com Sun Feb 21 14:44:25 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 21 Feb 2010 06:44:25 -0800 (PST)
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <776AA753-5772-4E13-869E-ABE5CC276E1D@bellsouth.net>
Message-ID: <140881.49160.qm@web36502.mail.mud.yahoo.com>

A disturbed person who goes by the name John K Clark continues to post messages to this discussion thread in what appears as an ongoing conversation with himself about me. He uses my name excessively in a bid to broadcast his confused thoughts to the search engines. His obsessive actions amount to a malicious attempt to slander me and mischaracterize my views.
I have no association with John K Clark. Gordon Swobe top posting in self defense > --- On Fri, 2/19/10, John Clark > wrote: > >> I think consciousness aids and enhances > intelligence, something like the way a flashlight helps one > move about in the dark. > > I've said this many many times before but that doesn't > prevent it from being true, despite believing the above, in > a magnificent demonstration of doublethink, Swobe also > believes that a behavioral demonstration like the Turing > test cannot detect consciousness. > > > It seems probable to me that conscious intelligence > involves less biological overhead than instinctive > unconscious intelligence > > Then the logical implication is crystal clear, its harder > to make an unconscious intelligence than a conscious > intelligence. So if you encounter an intelligent machine > your default assumption should be that it is conscious. > > > especially when considering complex behaviors such as > social behaviors. Perhaps nature selected it for that reason > only. > > So if Swobe met a robot with greater social intelligence > than he has would he consider it conscious? No of course he > would not because, because,.... well just because. > > Actually that is what Swobe would say today but I don't > think that's what would really happen. If someone ever met > up with such a machine I think it would understand us so > well, better than we understand ourselves, that it could > convince anyone to believe in anything and could quite > literally charm the pants off us. As Swobe points out, even > today characters in video games seem to be conscious to > some, a robot with a Jupiter Brain would convince even the > most sophisticated among us. We would believe the robot was > conscious even if we couldn't prove it. I have the same > belief regarding Gordon Swobe and the same lack of a proof. > > John K Clark > > Since my last post Gordon Swobe has posted 9 times. > > > The CRA thought experiment involves *you the reader* > imagining *yourself* in the room (or as the room) using > *your* mind to attempt to understand the Chinese symbols. > > As is Swobe's habit he is wrong yet again. The Chinese room > experiment asks you to imagine yourself as a mechanical > relay, that's it. However Swobe is right about one thing, a > relay is not conscious. Probably. > > > Conscious awareness > > As opposed to unconscious awareness. > > > enhances intelligence and gives the organism more > flexibility in dealing with multiple stimuli > simultaneously. > > Consciousness enhances intelligence and changes behavior > but the Turing Test cannot detect even a whiff of it. Swobe > does not see this idea as being world class stupid. I do. > > > As evidence of this we need only look at nature: > conscious organisms like humans exhibit more complex and > intelligent behaviors than do unconscious organisms like > plants and microbes. > > This is a very rare occasion where, incredible as it > sounds, Swobe is actually correct. Another way to express > Swobe's words quoted above is to say "The Turing Test > works". > > > you assume here as you do that that the i/o behavior > of a brain/neuron is all there is to the brain. [...] 
> > consciousness may involve the electrical signals that > travel down the axons internal to the neurons > > Swobe is always keen to tell us that nobody including him > has any idea what causes consciousness, so it is equally > likely consciousness may involve the size of one's foot, > after all the only being Swobe knows with certainty to be > conscious has one particular shoe size. I am not trying to > be funny, it's easy to demonstrate that the brain and > neurons have something to do with intelligence but, if as > Swobe believes, that has nothing to do with consciousness > then the organ that is the seat of awareness is anybody's > guess. The foot is as good a guess as any other. > > > Let us say that we created an artificial brain that > contained a cubic foot of warm leftover mashed potatoes and > gravy[...] Would your mister potato-head have > consciousness? > > Swobe says he loves the Chinese room crapola because it can > objectively determine what is conscious and what is not, and > yet when he tries to defend this ridiculous idea he > repeatedly dreams up intelligent things that are "obviously" > not conscious, such as a computer made of toilet paper and > now one made of mashed potatoes and gravy. But if all of > this is obvious Swobe does not make it clear what in hell > the point of the Chinese room thought experiment is. > > And Swobe may be interested to know that his brain is in > fact the product of last years mashed potatoes and gravy, > it's just a question of rearranging the atoms in a > programable way. DNA does exactly that. > > > I think we can and will one day create unconscious > robots that *act* like they have consciousness. > > Swobe thinks humans can make a environment that produces a > being that acts like he's conscious, but only God [or > various euphemisms of that word] can create an environment > that makes the real deal. I disagree. > > > You should consider him [ the Chinese room dude] an > actual man [...] I wanted to encourage you to > consider the man as literally a man > > Swobe says we should consider the Chinese room fellow as > literally a man, a man who can live for many trillions of > years and "internalize" that book of instructions, a actual > man who can memorize a document far larger than the > observable universe. I say that remark is idiotic. Does > anyone care to dispute my criticism? > > > our man in the room has no understanding of any > symbols and so no knowledge base to build on. > > Wow, now I see the error of my ways! It's a pity Swobe > didn't say that two months and several hundred posts ago, > think of the time we could have saved. Oh wait he did. > > > He can do no more than follow the syntactic > instructions in the program: if input = "squiggle" then > output "squoogle". > > Wow, now I see the error of my ways! It's a pity Swobe > didn't say that two months and several hundred posts ago, > think of the time we could have saved. Oh wait he did. > > > syntactic order does not give understanding. > > Wow, now I see the error of my ways! It's a pity Swobe > didn't say that two months and several hundred posts ago, > think of the time we could have saved. Oh wait he did. > > > formal syntax does not give semantics > > Wow, now I see the error of my ways! It's a pity Swobe > didn't say that two months and several hundred posts ago, > think of the time we could have saved. Oh wait he did. > > John K Clark > Since my last post Gordon Swobe has posted 10 times. 
> > > > Come the singularity, some people will lose their > grips on reality and find themselves believing such > absurdities as that digital depictions of people have real > mental states. A few lonely philosophers of my stripe will > try in vain to restore their sanity. > > As far as the future is concerned it really doesn't matter > if Swobe's ideas are right or wrong, either way they're as > dead as the Dodo. Even if he's 100% right and I am 100% > wrong people with my ideas will have vastly more influence > than people like him because we will not be held back by > superstitious ideas about "THE ORIGINAL". So it's pedal to > the metal upgrading, Jupiter brain ahead. Swobe just won't > be able to keep up with the electronic competition. > Only a few axons in the brain can send signals as fast as > 100 meters per second, non-myelinated axon's are only able > to go about 1 meter per second. Light moves at 300,000,000 > meters per second. > > Perhaps after the singularity the more conservative and > superstitious among us could still survive in some little > backwater somewhere, like the Amish do today, but I doubt > it. > > > I think you want me to believe that my watch has a > small amount of consciousness by virtue of it having a small > amount of intelligence. But I don't think that makes even a > small amount of sense. It seems to me that my watch has no > consciousness > > I'm not surprised Swobe can't make sense of it all, nothing > in the Biological sciences makes any sense without > Evolution, and he has shown a profound ignorance not only of > that theory but of the fossil record in general. Evolution > found it far harder to come up with intelligence than > consciousness, the brain structures that produce the basic > emotions we share with many other animals and are many > hundreds of millions of years old, while the higher brain > structures that produce language, mathematics and abstract > thought in general, things that make humans unique, are less > than a million years old and possibly much less. Swobe does > not use his higher brain structures to think with and > prefers to think with his gut; but many animals have an > intestinal tract and to my knowledge none of them are > particularly good philosophers. > > > Consciousness, as I mean it today, entails the ability > to have conscious intentional states. That is, it entails > the ability to have something consciously "in mind" > > So consciousness means the ability to be conscious, that is > to say the ability to consciously think about stuff. Thank > you so much for those words of wisdom! > > > If I make a jpeg of you with my digital camera, that > digital depiction of you will have no mental states. > > Swobe may very well be right in this particular instance, > but it illustrates the useless nature of the grotesque > exercises he gives the grandiose name "thought experiment". > Swobe has no way to directly measure the mental states even > of his fellow human beings much less that of a digital > camera; and yet over the last few months he has made grand > pronouncements about the mental states of literally hundreds > of things. To add insult to injury the mental state of > things is exactly what he's trying to prove; he just doesn't > understand that saying X has no consciousness is not the > same as proving X has no consciousness. > > > The idea is that while a person doesn't understand > Chinese, somehow the conjunction of that person and bits of > paper might understand Chinese. 
It is not easy for me to
> imagine how someone who was not in the grip of an ideology
> would find the idea at all plausible.
>
> Swobe admits, and if fact seems delighted by the fact, that
> he has absolutely no idea what causes consciousness;
> nevertheless he thinks he can always determine a priori what
> has consciousness and what does not, and it has nothing to
> do with the way they behave. The conjunction of a person
> with bits of paper might display intelligence, in fact there
> is no doubt that it could, but it could never be conscious
> because, because, well just because; but Swobe thinks 3
> pounds of grey goo being conscious is perfectly
> logical. Can Swobe explain why one thing is ridiculous and
> the other logical? Nope, it's just that he's accustomed to
> one and not the other. That's it.
>
> > Depictions of things, digital or otherwise, do not
> equal the things they depict
>
> Wow, now I see the error of my ways! It's a pity Swobe
> didn't say that two months and several hundred posts ago,
> think of the time we could have saved. Oh wait he did.
>
> > the man cannot grok the symbols by virtue of
> manipulating them according to the rules of syntax
>
> Wow, now I see the error of my ways! It's a pity Swobe
> didn't say that two months and several hundred posts ago,
> think of the time we could have saved. Oh wait he did.
>
> > Depictions of things do not equal the things they
> depict.
>
> Wow, now I see the error of my ways! It's a pity Swobe
> didn't say that two months and several hundred posts ago,
> think of the time we could have saved. Oh wait he did.
>
> Gordon Swobe
> Gordon Swobe
> Gordon Swobe
> Gordon Swobe
> Gordon Swobe
> Gordon Swobe
From gts_2000 at yahoo.com Sun Feb 21 15:06:51 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 21 Feb 2010 07:06:51 -0800 (PST)
Subject: [ExI] Is the brain a digital computer?
In-Reply-To: 
Message-ID: <965600.58521.qm@web36502.mail.mud.yahoo.com>

--- On Sun, 2/21/10, Stathis Papaioannou wrote:

>> Confusion arises also because some people believe,
>> naively, that if we can compute x then a computation of x =
>> x.
>
> Who said this?

Anybody who says "A digital simulation of a brain will equal a brain" says exactly that a computation of x = x where x = a brain.

> The claim is that a simulated x will have at least some
> of the properties of x, but not necessarily all the
> properties. A simulation may look like x but not smell like x, for
> example.

Except in the special case in which x = a digital program or computer, the digital simulation of x will not even look like the originals except that we *imagine* so. We *imagine* the convenient fiction that the apple-shaped patterns of pixels on our screens really look like the apples we simulated.

To see this, go find your magnifying glass and take a closer look at that apple on your monitor. Compare it to what you see when you hold the glass to an actual apple. That's not an apple on your screen after all, now is it?
-gts

From gts_2000 at yahoo.com Sun Feb 21 15:41:49 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 21 Feb 2010 07:41:49 -0800 (PST)
Subject: [ExI] Semiotics and Computability
In-Reply-To: <62c14241002190632s2bd77693m221e498ae628f038@mail.gmail.com>
Message-ID: <810891.561.qm@web36508.mail.mud.yahoo.com>

--- On Fri, 2/19/10, Mike Dougherty wrote:

> Maybe semantics can't be gleaned directly from a
> syntactical rule - but over the course of the man's time in the box, he
> will observe patterns and hypothesize meaning which can be reinforced by
> repeated observation.

But the man can learn only that 'squoogles' follow after 'squiggles'. At the machine level, he can learn no more than that certain patterns of 1's and 0's follow after certain other patterns of 1's and 0's. He can form hypotheses, as you say, but he cannot test those hypotheses without help from somewhere.

Sense data seems like the obvious place to look for that help: the man in the room has no access to sense data from the outside world, so perhaps that explains why he cannot attach meanings to the symbols he manipulates. But when we look at how computers get sense data, we see that sense data also amounts to nothing more than meaningless patterns of 1's and 0's.

At this point Stathis throws up his hands and proclaims that Searle preaches that human brains do something "magical". But that's not it at all. The CRA merely illustrates an ordinary mundane fact: that the human brain has no special place in nature as a supposed "digital computer". The brain has the same ordinary non-digital status as other products of biological evolution, objects like livers and hearts and spleens and nuts and watermelons. It just happens to be one very smart melon.

-gts

From gts_2000 at yahoo.com Sun Feb 21 16:11:37 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 21 Feb 2010 08:11:37 -0800 (PST)
Subject: [ExI] Is the brain a digital computer?
In-Reply-To: 
Message-ID: <815945.61890.qm@web36501.mail.mud.yahoo.com>

--- On Sun, 2/21/10, Stathis Papaioannou wrote:

> But the computation may, for example, predict how far the
> nail will be driven into the wood, which is a replication of a property
> of the real event.

"Predicting an event" != "replication of a property".

I consider computations of natural processes including brain processes as a-causal; that is, the computations describe and predict natural processes but they do not determine them. In general, nature "follows" no supposed laws or algorithms. She just does what she does, and humans talk about it with computations and so-called laws of physics.

-gts

From girl.meteor at gmail.com Sun Feb 21 05:42:10 2010
From: girl.meteor at gmail.com (meteor girl)
Date: Sat, 20 Feb 2010 23:42:10 -0600
Subject: [ExI] How to Replace Uninjured Parts of the Brain
In-Reply-To: <62ced1af1002201831s22d98d19sd9444160d4ff2a8d@mail.gmail.com>
References: <62ced1af1002201831s22d98d19sd9444160d4ff2a8d@mail.gmail.com>
Message-ID: <62ced1af1002202142v2aadabf0x55f98e13c9e9987f@mail.gmail.com>

>> Replace with what? More brain?

Uploading of the mind to an electronic substrate is my goal.

>> At that point, though, the question becomes: what would
>> work better than what we have, yet still be compatible enough
>> for parallel implantation?

We already have electronic devices that are capable of replacing certain parts of the brain. Researchers are asking your proposed question now.
Will someone take a guess at how one might go about replacing each individual synapse, neuron, their neurotransmitters, etc. without losing continuity?

From natasha at natasha.cc Sun Feb 21 16:33:18 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Sun, 21 Feb 2010 10:33:18 -0600
Subject: [ExI] "Transhumanism The Way of the Future" in The Scavenger -Natasha
In-Reply-To: <580930c21002190456le70eaew5409b6f80a5a66b2@mail.gmail.com>
References: <580930c21002190456le70eaew5409b6f80a5a66b2@mail.gmail.com>
Message-ID: <74AC056408D3451B86E62805C6FB4CA3@DFC68LF1>

Thank you Stefano.

Natasha Vita-More

_____

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj
Sent: Friday, February 19, 2010 6:56 AM
To: ExI chat list
Subject: Re: [ExI] "Transhumanism The Way of the Future" in The Scavenger -Natasha

Concise, eloquent, to the point...

2010/2/18 Natasha Vita-More

My article is now available online in "The Scavenger" under Media & Technology.

http://www.thescavenger.net/media-a-technology/transhumanism-the-way-of-the-future-98432.html

Natasha Vita-More

--
Stefano Vaj

From sparge at gmail.com Sun Feb 21 17:34:11 2010
From: sparge at gmail.com (Dave Sill)
Date: Sun, 21 Feb 2010 12:34:11 -0500
Subject: [ExI] The alleged existence of consciousness
In-Reply-To: 
References: <762702.96421.qm@web36504.mail.mud.yahoo.com>
Message-ID: 

On Sat, Feb 20, 2010 at 7:47 PM, Stathis Papaioannou wrote:
>
> There may be other ways to make a zombie (although there are separate
> arguments against that possibility also) but following the
> architecture of the brain is not one of them. Any structure that
> reproduces the pattern of neural firing in the brain will also
> reproduce the intelligence and the consciousness of the brain.

Agreed.

> If the brain's intelligence would remain intact despite changes that
> would eliminate consciousness, then consciousness would be a useless
> complication.

But what if the changes required resulted in more complication? Then removing consciousness would be a useless complication.

>>> The best explanation is that the brain we happen to
>>> have ended up with is not specially blessed, and any other brain based
>>> on similar patterns resulting in similar behaviour would have also had
>>> a similar consciousness.
>>
>> Intuitively, that seems likely. But we just don't know.
>
> We do know. The partial brain replacement thought experiment makes it
> true as a matter of logical necessity; in other words more certainly
> true than any mere empirical fact, which could be proved false
> tomorrow. I would really like to hear a rebuttal, but no-one has yet
> attempted one.

The partial brain replacement thought experiment only covers the case of a brain that works exactly like ours.
It seems possible to me that evolution could have taken a different path, and conceivable that one or more of those paths might have resulted in intelligence without consciousness.

-Dave

From sparge at gmail.com Sun Feb 21 17:56:49 2010
From: sparge at gmail.com (Dave Sill)
Date: Sun, 21 Feb 2010 12:56:49 -0500
Subject: [ExI] Brain Preservation
In-Reply-To: <159995.71251.qm@web113616.mail.gq1.yahoo.com>
References: <159995.71251.qm@web113616.mail.gq1.yahoo.com>
Message-ID: 

On Sun, Feb 21, 2010 at 4:48 AM, Ben Zaiboc wrote:
>
> I'm now wondering if there aren't other possible methods of preservation, apart from low temperatures and
> plastination?

What about optical scanning? How close are we to being able to remove thin enough layers and scan at high enough resolution to reconstruct the neural map?

-Dave

From jonkc at bellsouth.net Sun Feb 21 17:56:35 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 21 Feb 2010 12:56:35 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <836779.88847.qm@web36502.mail.mud.yahoo.com>
References: <836779.88847.qm@web36502.mail.mud.yahoo.com>
Message-ID: <5A55FD93-E921-491C-955B-2FA85E179BC5@bellsouth.net>

Since my last post Gordon Swobe has posted 7 times.

> I have yet to meet a conscious computer

One can't help but wonder how Swobe knows this, and how he knows if he has ever met a conscious human being.

> some people believe, naively, that if we can compute x then a computation of x = x.

So if we can compute that 2+2=4 then it's naive to think that a computation of 2+2 is 4. Huh?

> > the event itself does not equal a computation.

That depends entirely on what the "event" is; if it involves moving atoms then Swobe is right, but if it involves things other than nouns or in making decisions then he is not.

> > Anybody who says "A digital simulation of a brain will equal a brain" says exactly that a computation of x = x where x = a brain.

Swobe trots out this sad old straw man yet again! Of course a simulated brain is not identical with a biological brain, but a simulated mind is, just as a simulated 4 (whatever the hell that means) is identical with a real 4.

> Except in the special case in which x = a digital program or computer, the digital simulation of x will not even look like the originals

Except for that Mrs. Lincoln, how did you like the play?

> the man in the [Chinese] room has no access to sense data from the outside world

Yet another reason why the Chinese room just may be the most useless thought experiment ever devised; a real AI would certainly have sense data about the outside world, far more than we do in fact.

> when we look at how computers get sense data, we see that sense data also amounts to nothing more than meaningless patterns of 1's and 0's.

One can't help but wonder why Swobe doesn't erase the hard drive on his computer, it is after all full of nothing but a meaningless pattern of 1's and 0's. It's also puzzling how Swobe makes a living, he says he does it by generating patterns of 1's and 0's, but who would pay him for such a meaningless activity?

> go find your magnifying glass and take a closer look at that apple on your monitor. Compare it to what you see when you hold the glass to an actual apple. That's not an apple on your screen after all, now is it?

Swobe may be the very first person to make such a brilliant observation!
> Computationally speaking, on this view, you can make a "brain" that functions just like yours and mine out of cats and mice and cheese or levers or water pipes or pigeons or anything else [e.g., beer cans and toilet paper] provided the two systems are, in Block's sense, "computationally equivalent".

True but trivially obvious.

> You would just need an awful lot of cats, or pigeons or waterpipes, or whatever it might be.

And if you wanted to make a brain out of neurons you'd need an awful lot of them too.

> the man can learn only that 'squoogles' follow after 'squiggles'.

Wow, now I see the error of my ways! It's a pity Swobe didn't say that two months and several hundred posts ago, think of the time we could have saved. Oh wait he did.

John K Clark

From jonkc at bellsouth.net Sun Feb 21 18:40:04 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 21 Feb 2010 13:40:04 -0500
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <140881.49160.qm@web36502.mail.mud.yahoo.com>
References: <140881.49160.qm@web36502.mail.mud.yahoo.com>
Message-ID: <0D317EC0-9FB9-4948-B987-98A6BE67B67E@bellsouth.net>

I want to congratulate Gordon Swobe for sending the longest post anybody has sent to the Extropian list since July 17 2009, and even then he was only beaten by a hair. Very impressive, it would be even more impressive if 98% of it were not quotations from old posts. In the remaining 2% he says:

> His [me John K Clark] obsessive actions amount to a malicious attempt to slander me

It seems that Swobe doesn't like slander, I can't say I blame him, I don't much care for it myself, which is why I was puzzled he started his very very long post with:

> A disturbed person who goes by the name John K Clark

John K Clark

From msd001 at gmail.com Sun Feb 21 18:48:47 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Sun, 21 Feb 2010 13:48:47 -0500
Subject: [ExI] Semiotics and Computability
In-Reply-To: <810891.561.qm@web36508.mail.mud.yahoo.com>
References: <62c14241002190632s2bd77693m221e498ae628f038@mail.gmail.com>
	<810891.561.qm@web36508.mail.mud.yahoo.com>
Message-ID: <62c14241002211048o390425cbi580ae855b8d00624@mail.gmail.com>

On Sun, Feb 21, 2010 at 10:41 AM, Gordon Swobe wrote:
> Sense data seems like the obvious place to look for that help: the man in the room has no access to sense data from the outside world, so perhaps that explains why he cannot attach meanings to the symbols he manipulates. But when we look at how computers get sense data, we see that sense data also amounts to nothing more than meaningless patterns of 1's and 0's.
>
> At this point Stathis throws up his hands and proclaims that Searle preaches that human brains do something "magical". But that's not it at all. The CRA merely illustrates an ordinary mundane fact: that the human brain has no special place in nature as a supposed "digital computer". The brain has the same ordinary non-digital status as other products of biological evolution, objects like livers and hearts and spleens and nuts and watermelons. It just happens to be one very smart melon.

So your gripe is with the digital part of computers? Suppose analog computers had become the dominant technology, would you still be complaining that they can't be meaningfully intelligent because they're merely machines lacking the quintessence that makes human consciousness?
(opening yourself to potshots about the requirement of a soul)

Suppose I replicate the IO transformation of CR using a complex series of tubes and buckets filled by an eternally replenished aquifer? There's no digital zombie-ism to preclude intelligence, so can my Rube Goldberg water wheel be intelligent? Is it conscious?

Have you ever seen the implementation of an adding machine using cellular automata? (Game of Life, etc.) It's an interesting setup because the CA rules have nothing at all to do with counting or the operation of addition - however the CA rules can still be exploited to do interesting and useful computation. Neurons may be bound by rules analogous to those of the CA cells, but we still somehow exploit the function of groups of neurons to convince ourselves that we're intelligent and conscious of that belief.

From lacertilian at gmail.com Sun Feb 21 18:50:16 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Sun, 21 Feb 2010 10:50:16 -0800
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <304059.67870.qm@web113614.mail.gq1.yahoo.com>
References: <304059.67870.qm@web113614.mail.gq1.yahoo.com>
Message-ID: 

Ben Zaiboc :
>
> Ah, I see. I see where the misunderstanding lies.
>
> The idea the author has is that the thing that implements the program is the same as the thing that has the mental states (which are the result of the running of the program).

Yeah, I noticed that too. It's tricky territory.

Phrasing the proposition as you do there, without the later qualifications, does not in any way make it sound false. You could rephrase it as "neurons which implement thinking are themselves thinking", yes, but you could also rephrase it as "brains which implement thinking are themselves thinking".

This conveys the same essential information but is actually a better analogy, since microprocessors, for example, can't be neatly broken up the way that brains can. One neuron certainly can't implement a mind, but one microprocessor might.

We have a property, "thinking", and nowhere to put it. Right smack into dualism again, and at the moment I don't much feel like working out the possible implications.

If I may address the other computationalists present: would you say that a mind is a running program, or would you say that a running program instantiates a mind? These seem to me like the only two sane options for a genuine computationalist, but if you can think of a third I'd like to hear it.

From msd001 at gmail.com Sun Feb 21 18:55:32 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Sun, 21 Feb 2010 13:55:32 -0500
Subject: [ExI] Is the brain a digital computer?
In-Reply-To: <965600.58521.qm@web36502.mail.mud.yahoo.com>
References: <965600.58521.qm@web36502.mail.mud.yahoo.com>
Message-ID: <62c14241002211055o791f34ek7221e703697f35bd@mail.gmail.com>

On Sun, Feb 21, 2010 at 10:06 AM, Gordon Swobe wrote:
> To see this, go find your magnifying glass and take a closer look at that apple on your monitor. Compare it to what you see when you hold the glass to an actual apple. That's not an apple on your screen after all, now is it?
the discussion becomes? From lacertilian at gmail.com Sun Feb 21 19:27:33 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 21 Feb 2010 11:27:33 -0800 Subject: [ExI] How to Replace Uninjured Parts of the Brain In-Reply-To: <62ced1af1002202142v2aadabf0x55f98e13c9e9987f@mail.gmail.com> References: <62ced1af1002201831s22d98d19sd9444160d4ff2a8d@mail.gmail.com> <62ced1af1002202142v2aadabf0x55f98e13c9e9987f@mail.gmail.com> Message-ID: meteor girl : > We already have electronic devices that are capable of replacing certain > parts of the brain. Researchers are asking your proposed question now. Functioning brain implants would be news to me! Let's google it: http://timesofindia.indiatimes.com/city/chennai/Deaf-boy-hears-after-brain-implant/articleshow/5562983.cms This makes me very, very happy. meteor girl : > Will someone take a guess at how one might go about replacing each > individual synapse, neuron, their neurotransmitters, etc. without losing > continuity? By my estimation, the easiest way to get from wetware to hardware is: gradually. I'll use modern-day technology for my example, so it'll be pretty clunky. http://en.wikipedia.org/wiki/Blue_Brain_Project Let's say that we build a wireless interface, call it the bridge, into the left hemisphere of my brain. The bridge is covered in a cornucopia of electrodes. It records neural impulses to be exported to a remote Blue Brain supercomputer and transmits its own simulated impulses according to information imported from the same. The idea here is to get virtual neurons to interact sanely with real neurons, so that your mind effectively has a foot in both worlds. All sorts of complications could arise in practice. Maybe "aligning" the virtual cortex with the real electrodes would be prohibitively difficult, maybe the lack of chemical signals across the bridge would have unfortunate side-effects. Maybe Gordon is right, somehow, and you would partially zombify yourself in the process even if the operation is a success by any and every objective measure. If it works, though, all you have to do is scale up. Once at least half of your effective brain is virtual, death would be comparable to a hemispherectomy. Regaining the functions that remained mostly localized in the living tissue could be a problem, but, hey, neuroplasticity. What's the worst that could happen? From cluebcke at yahoo.com Sun Feb 21 19:17:05 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Sun, 21 Feb 2010 11:17:05 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <304059.67870.qm@web113614.mail.gq1.yahoo.com> Message-ID: <412622.93012.qm@web111215.mail.gq1.yahoo.com> > We have a property, "thinking", and nowhere to put it. Right smack> into dualism again, and at the moment I don't much feel like working > out the possible implications. Had the entire concept of emergent properties been evaluated and discarded before I joined this list? One need not dive into dualism to appreciate that surface tension is a property that only emerges in systems with a lot of water molecules, not only in a certain arrangement, but in a certain state of activity. There is no "surface tension" property on a given water molecule; it is a property of the system as a whole. Honestly, the out-of-hand rejection of "thinking" or "consciousness" as an emergent property of a complex biological system baffles me. Especially because dead brains don't think. 
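Mike's cellular-automaton example, and the emergent-property point just above, are easy to make concrete. A minimal sketch, assuming only standard Python (illustrative code, not from any post in this thread): Conway's Game of Life boiled down to one update rule that knows nothing but neighbour counts. "Gliders", coherent objects that crawl across the grid, are mentioned nowhere in the rule, yet they emerge from it, in much the same sense that surface tension emerges from water molecules.

    from collections import Counter

    def step(cells):
        """One generation of Life; cells is a set of (x, y) live cells."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in cells
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # The entire "physics": a cell is live next tick iff it has 3 live
        # neighbours, or 2 live neighbours and is live already.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in cells)}

    # A glider. After four applications of the rule the same shape
    # reappears shifted one cell diagonally: a moving "object" that the
    # rule itself never mentions.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    g = glider
    for _ in range(4):
        g = step(g)
    assert g == {(x + 1, y + 1) for (x, y) in glider}

The adding machines Mike mentions are built from this same rule, just with much larger patterns; at no point does anything resembling "addition" get added to the physics.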
From gts_2000 at yahoo.com Sun Feb 21 20:28:17 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 21 Feb 2010 12:28:17 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: <412622.93012.qm@web111215.mail.gq1.yahoo.com> Message-ID: <800855.44985.qm@web36504.mail.mud.yahoo.com> --- On Sun, 2/21/10, Christopher Luebcke wrote: > Honestly, the out-of-hand rejection of "thinking" or "consciousness" as > an emergent property of a complex biological system baffles me. I certainly don't reject it. I like your 'surface tension of water molecules' analogy. -gts From lacertilian at gmail.com Sun Feb 21 21:01:58 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 21 Feb 2010 13:01:58 -0800 Subject: [ExI] How not to make a thought experiment In-Reply-To: <412622.93012.qm@web111215.mail.gq1.yahoo.com> References: <304059.67870.qm@web113614.mail.gq1.yahoo.com> <412622.93012.qm@web111215.mail.gq1.yahoo.com> Message-ID: Christopher Luebcke : > Had the entire concept of emergent properties been evaluated and discarded > before I joined this list? Maybe! I know I've brought it up before, but I don't know when you joined. Christopher Luebcke : > One need not dive into dualism to appreciate that surface tension is a > property that only emerges in systems with a lot of water molecules, not > only in a certain arrangement, but in a certain state of activity. There is > no "surface tension" property on a given water molecule; it is a property of > the system as a whole. Gordon Swobe : > I certainly don't reject it. I like your 'surface tension of water molecules' analogy. Surprisingly, yeah, I like it too. "Surface tension is a property of the surface of a liquid". That's the very first sentence in the Wikipedia article for surface tension.
Just as with the brain, we're talking about the behavior of a physical system in abstract terms. Again just as with the brain, we're pointing to one specific trait: the surface, or the mind. Here's a human: where is the mind? Here's an ocean: where is the surface? You could create a very precise experiment in an attempt to pinpoint the exact location of the transition between gas and liquid in a glass of water, but quantum physics would just laugh in your face. You fail even sooner when you enter uncontrolled circumstances. The best you can do is to say that the ocean must have a surface, and the human must have a mind, and each of these things has some kind of nearly-predictable effect in this or that approximate region of space. They're both abstractions, when you get right down to it. They go away when we stop looking at them. To paraphrase Searle, surface tension is not intrinsic to the physics. The troubling conclusion: object permanence is a sham! Christopher Luebcke : > Honestly, the out-of-hand rejection of "thinking" or "consciousness" as an > emergent property of a complex biological system baffles me. Especially > because dead brains don't think. I certainly hope you aren't ascribing that position to *me*, because I'm equally baffled by it. I can't imagine any other way they could work. Not without rewriting pretty much the whole of physics, at least. It was a little sloppy of me to use the terms "property" and "dualism". I admit it. I didn't feel like giving it a lot of thought at the time, and I suffered my just retribution for babbling about it anyway. From bbenzai at yahoo.com Sun Feb 21 22:11:22 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 21 Feb 2010 14:11:22 -0800 (PST) Subject: [ExI] How to Replace Uninjured Parts of the Brain In-Reply-To: Message-ID: <334609.45493.qm@web113618.mail.gq1.yahoo.com> meteor girl asked: > Will someone take a guess at how one might go about > replacing each > individual synapse, neuron, their neurotransmitters, etc. > without losing > continuity? OK, here's one guess: First, you need to die (for legal reasons). Then, your brain gets prepped, chilled, and ends up in liquid nitrogen. OR preserved in some other way that preserves the microscopic structure (plastination of some kind possibly, as discussed in another post). An unknown time later, your brain gets destructively scanned and the resulting information is loaded into a system that can sort through it, and compile it into a form that can be used to drive a hardware system of some kind, in such a way as to replicate all the dynamic information processes that used to go on in your biological brain. That hardware system is activated, and your mind is run. This should preserve continuity if you accept that continuity is preserved over an episode of general anaesthesia, for example. This replacement swaps biological neurons etc. for non-biological equivalents. I'm afraid that at present, that's your only option. Another guess: We'll almost certainly have neural interfaces before we have artificial neurons, so there may be a possibility of hooking certain parts of the brain up to external processors that can emulate neurons. Just today I was discussing neural interfaces with someone doing research on neuromorphic systems, and learned of a method involving a bit of genetic engineering to insert photoreceptor proteins into the cell membrane, so that light of a specific frequency can trigger the neuron to fire.
Coupled with a complementary method using fluorescent proteins, it may be possible to create a two-way interface that doesn't involve stabbing neurons with spikes and electrocuting them, as current systems do. It may then be practical to route signals in and out of the brain to a computer system that can replace or duplicate various neural circuits. But if it's actual synthetic neurons in your brain alongside the biological ones that you want, I think you'll have to wait for nanotechnology so sophisticated that by the time it's developed, the other options will be easy, and will have been around for a while. Ben Zaiboc From steinberg.will at gmail.com Mon Feb 22 02:19:16 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 21 Feb 2010 21:19:16 -0500 Subject: [ExI] Is this FTL Communication? Message-ID: <4e3a29501002211819i735f457ao1b143c3683d7020e@mail.gmail.com> Imagine we build a very, very long, very, very solid rod. One end of the rod is located very, very far away. Will pushing on one end of the rod effect an immediate response on the other? Or is this just a longitudinal wave of nuclear forces propagating at From cluebcke at yahoo.com Mon Feb 22 02:39:37 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Sun, 21 Feb 2010 18:39:37 -0800 (PST) Subject: [ExI] Is this FTL Communication? In-Reply-To: <4e3a29501002211819i735f457ao1b143c3683d7020e@mail.gmail.com> References: <4e3a29501002211819i735f457ao1b143c3683d7020e@mail.gmail.com> Message-ID: <35464.47390.qm@web111204.mail.gq1.yahoo.com> The latter. It'll actually transmit much slower than light. From cluebcke at yahoo.com Mon Feb 22 03:55:20 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Sun, 21 Feb 2010 19:55:20 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: <800855.44985.qm@web36504.mail.mud.yahoo.com> References: <800855.44985.qm@web36504.mail.mud.yahoo.com> Message-ID: <892864.31569.qm@web111215.mail.gq1.yahoo.com> Great. I would then gently urge you to consider that surface tension may be an emergent property of more than just water molecules. From cluebcke at yahoo.com Mon Feb 22 03:57:35 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Sun, 21 Feb 2010 19:57:35 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <304059.67870.qm@web113614.mail.gq1.yahoo.com> <412622.93012.qm@web111215.mail.gq1.yahoo.com> Message-ID: <293647.18400.qm@web111202.mail.gq1.yahoo.com> Christopher Luebcke : >> Honestly, the out-of-hand rejection of "thinking" or "consciousness" as an >> emergent property of a complex biological system baffles me. Especially >> because dead brains don't think. > > >I certainly hope you aren't ascribing that position to *me*, because >I'm equally baffled by it. I can't imagine any other way they could >work. Not without rewriting pretty much the whole of physics, at >least. No, I should have been less hasty; that was a general reaction to what appears to me to be an excruciatingly fruitless back and forth on the subject, but wasn't directed at you. From steinberg.will at gmail.com Mon Feb 22 06:00:47 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Mon, 22 Feb 2010 01:00:47 -0500 Subject: [ExI] Why the CRA is false, methodically Message-ID: <4e3a29501002212200h22f48637m692cd517f5f4a9c3@mail.gmail.com> Searle's Chinese Room has become the heart of darkness of the recent conversations. The thing seems logically valid. But the error is hidden beneath layers of metaphor and an oversimplification of human thought. The man in the room, given access to his books of responses, can only utilize a limited set of data. A totally plain "a for b" system would limit the room to inputs whose answers never change--base facts. Any i/o on a changing system--the people, environment, events in general--cannot function with this limited set. Another argument might say that the bookset instead contains the complete possibility tree of human questions and responses. Every question is prefaced by a million if/then statements, in the vein of (If the person is a man named Kenny who is 34 and has recently been a bit depressed and is married to Laura whose father just died and today is their anniversary and...etc. for a while, then respond "Hey Kenny, how has Laura been?"). The assumption of such an improbably large set of books is more proof that the CRA lies in a strictly theoretical realm. Anyone can see that the brain is not equivalent to this system. Worst of all, Searle ignores reality and imagines his machine as magically produced from nothing. Even basic physics dictates that building the machine would entail the production of the books! This apparently magical information cannot come from nowhere, but instead had to have been compiled somehow, by an understanding system! In fact, the very rating system of how well the machine understands Chinese is based on the opinions of...those who speak Chinese! With these roadblocks, there is only one way around. The man must be a scribe as well, changing rules in his books according to other, higher-level instructions. When there are enough of THESE instructions, the machine will seem conscious. But in doing so, the machine has had to incorporate environmental information! The man, though he may lack understanding of the symbols themselves, can know his rules perfectly and so know which words go after which. The only thing stopping him from being totally aware is the lack of some way to associate these words with meanings. If he only knew what a few words actually meant, the man would be able to define more based on context. These could only be acquired through "senses," which would transmit environmental symbolic signatures to the room, allowing the man to associate things he knew with the pictures. If this physically possible, logically valid form of the CRA is used, then the man, knowing the rules and the symbols, will have gone through an identical process to one all of us have undertaken: learning a language. The ONLY functional version of Searle's Room implies that the man has gained understanding of a language, at least as much as our brains do with their methods.
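A toy sketch makes the two rooms contrasted above concrete (illustrative code, assuming only standard Python; the symbol names and stored replies are invented for the example and come from nowhere in Searle or in this thread):

    rule_book = {"squiggle": "squoggle"}      # the fixed "a for b" book

    def frozen_room(symbol):
        # The plain Chinese Room: pure lookup, no sense data, no updates.
        # Anything outside the book of base facts simply has no answer.
        return rule_book.get(symbol)

    def scribe_room(symbol, context):
        # The scribe version: higher-level instructions let the man
        # rewrite his own book using environmental context.
        if symbol not in rule_book:
            rule_book[symbol] = context
        return rule_book[symbol]

    assert frozen_room("how-is-Laura") is None        # not a base fact
    scribe_room("how-is-Laura", "she-has-been-sad")   # a new rule is written
    assert frozen_room("how-is-Laura") == "she-has-been-sad"

The only difference between the two rooms is permission to rewrite the book from context, and that permission is learning under another name, which is exactly the point being argued here.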
By the time the information gathering required to produce the existence of the system is complete, it will have formed logically equivalent structures in the man's head. No matter how you run it, learning, by a human, in a sense we can ALL agree on, MUST be applied somewhere. The information has to come from somewhere and has to go somewhere. And it is exactly the same--environmental cues being applied meta-algorithmically to a human mind, which will still carry on with its bizarre human intelligence. The CRA is a tautology. Understanding=understanding, it comes with the package. Please give this a thought before continuing to use the flawed version, though a valid and true version could help for analysis. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Feb 22 07:38:28 2010 From: spike66 at att.net (spike) Date: Sun, 21 Feb 2010 23:38:28 -0800 Subject: [ExI] Is this FTL Communication? In-Reply-To: <4e3a29501002211819i735f457ao1b143c3683d7020e@mail.gmail.com> References: <4e3a29501002211819i735f457ao1b143c3683d7020e@mail.gmail.com> Message-ID: <3F250287207A4426B4211EAEBAC643B7@spike> ...Will Steinberg Subject: [ExI] Is this FTL Communication? ...Imagine we build a very, very long, very, very solid rod. One end of the rod is located very, very far away. Will pushing on one end of the rod effect an immediate response on the other? Or is this just a longitudinal wave of nuclear forces propagating at From stathisp at gmail.com (Stathis Papaioannou) Subject: Re: [ExI] Why the CRA is false, methodically References: <4e3a29501002212200h22f48637m692cd517f5f4a9c3@mail.gmail.com> Message-ID: 2010/2/22 Will Steinberg : > Searle's Chinese Room has become the heart of darkness of the recent > conversations. The thing seems logically valid. But the error is hidden > beneath layers of metaphor and an oversimplification of human thought. > The man in the room, given access to his books of responses, can only > utilize a limited set of data. A totally plain "a for b" system would limit > the room to inputs whose answers never change--base facts. Any i/o on a > changing system--the people, environment, events in general--cannot function > with this limited set. Since the CRA is an argument against computationalism, I have assumed that the man implements a digital computer programmed to speak Chinese. This is beyond the capabilities of modern computers and their programmers, but it should be possible unless the brain does something fundamentally non-computable when it processes language. I expect that if we did have computers that could eloquently argue their case it would cause a thinning in the ranks of the only-brains-can-think crowd. -- Stathis Papaioannou From pharos at gmail.com Mon Feb 22 10:24:33 2010 From: pharos at gmail.com (BillK) Date: Mon, 22 Feb 2010 10:24:33 +0000 Subject: [ExI] Exi-chat - The new Dungeons & Discourse Message-ID: Maybe we need a few of the Postmodernists to help out those unfortunates caught in the Swobe black hole. Are they using 'real' words or is it all an illusion? Simulacra and Simulation by Jean Baudrillard. The postmodern age, where the simulacrum precedes the original and the distinction between reality and representation breaks down. There is only the simulacrum.
BillK From stathisp at gmail.com Mon Feb 22 11:03:03 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 22 Feb 2010 22:03:03 +1100 Subject: [ExI] The alleged existence of consciousness In-Reply-To: References: <762702.96421.qm@web36504.mail.mud.yahoo.com> Message-ID: On 22 February 2010 04:34, Dave Sill wrote: > On Sat, Feb 20, 2010 at 7:47 PM, Stathis Papaioannou wrote: >> >> There may be other ways to make a zombie (although there are separate >> arguments against that possibility also) but following the >> architecture of the brain is not one of them. Any structure that >> reproduces the pattern of neural firing in the brain will also >> reproduce the intelligence and the consciousness of the brain. > > Agreed. > >> If the brain's intelligence would remain intact despite changes that >> would eliminate consciousness, then consciousness would be a useless >> complication. > > But what if the changes required resulted in more complication? Then > removing consciousness would be a useless complication. That's possible but it seems very unlikely. Consciousness is extremely elaborate, and it closely mirrors intelligence. It seems incredible to suppose that, let's say, water has this intrinsic ability to produce consciousness, such that if our cells were instead based on liquid ammonia as a solvent we would have been zombies. How did we get so lucky? >>>> The best explanation is that the brain we happen to >>>> have ended up with is not specially blessed, and any other brain based >>>> on similar patterns resulting in similar behaviour would have also had >>>> a similar consciousness. >>> >>> Intuitively, that seems likely. But we just don't know. >> >> We do know. The partial brain replacement thought experiment makes it >> true as a matter of logical necessity; in other words more certainly >> true than any mere empirical fact, which could be proved false >> tomorrow. I would really like to hear a rebuttal, but no-one has yet >> attempted one. > > The partial brain replacement thought experiment only covers the case > of a brain that works exactly like ours. It seems possible to me that > evolution could have taken a different path, and conceivable that one > or more of those paths might have resulted in intelligence without > consciousness. As you replace more and more neurons you end up with a large volume of artificial brain. This volume does not have to be structurally similar to normal brain tissue on the inside: all it has to do is process environmental inputs normally and interact with the remaining biological tissue normally. Eventually almost the entire brain will be replaced, behaving normally and sending normal signals to one remaining neuron. When that last neuron is replaced, we have an entire artificial brain which has as its essential characteristic that it reproduces the I/O behaviour of the original human interacting with the environment. That is, at this end point it need not share any structural features with the original brain, as long as it behaves exactly like the original human. And this artificial brain would have to give rise to consciousness in the same way as the original human, since it is not plausible that the consciousness gradually fades or suddenly disappears at some point during the replacement process. The conclusion is that any entity which exactly reproduces the behaviour of the original human will also reproduce the consciousness of the original human. 
Entities which behave differently will, of course, have different consciousnesses, but again it is implausible that consciousness could simply disappear altogether with some small increment away from standard human behaviour. -- Stathis Papaioannou From stathisp at gmail.com Mon Feb 22 11:13:26 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 22 Feb 2010 22:13:26 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <815945.61890.qm@web36501.mail.mud.yahoo.com> References: <815945.61890.qm@web36501.mail.mud.yahoo.com> Message-ID: On 22 February 2010 03:11, Gordon Swobe wrote: > --- On Sun, 2/21/10, Stathis Papaioannou wrote: > >> But the computation may, for example, predict how far the >> nail will be driven into the wood, which is a replication of a property >> of the real event. > > "Predicting an event" != "replication of a property". If you're driving the nail into the wood to see how far in it will go then the simulation does actually replicate that aspect of the event. And the simulation can then be put to work controlling a hammer-wielding robot. It's not the same as a person, but it can perform that particular function of a person. In fact a computer controlling a robot can perform any function of the person whatsoever, but you claim that the exception is consciousness. What is it about consciousness that makes it unique among all the qualities in the universe? > I consider computations of natural processes including brain processes as acausal; that is, the computations describe and predict natural processes but they do not determine them. > > In general, nature "follows" no supposed laws or algorithms. She just does what she does, and humans talk about it with computations and so-called laws of physics. Yes, this is something I more or less said before. There is no fundamental low level difference between a computer implementing a program and a brain generating thought. -- Stathis Papaioannou From bbenzai at yahoo.com Mon Feb 22 13:00:03 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 22 Feb 2010 05:00:03 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: Message-ID: <981194.3700.qm@web113619.mail.gq1.yahoo.com> Spencer Campbell wrote: > We have a property, "thinking", and nowhere to put it. > Right smack > into dualism again, and at the moment I don't much feel > like working > out the possible implications. Not a problem at all, and not dualism. Replace "Thinking" with "Ticking", and "Brain" with "Clock". Where do you 'put' the ticking? Is it dualism to acknowledge that clocks tick? I've noticed a few times that there seems to be a bit of confusion (or in some cases maybe misrepresentation) over what 'dualism' means. I take it to mean the hypothesis that there is an immaterial and *unexplainable* component to the mind, a 'soul' if you like. This is different to acknowledging that there are nouns and verbs. As John K Clark has said many times, the mind is not a noun, it's a verb. Acknowledging that verbs exist is not dualism. Verbs are immaterial, but not supernatural. They're essential components of any dynamical system, and can be completely characterised in informational terms. As has been mentioned ad nauseam, a description of a thing is not the thing itself. This applies to verbs as well as physical things. A description is necessary if you want to build a system to behave in a particular way, though.
The information that characterises the verb is used to get an assemblage of matter and energy to perform the verb. This doesn't contradict materialism in any way. Ben Zaiboc From gts_2000 at yahoo.com Mon Feb 22 13:37:16 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 22 Feb 2010 05:37:16 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <993547.87075.qm@web36502.mail.mud.yahoo.com> --- On Mon, 2/22/10, Stathis Papaioannou wrote: >> In general, nature "follows" no supposed laws or >> algorithms. She just does what she does, and humans talk >> about it with computations and so-called laws of physics. > > Yes, this is something I more or less said before. There is > no fundamental low level difference between a computer > implementing a program and a brain generating thought. You start your first sentence with "yes" as if you agree with my words, but your second sentence shows that either you disagree or misunderstand. I consider computations of natural processes as acausal descriptions of material processes. By "acausal", I mean that although we may ascribe computational descriptions to natural processes, those computations will reflect no real underlying causal mechanisms. Such programs *describe* and *predict* natural processes but they do not *cause* natural processes. This means we cannot actually duplicate natural processes with software. We can only *simulate* those processes, and simulations of natural processes have no real-world qualities; they exist only as digital descriptions of real things, as digital models of real things, as digital depictions of real things. Consider the example of a computation of a weather system, let us say a hurricane. Given enough information and the correct inputs, our computation will in principle perfectly describe and predict the hurricane's behavior. I think you will agree however such a perfect simulation would not prove that programs actually *cause* hurricane behavior. I do not believe the brain qualifies for any special exception to this rule. What applies to hurricanes and other natural processes applies also to the human brain. Despite the fact that we sometimes think of the brain as an "information processor", on close inspection our use of that term does not justify abandoning the view that the brain exists as just another natural object in nature, especially as it concerns consciousness. So then just as real hurricanes do not exist as computations, neither do real brains exist as computations. -gts From stathisp at gmail.com Mon Feb 22 14:22:20 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 23 Feb 2010 01:22:20 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <993547.87075.qm@web36502.mail.mud.yahoo.com> References: <993547.87075.qm@web36502.mail.mud.yahoo.com> Message-ID: On 23 February 2010 00:37, Gordon Swobe wrote: > --- On Mon, 2/22/10, Stathis Papaioannou wrote: > >>> In general, nature "follows" no supposed laws or >>> algorithms. She just does what she does, and humans talk >>> about it with computations and so-called laws of physics. >> >> Yes, this is something I more or less said before. There is >> no fundamental low level difference between a computer >> implementing a program and a brain generating thought. > > You start your first sentence with "yes" as if you agree with my words, but your second sentence shows that either you disagree or misunderstand. 
> > I consider computations of natural processes as acausal descriptions of material processes. By "acausal", I mean that although we may ascribe computational descriptions to natural processes, those computations will reflect no real underlying causal mechanisms. Such programs *describe* and *predict* natural processes but they do not *cause* natural processes. The computations undergo the same causal mechanisms as anything else. This bit hits that bit, which rolls onto the other bit, which closes the circuit and makes a light flash, and so on. At the basic physical level there is no "program", that's just something the human mind superimposes on a certain type of physical activity. But you make the outrageous claim that any machine which could be seen as implementing a program loses any hope of being conscious. We might run into an alien intelligence which everyone assumes to be conscious until we figure out that there's a NAND gate made of protein in the ion channels of its neurons - and then suddenly we realise that it must be a zombie. > This means we cannot actually duplicate natural processes with software. We can only *simulate* those processes, and simulations of natural processes have no real-world qualities; they exist only as digital descriptions of real things, as digital models of real things, as digital depictions of real things. > > Consider the example of a computation of a weather system, let us say a hurricane. Given enough information and the correct inputs, our computation will in principle perfectly describe and predict the hurricane's behavior. I think you will agree however such a perfect simulation would not prove that programs actually *cause* hurricane behavior. > > I do not believe the brain qualifies for any special exception to this rule. What applies to hurricanes and other natural processes applies also to the human brain. Despite the fact that we sometimes think of the brain as an "information processor", on close inspection our use of that term does not justify abandoning the view that the brain exists as just another natural object in nature, especially as it concerns consciousness. So then just as real hurricanes do not exist as computations, neither do real brains exist as computations. We can certainly duplicate natural processes with computers. We do it all the time. With the weather, we could have a computer controlling the blowing air and dropping water here and there. With language, we could have a computer interpreting an audio feed and responding via a loudspeaker. But you say the computer can do anything else in the universe (since you agree that physics is computable) *except* be conscious. Consciousness has a magical status distinguishing it from the rest of physics. -- Stathis Papaioannou From gts_2000 at yahoo.com Mon Feb 22 14:29:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 22 Feb 2010 06:29:02 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <163367.15486.qm@web36501.mail.mud.yahoo.com> > What is it about consciousness that makes it unique among all the > qualities in the universe? I consider consciousness unique only in so much as we can know about it only from the first-person perspective. Aside from that, it differs in no important way from any other material biological process. The reason for all the befuddlement about consciousness through the ages comes down to this simple brute fact that consciousness has a subjective first-person ontology vs. 
everything else in the world that has a third-person objective ontology. This difference confuses people; the mind wants to find significance in the difference. It wants to make something out of it. Unfortunate that so many philosophers and theologians tried to make something out of it. They only made a huge mess of it. Then about 100 years ago came the birth of analytic philosophy and with it a new respect for sanity and common sense. -gts From stathisp at gmail.com Mon Feb 22 15:15:59 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 23 Feb 2010 02:15:59 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <163367.15486.qm@web36501.mail.mud.yahoo.com> References: <163367.15486.qm@web36501.mail.mud.yahoo.com> Message-ID: On 23 February 2010 01:29, Gordon Swobe wrote: >> What is it about consciousness that makes it unique among all the >> qualities in the universe? > > I consider consciousness unique only in so much as we can know about it only from the first-person perspective. Aside from that, it differs in no important way from any other material biological process. But every aspect of the world except consciousness can be duplicated by a computer. > The reason for all the befuddlement about consciousness through the ages comes down to this simple brute fact that consciousness has a subjective first-person ontology vs. everything else in the world that has a third-person objective ontology. This difference confuses people; the mind wants to find significance in the difference. It wants to make something out of it. > > Unfortunate that so many philosophers and theologians tried to make something out of it. They only made a huge mess of it. > > Then about 100 years ago came the birth of analytic philosophy and with it a new respect for sanity and common sense. Most philosophers with a bias towards functionalism are in the empiricist/analytic/positivist tradition. Those who claim that the brain has properties that make it unique in the universe are sometimes lumped together with the vitalists and the dualists. It's perhaps name-calling, but it's what happens. -- Stathis Papaioannou From amon at doctrinezero.com Mon Feb 22 14:34:26 2010 From: amon at doctrinezero.com (Amon Zero) Date: Mon, 22 Feb 2010 14:34:26 +0000 Subject: [ExI] Conference: Humanity+ UK 2010 Message-ID: <5bae33f11002220634s5de05579o879c2af3d87e051b@mail.gmail.com> (Apologies for cross-posting, but this should be interesting / relevant to people on this list) How will accelerating technological change affect human mental and physical capabilities as well as the environment in which we live? What challenges and opportunities does the human race face in this age of unprecedented change? How can we help shape the future, spread the benefits and mitigate the risks of the coming technological revolutions? Humanity+ UK2010, a one-day conference in London on 24 April 2010, gathers together some leading thinkers to discuss these topics. Speakers include: *) Max More, on "Singularity Skepticism: Exposing Exponential Errors"; *) Anders Sandberg, on "Making humans smarter via cognitive enhancers"; *) Rachel Armstrong, on "The impact of living technology on the future of humanity"; *) Aubrey de Grey, on "Human regenerative engineering -
theory and practice"; *) David Pearce, on "The Abolitionist Project: Can biotechnology abolish suffering throughout the living world?"; *) Amon Twyman, on "Augmented perception and Transhumanist Art"; *) Natasha Vita-More, on "DIY Enhancement"; *) David Orban, on "The Singularity University", and "The Internet of Things"; *) Nick Bostrom, on "Reducing Existential Risks". For more details, including speaker biographies, see http://humanityplus-uk.com PLEASE NOTE: People attending this event need to REGISTER via the event website, http://humanityplus-uk.com. Attendance costs ?25 (or ?15 for non-waged), but is free to registered members of H+ UK. The event website also provides: *) an option to join H+ UK *) an option to register to join some of the speakers for dinner and further conversation in the evening *) blog postings by and about the speakers. For information about the event venue, see http://www.conwayhall.org.uk/where.htm -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Mon Feb 22 16:20:56 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 22 Feb 2010 08:20:56 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <398514.12186.qm@web36504.mail.mud.yahoo.com> --- On Mon, 2/22/10, Stathis Papaioannou wrote: > Those who claim that the brain has properties that make it unique in the > universe are sometimes lumped together with the vitalists and the > dualists. It's perhaps name-calling, but it's what happens. You and others here, not I, give the brain special status. You think the organic brain differs from other biological organs. You will agree with me for example when I assert that a digital simulation of a complete heart running on a computer does not equal a real complete heart capable of pumping real blood through its chambers. Yes? But then you will disagree with me when I assert the same exact principle with respect to brains: you will disagree when I assert that a digital simulation of a brain running on a computer does not equal a real brain capable of having real thoughts. Can you or anyone here explain why you think the brain deserves that special status without asserting mind/matter dualism? I don't think so. -gts From gts_2000 at yahoo.com Mon Feb 22 16:01:33 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 22 Feb 2010 08:01:33 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <704208.73539.qm@web36503.mail.mud.yahoo.com> --- On Mon, 2/22/10, Stathis Papaioannou wrote: >> I consider computations of natural processes as >> acausal descriptions of material processes. By "acausal", I >> mean that although we may ascribe computational descriptions >> to natural processes, those computations will reflect no >> real underlying causal mechanisms. Such programs *describe* >> and *predict* natural processes but they do not *cause* >> natural processes. > > The computations undergo the same causal mechanisms as > anything else. You say this, but then you say... > This bit hits that bit, which rolls onto the other bit, > which closes the circuit and makes a light flash, and so on. How can you call that "the same causal mechanisms as anything else?" You describe here the causal mechanisms of a digital computer, not those of a biological brain. > At the basic physical level there is no "program", that's just something > the human mind superimposes on a certain type of physical activity. 
In real digital computers I see real bitwise operations exactly like those you describe above. I see no such bitwise operations in the organic brain, or if I do then I see them only because I have abstracted so far away from its real biological processes that I might just as well call turnips computers too. -gts From cluebcke at yahoo.com Mon Feb 22 16:43:59 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Mon, 22 Feb 2010 08:43:59 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <398514.12186.qm@web36504.mail.mud.yahoo.com> References: <398514.12186.qm@web36504.mail.mud.yahoo.com> Message-ID: <840203.62830.qm@web111214.mail.gq1.yahoo.com> Two points: First, why do you believe that thinking can be an emergent property of physical brains, but not of digital simulations of brains? Second, your analogy is imperfect; a better comparison to the project underway would be artificial hearts, not digital simulations of hearts. Regardless, both artificial hearts, and digital simulations of hearts, can hold the property of "beating". From sparge at gmail.com Mon Feb 22 17:15:13 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 22 Feb 2010 12:15:13 -0500 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <398514.12186.qm@web36504.mail.mud.yahoo.com> References: <398514.12186.qm@web36504.mail.mud.yahoo.com> Message-ID: On Mon, Feb 22, 2010 at 11:20 AM, Gordon Swobe wrote: > > You think the organic brain differs from other biological organs. Of course brains are different from other organs: they perform a different function. A liver would make a terrible brain, and brains are lousy lungs. > You will agree with me for example > when I assert that a digital simulation of a complete heart running on a computer does not equal a > real complete heart capable of pumping real blood through its chambers. Yes? Agreed. > But then you will disagree with me when I assert the same exact principle with respect to brains: > you will disagree when I assert that a digital simulation of a brain running on a computer does not
> equal a real brain capable of having real thoughts. A digital simulation of a thing is never equal to the real thing, but that doesn't mean that a simulation can't have some intangible property of the real thing. > Can you or anyone here explain why you think the brain deserves that special status without asserting > mind/matter dualism? I don't think so. The brain doesn't deserve special status. Just as a digital simulation of a heart can have a pulse, so can simulated brains have thoughts. -Dave From pharos at gmail.com Mon Feb 22 20:20:06 2010 From: pharos at gmail.com (BillK) Date: Mon, 22 Feb 2010 20:20:06 +0000 Subject: [ExI] The Dynamite Prize in Economics Message-ID: Alan Greenspan has been judged the economist most responsible for causing the Global Financial Crisis. He and 2nd and 3rd place finishers Milton Friedman and Larry Summers have won the first (and hopefully last) Dynamite Prize in Economics. They have been judged to be the three economists most responsible for the Global Financial Crisis. More figuratively, they are the three economists most responsible for blowing up the global economy. Dynamite Prize Citations Alan Greenspan (5,061 votes): As Chairman of the Federal Reserve System from 1987 to 2006, Alan Greenspan both led the over-expansion of money and credit that created the bubble that burst and aggressively promoted the view that financial markets are naturally efficient and in no need of regulation. Milton Friedman (3,349 votes): Friedman propagated the delusion, through his misunderstanding of the scientific method, that an economy can be accurately modeled using counterfactual propositions about its nature. This, together with his simplistic model of money, encouraged the development of fantasy-based theories of economics and finance that facilitated the Global Financial Collapse. Larry Summers (3,023 votes): As US Secretary of the Treasury (formerly an economist at Harvard and the World Bank), Summers worked successfully for the repeal of the Glass-Steagall Act, which since the Great Crash of 1929 had kept deposit banking separate from casino banking. He also helped Greenspan and Wall Street torpedo efforts to regulate derivatives. The vote totals for the other finalists were:
Fischer Black and Myron Scholes 2,016
Eugene Fama 1,668
Paul Samuelson 1,291
Robert Lucas 912
Richard Portes 433
Edward Prescott and Finn E. Kydland 403
Assar Lindbeck 375
This blog established the prize in response to attempts by economists to evade responsibility for the crisis by calling it an unpredictable, "Black Swan" event. In reality, the public perception that economic theories and policies helped cause the crisis is correct. -------------------------------------- More info about the misdeeds of these economists can be found at the voting page (poll now closed). (I voted for Greenspan also!) BillK From jonkc at bellsouth.net Mon Feb 22 20:48:06 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 22 Feb 2010 15:48:06 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <993547.87075.qm@web36502.mail.mud.yahoo.com> References: <993547.87075.qm@web36502.mail.mud.yahoo.com> Message-ID: <6704CA0E-B0E8-4455-AF08-E4B1CF6B883D@bellsouth.net> Since my last post Gordon Swobe has posted 5 times. > > Such programs *describe* and *predict* natural processes but they do not *cause* natural processes. Swobe is correct but I don't see his point.
True, a computer simulation of an apple falling from a tree is not the same as an apple falling from a tree, nor does it cause that fruit to descend; but when we human beings think about an apple falling from a tree, that is to say when we make a mental simulation of it, exactly the same thing is true. Swobe says there is a vast difference between these two cases but neglects to tell us what it is other than to say over and over and over again, without the slightest hint of evidence, that all X understands is that squiggle follows squaggle. > This means we cannot actually duplicate natural processes with software. And this means that, despite what some paranormal advocates claim, human beings cannot actually duplicate natural processes with thought alone. > We can only *simulate* those processes, and simulations of natural processes have no real-world qualities; they exist only as digital descriptions of real things, as digital models of real things, as digital depictions of real things. Humans can only *think about* those processes, and thinking about natural processes has no real-world qualities; it exists only as mental descriptions of real things, as mental models of real things, as mental depictions of real things. > Given enough information and the correct inputs, our computation will in principle perfectly describe and predict the hurricane's behavior. I think you will agree however such a perfect simulation would not prove that programs actually *cause* hurricane behavior. Again I don't see his point; I believe even Swobe would agree that his thinking about hurricanes does not actually *cause* hurricane behavior. > I do not believe the brain qualifies for any special exception to this rule. What applies to hurricanes and other natural processes applies also to the human brain. At last, a complete sentence that is 100% correct! > Despite the fact that we sometimes think of the brain as an "information processor", on close inspection our use of that term does not justify abandoning the view that the brain exists as just another natural object in nature And we're right back to gibberish again. The clear implication from the above is that an information processor is not a "natural object in nature". Even without the redundancy, that is ridiculous. > So then just as real hurricanes do not exist as computations, neither do real brains exist as computations. True, but real minds do. > I consider consciousness unique only in so much as we can know about it only from the first-person perspective. Aside from that, it differs in no important way from any other material biological process. Swobe's use of the word "only" doesn't seem quite appropriate to me, as the subjective/objective dichotomy is about as important as things get. It's like saying "the weapon will only destroy the galaxy, aside from that the Universe as a whole will be affected in no important way". > This difference confuses people Indeed! > the mind wants to find significance in the difference. It wants to make something out of it. Like a mysterious undefined "something" that is undetectable by the Scientific Method but is nevertheless extraordinarily important, that 3 pounds of grey goo has but a computer does not and never will, because, because, well just because. > Unfortunate that so many philosophers and theologians tried to make something out of it. They only made a huge mess of it. And most of those befuddled philosophers and all of those befuddled theologians had views almost identical with Swobe's, not with mine.
I think it's time to try something new. > > You and others here, not I, give the brain special status. I would give the brain the same status I would give any computer, digital or analog, that works extremely well. > You will agree with me for example when I assert that a digital simulation of a complete heart running on a computer does not equal a real complete heart capable of pumping real blood through its chambers. Yes? Yes, and thinking about a heart is not the same as a real heart either. > > But then you will disagree with me when I assert the same exact principle with respect to brains No I agree. > you will disagree when I assert that a digital simulation of a brain running on a computer does not equal a real brain I don't disagree with Swobe about that either, a real brain weighs about 3 pounds and it's meaningless to ask how much a computer simulation weighs. That is a difference so they are not the same. > Can you or anyone here explain why you think the brain deserves that special status without asserting mind/matter dualism? I don't think so. > Swobe seems to expect Extropians to react to the word "dualism" as if somebody said "fuck" at a sunday school picnic, I think that's silly. I'm not afraid to say I think mind is not the same as brain, I think that should be no more controversial than saying nouns are not identical with verbs and adjectives. John K Clark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Feb 22 20:29:13 2010 From: spike66 at att.net (spike) Date: Mon, 22 Feb 2010 12:29:13 -0800 Subject: [ExI] funny headlines again: the dangerous of outsourcing editing to india {8^D Message-ID: <1179706F14CA4ACD8618AADAE32464FE@spike> Do forgive my obsessiveness about the explosion of grammatical errors in the news biz, but these had me in stitches. The article is about errors in a scientific paper, but the article itself has three grammatical errors in 15 sentences. I expect journalism majors to spot these bugs and fix them. Note how clean Damien's books are, from a spelling and grammar perspective. Unlike this article's language mechanics, his books are never grammatically fucked upwardly. http://www.foxnews.com/scitech/2010/02/22/scientist-retracts-paper-rising-se a-levels-errors/ spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Outlook.jpg Type: image/jpeg Size: 21074 bytes Desc: not available URL: From thespike at satx.rr.com Mon Feb 22 21:26:11 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 22 Feb 2010 15:26:11 -0600 Subject: [ExI] funny headlines again: the dangerous of outsourcing editing to india {8^D In-Reply-To: <1179706F14CA4ACD8618AADAE32464FE@spike> References: <1179706F14CA4ACD8618AADAE32464FE@spike> Message-ID: <4B82F673.40703@satx.rr.com> On 2/22/2010 2:29 PM, spike wrote: > Note how clean Damien's books are, from a spelling and grammar > perspective. Unlike this article's language mechanics, his books are > never grammatically fucked upwardly. My grammar is usually kosher, but the clean spelling in some of them is partly due to the careful proofreading by one Greg "Spike" Jones (and in other cases, some other kindly souls). 
Damien Broderick

From pharos at gmail.com  Mon Feb 22 21:38:38 2010
From: pharos at gmail.com (BillK)
Date: Mon, 22 Feb 2010 21:38:38 +0000
Subject: [ExI] funny headlines again: the dangerous of outsourcing editing to india {8^D
In-Reply-To: <4B82F673.40703@satx.rr.com>
References: <1179706F14CA4ACD8618AADAE32464FE@spike> <4B82F673.40703@satx.rr.com>
Message-ID: 

On Mon, Feb 22, 2010 at 9:26 PM, Damien Broderick wrote:
> My grammar is usually kosher, but the clean spelling in some of them is
> partly due to the careful proofreading by one Greg "Spike" Jones (and in
> other cases, some other kindly souls).
>

Yup, proofreading is very impotent.

The the impotence of proofreading
By Taylor Mali
www.taylormali.com

Has this ever happened to you? You work very horde on a paper for English clash And then get a very glow raid (like a D or even a D=) and all because you are the word's liverwurst spoiler. Proofreading your peppers is a matter of the the utmost impotence.

This is a problem that affects manly, manly students. I myself was such a bed spiller once upon a term that my English teacher in my sophomoric year, Mrs. Myth, said I would never get into a good colleague. And that's all I wanted, just to get into a good colleague. Not just anal community colleague, because I wouldn't be happy at anal community colleague. I needed a place that would offer me intellectual simulation, I really need to be challenged, challenged menstrually. I know this makes me sound like a stereo, but I really wanted to go to an ivory legal colleague. So I needed to improvement or gone would be my dream of going to Harvard, Jail, or Prison (in Prison, New Jersey).

So I got myself a spell checker and figured I was on Sleazy Street.

But there are several missed aches that a spell chukker can't can't catch catch. For instant, if you accidentally leave a word your spell exchequer won't put it in you. And God for billing purposes only you should have serial problems with Tori Spelling your spell Chekhov might replace a word with one you had absolutely no detention of using. Because what do you want it to douch? It only does what you tell it to douche. You're the one with your hand on the mouth going clit, clit, clit. It just goes to show you how embargo one careless clit of the mouth can be.

Which reminds me of this one time during my Junior Mint. The teacher read my entire paper on A Sale of Two Titties out loud to all of my assmates. I'm not joking, I'm totally cereal. It was the most humidifying experience of my life, being laughed at pubically.

So do yourself a flavor and follow these two Pisces of advice: One: There is no prostitute for careful editing. And three: When it comes to proofreading, the red penis your friend.

------------------
BillK

From cluebcke at yahoo.com  Mon Feb 22 21:54:04 2010
From: cluebcke at yahoo.com (Christopher Luebcke)
Date: Mon, 22 Feb 2010 13:54:04 -0800 (PST)
Subject: [ExI] The Dynamite Prize in Economics
In-Reply-To: 
References: 
Message-ID: <853137.87549.qm@web111216.mail.gq1.yahoo.com>

Couldn't have gone to a better guy.

Frontline just aired a related program called "The Warning", about Brooksley Born's failed (and probably Quixotically hopeless) attempt to regulate the derivatives market. Greenspan and Summers feature prominently, of course.
I'm no economist and derivatives baffle me, but it was quite good:

http://www.pbs.org/wgbh/pages/frontline/warning/?utm_campaign=homepage&utm_medium=proglist&utm_source=proglist

________________________________
From: BillK 
To: Extropy Chat 
Sent: Mon, February 22, 2010 12:20:06 PM
Subject: [ExI] The Dynamite Prize in Economics

Alan Greenspan has been judged the economist most responsible for causing the Global Financial Crisis. He and 2nd and 3rd place finishers Milton Friedman and Larry Summers have won the first - and hopefully last - Dynamite Prize in Economics. They have been judged to be the three economists most responsible for the Global Financial Crisis. More figuratively, they are the three economists most responsible for blowing up the global economy.

Dynamite Prize Citations

Alan Greenspan (5,061 votes): As Chairman of the Federal Reserve System from 1987 to 2006, Alan Greenspan both led the over-expansion of money and credit that created the bubble that burst and aggressively promoted the view that financial markets are naturally efficient and in no need of regulation.

Milton Friedman (3,349 votes): Friedman propagated the delusion, through his misunderstanding of the scientific method, that an economy can be accurately modeled using counterfactual propositions about its nature. This, together with his simplistic model of money, encouraged the development of fantasy-based theories of economics and finance that facilitated the Global Financial Collapse.

Larry Summers (3,023 votes): As US Secretary of the Treasury (formerly an economist at Harvard and the World Bank), Summers worked successfully for the repeal of the Glass-Steagall Act, which since the Great Crash of 1929 had kept deposit banking separate from casino banking. He also helped Greenspan and Wall Street torpedo efforts to regulate derivatives.

The vote totals for the other finalists were:
Fischer Black and Myron Scholes 2,016
Eugene Fama 1,668
Paul Samuelson 1,291
Robert Lucas 912
Richard Portes 433
Edward Prescott and Finn E. Kydland 403
Assar Lindbeck 375

This blog established the prize in response to attempts by economists to evade responsibility for the crisis by calling it an unpredictable, "Black Swan" event. In reality, the public perception that economic theories and policies helped cause the crisis is correct.

--------------------------------------
More info about the misdeeds of these economists can be found at the voting page (poll now closed).

(I voted for Greenspan also! )

BillK

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jonkc at bellsouth.net  Mon Feb 22 21:50:23 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 22 Feb 2010 16:50:23 -0500
Subject: [ExI] Is this FTL Communication?
In-Reply-To: <4e3a29501002211819i735f457ao1b143c3683d7020e@mail.gmail.com>
References: <4e3a29501002211819i735f457ao1b143c3683d7020e@mail.gmail.com>
Message-ID: 

On Feb 21, 2010, at 9:19 PM, Will Steinberg wrote:

> Imagine we build a very, very long, very, very solid rod. One end of the rod is located very, very far away. Will pushing on one end of the rod effect an immediate response on the other?
If the rod were infinitely rigid then yes, it would enable faster than light communication, but unfortunately infinite rigidity is not possible; the atoms in the rod must be held together by some force, and even the strong nuclear force, the most powerful known, is not infinitely strong. If I push on one end of a rod, when the other distant end will move depends on the speed of sound in the material of the rod and on its density. At the same density, the more rigid the material the faster the speed of sound. In diamond the speed of sound is 12,000 meters per second, about 35 times what it is in air and the fastest known, but light moves at 300,000,000 meters per second.

 John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pharos at gmail.com  Mon Feb 22 23:09:35 2010
From: pharos at gmail.com (BillK)
Date: Mon, 22 Feb 2010 23:09:35 +0000
Subject: [ExI] The Dynamite Prize in Economics
In-Reply-To: <853137.87549.qm@web111216.mail.gq1.yahoo.com>
References: <853137.87549.qm@web111216.mail.gq1.yahoo.com>
Message-ID: 

2010/2/22 Christopher Luebcke wrote:
> Couldn't have gone to a better guy.
> Frontline just aired a related program called "The Warning", about Brooksley
> Born's failed (and probably Quixotically hopeless) attempt to regulate the
> derivatives market. Greenspan and Summers feature prominently, of course.
> I'm no economist and derivatives baffle me, but it was quite good:
>

Trust me - you'll sleep easier not knowing about the derivatives problem.
;)

BillK

From spike66 at att.net  Mon Feb 22 23:19:39 2010
From: spike66 at att.net (spike)
Date: Mon, 22 Feb 2010 15:19:39 -0800
Subject: [ExI] funny headlines again: the dangerous of outsourcing editing to india {8^D
In-Reply-To: <4B82F673.40703@satx.rr.com>
References: <1179706F14CA4ACD8618AADAE32464FE@spike> <4B82F673.40703@satx.rr.com>
Message-ID: <028A4ACCFE084A279EB726EA3B69644B@spike>

> On 2/22/2010 2:29 PM, spike wrote:
>
> > Note how clean Damien's books are, from a spelling and grammar
> > perspective...
>
> My grammar is usually kosher, but the clean spelling in some
> of them is partly due to the careful proofreading by one Greg
> "Spike" Jones (and in other cases, some other kindly souls).
>
> Damien Broderick

I do thank you for the privilege of proofing your work sir, for it is always an honor, and easy work, being interesting reading and nearly perfect upon receipt. I do find the occasional Australianisms puzzling and amusing, as well as culturally educational.

In the process of proofreading, I have found some sparkling jewels of sf, particularly the explosive short story The Magi. As a genre, sf often suffers from insufficient character development, being more concerned with concepts and ideas. Good sf has characters that stick in one's mind, as do the lead characters in Golding's Lord of the Flies for instance, Piggy, SamnEric, Simon, etc. Long after reading The Magi, one still wonders what it would be like inside the mind of Raphael Silverman, or inside the mind of the mad priest. Excellent stuff!

spike

From jonkc at bellsouth.net  Mon Feb 22 23:49:37 2010
From: jonkc at bellsouth.net (John Clark)
Date: Mon, 22 Feb 2010 18:49:37 -0500
Subject: [ExI] Is this FTL Communication?
In-Reply-To: 
References: <4e3a29501002211819i735f457ao1b143c3683d7020e@mail.gmail.com>
Message-ID: 

When I reread my previous message I realized I said something that was misleading if not downright untrue.
What I meant to say was that when the other end of the very long rod moves depends entirely on the speed of sound in the material of the rod, and the speed of sound depends entirely on the rigidity and density of the material in question.

 John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lacertilian at gmail.com  Tue Feb 23 00:17:43 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Mon, 22 Feb 2010 16:17:43 -0800
Subject: [ExI] How not to make a thought experiment
In-Reply-To: <981194.3700.qm@web113619.mail.gq1.yahoo.com>
References: <981194.3700.qm@web113619.mail.gq1.yahoo.com>
Message-ID: 

Ben Zaiboc :
> I've noticed a few times that there seems to be a bit of confusion (or in some cases maybe misrepresentation) over what 'dualism' means. I take it to mean the hypothesis that there is an immaterial and *unexplainable* component to the mind, a 'soul' if you like.

I was not using the term rigorously, but I was going for property dualism rather than the substance dualism you seem to imply here. I was lazy; I should have specified, at least. There is no excuse.

Ben Zaiboc :
> Replace "Thinking" with "Ticking", and "Brain" with "Clock". Where do you 'put' the ticking? Is it dualism to acknowledge that clocks tick?

The way I was using it, yes. To make the philosophical claim that clocks tick is tantamount to saying that a tree falling in the woods with no one around to hear it does, indeed, make a sound.

I am just going to assume that everyone else finds this obvious as well, and leave it at that for now. Because: laziness.

From possiblepaths2050 at gmail.com  Tue Feb 23 00:53:46 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Mon, 22 Feb 2010 17:53:46 -0700
Subject: [ExI] better self-transcendence through selective brain damage
In-Reply-To: <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com>
References: <226303.69206.qm@web113612.mail.gq1.yahoo.com>
	<7A7A0CB2766A4C879426F3F66CD94F0C@spike>
	<87670FEF1360437981B4B84931C5FF94@spike>
	<20100212221046.5.qmail@syzygy.com>
	<710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com>
Message-ID: <2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com>

Spike wrote:
And make a cubic buttload of money of course. >>

David replied:
I trust you meant "a boatload of money"? Much more pleasant way to carry the stuff. >>

Spike, such rough language!! ; ) But I hope David keeps in mind that you spent many years around rough and tumble aerospace engineers and IT guys, and so some of it invariably rubbed off on you! lol

John ; )

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From possiblepaths2050 at gmail.com  Tue Feb 23 01:00:29 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Mon, 22 Feb 2010 18:00:29 -0700
Subject: [ExI] Valentine's probability factor
In-Reply-To: 
References: <7A7A0CB2766A4C879426F3F66CD94F0C@spike>
	<87670FEF1360437981B4B84931C5FF94@spike>
	<20100212221046.5.qmail@syzygy.com>
	<710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com>
	<4B789A05.8030806@satx.rr.com>
	<8CC7BFBD25C775F-108C-BD05@webmail-d073.sysops.aol.com>
Message-ID: <2d6187671002221700s52608b27mfdaa2e1289a47845@mail.gmail.com>

2010/2/15 Natasha Vita-More 

> Emlyn, Damien, Olga, Aware, Ablainey -- all such beautiful stories!
>

I'm still waiting for the day when my custom designed (physically gorgeous, easygoing temperament, intellectually curious, just simply adores me) biologically based female android is decanted!
I just need to save up my money and hope at least *she* will be able to tolerate me! lol I have a pretty bad track record with the females already on the planet...

I remember a story quite similar to this, where a guy has a female android built just for him. And the story ends with her meeting the fellow and in the same moment rejecting him!

John ; )

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stathisp at gmail.com  Tue Feb 23 01:08:10 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 23 Feb 2010 12:08:10 +1100
Subject: [ExI] Is the brain a digital computer?
In-Reply-To: <398514.12186.qm@web36504.mail.mud.yahoo.com>
References: <398514.12186.qm@web36504.mail.mud.yahoo.com>
Message-ID: 

On 23 February 2010 03:20, Gordon Swobe wrote:
> --- On Mon, 2/22/10, Stathis Papaioannou wrote:
>
>> Those who claim that the brain has properties that make it unique in the
>> universe are sometimes lumped together with the vitalists and the
>> dualists. It's perhaps name-calling, but it's what happens.
>
> You and others here, not I, give the brain special status.
>
> You think the organic brain differs from other biological organs. You will agree with me for example when I assert that a digital simulation of a complete heart running on a computer does not equal a real complete heart capable of pumping real blood through its chambers. Yes? But then you will disagree with me when I assert the same exact principle with respect to brains: you will disagree when I assert that a digital simulation of a brain running on a computer does not equal a real brain capable of having real thoughts.
>
> Can you or anyone here explain why you think the brain deserves that special status without asserting mind/matter dualism? I don't think so.

You're setting yourself up for the obvious reply: it is possible to make an artificial heart that pumps blood just as well as a biological heart, but out of completely different materials. The same goes for any other organ: there is no obligation to replicate the actual cells in order to replicate the function. The brain should not be different in this respect.

-- 
Stathis Papaioannou

From msd001 at gmail.com  Tue Feb 23 01:11:33 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Mon, 22 Feb 2010 20:11:33 -0500
Subject: [ExI] better self-transcendence through selective brain damage
In-Reply-To: <2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com>
References: <226303.69206.qm@web113612.mail.gq1.yahoo.com>
	<7A7A0CB2766A4C879426F3F66CD94F0C@spike>
	<87670FEF1360437981B4B84931C5FF94@spike>
	<20100212221046.5.qmail@syzygy.com>
	<710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com>
	<2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com>
Message-ID: <62c14241002221711x77d28922kff82a8e14181b67f@mail.gmail.com>

2010/2/22 John Grigg :
> Spike, such rough language!! ; ) But I hope David keeps in mind that you
> spent many years around rough and tumble aerospace engineers and IT guys,
> and so some of it invariably rubbed off on you! lol

haha... was that intentional?

"rough and tumble aerospace" ?

re: "buttload" and "some of it ... rubbed off ..." ?
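[An aside on the "Is this FTL Communication?" exchange above: the gap between a push and a light signal is plain arithmetic. A minimal Python sketch follows, using only the speeds quoted in the thread; the 1,000 km rod length, the function name signal_delay, and the variable names are arbitrary choices for illustration, not anything from the messages.]

# Signal delay along a rigid rod versus light, with the figures quoted
# in the thread: ~12,000 m/s for sound in diamond (the fastest known)
# and 300,000,000 m/s for light.

SOUND_IN_DIAMOND = 12000        # meters per second
SPEED_OF_LIGHT = 300000000      # meters per second, rounded

def signal_delay(length_m, speed_m_s):
    # Time for a disturbance to traverse the given length.
    return length_m / speed_m_s

rod_length = 1000000.0  # a 1,000 km rod, purely for illustration

push = signal_delay(rod_length, SOUND_IN_DIAMOND)
light = signal_delay(rod_length, SPEED_OF_LIGHT)

print("push traverses the rod in %.1f seconds" % push)       # 83.3
print("light covers the distance in %.4f seconds" % light)   # 0.0033
print("light is faster by a factor of %d" % (push / light))  # 25000

[However rigid the material, the compression wave arrives tens of thousands of times later than light covering the same distance.]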
From possiblepaths2050 at gmail.com  Tue Feb 23 01:22:27 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Mon, 22 Feb 2010 18:22:27 -0700
Subject: [ExI] better self-transcendence through selective brain damage
In-Reply-To: <62c14241002221711x77d28922kff82a8e14181b67f@mail.gmail.com>
References: <226303.69206.qm@web113612.mail.gq1.yahoo.com>
	<7A7A0CB2766A4C879426F3F66CD94F0C@spike>
	<87670FEF1360437981B4B84931C5FF94@spike>
	<20100212221046.5.qmail@syzygy.com>
	<710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com>
	<2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com>
	<62c14241002221711x77d28922kff82a8e14181b67f@mail.gmail.com>
Message-ID: <2d6187671002221722u5c6e1523me1cdc534edc91ad5@mail.gmail.com>

On Mon, Feb 22, 2010 at 6:11 PM, Mike Dougherty wrote:

> 2010/2/22 John Grigg :
> > Spike, such rough language!! ; ) But I hope David keeps in mind that you
> > spent many years around rough and tumble aerospace engineers and IT guys,
> > and so some of it invariably rubbed off on you! lol

Mike replied:

> haha... was that intentional?
>
> "rough and tumble aerospace" ?
>
> re: "buttload" and "some of it ... rubbed off ..." ?
>

It was not intentional! LOL I must be more careful with people like you on the list! You and Spike, two of a kind! hee

John ; )

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lacertilian at gmail.com  Tue Feb 23 02:52:21 2010
From: lacertilian at gmail.com (Spencer Campbell)
Date: Mon, 22 Feb 2010 18:52:21 -0800
Subject: [ExI] Continuity of experience
Message-ID: 

Quoting from "How to Replace Uninjured Parts of the Brain":

Ben Zaiboc :
> This should preserve continuity if you accept that continuity is preserved over an episode of general anaesthesia, for example.

A very poor analogy! General anesthetics do not cause total cessation of activity in the brain. Preservation and destructive scanning; this is much more similar, if not identical, to resuscitation of the clinically dead. Yet, even clinical death is less dead than brain death. According to my cursory research, there is, in general, a period of several minutes between the two.

A thought experiment is in order! (The audience: "aaaaagh noooo")

What we are looking for, here, is continuity of experience. Let's suppose there exists some property, M (for "me"), such that continuity remains unbroken as long as the individual retains that property. Now, the first question is, what is necessary for M to be lost?

Let's consider five states of increasingly severe unconsciousness: drunk, asleep, knocked out (as in general anaesthesia), comatose, and dead.

In the first three cases, according to common knowledge, M is conserved. It may or may not disappear for a while, but it comes back if it does. I am still me after I wake up in the morning. So far, so good.

Comas are more tricky. There is no guarantee that consciousness will return, but, if it does, common knowledge likewise dictates that M must be conserved.

Death, in the sense of brain death, is totally impenetrable. Common knowledge says that M transfers from the body to some weird extradimensional medium which is thus far invulnerable to scientific inquiry, so we can't really trust common knowledge anymore. This is the state we're dealing with when we freeze or plasticize the brain.

It's worth noting at this point that M is a purely metaphysical concept, not subject to measurement. I act the same with M as I do without M.
There really isn't any way to prove that I remain myself even from second-to-second, let alone during transitions between gross mental states. Not even I would notice if my M suddenly evaporated. You might think that M could equally well stand for "memory", in which case it would be pretty easy to measure. It does not. To prove this is trivial, and that is where the experiment really begins. I think everyone can agree that M can only belong to one entity in the universe (at most) at any given time, no matter the nature of its existence. Yet, it is inarguably the case that, if we can scan a person's brain to reproduce their mind, we can do so as many times as we want. There is nothing in the laws of physics to prevent you from making two identical copies of me, or even three, but only one can possess M at any given time. So, due to some accident at the cloning facility, my mind is replicated twice. The two copies have EXACTLY the same memories, by definition, in the moment before they wake up. They begin to diverge immediately, unless also placed in identical virtual realities. It is unproblematic to claim that both of them really are the real me, in that case. I can be in two places at once. If you put them in real synthetic bodies, on the other hand, then the story is quite different. In the very first moment of operation, they experience different things (thanks to our old friend the Pauli exclusion principle); and so, they have different memories. Now they are definitely different individuals, though still indistinguishable to a casual observer. I can not be two people at once. This concludes the experiment. We're left with a final, troubling question: where is M? There are all kinds of answers you could come up with, but none of them are very convincing and all of them are pretty uncomfortable. Nolipsism says: nowhere, because M does not exist. Simple. Panpsychism says: everywhere, because individuality is a crock to begin with. Simple! Dualism says: there must be some arbitrary soul-sorting process, which of course we have no way to prove or disprove, built directly into the fabric of reality. Not so simple. All things considered, though, relatively appealing. You can see why most people go for this one. I tend to subscribe to nolipsism, but, in this case, I am superstitious. I will not be taking steps to preserve my brain, because I believe that I will stay dead even if a perfect copy of me is made. I have M0; my first copy has M1, my second M2, and so on. Being a selfish person at heart, I don't really care about blessing future generations with my knowledge. In fact, I would outright resent a duplicate of myself who gets to live while I remain dead. I'm not going to take any chances when it comes to potential supernatural complications. I just want to survive, as long as possible and as well as possible. All I can do is take solace in the fact that Pollock tells me it is pretty much a physical impossibility for me not to cling to this particular superstition, no matter how blatantly irrational it is. And it is pretty blatantly irrational. That doesn't mean it's false, though! 
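[Spencer's divergence point can be restated as a toy computation. The sketch below is only an illustration under an assumed premise, namely that a mind's next state is a deterministic function of its current state and its input, which is precisely what these threads dispute; the function step, the SHA-256 stand-in, and all the byte strings are hypothetical choices of mine, not anyone's proposed model of the brain.]

# Toy model of copy divergence: identical state plus identical inputs
# stays identical; the first differing input splits the copies for good.
import hashlib

def step(state, stimulus):
    # One "moment": new state = f(old state, input). SHA-256 stands in
    # for the real, unknown dynamics; any deterministic f would do.
    return hashlib.sha256(state + stimulus).digest()

original = copy = b"destructive scan of Spencer, nth degree"

# Identical virtual realities: no divergence is possible.
for stimulus in (b"white room", b"white room", b"white room"):
    original = step(original, stimulus)
    copy = step(copy, stimulus)
print(original == copy)   # True: same "memories", by definition

# Real synthetic bodies: the first differing photon starts the split.
original = step(original, b"photon arrives")
copy = step(copy, b"photon misses")
print(original == copy)   # False: different memories from here on

[Nothing in the sketch locates M, of course; it only shows why the two copies' memories are identical exactly as long as their inputs are.]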
From spike66 at att.net  Tue Feb 23 03:21:29 2010
From: spike66 at att.net (spike)
Date: Mon, 22 Feb 2010 19:21:29 -0800
Subject: [ExI] better self-transcendence through selective brain damage
In-Reply-To: <2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com>
References: <226303.69206.qm@web113612.mail.gq1.yahoo.com>
	<7A7A0CB2766A4C879426F3F66CD94F0C@spike>
	<87670FEF1360437981B4B84931C5FF94@spike>
	<20100212221046.5.qmail@syzygy.com>
	<710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com>
	<2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com>
Message-ID: 

Spike wrote:
And make a cubic buttload of money of course. >>

David replied:
I trust you meant "a boatload of money"? Much more pleasant way to carry the stuff. >>

Spike, such rough language!! ; ) But I hope David keeps in mind that you spent many years around rough and tumble aerospace engineers and IT guys, and so some of it invariably rubbed off on you! lol

John ; )

Actually the term buttload *is* the polite version. {8^D

Regardless of the actual size of the seagoing craft, any boat contains a finite volume. On the other hand, the term buttload engenders a visual image of a periodic and ongoing supply of money, coming not all at once. The plural term buttloads can create in the imagination several backsides simultaneously spewing forth filthy lucre. Perhaps I should send the suggestion to Reader's Digest for their feature "Towards More Picturesque Speech."

spike

From spike66 at att.net  Tue Feb 23 03:28:33 2010
From: spike66 at att.net (spike)
Date: Mon, 22 Feb 2010 19:28:33 -0800
Subject: [ExI] better self-transcendence through selective brain damage
In-Reply-To: <2d6187671002221722u5c6e1523me1cdc534edc91ad5@mail.gmail.com>
References: <226303.69206.qm@web113612.mail.gq1.yahoo.com>
	<7A7A0CB2766A4C879426F3F66CD94F0C@spike>
	<87670FEF1360437981B4B84931C5FF94@spike>
	<20100212221046.5.qmail@syzygy.com>
	<710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com>
	<2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com>
	<62c14241002221711x77d28922kff82a8e14181b67f@mail.gmail.com>
	<2d6187671002221722u5c6e1523me1cdc534edc91ad5@mail.gmail.com>
Message-ID: 

Mike replied:
haha... was that intentional?
"rough and tumble aerospace" ?
re: "buttload" and "some of it ... rubbed off ..." ?

It was not intentional! LOL I must be more careful with people like you on the list! You and Spike, two of a kind! hee

John ; )

Ah, FINALLY you guys lighten up. I post some of my best jokes, not a HarHar, not a guffaw. I give you perfectly good Valentine's Day advice; nothing but somber comments about who posted how many this or that since last time.

My extropian friends, this place is a PARTY! We have deep ideas and often discuss heavy topics, but we are here to have FUN too. Eliezer assures us that there is an infinite amount of fun, so even if we use up an infinitesimal fraction of it, there is plenty of time for fun. If we manage radical life extension, then we will eventually solve all of humanity's most persistent problems. Then our task will be to entertain each other. So let's get in practice, shall we?
spike

From msd001 at gmail.com  Tue Feb 23 04:09:42 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Mon, 22 Feb 2010 23:09:42 -0500
Subject: [ExI] better self-transcendence through selective brain damage
In-Reply-To: 
References: <226303.69206.qm@web113612.mail.gq1.yahoo.com>
	<7A7A0CB2766A4C879426F3F66CD94F0C@spike>
	<87670FEF1360437981B4B84931C5FF94@spike>
	<20100212221046.5.qmail@syzygy.com>
	<710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com>
	<2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com>
Message-ID: <62c14241002222009t182ba8d6rb7241d7cbfd1cb9b@mail.gmail.com>

On Mon, Feb 22, 2010 at 10:21 PM, spike wrote:
> Actually the term buttload *is* the polite version. {8^D
>
> Regardless of the actual size of the seagoing craft, any boat contains a
> finite volume. On the other hand, the term buttload engenders a visual
> image of a periodic and ongoing supply of money, coming not all at once.
> The plural term buttloads can create in the imagination several backsides
> simultaneously spewing forth filthy lucre. Perhaps I should send the
> suggestion to Reader's Digest for their feature "Towards More Picturesque
> Speech."

buttload is rather dull compared to "several backsides simultaneously spewing forth filthy lucre"

(I hesitate to ponder what was spewed first through third)

From ablainey at aol.com  Tue Feb 23 04:17:00 2010
From: ablainey at aol.com (ablainey at aol.com)
Date: Mon, 22 Feb 2010 23:17:00 -0500
Subject: [ExI] Continuity of experience
In-Reply-To: 
References: 
Message-ID: <8CC82562D8320E0-2804-1C50@webmail-m064.sysops.aol.com>

OK. You are sitting in a chair in a blank white room with a data socket in the back of your head. Your body is scanned to the nth degree and a full and exact copy is created in an identical room. It sits in an identical position. However, it is devoid of mind, which is being suppressed by the control system.

The control system starts to connect your mind to that of the copy via the data cables you both have. So your consciousness spreads from your head down the cable, through the machine, and starts to fill up the copy's mind. While this occurs the control system is in command of the copy's body, mimicking your every movement. Eventually your mind is spread exactly across two bodies. Each has the exact same thoughts at this point, sees exactly the same thing, and senses the same. The control system relinquishes all influence and suppression so that your spread-out consciousness is seeing through four eyes and thinking with two brains. At this instant your M is one large entity consisting of two human forms, with consciousness joined by a machine.

The link is cut. Where is M?

The continuity is constant for both bodies; the stream of consciousness is unbroken. The bodies are identical, the surroundings and stimulus are identical. To me both bodies possess M at that moment of separation. However, the instant any variable acting upon one or the other becomes different (a different number of photons on the skin, different magnetic fields, etc.) they become different, but they still have the original M; only each is a fractional part and becomes more different over time.

Just as with a single body your brain lobes are very different but still part of the whole M. I see this as comparable to cutting off your own leg, posting it to somewhere sunny so it can get a nice tan and then sewing it back on. It is still your leg and part of you, but its life has been different.
A

-----Original Message-----
From: Spencer Campbell 
To: ExI chat list 
Sent: Tue, 23 Feb 2010 2:52
Subject: [ExI] Continuity of experience

[snip]

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stathisp at gmail.com  Tue Feb 23 04:59:00 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 23 Feb 2010 15:59:00 +1100
Subject: [ExI] Continuity of experience
In-Reply-To: 
References: 
Message-ID: 

On 23 February 2010 13:52, Spencer Campbell wrote:
> [snip]

I wonder how it is that you can point out the stupidity of this question so clearly:

> It's worth noting at this point that M is a purely metaphysical
> concept, not subject to measurement. I act the same with M as I do
> without M. There really isn't any way to prove that I remain myself
> even from second-to-second, let alone during transitions between gross
> mental states. Not even I would notice if my M suddenly evaporated.

And yet still come to this absurd conclusion:

> I tend to subscribe to nolipsism, but, in this case, I am
> superstitious. I will not be taking steps to preserve my brain,
> because I believe that I will stay dead even if a perfect copy of me
> is made. I have M0; my first copy has M1, my second M2, and so on.

Incidentally, the question of personal identity is not the same as the question of the existence of the self or of consciousness, and it is not the same as what you call M. I am happy to say that there is no self and no consciousness, in the sense meant by most of those people who deny these things. In any case, I am happy to say that I am a different self, person or consciousness from moment to moment, and that the idea that I remain the "same" person is a delusion. Nevertheless, it is very important to me that this delusion continue in much the same way as it always has.

-- 
Stathis Papaioannou

From stefano.vaj at gmail.com  Tue Feb 23 09:45:53 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 23 Feb 2010 10:45:53 +0100
Subject: [ExI] Phil Jones acknowledging that climate science isn't settled
In-Reply-To: <568542.51093.qm@web111205.mail.gq1.yahoo.com>
References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com>
	<412543.58639.qm@web111214.mail.gq1.yahoo.com>
	<7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com>
	<154647.10956.qm@web111212.mail.gq1.yahoo.com>
	<7641ddc61002171955i33b65d34i80dcef79dc21669b@mail.gmail.com>
	<588495.24073.qm@web111212.mail.gq1.yahoo.com>
	<7641ddc61002201700n25179a9h63c3667a0e42739@mail.gmail.com>
	<568542.51093.qm@web111205.mail.gq1.yahoo.com>
Message-ID: <580930c21002230145l4b7958bbr309978ab58f1e1d8@mail.gmail.com>

On 21 February 2010 07:19, Christopher Luebcke wrote:
> I was actually looking for independently verifiable evidence of intentional forgery, not a list of accusations that anybody with a keyboard could make. I must presume that you're not actually interested in providing such, because you couldn't possibly believe that the response you gave would sway anybody who didn't already agree with your point of view.

This is a common misconception. Forgery tells us something about the sociology (and possibly the ethics) of science in a given society at a given time; it does not tell us anything at all about the subject matter. Unless of course not only false data are exposed, but true, hidden data emerge.

-- 
Stefano Vaj

From gts_2000 at yahoo.com  Tue Feb 23 12:42:09 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Tue, 23 Feb 2010 04:42:09 -0800 (PST)
Subject: [ExI] Is the brain a digital computer?
In-Reply-To: 
Message-ID: <475205.96343.qm@web36503.mail.mud.yahoo.com>

--- On Mon, 2/22/10, Stathis Papaioannou wrote:

> You're setting yourself up for the obvious reply: it is
> possible to make an artificial heart that pumps blood just as well as...

Your answer misses the point. I'll try again in a different way:

You have three powerful futuristic digital computers, call them H, S, and B, running on the desk in front of you. On H runs a copy of some futuristic software titled "Simulated Heart v. 1254". On S runs a copy of "Simulated Stomach v. 2434". On B runs a copy of "Simulated Brain v. 0989873".

On computer H the simulated heart simulates the processing of simulated blood, and you have no objection I assume to my saying the simulated heart running on H doesn't really pump blood.

On computer S the simulated stomach simulates the processing of simulated food, and you have no objection I assume to my saying the simulated stomach running on S doesn't really process food.

On computer B the simulated brain simulates the processing of simulated thoughts, but here you do object when I tell you the simulated brain doesn't really think. Here you want to tell me the simulated brain really does think real thoughts.

How do you explain your inconsistency?

More precisely, why do you classify "thoughts" in a different category than you do "blood" and "food", if not because you have adopted a dualistic world-view in which mental phenomena fall into a different category than do ordinary material entities?

-gts

From stathisp at gmail.com  Tue Feb 23 13:52:17 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Wed, 24 Feb 2010 00:52:17 +1100
Subject: [ExI] Is the brain a digital computer?
In-Reply-To: <475205.96343.qm@web36503.mail.mud.yahoo.com>
References: <475205.96343.qm@web36503.mail.mud.yahoo.com>
Message-ID: 

On 23 February 2010 23:42, Gordon Swobe wrote:
> [snip]
>
> More precisely, why do you classify "thoughts" in a different category than you do "blood" and "food", if not because you have adopted a dualistic world-view in which mental phenomena fall into a different category than do ordinary material entities?
so that it interacts with its environment in an intelligent way. H' and S' are not *identical* to a heart or stomach but they perform the *function* of a heart or stomach. Similarly, B' is not *identical* to a brain but it performs the function of a brain. You probably don't even need all the sensors and effectors since a person can still think if they are paralysed and deprived of sensory input. You won't claim that H' doesn't "really" pump blood but only pretends to pump blood. It's obvious that if it pumps blood, it pumps blood. To some people (especially on this list) it's equally obvious that B' must be able to think. It's not quite so obvious to me: I entertain the possibility that perhaps consciousness is different to other phenomena in the universe and might be separable from the observable behaviour it seems to underpin. That would mean I could replace part of my brain with a functionally identical but unconscious analogue, selectively removing any aspect of my consciousness without noticing that anything had changed and without displaying any outward change in behaviour. I believe that is absurd, and this leads me to conclude that those who immediately saw that B' must be conscious were right. As far as I have been able to tell, you also agree that becoming a partial zombie without noticing or showing any outward behavioural change is absurd, but you still think that it is possible to make zombie brains or brain components. Several times you have said that the zombie components would cause the recipient to behave differently, but this is obviously a contradiction, since you agreed that the zombie components can be made to behave exactly like biological components. Philosophical discussions often just fizzle out without consensus being reached, but when one party claims that both P and ~P are true I think everyone would agree that they have lost at least that part of the debate. -- Stathis Papaioannou From gts_2000 at yahoo.com Tue Feb 23 13:52:47 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 23 Feb 2010 05:52:47 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <60436.32577.qm@web36501.mail.mud.yahoo.com> --- On Mon, 2/22/10, Dave Sill wrote: > A digital simulation of a thing is never equal to the real > thing, Thanks for saying that for me. :) > Just as a digital simulation of a heart can have a pulse, so can > simulated brains have thoughts. A simulated heart can only simulate a pulse -- it's not a real pulse. Likewise, a simulated brain can only simulate having thoughts -- they aren't real thoughts. On the hypothesis that strong AI=true, a simulation of a brain running on a computer should have real subjective mental states, i.e., it should really think real conscious thoughts like you and me. Not simulated thoughts. What's a simulated thought, you ask? Here's a famous one that I've mentioned before: print "Hello World" It doesn't get any better than that except in science-fiction novels written by our friend Damien. -gts From gts_2000 at yahoo.com Tue Feb 23 14:21:25 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 23 Feb 2010 06:21:25 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <894394.53861.qm@web36508.mail.mud.yahoo.com> --- On Tue, 2/23/10, Stathis Papaioannou wrote: > You probably don't even need all the sensors and effectors since a > person can still think if they are paralysed and deprived of sensory > input. 
Okay let's stipulate that and forget all that extra paraphernalia that you want to add to my simple thought experiment.

You have a simulated heart, a simulated stomach and a simulated brain running on three separate computers. You agree the simulated heart doesn't really pump real blood and that the simulated stomach doesn't really digest real food, but you want me to believe the simulated brain really processes real thoughts, i.e., you want me to believe strong AI=true.

Again I ask, how do you explain your inconsistency?

Why do you classify "thoughts" in a different category than you do "blood" and "food", if not because you have adopted a dualistic world-view in which mental phenomena fall into a different category than do ordinary material entities?

-gts

From sparge at gmail.com  Tue Feb 23 14:34:26 2010
From: sparge at gmail.com (Dave Sill)
Date: Tue, 23 Feb 2010 09:34:26 -0500
Subject: [ExI] Is the brain a digital computer?
In-Reply-To: <894394.53861.qm@web36508.mail.mud.yahoo.com>
References: <894394.53861.qm@web36508.mail.mud.yahoo.com>
Message-ID: 

On Tue, Feb 23, 2010 at 9:21 AM, Gordon Swobe wrote:
>
> You have a simulated heart, a simulated stomach and a simulated brain running on three separate
> computers. You agree the simulated heart doesn't really pump real blood and that the simulated
> stomach doesn't really digest real food, but you want me to believe the simulated brain really
> processes real thoughts, i.e., you want me to believe strong AI=true.

If the simulated brain does arithmetic, is it real arithmetic or simulated arithmetic?

> Again I ask, how do you explain your inconsistency?
>
> Why do you classify "thoughts" in a different category than you do "blood"
> and "food", if not because you have adopted a dualistic world-view in which
> mental phenomena fall into a different category than do ordinary material
> entities?

Blood is tangible, mental phenomena are not. Of *course* they're in different categories. You can hold a pint of blood in your hand, but you can't hold a pint of thought, a pound of thought, or a dozen thoughts.

-Dave

From rafal.smigrodzki at gmail.com  Tue Feb 23 14:46:21 2010
From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki)
Date: Tue, 23 Feb 2010 09:46:21 -0500
Subject: [ExI] Phil Jones acknowledging that climate science isn't settled
In-Reply-To: <568542.51093.qm@web111205.mail.gq1.yahoo.com>
References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com>
	<412543.58639.qm@web111214.mail.gq1.yahoo.com>
	<7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com>
	<154647.10956.qm@web111212.mail.gq1.yahoo.com>
	<7641ddc61002171955i33b65d34i80dcef79dc21669b@mail.gmail.com>
	<588495.24073.qm@web111212.mail.gq1.yahoo.com>
	<7641ddc61002201700n25179a9h63c3667a0e42739@mail.gmail.com>
	<568542.51093.qm@web111205.mail.gq1.yahoo.com>
Message-ID: <7641ddc61002230646r1123c6b2xe37b543382197232@mail.gmail.com>

On Sun, Feb 21, 2010 at 1:19 AM, Christopher Luebcke wrote:
> I was actually looking for independently verifiable evidence of intentional forgery, not a list of accusations that anybody with a keyboard could make. I must presume that you're not actually interested in providing such, because you couldn't possibly believe that the response you gave would sway anybody who didn't already agree with your point of view.

### I am not sure what you are asking for. Do you want pointers to court cases determining that the activists committed forgery? No, so far there haven't been such cases.
If you are willing to read a few papers by Briffa et al., and file FOIA requests, you should be able to make an independent verification of forgery, in the form of selection of data points for publication while suppressing data not consistent with the intended outcome. I can send you a list of such fraudulent papers. Will that suffice?

Rafal

From gts_2000 at yahoo.com  Tue Feb 23 15:31:29 2010
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Tue, 23 Feb 2010 07:31:29 -0800 (PST)
Subject: [ExI] Is the brain a digital computer?
Message-ID: <552218.15686.qm@web36504.mail.mud.yahoo.com>

--- On Tue, 2/23/10, Stathis Papaioannou wrote:

> I entertain the possibility that perhaps consciousness is different to
> other phenomena in the universe

I posit nothing special about consciousness. It only *seems* unusual because we can know about it only from the first-person perspective. Only you can feel your toothache, for example.

> and might be separable from the
> observable behaviour it seems to underpin.

I can write a program today that will make a computer act conscious to a limited degree. I have many times. With enough knowledge and resources I could write one that fooled you into thinking it had consciousness -- that caused a computer or robot to behave in such a way that it passed the Turing test. So I don't understand why you should even question the separability of behavior and consciousness.

> That would mean I could replace part of my brain with a functionally
> identical but unconscious analogue selectively removing any aspect of my
> consciousness without noticing that anything had changed and without
> displaying any outward change in behaviour. I believe that is absurd,
> and this leads me to conclude that those who immediately saw that B'
> must be conscious were right.

We've been through this so many times. :)

One cannot on my view make one of your "functionally identical but unconscious analogues" in the first place if the component normally affects experience, at least not without changing other parts of the brain along with it. One might just as well try to draw a square triangle.

The undertaking becomes problematic because replacing the neural correlates of consciousness or any part of them with a *supposed* functional but unconscious analogue will eliminate or compromise subjective experience. Experience affects behavior in normal people, and because the subject will have abnormal experience, the doctor will then need to do more work to make him behave and report normally. He'll keep working to reprogram/rewire his brain until the subject finally speaks and acts normally. In the end he'll make him into something like a functional analogue of his former self, but he may have little or no awareness of his own existence depending on facts of neuroscience that nobody today knows.

That's my view, anyway.

> Philosophical discussions often just fizzle out
> without consensus being reached, but when one party claims that
> both P and ~P are true I think everyone would agree that they have lost
> at least that part of the debate.

See above.

-gts

From jonkc at bellsouth.net  Tue Feb 23 16:47:00 2010
From: jonkc at bellsouth.net (John Clark)
Date: Tue, 23 Feb 2010 11:47:00 -0500
Subject: [ExI] Continuity of experience.
In-Reply-To: 
References: 
Message-ID: <8B916321-956B-41CE-934C-E134B9B30B8C@bellsouth.net>

On Feb 22, 2010, at 9:52 PM, Spencer Campbell wrote:

> I can not be two people at once.

Sure you can.
They would be two different people who would go on to lead different lives but they would both be Spencer Campbell. There is nothing paradoxical in this; it's just unusual, because up to now only one chunk of matter in the universe behaved in a Spencercampbellian way, but someday that might change. > > I will not be taking steps to preserve my brain, because I believe that I will stay dead even if a perfect copy of me is made. Then you will be dead in the next few seconds because the human body is constantly rebuilding itself and never does so perfectly. You are last year's mashed potatoes. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Feb 23 16:24:43 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 23 Feb 2010 11:24:43 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <475205.96343.qm@web36503.mail.mud.yahoo.com> References: <475205.96343.qm@web36503.mail.mud.yahoo.com> Message-ID: <337201EB-581A-4FC9-B69E-51C1AD20E1EC@bellsouth.net> Since my last post Gordon Swobe has posted 4 times. > On computer B the simulated brain simulates the processing of simulated thoughts So now we have a simulation of a simulation of a simulation; aren't Swobe's thought experiments grand? He always makes things so crystal clear. > but here you do object when I tell you the simulated brain doesn't really think. Here you want to tell me the simulated brain really does think real thoughts. Simulated thoughts differ from real thoughts in exactly the same way simulated arithmetic differs from real arithmetic, not at all. > What's a simulated thought, you ask? Here's a famous one that I've mentioned before: > print "Hello World" > It doesn't get any better than that That's what I thought, and if that's the best example Swobe can come up with then the concept is empty. > it [a computer] should really think real conscious thoughts like you and me. If Swobe were consistent and played by his own rules he would ascribe consciousness only to himself, not to other people, but of course consistency is not his strong suit. > More precisely, why do you classify "thoughts" in a different category than you do "blood" and "food" What an incredibly stupid question! If thoughts do not belong in a separate category from blood or food then what the hell is the point in having categories? > you have adopted a dualistic world-view in which mental phenomena fall into a different category than do ordinary material entities Well of course I have, as would any sane person. > I can write a program today that will make a computer act conscious to a limited degree. I'm sure he can, but Swobe would find it much more difficult to write a program that was intelligent even to a limited degree; he would find that consciousness is easy but intelligence is hard, just as Evolution did. > With enough knowledge and resources I could write one that fooled you into thinking it had consciousness -- that caused a computer or robot to behave in such a way that it passed the Turing test. Swobe is saying that if he knew how to make an AI then he could make an AI if he had a lot of money. I could be wrong but I believe there may be others who could make a similar claim. > I don't understand why you should even question the separability of behavior and consciousness. I'm not surprised Swobe doesn't understand; there is much in the natural world that is baffling if one is totally ignorant of Darwin's Theory of Evolution.
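For what it's worth, the arithmetic point can be made concrete in code. Here is a toy Python sketch (my own illustration, with an invented function name, not anything Swobe or anyone else actually proposed): instead of using the machine's built-in adder it "simulates" the gate-level logic of a hardware adder, XOR gates, AND gates, carries and all, for non-negative integers.

    # A "simulated" adder: addition performed by simulating the logic gates
    # of a hardware full adder bit by bit, never calling the machine's own "+".
    # Works for non-negative integers; a toy for illustration only.
    def simulated_add(a, b):
        result, carry, bit = 0, 0, 0
        while a or b or carry:
            x, y = a & 1, b & 1
            s = x ^ y ^ carry                    # sum bit from the XOR gates
            carry = (x & y) | (carry & (x | y))  # carry-out (majority function)
            result |= s << bit
            a, b, bit = a >> 1, b >> 1, bit + 1
        return result

    print(simulated_add(2, 2))  # prints 4 -- the "simulated" sum is a real sum

If 2 + 2 computed by simulated gates equals 4, it is hard to see what a merely "simulated" 4 would even be.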
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Feb 23 17:03:18 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 23 Feb 2010 18:03:18 +0100 Subject: [ExI] Continuity of experience. In-Reply-To: <8B916321-956B-41CE-934C-E134B9B30B8C@bellsouth.net> References: <8B916321-956B-41CE-934C-E134B9B30B8C@bellsouth.net> Message-ID: <580930c21002230903w70f78881wffc66a8cbe1f6f78@mail.gmail.com> 2010/2/23 John Clark > Then you will be dead in the next few seconds because the human body is > constantly rebuilding itself and never does so perfectly. You are last years > mashed potatoes. > Indeed. Survival instinct is entirely about the gene whisper. Faced with situations where such whisper has little to say, everything depends on the metaphor one chooses to adhere to. What can be said for sure is that faced with, say, the generalised availability of teleport, those still willing to wait in lines at an airport on the argument that by going through the process you are killed, and that the copy subsequently created at destination is of little consolation for their former self, will be quickly ranged in the ranks of weirdoes. The same applies to whether uploads or persuasive, albeit artificial, personas are really "conscious" or not, etc. Sociology has often a way to resolve for good what philosophy per se could never decide. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From lacertilian at gmail.com Tue Feb 23 17:51:15 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 23 Feb 2010 09:51:15 -0800 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: : [snip] > Just as with a single body your brain lobes are very different but still > part of the whole M. > I see this as comparable to cutting off your own leg, posting it to > somewhere sunny so it can get a nice tan > and then sewing it back on. It is still your leg and part of you, but its > life has been different. Hmm! There's only one problem that I can see: in your hypothetical, as in mine, the two copies of me are never re-attached in the manner of a suntanned leg. You can certainly make the argument that they retain a "connection" of some type, even if they only speak to each other once a century. They can be considered two parts of the same M-possessing whole, that way, but you can make exactly the same argument for any pair of individuals whatsoever as long as their lives detectably influence each other at least once. Who's to say that not everyone shares the same M? On the other hand, what happens if the doubles DO rejoin into one individual later on? Well, then we have a Dr. McNinja scenario! http://drmcninja.com/archives/comic/17p19 (That's page 19. There is a bunch of other magnificent nonsense earlier on.) Stathis Papaioannou : > I wonder how it is that you can point out the stupidity of this > question so clearly: > >> [I say something clever] > > And yet still come to this absurd conclusion: > >> [I say something less clever] Yeah, so do I. It isn't even accurate to call that a "conclusion", really, because it doesn't follow from any of the preceding logical statements. It stands on its own, short-circuiting the whole process of reasoning. At that point I was just saying: well, all of this logic is pointing me towards a conclusion which I am constitutionally incapable of accepting. My hand is forced!
The question of continuity is interesting to me because of how trivial it is in theory, and how difficult it is to genuinely grasp. Stathis Papaioannou : > Incidentally, the question of personal identity is not the same as the > question of existence of the self or of consciousness, and it is not > the same as what you call M. If I understand you, I agree. It is possible for two clones to share a personal identity, which I associate with memory, without sharing the same M. Stathis Papaioannou : > I am happy to say that there is no self > and no consciousness, in the sense meant by most of those people who > deny these things. In any case, I am happy to say that I am a > different self, person or consciousness from moment to moment, and > that the idea that I remain the "same" person is a delusion. > Nevertheless, it is very important to me that this delusion continue > in much the same way as it always has. Would it satisfy you if, at the moment of your death, a person similar to the one in my Napoleon argument were to suddenly acquire the delusion of being Stathis Papaioannou? The assumption, again, would be that they know more about your life than anyone else in the world after you die. They would be a nearly-perfect mimic. Surely, the difference between your mind and your mimic's mind would be no greater than the difference between your mind as it is now and your mind as it was ten years ago. Is this any different from transferring your mind to a new body? This seems consistent with your statements, but I'm willing to bet you'll think that this solution to the problem of mortality is inadequate in some way. Maybe something to do with the infeasibility of such a specific, high-quality delusion forming in a random passerby. That's really just for show, though. You can ratchet down the accuracy of the mimic as much as you like, and the conclusions should remain pretty much the same (if less obvious). My answer would be that your mimic lacks M; continuity of experience is broken in exactly the same way that it would be in the case of a freeze-and-scan resurrection. It's simply a less sophisticated technology. The same thing is accomplished with less precision. But, I am open to alternate explanations. From lacertilian at gmail.com Tue Feb 23 18:09:57 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Tue, 23 Feb 2010 10:09:57 -0800 Subject: [ExI] Continuity of experience. In-Reply-To: <8B916321-956B-41CE-934C-E134B9B30B8C@bellsouth.net> References: <8B916321-956B-41CE-934C-E134B9B30B8C@bellsouth.net> Message-ID: John Clark : > Spencer Campbell wrote: > >> I can not be two people at once. > > Sure you can. They would be two different people who would go on to lead > different lives but they would both be?Spencer Campbell. There is nothing > paradoxical in this it's just unusual because up to now only one chunk of > matter in the universe behaved in a?Spencercampbellian way, but someday that > might change. Two people can be me, sure, but I can't be two people! Fine distinction. Some day I might come to believe that these two statements are, in fact, equivalent. Right now I do not. John Clark : > Then you will be dead in the next few seconds because the human body is > constantly rebuilding itself and never does so perfectly. You are last years > mashed potatoes. The mashed potatoes argument again! I just had potatoes for breakfast. It's so plausible. 
You're right: the only difference is one of degree between my atomic makeup reconfiguring itself moment-to-moment and, say, clone-based teleportation. I'm left with the not-quite-rational belief that the degree is what matters. I could start making analogies to sudden phase changes and the like, here, but I doubt it would be very convincing. Stefano Vaj : > Sociology has often a way to resolve for good what philosophy per se could > never decide. You're quite correct! http://www.smbc-comics.com/index.php?db=comics&id=1677 Being already ranged in the ranks of weirdos, I have little to lose by shying away from the mind-clone revolution. Just time. But, hey, we have plenty of time, right?* *Read: "plenty of oil". From cluebcke at yahoo.com Tue Feb 23 18:27:23 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 23 Feb 2010 10:27:23 -0800 (PST) Subject: [ExI] Phil Jones acknowledging that climate science isn't settled In-Reply-To: <7641ddc61002230646r1123c6b2xe37b543382197232@mail.gmail.com> References: <201002162111.o1GLBY3S004345@andromeda.ziaspace.com> <412543.58639.qm@web111214.mail.gq1.yahoo.com> <7641ddc61002170859g6c25fcag3ddb51826fff23bd@mail.gmail.com> <154647.10956.qm@web111212.mail.gq1.yahoo.com> <7641ddc61002171955i33b65d34i80dcef79dc21669b@mail.gmail.com> <588495.24073.qm@web111212.mail.gq1.yahoo.com> <7641ddc61002201700n25179a9h63c3667a0e42739@mail.gmail.com> <568542.51093.qm@web111205.mail.gq1.yahoo.com> <7641ddc61002230646r1123c6b2xe37b543382197232@mail.gmail.com> Message-ID: <658705.86279.qm@web111204.mail.gq1.yahoo.com> I must confess that a selection of data points that you find disagreeable doesn't sound like forgery to me. I further find the fact that you make this accusation without any apparent proof of deliberate forgery slanderous. There is an extremely dangerous current in political discourse, the current that dictates that it is not enough to disagree with an opponent's position; no, you must convince your fellows that the opponent is dangerous, evil, wicked, bent on the destruction of all that is good and holy. People who behave in this manner will bring this glorious civilization to its knees. If you don't have proof of forgery--actual proof that would hold up in a court of law--don't make the claim. You do a disservice to humanity. If you do have proof, file suit. ----- Original Message ---- > From: Rafal Smigrodzki > To: ExI chat list > Sent: Tue, February 23, 2010 6:46:21 AM > Subject: Re: [ExI] Phil Jones acknowledging that climate science isn't settled > > On Sun, Feb 21, 2010 at 1:19 AM, Christopher Luebcke wrote: > > I was actually looking for independently verifiable evidence of intentional > forgery, not a list of accusations that anybody with a keyboard could make. I > must presume that you're not actually interested in providing such, because you > couldn't possibly believe that the response you gave would sway anybody who > didn't already agree with your point of view. > > ### I am not sure what you are asking for. Do you want pointers to > court cases determining that the activists committed forgery? No, so > far there haven't been such cases. If you are willing to read a few > papers by Briffa et al, and file FOIA requests, you should be able to > make an independent verification of forgery, in the form of selection > of data points for publication while suppressing data not consistent > with intended outcome. I can send you a list of such fraudulent > papers. Will that suffice?
> > Rafal > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ablainey at aol.com Tue Feb 23 20:15:28 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Tue, 23 Feb 2010 15:15:28 -0500 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: <8CC82DC13495ADA-1454-38EF@webmail-m048.sysops.aol.com> Yes, they could be rejoined at a later date, to consolidate experiences back into a single body M, as per the leg. This also throws up the question: what if two uploads join to give an M^2? Then what if all humans/posthumans become mentally joined as a hive mind, giving M^n? This is where it gets really interesting and the argument of M starts to become irrelevant. For me, as long as there is a continuous conscious stream, the issues of who is the real identity are moot. It makes me wonder if we would have more respect for an entity that we know is directly related or part of our own M, or if we would see it as a subcomponent and therefore of less value than a true individual. I agree that, in a way, we all share a single M: either seen as coming from a common ancestor, although this is a genetic M and not a conscious self, or, as you said, if we influence someone else's life, we share part of M with them. A kind of vicarious M where the experience and knowledge of others is assimilated into our own M. A -----Original Message----- From: Spencer Campbell : [snip] > Just as with a single body your brain lobes are very different but still > part of the whole M. > I see this as comparable to cutting off your own leg, posting it to > somewhere sunny so it can get a nice tan > and then sewing it back on. It is still your leg and part of you, but its > life has been different. Hmm! There's only one problem that I can see: in your hypothetical, as in mine, the two copies of me are never re-attached in the manner of a suntanned leg. You can certainly make the argument that they retain a "connection" of some type, even if they only speak to each other once a century. They can be considered two parts of the same M-possessing whole, that way, but you can make exactly the same argument for any pair of individuals whatsoever as long as their lives detectably influence each other at least once. Who's to say that not everyone shares the same M? On the other hand, what happens if the doubles DO rejoin into one individual later on? Well, then we have a Dr. McNinja scenario! http://drmcninja.com/archives/comic/17p19 (That's page 19. There is a bunch of other magnificent nonsense earlier on.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Tue Feb 23 20:30:54 2010 From: scerir at libero.it (scerir) Date: Tue, 23 Feb 2010 21:30:54 +0100 (CET) Subject: [ExI] Is the brain a digital computer? Message-ID: <27766770.67771266957054486.JavaMail.defaultUser@defaultHost> [mixed items] Justin Sytsma Phenomenological Obviousness and the New Science of Consciousness Philosophy of Science, 76 (December 2009), pp. 958-969 Is phenomenal consciousness a problem for the brain sciences? An increasing number of researchers hold not only that it is but that its very existence is a deep mystery. That this problematic phenomenon exists is generally taken for granted: It is asserted that phenomenal consciousness is just phenomenologically obvious.
In contrast, I hold that there is no such phenomenon and, thus, that it does not pose a problem for the brain sciences. For this denial to be plausible, however, I need to show that phenomenal consciousness is not phenomenologically obvious. That is the goal of this article. Patrick Crotty, Daniel Schult, Ken Segall (Colgate University) Josephson junction simulation of neurons http://arxiv.org/abs/1002.2892 With the goal of understanding the intricate behavior and dynamics of collections of neurons, we present superconducting circuits containing Josephson junctions that model biologically realistic neurons. These "Josephson junction neurons" reproduce many characteristic behaviors of biological neurons such as action potentials, refractory periods, and firing thresholds. They can be coupled together in ways that mimic electrical and chemical synapses. Using existing fabrication technologies, large interconnected networks of Josephson junction neurons would operate fully in parallel. They would be orders of magnitude faster than both traditional computer simulations and biological neural networks. Josephson junction neurons provide a new tool for exploring long-term large-scale dynamics for networks of neurons. See also this page here: http://physicsandcake.wordpress.com/2009/07/20/quantum-neural-networks-1-the-superconducting-neuron-model/ From jonkc at bellsouth.net Tue Feb 23 21:04:39 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 23 Feb 2010 16:04:39 -0500 Subject: [ExI] Continuity of experience. In-Reply-To: References: <8B916321-956B-41CE-934C-E134B9B30B8C@bellsouth.net> Message-ID: On Feb 23, 2010, at 1:09 PM, Spencer Campbell wrote: > General anesthetics do not cause total cessation of activity in the brain. There is always something going on in the brain, even beheading or a bullet to the brain won't stop chemical reactions continuing there, but the brain is not important, the mind is, and general anesthesia will totally stop the mind. But who cares, it'll start up again. I don't understand all this worry about continuity; if objectively your mind stops for a century or two, subjectively it will seem like it never stopped at all but the rest of the world made a discontinuous jump, and after all subjectivity is the only thing that's important. > Two people can be me, sure, but I can't be two people! Exactly. > Fine distinction. It's only puzzling if you think of "I" as a fixed unchanging thing. The you of yesterday and the you of today are not identical but they are both Spencer Campbell. > You're right: the only difference is one of degree between my atomic > makeup reconfiguring itself moment-to-moment and, say, clone-based > teleportation. I can't see how the rate of change could have any bearing, and after all even the reconfiguration given to you by a stick of dynamite would seem quite slow and plodding by some time scales. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From possiblepaths2050 at gmail.com Tue Feb 23 22:48:21 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 23 Feb 2010 15:48:21 -0700 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: References: <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> <2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com> <62c14241002221711x77d28922kff82a8e14181b67f@mail.gmail.com> <2d6187671002221722u5c6e1523me1cdc534edc91ad5@mail.gmail.com> Message-ID: <2d6187671002231448q46b9b6a5we5b82caec1cff341@mail.gmail.com> > > Spike wrote: > Ah, FINALLY you guys lighten up. I post some of my best jokes, not a > HarHar, not a guffaw. I give you perfectly good Valentines Day advice, > nothing but somber comments about who posted how many this or that since > last time. > > My extropian friends, this place is a PARTY! We have deep ideas and often > discuss heavy topics, but we are here to have FUN too. Eliezer assures us > that there is an infinite amount of fun, so even if we use up an > infinitesimal fraction of it, there is plenty of time for fun. If we > manage > radical life extension, then we will eventually solve all of humanity's > most > persistent problems. Then our task will be to entertain each other. So > lets get in practice, shall we? > >>> Spike, thank you for the reminder that this email list is here for us to ENJOY ourselves! : ) I love the whole idea of this place being a big party (following Eliezer's rule of infinite fun, vs. Eliezer's notorious organization known as the "Committee for Investigating Un-Extropian Activities"). As for us entertaining each other after we have achieved indefinite lifespan, I can imagine a bunch of somber old fogies (with the outward appearance of young people) getting together to grumble about how they were never appreciated as the pioneering generation of transhumanists that they were, back in the early 21st century. But we will only be remembered by the public (and future historians) if we really know how to party! Spike, please show us the way... ; ) John > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Feb 24 00:07:21 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 24 Feb 2010 11:07:21 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <894394.53861.qm@web36508.mail.mud.yahoo.com> References: <894394.53861.qm@web36508.mail.mud.yahoo.com> Message-ID: On 24 February 2010 01:21, Gordon Swobe wrote: > --- On Tue, 2/23/10, Stathis Papaioannou wrote: > >> You probably don't even need all the sensors and effectors since a >> person can still think if they are paralysed and deprived of sensory >> input. > > Okay let's stipulate that and forget all that extra paraphernalia that you want to add to my simple thought experiment. > > You have a simulated heart, a simulated stomach and a simulated brain running on three separate computers. You agree the simulated heart doesn't really pump real blood and that the simulated stomach doesn't really digest real food, but you want me to believe the simulated brain really processes real thoughts, i.e., you want me to believe strong AI=true.
> > Again I ask, how do you explain your inconsistency? > > Why do you classify "thoughts" in a different category than you do "blood" > and "food", if not because you have adopted a dualistic world-view in which > mental phenomena fall into a different category than do ordinary material > entities? Thoughts seem different in that for other bodily functions you would need to make a robotic AI, while for the thoughts perhaps just the computer would suffice. However, another way to look at it is that thoughts do involve behaviour but the behaviour is information processing rather than pumping blood. -- Stathis Papaioannou From cluebcke at yahoo.com Wed Feb 24 01:02:08 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Tue, 23 Feb 2010 17:02:08 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: References: <894394.53861.qm@web36508.mail.mud.yahoo.com> Message-ID: <415576.15254.qm@web111211.mail.gq1.yahoo.com> I think the difficulty that you're having here is that you're talking about a thought like it's a thing--even an intangible thing. In the way you're using the word, you'd be better off saying "concept". The state of being in possession of a concept is not the same as the state of thinking, or being conscious. I stand by my earlier contention that this conversation is doomed to 9 more levels of circular hell unless the main participants can agree on definitions of "conscious" and "thinking". Gordon: Try reframing your position by thinking about artificial hearts, rather than simulated hearts. Really. Please. You're missing something important. Artificial hearts pump real blood. Some day artificial stomachs will digest real food, I'm sure. These are activities that the devices can engage in, not physical properties of the things themselves. ________________________________ From: Stathis Papaioannou To: gordon.swobe at yahoo.com; ExI chat list Sent: Tue, February 23, 2010 4:07:21 PM Subject: Re: [ExI] Is the brain a digital computer? On 24 February 2010 01:21, Gordon Swobe wrote: > --- On Tue, 2/23/10, Stathis Papaioannou wrote: > >> You probably don't even need all the sensors and effectors since a >> person can still think if they are paralysed and deprived of sensory >> input. > > Okay let's stipulate that and forget all that extra paraphernalia that you want to add to my simple thought experiment. > > You have a simulated heart, a simulated stomach and a simulated brain running on three separate computers. You agree the simulated heart doesn't really pump real blood and that the simulated stomach doesn't really digest real food, but you want me to believe the simulated brain really processes real thoughts, i.e., you want me to believe strong AI=true. > > Again I ask, how do you explain your inconsistency? > > Why do you classify "thoughts" in a different category than you do "blood" > and "food", if not because you have adopted a dualistic world-view in which > mental phenomena fall into a different category than do ordinary material > entities? Thoughts seem different in that for other bodily functions you would need to make a robotic AI, while for the thoughts perhaps just the computer would suffice. However, another way to look at it is that thoughts do involve behaviour but the behaviour is information processing rather than pumping blood.
-- Stathis Papaioannou _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Wed Feb 24 01:40:06 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 23 Feb 2010 19:40:06 -0600 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <415576.15254.qm@web111211.mail.gq1.yahoo.com> References: <894394.53861.qm@web36508.mail.mud.yahoo.com> <415576.15254.qm@web111211.mail.gq1.yahoo.com> Message-ID: <4B848376.1050008@satx.rr.com> On 2/23/2010 7:02 PM, Christopher Luebcke wrote: > Gordon: Try reframing your position by thinking about artificial hearts, > rather than simulated hearts. I made this point about 4004 years ago by noting that what is at issue is emulation, not simulation. Gordon evidently doesn't get this distinction. Damien Broderick From stathisp at gmail.com Wed Feb 24 06:22:27 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 24 Feb 2010 17:22:27 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <552218.15686.qm@web36504.mail.mud.yahoo.com> References: <552218.15686.qm@web36504.mail.mud.yahoo.com> Message-ID: On 24 February 2010 02:31, Gordon Swobe wrote: > --- On Tue, 2/23/10, Stathis Papaioannou wrote: > >> I entertain the possibility that perhaps consciousness is different to >> other phenomena in the universe > > I posit nothing special about consciousness. It only *seems* unusual because we can know about it only from the first-person perspective. Only you can feel your toothache, for example. > >> and might be separable from the >> observable behaviour it seems to underpin. > > I can write a program today that will make a computer act conscious to a limited degree. I have many times. With enough knowledge and resources I could write one that fooled you into thinking it had consciousness -- that caused a computer or robot to behave in such a way that it passed the Turing test. So I don't understand why you should even question the separability of behavior and consciousness. You are only able to program a computer so that it has very limited and specialised intelligence, like an amoeba or a flatworm, and therefore proportionately limited consciousness. I understand that you say the difference is only a matter of degree, but then the difference between a flatworm and a mouse or a human is also just a matter of degree. >> That would mean I could replace part of my brain with a functionally >> identical but unconscious analogue selectively removing any aspect of my >> consciousness without noticing that anything had changed and without >> displaying any outward change in behaviour. I believe that is absurd, >> and this leads me to conclude that those who immediately saw that B' >> must be conscious were right. > > We've been through this so many times. :) > > One cannot on my view make one of your "functionally identical but unconscious analogues" in the first place if the component normally affects experience, at least not without changing other parts of the brain along with it. One might just as well try to draw a square triangle. So you claim both that weak AI is possible and that weak AI is impossible? 
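Perhaps a toy example makes the behavioural point concrete. Here is a short Python sketch (my own, with invented class names and thresholds; it is an illustration of functional equivalence, not real neuroscience): two neuron implementations with identical input-output behaviour are interchangeable as far as the rest of the network can tell.

    # Two neuron implementations with the same input/output behaviour.
    # All names and numbers are invented for illustration only.
    class BiologicalNeuron:
        def fire(self, inputs):
            return sum(inputs) > 0.5      # spike iff summed input exceeds threshold

    class SiliconNeuron:                  # the "functionally identical analogue"
        def fire(self, inputs):
            return sum(inputs) > 0.5      # identical input/output mapping

    def network_output(neurons, inputs):
        # the rest of the "brain" sees only the pattern of spikes
        return [n.fire(inputs) for n in neurons]

    before = network_output([BiologicalNeuron()] * 3, [0.2, 0.4])
    after = network_output([BiologicalNeuron(), SiliconNeuron(), BiologicalNeuron()], [0.2, 0.4])
    print(before == after)  # True: downstream behaviour cannot distinguish them

If no such drop-in behavioural analogue can even exist, then the biological neuron is doing something no program can mimic behaviourally, which is just to deny weak AI.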
If weak AI is possible then by definition it is possible to make an artificial neuron, collection of neurons or person that behaves just like a biological neuron, collection of neurons or person but lacks consciousness. If it does not behave identically then you have failed in your effort to create weak AI. I think I started by pointing this out in the very first posts on these threads as the position of Roger Penrose, who thinks that the NCC is something fundamentally non-algorithmic, incorporating exotic physics that no Turing machine could compute. The result of this would be that it is impossible in general to simulate the behaviour of any organism or part of an organism that contains a NCC by using a digital computer. A computer might be able to do some tasks that are considered intelligent, but it would never be able to reproduce the full behavioural gamut of a real human, perhaps failing in tasks involving creativity or natural language, for example. This position, unlike yours and Searle's, is internally consistent, but there is no scientific evidence for it. > The undertaking becomes problematic because replacing the neural correlates of consciousness or any part of them with a *supposed* functional but unconscious analogue will eliminate or compromise subjective experience. Experience affects behavior in normal people, and because the subject will have abnormal experience, the doctor will then need to do more work to make him behave and report normally. The NCC will, of course, affect the behaviour of the whole brain and hence person. My claim is that this effect on behaviour *is* the NCC, so that if it is reproduced, whether by a digital computer, beer cans and toilet paper or a little man pulling levers, then the consciousness will also be reproduced. I still don't understand your position on simulating the NCC because you keep alternating between: (a) it is impossible to reproduce the behaviour of the NCC using a computer; and (b) it is possible to reproduce the behaviour of the NCC using a computer but this still won't reproduce the behaviour of the rest of the brain. As I have said, (a) is philosophically sound but there is no scientific evidence in its support, while (b) is worse than wrong, it is contradictory. -- Stathis Papaioannou From stathisp at gmail.com Wed Feb 24 06:55:26 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 24 Feb 2010 17:55:26 +1100 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: On 24 February 2010 04:51, Spencer Campbell wrote: > Stathis Papaioannou : >> I am happy to say that there is no self >> and no consciousness, in the sense meant by most of those people who >> deny these things. In any case, I am happy to say that I am a >> different self, person or consciousness from moment to moment, and >> that the idea that I remain the "same" person is a delusion. >> Nevertheless, it is very important to me that this delusion continue >> in much the same way as it always has. > > Would it satisfy you if, at the moment of your death, a person similar > to the one in my Napoleon argument were to suddenly acquire the > delusion of being Stathis Papaioannou? > > The assumption, again, would be that they know more about your life > than anyone else in the world after you die. They would be a > nearly-perfect mimic. Surely, the difference between your mind and > your mimic's mind would be no greater than the difference between your > mind as it is now and your mind as it was ten years ago.
Is this any > different from transferring your mind to a new body? > > This seems consistent with your statements, but I'm willing to bet > you'll think that this solution to the problem of mortality is > inadequate in some way. Maybe something to do with the infeasibility > of such a specific, high-quality delusion forming in a random > passerby. That's really just for show, though. You can ratchet down > the accuracy of the mimic as much as you like, and the conclusions > should remain pretty much the same (if less obvious). > > My answer would be that your mimic lacks M; continuity of experience > is broken in exactly the same way that it would be in the case of a > freeze-and-scan resurrection. It's simply a less sophisticated > technology. The same thing is accomplished with less precision. But, I > am open to alternate explanations. The mimic would have to not only know what I know and believe that he is me, but actually have the same sorts of mental states as I do. If this could be guaranteed then I would have no problem with it, otherwise I would have to fret about the fact that I lose M in the course of ordinary life. Essentially this is Frank Tipler's route to immortality: in the far future a humongous computer recreates the brain patterns of all the dead, either from historical data or, failing that, by emulating every possible human brain using brute computational force. The physics of this scenario may be dubious but I see no problem with the philosophy. -- Stathis Papaioannou From bbenzai at yahoo.com Wed Feb 24 09:03:14 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 24 Feb 2010 01:03:14 -0800 (PST) Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: Message-ID: <435677.19454.qm@web113608.mail.gq1.yahoo.com> > Spike wrote: > Ah, FINALLY you guys lighten up. I post some of my best jokes, not a > HarHar, not a guffaw. I give you perfectly good Valentines Day advice, > nothing but somber comments about who posted how many this or that since > last time. > > My extropian friends, this place is a PARTY! We have deep ideas and often > discuss heavy topics, but we are here to have FUN too. Eliezer assures us > that there is an infinite amount of fun, so even if we use up an > infinitesimal fraction of it, there is plenty of time for fun. If we > manage > radical life extension, then we will eventually solve all of humanity's > most > persistent problems. Then our task will be to entertain each other. So > lets get in practice, shall we? > But Spike, this thread is the best laugh I've had for ages! Seriously, it's a bit like those films that you remember as being great because of their mixture of tragedy and comedy. 'Little Big Man', 'Forrest Gump', etc. I realise that some people are taking it oh so seriously, but that just makes it funnier. Am I a bad person? (I think 'Jesus and Mo' is funny too. http://www.jesusandmo.net/). Ben Zaiboc From stefano.vaj at gmail.com Wed Feb 24 11:15:05 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 24 Feb 2010 12:15:05 +0100 Subject: [ExI] Is the brain a digital computer? 
In-Reply-To: References: <552218.15686.qm@web36504.mail.mud.yahoo.com> <580930c21002240315t363c6552mb1fee6b7b70a31e2@mail.gmail.com> Message-ID: On 24 February 2010 22:15, Stefano Vaj wrote: > On 24 February 2010 07:22, Stathis Papaioannou wrote: >> (a) it is impossible to reproduce the behaviour of the NCC using a computer; and >> (b) it is possible to reproduce the behaviour of the NCC using a >> computer but this still won't reproduce the behaviour of the rest of >> the brain. >> >> As I have said, (a) is philosophically sound but there is no >> scientific evidence in its support, while (b) is worse than wrong, it >> is contradictory. > > I have a few doubts on the philosophical soundness of a), as it > postulates that something "else" or "more" may exist exceeding what > can be emulated by another universal computer (say, a cellular > automaton), something which smacks of dualism to my ears... The physical Church-Turing thesis has not been proved. Non-computable functions exist as mathematical objects, but it is not known if they play a role in physics, or in the physics of the brain in particular. However, given that everything science has discovered so far is computable (except perhaps true randomness in QM, which however seems indistinguishable from computable pseudo-randomness), it seems unlikely that there is something non-computable lurking in the brain. In any case, there is no reason to believe that the brain is non-computable other than the grandiosity that seems to afflict humans when they contemplate their place in the world. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Feb 24 12:38:28 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 24 Feb 2010 04:38:28 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <216959.17909.qm@web36502.mail.mud.yahoo.com> --- On Wed, 2/24/10, Stathis Papaioannou wrote: > You are only able to program a computer so that it has very > limited and specialised intelligence, like an amoeba or a flatworm, > and therefore proportionately limited consciousness. No sir. I have never written a program that had consciousness. > So you claim both that weak AI is possible and that weak AI > is impossible? No.
> If weak AI is possible then by definition it is > possible to make an artificial neuron, collection of neurons or > person that behaves just like a biological neuron, collection of > neurons or person but lacks consciousness. And that's exactly what happens in your experiment. After a lot of work the doctor finally creates a patient that passes the TT but lacks consciousness - weak AI. -gts From gts_2000 at yahoo.com Wed Feb 24 12:45:34 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 24 Feb 2010 04:45:34 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: <337201EB-581A-4FC9-B69E-51C1AD20E1EC@bellsouth.net> Message-ID: <35968.58084.qm@web36504.mail.mud.yahoo.com> In this discussion thread a person who goes by the name John K Clark continues to have a conversation with himself about me in a malicious and obsessive attempt to slander my name and falsely characterize my views. I have no association with John K Clark. Gordon Swobe From sparge at gmail.com Wed Feb 24 12:57:29 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 24 Feb 2010 07:57:29 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <35968.58084.qm@web36504.mail.mud.yahoo.com> References: <337201EB-581A-4FC9-B69E-51C1AD20E1EC@bellsouth.net> <35968.58084.qm@web36504.mail.mud.yahoo.com> Message-ID: On Wed, Feb 24, 2010 at 7:45 AM, Gordon Swobe wrote: > In this discussion thread a person who goes by the name John K Clark continues to have a conversation > with himself about me in a malicious and obsessive attempt to slander my name and falsely characterize > my views. I have no association with John K Clark. You may not like his approach, but I don't think it's malicious or slanderous. This discussion has gone on for over two months now, and there's no sign that either side is making any headway. Yet, on it goes. If you could satisfactorily address any of his points, *that* would be progress. I'm skeptical, because you have a history of ignoring strong arguments against your position but actively taking advantage of any opportunity to rehash arguments you've been making--unconvincingly--for months. His post counts are there because he was publicly warned that he was exceeding the acceptable posting rate for the list. -Dave From stathisp at gmail.com Wed Feb 24 13:36:35 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 25 Feb 2010 00:36:35 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <216959.17909.qm@web36502.mail.mud.yahoo.com> References: <216959.17909.qm@web36502.mail.mud.yahoo.com> Message-ID: On 24 February 2010 23:38, Gordon Swobe wrote: > --- On Wed, 2/24/10, Stathis Papaioannou wrote: > >> You are only able to program a computer so that it has very >> limited and specialised intelligence, like an amoeba or a flatworm, >> and therefore proportionately limited consciousness. > > No sir. I have never written a program that had consciousness. > > >> So you claim both that weak AI is possible and that weak AI >> is impossible? > > No. > >> If weak AI is possible then by definition it is >> possible to make an artificial neuron, collection of neurons or >> person that behaves just like a biological neuron, collection of >> neurons or person but lacks consciousness. > > And that's exactly what happens in your experiment. After a lot of work the doctor finally creates a patient that passes the TT but lacks consciousness - weak AI. 
Could you please clarify that your definition of weak AI is that it behaves exactly like strong AI according to the strong AIs with which it interacts? You seem to be saying that this is possible for AIs interacting with people but not for robot neurons interacting with biological neurons, which is a very odd position: humans can be fooled by zombie humans but individual neurons are too smart to be fooled by zombie neurons! In any case, if it isn't possible to make weak AI brain components that would mean that these components utilise non-computable physics, which you keep insisting is not true. Can you at least see why it looks like you are contradicting yourself? -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Feb 24 13:42:47 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 24 Feb 2010 05:42:47 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <833699.73560.qm@web36501.mail.mud.yahoo.com> --- On Wed, 2/24/10, Stathis Papaioannou wrote: > My claim is that this effect on behaviour *is* the NCC, so that if it is > reproduced, whether by a digital computer, beer cans and toilet paper or > a little man pulling levers then the consciousness will also be > reproduced. It seems to me totally absurd to assign consciousness to a pile of beer cans and toilet paper, or to a billion men pulling levers, or to a nation of Chinese people talking to one another on the telephone (Block, 1978) or to an arrangement of cats chasing mice, or to pigeons trained to peck as Turing machines (Pylyshyn, 1985) or to any number of other such theoretically possible "Turing equivalent" implementations. Functionalists of the computationalist persuasion need to defend those kinds of bizarre notions to support their philosophy. It looks to me like they have fallen prey to an ideology. -gts From sparge at gmail.com Wed Feb 24 13:50:54 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 24 Feb 2010 08:50:54 -0500 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <833699.73560.qm@web36501.mail.mud.yahoo.com> References: <833699.73560.qm@web36501.mail.mud.yahoo.com> Message-ID: On Wed, Feb 24, 2010 at 8:42 AM, Gordon Swobe wrote: > > It seems to me totally absurd to assign consciousness to a pile of beer cans and toilet paper, or to a billion men pulling levers, or to a nation of Chinese people talking to one another on the telephone (Block, 1978) or to an arrangement of cats chasing mice, or to pigeons trained to peck as Turing machines (Pylyshyn, 1985) or to any number of other such theoretically possible "Turing equivalent" implementations. OK, great, so it seems absurd. Does that mean it's impossible? Of course not. It may be impossible for other reasons, but "apparent absurdity" isn't one of them. Does it not seem absurd to assign consciousness to a glob of gelatinous gray goo? -Dave From gts_2000 at yahoo.com Wed Feb 24 13:52:52 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 24 Feb 2010 05:52:52 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: Message-ID: <869157.52897.qm@web36502.mail.mud.yahoo.com> --- On Wed, 2/24/10, Dave Sill wrote: > You may not like his approach, but He uses my name excessively in his posts in a discussion with himself. It's a despicable ploy to slander me on the search engines. I have no association with John K Clark.
Gordon Swobe From stathisp at gmail.com Wed Feb 24 13:53:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 25 Feb 2010 00:53:12 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <833699.73560.qm@web36501.mail.mud.yahoo.com> References: <833699.73560.qm@web36501.mail.mud.yahoo.com> Message-ID: On 25 February 2010 00:42, Gordon Swobe wrote: > --- On Wed, 2/24/10, Stathis Papaioannou wrote: > >> My claim is that this effect on behaviour *is* the NCC, so that if it is >> reproduced, whether by a digital computer, beer cans and toilet paper or >> a little man pulling levers then the consciousness will also be >> reproduced. > > It seems to me totally absurd to assign consciousness to a pile of beer cans and toilet paper, or to a billion men pulling levers, or to a nation of Chinese people talking to one another on the telephone (Block, 1978) or to an arrangement of cats chasing mice, or to pigeons trained to peck as Turing machines (Pylyshn, 1985) or to any number of other such theoretically possible "Turing equivalent" implementations. > > Functionalists of the computationalist persuasion need to defend those kinds of bizarre notions to support their philosophy. It looks to me like they have fallen prey to an ideology. That's not an argument, it's just rhetorical posturing. It's like saying Turing was wrong because how could a device made of beer cans and toilet paper ever run a PS3 game? -- Stathis Papaioannou From msd001 at gmail.com Wed Feb 24 14:08:22 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 24 Feb 2010 09:08:22 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <869157.52897.qm@web36502.mail.mud.yahoo.com> References: <869157.52897.qm@web36502.mail.mud.yahoo.com> Message-ID: <62c14241002240608w34299961w53a5c3fe2a0e7901@mail.gmail.com> On Wed, Feb 24, 2010 at 8:52 AM, Gordon Swobe wrote: > I have no association with John K Clark. > > Gordon Swobe Ironically, you keep mentioning his name then signing yours. If your concern is search engines then you ARE associating your name with his by repeatedly listing them together. From gts_2000 at yahoo.com Wed Feb 24 14:08:17 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 24 Feb 2010 06:08:17 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <761633.70037.qm@web36505.mail.mud.yahoo.com> --- On Wed, 2/24/10, Stathis Papaioannou wrote: > That's not an argument, it's just rhetorical posturing. I call it an appeal to intuition. I think most people find it intuitively implausible that we would create a new conscious entity if we trained a large group of pigeons to type in a Turing equivalent pattern. But that's what the computationalist theory of mind implies. -gts From stefano.vaj at gmail.com Wed Feb 24 15:04:16 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 24 Feb 2010 16:04:16 +0100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: References: <552218.15686.qm@web36504.mail.mud.yahoo.com> <580930c21002240315t363c6552mb1fee6b7b70a31e2@mail.gmail.com> Message-ID: <580930c21002240704y63e6d6ddr5ffdd292b3c60404@mail.gmail.com> On 24 February 2010 13:24, Stathis Papaioannou wrote: > The physical Church-Turing thesis has not been proved. Non-computable > functions exist as mathematical objects, but it is not known if they > play a role in physics, or in the physics of the brain in particular. 
I take "non-computable" in this sense simply to mean that there are no algoritmic shortcuts, and that you have to run the system (or any emulation thereof) to see where it leads. Remaining in the field of cellular automata, a few instances of such scenario are easily found. But this does mean that the final outcome of step "n" cannot be determined by any universal computer which goes through the very same steps. Very possibly, with a definite loss of performance. In this sense, I think one could be reasonably argue that a human brain might be the most efficient way to produce a human identity. Many AGI partisans take on the contrary a little too much for granted that an electronic computer could easily compete with it... -- Stefano Vaj From gts_2000 at yahoo.com Wed Feb 24 14:59:02 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 24 Feb 2010 06:59:02 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: <62c14241002240608w34299961w53a5c3fe2a0e7901@mail.gmail.com> Message-ID: <108894.28367.qm@web36506.mail.mud.yahoo.com> I attempted meaningful dialog with Clark on a number of occasions. It didn't work out, so he created this thread as a PR campaign against me. Disgraceful. -gts From jonkc at bellsouth.net Wed Feb 24 16:42:32 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 24 Feb 2010 11:42:32 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <35968.58084.qm@web36504.mail.mud.yahoo.com> References: <35968.58084.qm@web36504.mail.mud.yahoo.com> Message-ID: On Feb 24, 2010 Gordon Swobe wrote: > I have no association with John K Clark. And you're not my son, I have no son. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Feb 24 17:18:21 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 24 Feb 2010 18:18:21 +0100 Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <35968.58084.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c21002240918l4ae1f5b0v7951b0679db6c687@mail.gmail.com> 2010/2/24 John Clark : > I have no son. Mmhhh. Can you prove that? Or should we take you at your word? :-) -- Stefano Vaj From jonkc at bellsouth.net Wed Feb 24 17:45:29 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 24 Feb 2010 12:45:29 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <216959.17909.qm@web36502.mail.mud.yahoo.com> References: <216959.17909.qm@web36502.mail.mud.yahoo.com> Message-ID: <8B8A1D2E-A70F-4589-9646-7B6959FCE556@bellsouth.net> Since my last post Gordon Swobe has posted 5 times. > No sir. I have never written a program that had consciousness. And I will tell you exactly how Swobe knows this, because he as never written a program that acted intelligently. > it seems to me totally absurd to assign consciousness to a pile of beer cans and toilet paper, or to a billion men pulling levers, or to a nation of Chinese people talking to one another on the telephone (Block, 1978) or to an arrangement of cats chasing mice, or to pigeons trained to peck as Turing machines (Pylyshn, 1985) or to any number of other such theoretically possible "Turing equivalent" implementations. We are asked to accept that certain ideas are untrue entirely because of the incredulity of a person who goes by the name of Gordon Swobe. 
Never mind that Turing proved that theoretically you could get any behavior you wanted out of one of his machines, never mind that Darwin showed that if it were not linked with intelligence, consciousness would never exist on planet Earth, never mind that the fossil record shows that strong emotion existed hundreds of millions of years before advanced intelligence; none of that matters, Gordon Swobe thinks a beer can computer is kinda goofy, and if Swobe thinks something is odd that proves it could never exist. > Functionalists of the computationalist persuasion need to defend those kinds of bizarre notions to support their philosophy. But Swobe believes a thinking machine made out of grey goo is not bizarre because goo is inherently more logical than beer cans or toilet paper. > I think most people find it intuitively implausible that we would create a new conscious entity if we trained a large group of pigeons to type in a Turing equivalent pattern. What Swobe says above is correct but it tells us nothing about nature. It's intuitively implausible that an object in motion will stay in motion unless acted on by another force, it's intuitively implausible that simultaneity is not an absolute property, it's intuitively implausible that one object can be in two places at the same time; and yet all these things are true. There is a reason our intuition stinks in these areas: it's because the conditions where these facts become important were unlikely to be encountered by our ape-like ancestors, so Evolution had no reason to make our intuition good in these areas. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 24 19:33:11 2010 From: spike66 at att.net (spike) Date: Wed, 24 Feb 2010 11:33:11 -0800 Subject: [ExI] How not to make a thought experiment In-Reply-To: <580930c21002240918l4ae1f5b0v7951b0679db6c687@mail.gmail.com> References: <35968.58084.qm@web36504.mail.mud.yahoo.com> <580930c21002240918l4ae1f5b0v7951b0679db6c687@mail.gmail.com> Message-ID: > ...On Behalf Of Stefano Vaj > > 2010/2/24 John Clark : > > I have no son. > > Mmhhh. Can you prove that? Or should we take you at your word? > > :-) Stefano Vaj Perhaps he means no sons that he knows of. spike {8^D From pharos at gmail.com Wed Feb 24 20:21:11 2010 From: pharos at gmail.com (BillK) Date: Wed, 24 Feb 2010 20:21:11 +0000 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: References: <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> <2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com> <62c14241002221711x77d28922kff82a8e14181b67f@mail.gmail.com> <2d6187671002221722u5c6e1523me1cdc534edc91ad5@mail.gmail.com> Message-ID: On Tue, Feb 23, 2010 at 3:28 AM, spike wrote: > My extropian friends, this place is a PARTY! We have deep ideas and often > discuss heavy topics, but we are here to have FUN too. Eliezer assures us > that there is an infinite amount of fun, so even if we use up an > infinitesimal fraction of it, there is plenty of time for fun. If we manage > radical life extension, then we will eventually solve all of humanity's most > persistent problems. Then our task will be to entertain each other. So > lets get in practice, shall we? > > Ah ha! I see the problem.
BillK From pharos at gmail.com Wed Feb 24 20:26:00 2010 From: pharos at gmail.com (BillK) Date: Wed, 24 Feb 2010 20:26:00 +0000 Subject: [ExI] better self-transcendence through selective brain damage In-Reply-To: References: <7A7A0CB2766A4C879426F3F66CD94F0C@spike> <87670FEF1360437981B4B84931C5FF94@spike> <20100212221046.5.qmail@syzygy.com> <710b78fc1002141626o4f8673f3x61c387388126f17b@mail.gmail.com> <2d6187671002221653k60ffede6k18fda22528da9e6@mail.gmail.com> <62c14241002221711x77d28922kff82a8e14181b67f@mail.gmail.com> <2d6187671002221722u5c6e1523me1cdc534edc91ad5@mail.gmail.com> Message-ID: On Wed, Feb 24, 2010 at 8:21 PM, BillK wrote: > Ah ha! I see the problem. > > > After you click on the link, wait a minute while it installs. :) BillK From nebathenemi at yahoo.co.uk Wed Feb 24 20:09:46 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Wed, 24 Feb 2010 20:09:46 +0000 (GMT) Subject: [ExI] Partying (was re:Is the brain a digital computer?) In-Reply-To: Message-ID: <40563.96196.qm@web27008.mail.ukl.yahoo.com> John wrote "Spike, thank you for the reminder that this email list is here for us to ENJOY ourselves! : ) I love the whole idea of this place being a big party (following Eliezer's rule of infinite fun, vs. Eliezer's notorious organization known as the "Committee for Investigating Un-Extropian Activities")." I recently re-read "Great Mambo Chicken and the Transhuman Condition" (lousy title, it never did explain the importance of Mambo Chicken to transhumanism) and it mentions in there "The Other Side Party" - the multiple selves of all the transhumanists who left earth meet up on the far side of the galaxy having explored it all, and then party! One problem cited is that several billion Keith Hensons plus all his group-mind friends would consume so much beer, there may be a black hole formed by beer cans. On the other hand, this may be where we can find enough beer cans to make a functioning brain emulation, just to posthumously prove Gordon wrong :) Tom From spike66 at att.net Wed Feb 24 22:14:34 2010 From: spike66 at att.net (spike) Date: Wed, 24 Feb 2010 14:14:34 -0800 Subject: [ExI] Partying (was re:Is the brain a digital computer?) In-Reply-To: <40563.96196.qm@web27008.mail.ukl.yahoo.com> References: <40563.96196.qm@web27008.mail.ukl.yahoo.com> Message-ID: <9684CE963FCF4B58B222EFD84AFE86D9@spike> > ...On Behalf Of Tom Nowell > ... > I recently re-read "Great Mambo Chicken and the Transhuman > Condition" (lousy title, it never did explain the importance > of Mambo Chicken to transhumanism)... The weird title was to sell the book. Hey, it worked on me. Before I saw that in about 1989, I had never heard of transhumanism, but I looked it over in the bookstore and bought the book, entered it on my list, set it on the shelf and forgot about it for about a year. Those were busy times with my career and grad school. Then I was at a gathering of some sort (I honestly do not remember what it was, might have been some electronics geek future tech meeting or something) and was introduced to Ed Regis. His name sounded familiar, but I failed to connect it with the book at first. After that party I went home and read Mambo Chicken, which I found most entertaining. It was in that time frame that I read Eric Drexler's book on nanotechnology, which was not as well written as Mambo Chicken but far more electrifying in subject matter. After that I met Keith at a cryonics party if I recall correctly, in about 1991 or 92.
Ed Regis showed up at one of the Extrocons, might have been Extro4 over at Berkeley? I find it amusing that I kept running into the same people over the years, from what I thought were disparate interests. They seemed to converge on Extropians. spike From thespike at satx.rr.com Wed Feb 24 23:14:55 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 24 Feb 2010 17:14:55 -0600 Subject: [ExI] Great Mambo Chicken In-Reply-To: <9684CE963FCF4B58B222EFD84AFE86D9@spike> References: <40563.96196.qm@web27008.mail.ukl.yahoo.com> <9684CE963FCF4B58B222EFD84AFE86D9@spike> Message-ID: <4B85B2EF.6070301@satx.rr.com> On 2/24/2010 4:14 PM, spike wrote: >> ...On Behalf Of Tom Nowell >> I recently re-read "Great Mambo Chicken and the Transhuman >> Condition" (lousy title, it never did explain the importance >> of Mambo Chicken to transhumanism)... > The weird title was to sell the book. Hey, it worked on me. Before I saw > that in about 1989, I had never heard of transhumanism, but I looked it over > in the bookstore and bought the book A great, weird, grabby title! I found it in the local library and wolfed it down with delight. I suspect it was my first introduction to transhumanism as such, and it was certainly a big influence on my writing THE SPIKE a few years later. Regis's later books never really had the same pizazz. (Although I admit I haven't read his Einstein, Gödel & Co.: Genialität und Exzentrik - Die Princeton-Geschichte.) Damien Broderick From spike66 at att.net Thu Feb 25 01:11:04 2010 From: spike66 at att.net (spike) Date: Wed, 24 Feb 2010 17:11:04 -0800 Subject: [ExI] Great Mambo Chicken In-Reply-To: <4B85B2EF.6070301@satx.rr.com> References: <40563.96196.qm@web27008.mail.ukl.yahoo.com><9684CE963FCF4B58B222EFD84AFE86D9@spike> <4B85B2EF.6070301@satx.rr.com> Message-ID: > ...On Behalf Of > Damien Broderick ... > Subject: Re: [ExI] Great Mambo Chicken > > ... > Regis's later books never really had the same pizazz... > > Damien Broderick There are plenty of us here who might agree Mambo Chicken was Regis' best work. That I find odd, for I recall he was a young-ish guy when I met him in 1989, about 40-45ish. Perhaps it was just the subject matter? No, Mambo Chicken had a definite flair, or a sort of playful bounciness to it that seems lacking in the later Regis material. spike From thespike at satx.rr.com Thu Feb 25 01:49:08 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 24 Feb 2010 19:49:08 -0600 Subject: [ExI] Great Mambo Chicken In-Reply-To: References: <40563.96196.qm@web27008.mail.ukl.yahoo.com><9684CE963FCF4B58B222EFD84AFE86D9@spike> <4B85B2EF.6070301@satx.rr.com> Message-ID: <4B85D714.1010400@satx.rr.com> On 2/24/2010 7:11 PM, spike wrote: > > > I recall [Regis] was a young-ish guy when I met him > in 1989, about 40-45ish. b. 1944, so 45. From thespike at satx.rr.com Thu Feb 25 02:42:55 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 24 Feb 2010 20:42:55 -0600 Subject: [ExI] Tipler on hurricanes and IPCC Message-ID: <4B85E3AF.6010201@satx.rr.com> sample: See graphs etc at site.
From cluebcke at yahoo.com Thu Feb 25 04:39:36 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Wed, 24 Feb 2010 20:39:36 -0800 (PST) Subject: [ExI] Tipler on hurricanes and IPCC In-Reply-To: <4B85E3AF.6010201@satx.rr.com> References: <4B85E3AF.6010201@satx.rr.com> Message-ID: <397899.63244.qm@web111213.mail.gq1.yahoo.com> Unsurprisingly for a Pajamas Media article, the author does not provide a link to the claims being challenged, so I'll provide it for you: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-3-3.html The first thing that you'll note is that it's a FAQ, not, as Tipler claims, an "executive summary". This of course has no bearing on the truth of the matter, but does suggest that one ought to be on one's guard for other instances of apparent carelessness. Much more interesting than the FAQ is the actual chapter in AR4 that addresses changes in extreme weather patterns: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-8-3.html I am not a scientist and am not qualified to critically assess either the IPCC's interpretation of the source articles, or the strengths or weaknesses of the source articles themselves. I will note that this chapter of AR4 appears to include a fair number of references to source articles, and specifically calls out instances of (and provides references to) dissenting views. I am similarly unqualified to assess the merits of Ryan Maue's claims, though one should bear in mind that the source being cited here is a blog post, not a peer-reviewed article (or a rebuttal thereof). I would simply point out that the statements "hurricane activity has increased since 1970" and "numbers of hurricanes in the North Atlantic have also been above normal (based on 1981-2000 averages) in 9 of the last 11 years" do not contradict the statement that "hurricane activity has decreased since 2005". Yet Tipler can't stop there. He has to claim that this is a "fraud in the hurricane data". Tipler does not provide any basis for leveling this extremely serious charge, but simply states it as fact. Per my invective against my new friend Rafal the other day, I am deeply concerned that the inability to disagree without trying to generate the maximum anger possible is going to doom many of us to a very dark future. The last thing I'll mention is that the issue has absolutely no bearing whatsoever on the existence or scale of global warming, nor whether such warming (if it exists) is human-caused. It is simply a disagreement over data or the interpretation thereof. Yet some can't resist the opportunity to throw kerosene on the fire. Somebody, some day, is going to lose a life over this. Or more than one. ________________________________ From: Damien Broderick To: ExI chat list Sent: Wed, February 24, 2010 6:42:55 PM Subject: [ExI] Tipler on hurricanes and IPCC From avantguardian2020 at yahoo.com Thu Feb 25 05:19:14 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 24 Feb 2010 21:19:14 -0800 (PST) Subject: [ExI] Josephson Brains was Re: Is the brain a digital computer?
Message-ID: <752259.97063.qm@web65608.mail.ac4.yahoo.com> ----- Original Message ---- > From: scerir > To: ExI chat list > Sent: Tue, February 23, 2010 12:30:54 PM > Subject: Re: [ExI] Is the brain a digital computer? Patrick Crotty, Daniel Schult, Ken Segall (Colgate > University) Josephson junction simulation of > neurons http://arxiv.org/abs/1002.2892 With the goal of understanding the > intricate behavior and dynamics of collections of neurons, we present > superconducting circuits containing Josephson junctions that model > biologically realistic neurons. These "Josephson junction neurons" reproduce > many characteristic behaviors of biological neurons such as action > potentials, refractory periods, and firing thresholds. They can be coupled > together in ways that mimic electrical and chemical synapses. Using existing > fabrication technologies, large interconnected networks of Josephson > junction neurons would operate fully in parallel. They would be orders of > magnitude faster than both traditional computer simulations and biological > neural networks. Josephson junction neurons provide a new tool for exploring > long-term large-scale dynamics for networks of neurons. See also this > page > here http://physicsandcake.wordpress.com/2009/07/20/quantum-neural-networks-1-the-superconducting-neuron-model/ This is actually a very cool idea. I see how Josephson junctions do act a lot like biological neurons. But there are also other features of JJs that are "value added". One thing that springs to mind is that Josephson junctions are also used in super-conducting quantum interference devices (SQUIDs) because they are extraordinarily sensitive to minute magnetic fields. SQUIDs can even measure the tiny magnetic fields produced by biological brains. The implications of this ability are quite interesting. Artificial brains that could detect or perhaps even read the thoughts of other brains might be possible. Kind of like built-in ESP. I will have to think about it more, but I wanted to separate it from the noise of the other thread. Thanks Serafino.
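For anyone who wants to see the neuron-like behavior without wading through the paper, even a single overdamped junction shows the essence of it. In dimensionless units the phase obeys dphi/dt = i(t) - sin(phi): below the critical current (i < 1) the phase sits still, but push the current above 1 for a moment and the phase slips by about 2*pi, giving a voltage pulse oddly reminiscent of an action potential. Here is a cartoon of that in Python; I should stress this is my own toy with made-up parameters, not the two-junction circuit the Colgate group actually proposes:

    # Toy overdamped Josephson junction (RSJ model), dimensionless units:
    #   d(phi)/dt = i(t) - sin(phi)
    # The junction voltage is proportional to the phase velocity.
    import math

    i_bias, dt = 0.9, 0.001            # bias held just below critical
    phi = math.asin(i_bias)            # quiescent phase for i < 1
    for n in range(20000):
        t = n * dt
        i = i_bias + (0.6 if 5.0 < t < 11.0 else 0.0)  # input "synaptic" pulse
        v = i - math.sin(phi)          # voltage ~ d(phi)/dt
        phi += v * dt
        if n % 1000 == 0:
            print("t=%5.2f  v=%6.3f  phi=%7.3f" % (t, v, phi))

While the input pulse holds the current above critical the phase runs and the voltage spikes; when the pulse ends the junction relaxes back to rest, which is at least qualitatively the fire-and-recover behavior the abstract describes.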
Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From jonkc at bellsouth.net Thu Feb 25 05:17:48 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 25 Feb 2010 00:17:48 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <869157.52897.qm@web36502.mail.mud.yahoo.com> References: <869157.52897.qm@web36502.mail.mud.yahoo.com> Message-ID: <3CAA171D-526A-407C-9154-3E36979E24D0@bellsouth.net> On Feb 24, 2010, at 8:52 AM, Gordon Swobe wrote: > I have no association with John K Clark. I understand, but we'll always have Paris. On Feb 24, 2010, at 7:45 AM, Gordon Swobe wrote: > I have no association with John K Clark. Does that mean I'm going to have to find a new costar for our stage production of "A Chorus Line"? John K Clark From spike66 at att.net Thu Feb 25 06:42:13 2010 From: spike66 at att.net (spike) Date: Wed, 24 Feb 2010 22:42:13 -0800 Subject: [ExI] funny headlines again: the creditability of news agencies {8^D In-Reply-To: <028A4ACCFE084A279EB726EA3B69644B@spike> References: <1179706F14CA4ACD8618AADAE32464FE@spike><4B82F673.40703@satx.rr.com> <028A4ACCFE084A279EB726EA3B69644B@spike> Message-ID: Are you guys getting tired of this? I am entertaining myself to no end finding goofs in articles about other agencies' goofs. This one is a borderline case. I think they meant credibility, but creditability is a Scrabble-able word, and the definition almost sorta kinda fits in this context, although not particularly well. I will cut it out if the extropians wax weary of my snarkitude. spike http://www.foxnews.com/scitech/2010/02/24/exclusive-climate-panel-announce-significant-changes/?test=latestnews From thespike at satx.rr.com Thu Feb 25 07:05:36 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 25 Feb 2010 01:05:36 -0600 Subject: [ExI] Josephson Brains was Re: Is the brain a digital computer? In-Reply-To: <752259.97063.qm@web65608.mail.ac4.yahoo.com> References: <752259.97063.qm@web65608.mail.ac4.yahoo.com> Message-ID: <4B862140.2090705@satx.rr.com> On 2/24/2010 11:19 PM, The Avantguardian wrote: > I see how Josephson junctions do act a lot like biological neurons. But there are also other features of JJs that are "value added". One thing that springs to mind is that Josephson junctions are also used in super-conducting quantum interference devices (SQUIDs) because they are extraordinarily sensitive to minute magnetic fields. SQUIDs can even measure the tiny magnetic fields produced by biological brains. The implications of this ability are quite interesting. Artificial brains that could detect or perhaps even read the thoughts of other brains might be possible. Kind of like built-in ESP. Of course Josephson himself accepts the reality of pre-installed human ESP. Damien Broderick From stathisp at gmail.com Thu Feb 25 11:21:22 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 25 Feb 2010 22:21:22 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <580930c21002240704y63e6d6ddr5ffdd292b3c60404@mail.gmail.com> References: <552218.15686.qm@web36504.mail.mud.yahoo.com> <580930c21002240315t363c6552mb1fee6b7b70a31e2@mail.gmail.com> <580930c21002240704y63e6d6ddr5ffdd292b3c60404@mail.gmail.com> Message-ID: On 25 February 2010 02:04, Stefano Vaj wrote: > On 24 February 2010 13:24, Stathis Papaioannou wrote: >> The physical Church-Turing thesis has not been proved. Non-computable >> functions exist as mathematical objects, but it is not known if they >> play a role in physics, or in the physics of the brain in particular. > > I take "non-computable" in this sense simply to mean that there are no > algorithmic shortcuts, and that you have to run the system (or any > emulation thereof) to see where it leads. > > Remaining in the field of cellular automata, a few instances of such a > scenario are easily found. > > But this does not mean that the final outcome of step "n" cannot be > determined by a universal computer which goes through the very same > steps, though very possibly with a definite loss of performance. > > In this sense, I think one could reasonably argue that a human > brain might be the most efficient way to produce a human identity. > Many AGI partisans, on the contrary, take it a little too much for granted > that an electronic computer could easily compete with it...
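That much is easy to illustrate. Take Wolfram's rule 30: as far as anyone knows there is no closed-form shortcut to, say, its centre column, and the only way to get generation n is to run the rule n times, which is also just what a universal computer emulating it would do, possibly more slowly. A throwaway sketch in Python (the sizes and the wraparound ring are arbitrary choices, purely for illustration):

    # Rule 30: next cell = left XOR (centre OR right).  No shortcut is
    # known; to see generation n you iterate the rule n times.
    def step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    cells = [0] * 41
    cells[20] = 1                      # single seed cell in the middle
    for t in range(20):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells)

In the stricter mathematical sense, though, non-computability means something stronger than the mere absence of shortcuts.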
There are many examples of non-computable numbers, functions and problems in mathematics: http://en.wikipedia.org/wiki/Computable_number http://en.wikipedia.org/wiki/Computable_function http://en.wikipedia.org/wiki/List_of_undecidable_problems -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Feb 25 13:39:46 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 25 Feb 2010 05:39:46 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <113925.2075.qm@web36504.mail.mud.yahoo.com> --- On Wed, 2/24/10, Stathis Papaioannou wrote: > Could you please clarify that your definition of weak AI is > that it behaves exactly like strong AI according to the strong AI's > with which it interacts? I define weak AI as capable of passing the Turing test but having no subjective mental states. Strong AI also passes the TT, of course, and has subjective states. And yes weak AI would fool strong AI in the TT, just as it fools humans with "strong I". > Can you at least see why it looks like you are contradicting yourself? You contradicted yourself when you wrote as you did yesterday of "functionally identical but unconscious" brain components. This is why I wrote that making such a thing would be like trying to draw a square triangle. You took my comment wrongly to mean that I deny the possibility of weak AI. You claim to deny the possibility of weak AI, or claim that weak AI = strong AI. That seems to me incoherent. -gts From stathisp at gmail.com Thu Feb 25 14:05:16 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 26 Feb 2010 01:05:16 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <113925.2075.qm@web36504.mail.mud.yahoo.com> References: <113925.2075.qm@web36504.mail.mud.yahoo.com> Message-ID: On 26 February 2010 00:39, Gordon Swobe wrote: > --- On Wed, 2/24/10, Stathis Papaioannou wrote: > >> Could you please clarify that your definition of weak AI is >> that it behaves exactly like strong AI according to the strong AI's >> with which it interacts? > > I define weak AI as capable of passing the Turing test but having no subjective mental states. Strong AI also passes the TT, of course, and has subjective states. > > And yes weak AI would fool strong AI in the TT, just as it fools humans with "strong I". > >> Can you at least see why it looks like you are contradicting yourself? > > You contradicted yourself when you wrote as you did yesterday of "functionally identical but unconscious" brain components. This is why I wrote that making such a thing would be like trying to draw a square triangle. You took my comment wrongly to mean that I deny the possibility of weak AI. I have been at pains to say that the brain component is functionally identical from the point of view of its external behaviour. You would then be able to drop it into the brain and the remaining biological part would have to function normally also, including in its case any consciousness it might normally generate. There would be no need for a surgeon to make further alterations to the brain: in fact, if this was done the brain would *stop* functioning normally. This isn't something anyone would dispute, no matter what their position on computer consciousness. It's just an obvious fact that follows from the definition of the terms. The only way around it would be if there was some miraculous intervention. > You claim to deny the possibility of weak AI, or claim that weak AI = strong AI. That seems to me incoherent.
It may seem coherent to speak of an intelligent entity without consciousness, since we understand what the words mean, but on closer inspection it turns out to be logically impossible, at least if the artificial brain follows the architecture of the biological brain. -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Feb 25 14:20:05 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 25 Feb 2010 06:20:05 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <644933.61129.qm@web36502.mail.mud.yahoo.com> > In any case, if it isn't possible to make weak AI brain components that > would mean that these components utilise non-computable physics, which > you keep insisting is not true. I think you make a fundamental mistake when you assign so much significance to the question of the computability of brain physics, or to the question of the computability of physics in general. Unlike observer-independent objects like mountains and planets, computations are always observer-relative. They exist only relative to the mind of some observer who does the computations. At the most basic level, this explains why it makes no sense to think of the brain as a computer. If the brain really equals a computer then it needs an observer/user, which leads to the homunculus fallacy. -gts From gts_2000 at yahoo.com Thu Feb 25 14:45:19 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 25 Feb 2010 06:45:19 -0800 (PST) Subject: [ExI] Is the brain a digital computer? Message-ID: <492088.37833.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/25/10, Stathis Papaioannou wrote: >> You contradicted yourself when you wrote as you did >> yesterday of "functionally identical but unconscious" brain >> components. This is why I wrote that making such a thing >> would be like trying to draw a square triangle. You took my >> comment wrongly to mean that I deny the possibility of weak >> AI. > > I have been at pains to say that the brain component is > functionally identical from the point of view of its external behaviour. And I have been at pains to explain that you cannot get identical external behavior from an unconscious component. Because of the feedback loop between conscious experience and behavior, the components you have in mind cannot exist. They're oxymoronic. You open a can of worms when you replace any component of the NCC with one of your unconscious dummy components. But you can close that can of worms with enough work on other parts of the patient. You can continue to work on the patient, patching the software so to speak and rewiring the brain in other areas, until finally you create weak AI. The final product of your efforts will pass the Turing test, but its brain structure will differ somewhat from that of the original patient. -gts From gts_2000 at yahoo.com Thu Feb 25 15:29:40 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 25 Feb 2010 07:29:40 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <669205.61101.qm@web36508.mail.mud.yahoo.com> --- On Thu, 2/25/10, Stathis Papaioannou wrote: > It may seem coherent to speak of an intelligent entity > without consciousness, since we understand what the words mean, but > on closer inspection it turns out to be logically impossible, at > least if the artificial brain follows the architecture of the biological > brain. Why should an unconscious weak AI brain follow the architecture of a conscious biological brain? 
It seems you want to burden AI researchers with a pointless constraint. -gts From jonkc at bellsouth.net Thu Feb 25 16:08:09 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 25 Feb 2010 11:08:09 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <113925.2075.qm@web36504.mail.mud.yahoo.com> References: <113925.2075.qm@web36504.mail.mud.yahoo.com> Message-ID: Since my last post Gordon Swobe has posted 4 times. > I define weak AI as capable of passing the Turing test but having no subjective mental states. Another good definition of "weak AI" is the only type of intelligence Evolution could produce if consciousness were not linked with intelligence. > Strong AI also passes the TT, of course, and has subjective states. And strong AI CANNOT be produced by Evolution unless Swobe is wrong. > Why should an unconscious weak AI brain follow the architecture of a conscious biological brain? It seems you want to burden AI researchers with a pointless constraint. It seems you want to burden Evolution with a pointless constraint. > You claim to deny the possibility of weak AI, or claim that weak AI = strong AI. That seems to me incoherent. Lots of things seem incoherent to someone like Swobe who is completely innocent of any knowledge about how Darwin's Theory of Evolution actually works. > you cannot get identical external behavior from an unconscious component. And yet in his next breath Swobe assures us the Turing Test cannot provide even a hint of underlying consciousness; as I said before consistency is not Swobe's strong point. > Unlike observer-independent objects like mountains and planets, computations are always observer-relative. They exist only relative to the mind of some observer who does the computations. So to observer A 2+2 = 4, but to observer B it is 5, and to observer C it is 3, and to observer D it is both 3 and 5 but never 4. Does anybody else think that is dumb? I'd like to say something about Swobe's unique debating style: ignore any substantial criticisms brought up and hope people will just forget about them; instead just keep saying the same tired old arguments over and over and hope pure repetition will eventually wear down your opponent. I didn't find Swobe convincing two and a half months and many hundreds of posts ago when he started all this, and today I find that Swobe's ideas age like a fine milk. I would humbly suggest that it might make for more interesting reading if rather than go on and on about a malevolent person "who goes by the name of John K Clark" he might try actually defending himself against the points raised. I don't claim that this radical change in tactics would convince more people to his way of thinking, but it would make Swobe's posts more interesting. John K Clark From natasha at natasha.cc Thu Feb 25 16:48:06 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 25 Feb 2010 10:48:06 -0600 Subject: [ExI] Book: Online Worlds: Convergence of the Real and the Virtual (Giulio Prisco authors chapter) Message-ID: <152C4DBBE6B24B3C848A3E275827CC90@DFC68LF1> I'd like to suggest this book! Description: "Virtual worlds are persistent online computer-generated environments where people can interact, whether for work or play, in a manner comparable to the real world. The most popular current example is World of Warcraft, a massively multiplayer online game with eleven million subscribers.
However, other virtual worlds, notably Second Life, are not games at all but internet-based collaboration contexts in which people can create virtual objects, simulated architecture, and working groups. "This book brings together an international team of highly accomplished authors to examine the phenomena of virtual worlds, using a range of theories and methodologies to discover the principles that are making virtual worlds increasingly popular, and which are establishing them as a major sector of human-centered computing." http://www.amazon.com/Online-Worlds-Convergence-Human-Computer-Interaction/dp/1848828241/ref=sr_1_1?ie=UTF8&s=books&qid=1266739578&sr=8-1 Giulio's chapter is titled: "Future Evolution of Virtual Worlds as Communication Environments" [Blog here http://giulioprisco.blogspot.com/ ] Natasha Vita-More From msd001 at gmail.com Thu Feb 25 20:09:28 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 25 Feb 2010 15:09:28 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <113925.2075.qm@web36504.mail.mud.yahoo.com> Message-ID: <62c14241002251209g78eba372sc25bdab8b760ef6e@mail.gmail.com> 2010/2/25 John Clark : > And strong AI CANNOT be produced by Evolution unless Swobe is wrong. So... do humans have strong AI? Evolution produced us (barring Creationist / Intelligent Designer Theory) Oh wait, that's "artificial intelligence" - I got confused with "actual intelligence" Maybe we should have a term that disambiguates what we're discussing. I considered "human intelligence" but that seems like a slanderous put-down. "Homo sapiens Intelligence" is too narrow and rules out dolphins, monkeys, elephants, dogs, etc. So then I was thinking of "Fleshly Intelligence" but imagined the objection to a contrivance using slabs of beef and a complex network of pulleys and buckets... Really digging down to what evolution has produced in brains is a kind of chemical dervish that keeps whirling about within the confines of fuel/food and past performance. Is it complete hubris to imagine that some number of these brains could reflect on their own function long enough to reproduce the effect using materials evolution did not? I don't think so. We have yet to put the right clever monkeys to work together successfully, but that's a combinatorial problem since the Internet has all but solved the engineering issue. From lacertilian at gmail.com Thu Feb 25 20:18:09 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 25 Feb 2010 12:18:09 -0800 Subject: [ExI] Continuity of experience. In-Reply-To: References: <8B916321-956B-41CE-934C-E134B9B30B8C@bellsouth.net> Message-ID: John Clark : > >Spencer Campbell wrote: >> General anesthetics do not cause total cessation of activity in the brain. > > There is always something going on in the brain, even beheading or a bullet to the brain won't stop chemical reactions continuing there, but the brain is not important, the mind is, and general anesthesia will totally stop the mind. But who cares, it'll start up again. I'm not sure why you're so convinced that general anaesthesia totally stops the mind.
Beheading DOES eventually result in a brain that doesn't operate in any way like a brain, and cannot be considered to instantiate anything resembling a mind, even if you have to wait a week or two. John Clark : > I don't understand all this worry about continuity; if objectively your mind stops for a century or two, subjectively it will seem like it never stopped at all but the rest of the world made a discontinuous jump, and after all subjectivity is the only thing that's important. First I'll note, again, that it's an inherently irrational subject. It bears repeating. Logic is only going to help here insofar as it makes the argument more viscerally convincing; we don't really have any axioms to start with. Anyway: yes, that's all true. However, you're making the assumption that your mind does start up again, which in turn makes the assumptions that (a) a mind is starting up and (b) it is your mind. There is no scientific way to determine that a mind is your mind, which indicates that thinking of any mind as being your mind is a fundamentally incoherent concept to begin with. Nevertheless, I think I have a mind. I would like to continue having a mind, because I am genetically programmed to. This makes me extremely uneasy when I consider completely terminating all of the brain functions giving rise to my mind, for any period of time, even if I have a guarantee that a perfect functional duplicate of my brain will be running sooner or later. It doesn't matter whether it lasts a hundred years or a hundred seconds. Absolute brain death is *scary*. John Clark : > It's only puzzling if you think of "I" as a fixed unchanging thing. The you of yesterday and the you of today are not identical but they are both Spencer Campbell. I think of it this way: I am a fixed, unchanging, immaterial subject. I possess a variety of loose, transient, material objects, including among them all the subatomic parts making up what I call "my body". I also possess some immaterial objects which are nevertheless still loose and transient, and I group all of them together under the general header "my mind". Realistically, this is as enlightened as I can get right now. I could make myself sound a lot better if I used more philosophical froo-froo language to show that I generally understand what the great thinkers think about the topic, but I don't see anything to gain by doing that. Better to articulate exactly what I really believe, as simply and precisely as I can, so that others have the best possible chance to find an internal inconsistency that I myself was blind to. John Clark : > I can't see how the rate of change could have any bearing, and after all even the reconfiguration given to you by a stick of dynamite would seem quite slow and plodding by some time scales. Maybe my soul is attached to my mind by some kind of ethereal rubber band, and if you pull too fast or too hard the connection will snap! Ever think of that? Hmm? (Disclaimer: the preceding paragraph was entirely silly. You can tell because I used the term "my soul". Nevertheless, it is an embarrassingly passable metaphor for my gut impression of the problem.) From msd001 at gmail.com Thu Feb 25 20:29:31 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 25 Feb 2010 15:29:31 -0500 Subject: [ExI] Continuity of experience.
In-Reply-To: References: <8B916321-956B-41CE-934C-E134B9B30B8C@bellsouth.net> Message-ID: <62c14241002251229u3188bccey436215ad7ccdced6@mail.gmail.com> > Maybe my soul is attached to my mind by some kind of ethereal rubber > band, and if you pull too fast or too hard the connection will snap! > Ever think of that? Hmm? > > (Disclaimer: the preceding paragraph was entirely silly. You can tell > because I used the term "my soul". Nevertheless, it is an > embarrassingly passable metaphor for my gut impression of the > problem.) Hey John, what if that already happened and that's why you can't find it anymore? It proves that those who have a soul should take good care of it, else something bad might happen to it. From lacertilian at gmail.com Thu Feb 25 20:44:25 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 25 Feb 2010 12:44:25 -0800 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: Stathis Papaioannou : > The mimic would have to not only know what I know and believe that he > is me, but actually have the same sorts of mental states as I do. Elaborate, please. I'm guessing that you're equating "mental state" with "computational state of a brain", in which case you could measure the mimic's similarity in two ways: 1): Observed functional I/O behavior. Very, very imprecise in the proposed absence of rigorous virtual simulations. 2): Low-level neural structure. The same program can be implemented in a wildly different configuration of matter, though, so this information is useless without: that's right, rigorous virtual simulations. Basically I am stressing the profound uncertainty involved here. However, even assuming that you COULD determine the mental states cycled through by any given brain with enough precision to differentiate between individuals, I don't see any obvious reason to believe that sufficiently-similar states would somehow "attract" your M to occupy what was (moments before) a completely distinct body. I approve of the way you phrased it: "the same sorts of mental states". It gives a nod to the fact that even within your real presently-existing brain, you will never hit exactly the same mental state twice. Not within the lifetime of the universe, at any rate. So it is, again, a matter of degree: how close does the mimicking brain have to be to a supposed earlier mental state in order to count as a later mental state of a foreign mind? The question is very confusing! If you represent a given mental state as just one very large number, written in binary let's say, it is easier to grasp. The similarity between A and B is directly proportional to A AND B, that is, to the count of shared bits. Simple. But: distressingly fuzzy. Thinking this way does not help me locate the boundary between continuity and discontinuity, except to point out that such a boundary will be very ill-defined if it exists at all.
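For what it's worth, here is the toy arithmetic I have in mind, in Python, pretending (falsely, no doubt) that a mental state really could be flattened into a handful of bits:

    # Toy "similarity" between two mental-state bit strings: count the
    # bits they share.  Note the answer is a degree, never a sharp yes/no.
    a = 0b1011011101100101
    b = 0b1011001101110101
    overlap = bin(a & b).count('1')                  # shared set bits
    total = max(a.bit_length(), b.bit_length())
    print(overlap, total, overlap / float(total))    # -> 9 16 0.5625

Even in this cartoon there is no natural threshold at which A "counts as" B, which is exactly the trouble.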
From lacertilian at gmail.com Thu Feb 25 20:50:47 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 25 Feb 2010 12:50:47 -0800 Subject: [ExI] Continuity of experience. In-Reply-To: <62c14241002251229u3188bccey436215ad7ccdced6@mail.gmail.com> References: <8B916321-956B-41CE-934C-E134B9B30B8C@bellsouth.net> <62c14241002251229u3188bccey436215ad7ccdced6@mail.gmail.com> Message-ID: Mike Dougherty : > Hey John, what if that already happened and that's why you can't find > it anymore? > > It proves that those who have a soul should take good care of it, else > something bad might happen to it. If he's ever had a root canal, it's a decent bet! Ordinary sedatives work for most people, but there's always a chance. Remember kids: brush your teeth. If you don't, the anesthesiologist will take your soul away. From gts_2000 at yahoo.com Thu Feb 25 21:37:39 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 25 Feb 2010 13:37:39 -0800 (PST) Subject: [ExI] Continuity of experience. In-Reply-To: Message-ID: <283357.32131.qm@web36506.mail.mud.yahoo.com> --- On Thu, 2/25/10, Spencer Campbell wrote: > There is no scientific way to determine that a mind is your > mind, which indicates that thinking of any mind as being your > mind is a fundamentally incoherent concept to begin with. I see. Never mind common sense. Who needs it anyway? Nobody really knows what they think, because after all their thoughts might exist in someone else's mind. Or whatever. > Nevertheless, I think I have a mind. Ah, thanks for saying that! I think that puts you head and shoulders above the rest. -gts From gts_2000 at yahoo.com Thu Feb 25 22:01:54 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 25 Feb 2010 14:01:54 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <333352.39982.qm@web36503.mail.mud.yahoo.com> --- On Tue, 2/23/10, Dave Sill wrote: > Blood is tangible, mental phenomena are not. Of *course* > they're in different categories. You argue then for mind/matter dualism. > You can hold a pint of blood in your hand, but you can't > hold a pint of thought Again you argue for mind/matter dualism. The question of how to answer you has weighed heavily on my mind. If we want to call ourselves materialists (a noble motivation, I think) then it seems to me we must say that thoughts have mass. This does not mean however that one's brain becomes more massive when one has thoughts; it means only that one has thoughts by virtue of having a massive brain. It should seem obvious, for example, that thoughts don't exist in the absence of brain matter. -gts From hkeithhenson at gmail.com Thu Feb 25 22:42:41 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 25 Feb 2010 15:42:41 -0700 Subject: [ExI] extropy-chat Digest, Vol 77, Issue 47 In-Reply-To: References: Message-ID: On Wed, Feb 24, 2010 at 10:06 PM, Christopher Luebcke wrote: snip > I am deeply concerned that the inability to disagree without trying to generate the maximum anger possible is going to doom many of us to a very dark future. > The last thing I'll mention is that the issue has absolutely no bearing whatsoever on the existence or scale of global warming, nor whether such warming (if it exists) is human-caused. It is simply a disagreement over data or the interpretation thereof. Yet some can't resist the opportunity to throw kerosene on the fire. > > Somebody, some day, is going to lose a life over this. Or more than one. On the substance (global warming/climate change/whatever) I doubt it will be easy to sort out global warming deaths from the much larger and sooner problem of running out of cheap energy. But the "dark future" is something that might be addressed with evolutionary psychology. If some behavior is widespread, it's probably due to positive selection for that behavior or the underlying physical brain structures over a long evolutionary time. Humans seem to have psychological mechanisms that are rarely switched on in full force. The mechanism behind Stockholm syndrome (capture-bonding) is one of them. I make the case that it manifests as BDSM and is mildly switched on in military basic training.
It's fairly easy to see that there was fairly intense selection for this (conditionally turned on) mechanism when our ancestors lived as hunter gatherers. It's fairly easy to see where human "war mode" would be switched on by anticipation of a bleak future or (much faster) by being attacked. I don't have any good ideas about why people get so angry over global warming arguments. Keith From stathisp at gmail.com Thu Feb 25 22:49:19 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 26 Feb 2010 09:49:19 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <644933.61129.qm@web36502.mail.mud.yahoo.com> References: <644933.61129.qm@web36502.mail.mud.yahoo.com> Message-ID: On 26 February 2010 01:20, Gordon Swobe wrote: >> In any case, if it isn't possible to make weak AI brain components that >> would mean that these components utilise non-computable physics, which >> you keep insisting is not true. > > I think you make a fundamental mistake when you assign so much significance to the question of the computability of brain physics, or to the question of the computability of physics in general. > > Unlike observer-independent objects like mountains and planets, computations are always observer-relative. They exist only relative to the mind of some observer who does the computations. > > At the most basic level, this explains why it makes no sense to think of the brain as a computer. If the brain really equals a computer then it needs an observer/user, which leads to the homunculus fallacy. I'm not asking whether the brain is a computer. I'm asking whether it is possible to reproduce the externally observable behaviour of the brain or brain components with a computer that has appropriate sensors and effectors. It is logically possible that the brain is not a computer, but its behaviour can still be copied by a computer; or indeed by another device, being neither brain nor computer. There is then the further question of whether copying the behaviour would also necessarily result in copying of the consciousness. I think it would, since otherwise it would be possible to make partial zombies, which you agree are absurd. -- Stathis Papaioannou From spike66 at att.net Thu Feb 25 22:49:05 2010 From: spike66 at att.net (spike) Date: Thu, 25 Feb 2010 14:49:05 -0800 Subject: [ExI] extropy-chat Digest, Vol 77, Issue 47 In-Reply-To: References: Message-ID: > ...On Behalf Of Keith Henson ... > > I don't have any good ideas about why people get so angry > over global warming arguments... Keith Because these arguments are being used to artificially raise energy prices and taxes? spike From pharos at gmail.com Thu Feb 25 23:42:02 2010 From: pharos at gmail.com (BillK) Date: Thu, 25 Feb 2010 23:42:02 +0000 Subject: [ExI] extropy-chat Digest, Vol 77, Issue 47 In-Reply-To: References: Message-ID: On Thu, Feb 25, 2010 at 10:49 PM, spike wrote: >> ...On Behalf Of Keith Henson >> I don't have any good ideas about why people get so angry >> over global warming arguments... Keith > > Because these arguments are being used to artificially raise energy prices > and taxes? > > Because it has been changed into arguments about politics and money (power) and people get very angry about politics. Even when there is little difference between the two main parties. 
BillK From cluebcke at yahoo.com Thu Feb 25 23:31:56 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Thu, 25 Feb 2010 15:31:56 -0800 (PST) Subject: [ExI] extropy-chat Digest, Vol 77, Issue 47 In-Reply-To: References: Message-ID: <509007.94643.qm@web111203.mail.gq1.yahoo.com> Just to be clear, while global warming may cost between zero and millions of lives, depending on both the current truth and the effects of future actions, my immediate concern is with threats of murder. http://www.timesonline.co.uk/tol/news/environment/article7017905.ece Much like Dr. Tiller, if the rhetoric continues at its current level, somebody, some day, will actually follow through on one of these threats. ________________________________ From: Keith Henson To: extropy-chat at lists.extropy.org Sent: Thu, February 25, 2010 2:42:41 PM Subject: Re: [ExI] extropy-chat Digest, Vol 77, Issue 47 From cluebcke at yahoo.com Thu Feb 25 23:33:49 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Thu, 25 Feb 2010 15:33:49 -0800 (PST) Subject: [ExI] extropy-chat Digest, Vol 77, Issue 47 In-Reply-To: References: Message-ID: <199515.82491.qm@web111216.mail.gq1.yahoo.com> One is free to argue science, or policy, all one wishes. Accusations of fraud, however, should be backed by evidence and followed through with appropriate legal or other actions where possible. Or they should be withdrawn. Death threats should never be made. That's the kind of anger I'm worried about.
________________________________ From: spike To: ExI chat list Sent: Thu, February 25, 2010 2:49:05 PM Subject: Re: [ExI] extropy-chat Digest, Vol 77, Issue 47 From lacertilian at gmail.com Fri Feb 26 00:35:28 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 25 Feb 2010 16:35:28 -0800 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <333352.39982.qm@web36503.mail.mud.yahoo.com> References: <333352.39982.qm@web36503.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > If we want to call ourselves materialists (a noble motivation, I think) then it seems to me we must say that thoughts have mass. Ugh! This is meaningless and misleading. Thoughts aren't even real things. There are such things as thoughts in precisely the same way that there are such things as solitons. http://en.wikipedia.org/wiki/Soliton Solitons are quasiparticles, like plasmons and phonons. They're blatantly imaginary. They have no mass. They are vague perturbations in a continuous field. Nevertheless, it is useful to think of them as discrete objects. Photons make a worse analogy, but are easier to grasp. Photons are real particles that lack mass. Materialism does not say anything even remotely resembling "everything that exists has mass". To imply otherwise is an inexcusable misrepresentation of basic physics. Look at it this way: thoughts are just perturbations in the activities of the brain. This is why Searle's appeals to intuition are not such a great idea. And, I should note, this is an INTJ saying this. I am all kinds of intuitive, but intuition is no more useful when thinking about thoughts than it is when thinking about quantum electrodynamics. It would be far, far more respectful of the reality to ask whether or not a system *is thinking* rather than to ask whether or not it *has thoughts*. Verbs, not nouns. John K. Clark has a point. From thespike at satx.rr.com Fri Feb 26 00:46:42 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 25 Feb 2010 18:46:42 -0600 Subject: [ExI] why anger? In-Reply-To: References: Message-ID: <4B8719F2.9070004@satx.rr.com> On 2/25/2010 5:42 PM, BillK wrote: > On Thu, Feb 25, 2010 at 10:49 PM, spike wrote: >>> ...On Behalf Of Keith Henson >>> I don't have any good ideas about why people get so angry >>> over global warming arguments... Keith a clue: * NEW SCIENTIST issue 2749. * 24 February 2010 Honesty is the best policy for climate scientists FOR many environmentalists, all human influence on the planet is bad. Many natural scientists implicitly share this outlook. This is not unscientific, but it can create the impression that greens and environmental scientists are authoritarian tree-huggers who value nature above people. That doesn't play well with mainstream society, as the apparent backlash against climate science reveals. Environmentalists need to find a new story to tell. Like it or not, we now live in the anthropocene - an age in which humans are perturbing many of the planet's natural systems, from the water cycle to the acidity of the oceans.
We cannot wish that away; we must recognise it and manage our impacts. That is central to our cover story. Johan Rockström, head of the Stockholm Environment Institute in Sweden, and colleagues have distilled recent research on how Earth systems work into a list of nine "planetary boundaries" that we must stay within to live sustainably (see "From ocean to ozone: Earth's nine life-support systems"). It is preliminary work, and many will disagree with where the boundaries are set. But the point is to offer a new way of thinking about our relationship with the environment - a science-based picture that accepts a certain level of human impact and even allows us some room to expand. The result is a breath of fresh air: though we are already well past three of the boundaries, we haven't trashed the place yet. It is in the same spirit that we also probe the basis for key claims in the Intergovernmental Panel on Climate Change's 2007 report on climate impacts (see "Can we trust the IPCC on the big stuff?"). This report has been much discussed since our revelations about its unsubstantiated statement on melting Himalayan glaciers. Why return to the topic? Because there is a sense that the IPCC shares the same anti-human agenda and, as a result, is too credulous of unverified numbers. While the majority of the report is assuredly rigorous, there is no escaping the fact that parts of it make claims that go beyond the science. For example, the chapter on Africa exaggerates a claim about crashes in farm yields, and also highlights projections of increased water stress in some regions while ignoring projections in the same study that point to reduced water stress in other regions. These errors are not trifling. They are among the report's headline conclusions. Some will see our investigation as an unwelcome distraction in a propaganda battle to get action on climate change. But if we are to manage the anthropocene successfully, we need cooler heads and clearer statistics. Above all, we need a dispassionate view of the state of the planet and our likely future impact on it. There's no room for complacency: Rockström's analysis shows us that we face real dangers, but exaggerating our problems is not the way to solve them. From lacertilian at gmail.com Fri Feb 26 00:56:08 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Thu, 25 Feb 2010 16:56:08 -0800 Subject: [ExI] Continuity of experience. In-Reply-To: <283357.32131.qm@web36506.mail.mud.yahoo.com> References: <283357.32131.qm@web36506.mail.mud.yahoo.com> Message-ID: Gordon Swobe : > Nobody really knows what they think, because after all their thoughts might exist in someone else's mind. Or whatever. Fluff up the language a little, take one or two steps toward the metaphorical, and you replicate much of the sentiment behind modern psychology. Jung's theory of shadow halves, for example. Common sense has its limitations. In the event that the two contradict one another, I always favor uncommon sense. From stathisp at gmail.com Fri Feb 26 01:59:42 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 26 Feb 2010 12:59:42 +1100 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: On 26 February 2010 07:44, Spencer Campbell wrote: > Basically I am stressing the profound uncertainty involved here.
> However, even assuming that you COULD determine the mental states > cycled through by any given brain with enough precision to > differentiate between individuals, I don't see any obvious reason to > believe that sufficiently-similar states would somehow "attract" your > M to occupy what was (moments before) a completely distinct body. If I tell you that sneezing destroys M what is your response? Your response is that that's silly: you sneezed a few minutes ago and you're still the same person, and your friend next to you also sneezed (it was a dusty environment) and you aren't in mourning for him either. I could present you with scientific evidence that sneezing increases intracranial pressure and claim that it's this which destroys the M, but you will again reply that this is absurd, since it's self-evident that people survive sneezing. The point is that you already implicitly define M in terms of continuity of mental states. There is no other standard to which you can refer, either objective or subjective, to settle a dispute if we disagree on whether M is preserved. -- Stathis Papaioannou From cluebcke at yahoo.com Fri Feb 26 01:37:28 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Thu, 25 Feb 2010 17:37:28 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: References: <333352.39982.qm@web36503.mail.mud.yahoo.com> Message-ID: <80019.54772.qm@web111205.mail.gq1.yahoo.com> This request seems to keep falling on deaf ears, but I'll state again that a question like "do thoughts have mass" absolutely cannot ever be answered if the people involved do not have a solid working definition of what a "thought" is. If a thought is a unit of information, then no, it doesn't have mass. If you pass a high-powered magnet over the hard drive of my computer, the mass of the hard drive will not change, but the information will be destroyed. So too for adding information back to it. If by "thoughts" you instead mean something like "thinking", then yes, it's a verb, and "thinking" doesn't mass any more (or less) than "running" or "typing". What a massive category error. Do thoughts have mass? Depends on whether you've beaten your wife lately :P ________________________________ From: Spencer Campbell To: ExI chat list Sent: Thu, February 25, 2010 4:35:28 PM Subject: Re: [ExI] Is the brain a digital computer?
I am all kinds of intuitive, but intuition is no more useful when thinking about thoughts than it is when thinking about quantum electrodynamics. It would be far, far more respectful of the reality to ask whether or not a system *is thinking* rather than to ask whether or not it *has thoughts*. Verbs, not nouns. John K. Clark has a point. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From cluebcke at yahoo.com Fri Feb 26 01:43:12 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Thu, 25 Feb 2010 17:43:12 -0800 (PST) Subject: [ExI] why anger? In-Reply-To: <4B8719F2.9070004@satx.rr.com> References: <4B8719F2.9070004@satx.rr.com> Message-ID: <232041.33086.qm@web111212.mail.gq1.yahoo.com> Again, the anger that's troubling is not anger that the IPCC published a very large report that is marred in certain areas by bias. The anger that's troubling is due to a large number of voices stridently attempting to convince a large number of people, with a great deal of apparent success, that the scientists who believe in AGW are part of a conspiracy to deprive them of their rights, devastate the economy and destroy much that is good about the world. That's when people start making death threats. That's what's going to get somebody killed. ________________________________ From: Damien Broderick To: ExI chat list Sent: Thu, February 25, 2010 4:46:42 PM Subject: [ExI] why anger? On 2/25/2010 5:42 PM, BillK wrote: > On Thu, Feb 25, 2010 at 10:49 PM, spike wrote: >>> ...On Behalf Of Keith Henson >>> I don't have any good ideas about why people get so angry >>> over global warming arguments... Keith a clue: * NEW SCIENTIST issue 2749. * 24 February 2010 Honesty is the best policy for climate scientists FOR many environmentalists, all human influence on the planet is bad. Many natural scientists implicitly share this outlook. This is not unscientific, but it can create the impression that greens and environmental scientists are authoritarian tree-huggers who value nature above people. That doesn't play well with mainstream society, as the apparent backlash against climate science reveals. Environmentalists need to find a new story to tell. Like it or not, we now live in the anthropocene - an age in which humans are perturbing many of the planet's natural systems, from the water cycle to the acidity of the oceans. We cannot wish that away; we must recognise it and manage our impacts. That is central to our cover story. Johan Rockström, head of the Stockholm Environment Institute in Sweden, and colleagues have distilled recent research on how Earth systems work into a list of nine "planetary boundaries" that we must stay within to live sustainably (see "From ocean to ozone: Earth's nine life-support systems"). It is preliminary work, and many will disagree with where the boundaries are set. But the point is to offer a new way of thinking about our relationship with the environment - a science-based picture that accepts a certain level of human impact and even allows us some room to expand. The result is a breath of fresh air: though we are already well past three of the boundaries, we haven't trashed the place yet.
It is in the same spirit that we also probe the basis for key claims in the Intergovernmental Panel on Climate Change's 2007 report on climate impacts (see "Can we trust the IPCC on the big stuff?"). This report has been much discussed since our revelations about its unsubstantiated statement on melting Himalayan glaciers. Why return to the topic? Because there is a sense that the IPCC shares the same anti-human agenda and, as a result, is too credulous of unverified numbers. While the majority of the report is assuredly rigorous, there is no escaping the fact that parts of it make claims that go beyond the science. For example, the chapter on Africa exaggerates a claim about crashes in farm yields, and also highlights projections of increased water stress in some regions while ignoring projections in the same study that point to reduced water stress in other regions. These errors are not trifling. They are among the report's headline conclusions. Some will see our investigation as an unwelcome distraction in a propaganda battle to get action on climate change. But if we are to manage the anthropocene successfully, we need cooler heads and clearer statistics. Above all, we need a dispassionate view of the state of the planet and our likely future impact on it. There's no room for complacency: Rockström's analysis shows us that we face real dangers, but exaggerating our problems is not the way to solve them. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Fri Feb 26 02:16:29 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 25 Feb 2010 21:16:29 -0500 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <333352.39982.qm@web36503.mail.mud.yahoo.com> References: <333352.39982.qm@web36503.mail.mud.yahoo.com> Message-ID: <62c14241002251816r17095a9cqeab4b5b56a70152a@mail.gmail.com> On Thu, Feb 25, 2010 at 5:01 PM, Gordon Swobe wrote: > --- On Tue, 2/23/10, Dave Sill wrote: >> Blood is tangible, mental phenomena are not. Of *course* >> they're in different categories. > > You argue then for mind/matter dualism. > >> You can hold a pint of blood in your hand, but you can't >> hold a pint of thought > > Again you argue for mind/matter dualism. The question of how to answer you has weighed heavily on my mind. > > If we want to call ourselves materialists (a noble motivation, I think) then it seems to me we must say that thoughts have mass. This does not mean however that one's brain becomes more massive when one has thoughts; it means only that one has thoughts by virtue of having a massive brain. Then large animals are smarter than small animals? See how stupid that is? > It should seem obvious, for example, that thoughts don't exist in the absence of brain matter. Playing devil's advocate here: Suppose thoughts exist like radio waves and brains are merely the antenna that tunes them in. If there is no radio present, do you really believe the radio signals stop existing? I'm not arguing for mind/matter dualism. I'm just arguing for fun. From spike66 at att.net Fri Feb 26 03:21:23 2010 From: spike66 at att.net (spike) Date: Thu, 25 Feb 2010 19:21:23 -0800 Subject: [ExI] why anger? In-Reply-To: <4B8719F2.9070004@satx.rr.com> References: <4B8719F2.9070004@satx.rr.com> Message-ID: > ...On Behalf Of Damien Broderick ... > > * NEW SCIENTIST issue 2749.
> * 24 February 2010 > > Honesty is the best policy for climate scientists > > FOR many environmentalists, all human influence on the planet is bad... Ja. This attitude has made me uncomfortable for some time. It seems to carry a fundamental misunderstanding of the process of evolution. Humans evolved, so we are natural. Our works are natural. So if we create a world in which many or most species cannot survive, even accidentally, then that is as natural a process as the beaver creating a dam, flooding out and slaying the local endangered species, along with my brand new goddam pump house. If we alter our environment so that most species cannot survive but humans do just fine (being Africans only recently capable of moving away from the tropics), that represents an evolutionary process, and is natural. If we create an environment that (somehow) supports 100 billion humans but nothing else that does not directly support human life, is that not evolution? Does not that subset of environmentalism recognize that if humans manage to create a singularity and eventually transform all the metals of the solar system into an Mbrain or a bunch of Sbrains, this too is a natural product of evolution? If we then send nanoprobes into the rest of the galaxy to turn other stars' metals into computronium, destroying all indigenous life there but facilitating the thinking of pure thought, is that not evolution in action? spike From spike66 at att.net Fri Feb 26 03:08:02 2010 From: spike66 at att.net (spike) Date: Thu, 25 Feb 2010 19:08:02 -0800 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <80019.54772.qm@web111205.mail.gq1.yahoo.com> References: <333352.39982.qm@web36503.mail.mud.yahoo.com> <80019.54772.qm@web111205.mail.gq1.yahoo.com> Message-ID: On Behalf Of Christopher Luebcke ...If you pass a high-powered magnet over the hard drive of my computer, the mass of the hard drive will not change, but the information will be destroyed... Christopher In every imaginable practical sense, I agree. However, any absolute statement with this bunch of scientific yahoos is a red cape before the raging bull. The argument would go like this: You specified that the magnet destroys information, which represents the creation of entropy, which implies an exothermic process, so energy is released from the system (granted it is an unmeasurably tiny amount, but some). Energy is equivalent to mass: E = mc^2 so mass = E/c^2, so passing the magnet over your drive causes it to lose mass. Granted it might be only a few AMU, so you would need to pass magnets over a mole of drives to cause their collective mass to go down by a few grams, but in any case, OLE! {8^D spike From spike66 at att.net Fri Feb 26 03:09:39 2010 From: spike66 at att.net (spike) Date: Thu, 25 Feb 2010 19:09:39 -0800 Subject: [ExI] why anger? In-Reply-To: <4B8719F2.9070004@satx.rr.com> References: <4B8719F2.9070004@satx.rr.com> Message-ID: > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > Damien Broderick > Sent: Thursday, February 25, 2010 4:46 PM > To: ExI chat list > Subject: [ExI] why anger? > > On 2/25/2010 5:42 PM, BillK wrote: > > On Thu, Feb 25, 2010 at 10:49 PM, spike wrote: > >>> ...On Behalf Of Keith Henson > >>> I don't have any good ideas about why people get so angry > >>> over global warming arguments... Keith > > a clue: > > * NEW SCIENTIST issue 2749.
> * 24 February 2010 > > Honesty is the best policy for climate scientists > > FOR many environmentalists, all human influence on the planet is bad... > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Fri Feb 26 04:04:55 2010 From: spike66 at att.net (spike) Date: Thu, 25 Feb 2010 20:04:55 -0800 Subject: [ExI] why anger? In-Reply-To: References: <4B8719F2.9070004@satx.rr.com> Message-ID: ... > > > > * NEW SCIENTIST issue 2749.
> * 24 February 2010 > > Honesty is the best policy for climate scientists > > FOR many environmentalists, all human influence on the planet is bad... Apologies, accidentally hit send too early. More to the point: it seems to me that an MBrain is the only logical endpoint for evolution. Like a mathematical proof, where we see where we are trying to go, the MBrain is the endpoint. We need only find a path, any path to get there. As imaginative as I am (and modest too) I cannot think of ANY other logical evolutionary endpoint than an MBrain. If the MBrain is our destiny, it doesn't matter if we save every extant species, but it does matter if we maximize thought. Thought is good. Can anyone here propose any other endpoint to evolution besides an MBrain around every star throughout the galaxy? spike From cluebcke at yahoo.com Fri Feb 26 06:43:30 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Thu, 25 Feb 2010 22:43:30 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: References: <333352.39982.qm@web36503.mail.mud.yahoo.com> <80019.54772.qm@web111205.mail.gq1.yahoo.com> Message-ID: <800566.59290.qm@web111216.mail.gq1.yahoo.com> I'm sure you're right about how the argument would (and probably will) go. However, the opening notion that materialists must believe that thoughts have mass seems to come from a preposterous assumption that everything in the material universe has mass; this is inarguably not the case, yet acknowledging the existence of photons does not commit one to dualism. And the argument is still pointless without an agreed-upon definition of "a thought". But I'm sure that won't stop anybody. ________________________________ From: spike To: ExI chat list Sent: Thu, February 25, 2010 7:08:02 PM Subject: Re: [ExI] Is the brain a digital computer? On Behalf Of Christopher Luebcke ...If you pass a high-powered magnet over the hard drive of my computer, the mass of the hard drive will not change, but the information will be destroyed... Christopher In every imaginable practical sense, I agree. However, any absolute statement with this bunch of scientific yahoos is a red cape before the raging bull. The argument would go like this: You specified that the magnet destroys information, which represents the creation of entropy, which implies an exothermic process, so energy is released from the system (granted it is an unmeasurably tiny amount, but some). Energy is equivalent to mass: E = mc^2 so mass = E/c^2, so passing the magnet over your drive causes it to lose mass. Granted it might be only a few AMU, so you would need to pass magnets over a mole of drives to cause their collective mass to go down by a few grams, but in any case, OLE! {8^D spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Feb 26 07:57:47 2010 From: pharos at gmail.com (BillK) Date: Fri, 26 Feb 2010 07:57:47 +0000 Subject: [ExI] why anger? In-Reply-To: References: <4B8719F2.9070004@satx.rr.com> Message-ID: On Fri, Feb 26, 2010 at 4:04 AM, spike wrote: > More to the point: it seems to me that an MBrain is the only logical > endpoint for evolution. Like a mathematical proof, where we see where we > are trying to go, the MBrain is the endpoint. We need only find a path, any > path to get there.
As imaginative as I am (and modest too) I cannot think > of ANY other logical evolutionary endpoint than an MBrain. If the MBrain is > our destiny, it doesn't matter if we save every extant species, but it does > matter if we maximize thought. Thought is good. > > Can anyone here propose any other endpoint to evolution besides an MBrain > around every star throughout the galaxy? > > Yup. This thinking stuff is only useful because survival is such a struggle. Human life is generally a battle for the necessities of life, with occasional good times. Once happiness can be obtained as a normal state, with systems set up to enable the continuation of ecstatic bliss with no effort on our part, then the vast majority of humanity will opt out of the thought game. Couch potatoes to the max. BillK From stathisp at gmail.com Fri Feb 26 12:12:02 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 26 Feb 2010 23:12:02 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <492088.37833.qm@web36508.mail.mud.yahoo.com> References: <492088.37833.qm@web36508.mail.mud.yahoo.com> Message-ID: On 26 February 2010 01:45, Gordon Swobe wrote: > --- On Thu, 2/25/10, Stathis Papaioannou wrote: > >>> You contradicted yourself when you wrote as you did >>> yesterday of "functionally identical but unconscious" brain >>> components. This is why I wrote that making such a thing >>> would be like trying to draw a square triangle. You took my >>> comment wrongly to mean that I deny the possibility of weak >>> AI. >> >> I have been at pains to say that the brain component is >> functionally identical from the point of view of its external behaviour. > > And I have been at pains to explain that you cannot get identical external behavior from an unconscious component. Because of the feedback loop between conscious experience and behavior, the components you have in mind cannot exist. They're oxymoronic. > > You open a can of worms when you replace any component of the NCC with one of your unconscious dummy components. But you can close that can of worms with enough work on other parts of the patient. You can continue to work on the patient, patching the software so to speak and rewiring the brain in other areas, until finally you create weak AI. > > The final product of your efforts will pass the Turing test, but its brain structure will differ somewhat from that of the original patient. You have no problem with the idea that an AI could behave like a human but you don't think it could behave like a neuron. Does this mean you also think it would be harder to make a convincing zombie mouse than a zombie human, and a zombie flatworm harder still? The task is to replace all the components of a neuron with artificial components so that the neuron behaves just the same. We assume that this is done by aliens with extremely advanced technology. They could, for example, replace every atom in the neuron with exactly the same atom, and the resulting neuron will behave normally and have normal consciousness, even though the aliens only set out to reproduce the behaviour and neither know nor care about human consciousness. As an exercise, the aliens decide to replace the neuronal components with equivalently functioning analogues. For example, the neuron will have a system that is responsible for the timing and amount of neurotransmitter release, and the aliens install in its place a nanoprocessor which does the same job. Suppose the original system was part of the NCC.
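[A minimal sketch of the drop-in idea described here, in Python. All names and numbers below are invented for illustration; real neurotransmitter release is of course not a one-line function. The point is only that a component specified purely by its input/output behaviour can be swapped without anything downstream being able to tell the difference.]

    from typing import Protocol

    class ReleaseMechanism(Protocol):
        def release(self, calcium_level: float) -> float:
            """Map an input signal to an amount of neurotransmitter released."""
            ...

    class OriginalMechanism:
        # Stands in for the biological vesicle machinery.
        def release(self, calcium_level: float) -> float:
            return 0.0 if calcium_level < 0.5 else 1.2 * calcium_level

    class NanoprocessorMechanism:
        # Computes the same input/output mapping on different hardware.
        def release(self, calcium_level: float) -> float:
            return 0.0 if calcium_level < 0.5 else 1.2 * calcium_level

    def neuron_step(mechanism: ReleaseMechanism, calcium_level: float) -> float:
        # The rest of the neuron sees only the returned amount, so either
        # implementation is a drop-in replacement for the other.
        return mechanism.release(calcium_level)

    # Identical external behaviour for any given input:
    assert neuron_step(OriginalMechanism(), 0.8) == neuron_step(NanoprocessorMechanism(), 0.8)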
Are you saying that however hard the aliens try, they won't be able to get the modified neuron to control neurotransmitter release in the same way as the original neuron? If you are, then you are saying that the NCC utilises physics which is NOT COMPUTABLE. It does infinite precision arithmetic, or solves the halting problem, or something. If this is not the case, then the aliens would be able to make computerised neurons which are drop-in replacements for biological neurons; no further adjustment to biological tissue would be needed for the whole brain to behave normally. Note that functionalism can still be true even if the NCC is not computable. Functionalism says that if you replace the NCC with a similarly functioning device, you will preserve the mind as well as the behaviour. If a digital computer can't do the job, another device that functions as a hypercomputer might. For example, a little man pulling levers could be up to the task as his own brain will not suffer from the limitations of a Turing machine. The neuron containing the little man would then behave exactly the same as a biological neuron, and would also have the consciousness of a biological neuron, otherwise we are once more up against the problem of the partial zombie. -- Stathis Papaioannou From stathisp at gmail.com Fri Feb 26 12:22:12 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 26 Feb 2010 23:22:12 +1100 Subject: [ExI] why anger? In-Reply-To: References: <4B8719F2.9070004@satx.rr.com> Message-ID: On 26 February 2010 14:21, spike wrote: > >> ...On Behalf Of Damien Broderick > ... >> >> * NEW SCIENTIST issue 2749. >> * 24 February 2010 >> >> Honesty is the best policy for climate scientists >> >> FOR many environmentalists, all human influence on the planet is bad... > > Ja. This attitude has made me uncomfortable for some time. It seems to > carry a fundamental misunderstanding of the process of evolution. Humans > evolved, so we are natural. Our works are natural. So if we create a world > in which many or most species cannot survive, even accidentally, then that > is as natural a process as the beaver creating a dam, flooding out and > slaying the local endangered species, along with my brand new goddam pump > house. > > If we alter our environment so that most species cannot survive but > humans do just fine (being Africans only recently capable of moving away > from the tropics), that represents an evolutionary process, and is natural. > If we create an environment that (somehow) supports 100 billion humans but > nothing else that does not directly support human life, is that not > evolution? > > Does not that subset of environmentalism recognize that if humans manage to > create a singularity and eventually transform all the metals of the solar > system into an Mbrain or a bunch of Sbrains, this too is a natural product > of evolution? If we then send nanoprobes into the rest of the galaxy to > turn other stars' metals into computronium, destroying all indigenous life > there but facilitating the thinking of pure thought, is that not evolution > in action? To be strictly correct, those who oppose environment-altering engineering have to change the language they use slightly to say that that which nature created prior to the advent of technologically capable humans is good and worth preserving.
-- Stathis Papaioannou From stefano.vaj at gmail.com Fri Feb 26 12:51:38 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 26 Feb 2010 13:51:38 +0100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: References: <552218.15686.qm@web36504.mail.mud.yahoo.com> <580930c21002240315t363c6552mb1fee6b7b70a31e2@mail.gmail.com> <580930c21002240704y63e6d6ddr5ffdd292b3c60404@mail.gmail.com> Message-ID: <580930c21002260451o4c8592f6we7d4663761eca7f8@mail.gmail.com> On 25 February 2010 12:21, Stathis Papaioannou wrote: > There are many examples of non-computable numbers, functions and > problems in mathematics: > > http://en.wikipedia.org/wiki/Computable_number > http://en.wikipedia.org/wiki/Computable_function > http://en.wikipedia.org/wiki/List_of_undecidable_problems Yes. But this has not much to do with the issue of whether a given system may be non-computable in the sense that it may exhibit an output which could not be replicated by another system when performance is not an issue. -- Stefano Vaj From gts_2000 at yahoo.com Fri Feb 26 13:05:52 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 26 Feb 2010 05:05:52 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <62c14241002251816r17095a9cqeab4b5b56a70152a@mail.gmail.com> Message-ID: <631635.87594.qm@web36503.mail.mud.yahoo.com> --- On Thu, 2/25/10, Mike Dougherty wrote: >> It should seem obvious, for example, that thoughts >> don't exist in the absence of brain matter. > > Playing devil's advocate here: Suppose thoughts exist > like radio waves and brains are merely the antenna that tunes them > in. If there is no radio present, do you really believe the radio > signals stop existing? That sounds like some sort of dualistic religion to me. We can suppose all sorts of religious ideas, but I'd rather not. > I'm not arguing for mind/matter dualism. I'm just > arguing for fun. I like your attitude. :) -gts From stathisp at gmail.com Fri Feb 26 13:19:57 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 27 Feb 2010 00:19:57 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <580930c21002260451o4c8592f6we7d4663761eca7f8@mail.gmail.com> References: <552218.15686.qm@web36504.mail.mud.yahoo.com> <580930c21002240315t363c6552mb1fee6b7b70a31e2@mail.gmail.com> <580930c21002240704y63e6d6ddr5ffdd292b3c60404@mail.gmail.com> <580930c21002260451o4c8592f6we7d4663761eca7f8@mail.gmail.com> Message-ID: On 26 February 2010 23:51, Stefano Vaj wrote: > On 25 February 2010 12:21, Stathis Papaioannou wrote: >> There are many examples of non-computable numbers, functions and >> problems in mathematics: >> >> http://en.wikipedia.org/wiki/Computable_number >> http://en.wikipedia.org/wiki/Computable_function >> http://en.wikipedia.org/wiki/List_of_undecidable_problems > > Yes. But this has not much to do with the issue of whether a given > system may be non-computable in the sense that it may exhibit an > output which could not be replicated by another system when > performance is not an issue. It could, if the system to be replicated is a hypercomputer. It could still be replicated by another hypercomputer, but not by a Turing machine. -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Feb 26 12:55:26 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 26 Feb 2010 04:55:26 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <142221.89448.qm@web36507.mail.mud.yahoo.com> --- On Thu, 2/25/10, Spencer Campbell wrote: > Ugh!
> > This is meaningless and misleading. Thoughts aren't even real things. Sorry to hear then that you can't really think. I can really think real thoughts. <- I even typed that one > Look at it this way: thoughts are just perturbations in the > activities of the brain. Material perturbations in brain matter, I would say. If you really alter or eliminate the real brain matter, you will really alter or eliminate the real thoughts. So it looks like thoughts really exist as real material events in the real brain. -gts From gts_2000 at yahoo.com Fri Feb 26 14:39:46 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 26 Feb 2010 06:39:46 -0800 (PST) Subject: [ExI] Is the brain a digital computer? Message-ID: <897978.75613.qm@web36505.mail.mud.yahoo.com> --- On Fri, 2/26/10, Stathis Papaioannou wrote: > You have no problem with the idea that an AI could behave > like a human but you don't think it could behave like a neuron. The Turing test defines (weak) AI and neurons cannot take the Turing test, so I don't know what it means to speak of an AI behaving like a neuron. > The task is to replace all the components of a neuron with > artificial components so that the neuron behaves just the same. If and when we understand how neurons cause consciousness, we will perhaps have it in our power to make the kind of artificial neurons you want. They'll work a lot like biological neurons, and might work exactly like them. We might need effectively to get into the business of manufacturing biological neurons, rendering the distinction between artificial and natural meaningless. > Are you saying that however hard the aliens try, they > won't be able to get the modified neuron to control > neurotransmitter release in the same way as the original neuron? No, I mean that where consciousness is concerned, I don't believe digital computations of its causal mechanisms will do the trick. To understand me here, you need to understand what I wrote a few days ago about the acausal and observer-relative nature of computations. -gts From gts_2000 at yahoo.com Fri Feb 26 16:15:08 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 26 Feb 2010 08:15:08 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <800566.59290.qm@web111216.mail.gq1.yahoo.com> Message-ID: <35041.87142.qm@web36503.mail.mud.yahoo.com> --- On Fri, 2/26/10, Christopher Luebcke wrote: > ...If you pass a high-powered magnet over the hard drive of > my computer, the mass of the hard drive will not change, but the > information will be destroyed... Christopher I do not consider information on a hard drive as constitutive of thought, for the same reason that I do not believe newspapers think about the words printed on them. -gts From spike66 at att.net Fri Feb 26 17:00:17 2010 From: spike66 at att.net (spike) Date: Fri, 26 Feb 2010 09:00:17 -0800 Subject: [ExI] endpoint of evolution: was RE: why anger? In-Reply-To: References: <4B8719F2.9070004@satx.rr.com> Message-ID: <6444DC19A6864907B18CEB56F11F061F@spike> > ...On Behalf Of Stathis Papaioannou ... > > ...If we then send nanoprobes into the > > rest of the galaxy to turn other stars' metals into computronium, > > destroying all indigenous life there but facilitating the thinking of > > pure thought, is that not evolution in action?
> > To be strictly correct, those who oppose environment-altering > engineering have to change the language they use slightly to > say that that which nature created prior to the advent of > technologically capable humans is good and worth preserving. > -- > Stathis Papaioannou OK I am with you on that, but let's look at the endpoint question: What is the ultimate endpoint of evolution? We can imagine a planet with a climate like ours today with about a billion well-fed, well-educated humans and a bunch of areas where humans never go, filled with lots of bugs and other interesting beasts, all in an equilibrium with little change over millennia until the heat death of the universe, forever and ever amen. Many in the environmental movement picture this, but I consider it an unlikely outcome. Rather, I envision a continually changing chaotic system pressing towards (I hope) increasing intelligence with ever improving means of making matter ever more efficient in thinking. Thought experiment: picture the earth as it was for most of its 5 billion year history: a big rock with the only lifeform being blue-green algal mats. If we saw it that way, we might say this is a big waste, there are no thoughts being thunk here. The actual metals (everything that isn't hydrogen or helium) are idle. Multicellular life comes along, Precambrian explosion, dinosaurs, etc, now suddenly right at the endgame, sentient beings show up. From the point of view of an MBrain, the overwhelming majority of the metals on this planet still are unemployed with thought, with the rare exception of a few billion scattered points of proto-thought. My notion of an endpoint of evolution is that these few points of proto-thought create mind children capable of robust thought, and eventually put all the available metals to the task. With a mere few grams or perhaps a few kg of those metals, we could simulate all the interesting currently extant lifeforms, and use other metals to simulate other possible evolutionary paths, demonstrating something about which I have long been fascinated: convergent evolution, as seen in the ratites. Until we get all the metals thinking in the form of computronium, they are going to waste. We need to get on it, since we have a limited time before the heat death of the universe, perhaps as short as a few hundred billion years. Question please: is there any other logical endpoint for evolution besides an MBrain? spike From cluebcke at yahoo.com Fri Feb 26 17:39:19 2010 From: cluebcke at yahoo.com (Chris Luebcke) Date: Fri, 26 Feb 2010 09:39:19 -0800 (PST) Subject: [ExI] Is the brain a digital computer? Message-ID: <618058.65504.qm@web111212.mail.gq1.yahoo.com> And as I keep pointing out, if you can't define what a thought is, then statements about its properties are vacuous. You are also using the term 'thought', representing the activity of thinking, interchangeably with 'thoughts'. Surely you weren't claiming that the activity of thinking has mass, so your response seems to almost willfully misrepresent what I said. I believe that the only workable definition of the noun 'a thought' is, fundamentally, 'a statement', which is certainly information, certainly can be held in media other than brains, and certainly does not have mass. If you have an alternative definition other than the assertion that everybody already knows exactly what you're talking about so you don't really need to say it, I'd love to hear it. Especially if the definition explains how it can have mass.
On Feb 26, 2010, at 8:15 AM, Gordon Swobe wrote: --- On Fri, 2/26/10, Christopher Luebcke wrote: ...If you pass a high-powered magnet over the hard drive of my computer, the mass of the hard drive will not change, but the information will be destroyed... Christopher I do not consider information on a hard drive as constitutive of thought, for the same reason that I do not believe newspapers think about the words printed on them. -gts _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From bbenzai at yahoo.com Fri Feb 26 17:12:55 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 26 Feb 2010 09:12:55 -0800 (PST) Subject: [ExI] Continuity of experience In-Reply-To: Message-ID: <392497.83388.qm@web113615.mail.gq1.yahoo.com> Spencer Campbell wrote: > It doesn't matter whether it lasts a hundred years or a > hundred > seconds. Absolute brain death is *scary*. What about a hundred yoctoseconds? Ben Zaiboc From spike66 at att.net Fri Feb 26 17:28:49 2010 From: spike66 at att.net (spike) Date: Fri, 26 Feb 2010 09:28:49 -0800 Subject: [ExI] endpoint of evolution: was RE: why anger? In-Reply-To: References: <4B8719F2.9070004@satx.rr.com> Message-ID: <4E790BB0F5784683AAC6BAD8FCD13571@spike> > ...On Behalf Of Stathis Papaioannou... > > To be strictly correct, those who oppose environment-altering > engineering have to change the language they use slightly to > say that that which nature created prior to the advent of > technologically capable humans is good and worth preserving. > Stathis Papaioannou Ja Stathis, I agree and wish to get more to your point. By simulating the previously evolved life, the creation of an MBrain is the only evolutionary path that makes sense to me. I don't see how we can create sufficient nature preserves for all of it on a long term basis. We haven't been particularly successful with that approach so far, and the future looks less promising for nature preserves than today. I can easily imagine something analogous to a robust religion of some sort, arising and declaring that pretty much all lifeforms other than sheep or cattle are unclean, and are worthy of only neglect or brutal destruction for instance. Any humans who oppose this course of action would also be considered unclean and worthy of destruction. Humankind appears to be tragically susceptible to these kinds of notions. The scary part is that an MBrain could also be susceptible to these kinds of memefections. spike From jonkc at bellsouth.net Fri Feb 26 17:39:41 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 26 Feb 2010 12:39:41 -0500 Subject: [ExI] How not to make a thought experiment. In-Reply-To: <333352.39982.qm@web36503.mail.mud.yahoo.com> References: <333352.39982.qm@web36503.mail.mud.yahoo.com> Message-ID: <56BCABDE-0760-4526-AA10-2988677B8774@bellsouth.net> Since my last post Gordon Swobe has posted 6 times. > The Turing test defines (weak) AI and neurons cannot take the Turing test, > so I don't know what it means to speak of an AI behaving like a neuron. Swobe thinks it's possible for a computer to behave externally just like you or me, but he thinks it's impossible for a computer to behave externally like a neuron, it can simulate 100 billion things but it can't simulate one thing.
But wait I can hear the Swobe apologists say, he is saying the neuron is doing something (something pointless from Evolution's viewpoint) completely internal to the neuron that generates consciousness, a "something" that is in no way communicated to other neurons. Since in the consciousness realm neurons are completely isolated from each other, it would be interesting if Swobe could explain why there aren't 100 billion independent conscious beings in his head. Or perhaps there are. > If and when we understand how neurons cause consciousness [...] There is no hope of Swobe being satisfied with an explanation of how neurons cause consciousness because he is not interested in what the neurons actually do, so the Scientific Method cannot help him. In fact I don't understand why he assumes neurons or even the brain in general has anything to do with consciousness. The brain clearly has much to do with intelligence but according to Swobe that means nothing. In the Magical Kingdom of Swobeland the key organ of consciousness is just as likely to be the big toe as the brain. > I do not consider information on a hard drive as constitutive of thought, for the same reason that I do not believe newspapers think about the words printed on them. That is a really lousy example. Newspapers are static and a static mind is not a mind, computers are not static. > You argue then for mind/matter dualism. [...] Again you argue for mind/matter dualism. [...] That sounds like some sort of dualistic religion to me. For some strange reason Swobe thinks we should regard with horror something that has two contrasting aspects, but I'll let him in on a little secret, some things can even have three. > If we want to call ourselves materialists (a noble motivation, I think) then it seems to me we must say that thoughts have mass. Christianity has ten commandments so "ten" must have mass, that car is broken so "broken" must have mass, I do not understand what the hell Swobe is talking about so "not" must have mass. > It should seem obvious, for example, that thoughts don't exist in the absence of brain matter. Thoughts need something to think them, grey goo can do that but so could beer cans and toilet paper if you had enough of them and were cunning enough to arrange them just so. If Swobe's intuition rebels against that then that's just too bad because Turing proved it's a fact; and there is no reason to think our intuition should be any good in this matter because as I said before there would be little survival value in being good at it, so Evolution didn't bother. When our ancestors were living on the African savannas they encountered very few beer can and toilet paper computers. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Fri Feb 26 18:19:58 2010 From: sparge at gmail.com (Dave Sill) Date: Fri, 26 Feb 2010 13:19:58 -0500 Subject: [ExI] How not to make a thought experiment. In-Reply-To: <56BCABDE-0760-4526-AA10-2988677B8774@bellsouth.net> References: <333352.39982.qm@web36503.mail.mud.yahoo.com> <56BCABDE-0760-4526-AA10-2988677B8774@bellsouth.net> Message-ID: 2010/2/26 John Clark : > > ... Since in the > consciousness realm neurons are completely isolated from each other, it > would be interesting if Swobe could explain why there aren't 100 billion > independent conscious beings in his head. Or perhaps there are. And this thread won't end until each has had its say--even if they're all in agreement.
On the positive side, we're nearing the halfway point. :-) -Dave From dan_ust at yahoo.com Fri Feb 26 16:44:30 2010 From: dan_ust at yahoo.com (Dan) Date: Fri, 26 Feb 2010 08:44:30 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <631635.87594.qm@web36503.mail.mud.yahoo.com> References: <631635.87594.qm@web36503.mail.mud.yahoo.com> Message-ID: <887732.3482.qm@web30102.mail.mud.yahoo.com> On Fri, February 26, 2010 8:05:52 AM Gordon Swobe gts_2000 at yahoo.com wrote: >--- On Thu, 2/25/10, Mike Dougherty wrote: >>> It should seem obvious, for example, that thoughts >>> don't exist in the absence of brain matter. >> >> Playing devil's advocate here: Suppose thoughts exist >> like radio waves and brains are merely the antenna that tunes them >> in. If there is no radio present, do you really believe the radio >> signals stop existing? > > That sounds like some sort of dualistic religion to me. We can suppose all sorts of religious ideas, but I'd rather not. I'm curious: Is dualism ruled out a priori? Dualism, as such, is not religious. It's merely a different stand than monism or other such stands. This is not to say I'm a dualist, but I don't think one has to stand firm against it on these issues -- though, to be sure, I'm more persuaded by the likes of Jaegwon Kim than by dualists at this time. Regards, Dan From gts_2000 at yahoo.com Fri Feb 26 19:13:45 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 26 Feb 2010 11:13:45 -0800 (PST) Subject: [ExI] Is the brain a digital computer? Message-ID: <506948.96513.qm@web36506.mail.mud.yahoo.com> > I believe that the only workable definition of the noun 'a > thought' is, fundamentally, 'a statement', which is > certainly information, certainly can be held in media other > than brains, and certainly does not have mass. By the noun thought I mean the intentional mental state of a conscious mind. Hard drives and newspapers contain information but they do not have mental states intentional or otherwise, at least not this side of science fiction. On my materialist view, brain matter causes and contains thoughts. And brain matter has mass. So notwithstanding the possible involvement of massless particles, thoughts have mass. -gts From aware at awareresearch.com Fri Feb 26 19:22:32 2010 From: aware at awareresearch.com (Aware) Date: Fri, 26 Feb 2010 11:22:32 -0800 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <887732.3482.qm@web30102.mail.mud.yahoo.com> References: <631635.87594.qm@web36503.mail.mud.yahoo.com> <887732.3482.qm@web30102.mail.mud.yahoo.com> Message-ID: On Fri, Feb 26, 2010 at 8:44 AM, Dan wrote: > This is not to say I'm a dualist, but I don't think one has to stand firm against it on these issues -- though, to be sure, I'm more persuaded by the likes of Jaegwon Kim than by dualists at this time. Dan, I'm not familiar with Jaegwon Kim's work beyond what I got from Wikipedia, but his position appears clearly recognizable as that of the analytic philosopher who sees the incoherence of his retreat to the shallow end of the pool of dualism, but has not yet realized that he can exit the pool with no loss or cost whatsoever. Since you professed some alignment with his position, let me ask you: Have YOU read John Pollack's paper on "nolipsism"? Does it not resolve this matter simply and clearly, except that it doesn't provide the comfort of having an *essential* self that one can grasp? [Note that the one who would grasp is quite real in any and all ways that matter.]
- Jef From cluebcke at yahoo.com Fri Feb 26 19:52:56 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Fri, 26 Feb 2010 11:52:56 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <506948.96513.qm@web36506.mail.mud.yahoo.com> References: <506948.96513.qm@web36506.mail.mud.yahoo.com> Message-ID: <670709.8525.qm@web111211.mail.gq1.yahoo.com> Whether it's

    T is caused by B
    B has the property m

or

    T is contained in B
    B has the property m

you cannot, from either of these cases, deduce that

    T has the property m

You could only deduce that statement if you modified the original statements such that

    T is B

Surely you're not claiming that thoughts are brain matter? ________________________________ From: Gordon Swobe To: ExI chat list Sent: Fri, February 26, 2010 11:13:45 AM Subject: Re: [ExI] Is the brain a digital computer? > I believe that the only workable definition of the noun 'a > thought' is, fundamentally, 'a statement', which is > certainly information, certainly can be held in media other > than brains, and certainly does not have mass. By the noun thought I mean the intentional mental state of a conscious mind. Hard drives and newspapers contain information but they do not have mental states intentional or otherwise, at least not this side of science fiction. On my materialist view, brain matter causes and contains thoughts. And brain matter has mass. So notwithstanding the possible involvement of massless particles, thoughts have mass. -gts _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From x at extropica.org Fri Feb 26 18:51:28 2010 From: x at extropica.org (x at extropica.org) Date: Fri, 26 Feb 2010 10:51:28 -0800 Subject: [ExI] endpoint of evolution: was RE: why anger? In-Reply-To: <6444DC19A6864907B18CEB56F11F061F@spike> References: <4B8719F2.9070004@satx.rr.com> <6444DC19A6864907B18CEB56F11F061F@spike> Message-ID: On Fri, Feb 26, 2010 at 9:00 AM, spike wrote: >> ...On Behalf Of Stathis Papaioannou > ... >> > ...If we then send nanoprobes into the >> > rest of the galaxy to turn other stars' metals into computronium, >> > destroying all indigenous life there but facilitating the thinking of >> > pure thought, is that not evolution in action? >> >> To be strictly correct, those who oppose environment-altering >> engineering have to change the language they use slightly to >> say that that which nature created prior to the advent of >> technologically capable humans is good and worth preserving. >> -- >> Stathis Papaioannou > > > OK I am with you on that, but let's look at the endpoint question: > > What is the ultimate endpoint of evolution? > Question please: is there any other logical endpoint for evolution besides > an MBrain? Jarring to see on this list "endpoint" in reference to evolution, which is not only the only known source of persistent novelty, but is itself evolving. Is this just another example of the reification of agent-centered "intelligence", constraining nearly all transhumanist discussion to anthropocentric imaginings of personal immortality, godlike powers, inexhaustible monkey-hedonism, and protection by benevolent AI nanny?
There was a time when speculation about such things as Kardashev levels, Matrioshka Brains, and evolution as a process of increasing perfection could be excused--those were simpler times and we didn't have the wealth of information now easily accessible on the net. Not very long ago, most of these topics were seen as engineering problems--how to make things bigger, faster, stronger--and the leading paradigm was that of the digital computer. Since then, we (should have) seen the rise of chaos and complexity theory, ubiquitous fractal self-similarity, increasing ephemeralization, "evolution" as merely a special case of free-energy rate density and "intelligence" merely a phase--with increasing awareness that it's not so much about engineering but about information--and the leading paradigm becomes that of an ecology with sustainable, ongoing, meaningful growth. My point: What the hell happened to halt the growth of the Extropy discussion list? I seem to remember that back in the 90s it was pretty much leading edge. -Jef From gts_2000 at yahoo.com Fri Feb 26 20:54:54 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 26 Feb 2010 12:54:54 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <670709.8525.qm@web111211.mail.gq1.yahoo.com> Message-ID: <617665.41447.qm@web36503.mail.mud.yahoo.com> --- On Fri, 2/26/10, Christopher Luebcke wrote: > Surely you're not claiming that thoughts are brain matter? I do claim that conscious thoughts arise as high-level features of brain matter, yes. The idea seems odd only because we don't yet understand the mechanism. I see no alternative aside from dualism. In answer to Dan's post: Dualism is not ruled out a priori, but as you probably know it leads to serious problems. For example if mental phenomena have a reality distinct from material phenomena then by what causal mechanism does the mental realm affect the material realm? What sort of magic happens when, as a result of your mental act of wanting to raise your arm, your arm rises? By modern standards, at least, Descartes failed to provide a plausible answer to this question of how mind affects matter. He posited a theory that the pineal gland in the brain acts as the intermediary. He considered this endocrine gland the "seat of the soul". But I think it just secretes melatonin. -gts From dan_ust at yahoo.com Fri Feb 26 19:46:07 2010 From: dan_ust at yahoo.com (Dan) Date: Fri, 26 Feb 2010 11:46:07 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: References: <631635.87594.qm@web36503.mail.mud.yahoo.com> <887732.3482.qm@web30102.mail.mud.yahoo.com> Message-ID: <603442.81442.qm@web30107.mail.mud.yahoo.com> Maybe I'm getting the wrong view from reading his work, but my view is Kim is a materialist or a "physicalist" -- which seems to me to be a euphemism for materialist. I get this from, e.g., his _Physicalism, Or Something Near Enough_. Regarding Pollack's paper, no, I haven't read it, but is this something like the user illusion view? I'll get the paper... maybe I'm completely off on this. That said, though, I don't think the dualist position is necessarily religious. To me, there are just many different views one can have walking into this issue. Dualism happens to be the view many take, but I don't think they take it for religious reasons -- meaning, they hold it on faith. Rather, I think it's just a default position for many.
Regards, Dan ----- Original Message ---- From: Aware To: ExI chat list Sent: Fri, February 26, 2010 2:22:32 PM Subject: Re: [ExI] Is the brain a digital computer? On Fri, Feb 26, 2010 at 8:44 AM, Dan wrote: > This is not to say I'm a dualist, but I don't think one has to stand firm against it on these issues -- though, to be sure, I'm more persuaded by the likes of Jaegwon Kim than by dualists at this time. Dan, I'm not familiar with Jaegwon Kim's work beyond what I got from Wikipedia, but his position appears clearly recognizable as that of the analytic philosopher who sees the incoherence of his retreat to the shallow end of the pool of dualism, but has not yet realized that he can exit the pool with no loss or cost whatsoever. Since you professed some alignment with his position, let me ask you: Have YOU read John Pollack's paper on "nolipsism"? Does it not resolve this matter simply and clearly, except that it doesn't provide the comfort of having an *essential* self that one can grasp? [Note that the one who would grasp is quite real in any and all ways that matter.] - Jef From stefano.vaj at gmail.com Fri Feb 26 21:27:07 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 26 Feb 2010 22:27:07 +0100 Subject: [ExI] How not to make a thought experiment In-Reply-To: References: <35968.58084.qm@web36504.mail.mud.yahoo.com> <580930c21002240918l4ae1f5b0v7951b0679db6c687@mail.gmail.com> Message-ID: <580930c21002261327r479acc59te67b0a74159ffa98@mail.gmail.com> On 24 February 2010 20:33, spike wrote: >> ...On Behalf Of Stefano Vaj >> 2010/2/24 John Clark : >> > I have no son. >> Mmhhh. Can you prove that? Or should we take you at your word? >> :-) -- Stefano Vaj > > Perhaps he means no sons that he knows of. Why, if he is a philosophical zombie he would not know anything anyway... :-D -- Stefano Vaj From lacertilian at gmail.com Fri Feb 26 21:40:30 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 26 Feb 2010 13:40:30 -0800 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: Stathis Papaioannou : > If I tell you that sneezing destroys M what is your response? Your > response is that that's silly: you sneezed a few minutes ago and > you're still the same person, and your friend next to you also sneezed > (it was a dusty environment) and you aren't in mourning for him either. Actually, no. My response is: I have no possible way to refute that statement. I am perfectly comfortable with absurdity*. It is certainly self-evident that people survive sneezing, but it is certainly not self-evident that M survives sneezing. So we could argue the metaphysics of sneezing for a while, if you want. It *would* be a whole lot sillier than talking about mind-scanning, if only because we've already sneezed at least once each, but it would likely run along a similar course. Any possible conclusion to either would, I suspect, be equally unverifiable. Some arguments carry more logical weight than others, though. Mind-scanning has more variables to grab hold of, what with all the different ways of copying and potentially recombining, so it's more subject to analysis; whether or not such analysis is futile. *That is, self-consistent absurdity. I'd immediately crumble under a paradox** similar to the one you've been repeatedly hitting Gordon over the head with, to no apparent effect, for the last few centuries. **Zombie neurons. Ben Zaiboc : > What about a hundred yoctoseconds? That's... that's almost TWO SEXTILLION planck times!
I shudder to think what would happen to M in a blackout of that duration. From thespike at satx.rr.com Fri Feb 26 21:45:10 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 26 Feb 2010 15:45:10 -0600 Subject: [ExI] endpoint of evolution: was RE: why anger? In-Reply-To: References: <4B8719F2.9070004@satx.rr.com> <6444DC19A6864907B18CEB56F11F061F@spike> Message-ID: <4B8840E6.2070408@satx.rr.com> On 2/26/2010 12:51 PM, Jef wrote: > There was a time when speculation about such things as Kardashev levels, > Matrioshka Brains, and evolution as a process of increasing perfection > could be excused--those were simpler times and we didn't have the > wealth of information now easily accessible on the net. > > Not very long ago, most of these topics were seen as engineering > problems--how to make things bigger, faster, stronger--and the leading > paradigm was that of the digital computer. > > Since then, we (should have) seen the rise of chaos and complexity > theory, ubiquitous fractal self-similarity, increasing > ephemeralization, "evolution" as merely a special case of free-energy > rate density and "intelligence" merely a phase--with increasing > awareness that it's not so much about engineering but about > information--and the leading paradigm becomes that of an ecology with > sustainable, ongoing, meaningful growth. I wonder if you'd find this interesting? "Self-Organised Reality" Prof. Dr. Brian Josephson Cambridge University I shall describe a new approach to modelling reality, synthesising ideas of Steven Rosen, Ilexa Yardley and Stuart Kauffman. Conventionally, physics presumes a specific fundamental mathematical equation, the solutions to which represent all possible realities. The alternative that we discuss is that domains of order progressively self-organise into more comprehensive domains of order, with the longevity of complexes at the various levels being a decisive factor in determining what manifests, as is the case in biology. This is emergent law rather than pre-existent law, and demands a very different kind of thinking to the usual kind. For example, there is no universal description of what is the case but instead many descriptions, corresponding to the variety of effective divisions of the totality into figure and ground. Such descriptions are not merely "in the mind of the scientist" but (again, as is the case in biology) an integral part of nature's processes, while the determination of the nature of space is also an aspect of these processes. These ideas have clear implications for parapsychology. [For Dr. Josephson's powerpoint click here: ] From lacertilian at gmail.com Fri Feb 26 21:52:20 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 26 Feb 2010 13:52:20 -0800 Subject: [ExI] endpoint of evolution: was RE: why anger? In-Reply-To: <6444DC19A6864907B18CEB56F11F061F@spike> References: <4B8719F2.9070004@satx.rr.com> <6444DC19A6864907B18CEB56F11F061F@spike> Message-ID: spike : > Question please: is there any other logical endpoint for evolution besides an MBrain? Sure. Those little suckers are microbes, less than microbes, infinitely less than microbes, compared to the Omega Point. http://en.wikipedia.org/wiki/Omega_Point_(Tipler) I haven't done much investigation into the topic, and I certainly haven't double-checked Tipler's physics in any systematic way. Draw what conclusions you will.
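[For what it's worth, Spencer's "two sextillion" figure above checks out. A quick back-of-the-envelope in Python, taking the standard Planck time value of about 5.39e-44 seconds:]

    # 1 yoctosecond = 1e-24 s; Planck time is roughly 5.39e-44 s.
    yoctosecond = 1e-24
    planck_time = 5.39e-44
    blackout = 100 * yoctosecond  # the hundred-yoctosecond blackout
    print(blackout / planck_time)  # ~1.86e21, i.e. almost two sextillion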
From cluebcke at yahoo.com Fri Feb 26 21:38:18 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Fri, 26 Feb 2010 13:38:18 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <617665.41447.qm@web36503.mail.mud.yahoo.com> References: <617665.41447.qm@web36503.mail.mud.yahoo.com> Message-ID: <184494.20029.qm@web111208.mail.gq1.yahoo.com> I've already argued for thinking as an emergent property, but to assume that since the brain is made of mass that any of its properties must also have mass is absurd. Mass is a noun. Thinking is a verb. The brain is greyish. Greyish does not have mass. ________________________________ From: Gordon Swobe To: ExI chat list Sent: Fri, February 26, 2010 12:54:54 PM Subject: Re: [ExI] Is the brain a digital computer? --- On Fri, 2/26/10, Christopher Luebcke wrote: > Surely you're not claiming that thoughts are brain matter? I do claim that conscious thoughts arise as high-level features of brain matter, yes. The idea seems odd only because we don't yet understand the mechanism. I see no alternative aside from dualism. In answer to Dan's post: Dualism is not ruled out a priori, but as you probably know it leads to serious problems. For example if mental phenomena have a reality distinct from material phenomena then by what causal mechanism does the mental realm affect the material realm? What sort of magic happens when, as a result of your mental act of wanting to raise your arm, your arm rises? By modern standards, at least, Descartes failed to provide a plausible answer to this question of how mind affects matter. He posited a theory that the pineal gland in the brain acts as the intermediary. He considered this endocrine gland the "seat of the soul". But I think it just secretes melatonin. -gts _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Fri Feb 26 22:06:33 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 26 Feb 2010 14:06:33 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <184494.20029.qm@web111208.mail.gq1.yahoo.com> Message-ID: <761925.74130.qm@web36502.mail.mud.yahoo.com> --- On Fri, 2/26/10, Christopher Luebcke wrote: > I've already argued for thinking as an emergent property Yes, and I liked your analogy. You argued that conscious thought arises as an emergent property like the surface tension of water. I've made similar analogies here in which I compared consciousness to the frozen state of water. I like your analogy but I like mine better: the brain enters a state of consciousness like water enters a state of solidity. >, but to assume that since the brain is made of mass that any of its > properties must also have mass is absurd. I wonder now if you want to defend property dualism. I reject both substance and property dualism. On my view, at time t when the brain exists in conscious state s, the conscious thought at t exists in/as a particular configuration of brain matter. That configuration of brain matter at t has mass. -gts From avantguardian2020 at yahoo.com Fri Feb 26 22:27:41 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Fri, 26 Feb 2010 14:27:41 -0800 (PST) Subject: [ExI] Is the brain a digital computer? 
Message-ID: <834002.74785.qm@web65615.mail.ac4.yahoo.com> ----- Original Message ---- > From: Dan > To: ExI chat list > Sent: Fri, February 26, 2010 11:46:07 AM > Subject: Re: [ExI] Is the brain a digital computer? > > Maybe I'm getting the wrong view from reading his work, but my view is Kim is a > materialist or a "physicalist" -- which seems to me to be a euphemism for > materialist. I get this from, e.g., his _Physicalism, Or Something Near > Enough_. I think that physicalism is a bit more robust than simple materialism. Physicalism allows for all of physics to come to play in the philosophy of mind. This includes concepts like energy, information, and entropy. Things that are distinctly not matter. So in a way physicalism is more supportive of a dualistic worldview, especially when QM is concerned, since a non-superstitious irreducible dualism is at the heart of the measurement problem. Of the prevailing opinions on the matter, either an observation by a mind collapses a non-material wavefunction *or* an observation by a mind creates whole new universes out of nothing. That said, though, I don't think the dualist > position is necessarily religious. To me, there are just many different views > one can have walking into this issue. Dualism happens to be the view many take, > but I don't think they take it for religious reasons -- meaning, they hold it on > faith. Rather, I think it's just a default position for > many. I don't think dualism is necessarily religious either. Indeed I don't see how computationalism can avoid being a dualist philosophy. I mean computers have hardware and software and those distinctions are every bit as dualist as body and mind. Stuart LaForge "Never express yourself more clearly than you think." - Niels Bohr From dan_ust at yahoo.com Fri Feb 26 22:19:18 2010 From: dan_ust at yahoo.com (Dan) Date: Fri, 26 Feb 2010 14:19:18 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <761925.74130.qm@web36502.mail.mud.yahoo.com> References: <761925.74130.qm@web36502.mail.mud.yahoo.com> Message-ID: <258239.2264.qm@web30106.mail.mud.yahoo.com> This is like arguing that the letter Z has a specific mass and then claiming anyone who disagrees is a dualist of some sort -- whether of property, of substance, or of whatever. Regards, Dan ----- Original Message ---- From: Gordon Swobe To: ExI chat list Sent: Fri, February 26, 2010 5:06:33 PM Subject: Re: [ExI] Is the brain a digital computer? --- On Fri, 2/26/10, Christopher Luebcke wrote: > I've already argued for thinking as an emergent property Yes, and I liked your analogy. You argued that conscious thought arises as an emergent property like the surface tension of water. I've made similar analogies here in which I compared consciousness to the frozen state of water. I like your analogy but I like mine better: the brain enters a state of consciousness like water enters a state of solidity. >, but to assume that since the brain is made of mass that any of its > properties must also have mass is absurd. I wonder now if you want to defend property dualism. I reject both substance and property dualism. On my view, at time t when the brain exists in conscious state s, the conscious thought at t exists in/as a particular configuration of brain matter. That configuration of brain matter at t has mass. -gts From gts_2000 at yahoo.com Fri Feb 26 23:07:21 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 26 Feb 2010 15:07:21 -0800 (PST) Subject: [ExI] Is the brain a digital computer?
In-Reply-To: <834002.74785.qm@web65615.mail.ac4.yahoo.com> Message-ID: <557085.7003.qm@web36503.mail.mud.yahoo.com> --- On Fri, 2/26/10, The Avantguardian wrote: > Indeed I don't see how computationalism can avoid being > a?dualist philosophy. I mean computers have hardware and > software and those distinctions?are every bit as dualist as > body and mind. Exactly. Strong AI on s/h systems = true iff dualism = true. -gts From cluebcke at yahoo.com Fri Feb 26 23:57:18 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Fri, 26 Feb 2010 15:57:18 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <761925.74130.qm@web36502.mail.mud.yahoo.com> References: <761925.74130.qm@web36502.mail.mud.yahoo.com> Message-ID: <700118.79567.qm@web111208.mail.gq1.yahoo.com> Not dualism in the sense of some ineffable, non-interactive properties. But in the sense that certain properties pertain to information, or motion, or activity, but not necessarily to matter. Spreadsheets aren't magnetic (or, when projected, made of photons). Surface tension isn't wet. Thinking isn't grey and squishy. ________________________________ From: Gordon Swobe To: ExI chat list Sent: Fri, February 26, 2010 2:06:33 PM Subject: Re: [ExI] Is the brain a digital computer? --- On Fri, 2/26/10, Christopher Luebcke wrote: > I've already argued for thinking as an emergent property Yes, and I liked your analogy. You argued that conscious thought arises as an emergent property like the surface tension of water. I've made similar analogies here in which I compared consciousness to the frozen state of water. I like your analogy but I like mine better: the brain enters a state of consciousness like water enters a state of solidity. >, but to assume that since the brain is made of mass that any of its > properties must also have mass is absurd. I wonder now if you want to defend property dualism. I reject both substance and property dualism. On my view, at time t when the brain exists in conscious state s, the conscious thought at t exists in/as a particular configuration of brain matter. That configuration of brain matter at t has mass. -gts _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From lacertilian at gmail.com Fri Feb 26 23:59:41 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 26 Feb 2010 15:59:41 -0800 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <142221.89448.qm@web36507.mail.mud.yahoo.com> References: <142221.89448.qm@web36507.mail.mud.yahoo.com> Message-ID: Yeah, this is going to be a long one. Gordon Swobe : > Spencer Campbell : >> This is meaningless and misleading. Thoughts aren't even real things. > > Sorry to hear then that you can't really think. > > I can really think real thoughts. <- I even typed that one Well of course I can really think, but I don't really have real* thoughts. I imagine I have thoughts. Everyone does. My thoughts are imaginary. In a world where thoughts were real, we would be able to build devices capable of extracting thoughts mechanically. Here and now, we can only extract thoughts from people informatically. As you have just successfully done with me. * I am using my idiosyncratic definitions again here. Thoughts exist, but are not real. 
I have not yet figured out a more sensible terminology for the distinction I have in mind, but it's the same thing that causes Searle to call computations observer-dependent or whatever term he used. Gordon Swobe : > Material perturbations in brain matter, I would say. If you really alter or eliminate the real brain matter, you will really alter or eliminate the real thoughts. So it looks like thoughts really exist as real material events in the real brain. "Material event" almost seems like a contradiction in terms, but okay. I will play this game. No, if you alter or eliminate brain matter, real or imaginary, you will NOT alter or eliminate thoughts, real or imaginary. This would only be the case if thoughts were stored in the brain (or the mind, if you prefer). They are not. Thoughts aren't *stored* anywhere, they just unpredictably pop in and out of existence in a vaguely-defined space. Thoughts are events, not objects; taking the view that they exist as discrete things to begin with, such as in the manner of a quasiparticle, you are still forced to conclude that they are ever-changing. So it's nonsense to talk about "altering" thoughts. They alter themselves a whole lot faster than you could ever hope to through crude brain surgery. I could just as easily make the case that it's also nonsense to talk about "eliminating" a discrete thought, but that's fuzzier territory; obviously we can eliminate a whole lot of thoughts at once, using nothing more sophisticated than a crowbar and a swift swinging motion. But, that is really more like preventing new thoughts from occurring. The thoughts present at the moment of impact would have been long gone anyway by the time consciousness fades. Chris Luebcke : > I believe that the only workable definition of the noun 'a thought' is, fundamentally, 'a statement', which is certainly information, certainly can be held in media other than brains, and certainly does not have mass. I was using Gordon's definition here, namely: "at time t when the brain exists in conscious state s, the conscious thought at t exists in/as a particular configuration of brain matter" I like yours a lot better. It implies that thoughts can be recorded quite easily, which says something very interesting about the following two thoughts: "I can really think real thoughts. <- I even typed that one" and: "I do not consider information on a hard drive as constitutive of thought, for the same reason that I do not believe newspapers think about the words printed on them." So an email message can contain thoughts, but newspapers can't. And hard drives can't either, in spite of the fact they can contain emails (which, of course, can). This does not make sense. Gordon's formulation equates thinking with "having thoughts", but the word "have" is surprisingly ambiguous. It may either mean to possess (I have a computer) or to contain (my computer has files) or even simply to be attached to (I have arms, my computer has a keyboard). Gordon considers thoughts as configurations of brain matter*, which implies that we can, in principle, literally print thoughts into the brain; just as we can with a hard drive or a newspaper. Now, I imagine when he reads this he will instantly decide to accuse me of some kind of homunculus fallacy since the words on a newspaper only mean something when we interpret them to. A thought printed on neurons generates semantics from within! How? By interacting with the rest of the brain, of course. 
Yet, thoughts printed on a hard drive can surely interact with the rest of the hard drive. It isn't as obvious that this is so, and in fact it is not necessarily always so. I'm assuming a certain sort of program is being run on the computer in question. Probably the easiest to imagine would simply turn the whole hard drive into one big two-dimensional cellular automaton, so that it is clear that every piece of information stored on it is constantly interacting (at least indirectly) with every other. (A toy sketch of such an automaton appears further down.) *Incidentally, in addition to supporting an internally inconsistent view, I'm pretty certain that this is just flat out wrong in and of itself. Thoughts aren't physical, local structures that you can excise at will. I see now why Gordon thinks they have mass; he sees them as literal clumps of grey goo, tiny local substructures in the greater system. That actually makes good intuitive sense, and it's pretty close to how things work in a hard drive, but it shares little in common with observable reality. Punky should make this pretty obvious. Consider "I must eat worms" to be a thought, and you'll see what I mean. http://www.indiana.edu/~pietsch/shufflebrain.html (Compliments of Jeff Davis, Extropy-Chat, circa February 13th) Gordon Swobe : > The Turing test defines (weak) AI and neurons cannot take the Turing test, > so I don't know what it means to speak of an AI behaving like a neuron. Blue Brain. http://en.wikipedia.org/wiki/Blue_Brain_Project If you prefer, you could just put a digital computer inside a robotic neuron. I think this is more like what Stathis is hinting at, and it's why I asked you once upon a time whether or not you considered the Internet itself to be one large digital computer. The logic goes like this: two computers, networked together, constitute a single computer. This means that any number of networked computers can be considered one computer. Say you built a brain out of synthetic computerized neurons (SCNs), whose apparent structure matched an ordinary brain but whose low-level behavior was dictated programmatically (that is, each neuron is operating according to a copy of the same program). If those SCNs are computers, in spite of the fact they have little artificial axons and such, then the whole brain is a computer. So, Gordon, I have a humble request. A plea, really. I actually do feel pangs of guilt whenever I single you out like this, but it's pretty much unavoidable. Please visualize, as precisely as you possibly can, just what an SCN is. Don't just apply labels to it, calling it this or that, deciding on the fly what properties it has. No. Think about what it would literally look like, what would be involved in its construction and its programming. Only then, after you've gotten a firm grasp on the concrete technology involved, would I like to hear whether you believe (a) that an SCN is a digital computer or (b) that an SCN is theoretically capable of perfectly replicating the external behavior of a naturally-grown neuron (NGN). There. Maybe *this* repackaging of the same old idea will get through! Christopher Luebcke : > Whether it's > T is caused by B > B has the property m > or > T is contained in B > B has the property m > You cannot, from either of these cases, deduce that > T has the property m > You could only deduce that statement if you modified the original statements such that > T is B You beat me to it, and did a better job than I would have to boot! Magnificent. Sadly, it does not look like the kind of thing Gordon responds to.
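Here is the toy automaton sketch promised above -- a minimal illustration only, with Conway's Life rule standing in (my arbitrary choice) for whatever update rule you like:

  import random

  SIZE = 16  # a comically small "hard drive"

  def step(grid):
      # Each cell's next state depends on its eight neighbours (wrapping
      # at the edges), so every stored bit eventually influences every
      # other bit, at least indirectly.
      nxt = [[0] * SIZE for _ in range(SIZE)]
      for y in range(SIZE):
          for x in range(SIZE):
              n = sum(grid[(y + dy) % SIZE][(x + dx) % SIZE]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0))
              nxt[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
      return nxt

  grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
  for _ in range(10):  # ten update cycles
      grid = step(grid)

Nothing deep about the rule itself; the point is only that the whole drive becomes one continuously interacting system.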
Indeed, he merely cherry-picked the very last sentence, and twisted the meaning rather egregiously in the process: Christopher: "Surely you're not claiming that thoughts are brain matter?" Gordon: "I do claim that conscious thoughts arise as high-level features of brain matter, yes." The breakdown in communication is, I think, self-evident. What to do about it? From steinberg.will at gmail.com Sat Feb 27 00:24:31 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Fri, 26 Feb 2010 19:24:31 -0500 Subject: [ExI] intelligence, coherence, the frontal lobe, quantum evolution Message-ID: <4e3a29501002261624l7c712bep744ae88e4a366f57@mail.gmail.com> http://arxiv.org/abs/1001.5108 Someone posted that a few weeks ago, quantum mechanics in photosynthesis. This opens the door for what should be apparent anyway--that systems in organisms can make use of quantum effects. Not a surprise, given that an organism's success is based on an ability to work efficiently in the realm of reality. Making use of widespread physical systems (properties of water, charge gradients, etc.) is the key to survival. Chance can only ever mutate systems within the bounds of physical reality, so success is working the best with what's been given. Intelligence can broadly be viewed as a better ability to predict success or failure. Intelligent people can solve math problems more easily, because their notions towards what direction to proceed in solving the problems are more honed. By choosing many correct steps in succession, the problem is solved. The same goes for social situations, where the socially intelligent can make choices that will benefit themselves. Better predictive analysis leads to a higher chance of actions working. The dual parts of intellect are composed of the speed with which these problems are solved and the actual ability to conceive of the steps involved. There are many species who can perform experimentation yet do not seem by any means intelligent. Flies can remember that a source causes damage by testing it once. This experimentation seems to have arisen well before classically intelligent systems. Many, many animals can exhibit learned behaviors by this method. This is correlated to the ability to store a specific state of being (smelling x, seeing y) in reference to taxes. If something was "bad," exhibit negative taxis wrt the source; if good, positive. Perhaps this lies in the hippocampus and homologous structures, found across the animal kingdom. But intelligent creatures are able to make decisions without having directly experienced them. Instead of learning only inputs and outputs, an intelligent creature can create general rules and somehow apply them to find efficient solutions. This suggests that something more than mere i/o is occurring. Prefrontal cortex activity is strongly associated with decision making. Maybe, through advanced neocortical sensory methods (more information), advanced hippocampal memory methods (efficient storage of such information), and the ability of the frontal lobe to infer general rules from these baser concepts, animals with highly evolved portions of all three or similar structures to them have advanced intellect. I cannot think right now of how exactly rules are learned--how by testing a few similar situations, the function relating them can be culled.
But even without a method of acquiring them, I think it can be agreed upon that intelligent creatures take advantage of "knowing the rules," with *conscious* creatures even being able to state their knowledge of these rules (a human can state the rule: "saying please and thank you is beneficial in social situations.") Frontal lobe/prefrontal cortex damage causes loss of sociability and fluid intelligence--forgetting the rules of spontaneous decision making. Testing outcomes of rules would be akin to solving problems by brute force. When you try and think of a solution to something spur-of-the-moment, you don't generally go "Well, this would work a little, so I'll alter this part a bit, and then it would work a bit better, and then I'll change this part, and--oh, no, that one was no good. Better try a different one." Simple observation of our own thoughts shows that intelligent intuition is largely a quick, seemingly one-and-done process. Quantum effects fit in nicely. Plants use coherence to find the most efficient path for energy. With evidence that coherence is a property that can certainly be harnessed, we can perhaps give it greater weight in consideration for other biologies. Now consider the frontal lobe as an organized system of these rules, which have been formed meta-algorithmically from starting information based on actions. The brain uses electron-delocalized neural pathways and exhibits coherence across them, effectively sending impulses through multiple orders of rule choices at once. When pathways coincide, the electron recoheres and begins to spread out its wavefunction again across neurons, leading to spikes in EM activity. The interesting part is that once a split function returns to unity, that IS the position of the particle. So--the fastest, most efficient pathway has its recombination occur *first*, which means that, as these processes happen over and over, the most efficient pathway overall is chosen. At the end of the line, once an efficient pathway has been found to bridge a problem and a solution, the brain can analyze the path taken and reform it into perceptions of actions, which are subsequently performed. When thinking * does* occur in steps, we are given a glimpse of this information as it returns to a crystallized form, perhaps when the new process itself is being cemented into the rule-base. Then this new rule can be applied until we need to crystallize again. So our thought processes when doing proofs represent actual quantum leaps of information--we jump to something, store it, then use coherence again to engage in more jumps. I believe this method could bear a strong resemblance to actual neural processes. It would also show something very interesting: organisms that utilize quantum effects are the kings of this planet. The organisms which can organize a large effect per organism on this earth are animals and plants. Uncomplicated, microscopic organisms are simply too small to produce large causal effects, and fungi, absorbing energy in already-chemical forms, were perhaps never able to develop quantum effects because of not directly encountering wavelike objects. Plants' incredibly efficient use of energy through quantum mechanics has allowed them to become extremely large and extremely prominent. And animal use again leads to increased physical prominence. Perhaps, in terms of generalized efficiency relative to neighboring organisms, there is a ceiling to practical efficiency without using quantum effects. 
The organism would simply be too large or complicated to exist. At a point, the macroscopic problems associated with higher levels of existence are too daunting for classical systems. It is true enough for other problems--circulatory systems began to exist when multi-cell delivery was needed. Why not have quantum systems begin to exist when the complexity of decisions became unfeasible to manage? I am almost certain that processes akin to these occur in the brain. Close analysis of mental structures related to intelligence is paramount to their discovery. And their discovery is absolutely necessary should we hope to ever alter or transmit our minds directly. Are there wave patterns related to quantum coherence? If these patterns were seen in EEG during intelligent activity, maybe they could imply quantum effects taking place. Interesting to note that certain EEG waves will only manifest at specific milestones in development related to problem solving. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sat Feb 27 00:27:20 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 27 Feb 2010 11:27:20 +1100 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: On 27 February 2010 08:40, Spencer Campbell wrote: > Stathis Papaioannou : >> If I tell you that sneezing destroys M what is your response? Your >> response is that that's silly: you sneezed a few minutes ago and >> you're still the same person, and your friend next to you also sneezed >> (it was a dusty environment) and you aren't in mourning for him either. > > Actually, no. My response is: I have no possible way to refute that > statement. I am perfectly comfortable with absurdity*. It is certainly > self-evident that people survive sneezing, but it is certainly not > self-evident that M survives sneezing. There is a vast number of possible statements like this, but we all completely ignore them as meaningless (in the sense of the logical positivists: though coherent, they are neither analytically true nor empirically true). Are we doing the wrong thing? > So we could argue the metaphysics of sneezing for a while, if you > want. It *would* be a whole lot sillier than talking about > mind-scanning, if only because we've already sneezed at least once > each, but it would likely run along a similar course. > > Any possible conclusion to either would, I suspect, be equally > unverifiable. Some arguments carry more logical weight than others, > though. Mind-scanning has more variables to grab hold of, what with > all the different ways of copying and potentially recombining, so it's > more subject to analysis; whether or not such analysis is futile. There are a lot of differences between mind scanning and sneezing, but since nothing we could learn about the world makes any difference to what we can know about M, none of these differences are relevant in the discussion. -- Stathis Papaioannou From bbenzai at yahoo.com Sat Feb 27 00:03:46 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 26 Feb 2010 16:03:46 -0800 (PST) Subject: [ExI] Continuity of experience In-Reply-To: Message-ID: <98537.88278.qm@web113618.mail.gq1.yahoo.com> Spencer Campbell's inconceivably ancient ancestor wrote: > Ben Zaiboc : > > What about a hundred yoctoseconds? > > That's... that's almost TWO SEXTILLION Planck times! > > I shudder to think what would happen to M in a blackout of > that duration. OK, what about one Planck time? (Geez, 10^-24 s isn't short enough for you?)
You see the point, I'm sure. Everyone's mind is flickering like a movie, no harm comes to it in the intervals between the flickers, so why should any harm come to it in any interval of any size, as long as all the information is preserved and reinstated? Or is your objection based on the idea that any conceivable scanning process has to take time, and the brain would change state (probably many times) in-between the beginning and the end of the scan, so any reconstituted brain using that information would not necessarily work properly? Would you be happy with a simultaneous scan of all brain areas, if that could be achieved? I can think of a few solutions to the non-simultaneous scan problem, but maybe that's not your concern anyway? Ben Zaiboc From bbenzai at yahoo.com Sat Feb 27 00:07:09 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 26 Feb 2010 16:07:09 -0800 (PST) Subject: [ExI] How not to make a thought experiment In-Reply-To: Message-ID: <304932.217.qm@web113619.mail.gq1.yahoo.com> Dave Sill opined: > And this thread won't end until each has had its say--even > if they're > all in agreement. > > On the positive side, we're nearing the halfway point. :-) Yeah, the Zeno halfway point! Ben Zaiboc From bbenzai at yahoo.com Sat Feb 27 00:19:37 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 26 Feb 2010 16:19:37 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <654436.18871.qm@web113601.mail.gq1.yahoo.com> Gordon Swobe wrote: > --- On Fri, 2/26/10, Christopher Luebcke > wrote: > > > Surely you're not claiming that thoughts are brain > matter? > > I do claim that conscious thoughts arise as high-level > features of brain > matter, yes. The idea seems odd only because we don't yet > understand > the mechanism. Ahem. Beer cans & toilet paper. Pot. Kettle. Black. You can't claim that someone else's odd idea with a not-yet-understood mechanism is absurd-therefore-wrong, while maintaining that your own odd idea with a not-yet-understood mechanism is perfectly reasonable. This is generally known as 'inconsistent', and earns no brownie points whatever. Ben Zaiboc From sparge at gmail.com Sat Feb 27 00:50:59 2010 From: sparge at gmail.com (Dave Sill) Date: Fri, 26 Feb 2010 19:50:59 -0500 Subject: [ExI] Is the brain a digital computer? In-Reply-To: References: <142221.89448.qm@web36507.mail.mud.yahoo.com> Message-ID: On Fri, Feb 26, 2010 at 6:59 PM, Spencer Campbell wrote: > > The breakdown in communication is, I think, self-evident. What to do about it? Stop trying? I know it's hard, and I've sworn this thread off a couple times, but this time I'm serious. I'm done. -Dave From lacertilian at gmail.com Sat Feb 27 00:56:07 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 26 Feb 2010 16:56:07 -0800 Subject: [ExI] Is the brain a digital computer? In-Reply-To: References: <142221.89448.qm@web36507.mail.mud.yahoo.com> Message-ID: Dave Sill : > Stop trying? I know it's hard, and I've sworn this thread off a couple > times, but this time I'm serious. I'm done. Probably a good idea. I may be joining you soon enough. I seem to have started another interminable discussion (continuity of experience) to occupy my time with, anyway.
From lacertilian at gmail.com Sat Feb 27 01:28:54 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 26 Feb 2010 17:28:54 -0800 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: Stathis Papaioannou : > There is a vast number of possible statements like this, but we all > completely ignore them as meaningless (in the sense of the logical > positivists: though coherent, they are neither analytically true nor > empirically true). Are we doing the wrong thing? Depends on what you want to accomplish. SUBJECTIVIST ZING Stathis Papaioannou : > There are a lot of differences between mind scanning and sneezing, but > since nothing we could learn about the world makes any difference to > what we can know about M, none of these differences are relevant in > the discussion. Assuming there are no supernatural-paranormal events in the universe, as I have thus far, yes. That would be the only way that M could conceivably have an effect on the physical world. Almost by definition, really. One difference that I would say makes a difference anyway, between sneezing and scanning, is that scanning obviously implies that some kind of process might take place to result in two instances of M. Which, the way I'm formulating it, is an axiomatic impossibility. Sneezing, no problem. You could say that whenever I sneeze, my consciousness doubles. Now I have two M in one body. But there's no obvious *reason* to say that, except maybe to be argumentative. Basically I am starting with the premise that M presents a logic problem by its very nature. The easiest way out of the problem is nolipsism: M doesn't mean anything anyway, so there was never a problem to begin with. It's such an interesting problem, though! It would be a waste to trash it so quickly. Ben Zaiboc : > Spencer Campbell's inconceivably ancient ancestor wrote: >> That's... that's almost TWO SEXTILLION Planck times! > > OK, what about one Planck time? (Geez, 10^-24 s isn't short enough for you?) I haven't seen much to convince me that time is quantized, so I could just divide by 2*10^21 again. We could be here forever. Ben Zaiboc : > You see the point, I'm sure. Everyone's mind is flickering like a movie, no harm comes to it in the intervals between the flickers, so why should any harm come to it in any interval of any size, as long as all the information is preserved and reinstated? Yeah, I see the point. It's a fine point. It gives me pause. But, I am not sure that my mind flickers in the way you're implying it flickers. Certainly I have "more" mind at some times than at other times, and sometimes I have so little mind that I appear to have none, but I don't lose sleep over it as long as there is always at least the most faint wisp of mentality remaining. You can pick whatever time scale you want. Find a definite period during which I have had *absolutely no mind*, and you win. I don't think such a period is physically possible short of traditionally-irreversible death. Ben Zaiboc : > Or is your objection based on the idea that any conceivable scanning process has to take time, and the brain would change state (probably many times) in-between the beginning and the end of the scan, so any reconstituted brain using that information would not necessarily work properly? Would you be happy with a simultaneous scan of all brain areas, if that could be achieved? > > I can think of a few solutions to the non-simultaneous scan problem, but maybe that's not your concern anyway? Yeah, not my concern.
I'm taking it for granted that my mind can be replicated with arbitrary accuracy, easily exceeding the faithfulness of my mind-of-today to my mind-of-yesterday. The point to refute is that there isn't any gap between those two minds, not even a little one, whereas there is a very clear gap in the case of plasticization and recovery. A plasticized brain is truly and inarguably dead, unlike, say, the brain of someone whose heart has recently stopped. That is basically the essence of my concern. From stathisp at gmail.com Sat Feb 27 03:53:22 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 27 Feb 2010 14:53:22 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <897978.75613.qm@web36505.mail.mud.yahoo.com> References: <897978.75613.qm@web36505.mail.mud.yahoo.com> Message-ID: On 27 February 2010 01:39, Gordon Swobe wrote: > --- On Fri, 2/26/10, Stathis Papaioannou wrote: > >> You have no problem with the idea that an AI could behave >> like a human but you don't think it could behave like a neuron. > > The Turing test defines (weak) AI and neurons cannot take the Turing test, > so I don't know what it means to speak of an AI behaving like a neuron. I'm mystified by this response of yours. Yes, the TT involves a machine talking to humans and trying to convince them that it has a mind. But surely you can see that this test was proposed because language is taken to be one of the most difficult things for a machine to pull off? It is usually taken as given that if a philosophical zombie can trick a human with its lively conversation it won't then go and give itself away with its blank stare and inability to walk without arms and legs fully extended. The controversial question is whether it is possible for an AI which is not conscious to behave as if it is conscious. If you agree that an AI can do this, then you should agree that it can copy all of the behaviour of a conscious entity, both that which is dependent on consciousness and that which is not. Thus the AI should be able to behave like a human, a flatworm, an amoeba or a neuron. >> The task is to replace all the components of a neuron with >> artificial components so that the neuron behaves just the same. > > If and when we understand how neurons cause consciousness, we will perhaps > have it on our power to make the kind of artificial neurons you want. > They'll work a lot like biological neurons, and might work exactly like > them. We might need effectively to get into the business of manufacturing > biological neurons, rendering the distinction between artificial and > natural meaningless. We don't necessarily need to understand anything about consciousness or cognition in order to do this. The extreme example is to copy a neuron atom for atom: it will function exactly the same as the original, including consciousness, even if the alien engineers are convinced that human brains are too primitive to be conscious. >> Are you saying that however hard the aliens try, they >> won't be able to get the modified neuron to control >> neurotransmitter release in the same way as the original neuron? > > No, I mean that where consciousness is concerned, I don't believe digital > computations of its causal mechanisms will do the trick. To > understand me here, you need to understand what I wrote a few days ago > about the acausal and observer-relative nature of computations. So you *are* saying that the aliens will fail to copy the behaviour of a neuron if they use computational mechanisms. 
They may be able to get neurotransmitter release right - that was just an example - but there will be some other function the NCC performs that affects the neuron's behaviour, which they just won't be able to reproduce, no matter how advanced their computers. The modified neuron will, on close examination, deviate from the behaviour of the original neuron, and if installed in the brain the brain's behaviour and hence the person's behaviour will also be different. The aliens will conclude, while still suspecting nothing about human consciousness, that the neuron is not Turing emulable, and they will have to use a hypercomputer if they want to copy its behaviour. -- Stathis Papaioannou From stathisp at gmail.com Sat Feb 27 04:08:47 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 27 Feb 2010 15:08:47 +1100 Subject: [ExI] endpoint of evolution: was RE: why anger? In-Reply-To: <6444DC19A6864907B18CEB56F11F061F@spike> References: <4B8719F2.9070004@satx.rr.com> <6444DC19A6864907B18CEB56F11F061F@spike> Message-ID: On 27 February 2010 04:00, spike wrote: > What is the ultimate endpoint of evolution? > > We can imagine a planet with a climate like ours today with about a billion > well-fed well-educated humans and a bunch of areas where humans never go, > filled with lots of bugs and other interesting beasts, all in an equilibrium > with little change over millennia until the heat death of the universe, > forever and ever amen. > > Many in the environmental movement picture this, but I consider it an > unlikely outcome. Rather, I envision a continually changing chaotic system > pressing towards (I hope) increasing intelligence with ever improving means > of making matter ever more efficient in thinking. > > Thought experiment: picture the earth as it was for most of its 5 billion > year history: a big rock with the only lifeform being blue-green algal mats. > If we saw it that way, we might say this is a big waste, there are no > thoughts being thunk here. The actual metals (everything that isn't hydrogen > or helium) are idle. Multicellular life comes along, Cambrian > explosion, dinosaurs, etc, now suddenly right at the endgame, sentient > beings show up. > > From the point of view of an MBrain, the overwhelming majority of the metals > on this planet still are unemployed with thought, with the rare exception of > a few billion scattered points of proto-thought. > > My notion of an endpoint of evolution is that these few points of > proto-thought create mind children capable of robust thought, and eventually > put all the available metals to the task. With a mere few grams or perhaps > a few kg of those metals, we could simulate all the interesting currently > extant lifeforms, and use other metals to simulate other possible > evolutionary paths, demonstrating something that has long > fascinated me: convergent evolution, as seen in the ratites. > > Until we get all the metals thinking in the form of computronium, they are > going to waste. We need to get on it, since we have a limited time before > the heat death of the universe, perhaps as short as a few hundred billion > years. > > Question please: is there any other logical endpoint for evolution besides > an MBrain? You know, of course, that evolution has no purpose or "endpoint". If life on Earth devolves to protoslime that's fine by evolution, just as good as an MBrain.
-- Stathis Papaioannou From stathisp at gmail.com Sat Feb 27 04:29:30 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 27 Feb 2010 15:29:30 +1100 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: On 27 February 2010 12:28, Spencer Campbell wrote: > Stathis Papaioannou : >> There is a vast number of possible statements like this, but we all >> completely ignore them as meaningless (in the sense of the logical >> positivists: though coherent, they are neither analytically true nor >> empirically true). Are we doing the wrong thing? > > Depends on what you want to accomplish. I want to accomplish something. M, by its nature, accomplishes nothing, since its presence or absence makes no subjective or objective difference. >> There are a lot of differences between mind scanning and sneezing, but >> since nothing we could learn about the world makes any difference to >> what we can know about M, none of these differences are relevant in >> the discussion. > > Assuming there are no supernatural-paranormal events in the universe, > as I have thus far, yes. That would be the only way that M could > conceivably have an effect on the physical world. Almost by > definition, really. A supernatural or paranormal event would still have some effect on the world. M has no effect whatsoever. -- Stathis Papaioannou From lacertilian at gmail.com Sat Feb 27 04:37:31 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Fri, 26 Feb 2010 20:37:31 -0800 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: Stathis Papaioannou : >Spencer Campbell : >> Assuming there are no supernatural-paranormal events in the universe, >> as I have thus far, yes. That would be the only way that M could >> conceivably have an effect on the physical world. Almost by >> definition, really. > > A supernatural or paranormal event would still have some effect on the > world. M has no effect whatsoever. ... I just said that! Exactly that. Was I unclear? No matter the formulation, if M has any observable effect on the world then that effect must be supernatural and/or paranormal by its very nature. This has nothing to do with continuity of experience, however. From spike66 at att.net Sat Feb 27 07:29:00 2010 From: spike66 at att.net (spike) Date: Fri, 26 Feb 2010 23:29:00 -0800 Subject: [ExI] endpoint of evolution: was RE: why anger? In-Reply-To: References: <4B8719F2.9070004@satx.rr.com><6444DC19A6864907B18CEB56F11F061F@spike> Message-ID: <32D0908903C74D179298184326489D25@spike> > ...On Behalf Of x at extropica.org ... > > On Fri, Feb 26, 2010 at 9:00 AM, spike wrote: ... > > > Question please: is there any other logical endpoint for evolution > > besides an MBrain? > > Jarring to see on this list "endpoint" in reference to > evolution, which is not only the only known source of > persistent novelty, but is itself evolving... -Jef Evolution would continue even after all the available metals are converted to computronium and nearly all the energy being emitted by the star is being converted into thought. The outward appearance of the MBrain wouldn't change much after that point, but information structure would continue to develop.
spike From stathisp at gmail.com Sat Feb 27 10:38:25 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 27 Feb 2010 21:38:25 +1100 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: On 27 February 2010 15:37, Spencer Campbell wrote: > Stathis Papaioannou : >>Spencer Campbell : >>> Assuming there are no supernatural-paranormal events in the universe, >>> as I have thus far, yes. That would be the only way that M could >>> conceivably have an effect on the physical world. Almost by >>> definition, really. >> >> A supernatural or paranormal event would still have some effect on the >> world. M has no effect whatsoever. > > ... I just said that! Exactly that. Was I unclear? > > No matter the formulation, if M has any observable effect on the world > then that effect must be supernatural and/or paranormal by its very > nature. This has nothing to do with continuity of experience, however. M can have no observable effect on the world whatsoever, and that means no supernatural effect either. For example, if M allows me to perform miracles and I lose M as a result of sneezing or whatever, then I won't be able to perform miracles any more. I will sink when I try to walk on water, an easily observable effect. I don't think it is possible for anything not to exist in a stronger sense than M does not exist. -- Stathis Papaioannou From stathisp at gmail.com Sat Feb 27 10:45:44 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 27 Feb 2010 21:45:44 +1100 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: On 27 February 2010 21:38, I wrote: >I don't think it is > possible for anything not to exist in a stronger sense than M does not > exist. That sentence was a bit confusing. What I meant was that I don't think it is possible for anything to not-exist in a stronger sense than M does not-exist. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Feb 27 13:45:23 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 27 Feb 2010 05:45:23 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <114796.39864.qm@web36506.mail.mud.yahoo.com> --- On Fri, 2/26/10, Stathis Papaioannou wrote: > The task is to replace all the components of a neuron with > artificial components so that the neuron behaves just the same. No, this sentence above of yours counts as a sample of false assumption #2. AI researchers in the real world seek to replace all the components of a brain with artificial components such that the complete product passes the Turing test. Period. It does not matter to them or to me, nor should it matter to you, whether the finished artificial neuron or the finished AI behaves "just the same" as it would have behaved had it not been replaced. Nobody can know the answer to that question. -gts From gts_2000 at yahoo.com Sat Feb 27 13:58:28 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 27 Feb 2010 05:58:28 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <114796.39864.qm@web36506.mail.mud.yahoo.com> Message-ID: <961760.19813.qm@web36508.mail.mud.yahoo.com> Stathis, In my last post I mentioned false assumption #2. I meant to send this post first: It dawned on me that you hold two false assumptions, and that these assumptions explain the supposed problem that you present that leads to the supposed conclusion that no distinction exists between weak and strong AI. 
1) In your arguments, you assume that for the weak AI hypothesis to hold, your supposed unconscious components/brains must follow the same physical architecture as organic brains. No such requirement exists in reality. AI researchers have the freedom to use whatever architecture they please to create weak AI, and it will come as no surprise to anyone if a successful architecture differs from that of an organic brain. and 2) In your arguments, you assume that your supposed artificial components/brains must "behave identically" to those of a non-AI. No such requirement exists in reality. The Turing test defines the only requirement, and just as you and I behave differently from one another while passing the TT, an AI might pass the TT while behaving quite differently from a human or another AI. It looks to me then that you have burdened AI researchers with two unnecessary and imaginary constraints. These assumptions of yours also explain why you have reached the unusual conclusion that no difference exists between weak and strong AI. It seems to me that we have spent a lot of time thinking about an imaginary problem. -gts --- On Sat, 2/27/10, Gordon Swobe wrote: > From: Gordon Swobe > Subject: Re: [ExI] Is the brain a digital computer? > To: "ExI chat list" > Date: Saturday, February 27, 2010, 8:45 AM > --- On Fri, 2/26/10, Stathis > Papaioannou > wrote: > > > The task is to replace all the components of a neuron > with > > artificial components so that the neuron behaves just > the same. > > No, this sentence above of yours counts as a sample of > false assumption #2. > > AI researchers in the real world seek to replace all the > components of a brain with artificial components such that > the complete product passes the Turing test. Period. > > It does not matter to them or to me, nor should it matter > to you, whether the finished artificial neuron or the > finished AI behaves "just the same" as it would have behaved > had it not been replaced. Nobody can know the answer to that > question. > > -gts > > > > > ? ? ? > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From gts_2000 at yahoo.com Sat Feb 27 14:31:40 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 27 Feb 2010 06:31:40 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <700118.79567.qm@web111208.mail.gq1.yahoo.com> Message-ID: <857313.2135.qm@web36505.mail.mud.yahoo.com> --- On Fri, 2/26/10, Christopher Luebcke wrote: > Not dualism in the sense of some ineffable, non-interactive > properties. But in the sense that certain properties pertain > to information, or motion, or activity, but not necessarily > to matter. If you hold that matter can have both physical and non-physical properties, and if you consider mental states (thoughts, beliefs, desires, and so on) as non-physical properties of matter, then you may consider yourself a property dualist. Property dualism appeared as a reasonable reaction to Cartesian substance dualism, but it still leaves us with a mysterious non-physical mind that materialist/physicalists like me find unsatisfactory. I don't consider myself a big fan of "isms" but it does sometimes help to know where one stands. 
-gts From bbenzai at yahoo.com Sat Feb 27 11:23:22 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 27 Feb 2010 03:23:22 -0800 (PST) Subject: [ExI] Continuity of experience In-Reply-To: Message-ID: <878910.9083.qm@web113605.mail.gq1.yahoo.com> Spencer Campbell wrote: > I haven't seen much to convince me that time is quantized, > so I could > just divide by 2*10^21 again. We could be here forever. > > Yeah, I see the point. It's a fine point. It gives me > pause. But, I am > not sure that my mind flickers in the way you're implying > it flickers. > > You can pick whatever time scale you want. Find a definite > period > during which I have had *absolutely no mind*, and you win. > I don't > think such a period is physically possible short of > traditionally-irreversible death. > OK, if you don't accept that time is quantised, this could be a problem. Or maybe not. Just to be clear, when you say 'absolutely no mind', can I assume this is the same as saying 'no change of brain state'? If so, then surely there must be some tiny infinitesimal period of time during which the brain does not change, even (or perhaps especially) if you don't accept quantised time. (If you don't buy Planck time, does that mean you also don't buy quantised matter or energy or information? Is it even theoretically possible to have quantisation in one without the others?) > The point to refute is that there isn't any gap between > those two minds, not even a little one I think there *has to be* a gap, you just need to go down far enough on the time scale. There must be a time of such a short duration that it's impossible for anything to occur that has any significance to a brain. If time is quantised, then maybe there is no time so short that absolutely nothing happens between one tick and the next, but the events possible in that short time will be on the very smallest scale, and be very far from making any difference whatsoever to a brain. (Of course you could claim that if even the tiniest motion of a single particle of the smallest possible bit of subatomic matter in your brain is missed out, you'd die, but that would be a rather Swobian argument, don't you think?) Ben Zaiboc From stathisp at gmail.com Sat Feb 27 14:50:30 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 28 Feb 2010 01:50:30 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <961760.19813.qm@web36508.mail.mud.yahoo.com> References: <114796.39864.qm@web36506.mail.mud.yahoo.com> <961760.19813.qm@web36508.mail.mud.yahoo.com> Message-ID: On 28 February 2010 00:58, Gordon Swobe wrote: > It dawned on me that you hold two false assumptions, and that these > assumptions explain the supposed problem that you present that leads to > the supposed conclusion that no distinction exists between weak and strong > AI. > > 1) In your arguments, you assume that for the weak AI hypothesis to hold, > your supposed unconscious components/brains must follow the same physical > architecture as organic brains. No such requirement exists in reality. AI > researchers have the freedom to use whatever architecture they please to > create weak AI, and it will come as no surprise to anyone if a successful > architecture differs from that of an organic brain. It's true that there is no requirement for an AI to follow brain architecture, but I am considering the special case where it does. Having reached agreement on what happens in this special case, it is then a separate question what happens in the more general case.
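To make that special case concrete, here is a minimal caricature -- my own toy sketch, not Markram-grade biophysics and not anyone's actual proposal -- of what "copying the behaviour of a neuron" cashes out to computationally, using a leaky integrate-and-fire model:

  def lif_spike_times(inputs, tau=10.0, threshold=1.0, dt=1.0):
      # Toy leaky integrate-and-fire neuron: the membrane potential leaks
      # toward zero, integrates injected current, and emits a spike (then
      # resets) whenever it crosses threshold.
      v, spikes = 0.0, []
      for t, current in enumerate(inputs):
          v += dt * (-v / tau + current)
          if v >= threshold:
              spikes.append(t)
              v = 0.0
      return spikes

On this caricature, an artificial replacement "behaves just the same" exactly when it returns the same spike times as the biological original for the same input train.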
> 2) In your arguments, you assume that your supposed artificial > components/brains must "behave identically" to those of a non-AI. No such > requirement exists in reality. The Turing test defines the only > requirement, and just as you and I behave differently from one another > while passing the TT, an AI might pass the TT while behaving quite > differently from a human or another AI. The original TT was proposed in order to answer the question of whether computers can think. Turing thought that communication in natural language over a text channel was sufficient to answer this question, language being one of the highest expressions of human intelligence. If a computer is capable of human language, then it should be capable of every other behaviour a human can display. Do you agree with that? And if a computer is capable of every behaviour that a human can display, it should be capable of every behaviour that simpler living things like mice, amoebae or neurons can display. Do you agree with that? It is true that no matter how advanced the technology it would not in general be possible to make an AI device that would behave *exactly* the same as the biological original, due to the effects of classical chaos and quantum uncertainty. However, for these same reasons it would also be impossible to make a biological copy that behaves exactly the same as the original. Even as a result of normal metabolic processes a cell changes over time, and occasionally things go wrong and the cell dies or becomes cancerous. So if you want to be absolutely precise, the task is to make an AI device that differs no more in its behaviour from the original than a good biological copy would. >> > The task is to replace all the components of a neuron >> with >> > artificial components so that the neuron behaves just >> the same. >> >> No, this sentence above of yours counts as a sample of >> false assumption #2. >> >> AI researchers in the real world seek to replace all the >> components of a brain with artificial components such that >> the complete product passes the Turing test. Period. >> >> It does not matter to them or to me, nor should it matter >> to you, whether the finished artificial neuron or the >> finished AI behaves "just the same" as it would have behaved >> had it not been replaced. Nobody can know the answer to that >> question. Some AI researchers, such as Henry Markram's group, are interested in reproducing as closely as possible the structure and function of neural tissue. However, it doesn't matter, since the thought experiment I have been discussing is designed to show something important about consciousness, and not meant as advice on the best techniques to make an AI. -- Stathis Papaioannou From jrd1415 at gmail.com Sat Feb 27 16:09:36 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sat, 27 Feb 2010 09:09:36 -0700 Subject: [ExI] Josephson Brains was Re: Is the brain a digital computer? In-Reply-To: <4B862140.2090705@satx.rr.com> References: <752259.97063.qm@web65608.mail.ac4.yahoo.com> <4B862140.2090705@satx.rr.com> Message-ID: While it adds no substance to the info here, I would note that the "Extra" in ESP is not extra at all, but rather perception by another non-magical modality. Human non-understanding of some natural phenomenon does not make the underlying mechanism 'magical'. You know what I'm saying. ESP has acquired a meme-tone of spooky magicalness, and I'd like to revise that silliness. Best, Jeff Davis "Everything's hard till you know how to do it." 
Ray Charles On Thu, Feb 25, 2010 at 12:05 AM, Damien Broderick wrote: > On 2/24/2010 11:19 PM, The Avantguardian wrote: > >> I see how Josephson junctions do act a lot like biological neurons. But >> there are also other features of JJs that are "value added". One thing that >> springs to mind is that Josephson junctions are also used in >> super-conducting quantum interference devices (SQUIDS) because they are >> extraordinarily sensitive to minute magnetic fields. SQUIDS can even measure >> the tiny magnetic fields produced by biological brains. The implications of >> this ability are quite interesting. Artificial brains that could detect or >> perhaps even read the thoughts of other brains might be possible. Kind of >> like built-in ESP. > > Of course Josephson himself accepts the reality of pre-installed human ESP. > > Damien Broderick From gts_2000 at yahoo.com Sat Feb 27 16:18:48 2010 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 27 Feb 2010 08:18:48 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: Message-ID: <781046.54939.qm@web36505.mail.mud.yahoo.com> Stathis, > However, it doesn't matter, since the thought experiment I have been > discussing is designed to show something important about consciousness Sure, it shows that we cannot, in a given human, separate that human's consciousness from his behavior. I agree with that. But it does not follow that weak AI = strong AI as you've claimed, or that we should consider strong AI a feasible research project, or anything of the sort. It only seems to because of the two false assumptions I mentioned. -gts From jonkc at bellsouth.net Sat Feb 27 16:43:11 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 27 Feb 2010 11:43:11 -0500 Subject: [ExI] How not to make a thought experiment In-Reply-To: <506948.96513.qm@web36506.mail.mud.yahoo.com> References: <506948.96513.qm@web36506.mail.mud.yahoo.com> Message-ID: <25231503-4EE5-4B0B-A4A9-2E269B8EA039@bellsouth.net> Since my last post Gordon Swobe has posted 8 times. > Hard drives and newspapers contain information but they do not have mental states intentional or otherwise I think he is probably right in this instance, but as Swobe thinks the way these things behave is irrelevant to the question of consciousness, one wonders how he knows this. I have asked similar questions before but have never received an answer. Swobe fosters the impression that he is too noble to answer difficult questions about his illogical theory, but I think he doesn't answer because he has no answer. > On my materialist view, brain matter causes and contains thoughts. Brains cause thoughts but I don't even know what it means to say they "contain thoughts" as it's meaningless to specify a physical location of a thought. I very much doubt Swobe knows what the phrase means either, the man just types stuff. > the noun thought [...] brain matter has mass. So notwithstanding the possible involvement of massless particles, thoughts have mass. I might have said something like that to parody Swobe's views, but he has saved me the trouble by providing the perfect self parody. > If you hold that matter can have both physical and non-physical properties, and if you consider mental states (thoughts, beliefs, desires, and so on) as non-physical properties of matter, then you may consider yourself a property dualist.
Does Swobe consider big, small, green, swiftly or the number eleven a non-physical property? I don't know and I doubt if Swobe knows either, the man just types stuff. > we cannot, in a given human, separate that human's consciousness from his behavior. But Swobe thinks we can separate a computer's consciousness from its behavior. What is the fundamental reason Swobe thinks the two should be treated so very differently? I don't know and I doubt if Swobe knows either, the man just types stuff. > > > But it does not follow that weak AI = strong AI It would seem to me that very much does follow, why does Swobe think differently? I don't know and I doubt if Swobe knows either, the man just types stuff. > I do claim that conscious thoughts arise as high-level features of brain matter, yes. The idea seems odd only because we don't yet understand the mechanism. There is nothing odd about it, the matter is bloody obvious, but then unlike Swobe I happen to think behavior is important. So at the end we are left with the same question we started with, how does Swobe know these things? How does Swobe know consciousness is not a high-level feature of the big toe? I don't know and I doubt if Swobe knows either, the man just types stuff. I don't expect Swobe to answer this or any other serious question regarding his naive views, rather he will continue with his debating style and simply ignore questions he is afraid of. John K Clark From jonkc at bellsouth.net Sat Feb 27 16:52:45 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 27 Feb 2010 11:52:45 -0500 Subject: [ExI] Josephson Brains In-Reply-To: References: <752259.97063.qm@web65608.mail.ac4.yahoo.com> <4B862140.2090705@satx.rr.com> Message-ID: <9EF39A8C-1D79-4BF4-B456-6671CE4277BF@bellsouth.net> On Feb 27, 2010, Jeff Davis wrote: > I would note that the "Extra" in ESP is not extra at all, but rather perception by another > non-magical modality. It is extra in that it has not been shown to exist by the scientific method, so if one wishes to include it in a list of human properties one must add it as an extra. John K Clark From thespike at satx.rr.com Sat Feb 27 16:53:20 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 27 Feb 2010 10:53:20 -0600 Subject: [ExI] Josephson Brains In-Reply-To: References: <752259.97063.qm@web65608.mail.ac4.yahoo.com> <4B862140.2090705@satx.rr.com> Message-ID: <4B894E00.5020405@satx.rr.com> On 2/27/2010 10:09 AM, Jeff Davis wrote: > While it adds no substance to the info here, I would note that the > "Extra" in ESP is not extra at all, but rather perception by another > non-magical modality. Human non-understanding of some natural > phenomenon does not make the underlying mechanism 'magical'. You know > what I'm saying. ESP has acquired a meme-tone of spooky magicalness, > and I'd like to revise that silliness. Well, I agree with you, Jeff, in disliking any mysto woo-woo tone. The "extra" was meant by Rhine only to mean "not by means of known senses." But that "extra" has to go beyond, say, echolocation or pheromone detection or unusual sensitivity to high-pitched sounds or infrared signals, etc., because (according to the available data) "ESP" is as effective with future random targets as with realtime hidden targets. That's what makes it interesting and truly puzzling.
To explain this data is going to require a revision or expansion of current physics models--maybe some version of entanglement, or maybe Cramer's transactional handshaking, or something utterly new, that *does* permit nonlocal signaling. I should note that nobody much in the parapsychology domain uses the term "ESP" any more, except when others do. The placeholder general term has been "psi" for decades. Damien Broderick From max at maxmore.com Sat Feb 27 18:00:27 2010 From: max at maxmore.com (Max More) Date: Sat, 27 Feb 2010 12:00:27 -0600 Subject: [ExI] Review of Stewart Brand's new book, Whole Earth Discipline Message-ID: <201002271828.o1RISZRs009268@andromeda.ziaspace.com> A curious blend of doom and optimism http://www.spiked-online.com/index.php/site/reviewofbooks_article/8238/ Max From stefano.vaj at gmail.com Sat Feb 27 18:30:21 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 27 Feb 2010 19:30:21 +0100 Subject: [ExI] Josephson Brains was Re: Is the brain a digital computer? In-Reply-To: References: <752259.97063.qm@web65608.mail.ac4.yahoo.com> <4B862140.2090705@satx.rr.com> Message-ID: <580930c21002271030w27a685bey1fa2e7e65e1eaab6@mail.gmail.com> On 27 February 2010 17:09, Jeff Davis wrote: > While it adds no substance to the info here, I would note that the > "Extra" in ESP is not extra at all, but rather perception by another > non-magical modality. "Extra" would mean "out of the realms of sight, smell, taste, hearing, touch". Anything not included in those input channels would qualify. OTOH, psychokinesis, not being a form of "perception", perhaps should not be included under the label of ESP. -- Stefano Vaj From hkeithhenson at gmail.com Sat Feb 27 18:30:20 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 27 Feb 2010 11:30:20 -0700 Subject: [ExI] Continuity of experience Message-ID: From an engineering viewpoint, it isn't significantly harder to upload, sideload, or merge with more capable computational resources while maintaining consciousness. The reverse would work as well. I worked this feature into "the clinic seed." Keith From thespike at satx.rr.com Sat Feb 27 18:44:27 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 27 Feb 2010 12:44:27 -0600 Subject: [ExI] Josephson Brains In-Reply-To: <580930c21002271030w27a685bey1fa2e7e65e1eaab6@mail.gmail.com> References: <752259.97063.qm@web65608.mail.ac4.yahoo.com> <4B862140.2090705@satx.rr.com> <580930c21002271030w27a685bey1fa2e7e65e1eaab6@mail.gmail.com> Message-ID: <4B89680B.2070201@satx.rr.com> On 2/27/2010 12:30 PM, Stefano Vaj wrote: > OTOH, psychokinesis, not being a form of "perception", perhaps should > not be included under the label of ESP. It never is or was. That's why "psi" was coined--as an umbrella term for cognitive (telepathy and clairvoyance--"psi gamma") and motor/effector (PK--"psi kappa") phenomena. The Greek suffixes were quickly abandoned. Damien Broderick From cluebcke at yahoo.com Sat Feb 27 19:18:04 2010 From: cluebcke at yahoo.com (Christopher Luebcke) Date: Sat, 27 Feb 2010 11:18:04 -0800 (PST) Subject: [ExI] Is the brain a digital computer? In-Reply-To: <857313.2135.qm@web36505.mail.mud.yahoo.com> References: <857313.2135.qm@web36505.mail.mud.yahoo.com> Message-ID: <613970.21167.qm@web111204.mail.gq1.yahoo.com> If my fervent belief that spreadsheets are not round and magnetic commits me to property dualism, then I fully embrace my hitherto undiscovered, yet apparently inescapable, dogma.
I welcome to my new camp all true believers in the proposition that apples are not made of red. ________________________________ From: Gordon Swobe To: ExI chat list Sent: Sat, February 27, 2010 6:31:40 AM Subject: Re: [ExI] Is the brain a digital computer? --- On Fri, 2/26/10, Christopher Luebcke wrote: > Not dualism in the sense of some ineffable, non-interactive > properties. But in the sense that certain properties pertain > to information, or motion, or activity, but not necessarily > to matter. If you hold that matter can have both physical and non-physical properties, and if you consider mental states (thoughts, beliefs, desires, and so on) as non-physical properties of matter, then you may consider yourself a property dualist. Property dualism appeared as a reasonable reaction to Cartesian substance dualism, but it still leaves us with a mysterious non-physical mind that materialist/physicalists like me find unsatisfactory. I don't consider myself a big fan of "isms" but it does sometimes help to know where one stands. -gts From aware at awareresearch.com Sat Feb 27 19:47:50 2010 From: aware at awareresearch.com (Aware) Date: Sat, 27 Feb 2010 11:47:50 -0800 Subject: [ExI] Review of Stewart Brand's new book, Whole Earth Discipline In-Reply-To: <201002271828.o1RISZRs009268@andromeda.ziaspace.com> References: <201002271828.o1RISZRs009268@andromeda.ziaspace.com> Message-ID: On Sat, Feb 27, 2010 at 10:00 AM, Max More wrote: > A curious blend of doom and optimism > > http://www.spiked-online.com/index.php/site/reviewofbooks_article/8238/ I've long been ambivalent about Brand, admiring his passion, but disliking some narrow zealotry. Reading this review, I was immediately struck that for most people, "ecology" still commonly refers to interactions among plants and wildlife and their natural habitats, etc. Just yesterday I'd referred to the emerging ecological paradigm, superseding the presently popular computational paradigm, and it registered that very likely I was misunderstood, possibly perceived as promoting a Green point of view. It didn't even occur to me, addressing the Extropy list, that thinking in "ecological" terms, i.e., in terms of complex processes of evolutionary growth, meaningless in isolation from their environment of interaction, on all scales including the cosmic, might be misconstrued as the "ecological" of the environmental conservationist. Anyone interested in bigger-picture ecological thinking might want to check out http://evodevouniverse.com/. - Jef From x at extropica.org Sat Feb 27 20:08:51 2010 From: x at extropica.org (x at extropica.org) Date: Sat, 27 Feb 2010 12:08:51 -0800 Subject: [ExI] endpoint of evolution: was RE: why anger? In-Reply-To: <32D0908903C74D179298184326489D25@spike> References: <4B8719F2.9070004@satx.rr.com> <6444DC19A6864907B18CEB56F11F061F@spike> <32D0908903C74D179298184326489D25@spike> Message-ID: On Fri, Feb 26, 2010 at 11:29 PM, spike wrote: >> >> Jarring to see on this list "endpoint" in reference to >> evolution, which is not only the only known source of >> persistent novelty, but is itself evolving... -Jef > > Evolution would continue even after all the available metals are converted > to computronium and nearly all the energy being emitted by the star is being > converted into thought.
The outward appearance of the MBrain wouldn't > change much after that point, but information structure would continue to > develop. Spike, my evil twin, you know I love ya, but doesn't your response, emphasizing the eschatology of filling the universe with thought, illustrate that very trend which I was questioning and lamenting? I suggested "Isn't this just another example of the reification of agent-centered 'intelligence'..." I offered "'evolution' as merely a special case of increasing free-energy rate density and 'intelligence' merely a phase..." And I ask again, "What the hell happened to halt the growth of the Extropy discussion list?" I remember that discussion on this list used to be surprising, enlightening, challenging. Now the predominant pattern appears to be participants ignoring that which they don't understand, railing against that which offends their personal sensibilities, and persisting in tooting their own horn. - Jef From spike66 at att.net Sat Feb 27 21:26:59 2010 From: spike66 at att.net (spike) Date: Sat, 27 Feb 2010 13:26:59 -0800 Subject: [ExI] endpoint of evolution: was RE: why anger? In-Reply-To: References: <4B8719F2.9070004@satx.rr.com><6444DC19A6864907B18CEB56F11F061F@spike><32D0908903C74D179298184326489D25@spike> Message-ID: > ...On Behalf Of x at extropica.org > ... > > Spike, my evil twin, you know I love ya... You are too kind Jef, and do let me assure you the affection is mutual, but how can you be certain I am the evil twin? And what if we were actually triplets? Then we could have one good, one evil, and then where would the other one go? If we put goodness on the real axis, then had a perpendicular imaginary goodness axis representing (-1)^.5 times goodness, how do we map our complex good/evil? Actually I think we already riffed on that whole concept a couple years ago. Amara got involved in it as I recall when she was in a particularly playful mood and the whole thing was a total hoot. But to your question: >... but doesn't your > response, emphasizing the eschatology of filling the universe > with thought, illustrate that very trend which I was > questioning and lamenting? Ja I confess that it does, and that I am puzzled at why you would lament. With sufficient intelligence and capacity for thought, we can form a superset of everything interesting in the universe, even a superset of everything boring, should we decide we want it. If we look around us, most matter, the overwhelming majority of the metal, is not involved in any life form, never mind the vanishingly small portion involved in any smartform. Given sufficient capacity for thought and calculation, we have the option of bringing all the available metals to life! We can simulate all imaginable life forms, in every possible ecosystem, in a way that this planet could never do for its lack of room. We can sim blue-green algal mats all we want, while devouring few resources, but it isn't all that interesting, even to the algae. What I would expect is that intelligence recognizes its own worth, and works towards the singularity, then on to a new and vast home space for itself, an MBrain which uses every available atom of everything below the top row on the periodic chart. Thought is good, thinking is good, invention is good. We have one species on this planet that is particularly good at it, and it has allowed that species to radiate in all directions, and fill all available spaces on this planet, and off of it. Thought is good, calculation is good, simulation is good.
I see evolution as driving toward an ultimate point where all the metal is thinking, calculating, simulating, evolving ever more information and more and better concepts and ideas. I can scarcely imagine a logical stopping point or equilibrium in the evolutionary process which stops driving towards an MBrain. Can you? Do not lament Jef, rejoice! You and I are fortunate enough to have been born right exactly at the right time in history where consciousness of the coming age of super-efficient smartforms is dawning like the long-awaited approach of the glorious sun's rays on a sparkling clear morning. spike From scerir at libero.it Sat Feb 27 22:07:21 2010 From: scerir at libero.it (scerir) Date: Sat, 27 Feb 2010 23:07:21 +0100 (CET) Subject: [ExI] Josephson Brains Message-ID: <11562273.2689911267308441183.JavaMail.defaultUser@defaultHost> Damien: [...] (according to the available data) "ESP" is as effective with future random targets as with realtime hidden targets. That's what makes it interesting and truly puzzling. To explain this data is going to require a revision or expansion of current physics models--maybe some version of entanglement, or maybe Cramer's transactional handshaking, or something utterly new, that *does* permit nonlocal signaling. ----------- Already in the '80s Bohm and Aharonov in a couple of papers showed that the evolution of quantum states (i.e. entangled states) cannot be made covariant (i.e. it cannot be made invariant under relativistic boosts that change the time-ordering of events). But Bohm and Aharonov showed that the probability distributions of possible outcomes are covariant. In Gisin's words: "The probability distributions are covariant, but don't describe the actual world. The realizations describe the world we see around us, but, necessarily, in a non-covariant way. In short: Only the cloud of potentialities is covariant, the actual events aren't. Hence, in some sense, the open future is covariant, but the past is not." Decades and decades of models and experiments about Bell theorems, and inequalities, and nonlocality, and nonseparability, have also shown that: 1) there are no local hidden variables; 2) there are no nonlocal hidden variables; 3) there are no hidden variables of any possible kind; 4) the remaining option - a sort of 'sui generis' hidden variable model - being the transactional interpretation, and the Aharonov two-time interpretation, | ket > & < bra | having different / opposite times; 5) there are models, sometimes called "flash ontology", based on GRW dynamics, but they are rather "ad hoc".
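(For concreteness: the workhorse behind most of these experiments is the CHSH form of the Bell inequality -- standard textbook material, quoted here for reference rather than drawn from any of the papers below. In LaTeX notation,

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 $$

for every local hidden-variable theory, where $E(a,b)$ is the correlation between the outcomes on the two wings for measurement settings $a$ and $b$. Quantum mechanics predicts, and experiment confirms, values up to $|S| = 2\sqrt{2}$, the Tsirelson bound, for suitable settings on an entangled pair. The violation itself rules out only the local hidden variables of point 1; the nonlocal cases need further no-go arguments of the same family.)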
Summing up: nonlocality, nonseparability, no local causality, no relativistic causality, they all rule out spacetime as we know it (or they are hints to a more physical spacetime, based on nonlocal quantum processes). Under the necessary postulate of "free will" of observers (establishing specific experiments), and instruments (with random parameters to be set), during a Bell-type experiment (a sort of precondition imposed by Bell to avoid the "ultimate conspiracy"), the actual reification of single outcomes (in spacelike separated regions) is something happening completely out-of-spacetime (as we know it from relativistic theories). As Pauli once said, the outcome is something "irrational", an act of "creation" (and Gisin repeats that). The interesting point here is the following "loop". If outcomes are unique acts of creation, out-of-spacetime (since there is no possible relativistic representation), nonetheless they are real facts, they are the only real physical facts. So these outcomes must also be the only bricks from which the "true" spacetime must be built. In Gisin's words "[Q]uantum events are not mere functions of variables in spacetime, but true creations: time does not merely unfold, true becoming is at work. The accumulation of creative events is the fabric of time." Rather, I would have said ... " is the fabric of spacetime". Dunno the possible consequences of all that for PSI stuff. But for sure some (unknown) conceptual revolution is needed also here. Gisin: http://arxiv.org/abs/1002.1392 Gisin: http://arxiv.org/abs/1002.1390 Kastner: http://arxiv.org/abs/1002.2675 Tumulka: http://arxiv.org/abs/quant-ph/0602208 Suarez: http://arxiv.org/abs/1002.2697 From thespike at satx.rr.com Sat Feb 27 23:06:30 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 27 Feb 2010 17:06:30 -0600 Subject: [ExI] Josephson Brains In-Reply-To: <11562273.2689911267308441183.JavaMail.defaultUser@defaultHost> References: <11562273.2689911267308441183.JavaMail.defaultUser@defaultHost> Message-ID: <4B89A576.9010507@satx.rr.com> On 2/27/2010 4:07 PM, scerir wrote: > for sure some > (unknown) conceptual revolution is needed also here. > > Gisin:http://arxiv.org/abs/1002.1392 > Gisin:http://arxiv.org/abs/1002.1390 > Kastner:http://arxiv.org/abs/1002.2675 > Tumulka:http://arxiv.org/abs/quant-ph/0602208 > Suarez:http://arxiv.org/abs/1002.2697 Good urls, Serafino! Damien Broderick From stathisp at gmail.com Sun Feb 28 03:05:37 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 28 Feb 2010 14:05:37 +1100 Subject: [ExI] Is the brain a digital computer? In-Reply-To: <781046.54939.qm@web36505.mail.mud.yahoo.com> References: <781046.54939.qm@web36505.mail.mud.yahoo.com> Message-ID: On 28 February 2010 03:18, Gordon Swobe wrote: > Stathis, > >> However, it doesn't matter, since the thought experiment I have been >> discussing is designed to show something important about consciousness > > Sure, it shows that we cannot, in a given human, separate that human's consciousness from his behavior. I agree with that. > > But it does not follow that weak AI = strong AI as you've claimed, or that we should consider strong AI a feasible research project, or anything of the sort. It only seems to because of the two false assumptions I mentioned. There is no requirement on AI researchers to make an AI by following the structure and function of the brain. However, if they do do it this way, as mind uploading researchers would, the resulting AI will both behave like a human and have the consciousness of a human. If digital computers cannot be conscious then the practical implication for the researchers is that there will be some part of the brain - the NCC (neural correlate of consciousness), although they won't necessarily recognise it as such - the behaviour of which cannot be duplicated by a digital computer. If they are building an artificial neuron they may find, for example, that no computer model will be able to control the timing of neurotransmitter release in a manner analogous to the biological neuron, with the result that if these artificial neurons are installed in someone's brain both the person's behaviour and their experience will deviate from normal.
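To be concrete about what "a computer model of the neuron" would have to do, here is a minimal sketch -- a leaky integrate-and-fire toy in Python, with invented constants, standing in for the vastly more detailed models real researchers would use. Its entire job is to map input current onto output spike timing, which is exactly the job that, in the scenario above, no program could perform correctly:

# Toy leaky integrate-and-fire neuron. All constants are illustrative,
# not physiological; a real artificial neuron would model ion channels,
# synapses and transmitter release in far more detail.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Return the times (in ms) at which the model neuron spikes."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The potential leaks toward rest while integrating the input.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_threshold:            # threshold crossed: spike
            spike_times.append(step * dt)
            v = v_reset                 # reset and integrate again
    return spike_times

# A constant drive of 2.0 units for 100 ms produces regular firing.
print(simulate_lif([2.0] * 1000))

If the biological neuron's spike timing depended on some non-computable physical process, no function of this general kind, however refined, could be made to match it.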
The researchers will then announce that it appears the brain utilises physical processes which are NOT COMPUTABLE, and the goal of uploading a human mind to a computer is therefore unattainable. More generally, it would mean that there are aspects of human intelligence that could never be replicated by a computer program, since if the behaviour of the NCC is not computable then the behaviour of any system affected by the NCC is also not computable. -- Stathis Papaioannou From lacertilian at gmail.com Sun Feb 28 04:09:34 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sat, 27 Feb 2010 20:09:34 -0800 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: Stathis Papaioannou : > M can have no observable effect on the world whatsoever, and that > means no supernatural effect either. For example, if M allows me to > perform miracles and I lose M as a result of sneezing or whatever, > then I won't be able to perform miracles any more. I will sink when I > try to walk on water, an easily observable effect. I don't think it is > possible for anything not to exist in a stronger sense than M does not > exist. Okay, okay! We agree! Looking back on my original definition of M, I realize my "barring supernatural effect" qualifiers were unnecessary. We are talking about something very similar to souls here, but M is a conception of the soul which does not in any way challenge the present scientific consensus. So: it doesn't exist. Ben Zaiboc : > OK, if you don't accept that time is quantised, this could be a problem. > > Or maybe not... Just to be clear, when you say 'absolutely no mind', can I assume this is the same as saying 'no change of brain state'? No. Well... no. Dead brains still change state. I am grasping for a satisfactory definition here, but I would be more inclined to say "absolutely no mind" equals "absolutely no purposeful action". It is fuzzier than yours. You can say, "but dead brains act purposefully! Their purpose is to decay!". It would probably take something longer than the Encyclopædia Britannica to specify a definition, in English, for what I'm thinking about with such precision that exceptions like that are impossible. Take two human bodies. Remove the brains of each, replacing them with chemically-and-electrically inert copies with identical structural properties. Density, shape, rigidity, et cetera. If the behavior of one changes (in any way) while the other is unaffected, then that body had a mind and the other was mindless. More simply: a heartbeat implies a mind. Breathing implies a mind. Pretty much every physiological function implies a mind, excepting the purely chemical ones; stomach acid does not imply a mind, even if it happens to be dissolving a ham sandwich at the time. Ben Zaiboc : > If so, then surely there must be some tiny infinitesimal period of time during which the brain does not change, even (or perhaps especially) if you don't accept quantised time. (If you don't buy Planck time, does that mean you also don't buy quantised matter or energy or information? Is it even theoretically possible to have quantisation in one without the others?) I wouldn't argue with that. Actually I would not be at all surprised to learn that time IS quantized in reality, but it certainly isn't in the standard model of particle physics. The standard model doesn't do so great with gravity, either, though. Ben Zaiboc : > I think there *has to be* a gap, you just need to go down far enough on the time scale.
There must be a time of such a short duration that it's impossible for anything to occur that has any significance to a brain. If time is quantised, then maybe there is no time so short that absolutely nothing happens between one tick and the next, but the events possible in that short time will be on the very smallest scale, and be very far from making any difference whatsoever to a brain. You're already pretty far away from a mind-scanning scenario! But, okay, let's see where this leads. If time is quantized, then the universe operates in steps. Tick-tock. We can also assume that space is quantized, then, and for the sake of convenience we'll assume there are "cells" as in a cellular automaton. Stephen Wolfram says a causal network is more likely, but I don't need to be that accurate right now. So! It is reasonable to assume that massless particles, such as light, move one cell during each step. This necessarily implies that massive particles do not always move at each step, and so, if the human brain is composed entirely of massive particles, it is theoretically possible to find two adjacent steps of time in which my brain does not change in any way. Difficult, considering the number of particles involved, but possible. You are arguing that this would qualify as a gap in activity. But, I see no gap. If you laid out the time-steps as a ribbon of squares, each square containing the universe, then the gap between two steps is obviously a zero-width one-dimensional line. Saying that there is a gap there is equivalent to saying that there is a gap between a thing and itself; no distance, no gap. So, if my logic holds up, your assumption that a gap in activity implies a gap in M never comes into play. Your move! Ben Zaiboc : > (Of course you could claim that if even the tiniest motion of a single particle of the smallest possible bit of subatomic matter in your brain is missed out, you'd die, but that would be a rather Swobian argument, don't you think?) (If I ever make such an argument, you can go ahead and assume I have somehow lost M.) Keith Henson : > From an engineering viewpoint, it isn't significantly harder to > upload, sideload, or merge with more capable computational resources > while maintaining consciousness. The reverse would work as well. I > worked this feature into "the clinic seed." Yeah, I figured that out too. I don't consider this discussion to be of vital importance to me. This "the clinic seed" of yours, though. What is it? From pharos at gmail.com Sun Feb 28 12:22:01 2010 From: pharos at gmail.com (BillK) Date: Sun, 28 Feb 2010 12:22:01 +0000 Subject: [ExI] Continuity of experience In-Reply-To: References: Message-ID: On Sun, Feb 28, 2010 at 4:09 AM, Spencer Campbell wrote: > Yeah, I figured that out too. I don't consider this discussion to be > of vital importance to me. This "the clinic seed" of yours, though. > What is it? > Ohhh. Such difficult questions......... Try Google "clinic seed" henson BillK From jonkc at bellsouth.net Sun Feb 28 17:29:10 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 28 Feb 2010 12:29:10 -0500 Subject: [ExI] Continuity of experience. In-Reply-To: References: Message-ID: On Feb 27, 2010, Spencer Campbell wrote: > > Take two human bodies. Remove the brains of each, replacing them with chemically-and-electrically inert copies with identical structural properties. Density, shape, rigidity, et cetera.
If electrical charge distribution is included in the "et cetera" then it's meaningless to talk about copies that are identical to working models but are nevertheless inert. > More simply: a heartbeat implies a mind. Breathing implies a mind. Sleeping people and those in a coma due to massive brain damage do both of those things but they don't have a mind, or at least they don't act like they do. And concerning the title of this thread ask yourself one question, what would the NON continuity of existence seem like? I don't think there is any doubt that subjectively everything would seem as continuous as ever, it's just that the external world would seem to jump. So why should people get all hot and bothered worrying about the continuity of their existence? > Actually I would not be at all surprised to learn that time IS > quantized in reality Neither would I, but I think it's irrelevant to the question at hand. John K Clark From bbenzai at yahoo.com Sun Feb 28 17:11:40 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 28 Feb 2010 09:11:40 -0800 (PST) Subject: [ExI] Continuity of experience In-Reply-To: Message-ID: <214822.48535.qm@web113620.mail.gq1.yahoo.com> Spencer Campbell wrote: > Ben Zaiboc : > > Just to be clear, when you say > 'absolutely no mind', can I assume this is the same as > saying 'no change of brain state'? > > No. Well... no. Dead brains still change state. I am > grasping for a > satisfactory definition here, but I would be more inclined > to say > "absolutely no mind" equals "absolutely no purposeful > action". > > It is fuzzier than yours. Indeed. But we're not concerned with dead and decaying brains, just ones that have been suspended via cryonics or some chemical preservation, etc. I'd say that such brains aren't currently 'doing mind', because they are not changing state, being unable to, and that when/if they become able to (and assuming no damage has occurred in the meantime), the mind will resume. ... > > You are arguing that this would qualify as a gap in > activity. But, I > see no gap. If you laid out the time-steps as a ribbon of > squares, > each square containing the universe, then the gap between > two steps is > obviously a zero-width one-dimensional line. Saying that > there is a > gap there is equivalent to saying that there is a gap > between a thing > and itself; no distance, no gap. > > So, if my logic holds up, your assumption that a gap in > activity > implies a gap in M never comes into play. Your move! Rather than the universe, imagine each square as the total contents of a brain, down to a suitably fine level (probably not the subatomic level, as I doubt it's relevant, but it could be if you like). The gap is in the square where nothing changes, not between adjacent squares. If square A has a certain state, and so does square B, but C is in a different state, there is a time gap between entering state A and state C. Imagine the square B, with no change from A, being carried forward into C,D,E... to the many-multi-gazillionth square, a few years or centuries or millennia, hence. Then the state changes in the next square, which will be exactly the same as if it were still C. My argument is that you will find a square B (identical to A) in a functioning brain if you look hard enough.
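A toy way to picture it, as a Python sketch (the 'brain state' here is just a tuple of integers -- purely illustrative, no neurology claimed). A 'square B' is simply a tick whose state is identical to the tick before it:

# Toy version of the squares argument: scan a discrete-time history
# for consecutive ticks with exactly the same state.

def find_unchanged_ticks(states):
    """Return each tick t where states[t] == states[t + 1]."""
    return [t for t in range(len(states) - 1)
            if states[t] == states[t + 1]]

history = [(0, 1, 2), (1, 1, 2), (1, 1, 2), (2, 0, 2), (2, 0, 2), (3, 1, 0)]
print(find_unchanged_ticks(history))   # -> [1, 3]: the 'square B' moments

My claim is just that a fine-grained enough history of a real brain would contain ticks like 1 and 3 here.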
> Ben Zaiboc : > > (Of course you could claim that if even the tiniest > motion of a single particle of the smallest possible bit of > subatomic matter in your brain is missed out, you'd die, but > that would be a rather Swobian argument, don't you think?) > > (If I ever make such an argument, you can go ahead and > assume I have somehow lost M.) That's what I meant. > Keith Henson : > > From an engineering viewpoint, it isn't significantly > harder to > > upload, sideload, or merge with more capable > computational resources > > while maintaining consciousness. The reverse would > work as well. I > > worked this feature into "the clinic seed." > > Yeah, I figured that out too. I don't consider this > discussion to be > of vital importance to me. This "the clinic seed" of yours, > though. > What is it? If Keith hasn't told you in the meantime: http://www.terasemjournals.org/GN0202/henson.html It's a short story written by him. Not bad at all, imo. Part of a larger work called "Standard Gauge": http://www.terasemjournals.org/gn0401/kh1.html Ben Zaiboc From lacertilian at gmail.com Sun Feb 28 18:25:18 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 28 Feb 2010 10:25:18 -0800 Subject: [ExI] Continuity of experience. In-Reply-To: References: Message-ID: John Clark : > If electrical charge distribution is included in the "et cetera" then it's meaningless to talk about copies that are identical to working models but are nevertheless inert. See, this is why I specified "structural properties". I would not consider electrical charge distribution to be a structural property. That's an electrical property. John Clark : > Sleeping people and those in a coma due to massive brain damage do both of those things but they don't have a mind, or at least they don't act like they do. Yeah, but people wake up from sleep and comas. I'm stretching the concept of "mind" a little far for the purposes of this discussion. I'm saying: the mind is the sum total of everything the brain does, and the brain (normally) regulates pulse and breathing. Therefore, the mind is responsible for both. John Clark : > And concerning the title of this thread ask yourself one question, what would the NON continuity of existence seem like? I don't think there is any doubt that subjectively everything would seem as continuous as ever, it's just that the external world would seem to jump. So why should people get all hot and bothered worrying about the continuity of their existence? Discontinuous experience is conceivable and not worrying in and of itself. In fact, people seem to live through it every day. You said "non-continuity of existence", though, and maybe that's a more appropriate name for the subject. That certainly is worrying. If I cease to exist for a while, and then later on something just like me comes into existence, is it really me? Stupid question. Yes, of course it's me. The part that troubles me, irrationally, is that I'm not sure I would be it. John Clark : > Neither would I, but I think it's irrelevant to the question at hand. (Talking about quantized time, here.) I'd be inclined to agree. Nevertheless, it came up. I can't just ignore the arguments that I think are irrelevant, you know!
From lacertilian at gmail.com Sun Feb 28 19:02:58 2010 From: lacertilian at gmail.com (Spencer Campbell) Date: Sun, 28 Feb 2010 11:02:58 -0800 Subject: [ExI] Continuity of experience In-Reply-To: <214822.48535.qm@web113620.mail.gq1.yahoo.com> References: <214822.48535.qm@web113620.mail.gq1.yahoo.com> Message-ID: BillK : > Ohhh. Such difficult questions......... > > Try Google > "clinic seed" henson Right. Yes. I knew that. I was... being polite! Yeah, that's it. Ben Zaiboc wrote: > Indeed. But we're not concerned with dead and decaying brains, just ones that have been suspended via cryonics or some chemical preservation, etc. > > I'd say that such brains aren't currently 'doing mind', because they are not changing state, being unable to, and that when/if they become able to (and assuming no damage has occurred in the meantime), the mind will resume. I would agree with you if the brain itself was reactivated later on, or "resurrected" if you prefer, but this is not the case in a classic uploading scenario. After freezing or plasticization, the brain is just a block of data. A three-dimensional photograph, describing the mind to be reconstructed in another medium. Still I would not be bothered, except for the fact that a multi-clone scenario is perfectly plausible here. There are many other immortality gambits in which that is not a possibility, perhaps including a few that have a hundred-year-plus gap in activity, and in general I don't have any problem with them. Ben Zaiboc : > The gap is in the square where nothing changes, not between adjacent squares. If square A has a certain state, and so does square B, but C is in a different state, there is a time gap between entering state A and state C. Imagine the square B, with no change from A, being carried forward into C,D,E... to the many-multi-gazillionth square, a few years or centuries or millennia, hence. Then the state changes in the next square, which will be exactly the same as if it were still C. Ohh, now I get it. Okay. Yes. That is a definite gap in activity. However, I would not call it a definite gap in mind. If I have a mind in square A, and square B is identical to square A, then I also have a mind in square B. Looking at it that way, minds persist *in spite of* activity more than because of it. Technically, C has a different mind from A or B. The brain is doing something different in C. For some reason or another, M makes the leap from A to B to C, skittering across many different minds over time. Ben Zaiboc : > If Keith hasn't told you in the meantime: > http://www.terasemjournals.org/GN0202/henson.html > > It's a short story written by him. Not bad at all, imo. Part of a larger work called "Standard Gauge": > http://www.terasemjournals.org/gn0401/kh1.html Well, joke's on you Ben, I already googled it! Get with the times already. (Thanks though.) From jonkc at bellsouth.net Sun Feb 28 18:39:57 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 28 Feb 2010 13:39:57 -0500 Subject: [ExI] Continuity of experience. In-Reply-To: References: Message-ID: <77429FAC-065F-4447-A4F3-273090F84259@bellsouth.net> On Feb 28, 2010, at 1:25 PM, Spencer Campbell wrote: > See, this is why I specified "structural properties". I would not > consider electrical charge distribution to be a structural property. > That's an electrical property. Philosophically speaking, and that's what we're talking about, why is that distinction important? > Yeah, but people wake up from sleep and comas.
Sometimes they do, and if they do subjectively the mind's existence has continued without a break even if it hasn't objectively. John K Clark From steinberg.will at gmail.com Sun Feb 28 19:17:57 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Sun, 28 Feb 2010 14:17:57 -0500 Subject: [ExI] The Chess Room Message-ID: <4e3a29501002281117q42f7dcben9def0d23b9bbe78e@mail.gmail.com> In the near future, America's greatest game theorists develop Benthic Violet, the world's most complicated chess machine. Aside from playing exceptionally well, BV has an astounding extra property. If any player faces BV enough times, its algorithms will eventually be able to mimic this player perfectly--from any initial move or set of moves, BV knows with certainty exactly how the game will play out. It is infallible. Chess players everywhere are aghast. Their game, it seems, has been solved. So a team of chess masters resolves to set things straight. They form a group composed of themselves along with other chess players along the skill gradient. Every play style is represented, from crafty to frank, from superb to awful. The players christen their group "The Last Stand." The day has come for the match to begin, and TLS has devised a system of play. Each player is given a number in accordance with their skill level, with the best players being awarded the lowest numbers. Play begins randomized across all players, each one having an equally likely chance to play the next move. They have also written an ingenious algorithm to rate how well the group seems to be doing, and when the algorithm reports that things are starting to go sour, lower numbers are given a probabilistic weight in accordance with just how sour the scene is.
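(For the mechanically inclined: the ingenious algorithm itself is left to the imagination, but the selection rule might look something like the toy Python sketch below. The weighting scheme, the 'sourness' score and every constant are my invention, purely for illustration.)

import random

# Toy sketch of TLS's player-selection rule. Skill number 1 is the
# strongest player; 'sourness' is the group's position evaluation,
# running from 0.0 (all is well) to 1.0 (disaster).

def pick_player(skills, sourness):
    """Pick who plays the next move, shifting weight toward the
    strongest (lowest-numbered) players as the position sours."""
    weights = [1.0 + sourness * (max(skills) - s) for s in skills]
    return random.choices(skills, weights=weights, k=1)[0]

skills = list(range(1, 11))         # ten players, 1 = grandmaster
print(pick_player(skills, 0.0))     # position fine: anyone may move
print(pick_player(skills, 0.9))     # position sour: masters favoured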
The match starts, and the players all wait outside a secure room with a single door for their chance to play. On entering, they see the board in front of them, along with a list of previous moves, should they want to check the development. This list is also broadcast on a screen outside the room. The games grind on and on, and BV, as perhaps was expected, wins every time. But this is not TLS's goal. After each game, technicians ask BV whether it is able to emulate its opponent yet, and the answer is always the same: INSUFFICIENT DATA FOR MEANINGFUL ANSWER. While BV is able to face TLS more and more efficiently over time, it seems never to be able to predict the next move. TLS is ecstatic. Champagne bottles are popped and flutes are filled. Party horns are blown, shirts removed and swung over heads. As expected, humanity has beaten the devil machine. But, in the midst of this mirth, BV begins to emit some odd noises. Sparks fly about and the humans take cover to observe. After a while, BV stops making any noise. The air is still, and then, out of its print slot comes a seemingly endless ribbon of paper. Finally this stops too, and all of TLS goes over to observe. Kasparov looks at the readout, puts his hand to his head, proclaims that he seems to have come into a case of the vapors, and promptly faints. A less technically inclined player picks up the sheet and reads off it. The first line, beginning with "M0", states a few possible moves. The second line, which starts off "M0-1," again lists a few moves. Upon closer examination of the list, the players find headings like "M0-1-1", "M0-2-1-5-2-3", and so on. Each of these can be traced back to the beginning in a backwards-branching fashion. At the very end of the printout, a line states: LIKELY DEVELOPMENTS; P>.5. Under this are long lists, each with a probability next to it: "M0-12314231423255142314234231....123123: P>.72" A player sees fit to check these moves against the last game. A horrified look comes over him and the paper falls out of his hands. The moves match well into the list. More and more of these movelists are checked, and, eventually, the exact playout of the game is found, P>.57. What does this mean? Has BV won? It can't emulate the player perfectly, but it seems like, but for a different path being taken, it could have. One would expect that, running more games, eventually BV would get it right. At this moment of realization, a bright light begins to flood out from under the door of the chess room. Suddenly, an explosion ravages the area, destroying BV and TLS, yet, magnificently, the final readout is carried on a thermal column up into the atmosphere, where it flits into a cave somewhere in the Siberian mountains. Millennia pass. In the time between then and now, a group of scientists, having delved deeper and deeper into understanding our brains, have written a formula that can take someone's chess history and compose a rough AI from it, consciousness and all. While memories are lacking, this AI will act very much like its complementary player. (This might be preposterous, but it would have been the same if the original chess room instead analyzed every action of all its players and attempted to create the consciousness directly based on this. I am simply engaging in a lengthy chronological syllogism.) In a neat coincidence, one of the scientists who wrote the algorithm is an avid mountain climber, and, while hiking in the Siberian mountains (which are now known for their beautiful flora, fauna, and general verdant nature), stumbles into that fated cave and finds the ancient readout. He is confused, but takes the paper back to his laboratory and runs it through the machine. The algorithm, like it often tends to do, tells the scientist that it has found a number of probable consciousnesses for the movelist. He tells it to choose the most probable one. A voice ekes out of the machine: "Hello..." And this, unfortunately, is where our story ends. A shampoo bottle inspired me to write it. In the shower, I realized that the placement of the bottle and the amount of shampoo inside and other factors are determined by the actions of my family. The history of the bottle is dependent on many consciousnesses, and so "understanding" of it cannot exist in one mind. The bottle is a piece of external information that relies on multiple actions by different people, and, perhaps if we were able to understand a person based on their use and placement of this bottle, we could extend this to roughly understand *the family* that uses the bottle. Then do the actions revolving around it contain, in a sense, some sort of consciousness that cannot be understood without taking multiple minds into account? I thought about it more. The placement is in essence a computation by the brain (for a moment ignore the questions of traditional computability and say it may be a "non-computable computation.") If our brains are a network of interconnected computations leading somehow to consciousness, might any actions requiring the input of multiple minds require a sort of emergent consciousness?
When events take place on Earth, humans put malleable information into the environment, be it in the form of money, television, war, shampoo, talking, sex, mountain climbing, shampoo. Other humans receive and re-emit this information. Humanity as a whole, it seems, could have this (pardon any panpsychic jargon) "overmind" associated with it. In fact, humanity seems to be following, roughly, a calculable progression. Trends exist; this is undeniable. Now, if one wants, this could be extended to places further than Earth. Earth receives inputs in the form of events and produces outputs, hopefully many more in the future in the form of interstellar travel. Maybe (probably?) alien species with minds of their own exist that will eventually come into contact with us. Essentially, what I am asking is: does a system which contains consciousness, as a simple result of this consciousness being there, have a slight consciousness of its own, though it may act in a much more convoluted way, over a much longer period of time? Does the universe, as the "Old One," perform large-scale mind-like calculations at a very slow scale? If any of you remember Hofstadter's Siamese twins, you understand the idea of a mind existing over what seem to be multiple systems. Maybe the distances involved here are too large for something to emerge, but this might be compensated by the inordinate amounts of time needed for these calculations to take place. What say you? From hkeithhenson at gmail.com Sun Feb 28 19:39:52 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 28 Feb 2010 12:39:52 -0700 Subject: [ExI] related replies Message-ID: On Sun, Feb 28, 2010 at 5:00 AM, Stathis Papaioannou wrote: > There is no requirement on AI researchers to make an AI by following > the structure and function of the brain. However, if they do do it > this way, as mind uploading researchers would, the resulting AI will > both behave like a human and have the consciousness of a human. This way is *exceedingly* dangerous unless we deeply understand the biological mechanisms of human behavior, particularly behaviors such as capture-bonding which are turned on by behavioral switches (due to external situations). A powerful human-type AI in "war mode," irrational and controlling millions of robot fighting machines, is not something you want to happen. Humans do go irrational (for reasons firmly rooted in our evolutionary past). See the descriptions of the mental state the machete killers in Rwanda were in while they killed close to a million people. > Ben Zaiboc wrote: snip > > Keith Henson : >> From an engineering viewpoint, it isn't significantly harder to >> upload, sideload, or merge with more capable computational resources >> while maintaining consciousness. The reverse would work as well. I >> worked this feature into "the clinic seed." > > Yeah, I figured that out too. I don't consider this discussion to be > of vital importance to me. This "the clinic seed" of yours, though. > What is it? Google henson clinic seed, take the first link. It's been discussed here before. http://lists.extropy.org/pipermail/extropy-chat/2008-November/046637.html Keith